It is important to understand what happens when you populate a hashtable. (The Dictionary uses a hashtable as its underlying data structure.)
When you create a new Hashtable, .NET creates an array of 11 buckets, each of which heads a linked list of dictionary entries. When you add an entry, its key is hashed, the hash code maps to one of the 11 buckets, and the record (key + value + hash code) is appended to that bucket's linked list.
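As an illustrative sketch (not the exact .NET source), the bucket for a key is derived from its hash code roughly like this; masking off the sign bit keeps the index non-negative:

```csharp
using System;

class BucketDemo
{
    // Maps a key's hash code to one of `bucketCount` buckets, the way a
    // chained hashtable does. A simplified sketch, not the real implementation.
    public static int BucketFor(int key, int bucketCount)
    {
        int hash = key.GetHashCode() & 0x7FFFFFFF; // clear the sign bit
        return hash % bucketCount;
    }

    static void Main()
    {
        // Int32.GetHashCode() returns the value itself, so with the
        // initial 11 buckets, key 42 lands in bucket 42 % 11 == 9.
        Console.WriteLine(BucketFor(42, 11));
    }
}
```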
At a certain point (and this depends on the load factor used when constructing the Hashtable), the Hashtable determines during an Add operation that it is encountering too many collisions and that the initial 11 buckets are no longer enough. It then allocates a new bucket array roughly twice the size of the old one (not exactly twice: the number of buckets is always prime) and rehashes every entry from the old table into the new one.
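To see why the new size is "not exactly" double, here is a small sketch of the growth rule (the real implementation consults a precomputed prime table rather than testing primality on the fly, but the effect is the same): growing from 11 buckets picks the first prime at or above 22, which is 23.

```csharp
using System;

class GrowthDemo
{
    static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int d = 2; d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    // Next bucket count: the smallest prime >= twice the current count.
    // (A sketch; .NET uses a hard-coded table of primes internally.)
    public static int NextBucketCount(int current)
    {
        int candidate = current * 2;
        while (!IsPrime(candidate)) candidate++;
        return candidate;
    }

    static void Main()
    {
        Console.WriteLine(NextBucketCount(11)); // 11 buckets grow to 23
    }
}
```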
Thus, in terms of memory usage, there are two things to note.
Firstly, every so often the hashtable needs roughly twice as much memory as it currently uses, so that it can copy the table while resizing. Therefore, if you have a Hashtable that uses 1.8 GB of memory and it needs to resize, it will briefly need about 3.6 GB, and now you have a problem.
Secondly, each entry in the hashtable carries about 12 bytes of overhead: pointers to the key, the value, and the next entry in the list, plus the stored hash code. For most applications this overhead is negligible, but if you create a Hashtable with 100 million entries, that is approximately 1.2 GB of overhead.
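The arithmetic behind that estimate (12 bytes per entry is the figure from the text above; exact sizes depend on the runtime and platform):

```csharp
using System;

class OverheadDemo
{
    static void Main()
    {
        long entries = 100_000_000;  // 100 million entries
        long perEntryOverhead = 12;  // key ptr + value ptr + next ptr + hash code (approx.)

        long totalBytes = entries * perEntryOverhead;
        Console.WriteLine(totalBytes);                   // 1200000000 bytes
        Console.WriteLine(totalBytes / 1_000_000_000.0); // ~1.2 GB
    }
}
```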
You can overcome the first problem by using the Dictionary constructor overload that lets you specify an initial capacity. If you specify a capacity large enough to hold all the records you are about to add, the table never needs to be rebuilt while you fill it. There is almost nothing you can do about the second.
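A minimal example of the capacity overload (`Dictionary<TKey, TValue>(int capacity)` is a real constructor; the element count here is made up for illustration):

```csharp
using System;
using System.Collections.Generic;

class CapacityDemo
{
    static void Main()
    {
        const int expected = 1_000_000;

        // Sizing the table up front means no resize/rehash happens
        // during the million Add calls below.
        var map = new Dictionary<int, string>(expected);

        for (int i = 0; i < expected; i++)
            map.Add(i, "value" + i);

        Console.WriteLine(map.Count); // 1000000
    }
}
```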