On modern processors, the memory cache is king. Using the cache effectively is of utmost importance; a processor can easily stall for hundreds of cycles when a program accesses an address whose contents are not cached and must wait for the much slower memory bus to deliver the data.
Memory access is most effective when it is sequential. The byte you need next is then most likely already in the cache, usually on the same cache line as the byte you just read. This is what makes arrays the most efficient collection type today, provided you index their elements sequentially.
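A minimal sketch of this effect (not from the original answer): summing a 2D array row by row touches memory sequentially, while summing it column by column jumps a full row's worth of bytes on every access. Both loops compute the same result; only the access pattern, and therefore the cache behavior, differs.

```csharp
using System;

class CacheDemo
{
    static void Main()
    {
        const int n = 1024;
        var data = new int[n, n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                data[i, j] = 1;

        // Row-major traversal: consecutive elements share cache lines,
        // so most accesses hit the cache.
        long rowSum = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                rowSum += data[i, j];

        // Column-major traversal: each access jumps n * sizeof(int) bytes,
        // landing on a different cache line almost every time.
        long colSum = 0;
        for (int j = 0; j < n; j++)
            for (int i = 0; i < n; i++)
                colSum += data[i, j];

        Console.WriteLine(rowSum == colSum); // same answer, different speed
    }
}
```

On a typical machine the row-major loop runs noticeably faster, even though both loops do exactly the same amount of arithmetic.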
Accordingly, all .NET collection classes except LinkedList store their data in arrays. That includes the hashed collections (Hashtable, Dictionary, HashSet), which use arrays internally, and of course ArrayList. LinkedList should be avoided because of its very poor cache locality, unless cheap insertion and removal at already-known positions is a major concern.
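To make the trade-off concrete, here is a short sketch (my illustration, not from the original answer) of the one thing LinkedList&lt;T&gt; is good at: O(1) insertion when you already hold a reference to the node, with no element copying.

```csharp
using System;
using System.Collections.Generic;

class LinkedListDemo
{
    static void Main()
    {
        var linked = new LinkedList<int>(new[] { 1, 2, 4 });

        // Finding the node is O(n) -- the cache-unfriendly part --
        // but once you hold the node, insertion next to it is O(1).
        LinkedListNode<int> node = linked.Find(2);
        linked.AddAfter(node, 3); // no elements are shifted or copied

        Console.WriteLine(string.Join(",", linked)); // 1,2,3,4
    }
}
```

With a List&lt;T&gt;, the same insertion would shift every later element one slot to the right; but for traversal, the array-backed List&lt;T&gt; wins easily because its elements are contiguous in memory.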
The problem with arrays is that their size is fixed, which complicates implementing auto-growing collections such as ArrayList. This is solved by deliberately trading address space for speed: whenever the array is filled to capacity, a new array is allocated and the elements are copied over. Each reallocation doubles the previous size; you can observe this through the Capacity property. Although this sounds expensive, the cost is amortized O(1) per insertion, and the operating system's virtual memory subsystem ensures that you do not actually pay for memory you are not using.
You can avoid the not-so-cheap copying by estimating the required capacity in advance.
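Concretely (a sketch I've added, not from the original answer), you pass your estimate to the collection's constructor, so the backing array is allocated at full size once and never reallocated:

```csharp
using System.Collections.Generic;

// If you know roughly how many elements you will store, pass that
// number to the constructor: the backing array is allocated once
// at that size, so no intermediate reallocation or copying occurs.
var presized = new List<int>(100_000);
for (int i = 0; i < 100_000; i++)
    presized.Add(i);
// presized.Capacity remains 100_000 throughout the loop.
```

Dictionary&lt;TKey, TValue&gt; and HashSet&lt;T&gt; accept a capacity in their constructors for the same reason. Overestimating slightly is cheap; underestimating brings back the reallocations.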