It is difficult to compare the two, because their memory usage patterns are completely different.
Vector Resize
A vector resizes itself dynamically as necessary. It does this by allocating a new block of memory, moving (or copying) the data from the old block to the new one, and freeing the old block. In the typical case, the new block is 1.5 times the size of the old one (contrary to popular belief, 2x seems to be fairly unusual in practice). That means that for a brief moment during a reallocation, it needs roughly 2.5 times as much memory as the data you are actually storing. The rest of the time, the block in use is at least 2/3 full and at most completely full. If all fill levels are equally likely, we can expect it to average about 5/6 full. Looked at from the other direction, we can expect about 1/6, or roughly 17%, of the space to be "wasted" at any given time.
When we resize by a constant factor like this (rather than, for example, always adding a fixed-size chunk, such as growing in 4 KB increments), we get what is called amortized constant-time insertion. In other words, as the array grows, resizes happen exponentially less often. The average number of times each element has been copied tends toward a constant (usually around 3, but it depends on the growth factor used).
Linked Lists
With a linked list the situation is quite different. Resizes never happen, so we never see extra time or memory used for any single insert. But we do see extra time and memory used essentially all the time. In particular, each node in a linked list must contain a pointer to the next node. Depending on the size of the data in the node relative to the size of a pointer, this can be significant overhead. For example, suppose you need a stack of ints. In the typical case where an int is the same size as a pointer, that means 50% overhead, all the time. A pointer larger than an int is increasingly common; twice the size is quite typical (64-bit pointer, 32-bit int). In that case you have ~67% overhead, i.e., each node devotes twice as much space to the pointer as to the data being stored.
Unfortunately, that is often just the tip of the iceberg. In a typical linked list, each node is dynamically allocated individually. At least when you store small data items (such as int), the memory allocated for a node can be (and usually will be) even larger than the amount you actually request. So you ask for 12 bytes of memory to hold an int and a pointer, but the chunk of memory you get will likely be rounded up to 16 or 32 bytes instead. Now you are looking at overhead of at least 75%, and quite possibly ~88%.
As far as speed goes, the story is similar: allocating and freeing memory dynamically is often fairly slow. The heap manager typically keeps blocks of free memory and has to spend time searching through them to find the block best suited to the size you are asking for. Then it (typically) has to split that block into two pieces: one to satisfy your allocation, and the remainder, which it can use to satisfy other allocations. Likewise, when you free memory, it typically goes back to that same list of free blocks and checks whether an adjacent block of memory is already free, so it can join the two back together.
Allocating and managing multiple memory blocks is expensive.
Cache Usage
Finally, with modern processors we run into another important factor: cache usage. With a vector, all the data is contiguous; then, past the end of the part of the vector that is in use, there is some empty memory. This leads to excellent cache usage: the data we are using gets cached, and the data we are not using has almost no effect on the cache.
With a linked list, the pointers (and the likely overhead in each node) are distributed throughout the list. That is, every piece of data we care about sits right next to the pointer overhead and the empty space allocated to the node that we are not using. In short, the effective cache size is reduced by roughly the same factor as the overall overhead of each node in the list; that is, we could easily see only 1/8 of the cache holding data we care about, and the other 7/8 holding pointers and/or outright garbage.
Summary
A linked list can work well when you have a relatively small number of nodes, each of which is individually quite large. If (as is more typical for a stack) you are dealing with a relatively large number of items, each of which is individually quite small, you are much less likely to see any savings in time or memory usage. Quite the opposite: for such cases, a linked list is far more likely to waste a great deal of both time and memory.