Why don't std::vector implementations try to extend the existing allocation in place (if possible) when the vector is full?

When a std::vector is full and another element is inserted, new memory is allocated. From what I've read, the new capacity grows exponentially (though that is not relevant to the question); the old elements are then copied to the new region of memory, and the old region is freed.
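For reference, this reallocation is observable: data() returns a different address after each growth. A minimal demonstration (the exact growth factor is implementation-defined, commonly 1.5x or 2x):

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v;
    const int* prev = v.data();  // null or stale for an empty vector
    for (int i = 0; i < 100; ++i) {
        v.push_back(i);
        if (v.data() != prev) {  // the buffer moved: a reallocation happened
            std::printf("capacity %zu, elements now at %p\n",
                        v.capacity(), static_cast<const void*>(v.data()));
            prev = v.data();
        }
    }
}
```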

Based on this understanding, my questions are:

  1. Why don't implementations first check whether there is enough free contiguous memory just past the end of our std::vector's current block and, if there is, simply claim that region to extend the allocation in place instead of wasting time on a copy? (A sketch of the idea follows this list.)

  2. Has anyone tried to implement this and concluded that it isn't worth it (on average / in general)?

  3. Are there other, more subtle reasons why this does not happen?
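A minimal sketch of what point 1 is asking for, using a hand-rolled buffer and C's realloc rather than std::vector (grow_buffer is a hypothetical name, not a standard type; std::allocator exposes no "try to extend" operation, and realloc is only usable for trivially copyable element types because it moves raw bytes without running constructors):

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>
#include <type_traits>

// Hypothetical sketch, NOT how std::vector works: a growable buffer that
// lets the allocator (here, C's realloc) extend the block in place whenever
// the memory immediately after it happens to be free.
template <typename T>
struct grow_buffer {
    static_assert(std::is_trivially_copyable_v<T>,
                  "realloc may move bytes without running constructors");
    T* data = nullptr;
    std::size_t size = 0, cap = 0;

    void push_back(const T& value) {
        if (size == cap) {
            std::size_t new_cap = cap ? cap * 2 : 4;
            // realloc extends the block in place if possible; otherwise it
            // allocates a new block elsewhere and copies the old bytes over.
            void* p = std::realloc(data, new_cap * sizeof(T));
            if (!p) throw std::bad_alloc{};
            data = static_cast<T*>(p);
            cap = new_cap;
        }
        data[size++] = value;
    }
    ~grow_buffer() { std::free(data); }
};
```

When the bytes right after the block are free, realloc extends it and no copy happens at all; the question is essentially why std::vector's allocator interface never gets that chance.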

1 answer

This is a combination of your points 2) and 3).

Originally the reasoning was (I can't say how much measurement backed it at the time) that the gains would be rare and small: you can only (usefully) grow in place if no other allocation has landed immediately after the vector's original block, and the cost of vector growth is amortized anyway.
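For intuition on that last point: with a doubling growth policy, pushing n elements copies at most 1 + 2 + 4 + … + n/2 < n elements in total across all reallocations, so copying contributes only O(1) amortized work per push_back; growing in place could at best shave off that constant factor.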

However, many people observed that even this scenario is not so rare, and that exploiting it can noticeably improve performance and reduce memory fragmentation. So a proposal was put forward.

