In addition to what has been said already, a few more points are worth making:
The performance of realloc(<X-sized-buf>, X + inc) depends on two things:

- the speed of malloc(N + inc), which usually degrades toward O(N) with the size of the allocated block
- the speed of memcpy(newbuf, oldbuf, N), which is also O(N) in the block size
This means that for small increments but large existing blocks, growing via repeated realloc() costs O(N^2) in total, relative to the size of the existing data block. Think bubble sort versus quicksort ...
It is relatively cheap while the block is small, but it will punish you severely once the block being reallocated is large. To mitigate this, make sure inc is not small relative to the existing size; realloc'ing by a constant increment is a recipe for performance problems.
In addition, even if you grow in large increments (say, scaling the new size to 150% of the old), reallocating a large buffer causes a spike in memory usage: while the existing contents are being copied, you are using twice the amount of memory. The sequence:
addr = malloc(N); addr = realloc(addr, N + inc);
therefore runs out of memory (much) earlier than:
addr[0] = malloc(N); addr[1] = malloc(inc);
There are data structures that do not require realloc() to grow; linked lists, skip lists, and trees can add data without having to copy the existing data. C++'s std::deque<> (unlike std::vector<>) grows this way: it starts with storage for the initial size and adds further chunks as you grow beyond it, without realloc()'ing (i.e. copying) what is already there. Consider implementing (or using a pre-existing implementation of) something like this.
FrankH.