I came across a strange situation.
In my program, I have a loop that combines a collection of data into one giant vector. I was trying to understand why it runs so slowly, even though it seemed like I was doing everything right to allocate memory efficiently as I go.
It is difficult to determine in advance how large the final vector of combined data will be, but the size of each piece of data is known as it is being processed. So, instead of reserving and resizing the combined data vector in one go, I reserved enough extra space for each data block as it was appended to the larger vector. That is when I ran into this problem, which is reproduced by the simple snippet below:
std::vector<float> arr1;
std::vector<float> arr2;
std::vector<float> arr3;
std::vector<float> arr4;
int numLoops = 10000;
int numSubloops = 50;
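// Test 1: push_back with no reserving at all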
{
    for (int q = 0; q < numLoops; q++)
    {
        for (int g = 0; g < numSubloops; g++)
        {
            arr1.push_back(q * g);
        }
    }
}
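// Test 2: a single reserve of the full final size up front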
{
    arr2.reserve(numLoops * numSubloops);
    for (int q = 0; q < numLoops; q++)
    {
        for (int g = 0; g < numSubloops; g++)
        {
            arr2.push_back(q * g);
        }
    }
}
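// Test 3: resize by numSubloops on each outer iteration, then write by index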
{
    int arrInx = 0;
    for (int q = 0; q < numLoops; q++)
    {
        arr3.resize(arr3.size() + numSubloops);
        for (int g = 0; g < numSubloops; g++)
        {
            arr3[arrInx++] = q * g;
        }
    }
}
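// Test 4: reserve by numSubloops on each outer iteration, then push_back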
{
    for (int q = 0; q < numLoops; q++)
    {
        arr4.reserve(arr4.size() + numSubloops);
        for (int g = 0; g < numSubloops; g++)
        {
            arr4.push_back(q * g);
        }
    }
}
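A minimal std::chrono-based timer along the lines of the sketch below is enough to reproduce these measurements; the timeIt helper is only illustrative, not the exact code I used.

#include <chrono>
#include <iostream>

// Illustrative helper: run a test body and print how long it took in milliseconds.
template <typename Fn>
void timeIt(const char* name, Fn&& body)
{
    auto start = std::chrono::steady_clock::now();
    body();
    auto stop = std::chrono::steady_clock::now();
    std::cout << name << ": "
              << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
              << " ms\n";
}

Each of the four blocks above can be wrapped in a call such as timeIt("Test 4", [&] { /* reserve + push_back loop */ });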
The results of this test, compiled with Visual Studio 2017, are as follows:
Test 1: 7 ms
Test 2: 3 ms
Test 3: 4 ms
Test 4: 4000 ms
Why is there such a huge discrepancy?
Why does calling reserve a bunch of times followed by push_back take 1000 times longer than calling resize a bunch of times followed by direct index access?
How does it make sense that it is more than 500 times slower than the naive approach in Test 1, which reserves nothing and has to reallocate the array repeatedly as it grows?