I have a piece of code that works with large double arrays (at least 6000 elements each) and runs several hundred times (usually 800).
When I use a standard loop, for example:
double[] singleRow = new double[6000];
int maxI = 800;
for (int i = 0; i < maxI; i++)
{
    singleRow = someObject.producesOutput();
}
Memory usage increases by approximately 40 MB (from 40 MB at the start of the loop to 80 MB at the end).
When I force the garbage collector to run on each iteration, memory usage stays at 40 MB (the growth is negligible):
double[] singleRow = new double[6000];
int maxI = 800;
for (int i = 0; i < maxI; i++)
{
    singleRow = someObject.producesOutput();
    GC.Collect();
}
But then the running time is 3 times longer! (This is important.)
How can I make C# reuse the same memory area instead of allocating a new one each time? Note: I have access to the source code of the someObject class, so I can change it if necessary.
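To illustrate the kind of change I could make to the class: the sketch below assumes producesOutput can be refactored into a method (here called FillOutput, a hypothetical name) that writes its results into a caller-supplied buffer, so the loop reuses one array instead of allocating 800 of them. SomeObject and the placeholder computation inside it are stand-ins for the real class.

```csharp
using System;

class SomeObject
{
    // Assumed refactoring: instead of returning a freshly allocated
    // double[], write the output into a buffer provided by the caller.
    public void FillOutput(double[] buffer)
    {
        for (int j = 0; j < buffer.Length; j++)
        {
            buffer[j] = j * 0.5; // placeholder for the real computation
        }
    }
}

class Program
{
    static void Main()
    {
        var someObject = new SomeObject();
        double[] singleRow = new double[6000]; // allocated once, reused every pass
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            someObject.FillOutput(singleRow); // no per-iteration allocation
        }
        Console.WriteLine(singleRow[2]); // prints 1
    }
}
```

With this shape, the loop itself performs no heap allocations, so the garbage collector has nothing to reclaim and no forced GC.Collect() calls are needed.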