This shows that allocating a new array is blazingly fast. That's to be expected when plenty of memory is available - it's basically just incrementing a pointer, plus a small amount of housekeeping.
However, note that this will create a new array with all elements as False, not True.
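A minimal sketch of that pitfall: a freshly allocated bool[] is zero-initialized, so getting all-True requires an explicit fill pass. Array.Fill is assumed available (.NET Core 2.0 / .NET Standard 2.1 and later); on older frameworks a plain loop does the same job.

using System;

class FillDemo
{
    static void Main()
    {
        // The CLR zero-initializes new arrays, so every element is False.
        bool[] flags = new bool[200];
        Console.WriteLine(flags[0]); // False

        // An explicit pass is needed to get all-True.
        // (Array.Fill assumed available; otherwise use a for loop.)
        Array.Fill(flags, true);
        Console.WriteLine(flags[0]); // True
    }
}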
A more suitable test might be to call Array.Clear on the existing array in the first case, which will wipe the contents pretty quickly.
Note that your second form will create a lot more garbage. In this case it will all stay in gen0 and be collected cheaply, but in real applications with more realistic memory usage you could end up with garbage collection performance problems from creating new arrays instead of clearing old ones.
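To see that extra allocation directly rather than just its time cost, something like the following sketch can be used. It assumes GC.GetAllocatedBytesForCurrentThread is available (.NET Core 2.1 and later; it is not in older .NET Framework versions):

using System;

class AllocationDemo
{
    static void Main()
    {
        bool[] array = new bool[200];

        // GetAllocatedBytesForCurrentThread reports total bytes allocated
        // so far on this thread, so deltas around each operation show
        // how much new garbage it produced.
        long before = GC.GetAllocatedBytesForCurrentThread();

        Array.Clear(array, 0, array.Length);   // reuses the same array
        long afterClear = GC.GetAllocatedBytesForCurrentThread();

        array = new bool[200];                 // allocates a fresh array
        long afterNew = GC.GetAllocatedBytesForCurrentThread();

        Console.WriteLine("Clear allocated:     {0} bytes", afterClear - before);
        Console.WriteLine("CreateNew allocated: {0} bytes", afterNew - afterClear);
    }
}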
Here is a quick C# test that times the three strategies:
using System;
using System.Diagnostics;

public class Test
{
    const int Iterations = 100000000;

    static void Main()
    {
        TestStrategy(Clear);
        TestStrategy(ManualWipe);
        TestStrategy(CreateNew);
    }

    static void TestStrategy(Func<bool[], bool[]> strategy)
    {
        bool[] array = new bool[200];
        // Start from a clean heap so earlier runs don't skew the timing.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            array = strategy(array);
        }
        sw.Stop();
        Console.WriteLine("{0}: {1}ms", strategy.Method.Name, sw.ElapsedMilliseconds);
    }

    // Wipe in place using the built-in bulk clear.
    static bool[] Clear(bool[] original)
    {
        Array.Clear(original, 0, original.Length);
        return original;
    }

    // Wipe in place element by element.
    static bool[] ManualWipe(bool[] original)
    {
        for (int i = 0; i < original.Length; i++)
        {
            original[i] = false;
        }
        return original;
    }

    // Abandon the old array and allocate a fresh (all-False) one.
    static bool[] CreateNew(bool[] original)
    {
        return new bool[original.Length];
    }
}
Results:
Clear: 4910ms
ManualWipe: 19185ms
CreateNew: 2802ms
However, that still only exercises gen0 - I would personally expect Clear to be better for overall application performance. Note that the strategies behave differently if any other code holds a reference to the original array: the "create new" (ReDim) strategy does not modify the existing array at all.
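A minimal sketch of that aliasing difference (variable names are hypothetical):

using System;

class AliasingDemo
{
    static void Main()
    {
        bool[] original = { true, true, true };
        bool[] alias = original;        // second reference to the same array

        // "Create new" rebinds the variable but leaves the old array
        // untouched, so the alias still sees the original contents.
        original = new bool[original.Length];
        Console.WriteLine(alias[0]);    // True - alias unaffected

        // Array.Clear mutates in place, so every reference observes it.
        original = alias;
        Array.Clear(original, 0, original.Length);
        Console.WriteLine(alias[0]);    // False - alias sees the wipe
    }
}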