In a comment on @sambo99's answer, @smaclell asked why reverse iteration was more efficient.
Sometimes it is more efficient. Suppose you have a list of people and you want to remove or filter out all customers with a credit rating below 1000. We have the following data:
"Bob" 999 "Mary" 999 "Ted" 1000
If we iterate forward, we quickly get into trouble:
for (int idx = 0; idx < list.Count; idx++)
{
    if (list[idx].Rating < 1000)
    {
        list.RemoveAt(idx);
    }
}
At idx = 0 we remove Bob, which shifts all remaining elements to the left. The next time through the loop idx = 1, but list[1] is now Ted instead of Mary, so we skip Mary by mistake. We could work around this with a while loop and some extra bookkeeping, as sketched below.
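For completeness, a minimal sketch of that while-loop workaround: the index is only advanced when nothing was removed, so the element that shifted into the current slot is not skipped.

int idx = 0;
while (idx < list.Count)
{
    if (list[idx].Rating < 1000)
    {
        list.RemoveAt(idx);  // stay at idx: the next element has shifted into this slot
    }
    else
    {
        idx++;               // nothing removed, move on
    }
}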
Or we simply iterate in reverse:
for (int idx = list.Count - 1; idx >= 0; idx--)
{
    if (list[idx].Rating < 1000)
    {
        list.RemoveAt(idx);
    }
}
All indexes to the left of the deleted item remain unchanged, so you do not skip any items.
The same principle applies if you are given a list of indexes to remove from a list or array: to keep the remaining indexes valid, you need to sort them and then remove the items from the highest index to the lowest, as sketched below.
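A minimal sketch of that approach, assuming a hypothetical indexesToRemove list:

// Indexes to remove (illustrative values).
var indexesToRemove = new List<int> { 0, 2 };

// Sort and reverse so we remove from the highest index to the lowest;
// earlier removals then never shift the positions still left to remove.
indexesToRemove.Sort();
indexesToRemove.Reverse();

foreach (int idx in indexesToRemove)
{
    list.RemoveAt(idx);
}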
Nowadays you can simply use a LINQ-style lambda predicate and declare what you are doing in a straightforward manner:
list.RemoveAll(o => o.Rating < 1000);
If you only need to remove a single element, it makes no difference whether you iterate forward or backward. You can also use FindIndex with a lambda for this:
int removeIndex = list.FindIndex(o => o.Name == "Ted");
if (removeIndex != -1)
{
    list.RemoveAt(removeIndex);
}