Is it better to use decrementing loops?

I remember hearing, many years ago, that it is more efficient to write loops that count down rather than up, especially when programming microprocessors. Is this true, and if so, what are the reasons?

+5
3 answers

One thing that comes to mind right off the bat is that the end condition of a decrementing loop can be cheaper. If you loop up to some specific value, a comparison against that value is required on every iteration. If you loop down to zero, however, the decrement itself sets the zero flag on most processors when the counter reaches zero, so no separate comparison instruction is needed.

Admittedly this is small potatoes, but in a large, tight inner loop it can make a noticeable difference.
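
For illustration, here is a minimal C# sketch of the two loop shapes this answer contrasts (the array and its size are made up). Whether the zero-flag trick actually shows up depends on the compiler or JIT and the target processor, so treat this as the pattern, not a measurement.

```csharp
int[] data = new int[1000];
long sum = 0;

// Incrementing loop: every iteration compares i against data.Length.
for (int i = 0; i < data.Length; i++)
{
    sum += data[i];
}

// Decrementing loop: the end test is against zero. On many processors the
// decrement instruction itself sets the zero flag, so once this is compiled
// down to machine code no separate compare instruction is needed.
for (int i = data.Length - 1; i >= 0; i--)
{
    sum += data[i];
}
```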

+10

In C#, this makes no difference to efficiency. The only real reason to use a decrementing loop is when you are iterating over a collection and removing elements as you go.
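
To make that concrete, here is a small sketch (the list and the filter condition are made up) of removing items from a List&lt;int&gt; while looping backward; counting down means each RemoveAt cannot shift the index of an element you still have to visit.

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

        // Iterate backward so RemoveAt does not shift the positions of
        // the elements that have not been visited yet.
        for (int i = numbers.Count - 1; i >= 0; i--)
        {
            if (numbers[i] % 2 == 0)
            {
                numbers.RemoveAt(i);
            }
        }

        Console.WriteLine(string.Join(", ", numbers)); // prints: 1, 3, 5
    }
}
```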

+2

Not sure whether it is relevant here, but I have also heard that ++i is more efficient than i++.

There are plenty of articles on the Internet about why this is, but it boils down to the following: i++ causes a temporary copy of the old value to be created automatically, while ++i does not (at least in C#).
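
A minimal sketch of the semantic difference being described: postfix has to hand back the old value, which is where the temporary copy comes from. For plain ints a modern compiler or JIT will usually optimize the copy away when the result is not used, so any measurable difference is unlikely.

```csharp
int i = 5;
int a = i++; // a == 5: the old value of i is copied out first, then i becomes 6

int j = 5;
int b = ++j; // b == 6: j is incremented first, then its new value is used

Console.WriteLine($"{a} {b}"); // prints: 5 6
```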

Again, I don't know whether decrement is actually more efficient; I just thought I would throw this out there, seeing as you are asking about performance :)

As for decrement itself, everyone I know just uses it in the situations where it is simply easier.

0
