The relative costs of basic processor operations

I was wondering how to optimize loops for systems with very limited resources. Say we have a basic for loop like this (written in JavaScript):


for (var i = someArr.length - 1; i > -1; i--) { someArr[i]; }


I honestly don't know: is != cheaper than > ?

I would be grateful for any resources covering the cost of basic operators such as the ones above, >>, ~, !, etc.

+7
optimization
6 answers

Performance on a modern processor is far from trivial. Here are a few things that complicate it:

  • Computers are fast. Your processor can execute up to 6 billion instructions per second. Thus even the slowest instruction can be executed millions of times per second, which means it only really matters if you use it very often.
  • A modern processor has hundreds of instructions in flight at a time. They are pipelined, meaning that while one instruction is being fetched, another is reading from registers, a third is executing, and a fourth is writing its result back to a register. A modern processor has 15-20 such stages. In addition, it can issue 3-4 instructions at once at each of these stages, and it can reorder them: if the multiplication unit is busy with another instruction, perhaps an add instruction can be found to execute instead. So even if you have a few slow instructions, their cost can usually be hidden very well by executing other instructions while waiting for the slow one to finish.
  • Memory is hundreds of times slower than the processor. The instructions being executed don't matter much if their cost is dwarfed by fetching data from memory. And even that is unpredictable, because the processor has its own built-in caches that try to hide this cost.
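
That last point can be illustrated with a rough sketch. The two functions below visit every element of a large array exactly once and compute the same sum; the only difference is whether they walk memory in order (cache-friendly) or jump around it in large strides (cache-hostile). The array size and stride are arbitrary choices for this sketch, and actual timings vary wildly by engine, CPU and cache sizes:

```javascript
// Sketch: the cost of touching memory usually dwarfs the cost of the
// operators in the loop header. Sizes here are illustrative only.
var N = 1 << 22;                       // ~4M elements
var arr = new Float64Array(N);
for (var i = 0; i < N; i++) arr[i] = i;

function sumSequential(a) {            // walks memory in order
  var s = 0;
  for (var i = 0; i < a.length; i++) s += a[i];
  return s;
}

function sumStrided(a, stride) {       // jumps through memory in big strides
  var s = 0;
  for (var start = 0; start < stride; start++)
    for (var i = start; i < a.length; i += stride) s += a[i];
  return s;
}

// Both do the same additions and comparisons, yet the strided walk is
// typically noticeably slower on arrays too big for the cache.
var t0 = Date.now(); var s1 = sumSequential(arr); var t1 = Date.now();
var s2 = sumStrided(arr, 4096);        var t2 = Date.now();
console.log('sequential:', t1 - t0, 'ms; strided:', t2 - t1, 'ms');
```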

So the short answer is: don't try to outsmart the compiler. If you can choose between two equivalent expressions, the compiler probably can too, and it will pick the more efficient one. The cost of an instruction varies depending on all the factors above: which other instructions are executing, what data is in the CPU cache, which exact processor model the code runs on, and so on. Code that is super-efficient in one case can be very inefficient in others. The compiler will try to select the most efficient instructions and schedule them as well as possible. Unless you know more than the compiler about all this, you are unlikely to do better.

Do not attempt such micro-optimizations unless you really know what you are doing. As shown above, low-level performance is a ridiculously complex subject, and it is very easy to write "optimizations" that result in much slower code. Or that merely sacrifice readability for something that makes no difference at all.

Besides, most of your code simply has no measurable effect on performance. People generally like to quote (or misquote) Knuth on this subject:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

People often interpret this as "don't try to optimize your code". If you read the full quote, some more interesting implications become clear:

Most of the time we should forget about micro-optimizations. Most code runs so rarely that optimizing it does not matter. Given the number of instructions a processor can execute per second, a block of code has to run very often for optimizing it to have any effect, so about 97% of the time your optimizations will be a waste of time.

But Knuth also says that sometimes (3% of the time) your optimizations will make a difference. And obviously, finding that 3% is a bit like finding a needle in a haystack. If you just decide to "optimize your code" in general, you will spend your time on the first 97%. Instead, you first need to find the 3% that actually needs optimizing. In other words, run your code through a profiler and let it tell you which code takes the most CPU time. Then you know where to optimize, and then your optimizations are no longer premature.
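
As a minimal sketch of "measure before you optimize": `console.time` / `console.timeEnd` are available in Node.js and browsers, and even these coarse timers quickly show which block dominates. (A real investigation would use the engine's profiler, e.g. `node --prof` or the browser dev tools; the two functions below are made-up placeholders for a hot and a cold path.)

```javascript
function hotPath(n) {                     // runs millions of iterations
  var s = 0;
  for (var i = 0; i < n; i++) s += Math.sqrt(i);
  return s;
}

function coldPath() {                     // runs once; not worth optimizing
  return JSON.parse('{"config": true}');
}

console.time('hotPath');
hotPath(1e7);
console.timeEnd('hotPath');               // this is where the 3% lives

console.time('coldPath');
coldPath();
console.timeEnd('coldPath');              // negligible; optimizing it wastes effort
```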

+15

It is extremely unlikely that such micro-optimizations will make a noticeable difference to your code in any but the most extreme circumstances (real-time embedded systems?). Your time is probably better spent making your code readable and maintainable.

When in doubt, always start by asking Donald Knuth:

http://shreevatsa.wordpress.com/2008/05/16/premature-optimization-is-the-root-of-all-evil/

Or, for a slightly more lowbrow take on micro-optimization:

http://www.codinghorror.com/blog/archives/000185.html

+9

Most comparisons cost the same, because the processor simply performs the full comparison and then makes its decision based on the flags that comparison produces, so the comparison operator itself does not matter at all. But some architectures do try to speed up specific cases, for example comparisons against 0.

As far as I know, bitwise operations are the cheapest operations, a little faster than addition and subtraction. Multiplication and division are somewhat more expensive, with division being the most costly operation.

+1

This is like asking for a fish, when I would rather teach you how to fish.

There are simple ways to measure how long something takes. My favorite is to simply copy the code 10 times and then wrap it in a loop that runs 10^8 times. If I run it and look at the clock, the number of seconds it takes translates directly into nanoseconds per execution.
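
The trick above can be sketched like this (the statement under test, `x ^= i`, is just a placeholder, and this sketch uses 10^7 loop iterations instead of 10^8 to stay quick; note that a modern JIT may optimize away a body whose result is never used, so keep and print the result):

```javascript
// "Unroll 10 copies, loop 10^7 times" timing trick:
// 10 copies x 10^7 iterations = 10^8 executions, so elapsed seconds
// times 10 is roughly the cost of one execution in nanoseconds.
var ITERATIONS = 1e7;
var x = 0;

var start = Date.now();
for (var i = 0; i < ITERATIONS; i++) {
  // the statement under test, copied 10 times:
  x ^= i; x ^= i; x ^= i; x ^= i; x ^= i;
  x ^= i; x ^= i; x ^= i; x ^= i; x ^= i;
}
var seconds = (Date.now() - start) / 1000;

// seconds / (10 * ITERATIONS) executions, converted to nanoseconds
var nsPerOp = seconds / (10 * ITERATIONS) * 1e9;
console.log('~' + nsPerOp.toFixed(2) + ' ns per operation (x = ' + x + ')');
```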

The saying about premature optimization does not mean it is "never necessary". If you want to do it, you can try a proactive performance tuning technique such as this .

By the way, my favorite way to code your loop is:

 for (i = N; --i >= 0; ) { ... } 
+1

Premature optimization can be dangerous. The best approach is to write your application without worrying about it, then find the slow spots and optimize those. If you are really worried about performance, use a lower-level language: an interpreted language such as JavaScript will cost you some processing power compared to a lower-level language such as C.

0

In this particular case, > vs != is probably not a performance issue. However, > is generally the safer choice, because it prevents a later modification of your code from running off into the weeds and getting stuck in an infinite loop.
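
A contrived sketch of that failure mode (the `countDown` helper and the safety valve are inventions for this demo): with a step of 1 both guards behave identically, but once someone changes the step so the counter skips past the terminal value, a `!=` guard never fires while a `>` guard still terminates.

```javascript
// Why a ">"-style guard is safer than "!=" in a counting loop.
function countDown(from, step, guard) {
  var visits = 0;
  for (var i = from; guard(i); i -= step) {
    visits++;
    if (visits > 1000) break;          // safety valve for the demo
  }
  return visits;
}

// With step 1, both guards visit 10, 9, ..., 0 — 11 visits each:
countDown(10, 1, function (i) { return i > -1; });   // 11 visits
countDown(10, 1, function (i) { return i !== -1; }); // 11 visits

// After someone changes the step to 3, i goes 10, 7, 4, 1, -2, ...
// and never equals -1 exactly:
countDown(10, 3, function (i) { return i > -1; });   // stops after 4 visits
countDown(10, 3, function (i) { return i !== -1; }); // runs until the safety valve
```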

0
