Double multiplication vs. addition speed

Background

I am an aerospace technician and an EECS student. I am at the point where I work with a lot of mathematics and physics, but have not yet taken courses in algorithms or assembly language.

I develop and code a comically wide range of programs, from business proposal software to satellite hardware controllers.

Most of this work involves working out the math on paper or some other medium first, and then writing code to implement it.

I algebraically simplify these equations before putting them into code. But before I take the time to do this, I would like to know whether I should favor extra addition operations or extra multiplication operations. (I already know that division is much more expensive.)
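As an illustration of the kind of trade-off involved, two algebraically equivalent forms of the same polynomial can have different operation counts. Horner's method is a standard example (the coefficients below are made up for demonstration):

```java
public class Horner {
    // Naive form of a*x^3 + b*x^2 + c*x + d: 6 multiplications, 3 additions
    static double naive(double a, double b, double c, double d, double x) {
        return a * x * x * x + b * x * x + c * x + d;
    }

    // Horner form ((a*x + b)*x + c)*x + d: 3 multiplications, 3 additions
    static double horner(double a, double b, double c, double d, double x) {
        return ((a * x + b) * x + c) * x + d;
    }

    public static void main(String[] args) {
        // Both forms evaluate the same polynomial
        System.out.println(naive(2, -1, 3, 5, 1.5));   // 14.0
        System.out.println(horner(2, -1, 3, 5, 1.5));  // 14.0
    }
}
```

Here the rewrite removes multiplications without adding any additions, so it is a win either way; the question above matters for rewrites that trade one operation type for the other.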


Example

B′ₓ = … (the original equation image was not preserved)

This is an equation I got from some other work, and it is fairly typical of what I see.

We can clearly see that there are several ways to simplify this equation. Since the simplification is at my discretion, I would like to choose the option that favors performance while remaining practical. I am not going to over-optimize at the expense of development time.


Question

In general, which double operation is faster: addition or multiplication?

I know the only definitive way to find out which is faster is to write and run benchmarks, but that is beside the point. This is not a high enough priority in my work to justify writing test code every time I need to simplify an equation. I need a rule of thumb to apply to my algebra.

If the difference is so marginal as to be insignificant or inconclusive, that is an acceptable answer; then I will know it practically does not matter.


Supporting research

I know that in C and C++ the optimizer takes care of the algebra, so there this is a non-issue. However, as I understand it, the Java compiler does not perform algebraic simplification/optimization. In particular, this answer indicates that this is the case, and that the programmer must perform such optimizations himself.

There are scattered answers on the Internet, but I cannot reach a definitive conclusion. A former University of Maryland physics page ran these tests in Java, but the double performance data is missing from the tables, and the graph scales are indistinguishable. This test from a University of Quebec CS department shows results only for integer operations. This SO answer explains that at the hardware level multiplication is the more complex operation, but I also know that engineers design processors with such things in mind.

Other useful links:

1 answer

In general, you should write the clearest code you can. The JIT (not javac) recognizes simple, common patterns and optimizes them. This means that simple, common patterns are often the best way to get optimized code.

If you profile the application and find that this code is not performing optimally, you can try to optimize the code yourself; however, keep in mind:

  • Meaningful micro-benchmarks are hard to write.
  • The results can be very sensitive to the environment: change the Java version or the processor model and you may get contradictory results.
  • When you benchmark the whole application, you will most likely find the delays where you did not expect them; for example, they are often in I/O.
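To illustrate why naive micro-benchmarks mislead, here is a deliberately simplistic timing sketch (a toy illustration only; a real measurement should use a harness such as JMH, which handles warm-up and dead-code elimination):

```java
public class NaiveBench {
    // Repeated double additions; returning the result discourages the JIT
    // from discarding the loop as dead code.
    static double addLoop(int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++) sum += 1.5;
        return sum;
    }

    // Repeated double multiplications.
    static double mulLoop(int n) {
        double prod = 1.0;
        for (int i = 0; i < n; i++) prod *= 1.0000001;
        return prod;
    }

    public static void main(String[] args) {
        int n = 10_000_000;
        long t0 = System.nanoTime();
        double s = addLoop(n);
        long t1 = System.nanoTime();
        double p = mulLoop(n);
        long t2 = System.nanoTime();
        // Without warm-up, the first loop also pays JIT compilation cost,
        // so these numbers are not directly comparable.
        System.out.println("add: " + (t1 - t0) + " ns (" + s + ")");
        System.out.println("mul: " + (t2 - t1) + " ns (" + p + ")");
    }
}
```

The comment in `main` is exactly the first pitfall above: whichever loop runs first is measured partly interpreted, so even this simple test can report contradictory results between runs.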

Unless you are sure that an optimization really helps, you should stick with code that is simple and easy to maintain, and you will likely find it is fast enough.

