Numerical optimization

I was wondering which integer or floating-point types are the fastest.
I thought that a byte would be faster than an integer because it has a smaller range.
Some people told me that in some cases an integer is faster than a byte.

Second question:
The GPU is on its way to world domination,
so I asked myself: can double be faster than integer, because of the FPU?
So, where are the experts? :)

+4
8 answers

You have to think about more than the clock cycles needed to do arithmetic. You can say that adding two ints takes so many cycles and adding two doubles takes so many cycles, and so on, but that may not be what matters. If all your data fits in the cache at once, then the timing of the individual operations is what counts. If not, the extra time caused by cache misses dwarfs the difference between individual operations. Sometimes working with smaller data types is faster because it makes the difference between having to pull something out of the cache or not, or between having to go to disk or not.

These days, computers spend most of their time moving data around rather than doing arithmetic, even in number-crunching applications, and the ratio of the former to the latter keeps growing. So you cannot simply compare, say, the time required to multiply shorts against the time required to multiply doubles. You may find that of two versions of your program, one runs faster on a small problem and the other runs faster on a large one, entirely because of the relative memory efficiency of the types involved.
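
To make the cache point concrete, here is a rough, hand-rolled sketch (the class name and array size are made up for illustration; a serious measurement would use a harness such as JMH to control for JIT warm-up): an int[] occupies four times the memory of a byte[] with the same element count, so traversing it puts four times the pressure on the cache.

```java
public class CacheDemo {
    static final int N = 16 * 1024 * 1024; // 16M elements

    public static void main(String[] args) {
        byte[] bytes = new byte[N]; // 16 MB
        int[] ints = new int[N];    // 64 MB: four times the cache traffic

        long t0 = System.nanoTime();
        long byteSum = 0;
        for (int i = 0; i < N; i++) byteSum += bytes[i];
        long t1 = System.nanoTime();

        long intSum = 0;
        for (int i = 0; i < N; i++) intSum += ints[i];
        long t2 = System.nanoTime();

        System.out.printf("byte[]: %d ms (sum=%d)%n", (t1 - t0) / 1_000_000, byteSum);
        System.out.printf("int[]:  %d ms (sum=%d)%n", (t2 - t1) / 1_000_000, intSum);
    }
}
```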

+6

I thought that a byte is faster than an integer because it has a smaller range.

Something I experienced first-hand: using short gave me a performance hit, while using int was fine. That is because architectures usually have no native short; it is a convenience type, and the processor actually works at its own word size. In my case the word size was the size of an int, so to work on a short the value first had to be widened into an int, operated on, and then narrowed back to a short to get the result. All of that caused the performance hit. So smaller is not necessarily faster.
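
In Java this widen-then-narrow step is visible right in the language rules: arithmetic is defined on int, not on short, so the compiler forces the extra conversion. A minimal sketch:

```java
public class ShortDemo {
    public static void main(String[] args) {
        short a = 1000, b = 2000;
        // short c = a + b;        // does not compile: a + b has type int
        short c = (short) (a + b); // narrowing cast packs the result back
        int d = a + b;             // int matches the natural width; no cast
        System.out.println(c + " " + d);
    }
}
```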

+4

It depends on the data and on the architecture. A classic x87-style floating-point unit handles float and double identically when performing calculations: both are evaluated at 80-bit precision internally and therefore take the same amount of time. Loading and storing the values in the FPU registers is where a difference can appear: a double takes twice as much RAM as a float and can therefore be slower because of cache misses. That becomes noticeable when you have large arrays that you tend to index randomly.
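
A quick sketch of the memory side of that argument (the class name and element count are just for illustration): the same number of elements costs twice as much RAM, and therefore twice the cache footprint, in double as in float.

```java
public class WidthDemo {
    public static void main(String[] args) {
        int n = 10_000_000; // ten million elements
        System.out.println("float[]  ~ " + (long) n * Float.BYTES  / (1 << 20) + " MB");
        System.out.println("double[] ~ " + (long) n * Double.BYTES / (1 << 20) + " MB");
    }
}
```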

+1

There are no bytes at the CPU level, only words, which nowadays are 32 or 64 bits. The arithmetic units are usually wired to operate on word-sized numbers (or larger, in the case of floating point).

So there is no speed advantage in using types smaller than a word for arithmetic, and there can be a speed penalty, because extra work is needed to simulate types the processor does not natively have. For example, writing a single byte to memory may require first reading the word it lives in, modifying it, and writing the whole word back. To avoid this, most compilers actually use a full memory word for every smaller variable, so even a boolean variable takes 32 or 64 bits.

However, if you have a large amount of data, such as a big array, then smaller types will usually perform better, because you will get fewer cache misses.

+1

The byte length of the numeric types depends on the language, and sometimes on the platform you use. For example, in Java both int and float use 4 bytes, so the processing time should be about equal. It would surprise me, though, if longer types were processed faster; if there is evidence of that, I would like to read about it.
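
From Java 8 on this is easy to check with the BYTES constants on the wrapper classes:

```java
public class SizeCheck {
    public static void main(String[] args) {
        System.out.println(Integer.BYTES + " " + Float.BYTES); // 4 4
        System.out.println(Long.BYTES + " " + Double.BYTES);   // 8 8
    }
}
```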

0

On which one is faster, integer or byte: since they both fit into a register, they work at the same speed, or at least with no noticeable difference.

On integer vs. double: maybe a GPU does double arithmetic faster than a regular CPU does, but I doubt it does double arithmetic faster than integer arithmetic, since integer arithmetic is plain register arithmetic.

0

The biggest optimization is moving from looping over scalar computations to vector computations, and then taking advantage of the GPU or of the CPU's SSE units.
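
As a sketch of what vector-friendly code looks like in Java (the class and method names are illustrative, not from this thread): a simple counted loop over arrays with no dependence between iterations, a shape HotSpot's superword auto-vectorizer can compile down to SSE/AVX instructions.

```java
public class VectorFriendly {
    // Independent iterations over plain arrays: a pattern the JIT
    // auto-vectorizer recognizes and can turn into SIMD code.
    static void axpy(float alpha, float[] x, float[] y) {
        for (int i = 0; i < x.length; i++) {
            y[i] += alpha * x[i];
        }
    }

    public static void main(String[] args) {
        float[] x = new float[1024], y = new float[1024];
        java.util.Arrays.fill(x, 1.0f);
        axpy(2.0f, x, y);
        System.out.println(y[0]); // 2.0
    }
}
```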

0

Well, as long as you are not doing any vector optimizations, you may as well use integers the size of your registers (32/64 bits); smaller types bring no actual performance improvement.

Floating-point numbers are slightly different: while CPUs are optimized for doubles, GPUs usually work with floats.

0
