Any reason to use bytes / shorts, etc. in C#?

Instead of int?

A lot of code just uses int, along with double / float.

I know there are mobile versions of .NET where byte / short come into their own, but is there any reason to use them in desktop applications?

When I was doing C++ work (game programming), I was very conscious of every data type I used, but I don't have that feeling working in C# / Java.

Is there any benefit to using a byte if I know my loop counter will never exceed the range of a byte?

+7
java c# types
5 answers

A single byte compared to a long will not make much difference in memory, but once you start having large arrays, those 7 extra bytes per element add up quickly.

The narrower data types also communicate developer intent much better: when you come across byte length; you know for sure that the range of length fits in a byte.
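For example, here is a rough sketch of how that per-element difference adds up (the array size and variable names are illustrative, not from the answer):

    // Minimal sketch: per-element size difference between byte and long arrays.
    using System;

    class ArraySizeDemo
    {
        static void Main()
        {
            const int count = 1_000_000;

            // Each long element takes 8 bytes, each byte element takes 1 byte,
            // so a million-element array differs by roughly 7 MB of element data.
            byte[] smallValues = new byte[count];   // ~1 MB of element data
            long[] largeValues = new long[count];   // ~8 MB of element data

            Console.WriteLine($"byte[] elements: {count * sizeof(byte):N0} bytes");
            Console.WriteLine($"long[] elements: {count * sizeof(long):N0} bytes");
        }
    }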

+10

I think the reason this question arises is that 10 years ago it was common practice to think about exactly what values a variable needed to hold. If, for example, you were storing a percentage (0–100), you might use a byte (-128 to 127 signed, or 0 to 255 unsigned), since it was large enough for the job and was therefore considered less "wasteful".

These days, however, such measures are unnecessary. Memory is usually not that scarce, and if it were, you would probably be defeated anyway by modern hardware aligning everything to 32-bit word boundaries (if not 64-bit).

Unless you are storing arrays of thousands of these things, such micro-optimizations are (now) an irrelevant distraction.

Honestly, I don't remember the last time I used a byte for anything other than raw data, and I can't remember the last time I used a short for, well, anything.

+8

There is a slight performance cost to using data types smaller than the processor's native word size. When the CPU needs to add two bytes together, it loads them into (32-bit-wide) word-sized registers, adds them, adjusts the result (truncating the three most significant bytes and computing the carry / overflow), and then stores it back into a byte.

This is a lot of work. If you intend to use a variable in a loop, do not make it smaller than the processor's native word.
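In C# this shows up directly in the language rules: arithmetic on byte operands is performed as int, so a narrow counter buys you casts rather than saved work. A minimal sketch of both cases (variable names are illustrative):

    // Minimal sketch: byte arithmetic is promoted to int, while an int counter
    // already matches the native word size.
    using System;

    class CounterDemo
    {
        static void Main()
        {
            // byte + byte is computed as int, so assigning back requires a cast.
            byte a = 200;
            byte b = 55;
            byte sum = (byte)(a + b);   // without the cast this does not compile

            // An int counter needs no conversions in the loop body.
            int total = 0;
            for (int i = 0; i < 100; i++)
            {
                total += i;
            }

            Console.WriteLine($"sum = {sum}, total = {total}");
        }
    }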

These smaller data types exist so that code can work with structures that contain them, whether because of size constraints, legacy APIs, or similar reasons.

+4

This is a case of "use the right tool for the job." If you are working with something that is inherently a byte, use the byte data type. For example, a lot of code that deals with byte streams requires byte arrays. Conversely, if you are just working with arbitrary integers, use int, or long if the values may be larger than an int can hold.
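For instance, reading raw data from a stream naturally works in terms of a byte[] buffer. A small sketch (the file name here is hypothetical):

    // Minimal sketch: raw stream I/O is expressed in byte[] buffers.
    using System;
    using System.IO;

    class StreamDemo
    {
        static void Main()
        {
            byte[] buffer = new byte[4096];

            using (FileStream stream = File.OpenRead("data.bin"))
            {
                int bytesRead;
                while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    Console.WriteLine($"Read {bytesRead} bytes");
                }
            }
        }
    }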

+3

There are many reasons to use byte: anything that processes raw binary streams (images, files, serialization code, etc.) will have to speak in terms of byte[] buffers.

I would not use byte as a loop counter, though; the processor can handle an int more efficiently.

As for short... well, when you have an array of them it can save some space, but in general I would just use an int.
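A small sketch of that trade-off (the 16-bit sample array is an assumed example): a large short[] uses half the storage of an int[], while the loop itself still uses an int counter:

    // Minimal sketch: short saves space in bulk storage, int drives the loop.
    using System;

    class ShortArrayDemo
    {
        static void Main()
        {
            // 100,000 16-bit values take ~200 KB instead of ~400 KB as int.
            short[] samples = new short[100_000];

            long sum = 0;
            for (int i = 0; i < samples.Length; i++)   // int counter, not short
            {
                sum += samples[i];
            }

            Console.WriteLine($"sum = {sum}");
        }
    }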

+3
