You cannot understand this if you do not know about:
- hex / binary representation, and
- CPU endianness.
Write the decimal number 320 in hexadecimal and split it into bytes. Assuming int is 4 bytes, work out which part of the number goes into which byte.
Then consider the endianness of the processor and order the bytes accordingly (most significant byte first, or least significant byte first).
The code reads the byte stored at the lowest address of the integer. What it contains depends on the endianness of the processor: you will get either hex 0x40 (little-endian) or hex 0x00 (big-endian).
Note: you should not use char for this kind of thing, because its signedness is implementation-defined. If the data bytes contain values exceeding 0x7F, you may get very strange errors that appear and disappear inconsistently across compilers. Always use uint8_t* for any form of bit / byte manipulation.
You can expose this error by replacing 320 with 384 (0x180): on a little-endian system the lowest byte is 0x80, and depending on the compiler it may print as -128 or 128.