I have read the descriptions of big-endian and little-endian online, but they all seem to say roughly the same thing, and I am still confused about the actual implementation with respect to the "most significant" and "least significant" bytes. I understand that little-endian stores the "least significant" byte first, and that big-endian stores the "most significant" byte first. However, I do not understand what "most significant" and "least significant" actually mean. I think a concrete example will help me understand, so here is one:
I have an integer value: 12345
If I convert it to hex using the Windows calculator, I get the value 3039 (basically a two-byte value). Is 3039 showing the bytes of the integer 12345 stored as little-endian or big-endian, and how can this be determined from the value?
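To make my confusion concrete, here is a minimal sketch in C (I picked C only as an assumption; the language itself does not matter to my question) of how I imagine checking which byte of 12345 comes first in memory:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t value = 12345;  /* 0x3039 in hex */
    unsigned char *bytes = (unsigned char *)&value;

    /* Print the two bytes in the order they sit in memory. */
    printf("%02x %02x\n", bytes[0], bytes[1]);
    return 0;
}
```

As I understand it, on a little-endian machine this would print `39 30` and on a big-endian machine it would print `30 39`, but please correct me if that is wrong.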