Why is network byte order defined as big-endian?

As the title says: why does TCP/IP use big-endian encoding for data transmission rather than the alternative little-endian scheme?

+68
endianness networking network-protocols tcp-ip
Nov 22 '12 at 14:18
1 answer

RFC 1700 stated that it should be this way (and defined network byte order as big-endian).

The convention in the documentation of Internet Protocols is to express numbers in decimal and to picture data in "big-endian" order [COHEN]. That is, fields are described left to right, with the most significant octet on the left and the least significant octet on the right.

The reference they cite is:

Cohen, D., "On Holy Wars and a Plea for Peace", Computer

A summary can be found in IEN 137 or on this IEEE page.
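As an illustration of what the convention means in practice (not part of the RFC itself), here is a minimal C sketch using the standard POSIX htonl() function. It converts a 32-bit value to network byte order and prints the octets as they would appear on the wire, most significant first, regardless of the host CPU's endianness:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>  /* htonl() */

int main(void) {
    uint32_t host = 0x0A0B0C0D;   /* arbitrary example value */
    uint32_t net  = htonl(host);  /* convert to network (big-endian) order */

    /* Inspect the bytes in memory: after htonl(), they match the
       wire layout described by RFC 1700 on any host. */
    unsigned char *p = (unsigned char *)&net;
    printf("wire order: %02X %02X %02X %02X\n", p[0], p[1], p[2], p[3]);
    return 0;
}

On both big- and little-endian machines this prints "wire order: 0A 0B 0C 0D"; on a little-endian host htonl() does the byte swap, on a big-endian host it is a no-op.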

Summary:

Which way is chosen does not make much difference. It is more important that an order be agreed upon at all than which order is agreed upon.

It concludes that both big-endian and little-endian schemes are workable: neither is inherently better or worse, and either could have been used in place of the other, as long as it is applied consistently across the entire system/protocol. A sketch of this idea in code follows below.
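To make Cohen's point concrete, here is a small C sketch (the helper names put_u32_be and get_u32_be are my own, not from any standard library) that serializes a 32-bit value in an agreed wire order using only shifts. Because the shifts express the wire order directly, the same code runs correctly on any host, which is the practical payoff of agreeing on one order:

#include <stdint.h>

/* Write a 32-bit value into a buffer in the agreed wire order
   (big-endian here), with no dependence on host endianness. */
void put_u32_be(unsigned char *buf, uint32_t v) {
    buf[0] = (unsigned char)(v >> 24);
    buf[1] = (unsigned char)(v >> 16);
    buf[2] = (unsigned char)(v >> 8);
    buf[3] = (unsigned char)(v);
}

/* Read the value back; identical results on big- and little-endian hosts. */
uint32_t get_u32_be(const unsigned char *buf) {
    return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16)
         | ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
}

Swapping every ">> " shift pair would give a little-endian wire format that works just as well; what breaks interoperability is only mixing the two.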

+65
Nov 22


