x86 segment descriptor layout - why is it so weird?

Why did Intel decide to split the base and segment limit into different parts of the segment descriptor rather than using contiguous bits?

See Figure 5-3 http://css.csail.mit.edu/6.858/2014/readings/i386/s05_01.htm

Why didn't they store the base address in bits 0 through 31, the limit in bits 32 through 51, and use the remaining positions for other bits (or some similar arrangement)?


Raymond Chen already answered this question in the comments:

For compatibility with the 80286. The 80286 had a maximum segment size of 2^16 and a maximum base of 2^24, so the limit and base fields were 16 and 24 bits wide. When the base and limit were expanded for 32-bit mode, the new bits had to be placed somewhere else, because the good places were already taken.

Here is a scan of the segment descriptor format (code or data segment type) from the Intel 80286 Programmer's Reference Manual:

[Figure: 80286 code/data segment descriptor format]

For comparison, here is a screenshot from the Intel® 64 and IA-32 Architectures Software Developer's Manual (Volume 3A):

[Figure: IA-32 segment descriptor format]

The format is exactly the same except for the use of the formerly reserved bits: the base was expanded from 24 to 32 bits, the segment limit was increased from 16 to 20 bits, and some additional flags were added. (The Accessed bit is included as part of the Type field in the second figure.)
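To make the split concrete, here is a minimal C sketch (not part of the original answer) that reassembles the base and limit from a raw 8-byte descriptor, following the bit positions shown in the figures above. The example descriptor value is just an assumed typical flat 4 GiB ring-0 code segment.

```c
#include <stdint.h>
#include <stdio.h>

/* Decode base and limit from an 8-byte IA-32 segment descriptor,
 * taken here as a 64-bit little-endian value. The 80286-era fields
 * occupy bits 0..47; the 386 extensions live in the formerly
 * reserved bits 48..63. */
static uint32_t descriptor_base(uint64_t d)
{
    /* base 23:0  -> descriptor bits 16..39 (original 80286 field)
     * base 31:24 -> descriptor bits 56..63 (added on the 80386)   */
    return (uint32_t)((d >> 16) & 0x00FFFFFFu) |
           (uint32_t)(((d >> 56) & 0xFFu) << 24);
}

static uint32_t descriptor_limit(uint64_t d)
{
    /* limit 15:0  -> descriptor bits 0..15  (original 80286 field)
     * limit 19:16 -> descriptor bits 48..51 (added on the 80386)  */
    uint32_t limit = (uint32_t)(d & 0xFFFFu) |
                     (uint32_t)(((d >> 48) & 0xFu) << 16);

    /* If the G (granularity) bit, descriptor bit 55, is set, the
     * limit is measured in 4 KiB pages rather than bytes. */
    if (d & (1ULL << 55))
        limit = (limit << 12) | 0xFFFu;
    return limit;
}

int main(void)
{
    /* Assumed example: flat code segment, base 0, limit 0xFFFFF, G=1. */
    uint64_t d = 0x00CF9A000000FFFFull;
    printf("base  = 0x%08X\n", (unsigned)descriptor_base(d));
    printf("limit = 0x%08X\n", (unsigned)descriptor_limit(d));
    return 0;
}
```

For the example descriptor this prints a base of 0x00000000 and an effective limit of 0xFFFFFFFF; the masking and shifting makes it obvious that the base and limit are stitched together from two non-adjacent pieces of the descriptor.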

So, in short: the layout is weird simply because it is a backward-compatible extension of the old 16-bit processor's layout.
