Interesting point; most buffer overflows run off the end of a buffer rather than the beginning, so this would almost certainly help. Compilers could also place local arrays at the highest addresses in the stack frame, so there would be no scalar locals to clobber past the end of the array.
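A rough sketch of what that could look like at the source level (the function and variable names are purely illustrative):

    /* With an upward-growing stack, a compiler could place the array at the
       highest addresses in the frame, so nothing of the function's own sits
       past its end. Illustrative only; real compilers choose their own layout. */
    void read_record(void) {
        int  count = 0;   /* scalar locals at lower addresses             */
        int  valid = 0;
        char buf[64];     /* array last: writes past buf miss count/valid */
        /* ... code that fills buf ... */
    }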
There is still a danger if you pass the address of a local array to another function, though, since the called function's return address will sit just past the end of the array:
    void unsafe(void) {
        char buf[128];
        gets(buf);   /* no bounds check: long input writes past the end of buf */
    }
So many buffer overflows would likely still be possible. This idea only defeats overflows where the unsafe array-writing code is inlined into the function that owns the array, so that nothing but the array itself sits at the top of the frame when the overflow happens.
However, some other common causes of buffer overflows can easily end up inlined, such as strcat (sketched below).
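A hedged sketch (function name and sizes are illustrative): if the strcat gets inlined, the overflow happens with only this function's frame live, which is exactly the case the layout idea can defend against; if strcat stays an out-of-line call, its return address sits just past the array and can still be clobbered.

    #include <string.h>

    /* Illustrative only. Inlined copy code overflowing dst would run into
       unused stack space under the upward-growing layout; a non-inlined
       strcat's own return address would sit just past dst. */
    void log_message(const char *msg) {
        char dst[64] = "log: ";
        strcat(dst, msg);   /* overflows dst whenever msg is longer than 58 bytes */
    }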
Security measures don't have to be bulletproof to be useful, so this would definitely help sometimes. Probably not enough for anyone to want to change an existing architecture such as x86, but it's an interesting idea for new architectures. Still, stack-grows-down is a near-universal convention, even though upward-growing stacks have existed. Does anything current use an upward-growing call stack? And how much software really depends on that assumption? Hopefully not much...
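Some low-level code (conservative garbage collectors and coroutine libraries, for example) does care about this. A minimal sketch of the kind of probe such code sometimes uses, with the caveat that comparing addresses from different frames is only implementation-defined and inlining can make the answer meaningless:

    #include <stdint.h>
    #include <stdio.h>

    /* Compares the address of a local in this frame against one in the
       caller's frame. This is a sketch of the assumption being probed,
       not a robust test. */
    static int stack_appears_to_grow_down(const int *caller_local) {
        int callee_local;
        return (uintptr_t)&callee_local < (uintptr_t)caller_local;
    }

    int main(void) {
        int probe = 0;
        printf("stack appears to grow %s\n",
               stack_appears_to_grow_down(&probe) ? "down" : "up");
        return 0;
    }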
The traditional layout left room for the heap and/or stack to grow, only causing a problem if they met in the middle.
Predictable code/data addresses matter more than predictable stack addresses, so a machine with plenty of RAM can put the stack far away from data/code while still loading code/data at fixed addresses. (This is an interesting tangent: I count myself lucky that I never had to write real 16-bit programs and only learned about segmentation without having to use it. Perhaps someone who still remembers DOS can shed light on why it worked well to put the stack at a high address, rather than having it grow up from the bottom of the segment with data/code at the top, e.g. with the "tiny" code model where everything lives in one segment.)
The only real chance to change this would have been AMD64, the first time x86 really broke backward compatibility. Modern Intel CPUs still support undocumented 8086 opcodes such as D6, SALC (set AL from the carry flag), which limits the space available for ISA extensions (for example, SSSE3 and SSE4 instructions could have been one byte shorter if Intel had been willing to drop the undocumented opcodes).
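For reference, the entire effect of SALC fits in one line of C; a sketch of its semantics (AL becomes 0xFF if the carry flag is set, else 0x00), not of how the hardware implements it:

    #include <stdint.h>

    /* Semantics of the undocumented SALC instruction (opcode D6):
       AL = CF ? 0xFF : 0x00. */
    static uint8_t salc(int carry_flag) {
        return carry_flag ? 0xFF : 0x00;
    }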
Even then, it would only apply in the new mode; AMD64 CPUs still have to support legacy mode, and within long mode they mix 64-bit mode with compat mode (normally to run 32-bit user-space processes from 32-bit binaries under a 64-bit kernel).
AMD64 could have added a stack-direction flag, but that would make the hardware more complex. As I said above, I don't think it would be a big security benefit anyway; otherwise maybe the AMD architects would have considered it, but even then it seems unlikely. They clearly aimed for minimally invasive changes and weren't sure AMD64 would catch on; they didn't want to be stuck carrying extra baggage just to keep AMD64 compatibility in their CPUs if the world had mostly carried on with 32-bit OSes and 32-bit code.
That's a shame, because there are several small things they could have done that probably wouldn't have cost many extra transistors in the execution units (for example, replacing setcc r/m8 with setcc r/m32 in long mode).
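A sketch of why setcc r/m32 would matter (the instruction sequences in the comment are typical compiler output, not a claim about any particular compiler version): since setcc writes only an 8-bit register, materializing a flag as a full 32-bit 0/1 costs an extra zeroing or zero-extension instruction.

    /* Returning a comparison as an int on x86-64 typically compiles to
       something like
           xor   eax, eax      ; zero the full register up front
           cmp   edi, esi
           setl  al            ; setcc can only write the low 8 bits
       or alternatively "cmp; setl al; movzx eax, al". A setcc r/m32 form
       would do the job in a single instruction. */
    int is_less(int a, int b) {
        return a < b;
    }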