One reason is that it allowed you to generate efficient code without any optimization pass in the compiler, provided the programmer knew what he or she was doing. For example, to copy characters from one buffer to another, you could write:
    register char *ptr1;
    register char *ptr2;
    ...
    for ( ... ) {
        *ptr1++ = *ptr2++;
    }
The compiler I used to work with (on my minicomputer) would generate the following register operations for the assignment:
    load  $r1,*$a1++   // load $r1 from the address in $a1, then increment $a1
    store $r1,*$a2++   // store $r1 at the address in $a2, then increment $a2
I forget the actual opcodes. The compiler had no optimization phase, yet the code it generated was very dense, provided you understood the compiler and the architecture of the machine. It could do this because the hardware offered pre-decrement and post-increment addressing modes, for both address registers and general registers. As far as I remember there were no pre-increment or post-decrement addressing modes, but you could do without them.
I believe the DEC minicomputers on which C was originally developed had these addressing modes. The machine I was working on was not made by DEC, but its architecture was quite similar.
An optimization phase had been planned for the compiler. However, the compiler was mainly used by system programmers, and when they saw how good the generated code already was, implementation of the optimization phase was quietly postponed.
The whole rationale of C's design was to allow simple, portable compilers that would generate reasonably efficient code with minimal (or no) optimization of intermediate code. That is why the increment and decrement operators, as well as the compound assignment operators, played an important role in letting early C compilers generate compact, efficient code. They were not just syntactic sugar, as Niklaus Wirth et al. suggested.
Mick