Some processors may not work as efficiently on small data types as on large ones. For example, given:
uint32_t foo(uint32_t x, uint8_t y) { x+=y; y+=2; x+=y; y+=4; x+=y; y+=6; x+=y; return x; }
if y were uint32_t, a compiler for the ARM Cortex-M3 could just generate:
add r0,r0,r1,asl #2 ; x += (y<<2)
add r0,r0,#20       ; x += 20
bx  lr              ; return x
but since y is uint8_t, the compiler will have to generate:
add r0,r0,r1   ; x += y
add r1,r1,#2   ; compute y+2
and r1,r1,#255 ; y = (y+2) & 255
add r0,r0,r1   ; x += y
add r1,r1,#4   ; compute y+4
and r1,r1,#255 ; y = (y+4) & 255
add r0,r0,r1   ; x += y
add r1,r1,#6   ; compute y+6
and r1,r1,#255 ; y = (y+6) & 255
add r0,r0,r1   ; x += y
bx  lr         ; return x
The alleged purpose of the "fast" types (uint_fast8_t and friends) was to let compilers replace small types that cannot be handled efficiently with faster, wider ones. Unfortunately, the semantics of the "fast" types are rather loosely specified, which leaves murky questions about whether expressions involving them will be evaluated using signed or unsigned mathematics.
supercat Jan 28 '16 at 23:59