Note: as Olaf pointed out, the compiler is not to blame here.
Disclaimer: I am not sure whether this behavior is related to compiler optimization at all.
In any case, in C I am trying to determine whether the nth bit (n must be between 0 and 7 inclusive) of an 8-bit byte is 1 or 0. At first I came up with this solution:
    #include <stdint.h>
    #include <stdbool.h>

    bool one_or_zero( uint8_t t, uint8_t n )
    {
        return (t << (8 - (n % 8) - 1)) >> 7;
    }
From my previous understanding, this is how I expected it to treat the byte:
Suppose t = 5 and n = 2. Then the byte t can be represented as 0000 0101. I assumed that (t << (8 - (n % 8) - 1)) would shift t so that it became 1010 0000. This assumption is only somewhat true. I also assumed that the next shift ( >> 7 ) would move the bits so that t became 0000 0001. This assumption is also only somewhat true.
TL;DR: I thought the line return (t << (8 - (n % 8) - 1)) >> 7; did this:

t starts as 0000 0101
After the first shift, t is 1010 0000
After the second shift, t is 0000 0001, and 0000 0001 is returned
Although I intended this to happen, it does not. Instead, I have to write the following to get my intended result:
    bool one_or_zero( uint8_t t, uint8_t n )
    {
        uint8_t val = (t << (8 - (n % 8) - 1));
        return val >> 7;
    }
So my question is: why does storing the intermediate result in uint8_t val change the behavior?
In other words, why does the one-line version not return what I expect, while the two-line version does? An explanation of what is happening "under the hood" would be appreciated.
user3835277