Is there a way around this compiler optimization in C?

I want to note that, as Olaf pointed out, the compiler is not to blame.


Disclaimer: I'm not quite sure that this behavior is related to compiler optimization.

In any case, in C I am trying to determine whether the nth bit (n must be between 0 and 7 inclusive) of an 8-bit byte is 1 or 0. At first I came up with this solution:

#include <stdint.h>
#include <stdbool.h>

bool one_or_zero( uint8_t t, uint8_t n ) // t is some byte, n signifies which bit
{
    return (t << (n - (n % 8) - 1)) >> 7;
}

Which, from my previous understanding, would work as follows:

Suppose t = 5 and n = 2. Then the byte t can be represented as 0000 0101. I assumed that (t << (n - (n % 8) - 1)) would shift t so that it becomes 1010 0000. This assumption is somewhat true. I also assumed that the next bit shift (>> 7) would shift t so that it becomes 0000 0001. This assumption is also somewhat true.

TL;DR: I thought the line return (t << (n - (n % 8) - 1)) >> 7; did this:

  • t starts as 0000 0101
  • After the first bit shift, t is now 1010 0000
  • After the second bit shift, t is now 0000 0001
  • t is returned as 0000 0001

Although I intended for this to happen, it does not. Instead, I have to write the following to get my intended result:

bool one_or_zero( uint8_t t, uint8_t n ) // t is some byte, n signifies which bit
{
    uint8_t val = (t << (n - (n % 8) - 1));
    return val >> 7;
}

With the intermediate variable uint8_t val, I get the behavior I intended. So my questions are:

  • Why does the first version not behave the way I expected?
  • Why does introducing an intermediate variable change the result?

If this is caused by some compiler optimization, is there a way around it? I would really like to understand what is going on "under the hood" here.


You don't need to shift in both directions. Just mask out the bit you are interested in:

return (t & (1U << n)) != 0;

If n can be out of range, clamp it first. Note that (n & 7) gives the same result as (n % 8) for unsigned operands (and compilers typically emit the same code for both); for signed, negative values the two differ.

Also, avoid hard-coding 8 as the number of bits: (sizeof(t) * CHAR_BIT) expresses the width of t, so the code stays correct if the type of t ever changes.

As for your original expression:

(n - (n % 8) - 1)

For any n < 8, n % 8 equals n, so this always evaluates to -1. Shifting by a negative amount is undefined behavior, so anything can happen (including code that appears to work).


What you are running into here is integer promotion.

Rule of thumb: in an expression x operator y, the operands are converted first. Operands of types narrower than int (such as uint8_t) are converted to int (or unsigned int) by the "integer promotions" before the operator is applied.

Applied to your code:

  • In (t << (n - (n % 8) - 1)) >> 7; the constant 8 has type int, so n % 8 is computed as an int.
  • That makes the whole shift count (n - integer - 1) an int, and consequently (t << integer) is evaluated as an int: t itself is promoted before the shift. The bits you expected to be "shifted out" are still there, because the intermediate result is (typically) 32 bits wide rather than 8, and the following >> 7 brings them back.

In your second version, assigning the int result to the uint8_t variable val truncates it back to 8 bits, discarding the upper bits, which is why it produces the result you expected.

If you want to keep your approach, cast the intermediate result back down to uint8_t before the second shift:

((uint8_t)(t << (n - (n % 8) - 1))) >> 7;

Personally, though, I would simply write it as:

(t & ((uint8_t)1 << n)) != 0
