1u << 8u is 0x100u, which is larger than any uint8_t value, so the condition is never satisfied. Your "conversion" routine is actually just:
return x;
which does make sense.
You need to define more precisely what you want from the conversion. C99 defines conversion from unsigned to signed integer types as follows (§6.3.1.3 "Signed and unsigned integers"):
When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
...
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
So uint8_t values between 0 and 127 are preserved, and the behavior for values larger than 127 is implementation-defined. Many (but not all) implementations simply interpret the unsigned value as a twos-complement representation of a signed integer. Perhaps what you are really asking is how to guarantee that behavior across platforms?
If so, you can use:
return x < 128 ? x : x - 256;
The value x - 256 is an int, and is guaranteed to be the value of x interpreted as an 8-bit twos-complement integer. The implicit conversion to int8_t then preserves this value.
All of this assumes that sint8_t means int8_t, since sint8_t is not a standard type name. If it doesn't, then all bets are off, because the correctness of the proposed conversion depends on the guarantee that int8_t uses a twos-complement representation (§7.18.1.1 "Exact-width integer types").
If sint8_t is instead some weird platform-specific type, it might use some other representation, such as ones complement, which has a different set of representable values, making the conversion described above implementation-defined (and therefore non-portable) for certain inputs.
EDIT
Alf has argued that this is "daft," and that it would never be needed in any production system. I disagree, but it is admittedly a corner case of a corner case. His argument is not entirely without merit.
His claim that it is "inefficient," however, and therefore should be avoided, is baseless. A reasonable optimizing compiler will optimize this away on platforms where it isn't needed. Using GCC on x86_64, for example:
#include <stdint.h>

int8_t alf(uint8_t x)   { return x; }
int8_t steve(uint8_t x) { return x < 128 ? x : x - 256; }
int8_t david(uint8_t x) { return (x ^ 0x80) - 0x80; }
compiled with -Os -fomit-frame-pointer yields the following:
_alf:
0000000000000000  movsbl %dil,%eax
0000000000000004  ret
_steve:
0000000000000005  movsbl %dil,%eax
0000000000000009  ret
_david:
000000000000000a  movsbl %dil,%eax
000000000000000e  ret
Note that after optimization, all three implementations are identical. Clang/LLVM gives exactly the same result. Similarly, if we build for ARM instead of x86:
_alf:
00000000  b240  sxtb r0, r0
00000002  4770  bx lr
_steve:
00000004  b240  sxtb r0, r0
00000006  4770  bx lr
_david:
00000008  b240  sxtb r0, r0
0000000a  4770  bx lr
Guarding your implementation against corner cases, when doing so costs nothing in the "normal" case, is never "daft."
To the argument that this adds unnecessary complexity, I say: which is harder, writing a comment explaining the conversion and why it's there, or having your successor's intern try to debug the problem ten years from now, when a new compiler breaks the happenstance behavior you had been silently depending on all that time? Is this really so hard to maintain?
// The C99 standard does not guarantee the behavior of conversion
// from uint8_t to int8_t when the value to be converted is larger
// than 127. This function implements a conversion that is
// guaranteed to wrap as though the unsigned value were simply
// reinterpreted as a twos-complement value. With most compilers
// on most systems, it will be optimized away entirely.
int8_t safeConvert(uint8_t x) {
    return x < 128 ? x : x - 256;
}
When all is said and done, I agree that this is a somewhat obscure point, but I also think we should try to answer the question at face value. Of course, the right fix would be for standard C to pin down the behavior of unsigned-to-signed conversion when the signed type is a twos-complement intN_t without padding bits (thereby defining the behavior for all intN_t types).