I am writing a function that converts a bitset to an int/uint value, given that the bitset may hold fewer bits than the target type.
Here is the function I wrote:
    template <typename T, size_t count>
    static T convertBitSetToNumber( const std::bitset<count>& bitset )
    {
        T result;
        #define targetSize (sizeof( T ) * CHAR_BIT)
        if ( targetSize > count )
        {
            // the bitset is narrower than T: move its bits to the top of T so the
            // bitset's highest bit lands on T's sign bit, then shift back down;
            // for a signed T the final right shift sign-extends the value
            const size_t missingbits = targetSize - count;
            result = static_cast<T>( bitset.to_ullong() );
            result = result << missingbits;
            result = result >> missingbits;
        }
        else
        {
            result = static_cast<T>( bitset.to_ullong() );
        }
        return result;
    }
And the "test program":
    uint16_t val1 = Base::BitsetUtl::convertBitSetToNumber<uint16_t,12>( std::bitset<12>( "100010011010" ) ); // val1 is 0x089A
    int16_t  val2 = Base::BitsetUtl::convertBitSetToNumber<int16_t,12>( std::bitset<12>( "100010011010" ) );  // val2 is 0xF89A
Note: See the comment exchange with Ped7g: the code above is correct, it retains the sign bit and performs the 12 → 16 bit conversion correctly for both signed and unsigned targets. But if you are looking for how to shift 0xABC0 to 0x0ABC on a signed object, the answers may help you, so I am not deleting the question.
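For anyone who, as the note says, only wants to turn 0xABC0 into 0x0ABC on a signed object, one common approach (an illustration of mine, not taken from the answers) is to perform the shift in the unsigned counterpart of the type and cast back, since an unsigned right shift always fills with zeros. A minimal sketch:

    #include <cstdint>
    #include <iostream>
    #include <type_traits>

    int main()
    {
        // a signed 16-bit object holding the bit pattern 0xABC0
        // (a negative value on two's-complement platforms)
        int16_t val = static_cast<int16_t>( 0xABC0 );

        // do the shift in the unsigned counterpart: unsigned >> always fills with 0s
        using U = std::make_unsigned<int16_t>::type; // uint16_t
        int16_t shifted = static_cast<int16_t>( static_cast<U>( val ) >> 4 );

        std::cout << std::hex << shifted << std::endl; // prints abc (0x0ABC)
        return 0;
    }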
The program works when using uint16_t as the target type, for example:
    uint16_t val = 0x89A0; // 1000100110100000
    val = val >> 4;        // 0000100010011010
However, when using int16_t it fails, because 0x89A0 >> 4 is 0xF89A instead of the expected 0x089A.
    int16_t val = 0x89A0; // 1000100110100000
    val = val >> 4;       // 1111100010011010
I don't understand why the >> operator sometimes inserts 0 and sometimes 1. And I can't figure out how to safely perform the final operation of my function (result = result >> missingbits; at some point it must go wrong...).
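The differing fill bit comes from the kind of shift: for an unsigned left operand, >> always shifts in zeros, while for a negative signed value the result was implementation-defined before C++20, and mainstream compilers perform an arithmetic shift that copies the sign bit, which is exactly what produces 0xF89A here. If you want the widening without relying on that behavior at all, one possibility is to sign-extend by value instead of by shifting a negative number. The helper below (convertPortable is a name I made up, not part of the original code) is only a sketch of that idea:

    #include <bitset>
    #include <climits>
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <type_traits>

    // Sketch: interpret the low `count` bits as a two's-complement number of
    // `count` bits and widen it by value, so no negative value is ever shifted.
    template <typename T, std::size_t count>
    T convertPortable( const std::bitset<count>& bits )
    {
        static_assert( count <= sizeof( T ) * CHAR_BIT, "bitset wider than target type" );
        const long long raw = static_cast<long long>( bits.to_ullong() ); // e.g. 0x089A
        if ( std::is_signed<T>::value && bits[count - 1] )
            return static_cast<T>( raw - ( 1LL << count ) ); // e.g. -1894, i.e. 0xF89A as int16_t
        return static_cast<T>( raw );                        // e.g. 0x089A as uint16_t
    }

    int main()
    {
        const std::bitset<12> b( "100010011010" );
        std::cout << std::hex
                  << convertPortable<uint16_t, 12>( b ) << std::endl              // 89a
                  << ( convertPortable<int16_t, 12>( b ) & 0xFFFF ) << std::endl; // f89a
        return 0;
    }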
jpo38