C/C++ - convert a 24-bit signed integer to float

I am programming in C++. I need to convert a 24-bit signed integer (stored in a 3-byte array) to a float (normalized to [-1.0, 1.0]).

The platform is MSVC++ on x86 (which means that the input is little-endian).

I tried this:

 float convert(const unsigned char* src)
 {
     int i = src[2];
     i = (i << 8) | src[1];
     i = (i << 8) | src[0];
     const float Q = 2.0 / ((1 << 24) - 1.0);
     return (i + 0.5) * Q;
 }

I'm not quite sure, but the results I get from this code seem to be incorrect. Is my code wrong, and if so, why?

+6
c++ c floating-point integer 24bit
7 answers

You are not sign-extending the 24 bits into the integer; the high bits will always be zero. This code will work regardless of your int size:

 if (i & 0x800000) i |= ~0xffffff; 

Edit: Problem 2 is your scaling constant. Simply put, you want to multiply by the new maximum and divide by the old maximum, assuming that 0 remains 0.0 after the conversion.

 const float Q = 1.0 / 0x7fffff; 

Finally, why are you adding 0.5 in the final conversion? I could understand it if you were trying to round to an integer value, but you're converting in the other direction.

Edit 2: The source you are pointing to has a very detailed rationale for its choices. Not the way I would have chosen, but perfectly defensible nonetheless. My advice about the multiplier still stands, but the maximum is different because of the added 0.5 factor:

 const float Q = 1.0 / (0x7fffff + 0.5); 

Since the positive and negative values match after the addition, this should scale both directions correctly.
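
Putting it all together, here is a minimal sketch (my own assembly of the fixes above, not code posted in this answer) of the asker's convert() with the sign extension and the adjusted scale factor applied:

 // Sketch only: the question's convert() plus the sign extension and
 // the 0x7fffff + 0.5 scale factor discussed above.
 float convert(const unsigned char* src)
 {
     int i = src[2];
     i = (i << 8) | src[1];
     i = (i << 8) | src[0];

     // sign-extend the 24-bit value into the full int
     if (i & 0x800000)
         i |= ~0xffffff;

     // with the +0.5 offset kept, +/-(0x7fffff + 0.5) maps to +/-1.0
     const float Q = 1.0f / (0x7fffff + 0.5f);
     return (i + 0.5f) * Q;
 }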

+9

Since you are using a char array, it does not necessarily follow that the input is little-endian by virtue of being x86; the char array makes the byte order architecture-independent.

Your code is somewhat over-complicated. A simple solution is to shift the 24-bit data up to a 32-bit value (so that the machine's natural signed arithmetic works), and then use a simple ratio of the result to the maximum possible value (which is INT_MAX less 256, because of the vacant lower 8 bits).

 #include <limits.h>

 float convert(const unsigned char* src)
 {
     int i = src[2] << 24 | src[1] << 16 | src[0] << 8;
     return i / (float)(INT_MAX - 256);
 }

Test code:

 #include <iostream>

 unsigned char* makeS24(unsigned int i, unsigned char* s24)
 {
     s24[2] = (unsigned char)(i >> 16);
     s24[1] = (unsigned char)((i >> 8) & 0xff);
     s24[0] = (unsigned char)(i & 0xff);
     return s24;
 }

 int main()
 {
     unsigned char s24[3];
     volatile int x = INT_MIN / 2;

     std::cout << convert( makeS24( 0x800000, s24 )) << std::endl; // -1.0
     std::cout << convert( makeS24( 0x7fffff, s24 )) << std::endl; //  1.0
     std::cout << convert( makeS24( 0,        s24 )) << std::endl; //  0.0
     std::cout << convert( makeS24( 0xc00000, s24 )) << std::endl; // -0.5
     std::cout << convert( makeS24( 0x400000, s24 )) << std::endl; //  0.5
 }
+3

Since the 24-bit range is not symmetrical, this is probably the best compromise.

Maps -((2^23)-1) to -1.0 and ((2^23)-1) to 1.0.

(Note: this is the same conversion style used by 24-bit WAV files)

 float convert(const unsigned char* src)
 {
     int i = ((src[2] << 24) | (src[1] << 16) | (src[0] << 8)) >> 8;
     return ((float)i) / 8388607.0;
 }
+1

A solution that works for me:

 #include <cstring> // for memcpy

 /**
  * Convert 24 bits that are stored in a char* in little-endian
  * format into a C float by copying the assembled bit pattern.
  */
 float convert(const unsigned char* src)
 {
     float num_float;

     // concatenate the bytes and save them into a long int
     long int num_integer = ( ((src[2] & 0xFF) << 16)
                            | ((src[1] & 0xFF) << 8)
                            |  (src[0] & 0xFF) ) & 0xFFFFFFFF;

     // copy the bits from the long int variable into the float
     memcpy(&num_float, &num_integer, 4);

     return num_float;
 }
+1

Works for me:

 float convert(const char* stream)
 {
     int fromStream = (0x00 << 24)
                    + (stream[2] << 16)
                    + (stream[1] << 8)
                    +  stream[0];

     return (float)fromStream;
 }
+1

It looks like you are treating it as a 24-bit unsigned integer. If the most significant bit is 1, you need to make i negative by setting the remaining upper 8 bits to 1.
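
As a rough illustration of that (my own sketch, not code from this answer; the divide by 2^23 at the end is an assumption, since the answer only discusses the sign handling):

 // Sketch, assuming a 32-bit int: make i negative by filling the
 // upper 8 bits with 1s whenever the 24-bit sign bit is set.
 float convert(const unsigned char* src)
 {
     int i = (src[2] << 16) | (src[1] << 8) | src[0];

     if (i & 0x800000)     // 24-bit sign bit set?
         i |= ~0x00ffffff; // set the remaining upper 8 bits to 1

     return i / 8388608.0f; // normalize by 2^23 (assumed scale)
 }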

0

I'm not sure whether this is good programming practice, but it seems to work (at least with g++ on 32-bit Linux; haven't tried it on anything else yet) and is certainly more elegant than extracting byte by byte from a char array, especially if it is not a char array but rather a stream (in my case, a file stream) that you are reading from (if it is a char array, you can use memcpy instead of istream::read).

Just load the 24-bit variable into the less significant 3 bytes of a signed 32-bit integer (signed long). Then shift the long variable one byte to the left, so that the sign bit ends up where it belongs. Finally, just normalize the 32-bit variable, and you are all set.

 #include <fstream>

 union _24bit_LE {
     char access;
     signed long _long;
 } _24bit_LE_buf;

 // 'in' is the stream being read from (a std::ifstream in my case)
 float getnormalized24bitsample(std::ifstream& in)
 {
     in.read(&_24bit_LE_buf.access + 1, 3);
     return (_24bit_LE_buf._long << 8) / (0x7fffffff + .5);
 }

(Oddly enough, it doesn't seem to work when you just read the three more-significant bytes in at once.)

EDIT: It turns out that this method seems to have some problems that I don't quite understand yet. It's better not to use it at this time.

0
