Forcing a float into an unsigned char on ARM vs Intel

When I run the following code on an Intel computer ...

float f = -512;
unsigned char c;

while ( f < 513 )
{
    c = f;
    printf( "%f -> %d\n", f, c );
    f += 64;
}

... the output is as follows:

-512.000000 -> 0
-448.000000 -> 64
-384.000000 -> 128
-320.000000 -> 192
-256.000000 -> 0
-192.000000 -> 64
-128.000000 -> 128
-64.000000 -> 192
0.000000 -> 0
64.000000 -> 64
128.000000 -> 128
192.000000 -> 192
256.000000 -> 0
320.000000 -> 64
384.000000 -> 128
448.000000 -> 192
512.000000 -> 0

However, when I run the same code on an ARM device (in my case an iPad), the results are completely different:

-512.000000 -> 0
-448.000000 -> 0
-384.000000 -> 0
-320.000000 -> 0
-256.000000 -> 0
-192.000000 -> 0
-128.000000 -> 0
-64.000000 -> 0
0.000000 -> 0
64.000000 -> 64
128.000000 -> 128
192.000000 -> 192
256.000000 -> 0
320.000000 -> 64
384.000000 -> 128
448.000000 -> 192
512.000000 -> 0

As you can imagine, such a difference can lead to terrible errors in cross-platform projects. My questions:

  • Was I mistaken in believing that forcing a float into an unsigned char gives the same result on all platforms?

  • Could it be a compiler problem?

  • Is there an elegant workaround?

+5
3 answers

This behaviour is covered by the C standard, section 6.3.1 (specifically 6.3.1.4):

    When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined.

The accompanying footnote adds:

    The remaindering operation performed when a value of integer type is converted to unsigned type need not be performed when a value of real floating type is converted to unsigned type. Thus, the range of portable real floating values is (−1, Utype_MAX+1).

For unsigned char, Utype_MAX+1 is 256. The Intel build happens to wrap the value modulo 256 and the ARM build turns the negative values into 0, but anything outside (−1, 256) is "undefined behavior", so neither result is more correct than the other.

So, to answer your questions:

  • Yes, you were mistaken: the conversion is only guaranteed to behave identically everywhere for values inside the portable range.
  • No, it is not a compiler problem; both compilers conform to the standard.
  • The workaround is to make sure the value is in range before you convert, as discussed in other SO answers; see the sketch below.
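
For instance, a minimal sketch of such a range check (the helper name float_to_uchar_checked is just illustrative, not an existing API):

/* Convert only when f is inside the portable range (-1, 256);
 * otherwise report failure and let the caller decide what to do. */
static int float_to_uchar_checked( float f, unsigned char *out )
{
    if ( !( f > -1.0f && f < 256.0f ) )      /* also rejects NaN */
        return 0;
    *out = (unsigned char) f;                /* defined: truncates toward zero */
    return 1;
}

Inside (−1, 256) the truncation is fully specified, so the result is identical on both platforms.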
+7

The other answers explain why this happens; as a practical workaround, add an explicit cast:

c = (char) f;

Casting to (int) (or another integer type) works the same way. One caveat, though: the truncated value must still fit in the type you cast to, otherwise the conversion is undefined all over again.
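
A small sketch of that idea, assuming the Intel-style wrap-around is the behaviour you actually want to keep (the helper name is made up):

/* float -> int is defined here because every value in the loop fits in an
 * int; int -> unsigned char then wraps modulo 256 on every conforming
 * implementation. */
static unsigned char float_to_uchar_wrapped( float f )
{
    return (unsigned char) (int) f;
}

With this helper, c = float_to_uchar_wrapped( f ); should reproduce the Intel output from the question on ARM as well.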

+1

If what you are really after is the raw representation, you could read a byte of the float directly, e.g. c = *((char *)&f + sizeof(float) - 1); but that gives you bits of the encoding (and depends on the byte order), not the numeric value.

Different processors handle the out-of-range case differently in hardware. ARM saturates the value, while IA does not. As far as I understand, that is exactly why the C standard leaves this conversion undefined: each compiler can simply use the native instruction.

The workaround? Keep the value in range yourself. Check it, or clamp it, before the conversion, and you will get the same result everywhere.
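
A clamping sketch along those lines (again, the helper name is only illustrative):

/* Clamp into [0, 255] first, so the float -> unsigned char conversion
 * never sees an out-of-range value. */
static unsigned char float_to_uchar_clamped( float f )
{
    if ( !( f >= 0.0f ) )        /* negative values and NaN */
        return 0;
    if ( f >= 256.0f )
        return 255;
    return (unsigned char) f;    /* now guaranteed to be in range */
}

Whether you want wrapping, clamping, or an error is an application decision; the important part is that out-of-range values never reach the raw conversion.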

-1
