When I run the following code on an Intel computer ...
#include <stdio.h>

int main( void )
{
    float f = -512;
    unsigned char c;
    while ( f < 513 )
    {
        c = f;  /* float -> unsigned char conversion under test */
        printf( "%f -> %d\n", f, c );
        f += 64;
    }
    return 0;
}
... the output is as follows:
-512.000000 -> 0
-448.000000 -> 64
-384.000000 -> 128
-320.000000 -> 192
-256.000000 -> 0
-192.000000 -> 64
-128.000000 -> 128
-64.000000 -> 192
0.000000 -> 0
64.000000 -> 64
128.000000 -> 128
192.000000 -> 192
256.000000 -> 0
320.000000 -> 64
384.000000 -> 128
448.000000 -> 192
512.000000 -> 0
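Incidentally, the Intel numbers all match the value reduced modulo 256. A quick check of that observation (just my reading of the output, not a claim about what the compiler actually emits; the conversion to unsigned is well defined, so this snippet itself is portable):

#include <stdio.h>

int main( void )
{
    for ( int i = -512; i < 513; i += 64 )
        printf( "%d -> %u\n", i, (unsigned)i % 256u );  /* value mod 256 */
    return 0;
}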
However, when I run the same code on an ARM device (in my case an iPad), the results for negative values are completely different:
-512.000000 -> 0
-448.000000 -> 0
-384.000000 -> 0
-320.000000 -> 0
-256.000000 -> 0
-192.000000 -> 0
-128.000000 -> 0
-64.000000 -> 0
0.000000 -> 0
64.000000 -> 64
128.000000 -> 128
192.000000 -> 192
256.000000 -> 0
320.000000 -> 64
384.000000 -> 128
448.000000 -> 192
512.000000 -> 0
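The ARM output looks as if negative inputs are first clamped to 0 and only then the low 8 bits are kept. A minimal simulation of that hypothesis (just my guess from the numbers; I have not verified the generated instructions):

#include <stdio.h>

int main( void )
{
    for ( float f = -512; f < 513; f += 64 )
    {
        /* hypothesis: clamp negatives to 0, then keep the low 8 bits */
        unsigned int u = ( f < 0 ) ? 0u : (unsigned int)f;
        printf( "%f -> %u\n", f, u & 0xFFu );
    }
    return 0;
}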
As you can imagine, such a difference can lead to serious bugs in cross-platform projects. My questions:
Was I mistaken in believing that a forced conversion from float to unsigned char gives the same result on all platforms?
Could this be a compiler problem?
Is there an elegant workaround? (The sketch after these questions is what I am trying at the moment.)
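For the last question, the workaround I am currently experimenting with is to convert to a signed integer first and only then to unsigned char: float -> long is well defined for values in range, and long -> unsigned char is guaranteed to wrap modulo 256. A minimal sketch (it assumes wrap-around modulo 256 is the behavior I actually want, and that f stays within the range of long):

#include <stdio.h>

int main( void )
{
    float f = -512;
    unsigned char c;
    while ( f < 513 )
    {
        /* float -> long is defined for in-range values,
           and long -> unsigned char wraps modulo 256 on every platform */
        c = (unsigned char)(long)f;
        printf( "%f -> %d\n", f, c );
        f += 64;
    }
    return 0;
}

On my machines this reproduces the Intel-style wrap-around output everywhere, but I would still like to know whether the original code is actually allowed to differ between platforms.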