sin(int) broken in the Xcode debugger (lldb)

I have a universal iOS application that targets iOS SDK 6.1, and the compiler is set to Apple LLVM 4.2. When I set a breakpoint in my code and run the following, I get weird results for sin(int).

For reference, sin(70) = 0.7739 (70 in radians).
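As a sanity check of that reference value, here is a minimal standalone C program (my own sketch, not part of the original question, assuming a standard toolchain with the math library linked):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* 70 is in radians; with <math.h> in scope the argument is a double. */
        printf("%.4f\n", sin(70.0));   /* prints 0.7739 */
        return 0;
    }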

    (lldb) p (double)sin(70)
    (double) $0 = -0.912706376367676   // initial value
    (lldb) p (double)sin(1.0)
    (double) $1 = 0.841470984807897    // reset the value sin(int) will return
    (lldb) p (double)sin(70)
    (double) $2 = 0.841470984807905    // returned same as sin(1.0)
    (lldb) p (double)sin(70.0)
    (double) $3 = 0.773890681557889    // reset the value sin(int) will return
    (lldb) p (double)sin(70)
    (double) $4 = 0.773890681558519
    (lldb) p (double)sin((float)60)
    (double) $5 = -0.304810621102217   // casting works the same as appending a ".0"
    (lldb) p (double)sin(70)
    (double) $6 = -0.30481062110269
    (lldb) p (double)sin(1)
    (double) $7 = -0.304810621102223   // every sin(int) behaves the same way

Remarks:

  • The first value of sin(int) in a debugging session is always -0.912706376367676.
  • sin(int) always returns the same value that was returned by the last sin(float) executed.
  • If I replace p with po or expr (e.g. expr (double)sin(70)), I get accurate results.

Why does the debugger behave like this?

Does this mean that I have to typecast every single parameter every time I call a function?

More interesting behavior with NSLog:

    (lldb) expr (void)NSLog(@"%f", (float)sin(70))
    0.000000                             // new initial value
    (lldb) expr (void)NSLog(@"%f", (float)sin(70.0))
    0.773891
    (lldb) expr (void)NSLog(@"%f", (float)sin(70))
    0.000000                             // does not return the previous sin(float) value
    (lldb) p (double)sin(70)
    (double) $0 = 1.48539705402154e-312  // sin(int) affected by sin(float) differently
    (lldb) p (double)sin(70.0)
    (double) $1 = 0.773890681557889
    (lldb) expr (void)NSLog(@"%f", (float)sin(70))
    0.000000                             // not affected by sin(float)
1 answer

You have run into the wonderful world of default argument promotions in C. Remember, lldb does not know the argument types or the return type of sin(). The correct prototype is double sin(double). When you write

    (lldb) p (float)sin(70)

there are two problems with this. First, you are providing an integer argument, and the default C promotion rules will pass it as an int, a 4-byte value on the architectures in question. A double, besides being 8 bytes, uses a completely different encoding, so sin() gets garbage input. Second, sin() returns a double, an 8-byte value on these architectures, but you are telling lldb to grab 4 bytes and try to make something meaningful out of them. If you called p (float)sin((double)70) (so that only the return type was wrong), lldb would print a nonsense value such as 9.40965e+21 instead of 0.773891.
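The argument-passing problem is easier to see outside the debugger. Below is a minimal sketch of my own (not from the original answer) that mimics a prototype-less call by invoking sin() through a deliberately mistyped function pointer; the call is undefined behavior, which is exactly the point:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Deliberately wrong signature, roughly what the debugger assumes when
         * it has no prototype: the int argument lands in an integer register or
         * stack slot, while sin() reads its double argument from a floating-point
         * register, so it sees whatever value happens to be there. */
        double (*sin_no_proto)(int) = (double (*)(int))sin;

        printf("no prototype: %f\n", sin_no_proto(70)); /* undefined behavior: garbage */
        printf("correct:      %f\n", sin(70.0));        /* 0.773891 */
        return 0;
    }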

When you wrote

    (lldb) p (double)sin(70.0)

you fixed both of these errors. The C default promotions pass a floating-point argument as a double. If you were calling sinf(), you would have problems again, because that function expects only a float.
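For comparison, compiled code rarely hits any of this, because the headers give the compiler the real prototypes and it inserts the conversions itself; that is precisely the knowledge lldb lacks until you supply a prototype. A small sketch of my own, assuming a standard C toolchain:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* float sinf(float) is in scope, so the compiler narrows 70.0 to 70.0f
         * before the call; without that prototype, the default promotions would
         * hand sinf() an 8-byte double that it reads incorrectly. */
        float r = sinf(70.0);
        printf("%f\n", r);   /* about 0.773891 */
        return 0;
    }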

If you want to give lldb a proper prototype for sin() so you never have to worry about these issues, that is easy. Add this line to your ~/.lldbinit file:

    settings set target.expr-prefix ~/lldb/prefix.h

(I have a ~/lldb directory where I keep useful Python files and the like), and put this in ~/lldb/prefix.h:

 extern "C" { int strcmp (const char *, const char *); void printf (const char *, ...); double sin(double); } 

(You can see that I also keep prototypes for strcmp() and printf() in my prefix file, so I don't need to cast those.) You don't want to put too much in here: this file is added to every expression you evaluate in lldb, and it will slow down expression evaluation if you dump every prototype from /usr/include into it.

With this prototype added via my target.expr-prefix setting:

    (lldb) p sin(70)
    (double) $0 = 0.773890681557889