Use CGFloat instead of float? Apple itself doesn't

[Perhaps I do not fully understand this topic, because I grew up with languages that let the developer almost completely ignore the processor architecture, such as Java, except in a few specific cases. Please correct me if any of my assumptions are wrong.]

Reading around, it seems the recommendation is to use CGFloat instead of, say, float, because it future-proofs my code against different processor architectures (64-bit handles floating point differently). Assuming this is correct, why does UISlider, for example, use float directly (for value)? Wouldn't it be wrong (or at least pointless) for me to read its float and convert it to CGFloat, since my code would break anyway if the architecture changed?

+4
2 answers

On 32-bit platforms, CGFloat is just a typedef for float; on 64-bit platforms it is a typedef for double. This gives Apple the flexibility to make CGFloat something else in the future, which is why using it future-proofs your code. Objective-C does this with many types; NSInteger is another example.

Although they can often be used interchangeably, I agree that in the case of UISlider it does not appear that Apple was eating its own dog food.

+6

CGFloat is part of Core Graphics and is used for things like pixel values, which have changed a lot over the years (any TRS-80 lovers here?). So using CGFloat for drawing-related values is recommended.

The slider's value is not related to graphics. It is just a number from 0.0 to 1.0, which a float can represent with plenty of accuracy, and probably even a 16-bit float could.

+2

Source: https://habr.com/ru/post/1315213/
