[Perhaps I don't understand this topic, because I grew up with languages such as Java that let the developer almost completely ignore the processor architecture, except in a few specific cases. Please correct me if any of my assumptions are wrong.]
Reading here, it seems the recommendation is to use CGFloat instead of, say, float, because it future-proofs my code for different processor architectures (64-bit handles floating point differently). Assuming that's correct, why does UISlider, for example, use float directly (for value)? Isn't it wrong (or at least inconsistent) for me to read its float and convert it to CGFloat, since my code would break anyway if the architecture changes?
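To make concrete what I mean, here is a minimal sketch of the conversion in question (the helper name positionForSlider is hypothetical, not from any API). Since CGFloat is a typedef for float on 32-bit and double on 64-bit, widening a UISlider's float value to CGFloat should be implicit and lossless on either architecture:

```objc
#import <UIKit/UIKit.h>

// Hypothetical helper: turn a slider's float value into the CGFloat
// that Core Graphics and layout code expect.
static CGFloat positionForSlider(UISlider *slider) {
    float raw = slider.value;  // UIKit declares value as float
    // float -> CGFloat either widens to double (64-bit) or is a
    // no-op (32-bit), so no precision is lost on either architecture.
    return (CGFloat)raw;
}
```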