I make a habit of selecting the smallest data type that fully represents my values while preserving semantics. I don't use long when int is guaranteed to be enough, and the same goes for int vs. short.
But for real numbers, C# usually defaults to double, and I could not find a corresponding keyword for single (float). I can still use System.Single, but I wonder why C# didn't promote it to a language keyword, as it did with double.
In contrast, all the integer types have language keywords: short, int, long, ushort, uint, and ulong.
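For what it's worth, the C# keywords are simply aliases for the underlying BCL types, so float and System.Single name exactly the same type, just as int aliases System.Int32. A minimal sketch:

```csharp
using System;

class TypeAliases
{
    static void Main()
    {
        // The keywords are aliases for the BCL struct types.
        Console.WriteLine(typeof(float) == typeof(Single));   // True
        Console.WriteLine(typeof(double) == typeof(Double));  // True

        // The size difference that motivates choosing float over double.
        Console.WriteLine(sizeof(float));   // 4
        Console.WriteLine(sizeof(double));  // 8
    }
}
```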
So, is this a signal to developers that single precision is deprecated, or otherwise should not be used, in favor of double or decimal?
(Needless to say, single precision has the disadvantage of lower accuracy. That is a well-known trade-off for the smaller size, so please don't focus on it.)
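To make the accuracy trade-off concrete: float has a 24-bit significand, so the first integer it cannot represent exactly is 2^24 + 1 = 16777217, while double's 53-bit significand holds that value without rounding. A small illustration:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        // 16777217 exceeds float's 24-bit significand
        // and silently rounds to 16777216.
        float f = 16777217f;
        Console.WriteLine(f == 16777216f);  // True

        // double represents the same value exactly.
        double d = 16777217d;
        Console.WriteLine(d == 16777217d);  // True
    }
}
```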
Edit: My apologies, I mistakenly believed that float was not a keyword in C#. That mistaken belief is what made the question confused in the first place.