Any reason to prefer single precision to double precision data?

I usually select the smallest data type that can fully represent my values while preserving semantics. I don't use long when int is guaranteed to be enough, and the same goes for int vs. short.

But for real numbers, C# usually uses double, and it seemed to me to have no single or float counterpart. I can still use System.Single, but I wondered why C# didn't bother to make it a language keyword, as it did with double.

In contrast, there are language keywords short , int , long , ushort , uint and ulong .

So, is this a signal to developers that single precision is deprecated, or otherwise should not be used, in favor of double or decimal?

(Needless to say, single precision has the disadvantage of lower accuracy. That's a well-known trade-off for the smaller size, so please don't focus on it.)

Edit: My apologies, I mistakenly believed that float was not a keyword in C#. That mistaken belief is what made this question moot.

+4
5 answers

By default, any literal like 2.0 is interpreted as double unless you specify otherwise with a suffix. This may contribute to double being used more often than the other floating-point types. Just a note.
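To illustrate that point (my own sketch, not part of the original answer), here is how literal suffixes determine the type in C#:

```csharp
using System;

class LiteralDemo
{
    static void Main()
    {
        var a = 2.0;   // no suffix: double
        var b = 2.0f;  // 'f' suffix: float (System.Single)
        var c = 2.0m;  // 'm' suffix: decimal

        Console.WriteLine(a.GetType()); // System.Double
        Console.WriteLine(b.GetType()); // System.Single
        Console.WriteLine(c.GetType()); // System.Decimal
    }
}
```

Without the `f` suffix, `float x = 2.0;` does not even compile, because the double literal cannot be implicitly narrowed to float.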

As for the apparent lack of a single keyword: the keyword float maps to the type System.Single.

+3

The alias float represents the .NET System.Single data type, so I would say it's safe to use.

+7

The following correspondence exists between C # keywords and .NET type names:

 double - System.Double
 float - System.Single

So, there is one keyword in C # for each of the two types in question.
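A quick way to see this for yourself (an illustrative sketch, not from the original answer): the keywords and the .NET type names refer to exactly the same types, so they are interchangeable everywhere.

```csharp
using System;

class AliasDemo
{
    static void Main()
    {
        // The C# keywords are aliases for the .NET struct types.
        Console.WriteLine(typeof(float) == typeof(System.Single));   // True
        Console.WriteLine(typeof(double) == typeof(System.Double));  // True

        float f = 1.5f;
        System.Single s = f; // identical types; no conversion happens here
        Console.WriteLine(s);
    }
}
```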

I don't know why it would seem that float is not a C# keyword. It certainly is.

+5

Actually, there is a float keyword for single-precision floating point.

Also, you should not assume that short or byte is better than int. int is generally the best choice for integers; read more about it here: Why should I use int instead of byte or short in C#
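One concrete reason int is more convenient (my own example, not from the linked answer): C# promotes short and byte operands to int before arithmetic, so narrower types force extra casts.

```csharp
using System;

class PromotionDemo
{
    static void Main()
    {
        short x = 1, y = 2;

        // short + short is evaluated as int, so assigning the result
        // back to a short requires an explicit cast:
        short narrow = (short)(x + y);

        // Assigning to int needs no cast at all:
        int wide = x + y;

        Console.WriteLine(narrow); // 3
        Console.WriteLine(wide);   // 3
    }
}
```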

+5

In C#, float is an alias for System.Single.

+2
