Why does Java allow casting double to char?

Why does Java allow char c = (char)65.8; ?

Should this not cause an error, since 65.8 is not an exact Unicode value? I understand that the double is truncated to an integer, in this case 65, but it seems like bad design to me to allow the programmer to make such a conversion.

+8
Tags: java, double, casting, char
4 answers

This is called a narrowing conversion. From the Oracle docs:

22 specific conversions on primitive types are called the narrowing primitive conversions:

short to byte or char

char to byte or short

int to byte, short or char

long to byte, short, char or int

float to byte, short, char, int or long

double to byte, short, char, int, long or float

A narrowing primitive conversion may lose information about the overall magnitude of a numeric value, and may also lose precision and range.
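A minimal sketch (class and variable names are my own) illustrating a few of these narrowing conversions and the information each can lose:

    public class NarrowingDemo {
        public static void main(String[] args) {
            byte b = (byte) 130;          // 130 does not fit in a byte: becomes -126
            int i = (int) 4_294_967_297L; // high 32 bits of 2^32 + 1 discarded: 1
            char c = (char) 65.8;         // fractional part truncated: 'A' (65)
            System.out.println(b + " " + i + " " + c); // prints: -126 1 A
        }
    }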

In Java, there are two main kinds of type conversion: widening and narrowing .

A widening conversion occurs when converting from a type with a smaller (or narrower) range to a type with a larger (or wider) range. Because of this, there is no way to lose data, and the conversion is considered safe.

A narrowing conversion occurs when converting from a type with a larger (or wider) range to a type with a smaller (or narrower) range. Since the range shrinks, there is a chance of data loss, so this conversion is considered "unsafe".

[diagram: narrowing type conversion]
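A short sketch of both directions; widening happens implicitly on assignment, while narrowing compiles only with an explicit cast:

    public class WideningVsNarrowing {
        public static void main(String[] args) {
            char c = 'A';
            int i = c;           // widening char -> int: implicit, no cast needed
            double d = i;        // widening int -> double: implicit
            // int back = d;     // narrowing double -> int: does not compile
            int back = (int) d;  // explicit cast required in the narrowing direction
            System.out.println(i + " " + d + " " + back); // prints: 65 65.0 65
        }
    }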

Converting from byte to char is a special case: it is a widening and a narrowing conversion at the same time. The byte is first widened to an int , and then the int is narrowed to a char .
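A small illustration of this two-step conversion; a negative byte is sign-extended to int first, so the resulting char keeps the low 16 bits of that int:

    public class ByteToCharDemo {
        public static void main(String[] args) {
            byte b = -1;
            // Step 1 (widening): byte -1 becomes int -1 (0xFFFFFFFF).
            // Step 2 (narrowing): only the int's low 16 bits survive (0xFFFF).
            char c = (char) b;
            System.out.println((int) c); // prints: 65535
        }
    }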

One reason I can think of why a narrowing cast does not produce an error/exception is to provide a convenient/simple/fast type conversion in cases where the data will not actually be lost. The compiler leaves it to us to make sure the converted value fits in the smaller range. It is also useful when we want to quickly truncate values, for example chopping the fractional part off a double (by casting it to an int ).
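For example, a quick truncation sketch (the values are made up); note that the cast truncates toward zero rather than rounding:

    public class TruncationDemo {
        public static void main(String[] args) {
            System.out.println((int) 9.99);  // prints: 9  (fraction dropped, not rounded)
            System.out.println((int) -9.99); // prints: -9 (toward zero, not -10)
        }
    }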

+10

It does not happen automatically on assignment: that would be a compilation error.

The fact that the programmer makes a conscious choice (the explicit cast) means that he accepts the possibility of data loss and truncation.
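A minimal sketch of that choice; the commented-out line shows what the compiler rejects:

    public class ExplicitCastRequired {
        public static void main(String[] args) {
            // char c = 65.8;     // compile-time error: incompatible types:
            //                    // possible lossy conversion from double to char
            char c = (char) 65.8; // compiles: the cast records the programmer's intent
            System.out.println(c); // prints: A
        }
    }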

+4

You may have code, such as encryption algorithms, where casting a double or float to a char is useful. Furthermore, char is an unsigned type, which means that (char)200.5 gives a different result than (char)(byte)200.5 .
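A small demonstration of that difference (the numeric values follow from the narrowing rules quoted above):

    public class UnsignedCharDemo {
        public static void main(String[] args) {
            char a = (char) 200.5;        // double -> char: truncates to 200
            char b = (char) (byte) 200.5; // double -> byte wraps 200 to -56, then
                                          // byte -> char sign-extends: 65480 (0xFFC8)
            System.out.println((int) a + " " + (int) b); // prints: 200 65480
        }
    }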

+2

How can a dumb computer know what was intended?

 char c = (char)65.8; // valid, double gets converted and explicitly truncated to a char 

It may be that during a computation you perform complex calculations involving double arithmetic and, at the end, you cast the final value and display it as a character. What's wrong with that?
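A hypothetical sketch of such a computation; the names and the formula are invented for illustration:

    public class ComputedChar {
        public static void main(String[] args) {
            // Pick a letter via double arithmetic, then truncate the
            // result and display it as a character.
            double offset = 26 * Math.random();  // somewhere in [0, 26)
            char letter = (char) ('a' + offset); // 'a' + 0..25 -> 'a'..'z'
            System.out.println(letter);          // prints some lowercase letter
        }
    }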

+1
