Because in Java, char is an integral data type whose values are 16-bit unsigned integers representing UTF-16 code units. Since char is a numeric type, assigning it a numeric value simply stores that value, which is then interpreted as the encoding of the corresponding Unicode character.
Run the following two lines of code:
char c = 65;
System.out.println("Character: " + c);
You will see the result:
Character: A
(I would have used 7, as in the example, but that is a non-printable character.) The character "A" is printed because the decimal value 65 (41 in hex) is the encoding of that letter of the alphabet. See Joel Spolsky's article The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) for more information on Unicode.
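As an illustrative sketch (not part of the original answer), you can also go the other way and inspect the numeric code unit behind a char by casting it to int:

public class CharDemo {
    public static void main(String[] args) {
        char c = 65;                                              // 65 == 0x41 == 'A'
        System.out.println("Character: " + c);                    // prints: Character: A
        System.out.println("Code unit: " + (int) 'A');            // prints: Code unit: 65
        System.out.println("Hex: " + Integer.toHexString('A'));   // prints: Hex: 41
    }
}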
Update:
If you are asking why assigning an int value to a char usually gives a compiler error about a "possible loss of precision", as in the following code:
int i = 65;
char c = i;
System.out.println("Character: " + c);
the answer is what PSpeed mentioned in his comment. The first (2-line) version of the code assigns a compile-time constant, so the value is known at compile time. Since 65 lies within the valid range for char ('\u0000' to '\uffff' inclusive, i.e. 0 to 65535), the assignment is allowed. In the second (3-line) version the assignment is not allowed, because the int variable can hold any value from -2147483648 to 2147483647 inclusive, most of which are outside the range a char can hold.
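As a hedged sketch (not from the original answer), two common ways to make the 3-line version compile are an explicit narrowing cast or declaring the int as a final compile-time constant:

public class CharAssignDemo {
    public static void main(String[] args) {
        int i = 65;
        char c1 = (char) i;                        // explicit narrowing cast: compiles, keeps low 16 bits
        System.out.println("Character: " + c1);    // prints: Character: A

        final int j = 65;                          // final with a constant initializer is a compile-time constant
        char c2 = j;                               // allowed: the compiler knows 65 fits in char
        System.out.println("Character: " + c2);    // prints: Character: A
    }
}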