Java uses Unicode internally. Always. (In fact, it uses UTF-16 internally most of the time, but that is too much detail for now.)
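A quick way to see the UTF-16 part is to count code units versus code points. This is just a minimal sketch; the class name and the sample character are arbitrary choices:

```java
public class Utf16Demo {
    public static void main(String[] args) {
        // 'A' (U+0041) followed by MUSICAL SYMBOL G CLEF (U+1D11E), which lies outside the BMP
        // and therefore needs a surrogate pair in UTF-16.
        String s = "A\uD834\uDD1E";
        System.out.println(s.length());                             // 3: UTF-16 code units
        System.out.println(s.codePointCount(0, s.length()));        // 2: actual Unicode code points
        System.out.println(Integer.toHexString(s.codePointAt(1)));  // 1d11e
    }
}
```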
It cannot use ASCII internally (for String, for example). Any String that can be represented in ASCII can also be represented in Unicode, so this should not be a problem.
The only place the platform comes into play is when Java has to choose an encoding because you didn't specify one. For example, when you create a FileWriter to write String values to a file: at that point Java must use an encoding that defines how each character is mapped to bytes. If you do not specify one, the platform's default encoding is used. That default encoding is almost never ASCII. Most Linux platforms use UTF-8, Windows often uses some derivative of ISO-8859-* (or other culture-specific 8-bit encodings), but no current OS uses ASCII (simply because ASCII cannot represent many important characters).
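To make that concrete, here is a minimal sketch contrasting the platform-dependent default with an explicitly chosen encoding (the file names and the sample text are made up for illustration):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class EncodingDemo {
    public static void main(String[] args) throws IOException {
        String text = "Grüße";  // contains non-ASCII characters

        // Uses the platform default encoding: the bytes on disk can differ between machines.
        try (Writer w = new FileWriter("default.txt")) {
            w.write(text);
        }

        // Explicit encoding: the bytes on disk are the same everywhere.
        try (Writer w = Files.newBufferedWriter(Paths.get("utf8.txt"), StandardCharsets.UTF_8)) {
            w.write(text);
        }
    }
}
```

If portability matters, passing the charset explicitly (as in the second block) is the safer choice.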
In fact, pure ASCII is almost irrelevant these days: nobody uses it. ASCII matters only as a common subset of most 8-bit encodings (including UTF-8): the bottom 128 Unicode code points map 1:1 to the numeric values 0-127 in many, many encodings. But pure ASCII (where the values 128-255 are undefined) is no longer used.
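If you want to convince yourself of that 1:1 mapping, a small sketch is enough (the expected output is noted in the comments):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class AsciiSubsetDemo {
    public static void main(String[] args) {
        String ascii = "Hello";  // all code points below 128

        // The same byte values come out regardless of which of these encodings is used.
        System.out.println(Arrays.toString(ascii.getBytes(StandardCharsets.US_ASCII)));
        System.out.println(Arrays.toString(ascii.getBytes(StandardCharsets.UTF_8)));
        System.out.println(Arrays.toString(ascii.getBytes(StandardCharsets.ISO_8859_1)));
        // All three print [72, 101, 108, 108, 111]
    }
}
```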