Latin-1 (ISO-8859-1) and its extension Windows CP-1252 must be supported for Western users. UTF-8 would be an excellent choice, but people often don't get to choose. Chinese users will need GB-18030, and remember that Japanese, Russian, and Greek users all have their own legacy encodings alongside Unicode and its UTF-8 encoding.
Regarding detection: most encodings cannot be detected safely. In some (e.g. Latin-1), certain byte values are simply invalid. In UTF-8 any byte value may appear, but not every sequence of byte values is valid. In practice, though, you won't do the decoding yourself; you'll use an encoding/decoding library, try to decode, and catch errors. So why not support every encoding that library supports?
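As a minimal sketch of that try-and-catch approach in Python (the candidate list and the file name are only illustrative, and the ordering matters because strict encodings like UTF-8 should be tried before permissive ones):

```python
# Try a list of candidate encodings and keep the first one that decodes cleanly.
CANDIDATE_ENCODINGS = ["utf-8", "gb18030", "cp1252"]

def guess_decode(data: bytes):
    """Return (text, encoding) for the first candidate that decodes without error."""
    for enc in CANDIDATE_ENCODINGS:
        try:
            return data.decode(enc), enc
        except UnicodeDecodeError:
            continue
    # Latin-1 accepts any byte sequence, so it works as a last-resort fallback.
    return data.decode("latin-1"), "latin-1"

# Illustrative usage; "somefile.txt" is just a placeholder name.
with open("somefile.txt", "rb") as f:
    text, used = guess_decode(f.read())
print(f"decoded as {used}")
```

Note that Latin-1 never raises a decode error, which is exactly why it belongs at the end: anything placed after it would never be reached.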
You can also build heuristics, such as decoding with a specific encoding and then checking the result for strange characters, odd character combinations, or the frequency of such characters (see the sketch below). But it will never be foolproof, and I agree with Wilks that you shouldn't worry about it. In my experience, people usually know that a file has a specific encoding, or that only two or three are possible. So if they see you have chosen the wrong one, they can easily switch. And look at other editors: the smartest solution is not always the best, especially if people are used to other programs.
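A rough illustration of such a heuristic: decode with each candidate and score the result by how many "suspicious" characters it contains. The scoring rule here is made up for the example, not a recommendation.

```python
import unicodedata

def suspicion_score(text: str) -> int:
    """Higher score means the decoded text looks less like real content."""
    score = 0
    for ch in text:
        if ch == "\ufffd":                              # replacement character
            score += 10
        elif unicodedata.category(ch) == "Cc" and ch not in "\t\r\n":
            score += 5                                  # unexpected control character
        elif unicodedata.category(ch) == "Co":
            score += 5                                  # private-use character
    return score

def best_guess(data: bytes, candidates=("utf-8", "cp1252", "gb18030")) -> str:
    """Pick the candidate encoding whose decoded result looks least suspicious."""
    scored = []
    for enc in candidates:
        try:
            scored.append((suspicion_score(data.decode(enc)), enc))
        except UnicodeDecodeError:
            pass
    return min(scored)[1] if scored else "latin-1"
```

As the answer says, this can still guess wrong, which is why letting the user override the choice matters more than making the heuristic clever.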