Given these symptoms, the UTF-8 data has somewhere been decoded using an ISO-8859-x encoding. The character č (LATIN SMALL LETTER C WITH CARON, U+010D) is encoded in UTF-8 as the two bytes 0xC4 and 0x8D. In the ISO-8859-1 codepage layout, those bytes represent Ä and [nothing] (an undefined control code) respectively, which is exactly the kind of garbage you are seeing.
This particular problem can have many causes. Since Facelets itself already uses UTF-8 by default to parse the HTTP POST request parameters and to write the HTTP response, there should not be anything you need to change on the Java/JSF side.
However, if you manually grab a request parameter before JSF creates or restores the view (for example, in a custom servlet filter), it may be too late for Facelets to set the correct character encoding of the request. In that case you need to add the following line to the custom filter before continuing the chain, or put it in a new filter that runs before the filter causing the problem:
request.setCharacterEncoding("UTF-8");
Also, if you have explicitly or implicitly changed the default character encoding of Facelets, for example with <?xml version="1.0" encoding="ISO-8859-1"?> or <f:view encoding="ISO-8859-1">, then Facelets will use ISO-8859-1 instead. You need to change it to UTF-8 or remove it altogether.
If that is not the case either, then the database side is the main suspect. There I see two possible causes:
- The DB table does not use UTF-8.
- The JDBC driver does not use UTF-8.
How exactly to solve this depends on the database server used. Usually you need to specify the encoding when you CREATE the database or table, but you can also change it afterwards with ALTER. As for the JDBC driver, you usually need to specify the charset explicitly as a parameter in the connection URL. For example, in the case of MySQL:
jdbc:mysql://localhost:3306/db_name?useUnicode=yes&characterEncoding=UTF-8
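For completeness, a minimal sketch of obtaining a connection with such a URL via plain JDBC; the database name db_name and the credentials are placeholders, and if you use a connection pool or container-managed data source, the same parameters go into its URL property instead:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionTest {

    // Placeholder URL; note the useUnicode and characterEncoding parameters.
    private static final String URL =
        "jdbc:mysql://localhost:3306/db_name?useUnicode=yes&characterEncoding=UTF-8";

    public static void main(String[] args) throws SQLException {
        try (Connection connection = DriverManager.getConnection(URL, "user", "password")) {
            System.out.println("Connected with UTF-8 character encoding");
        }
    }
}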