I am working on an application in C# that has to read and write a particular data file format. The only problem at the moment is that the format uses strictly single-byte characters, and C# keeps trying to throw Unicode at it when I use a writer and a char array (which doubles the file size, among other serious issues). I've been working on modifying the code to use byte arrays instead, but that causes a few complaints when feeding them into the tree and datagrid controls, and it involves conversions and whatnot.
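To illustrate what I mean, here is a minimal sketch (the record bytes and file names are made up) of how pushing the data through a text encoding either doubles the size or mangles it, while writing the raw bytes keeps the format intact:

    using System;
    using System.IO;
    using System.Text;

    class EncodingDemo
    {
        static void Main()
        {
            // A made-up record: plain ASCII plus one byte above 0x7F.
            byte[] record = { 0x41, 0x42, 0x80, 0x43 };

            // Treating the bytes as chars and writing through a text encoding
            // either doubles the size (UTF-16) or loses the 0x80 (ASCII -> '?').
            char[] asChars = Array.ConvertAll(record, b => (char)b);
            File.WriteAllText("as-utf16.dat", new string(asChars), Encoding.Unicode); // 2 bytes per char, plus a BOM
            File.WriteAllText("as-ascii.dat", new string(asChars), Encoding.ASCII);   // the 0x80 comes out as 0x3F

            // Writing the bytes directly preserves the single-byte format exactly.
            File.WriteAllBytes("raw.dat", record);
        }
    }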
I've spent a bit of time searching on Google, and there doesn't seem to be a simple typedef I could use to force char to be a byte for my program, at least not without causing extra complications.
Is there an easy way to force a C#/.NET program to use ASCII only and not touch Unicode at all?
Later: I've almost got it working. Using ASCIIEncoding on the BinaryReader/BinaryWriter fixed most of the problems (there were a few issues with an extra character being added to strings, but I fixed those). I have one last problem, which is very small but could be big: a specific character in the file (it prints as the Euro sign) gets converted to a ? when loading/saving files. That's not much of a problem in text, but if it occurred in a record length it could change the size by kilobytes (not good, obviously). I think this is caused by the encoding, but if the character came from the file, why won't it go back?
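As far as I can tell, ASCII is a 7-bit encoding, so any byte above 0x7F decodes to '?' and the original value is gone before it can be written back. A small sketch of what I think is happening (the byte values are just an example; for comparison, a single-byte code page such as ISO-8859-1 round-trips every byte value):

    using System;
    using System.Text;

    class AsciiRoundTrip
    {
        static void Main()
        {
            // Example only: four bytes that might be a little-endian int record length.
            byte[] original = { 0x10, 0x80, 0x00, 0x00 };

            // ASCII is 7-bit: anything above 0x7F decodes to '?', so the 0x80 never comes back.
            string viaAscii = Encoding.ASCII.GetString(original);
            Console.WriteLine(BitConverter.ToString(Encoding.ASCII.GetBytes(viaAscii))); // 10-3F-00-00

            // For comparison, a single-byte code page (ISO-8859-1) maps all 256 byte
            // values to distinct chars, so the same round trip is lossless.
            Encoding latin1 = Encoding.GetEncoding(28591);
            string viaLatin1 = latin1.GetString(original);
            Console.WriteLine(BitConverter.ToString(latin1.GetBytes(viaLatin1))); // 10-80-00-00
        }
    }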
The exact problem/results are as follows:
None of those results works for me, since the change can occur anywhere in the file (if 0x80 is changed to 0x3F in an int-length record, that could be a 65 * (256^3) difference). Not good. I tried using UTF-8 encoding, figuring that would fix the problem fairly well, but now it adds that second character, which is even worse.
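For what it's worth, I suspect the extra character with UTF-8 appears because any char value above 0x7F encodes to two bytes in UTF-8, so the single-byte format still isn't preserved. A quick illustration (the char here is just a stand-in for whatever the problem character really is):

    using System;
    using System.Text;

    class Utf8Expansion
    {
        static void Main()
        {
            char c = '\u0080'; // a char value above 0x7F, stand-in for the problem character

            // UTF-8 turns a single char above 0x7F into two bytes...
            Console.WriteLine(BitConverter.ToString(Encoding.UTF8.GetBytes(new[] { c }))); // C2-80

            // ...while ASCII collapses it to the '?' substitution instead.
            Console.WriteLine(BitConverter.ToString(Encoding.ASCII.GetBytes(new[] { c }))); // 3F
        }
    }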