You do not return the read data from the method, as one usually would. Instead, when read is called, the caller hands you the cbuf array, which is essentially the address of a block of memory, and asks you to write up to len chars into it.
When you do cbuf = fnlStr.toCharArray() , you merely replace your local copy of that address with a different one; you do not actually modify the memory you were supposed to write to. You need to either iterate over the array in a for loop and write into it, or use System.arraycopy if you built the result in another buffer.
For example, the following read method will always read "Test\n":

public int read(char[] cbuf, int off, int len) throws IOException {
    char[] result = "Test\n".toCharArray();
    int numRead = Math.min(len, result.length);
    System.arraycopy(result, 0, cbuf, off, numRead);
    return numRead;
}
Replace the literal "Test\n" with your decompressed line and you have a starting point. Of course, you still have to keep track of how much of your source you have already consumed.
As for BufferedReader calling read twice: you don't need to care how often it calls it. Just fetch the data from your original source, write it into cbuf, and return the number of chars you wrote. If there is nothing left to read, return -1 to signal the end of the stream (at which point BufferedReader will stop calling read).
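Putting those pieces together, here is a minimal sketch of such a Reader. It assumes the decompressed data is already available as a String; the class and field names are illustrative, not from your code:

```java
import java.io.IOException;
import java.io.Reader;

// Sketch: a Reader that serves pre-decompressed text, tracking
// how much of the source has already been consumed.
class StringBackedReader extends Reader {
    private final char[] source;
    private int pos = 0; // number of chars already handed out

    StringBackedReader(String decompressed) {
        this.source = decompressed.toCharArray();
    }

    @Override
    public int read(char[] cbuf, int off, int len) throws IOException {
        if (pos >= source.length) {
            return -1; // end of stream: BufferedReader stops calling read
        }
        int numRead = Math.min(len, source.length - pos);
        System.arraycopy(source, pos, cbuf, off, numRead);
        pos += numRead;
        return numRead;
    }

    @Override
    public void close() throws IOException {
        // nothing to release in this sketch
    }
}
```

Wrapped in a BufferedReader, this yields one line per readLine() call and then null, no matter how often read is invoked internally.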
As an aside, a Reader is designed for character streams, while an InputStream is for binary data (the contract is essentially the same, only with byte[] instead of char[] and without any character encoding involved). Since compressed files are binary, you may want to switch your FileReader to a FileInputStream.
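To illustrate that the byte-oriented contract mirrors the character one, here is a small sketch using a ByteArrayInputStream as a stand-in for the compressed file:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ByteReadDemo {
    public static void main(String[] args) throws IOException {
        // Stand-in for a FileInputStream over a compressed file.
        InputStream in = new ByteArrayInputStream(new byte[] {10, 20, 30});
        byte[] buf = new byte[8];
        // Same contract as Reader.read: fills buf starting at the given
        // offset, returns the number of bytes written, or -1 at end of stream.
        int n = in.read(buf, 0, buf.length);
        System.out.println(n); // prints 3
    }
}
```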
I could imagine strange errors if, for some reason, the encoding you use when writing differs from the one you use when reading back. Or, less dramatically, you may use more space than you expect, since a single 16-bit code unit in UTF-16 can require three 8-bit code units in UTF-8.
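The euro sign is a concrete case: one code unit in UTF-16, three in UTF-8. A quick check (class name is illustrative):

```java
import java.nio.charset.StandardCharsets;

public class CodeUnitDemo {
    public static void main(String[] args) {
        String euro = "\u20AC"; // the euro sign
        int utf16Units = euro.length();                               // UTF-16 code units
        int utf8Units = euro.getBytes(StandardCharsets.UTF_8).length; // UTF-8 code units
        System.out.println(utf16Units + " UTF-16 unit(s) -> " + utf8Units + " UTF-8 byte(s)");
    }
}
```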