Check RandomAccessFile: http://docs.oracle.com/javase/7/docs/api/java/io/RandomAccessFile.html
You need to track the position you are reading from and the position you are writing to. Initially, both are at the beginning of the file. You read N bytes (one line), shorten it, seek back N bytes and write M bytes (the shortened line). Then you seek forward (N - M) bytes to get back to where the next line begins. You repeat this until the end of the file, and finally truncate the leftover tail with setLength(long).
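The two-pointer scheme above can be sketched with RandomAccessFile directly. This is a minimal, hedged sketch, not production code: it assumes a single-byte encoding (RandomAccessFile.readLine decodes bytes as ISO-8859-1) and uses trim() as a stand-in for whatever shortening you actually need. The class name is hypothetical.

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class ShortenLinesInPlace {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(args[0], "rw")) {
            long readPos = 0;   // where the next unread line starts
            long writePos = 0;  // where the next shortened line goes
            String line;
            while (true) {
                raf.seek(readPos);
                line = raf.readLine();            // raw bytes, terminator stripped
                if (line == null) break;
                readPos = raf.getFilePointer();   // remember where reading stopped
                String shortened = line.trim() + "\n"; // example transformation
                byte[] out = shortened.getBytes("ISO-8859-1");
                raf.seek(writePos);
                raf.write(out);
                writePos += out.length;
            }
            raf.setLength(writePos);              // truncate the excess
        }
    }
}
```

Because each line only ever gets shorter, writePos never overtakes readPos, so the writes cannot clobber data that has not been read yet.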
You can also do this in batches (e.g. read 4 KB, process, write, repeat) to make it more efficient.
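To illustrate the batched variant without the complication of lines that straddle chunk boundaries, here is a hedged sketch that works purely at the byte level: it reads 4 KB at a time, drops every '\r' byte, and writes the compacted batch back at the lagging write position. The transformation and the class name are just illustrative; any byte-level shortening follows the same two-pointer pattern.

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class StripCarriageReturns {
    public static void stripCr(String path) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
            byte[] buf = new byte[4096];
            long readPos = 0, writePos = 0;
            while (true) {
                raf.seek(readPos);
                int n = raf.read(buf);
                if (n < 0) break;          // end of file
                readPos += n;
                int kept = 0;
                for (int i = 0; i < n; i++) {
                    if (buf[i] != '\r') buf[kept++] = buf[i]; // compact in place
                }
                raf.seek(writePos);
                raf.write(buf, 0, kept);
                writePos += kept;
            }
            raf.setLength(writePos);       // cut off the leftover tail
        }
    }
}
```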
The process is the same in every language; some languages just hide the back-and-forth seeking behind an API.
Of course, you must be absolutely sure your program works flawlessly, because there is no way to undo this process.
In addition, RandomAccessFile is a bit limited in that it only deals in bytes, not characters, so you have to do the conversion between "decoded strings" and "encoded bytes" yourself along the way. If your file is UTF-8, a given character in a string can take up one or more bytes in the file, so you cannot just seek(string.length()); you have to use seek(string.getBytes(encoding).length) and also account for line-break conventions (Windows uses two characters to end a line, Unix only one). But if you have ASCII, ISO-Latin-1, or a similarly trivial character encoding, and you know which line-break characters the file uses, the problem should be pretty simple.
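The character-count vs. byte-count mismatch is easy to demonstrate. In the snippet below (a standalone illustration), 'é' is one char in the string but two bytes in UTF-8, so seeking by string.length() would land you one byte short:

```java
import java.nio.charset.StandardCharsets;

public class ByteLengthDemo {
    public static void main(String[] args) {
        String s = "héllo";
        System.out.println(s.length());                                   // 5 characters
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length);    // 6 bytes: 'é' takes two
        System.out.println(s.getBytes(StandardCharsets.ISO_8859_1).length); // 5 bytes: one byte per char
    }
}
```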
Edit: rather than trying to cover every possible corner case myself, I think it would be better to read the file with a BufferedReader constructed with the correct character encoding, and at the same time open a RandomAccessFile for writing, assuming your OS supports opening the same file twice. That way you get full Unicode support from the BufferedReader, and you don't need to track read and write positions yourself. You do need to write through the RandomAccessFile, because opening the file with a Writer would likely just truncate it (though I haven't tried).
Something like this. It works on trivial examples, but it has no error checking and I make absolutely no guarantees. Test it on a small copy of your file first.
public static void main(String[] args) throws IOException {
    File f = new File(args[0]);
    BufferedReader reader = new BufferedReader(new InputStreamReader(
            new FileInputStream(f), "UTF-8")); // Use the correct encoding here.
    RandomAccessFile writer = new RandomAccessFile(f, "rw");
    String line = null;
    long totalWritten = 0;
    while ((line = reader.readLine()) != null) {
        line = line.trim() + "\n"; // Remove your prefix here.
        byte[] b = line.getBytes("UTF-8");
        writer.write(b);
        totalWritten += b.length;
    }
    reader.close();
    writer.setLength(totalWritten); // Truncate the leftover tail.
    writer.close();
}