What is the best way to delete a section of a binary file in Java 7?

I.e. I have a 10 MB file and I want to delete the bytes from 1 MB to 2 MB, so that the resulting file is 9 MB and the data that previously started at the 2 MB offset now starts at 1 MB.

I'm using Java 7, so I can use NIO. The files are usually around 10 MB and are often accessed over the network, so I'm looking for an elegant solution that performs well.

I know about ByteBuffer.allocateDirect() and getting a FileChannel via RandomAccessFile.getChannel(), but I'm trying to work out whether there is a way to do what I want that doesn't involve reading 8 MB from the file channel into a temporary buffer just to write it back to the file elsewhere, or whether that is actually fine as long as you use allocateDirect().
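
For reference, the shift-in-place variant being asked about would look roughly like the sketch below. This is an illustrative, untested addition, not code from the thread: the file name, section offsets, and 64 KB chunk size are placeholders. It slides the tail of the file down over the unwanted section through a direct buffer, then truncates:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class DeleteSection {
        public static void main(String[] args) throws IOException {
            final long MB = 1024 * 1024;
            long start = 1 * MB; // first byte of the section to delete (placeholder)
            long len   = 1 * MB; // length of the section to delete (placeholder)
            try (RandomAccessFile raf = new RandomAccessFile("file.dat", "rw");
                 FileChannel ch = raf.getChannel()) {
                long size = ch.size(); // assumes size >= start + len
                ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024);
                long readPos  = start + len; // the tail starts here ...
                long writePos = start;       // ... and slides down to here
                while (readPos < size) {
                    buf.clear();
                    int n = ch.read(buf, readPos);
                    if (n <= 0) break;
                    buf.flip();
                    while (buf.hasRemaining()) {
                        writePos += ch.write(buf, writePos);
                    }
                    readPos += n;
                }
                ch.truncate(size - len); // drop the now-duplicated tail
            }
        }
    }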

+6
3 answers

In my opinion, using a temporary file on disk is a good approach. If you are in a situation where you can create a new temporary file on disk, then NIO has some options that may help you. I'm only going from the API docs and the NIO tutorial here, but it seems that FileChannel.transferFrom or FileChannel.transferTo may be the tools you need.

I have not tested the following code, but it should point you in the right direction.

    import static java.nio.file.StandardOpenOption.READ;
    import static java.nio.file.StandardOpenOption.WRITE;
    import static java.nio.file.StandardOpenOption.TRUNCATE_EXISTING;
    import static java.nio.file.StandardCopyOption.REPLACE_EXISTING;
    import static java.nio.file.StandardCopyOption.ATOMIC_MOVE;

    import java.io.IOError;
    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.file.Files;
    import java.nio.file.NoSuchFileException;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public static void main(String[] args) {
        final int megabyte = 1024 * 1024;

        // prepare the paths
        Path inPath = Paths.get("D:/file.dat");
        Path outPath;
        try {
            outPath = Files.createTempFile(null, ".swp");
        } catch (IOException ex) {
            throw new IOError(ex);
        }

        // process the file
        try (FileChannel readChannel  = FileChannel.open(inPath, READ);
             FileChannel writeChannel = FileChannel.open(outPath, WRITE, TRUNCATE_EXISTING)) {

            long readFileSize = readChannel.size();
            long expectedWriteSize = readFileSize;
            if (readFileSize > 2 * megabyte)
                expectedWriteSize = readFileSize - megabyte;
            else if (readFileSize > megabyte)
                expectedWriteSize = megabyte;

            // copy the first megabyte (or the entire file if it is shorter than that)
            long bytesTrans = readChannel.transferTo(0, megabyte, writeChannel);

            // copy everything after the second megabyte
            if (readFileSize > 2 * megabyte)
                bytesTrans += readChannel.transferTo(2 * megabyte,
                        readFileSize - 2 * megabyte, writeChannel);

            if (bytesTrans != expectedWriteSize)
                System.out.println("WARNING: transferred " + bytesTrans
                        + " bytes instead of " + expectedWriteSize);
        } catch (NoSuchFileException ex) {
            throw new RuntimeException("File not found!", ex);
        } catch (IOException ex) {
            throw new RuntimeException("Caught IOException", ex);
        }

        // replace the original file with the temporary file
        try {
            // ATOMIC_MOVE may cause an IOException here ...
            Files.move(outPath, inPath, REPLACE_EXISTING, ATOMIC_MOVE);
        } catch (IOException e1) {
            try {
                // ... so it is probably worth retrying without that option
                Files.move(outPath, inPath, REPLACE_EXISTING);
            } catch (IOException e2) {
                throw new IOError(e2);
            }
        }
    }

NIO may be able to help even if you cannot open a new file. If you open a read/write channel to the file, or open two different channels on the same file, one part of the file can be transferred to another part of the file using the transferTo method. I don't have enough experience with this to say for certain, but the API states that a transfer method taking an explicit position argument (for example, the first argument of transferTo) can proceed concurrently with operations that write to the file, so I wouldn't rule it out. You would probably want to copy the file in megabyte-sized chunks if you try this. If it works, FileChannel.truncate can be used to chop the last megabyte off the file once you have finished writing each part of the file to a position one megabyte earlier; a rough sketch follows.
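
A minimal, untested sketch of that in-place idea (my illustration, not the answerer's code), using two channels opened on the same file and the 1 MB / 2 MB offsets from the question; the path is a placeholder:

    import static java.nio.file.StandardOpenOption.READ;
    import static java.nio.file.StandardOpenOption.WRITE;

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class InPlaceTransfer {
        public static void main(String[] args) throws IOException {
            final long MB = 1024 * 1024;
            Path path = Paths.get("D:/file.dat"); // placeholder
            try (FileChannel src = FileChannel.open(path, READ);
                 FileChannel dst = FileChannel.open(path, WRITE)) {
                long size = src.size();
                dst.position(MB);      // write position: 1 MB
                long readPos = 2 * MB; // read position: 2 MB
                while (readPos < size) {
                    long chunk = Math.min(MB, size - readPos);
                    long n = src.transferTo(readPos, chunk, dst);
                    if (n <= 0) break; // defensive: avoid spinning
                    readPos += n;
                }
                dst.truncate(size - MB); // chop off the duplicated last megabyte
            }
        }
    }

Whether transferTo is actually safe between two channels on the same file is exactly the open question raised above, so treat this as an experiment rather than a drop-in solution.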

0

Write the result to a temporary file, then replace the old file with your temporary file (which acts as a buffer on disk).

Code example:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.FileOutputStream;
    import java.io.IOError;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    public static void main(String[] args) {
        // -- prepare files
        File inFile = new File("D:/file.dat");
        File outFile;
        try {
            outFile = File.createTempFile("swap", "buffer");
        } catch (IOException ex) {
            throw new IOError(ex);
        }

        // -- process the file
        try (InputStream inStream = new FileInputStream(inFile);
             OutputStream outStream = new FileOutputStream(outFile)) {

            // drop some bytes (they will be removed)
            inStream.skip(4);

            // modify some bytes (they will be changed)
            for (int i = 0; i < 4; i++) {
                byte b = (byte) inStream.read();
                outStream.write(b >> 4);
            }

            // copy the remaining bytes in chunks (they will be kept)
            final int CHUNK_SIZE = 1024;
            byte[] chunkBuffer = new byte[CHUNK_SIZE];
            while (true) {
                int chunkSize = inStream.read(chunkBuffer, 0, CHUNK_SIZE);
                if (chunkSize < 0) {
                    break;
                }
                outStream.write(chunkBuffer, 0, chunkSize);
            }
        } catch (FileNotFoundException ex) {
            throw new RuntimeException("input file not found!", ex);
        } catch (IOException ex) {
            throw new RuntimeException("failed to trim data!", ex);
        }

        // -- commit the changes: replace inFile with outFile
        inFile.delete();
        outFile.renameTo(inFile);
    }
+3

For files this small, you can simply read the whole file into memory and chop off the part you don't want. @Binkan's approach is smarter, but the following has been tested with files of up to 15 MB.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.Arrays;

    // ...
    try {
        // read the whole file into memory
        byte[] results = Files.readAllBytes(Paths.get("xfile.apk"));
        // strip off, say, the last 1 MB
        byte[] trimmed = Arrays.copyOfRange(results, 0, results.length - 1024 * 1024);
        Files.write(Paths.get("xfile.apk"), trimmed, StandardOpenOption.TRUNCATE_EXISTING);
    } catch (IOException e) {
        e.printStackTrace();
    }
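
Note that the snippet above trims the end of the file, while the question asks for a megabyte removed from the middle. A variant of the same read-everything approach (again untested and illustrative; the offsets are hardcoded for the 1 MB to 2 MB example and the file is assumed to be larger than 2 MB):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class CutMiddle {
        public static void main(String[] args) throws IOException {
            byte[] all = Files.readAllBytes(Paths.get("xfile.apk"));
            int from = 1024 * 1024;     // start of the section to delete
            int to   = 2 * 1024 * 1024; // end of the section (exclusive)
            byte[] out = new byte[all.length - (to - from)];
            System.arraycopy(all, 0, out, 0, from);                // part before the gap
            System.arraycopy(all, to, out, from, all.length - to); // part after the gap
            Files.write(Paths.get("xfile.apk"), out, StandardOpenOption.TRUNCATE_EXISTING);
        }
    }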

Using a MappedByteBuffer might be another option, although it could come with its own performance overhead.
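
For anyone who wants to experiment with that, here is a rough, untested sketch of an in-place edit via a MappedByteBuffer (my illustration; the file name is a placeholder, and note that truncating a file that is still memory-mapped is platform-dependent and is known to fail on Windows):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MappedDelete {
        public static void main(String[] args) throws IOException {
            final int MB = 1024 * 1024;
            try (RandomAccessFile raf = new RandomAccessFile("xfile.apk", "rw");
                 FileChannel ch = raf.getChannel()) {
                int size = (int) ch.size(); // files here are ~10 MB, so int is safe
                MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
                // shift [2 MB, size) down by one megabyte, byte by byte
                for (int i = 2 * MB; i < size; i++) {
                    map.put(i - MB, map.get(i));
                }
                map.force(); // flush the mapped changes to disk
            }
            // truncate separately; this can still fail while the mapping
            // has not yet been released by the garbage collector
            try (RandomAccessFile raf = new RandomAccessFile("xfile.apk", "rw")) {
                raf.setLength(raf.length() - MB);
            }
        }
    }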

0
