When changing the file length do I need to reassign all related MappedByteBuffers?

I have a small, simple storage system accessed through memory-mapped files. Since I need to address more than 2 GB of space, and a single MappedByteBuffer is limited to 2 GB, I keep a list of MappedByteBuffers, each mapping a fixed-size chunk of the file (less than 2 GB, for various reasons). The logic is then straightforward: the first buffer maps, say, the first 1 GB; when I need more, I map a second MappedByteBuffer (the file grows automatically); when I need still more, a third buffer is mapped, and so on. This just worked.
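A minimal sketch of that scheme (class name, method names, and the 1 MB chunk size are illustrative choices for brevity, not my actual code):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

class ChunkedMappedFile {
    static final long CHUNK_SIZE = 1L << 20; // 1 MB here; the real system uses chunks of up to ~1 GB

    final FileChannel channel;
    final List<MappedByteBuffer> buffers = new ArrayList<>();

    ChunkedMappedFile(Path path) throws IOException {
        channel = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE);
    }

    // Translate a global offset into a chunk, mapping new chunks on demand.
    // Mapping past the current end of file grows the file automatically.
    MappedByteBuffer chunkFor(long pos) throws IOException {
        int chunk = (int) (pos / CHUNK_SIZE);
        while (buffers.size() <= chunk) {
            long start = (long) buffers.size() * CHUNK_SIZE;
            buffers.add(channel.map(FileChannel.MapMode.READ_WRITE, start, CHUNK_SIZE));
        }
        return buffers.get(chunk);
    }

    byte get(long pos) throws IOException {
        return chunkFor(pos).get((int) (pos % CHUNK_SIZE));
    }

    void put(long pos, byte b) throws IOException {
        chunkFor(pos).put((int) (pos % CHUNK_SIZE), b);
    }
}
```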

But then I read in the Java NIO book that problems can occur when changing the file length:

A MappedByteBuffer directly reflects the disk file with which it is associated. If the file is structurally modified while the mapping is in effect, strange behavior can result (the exact behavior depends on the OS and the file system). A MappedByteBuffer has a fixed size, but the file it is mapped to is elastic. In particular, if the file's size changes while the mapping is in effect, some or all of the buffer may become inaccessible, undefined data could be returned, or exceptions could be thrown. Be careful about how files are manipulated by other threads or external processes while they are memory-mapped.

I take this to mean that problems can occur because the OS may relocate the file's data on disk as it grows, so existing MappedByteBuffers could end up pointing at invalid memory (or am I misinterpreting this?).

So, instead of simply appending a new MappedByteBuffer to the list, I now do the following:

  • increase the file length
  • clear the buffer list (discarding the old buffers and hoping the garbage collector unmaps them; hmm, maybe I should release them all explicitly via Cleaner.clean()?)
  • re-map (fill the list with fresh buffers)
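The three steps above can be sketched as follows (class and method names are mine; note that clearing the list and calling System.gc() only encourages the JVM to unmap the old buffers, it does not guarantee it):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;

class RemappingStore {
    static List<MappedByteBuffer> growAndRemap(RandomAccessFile file,
                                               List<MappedByteBuffer> old,
                                               long newLength,
                                               long chunkSize) throws IOException {
        // 1. increase the file length
        file.setLength(newLength);

        // 2. drop the old buffers and hope the GC unmaps them
        //    (there is no supported explicit unmap in this API)
        old.clear();
        System.gc();

        // 3. re-map the whole file with fresh buffers
        List<MappedByteBuffer> fresh = new ArrayList<>();
        FileChannel ch = file.getChannel();
        for (long start = 0; start < newLength; start += chunkSize) {
            long len = Math.min(chunkSize, newLength - start);
            fresh.add(ch.map(FileChannel.MapMode.READ_WRITE, start, len));
        }
        return fresh;
    }
}
```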

BUT this procedure has the disadvantage that the mapping sometimes fails with

IOException: Operation not permitted
    at sun.nio.ch.FileChannelImpl.map0(Native Method)
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:734)

Why? Is it because clearing the buffer list does not actually unmap and flush the buffers, and overlapping mappings of the same region are not permitted? Should I just stick to my old, working method and ignore the book's warning?

Update

  • splitting the mapping on a 32-bit OS has the advantage that free address space is easier to find and mapping is less likely to fail ( ref )
  • not splitting the mapping into smaller pieces is an advantage, since setting up an mmap can be expensive ( ref )
  • Neither approach is clean. My second approach should work, but it needs an unmap (I will try to force the release with the usual Cleaner hack). My first approach should work on some systems (for example, IBM's JVM), where I can safely increase the file size, but in general it will not, although I have not yet been able to find the exact reason ...
  • The cleanest way, which I am wary of, is to use multiple files (one MappedByteBuffer per file)
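That last variant could look roughly like this (class name, file naming, and chunk size are my own illustrative choices): each chunk lives in its own file and is mapped exactly once, so growing the store means adding a file, never resizing an already-mapped one.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

class MultiFileStore {
    static final long CHUNK_SIZE = 1L << 20; // illustrative chunk size (1 MB)
    final Path dir;
    final List<MappedByteBuffer> buffers = new ArrayList<>();

    MultiFileStore(Path dir) { this.dir = dir; }

    MappedByteBuffer chunk(int i) throws IOException {
        while (buffers.size() <= i) {
            Path p = dir.resolve("chunk-" + buffers.size() + ".bin");
            try (FileChannel ch = FileChannel.open(p,
                    StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
                // each file is mapped exactly once and never resized afterwards;
                // the mapping stays valid even after the channel is closed
                buffers.add(ch.map(FileChannel.MapMode.READ_WRITE, 0, CHUNK_SIZE));
            }
        }
        return buffers.get(i);
    }
}
```

The cost, as noted above, is one file descriptor plus one file per chunk, which can hit OS limits for very large stores.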
1 answer

The root cause was my own mistake: I accidentally re-sized (and therefore re-mapped) the underlying file far too often, because I increased its capacity only in tiny steps.

But even in that situation I was eventually able to work around the IOException ("Operation not permitted") by retrying the failed map operation (plus a System.gc() call and a 5 ms sleep, which should give the JVM a chance to unmap the old buffers). That left only the huge number of re-mappings, which pointed me to the real root cause.
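The retry workaround can be sketched like this (the retry count and names are my own choices, and this is a best-effort hack, not a guaranteed fix):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

class MapRetry {
    // If map() fails (e.g. "Operation not permitted"), trigger a GC and wait
    // briefly so the JVM can release stale mappings, then try again.
    static MappedByteBuffer mapWithRetry(FileChannel ch, long pos, long size)
            throws IOException, InterruptedException {
        IOException last = null;
        for (int attempt = 0; attempt < 5; attempt++) {
            try {
                return ch.map(FileChannel.MapMode.READ_WRITE, pos, size);
            } catch (IOException e) {
                last = e;
                System.gc();     // encourage collection of unreachable old buffers
                Thread.sleep(5); // give the JVM a moment to unmap them
            }
        }
        throw last; // all attempts failed
    }
}
```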

At the very least I learned a bit more about mmap: it is very OS- and file-system-dependent. Thanks to auselen! If you want a clean solution, you should use one MappedByteBuffer per file, as he originally suggested. But that can also be problematic if you need a lot of space and the OS limit on open file descriptors is too low.

And last but not least, I would strongly recommend against my first solution, since I could not find any guarantee (except on IBM's JVM ;)) that a mapped buffer stays valid after the file size is increased.
