Memory-Mapped Files: IOException from CreateViewAccessor for Large Data

I work with large, growing files using the managed wrappers for memory-mapped files: MemoryMappedFile and MemoryMappedViewAccessor.

I create empty files using this code:

    long length = 1024L * 1024L * 1L; // 1 MB

    // create a blank file of the desired size (nice and quick!)
    FileStream fs = new FileStream(filename, FileMode.CreateNew);
    fs.Seek(length, SeekOrigin.Begin);
    fs.WriteByte(0);
    fs.Close();

    // open the MMF and a view accessor over the whole file
    this._mmf = MemoryMappedFile.CreateFromFile(filename, FileMode.Open);
    this._view = this._mmf.CreateViewAccessor(0, 0, MemoryMappedFileAccess.ReadWrite);

This works great up to 1 GB. When I try 2 GB, I get an IOException:

    Not enough storage is available to process this command.
       at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
       at System.IO.MemoryMappedFiles.MemoryMappedView.CreateView(SafeMemoryMappedFileHandle memMappedFileHandle, MemoryMappedFileAccess access, Int64 offset, Int64 size)
       at System.IO.MemoryMappedFiles.MemoryMappedFile.CreateViewAccessor(Int64 offset, Int64 size, MemoryMappedFileAccess access)
       at (my code here)

I am running 64-bit Windows 7, the application runs as 64-bit, and I have 6 GB of RAM. As far as I know, none of this should matter. Yes, these are large amounts of data, but as I understand it, MemoryMappedFile and its related classes exist precisely to handle data of this size.

According to the documentation ( http://msdn.microsoft.com/en-us/library/dd267577.aspx ), IOException literally means that an I/O error has occurred. However, the file on disk is just fine.

As mentioned, the application grows the file as needed, and in practice the failure occurs at a somewhat random point between ~400 MB and ~2 GB. Starting the file at 1 GB, it always succeeds; starting at the default 1 MB, it fails much earlier, apparently because of the repeated releasing and reallocating of resources as the file grows. (I always Flush and Close the view, the MMF, and the streams.)
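For context, the grow step looks roughly like this; this is a simplified sketch, and GrowTo, _filename and the other field names are illustrative rather than my exact code:

    // Simplified sketch of growing and remapping the file.
    // GrowTo and _filename are illustrative names, not the exact code.
    private void GrowTo(long newLength)
    {
        // release the old view and mapping before resizing the file
        this._view.Flush();
        this._view.Dispose();
        this._mmf.Dispose();

        // extend the underlying file on disk
        using (FileStream fs = new FileStream(this._filename, FileMode.Open))
        {
            fs.SetLength(newLength);
        }

        // remap the whole, now larger, file
        this._mmf = MemoryMappedFile.CreateFromFile(this._filename, FileMode.Open);
        this._view = this._mmf.CreateViewAccessor(0, 0, MemoryMappedFileAccess.ReadWrite);
    }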

I need random access across the entire range of data. I am hoping I do not have to maintain a set of MemoryMappedViewAccessor objects dynamically; my understanding of the virtual memory system behind this is that pages of a file of any size will be paged in and out as needed by Windows, using system memory as a cache.
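To illustrate the access pattern, here is a minimal sketch of what I mean by random access through the single whole-file view; the offsets and values are arbitrary examples:

    // Reads and writes at arbitrary offsets through one whole-file accessor.
    // The offsets and values here are arbitrary examples.
    this._view.Write(0L, 42L);                  // long at the start of the file
    this._view.Write(length - 8, 99L);          // long near the end of the file
    long first = this._view.ReadInt64(0L);
    long last = this._view.ReadInt64(length - 8);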

Put as a question: why is this happening? How can I stop it? Is there a better way to get full random access to files of arbitrary size, say up to 100 GB?

1 answer

The app was in fact targeting x86, not x64, in the particular project build configuration I had selected.

My guess is that the process's address space was exhausted because it was running in 32-bit mode.
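A quick way to confirm the bitness of the running process (these are standard .NET properties, not part of my original code):

    // Sanity check: is the process really running as 64-bit?
    // Environment.Is64BitProcess is available from .NET 4.0 onwards.
    Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);
    Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
    Console.WriteLine("IntPtr.Size:    " + IntPtr.Size); // 4 = 32-bit, 8 = 64-bit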

Solution: change the project's Platform target to x64 (in Visual Studio: Project Properties > Build > Platform target) and run it on a 64-bit OS.
