Writing at the end of the file

I am working on a system that requires high-performance file I/O (from C#). Basically, I fill large files (~100 MB) sequentially from the beginning of the file to the end. Every ~5 seconds I append ~5 MB to the file (sequentially from the start), and after each chunk I flush the stream. Every few minutes I need to update a structure that I write at the end of the file (some metadata).

When flushing each chunk I have no performance issues. However, when updating the metadata at the end of the file, I get very poor performance. I assume that when the file is created (which also happens very quickly), it does not actually allocate all 100 MB on disk, and when I flush the metadata, the file system has to allocate all the space up to the end of the file.

Any idea how I can overcome this problem?

Thank you so much!

From the comment:

In general, the code is as follows: first the file is opened:

m_Stream = new FileStream(filename, FileMode.CreateNew, FileAccess.Write, FileShare.Write, 8192, false);
m_Stream.SetLength(100 * 1024 * 1024);

Every few seconds I write ~5 MB:

m_Stream.Seek(m_LastPosition, SeekOrigin.Begin);
m_Stream.Write(buffer, 0, buffer.Length);
m_Stream.Flush();
m_LastPosition += buffer.Length; // HH: guessed the +=
m_Stream.Seek(-m_MetaDataSize, SeekOrigin.End); // negative offset: metadata sits m_MetaDataSize bytes before the end
m_Stream.Write(metadata, 0, metadata.Length);
m_Stream.Flush(); // Takes too long the first time (~1 sec).
Tags: performance, c#, file-io

4 answers

As suggested above, why not (assuming you must have the metadata at the end of the file) write the end of the file first?

This would do two things (assuming a non-sparse file): 1. Allocate the full space for the entire file up front. 2. Make all subsequent write operations a little faster, since the space is already allocated and waiting.

Could you also do this asynchronously? At the very least, the application could get on with other things while it waits.
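A minimal sketch of this idea, assuming the stream setup from the question (the class and method names here are illustrative, not from the original post): write the metadata block at the very end once, right after creating the file, so the allocation cost is paid up front rather than on the first metadata update seconds later.

```csharp
using System;
using System.IO;

static class Preallocator
{
    // Illustrative sketch: force the file system to allocate (zero-fill)
    // the whole file by writing the metadata block at the very end,
    // immediately after creating the file.
    public static FileStream CreatePreallocated(string filename, long totalSize, byte[] metadata)
    {
        var stream = new FileStream(filename, FileMode.CreateNew, FileAccess.Write,
                                    FileShare.Read, 8192, false);
        stream.SetLength(totalSize);

        // Writing at the end first pays the zero-fill cost once, up front,
        // instead of during the first metadata update later on.
        stream.Seek(totalSize - metadata.Length, SeekOrigin.Begin);
        stream.Write(metadata, 0, metadata.Length);
        stream.Flush();

        // Return to the start for the sequential ~5 MB writes.
        stream.Seek(0, SeekOrigin.Begin);
        return stream;
    }
}
```

The sequential writes then proceed exactly as before; only the first metadata update changes, because the space behind it already exists.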


Have you tried the File.AppendAllText method?


Your question is not entirely clear, but I assume you create the file, write 5 MB, then seek to the 100 MB mark and write the metadata, then go back and write another 5 MB, and so on.

If so, this is a file system issue. When you extend a file, NTFS has to fill the gap with something. As you say, the space is not actually allocated until you write to it. The first time you write the metadata, only 5 MB of the file has been written, so NTFS has to allocate and zero-fill 95 MB before it can write the metadata. Worse, I believe it does this synchronously, so you don't even gain anything from overlapped I/O.
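One way to work with this behavior, sketched below (method name is illustrative): pay the zero-fill cost once at creation time by writing a single byte at the last offset. `SetLength` alone extends the file's logical length without writing the data, so touching the final byte forces NTFS to allocate and zero everything before it.

```csharp
using System.IO;

static class FilePrealloc
{
    // Sketch: pay the NTFS zero-fill cost at creation time instead of on
    // the first metadata write. Writing one byte at the last offset forces
    // the file system to zero-fill everything before that position.
    public static void TouchEnd(string filename, long totalSize)
    {
        using (var s = new FileStream(filename, FileMode.Create, FileAccess.Write))
        {
            s.SetLength(totalSize);
            s.Seek(totalSize - 1, SeekOrigin.Begin);
            s.WriteByte(0);
            s.Flush();
        }
    }
}
```

On Windows there is also the native `SetFileValidData` API, which extends the valid data length without zero-filling at all, but it requires the `SE_MANAGE_VOLUME_NAME` privilege and exposes whatever stale data was on disk, so it is rarely appropriate.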

