Performance of the System.IO.File.ReadAllxxx / WriteAllxxx Methods

Is there a comparison of the performance of the System.IO.File.ReadAllxxx / WriteAllxxx methods with the StreamReader / StreamWriter classes available on the Internet? What do you think is the best way (in terms of performance) to read / write text files in .NET 3.0?

When I checked the page

+2
source
7 answers

You probably don't want to use File.ReadAllxxx / WriteAllxxx if you intend to support loading / saving really large files.

In other words, for something like an editor that will be used to edit gigabyte-sized files, you need a design built around StreamReader / StreamWriter and seeking, so that you only load part of the file at a time.
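A minimal sketch of that idea (the file name, method name, and window sizes below are illustrative, not from the answer): seek to an offset in a FileStream and read only a bounded window of characters, instead of loading the whole file with ReadAllText.

```csharp
using System;
using System.IO;
using System.Text;

class ChunkedReaderDemo
{
    // Read only a window of a potentially huge file: seek to a byte offset,
    // then stream at most maxChars characters from that point.
    public static string ReadWindow(string path, long byteOffset, int maxChars)
    {
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            stream.Seek(byteOffset, SeekOrigin.Begin);
            using (var reader = new StreamReader(stream, Encoding.UTF8))
            {
                var buffer = new char[maxChars];
                int read = reader.Read(buffer, 0, maxChars);
                return new string(buffer, 0, read);
            }
        }
    }

    static void Main()
    {
        File.WriteAllText("demo.txt", "0123456789");
        // Load a 4-character window starting at byte 2, not the whole file.
        Console.WriteLine(ReadWindow("demo.txt", 2, 4)); // prints "2345"
    }
}
```

Note that seeking to an arbitrary byte offset can land in the middle of a multi-byte character for encodings like UTF-8; a real editor would have to account for that.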

For anything without those (rare) requirements, I would say take the simple route and use File.ReadAllxxx / WriteAllxxx. They use the same StreamReader / StreamWriter pattern internally that you would otherwise code by hand, as aku shows.

+5
source

File.ReadAllText and similar methods use StreamReader / StreamWriter internally, so performance should be comparable to doing it yourself.

I would say, where possible, use the File.XXX methods. They make your code a) easier to read and b) less likely to contain errors than code you write yourself.
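To illustrate the readability point (file and method names below are illustrative), the two forms here do the same work; the one-liner just hides the StreamReader plumbing:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class ReadLinesComparison
{
    // One call; the framework handles opening, reading and disposing.
    public static string[] WithHelper(string path)
    {
        return File.ReadAllLines(path);
    }

    // The same work spelled out by hand with StreamReader.
    public static string[] ByHand(string path)
    {
        var lines = new List<string>();
        using (var reader = new StreamReader(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
                lines.Add(line);
        }
        return lines.ToArray();
    }

    static void Main()
    {
        File.WriteAllLines("sample.txt", new[] { "one", "two" });
        Console.WriteLine(WithHelper("sample.txt").Length); // prints "2"
        Console.WriteLine(ByHand("sample.txt").Length);     // prints "2"
    }
}
```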

+4
source

Unless you are doing something like applying a multi-line-matching regular expression to a text file, you usually want to avoid ReadAll / WriteAll. Doing the work in smaller, manageable chunks will almost always perform better.

For example, reading a table from a database and sending it to a client web browser should be done in small sets, which plays to the small size of network messages and reduces memory use on the server. There is no reason to buffer 10,000 records in memory on a web server and write them all out at once. The same goes for file systems. If you are worried about the write performance of many small writes (for example, what the underlying file system does to allocate space, and the overhead involved), you may find these articles helpful:

Using the Windows File Cache

File Reading Tests

Explanation: if you do ReadAll and then String.Split('\r') to get an array of all the lines in the file, then use a for loop to process each line, that will usually perform worse than reading the file line by line and running your processing on each line. This is not a hard rule: if some processing step takes a large chunk of time, it is often better to release system resources (the file handle) sooner rather than later. When writing files, however, it is almost always better to write out the result of any conversion process (for example, calling ToString() on each element of a large list) item by item than to buffer it all in memory.
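A sketch of the contrast described above (counting non-empty lines is just a stand-in for "your processing", and the file name is made up): both versions give the same answer, but the streaming one keeps memory use flat on huge files.

```csharp
using System;
using System.IO;

class LineByLineDemo
{
    // Buffers the whole file, then splits: simple, but the entire
    // contents plus the split array live in memory at once.
    public static int CountViaReadAll(string path)
    {
        string[] lines = File.ReadAllText(path).Split('\n');
        int count = 0;
        foreach (string line in lines)
            if (line.Trim().Length > 0) count++;
        return count;
    }

    // Streams one line at a time: only the current line is in memory.
    public static int CountViaStream(string path)
    {
        int count = 0;
        using (var reader = new StreamReader(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
                if (line.Trim().Length > 0) count++;
        }
        return count;
    }

    static void Main()
    {
        File.WriteAllText("log.txt", "a\n\nb\nc\n");
        Console.WriteLine(CountViaReadAll("log.txt")); // prints "3"
        Console.WriteLine(CountViaStream("log.txt"));  // prints "3"
    }
}
```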

+1
source

This MSR (Microsoft Research) paper is a good start; they also document a number of tools, such as IOSpeed, FragDisk, etc., that you can use to test in your own environment.

There is also an updated report / presentation you can read on how to maximize sequential I/O. Among the interesting things, they debunk the myth that "moving the HD head is the most time-consuming operation", and they fully document their test applications and the associated configurations, down to the motherboard and RAID controller, with enough solid information for you to reproduce their work. Some of the highlights cover how Opteron / XEON machines compare, which they then measured against a monster NEC Itanium box (32 or 64 processors, or thereabouts). From the second link here you can find many more resources on testing and evaluating high-performance I/O.

Some of the other MSR articles in the same research area include guidance on where to spend your money (e.g. RAM, CPU, disk spindles, etc.) to suit your usage patterns... all very neat.

Some of it is outdated now, but the older, lower-level APIs are usually still faster ;)

I am currently pushing hundreds of thousands of TPS through a specially designed application server using a combination of C#, C++/CLI, native code, and bitmap caching (rtl*bitmap).

Use caution;

+1
source

@Fredrick Calset is right. The File.ReadXXX methods are just convenient wrappers around the StreamReader class.

For example, an implementation of File.ReadAllText looks like this:

public static string ReadAllText(string path, Encoding encoding)
{
    using (StreamReader reader = new StreamReader(path, encoding))
    {
        return reader.ReadToEnd();
    }
}
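A quick sanity check (the file name is made up) that a hand-rolled helper of that shape and the built-in File.ReadAllText return the same text:

```csharp
using System;
using System.IO;
using System.Text;

class ReadAllTextCheck
{
    // A hand-rolled helper with the same shape as the framework's method.
    public static string ReadAllText(string path, Encoding encoding)
    {
        using (StreamReader reader = new StreamReader(path, encoding))
        {
            return reader.ReadToEnd();
        }
    }

    static void Main()
    {
        File.WriteAllText("note.txt", "hello");
        string manual = ReadAllText("note.txt", Encoding.UTF8);
        string helper = File.ReadAllText("note.txt");
        Console.WriteLine(manual == helper); // prints "True"
    }
}
```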
0
source

Others have explained the performance, so I will not add to that. I will add, however, that the MSDN code sample was likely written before .NET 2.0, when the helper methods were not yet available.

0
source

This link has benchmarks for reading 50K+ lines and indicates that StreamReader is about 40% faster.

http://dotnetperls.com/Content/File-Handling.aspx

0
source
