Why is blocking I/O better for CPU-bound applications, and non-blocking I/O better for I/O-bound applications?

I was told that for I/O-bound applications, non-blocking I/O would be better, while for CPU-bound applications, blocking I/O is much better. I could not find the reasoning behind this statement. I tried Google, but the few articles I found only touch the topic in little detail. Can someone explain the reason in real depth?

Along with this, I also want to understand for myself what the shortcomings of non-blocking I/O are.

Having gone through another thread here, the reasoning I could gather was that I/O processing is quite heavy, and only then do we see significant performance improvements from non-blocking I/O. It also suggests that when the number of I/O operations is large (a typical web application scenario, where many incoming requests perform I/O), we also see significant improvements from non-blocking I/O.

So my questions come down to the following list:

  • In the case of CPU-intensive applications, is it better to create a thread pool (or an ExecutionContext in Scala) and split the work between its worker threads? (I assume this is clearly preferable to creating your own threads and dividing the work manually. Also, using async concepts such as Futures, even CPU-intensive results can be returned via callbacks, which avoids the problems that come with blocking in multithreaded code.) And if there is some I/O that is fast enough, should that I/O be performed in a blocking way on the same thread pool? Am I right? (See the sketch after this list.)

  • What, technically, are the actual shortcomings of, or overheads imposed by, non-blocking I/O? And why don't we see much benefit from non-blocking I/O if the I/O is fast enough, or if little I/O is required? After all, it is the OS that handles the I/O; whether the I/O volume is large or small, let the OS deal with that pain. What difference does it make here?
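To make the first bullet concrete, here is the kind of setup I have in mind: a minimal sketch with a made-up workload (heavyWork is just a stand-in), using a fixed pool sized to the core count.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, ExecutionContextExecutorService, Future}
import scala.concurrent.duration._

object CpuBoundPool extends App {
  // One worker thread per core is a common sizing for CPU-bound work.
  val cores = Runtime.getRuntime.availableProcessors()
  implicit val ec: ExecutionContextExecutorService =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(cores))

  // Stand-in for an expensive, CPU-only computation.
  def heavyWork(chunk: Seq[Int]): Long = chunk.map(i => i.toLong * i).sum

  // Split the input into chunks and compute them in parallel on the pool.
  val chunks   = (0 until 1000000).grouped(1000000 / cores).toSeq
  val partials = chunks.map(c => Future(heavyWork(c)))

  // The combined result arrives via callback-style composition,
  // with no manual thread management.
  val total = Future.sequence(partials).map(_.sum)
  println(Await.result(total, 1.minute))
  ec.shutdown() // let the JVM exit once the work is done
}
```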

nonblocking cpu-speed blocking
asked Jan 19 '16 at 13:11
1 answer

From a programmer's point of view, blocking I/O is easier to use than non-blocking I/O. You simply call the read/write function, and when it returns, you are done. With non-blocking I/O you need to check whether you can read/write, then read/write, and then check the return values; if not everything was read or written, you need a mechanism to retry the operation later, when it can be completed.
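You can see that bookkeeping directly in the APIs. Here is a minimal sketch in Scala (the question mentions Scala) using java.nio; the SocketChannel is assumed to be connected already:

```scala
import java.nio.ByteBuffer
import java.nio.channels.SocketChannel

object NonBlockingRead {
  // With blocking I/O, channel.read(buf) would simply park the calling
  // thread until at least one byte arrived (or EOF) and then return.
  // In non-blocking mode, that bookkeeping falls on the caller:
  def readSome(channel: SocketChannel): Unit = {
    channel.configureBlocking(false)
    val buf = ByteBuffer.allocate(4096)
    channel.read(buf) match {
      case -1 => println("end of stream")
      case 0  => println("nothing available yet; retry later, e.g. via a Selector")
      case n  => println(s"read $n bytes")
    }
  }
}
```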

In terms of performance: non-blocking I/O on one thread is not faster than blocking I/O on one thread. The speed of an I/O operation is determined by the device (for example, a hard disk) that is being read from or written to; it is not determined by whether someone is waiting (blocking) or not waiting (non-blocking) for it. Also, if you call a blocking I/O function, the OS can handle the blocking and waking up efficiently. If you need to implement the waiting in your application instead, you can do it almost as well as the OS, but you can also do it worse.

So why do programmers make their lives harder and implement non-blocking I/O? Because, and this is the key point, their program has more to do than just that one I/O operation. With blocking I/O, you have to wait until the I/O is complete. With non-blocking I/O, you can perform some computation until the I/O finishes. And of course, during non-blocking I/O you can also start other I/O operations (blocking or non-blocking).
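A minimal sketch of that overlap, using NIO.2's AsynchronousFileChannel (the file name is a placeholder):

```scala
import java.nio.ByteBuffer
import java.nio.channels.AsynchronousFileChannel
import java.nio.file.{Paths, StandardOpenOption}

object OverlapDemo extends App {
  val ch  = AsynchronousFileChannel.open(Paths.get("input.dat"), StandardOpenOption.READ)
  val buf = ByteBuffer.allocate(1 << 20)

  // The read call returns immediately; the OS fills the buffer in the background.
  val pending = ch.read(buf, 0L)

  // ... do useful CPU work here while the read is in flight ...

  val bytesRead = pending.get() // only now do we wait for the I/O to finish
  println(s"read $bytesRead bytes")
  ch.close()
}
```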

An alternative to non-blocking I/O is to add more threads that do blocking I/O, but as the SO post you linked points out, threads come at a cost. That cost is higher than the cost of OS-supported non-blocking I/O.

If you have an application with massive I/O but only low CPU usage, such as a web server with many clients, then use a few threads with non-blocking I/O. With blocking I/O you would need a lot of threads -> high cost; so use only a few threads -> which requires non-blocking I/O.
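A minimal sketch of that pattern: one thread multiplexing many connections with a java.nio Selector (here a toy echo server; port and buffer size are arbitrary):

```scala
import java.net.InetSocketAddress
import java.nio.ByteBuffer
import java.nio.channels.{SelectionKey, Selector, ServerSocketChannel, SocketChannel}

object TinyEchoServer extends App {
  val selector = Selector.open()
  val server   = ServerSocketChannel.open()
  server.bind(new InetSocketAddress(8080))
  server.configureBlocking(false)
  server.register(selector, SelectionKey.OP_ACCEPT)

  while (true) {
    selector.select() // blocks until at least one channel is ready
    val keys = selector.selectedKeys().iterator()
    while (keys.hasNext) {
      val key = keys.next(); keys.remove()
      if (key.isAcceptable) {
        // New client: register it for non-blocking reads.
        val client = server.accept()
        client.configureBlocking(false)
        client.register(selector, SelectionKey.OP_READ)
      } else if (key.isReadable) {
        // Data is ready, so this read will not block.
        val client = key.channel().asInstanceOf[SocketChannel]
        val buf    = ByteBuffer.allocate(1024)
        val n      = client.read(buf)
        if (n == -1) client.close()
        else { buf.flip(); client.write(buf) } // echo the bytes back
      }
    }
  }
}
```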

If you have a CPU-intensive application, such as a program that reads a file, performs intensive calculations on the complete data and writes the result to a file, then 99% of the time is spent in the CPU part. So create multiple threads (for example, one per core) and do the computation in parallel. As for the I/O, you will most likely use blocking I/O, because it is easier to implement and because the program has nothing else to do in the meantime anyway.
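In code, that splits into blocking edges and a parallel middle. A minimal sketch (file names are placeholders and the computation is made up), reusing the thread-pool idea from the question:

```scala
import java.nio.file.{Files, Paths}
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object CpuPipeline extends App {
  // Blocking read: the simplest choice, the program has nothing else to do yet.
  val data = Files.readAllBytes(Paths.get("input.dat"))

  // Parallel CPU phase: roughly one chunk per core.
  val cores  = Runtime.getRuntime.availableProcessors()
  val chunks = data.grouped(math.max(1, data.length / cores)).toSeq
  val sums   = chunks.map(c => Future(c.map(b => b.toLong & 0xFF).sum))
  val total  = Await.result(Future.sequence(sums).map(_.sum), 10.minutes)

  // Blocking write of the small result.
  Files.write(Paths.get("output.txt"), total.toString.getBytes)
}
```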

If you have an application that is both CPU- and I/O-intensive, you can also use multiple threads and non-blocking I/O. Think of a web server with many clients and page requests where you do heavy computation in a CGI script: while waiting for I/O on one connection, the program can compute the result for another connection. Or think of a program that reads a large file and performs intensive calculations on each chunk of the file (for example, calculating an average value, or adding 1 to all values). In that case you can use non-blocking reads, and while waiting for the next read to finish, you can already process the data you have. If the result is just a small aggregated value (for example, an average), you can use a blocking write for the result. If the result file is the same size as the input file, as with "all values + 1", you can write the results with non-blocking writes, and while a write is in progress you can do the computation for the next chunk.
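Here is a minimal sketch of that chunked variant (computing a running mean of byte values; the file name is a placeholder): while chunk k+1 is being read asynchronously, the program computes on chunk k.

```scala
import java.nio.ByteBuffer
import java.nio.channels.AsynchronousFileChannel
import java.nio.file.{Paths, StandardOpenOption}

object ChunkPipeline extends App {
  val ch        = AsynchronousFileChannel.open(Paths.get("input.dat"), StandardOpenOption.READ)
  val chunkSize = 1 << 20
  var pos       = 0L
  var sum       = 0L
  var count     = 0L

  var buf     = ByteBuffer.allocate(chunkSize)
  var pending = ch.read(buf, pos) // kick off the first read

  var done = false
  while (!done) {
    val n = pending.get() // wait for the in-flight read
    if (n <= 0) done = true
    else {
      pos += n
      val current = buf
      // Immediately start reading the next chunk...
      buf = ByteBuffer.allocate(chunkSize)
      pending = ch.read(buf, pos)
      // ...and compute on the chunk we already have.
      current.flip()
      while (current.hasRemaining) { sum += current.get() & 0xFF; count += 1 }
    }
  }
  if (count > 0) println(s"mean byte value: ${sum.toDouble / count}")
  ch.close()
}
```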

answered Jan 20 '16 at 10:45


