The first statement is true only while the number of parallel connections is relatively small (say, below tens of thousands). It comes down to using many threads (blocking) versus one or a few threads (non-blocking). Say you want to write an application that downloads a file from a remote server. If your application downloads only one file at a time, a single thread is all you need. But if you have a crawler that runs thousands of HTTP requests, you would need thousands of threads (or a limited number of threads combined with NIO). With that many threads, context switching becomes the problem, and it can slow your application down significantly (which is why NIO is the better fit at that level of concurrency).
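To make the contrast concrete, here is a minimal sketch of the blocking model in Java: one thread dedicated to one download. The URL and buffer size are placeholders of my own, not something from the question.

```java
import java.io.InputStream;
import java.net.URI;

public class BlockingDownload {
    public static void main(String[] args) throws Exception {
        // One thread per download: the thread spends almost all of its time
        // blocked inside read(), i.e. in a recv() loop at the OS level.
        try (InputStream in = URI.create("http://example.com/file.bin") // placeholder URL
                                 .toURL().openStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                // ... process n bytes ...
            }
        }
    }
}
```

Scaling this model to a crawler means one such thread per in-flight request, which is exactly where the context-switching cost appears.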
But back to your question: why can NIO be slower in terms of raw data throughput? The reason is CPU time. In the blocking model, your code does exactly one thing: it waits for data, calling recv() in a loop. In an NIO application the logic is more involved: in a loop, the code asks a selector for the set of ready keys (on Linux with the Oracle JVM this involves the epoll_wait system call), iterates over that set, obtains the channel for each key, and then reads data from the channel (the read() system call). In the plain blocking model, all you execute is the recv() system call. Bottom line: in this scenario an NIO-driven application uses more CPU time and generates more mode switches because it issues more system calls (by mode switching I mean the transition from user mode to kernel mode). Therefore, the time required to download the file will be longer.
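For comparison, here is a minimal sketch of the selector loop described above, for a single channel. The host, port, and hand-written HTTP/1.0 request are placeholders, and error handling is omitted for brevity.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class NioReadLoop {
    public static void main(String[] args) throws IOException {
        // Connect and send the request in blocking mode to keep the sketch short.
        SocketChannel channel = SocketChannel.open(
                new InetSocketAddress("example.com", 80)); // placeholder host
        channel.write(ByteBuffer.wrap(
                "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n"
                        .getBytes(StandardCharsets.US_ASCII)));

        // A channel must be non-blocking before it can be registered with a selector.
        channel.configureBlocking(false);
        Selector selector = Selector.open();
        channel.register(selector, SelectionKey.OP_READ);

        ByteBuffer buf = ByteBuffer.allocate(8192);
        while (true) {
            selector.select();                 // epoll_wait on Linux (Oracle JVM)
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    buf.clear();
                    int n = ch.read(buf);      // the read() system call
                    if (n == -1) {             // server closed the connection
                        ch.close();
                        selector.close();
                        return;
                    }
                    buf.flip();
                    // ... process n bytes in buf ...
                }
            }
        }
    }
}
```

Note how much more happens per chunk of data here than in the blocking version: select(), key iteration, and then read(), versus a single recv(). That per-iteration overhead is the answer's point, even though for thousands of concurrent connections this loop still wins overall.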