Why is there no one-shot TCP Nagle flush flag?

According to this Socket FAQ article, Nagle's algorithm is one of several mechanisms that can leave a bunch of data sitting in the TCP buffer without hitting the wire. The delay introduced by Nagle's algorithm can be as long as 200 ms.

For some reason, Nagle's algorithm can be disabled completely, but it cannot be flushed just once. This really puzzles me. Why is there no way to say "just this once, don't wait for any more data; act as if Nagle's 200 ms had already elapsed"?

Wouldn't that make perfect sense, and strike a good balance between no Nagle at all, Nagle all the time, and implementing your own protocol from scratch?

2 answers
Good question. I suspect nobody ever really needed it, or people worked around it. If I remember correctly, enabling TCP_NODELAY pushes the buffered data out immediately; after that, you can simply turn it off again.
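
A minimal sketch of that toggle in C, assuming `sock` is an already connected TCP socket (the function name is just illustrative; error checking is omitted):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Toggle trick: setting TCP_NODELAY pushes out whatever Nagle is
 * holding; clearing it again restores normal batching. */
static void nagle_flush_once(int sock)
{
    int on = 1, off = 0;

    /* Disable Nagle: any pending data is transmitted immediately. */
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));

    /* Re-enable Nagle for subsequent writes. */
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &off, sizeof(off));
}
```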

Of course, this comes at the cost of two system calls per "flush". What you could do instead: send(2) on Unix implementations takes a flags argument, so you could implement your own flag, something like MSG_JUSTPUSHIT (well, perhaps under a different name), and check for it in tcp_output().
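
Purely as an illustration of that idea, the call site might look like the sketch below; MSG_JUSTPUSHIT is invented here, exists in no real kernel, and its value is made up:

```c
#include <sys/types.h>
#include <sys/socket.h>

/* Hypothetical: MSG_JUSTPUSHIT is NOT a real flag. The idea is that a
 * patched kernel would reserve an unused bit in the send(2) flags
 * argument and honor it in tcp_output(). The value below is made up. */
#define MSG_JUSTPUSHIT 0x20000000

static ssize_t send_and_push(int sock, const void *buf, size_t len)
{
    /* Ask the (patched) kernel to transmit immediately, just this once. */
    return send(sock, buf, len, MSG_JUSTPUSHIT);
}
```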


In performance-sensitive applications where the delays introduced by Nagle's algorithm are a problem, it is often easier to disable Nagle's algorithm entirely and emulate its batching in software, either with scatter/gather I/O (e.g. writev()) or by buffering in user space where needed. As an added bonus, this also cuts down on system call overhead.
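
For example, a minimal sketch of that batching approach in C, assuming `sock` is a connected TCP socket with TCP_NODELAY already set, and assuming a message split into a header and a body:

```c
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Send a header and body as one batch: writev(2) hands both pieces to
 * the kernel in a single system call, so they leave in one burst
 * without relying on Nagle to coalesce them. Error checking omitted. */
static ssize_t send_message(int sock, const char *header, const char *body)
{
    struct iovec iov[2];

    iov[0].iov_base = (void *)header;
    iov[0].iov_len  = strlen(header);
    iov[1].iov_base = (void *)body;
    iov[1].iov_len  = strlen(body);

    return writev(sock, iov, 2);
}
```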

Alternatively, you can open two separate sockets and disable Nagle on only one of them. Just bear in mind that data sent on one socket will not necessarily be synchronized with the other.
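
A sketch of that split, assuming the descriptor is already connected to the peer (the name `urgent_sock` is just illustrative; bulk traffic would go over a second, untouched socket):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle on the socket reserved for latency-sensitive traffic;
 * the bulk-traffic socket keeps Nagle's batching. Remember that TCP
 * gives no ordering guarantee between two separate connections. */
static int setup_urgent_socket(int urgent_sock)
{
    int on = 1;
    return setsockopt(urgent_sock, IPPROTO_TCP, TCP_NODELAY,
                      &on, sizeof(on));
}
```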

