This question follows from two other questions I asked in the last few days.
I am creating a new question because I think it concerns the "next step" in my understanding of how to control the flow of my sends/receives, one that I haven't gotten an answer to yet. Related questions:
IOCP Documentation Interpretation Question - Buffer Owner Ambiguity
Non-Blocking TCP Buffer Issues
To summarize: I use Windows I/O completion ports.
I have several threads that handle notifications from the completion port.
I believe the question is platform independent and the answer would be the same when doing this on *nix, *BSD, or Solaris.
So, I need to have my own flow-control system. Fine.
So I send and send and send, a lot. How do I know when to start queuing my sends, since the receiver side is limited to some amount X?
Take an example (closest to my question): FTP protocol.
I have two servers; one is on a 100 Mbit link, and the other is on a 10 Mbit link.
I tell the 100 Mbit one to send a 1 GB file to the other (the 10 Mbit one). The transfer ends with an average rate of 1.25 MB/s.
How did the sender (on the 100 Mbit link) know when to hold back sending, so that the slower one was not flooded? (In this case, the "to-be-sent" queue is the actual file on the hard disk.)
Another way to ask a question:
Can I get a "hold your sends" notification from the far side? Is it built into TCP, or into the so-called "reliable network protocol", or do I need to do this myself?
I could, of course, limit the number of outstanding bytes to a fixed amount, but that just doesn't sound right to me.
Again, I have a loop with many sends to a remote server, and at some point, within that loop, I will have to decide whether to queue each send or whether I can pass it on to the transport layer (TCP).
How can I do that? What would you do? Of course, when I get a completion notification from the IOCP that a send has completed, I will issue the next pending sends from the queue.
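To make the loop concrete, here is a minimal, platform-independent sketch of what I mean by "queue or pass to the transport layer": sends beyond a cap of in-flight bytes are queued, and each completion drains the queue. The cap value, the class name, and the `dispatch` callback (standing in for the real `WSASend` call) are all illustrative assumptions, not part of any real API:

```cpp
#include <cstddef>
#include <deque>
#include <functional>
#include <string>

// Sketch: application-level flow control over a reliable stream.
// Bytes beyond `max_in_flight_` are queued; completions drain the queue.
class SendQueue {
public:
    // `dispatch` stands in for the real overlapped send (e.g. WSASend).
    SendQueue(std::size_t max_in_flight,
              std::function<void(const std::string&)> dispatch)
        : max_in_flight_(max_in_flight), dispatch_(std::move(dispatch)) {}

    // Application code calls this instead of sending directly.
    void send(std::string buf) {
        pending_.push_back(std::move(buf));
        drain();
    }

    // Called when the transport reports `n` bytes completed.
    void on_completion(std::size_t n) {
        in_flight_ -= n;
        drain();
    }

private:
    void drain() {
        // Issue queued buffers while they fit under the in-flight cap.
        while (!pending_.empty() &&
               in_flight_ + pending_.front().size() <= max_in_flight_) {
            in_flight_ += pending_.front().size();
            dispatch_(pending_.front());
            pending_.pop_front();
        }
    }

    std::size_t max_in_flight_;
    std::size_t in_flight_ = 0;
    std::deque<std::string> pending_;
    std::function<void(const std::string&)> dispatch_;
};
```

With an 8-byte cap, the first two 4-byte sends go out immediately, the third waits in the queue, and the next completion releases it.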
Another design issue related to this:
Since I have to use custom buffers with a send queue, and these buffers are returned to a pool for reuse (rather than being freed with delete) when a send-complete notification arrives, I would have to protect that buffer pool with a mutex.
Using a mutex slows things down, so I thought: why not give each thread its own buffer pool, so that accessing it, at least when acquiring the buffers needed for a send operation, requires no mutex, because the pool belongs to that thread alone?
The buffer pools would live in thread-local storage (TLS).
No shared pool means no locking, which means faster operations, but it also means more memory used by the application: even if one thread has already allocated 1,000 buffers, another thread that is sending right now and needs 1,000 buffers will have to allocate its own.
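The per-thread pool idea above can be sketched with C++'s `thread_local` storage; the class and its interface are my own illustration, not an existing API. Note one caveat the question doesn't address: with a TLS pool, a buffer has to be released back on the same thread that acquired it, which matters when completion notifications can land on any worker thread:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// One pool per thread: acquire/release never touch a mutex because the
// pool instance is thread_local. Trade-off: each thread grows its own
// pool, so total memory is higher than with one shared, locked pool.
class BufferPool {
public:
    std::vector<char>* acquire(std::size_t size) {
        if (free_.empty())
            free_.push_back(std::make_unique<std::vector<char>>());
        std::unique_ptr<std::vector<char>> buf = std::move(free_.back());
        free_.pop_back();
        buf->resize(size);
        in_use_.push_back(std::move(buf));
        return in_use_.back().get();
    }

    // Must be called from the owning thread (TLS caveat).
    void release(std::vector<char>* buf) {
        for (auto it = in_use_.begin(); it != in_use_.end(); ++it) {
            if (it->get() == buf) {
                free_.push_back(std::move(*it));  // recycle, don't delete
                in_use_.erase(it);
                return;
            }
        }
    }

    std::size_t free_count() const { return free_.size(); }

private:
    std::vector<std::unique_ptr<std::vector<char>>> free_;
    std::vector<std::unique_ptr<std::vector<char>>> in_use_;
};

// Each thread sees its own independent instance of this pool.
thread_local BufferPool g_pool;
```

After a release, a subsequent acquire hands back the same recycled buffer rather than allocating a new one.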
One more problem:
Say I have buffers A, B, and C in the "to-be-sent" queue.
Then I get a completion notification telling me that the receiver got 10 out of 15 bytes. Should I re-issue the send from the corresponding offset into the buffer, or will TCP handle it for me, i.e., complete the send? And if it is up to me, can I be sure that this buffer is the next-to-be-sent in the queue, or could it be, say, buffer B?
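For the "resend from an offset" half of the question, this is the mechanics I have in mind, sketched generically (the struct and function names are illustrative assumptions; whether a short completion can actually occur here is exactly what I'm asking): track how far into the head buffer the transport has gotten, and hand it the remaining span until the buffer is fully sent.

```cpp
#include <cstddef>
#include <deque>
#include <string>
#include <utility>

// A queued send plus how many of its bytes have already completed.
struct PendingSend {
    std::string data;
    std::size_t offset = 0;  // bytes already reported as sent
};

// Next (pointer, length) span to hand to the transport; length 0 means
// the queue is drained.
std::pair<const char*, std::size_t> next_span(std::deque<PendingSend>& q) {
    if (q.empty()) return {nullptr, 0};
    const PendingSend& p = q.front();
    return {p.data.data() + p.offset, p.data.size() - p.offset};
}

// Called with the byte count from a send completion: advance the head
// buffer's offset, and retire it once it is fully sent.
void on_sent(std::deque<PendingSend>& q, std::size_t n) {
    PendingSend& p = q.front();
    p.offset += n;
    if (p.offset == p.data.size()) q.pop_front();
}
```

So a completion of 3 out of 5 bytes leaves a 2-byte span at offset 3 of the same head buffer, and only when that remainder completes does the queue move on to the next buffer.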
This is a long question, and I hope no one was hurt (:
I would really appreciate someone taking the time to answer it. I promise I will upvote it twice! (:
Thanks, everyone!