C++ TCP Socket Send Speed

I am sending messages to a remote server over a simple blocking TCP socket, and the problem I have is that each message takes a very different amount of time to send.

Here is an example of what I get:

    Bytes Sent: 217, Time: 34.3336 usec
    Bytes Sent: 217, Time: 9.9107 usec
    Bytes Sent: 226, Time: 20.1754 usec
    Bytes Sent: 226, Time: 38.2271 usec
    Bytes Sent: 217, Time: 33.6257 usec
    Bytes Sent: 217, Time: 12.7424 usec
    Bytes Sent: 217, Time: 21.5912 usec
    Bytes Sent: 217, Time: 31.1480 usec
    Bytes Sent: 218, Time: 28.3164 usec
    Bytes Sent: 218, Time: 13.0963 usec
    Bytes Sent: 218, Time: 82.8254 usec
    Bytes Sent: 218, Time: 13.0963 usec
    Bytes Sent: 227, Time: 30.7941 usec
    Bytes Sent: 218, Time: 27.9624 usec
    Bytes Sent: 216, Time: 2.1237 usec
    Bytes Sent: 218, Time: 12.3884 usec
    Bytes Sent: 227, Time: 31.1480 usec
    Bytes Sent: 227, Time: 88.4887 usec
    Bytes Sent: 218, Time: 93.0901 usec
    Bytes Sent: 218, Time: 7.7870 usec
    Bytes Sent: 218, Time: 28.3164 usec
    Bytes Sent: 227, Time: 89.5505 usec
    Bytes Sent: 218, Time: 84.2412 usec
    Bytes Sent: 218, Time: 13.8042 usec
    Bytes Sent: 227, Time: 99.4612 usec
    Bytes Sent: 218, Time: 86.0110 usec
    Bytes Sent: 218, Time: 12.3884 usec
    Bytes Sent: 218, Time: 87.7807 usec
    Bytes Sent: 216, Time: 3.5395 usec
    Bytes Sent: 218, Time: 4.6014 usec
    Bytes Sent: 218, Time: 36.1034 usec
    Bytes Sent: 218, Time: 14.8661 usec
    Bytes Sent: 218, Time: 24.0689 usec
    Bytes Sent: 218, Time: 18.0517 usec
    Bytes Sent: 227, Time: 24.4229 usec

Does anyone know why this could happen? Why does one send take 3 usec while another takes 80 usec?
And is there a way to fix this?

Note: the main goal I want to achieve is to send each message as quickly as possible. I am not tied to this particular socket setup, as long as an alternative works faster.

Some additional information about what I am doing:

C++, Visual Studio 2013

How I create the socket:

    ...
    hints.ai_family   = AF_UNSPEC;     // IPv4 or IPv6
    hints.ai_socktype = SOCK_STREAM;   // stream socket
    hints.ai_protocol = IPPROTO_TCP;   // TCP
    ...
    ConnectSocket = socket(ptr->ai_family, ptr->ai_socktype, ptr->ai_protocol);
    ...
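For context, a minimal sketch of the typical Winsock setup around a snippet like this, assuming a hypothetical host and port (the question does not give them) and with error handling omitted:

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #pragma comment(lib, "Ws2_32.lib")

    WSADATA wsaData;
    WSAStartup(MAKEWORD(2, 2), &wsaData);

    addrinfo hints = {}, *result = nullptr;
    hints.ai_family   = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_protocol = IPPROTO_TCP;
    getaddrinfo("example.com", "27015", &hints, &result);   // hypothetical host/port

    SOCKET ConnectSocket = INVALID_SOCKET;
    for (addrinfo* ptr = result; ptr != nullptr; ptr = ptr->ai_next) {
        ConnectSocket = socket(ptr->ai_family, ptr->ai_socktype, ptr->ai_protocol);
        if (ConnectSocket == INVALID_SOCKET) continue;
        if (connect(ConnectSocket, ptr->ai_addr, (int)ptr->ai_addrlen) == 0) break;
        closesocket(ConnectSocket);
        ConnectSocket = INVALID_SOCKET;
    }
    freeaddrinfo(result);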

How I send and measure the time:

    ...
    LARGE_INTEGER cT;
    QueryPerformanceCounter(&cT);            // timestamp before the send
    long long dT = cT.QuadPart;

    iBytesSent = send(ConnectSocket, msgFinal, msgFinalLen, 0);

    QueryPerformanceCounter(&cT);            // timestamp after the send
    dT = cT.QuadPart - dT;                   // elapsed time in QPC ticks
    ...
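Since the printed output above is in usec, the tick count is presumably converted elsewhere. A minimal sketch of that conversion using QueryPerformanceFrequency, reusing the question's ConnectSocket, msgFinal and msgFinalLen and with illustrative names otherwise:

    #include <winsock2.h>
    #include <windows.h>
    #include <cstdio>

    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);        // ticks per second, fixed at boot

    QueryPerformanceCounter(&t0);
    int iBytesSent = send(ConnectSocket, msgFinal, msgFinalLen, 0);
    QueryPerformanceCounter(&t1);

    double usec = (t1.QuadPart - t0.QuadPart) * 1000000.0 / freq.QuadPart;
    printf("Bytes Sent: %d, Time: %.4f usec\n", iBytesSent, usec);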

I am also reading from this socket in another thread; I do not know whether this can affect the send or not:

 iResult = recv(ConnectSocket, recvbuf, DEFAULT_BUFLEN, 0); 
+4
3 answers

Your methodology is invalid. You are only measuring how long it takes to copy the data into the send buffer. If there is room, no network operation happens at all during the call. If there is no room, you block until room becomes available, which depends on the receiver reading what is already there. So sometimes there is room and sometimes there is not, depending on whether the receiver is keeping up.

If you want to measure the round-trip time, you need to send a timestamp, have the peer echo it back, and compare it with the current time when the echo arrives.
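A minimal sketch of that idea, assuming the remote server simply echoes the payload back unchanged; the function name and framing are illustrative, not from the question's code, and partial reads are ignored for brevity:

    #include <winsock2.h>
    #include <windows.h>

    // Returns the round-trip time in microseconds for one echoed message.
    double MeasureRoundTripUsec(SOCKET s)
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        QueryPerformanceCounter(&t0);
        long long sentTicks = t0.QuadPart;

        // Use the timestamp itself as the payload.
        send(s, reinterpret_cast<const char*>(&sentTicks), sizeof(sentTicks), 0);

        // Wait for the peer to echo the same bytes back.
        long long echoed = 0;
        recv(s, reinterpret_cast<char*>(&echoed), sizeof(echoed), 0);

        QueryPerformanceCounter(&t1);
        return (t1.QuadPart - echoed) * 1000000.0 / freq.QuadPart;
    }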

+3

You are not measuring the time it takes to send a message; you are measuring the time it takes for the message to be copied into the TCP send buffer. That can vary because of memory allocation, lock contention, and many other things. It may also include another process being scheduled in, costing you your time slice.

+1

What you are measuring is how long the send call takes. That is basically a write (I/O) operation into the socket-level buffer.

Your process makes the system call and blocks; once the I/O completes, it becomes runnable again. The time difference you see across send calls includes:

i. The actual write time.

ii. Wake-up latency, because the scheduler will not necessarily wake your process immediately after the I/O completes; another process may be scheduled first.

Tweaks:

i. Try tuning the send/receive window size, i.e. the amount of data that can be in flight without waiting for an ACK (see the sketch after this list).

See: Configuring the TCP Receive Window in C, and Using tcpdump on Linux.

ii. The buffers you pass to the send call should match the window size, so that TCP does not wait for an "optimal" amount of data to accumulate before actually pushing it onto the network.

iii. A flag like TCP_NODELAY (which disables Nagle's algorithm) in your OS's socket implementation can help; see the sketch after this list.

iv. Adjust the nice value (scheduling priority) of your process so that it is woken up as soon as the blocking I/O call completes.
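A minimal sketch of points i and iii on Windows, using setsockopt to enlarge the socket buffers and disable Nagle's algorithm; the 256 KB size is an arbitrary example value, not a recommendation from this answer:

    #include <winsock2.h>
    #include <ws2tcpip.h>

    void TuneSocket(SOCKET s)
    {
        int bufSize = 256 * 1024;   // example size; tune for your bandwidth-delay product
        setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                   reinterpret_cast<const char*>(&bufSize), sizeof(bufSize));
        setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   reinterpret_cast<const char*>(&bufSize), sizeof(bufSize));

        int noDelay = 1;            // send small messages immediately (disable Nagle)
        setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                   reinterpret_cast<const char*>(&noDelay), sizeof(noDelay));
    }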

0
