I work for an internet provider. We are developing a speed tester for our clients, but we have run into some problems when testing TCP speed.
One client had a total transfer duration of 102 seconds for 100 MB with a packet size of 8192 bytes. 100,000,000 / 8192 ≈ 12,207 packets. If the client sends an ACK for every other packet, that looks like a lot of time spent just sending ACKs. Say the client sends about 6,000 ACKs and the RTT is 15 ms: 6,000 × 7.5 ms = 45,000 ms = 45 seconds for ACKs alone?
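As a rough sanity check, here is how I arrive at those numbers (a sketch only; it assumes one ACK per two packets and that each ACK costs half the RTT, which is my simplification):

    using System;

    // Rough estimate of the ACK overhead described above.
    // Assumption (mine): one ACK per two packets, each ACK "costs" half the RTT.
    double totalBytes = 100_000_000;
    double packetSize = 8192;
    double rttMs = 15;

    double packets = totalBytes / packetSize;          // ~12,207 packets
    double acks = packets / 2;                         // ~6,103 ACKs
    double ackTimeSeconds = acks * (rttMs / 2) / 1000; // ~45.8 seconds

    Console.WriteLine($"{packets:F0} packets, {acks:F0} ACKs, ~{ackTimeSeconds:F1} s spent on ACKs");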
If I use this calculation for Mbit/s:
(((sizeof_download_in_bytes / durationinseconds) / 1000) / 1000) * 8 = Mbit/s
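In C#, that raw calculation looks roughly like this (a minimal sketch; the variable names and sample values are mine):

    using System;

    // Raw throughput: bytes transferred and total duration measured on the client.
    double sizeofDownloadInBytes = 100_000_000;
    double durationInSeconds = 102;

    double mbitPerSecond = sizeofDownloadInBytes / durationInSeconds / 1000 / 1000 * 8;
    Console.WriteLine($"{mbitPerSecond:F2} Mbit/s"); // ~7.84 Mbit/s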
I get the result in Mbit/s, but the higher the RTT between the server and the client, the lower the Mbit/s speed.
To simulate that the user is close to the server, would it be "legal" to subtract the ACK response time from the final Mbit/s result? Would that behave like a simulation of an end user sitting close to the server?
So, I would display this calculation for the end user:
(((sizeof_download_in_bytes / (durationinseconds - 45)) / 1000) / 1000) * 8 = Mbit/s
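Expressed as code, the adjusted version would be roughly (again just a sketch; the 45 seconds is the estimated ACK time from above):

    using System;

    double sizeofDownloadInBytes = 100_000_000;
    double durationInSeconds = 102;
    double estimatedAckSeconds = 45; // estimated ACK time from the calculation above

    double adjustedMbitPerSecond =
        sizeofDownloadInBytes / (durationInSeconds - estimatedAckSeconds) / 1000 / 1000 * 8;
    Console.WriteLine($"{adjustedMbitPerSecond:F2} Mbit/s"); // ~14.04 Mbit/s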
Is that right?
c# tcp