What factors other than latency and bandwidth affect network speed?

I noticed that browsing images or websites hosted on American servers (I'm in Europe) is much slower. The main reason is probably latency due to distance.

But if one packet takes n milliseconds to arrive, couldn't that be offset by sending more packets at the same time?

Is this actually what happens, or are packets sent one by one? And if so, what determines how many packets can be sent at the same time (something to do with the cable, I think)?

+4
5 answers

But if one packet takes n milliseconds to arrive, couldn't that be offset by sending more packets at the same time?

Not without limit. The TCP/IP standards include congestion-control algorithms that determine how much data may be in flight, not yet acknowledged, at any one time, precisely to avoid overloading the network as a whole.
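As a rough illustration of why that in-flight limit matters for a Europe-to-US connection, the amount of unacknowledged data needed to fill a path is its bandwidth-delay product. The numbers below are illustrative assumptions, not measurements from the question:

```python
# Back-of-envelope: how much data must be "in flight" to keep a path busy.
# The link speed and RTT below are assumed example values.

def bandwidth_delay_product(bandwidth_bps, rtt_seconds):
    """Bytes that must be unacknowledged at once to keep the pipe full."""
    return bandwidth_bps / 8 * rtt_seconds

# A 100 Mbit/s link from Europe to a US server, ~120 ms round trip:
bdp = bandwidth_delay_product(100e6, 0.120)
print(f"{bdp / 1e6:.1f} MB in flight")   # 1.5 MB

# With ~1460-byte TCP payloads, that is roughly 1027 packets at once:
print(round(bdp / 1460), "packets")
```

If the congestion or receive window is smaller than this product, the sender spends part of every round trip idle, waiting for acknowledgments.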

Is this really happening or are packets being sent one by one?

Packets are not sent strictly one by one: TCP does keep a certain number of packets, and amount of data, "in flight" at once.

And if so, what determines how many packets can be sent at the same time (something to do with the cable, I think)?

Which cable? The same standards apply whether you are connected by cable, wirelessly, or over some mix of link types (remember that your packets pass through many routers on the way to their destination, and the sequence of routers can change from packet to packet).

You can start learning about TCP at, for example, Wikipedia. Your specific questions concern the congestion-control algorithms in the standard; Wikipedia will give you pointers to all the relevant algorithms and RFCs, but the big picture will not come together if you try to start your study there without a solid understanding of TCP in general (for example, its flow-control concepts).

Wikipedia and similar encyclopedia/study-guide sites can give you an overview, while the RFCs are not written to be readable or approachable for laypeople. If you are serious about TCP, I would advise starting your study with Stevens' immortal trilogy of books (although there are many other valid choices, Stevens is definitely my personal favorite).

+3

The problem: lack of parallelism.

Latency does not directly affect your channel's bandwidth. For example, a dump truck driving across the country has terrible latency but remarkable throughput if you stuff it full of 2 TB tapes.
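To put numbers on the dump-truck analogy (the figures below are illustrative assumptions, not anything from the answer):

```python
# Throughput of a truck full of tapes, versus its latency.
tapes = 10_000                 # assume 10,000 tapes fit in the truck
tape_bytes = 2e12              # 2 TB each
trip_seconds = 3 * 24 * 3600   # assume three days to cross the country

throughput_bps = tapes * tape_bytes * 8 / trip_seconds
print(f"{throughput_bps / 1e9:.0f} Gbit/s")   # 617 Gbit/s
# The latency, though, is three days per "delivery" -- useless for browsing.
```

Hundreds of gigabits per second of throughput, yet hopeless for an interactive web page: that is the latency/bandwidth distinction in one picture.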

The problem is that your web browser cannot start asking for things until it knows what to ask for. So when you load a web page with ten images, it has to wait until the HTML containing the img tags arrives before it can send requests for them. Everything is noticeably slower not because your connection is saturated, but because of the idle time between one request and the next.
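A sketch of that idle-time effect, using hypothetical numbers (a 120 ms round trip, transfer time ignored because the files are small):

```python
# Time to fetch a page with 10 small images over a 120 ms RTT link,
# counting only round trips (assumed values, transfer time ignored).
rtt = 0.120
images = 10

sequential = rtt + images * rtt   # fetch HTML, then one image per round trip
parallel = rtt + rtt              # fetch HTML, then all images concurrently

print(f"one at a time: {sequential:.2f} s")   # 1.32 s
print(f"in parallel:   {parallel:.2f} s")     # 0.24 s
```

The link is never full in either case; the difference is purely how many round trips the browser serializes.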

Prefetching helps to mitigate this problem.

As for "multiple packets at a time": a single TCP connection will have many packets in transit at once, as determined by the sliding-window algorithm the endpoints use. But that only helps within one connection at a time...

+1

TCP uses what is called a sliding window. Basically, the receiver has some amount of buffer space, X, in which to reassemble out-of-order packets. The sender may send up to X bytes beyond the last acknowledged byte, whose sequence number is N, say. This way you can fill the pipe between sender and receiver with X unacknowledged bytes, on the assumption that the packets will probably get there, and if not, the receiver will let you know by not acknowledging the missing packets. In each response packet, the receiver sends a cumulative acknowledgment, saying: "I have all the bytes up to byte X." This allows multiple packets to be in flight at once.

Imagine the client sends three packets, X, Y and Z, starting at sequence number N, and that because of routing they arrive in the order Y, then Z, then X. Y and Z will be buffered in the destination's stack, and when X arrives the receiver will acknowledge N + (cumulative length of X, Y and Z). That moves the start of the sliding window forward, allowing the client to send more packets.
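The buffering and cumulative-acknowledgment behaviour just described can be sketched as a small simulation (a toy model, not real TCP):

```python
# Simulate a receiver that buffers out-of-order segments and emits
# cumulative ACKs meaning "I have everything up to this byte".

def receive(segments, expected=0):
    """segments: list of (start_byte, length) in arrival order."""
    buffered, acks = {}, []
    for start, length in segments:
        buffered[start] = length
        # Advance past every contiguous segment we now hold.
        while expected in buffered:
            expected += buffered.pop(expected)
        acks.append(expected)   # cumulative ACK sent after this arrival
    return acks

# Three 100-byte segments starting at byte 0, arriving in order Y, Z, X:
print(receive([(100, 100), (200, 100), (0, 100)]))   # [0, 0, 300]
```

The ACK stays at 0 while Y and Z sit in the buffer; the moment X fills the gap, one acknowledgment covers all three segments and the window slides forward by 300 bytes.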

With selective acknowledgment, the receiver can instead acknowledge parts of the sliding window and ask the sender to retransmit only the lost parts. In the classic scheme, if Y were lost, the sender would have to resend Y and Z; with selective acknowledgment, it can resend just Y. See the Wikipedia page.

As for speed, one thing that can slow you down is DNS. If the IP address is not cached, that adds an extra round trip before you can even request the image in question. If it is not a popular site, this may well be the case.
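You can observe that DNS cost yourself; a rough sketch (the hostname is just an example, and the timings depend entirely on your resolver and its cache):

```python
# Time a DNS lookup. The first resolution for a name typically pays one
# or more round trips to a resolver; repeats are usually answered from a
# nearby cache and return much faster.
import socket
import time

def timed_lookup(host):
    start = time.perf_counter()
    socket.getaddrinfo(host, 80)
    return time.perf_counter() - start

for attempt in range(2):
    print(f"lookup {attempt + 1}: {timed_lookup('example.com') * 1000:.1f} ms")
```

On an uncached name you will often see tens of milliseconds for the first lookup and near-zero for the second, which is exactly the extra round trip the answer describes.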

TCP/IP Illustrated, Volume 1 by Richard Stevens is great if you want to know more. The title sounds dry, but the packet diagrams with annotated arrows from one node to another really do make it easy to understand. It is one of those books you can learn from and then keep around as a reference; it is one of my three go-to networking books.

+1

TCP will try to send more and more packets at a time, up to a certain limit, until it starts to notice that some are being dropped (packets die on the router/switch floor when queues overflow, or when their Time To Live expires), and then it throttles back. That is how it determines the window size, and thus the bandwidth, it can use. If the sending node sees many dropped packets, the receiving node will simply experience it as a slow connection. The sender could very well be blasting you with data; you just don't see all of it arrive.
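That probe-then-back-off behaviour is, in simplified form, additive-increase / multiplicative-decrease (AIMD). A toy model of it, not a faithful TCP implementation:

```python
# Toy AIMD: grow the congestion window by one segment per loss-free
# round trip, halve it whenever a loss is detected.

def aimd(events, cwnd=1):
    """events: 'ok' for a loss-free RTT, 'loss' for a detected drop."""
    history = []
    for event in events:
        if event == "loss":
            cwnd = max(1, cwnd // 2)   # multiplicative decrease
        else:
            cwnd += 1                  # additive increase
        history.append(cwnd)
    return history

print(aimd(["ok"] * 5 + ["loss"] + ["ok"] * 3))
# [2, 3, 4, 5, 6, 3, 4, 5, 6]
```

The window climbs until a drop signals congestion, halves, then climbs again, producing the sawtooth pattern characteristic of classic TCP congestion control.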

0

I assume it is also possible to transmit packets in parallel (yes, though it may be limited by the number of packets that can be sent at a time). You will find more about packet transmission under topics such as message switching, packet switching, circuit switching, and virtual-circuit packet switching...

-1
