I have a simple scenario where two servers are connected through a gigabit connection. I run iperf on both sides to measure bandwidth.
What surprises me is that when I run traffic in both directions simultaneously, one direction always dominates (for example, ~900 Mbit/s versus ~100 Mbit/s). If I run unidirectional traffic, each direction gets ~900 Mbit/s.
If I connect one of the servers (the one with less memory) to a different server, the bidirectional traffic is balanced, so it is definitely not an iperf problem.
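For reference, this is roughly how I drive the test from one side (a sketch only; the peer address, duration, and the exact iperf output format are assumptions on my part):

```python
# Rough sketch of the bidirectional test: run iperf's dual test (-d) against
# the peer and pull the reported bandwidths out of the output.
# Assumptions: iperf (version 2) is on PATH, "iperf -s" is already running on
# the peer, and the peer address below is a placeholder.
import re
import subprocess

PEER = "192.168.1.2"   # placeholder address of the other server
DURATION = 30          # seconds

out = subprocess.check_output(
    ["iperf", "-c", PEER, "-d", "-t", str(DURATION)]
).decode()

# iperf 2 prints one summary line per direction, e.g. "... 905 Mbits/sec"
rates = re.findall(r"([\d.]+)\s*Mbits/sec", out)
print("reported rates (Mbit/s):", rates)
```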
Other facts:
- One server has fairly large memory (~12 GB), while the other has only ~4 GB.
- Both servers have the same TCP memory configuration: the same tcp_rmem/tcp_wmem, core rmem/wmem, and TX queue length (I compare these with the sketch after this list).
- Both use the same Ethernet card (e1000 driver).
- Same Linux version, Red Hat with kernel 2.6.9. The larger server runs the 64-bit kernel because of its 12 GB of memory.
- Neither has any other traffic, apart from a little SSH and an occasional ping every second.
- Both have "tcp_moderate_rcvbuf" turned on.
Questions:
- Why is the traffic so unbalanced?
- Where should I look to see whether the socket buffer is heavily used on one side, and how? (A rough sketch of what I mean follows this list.)
- Besides iperf, what other good software (not hardware testers) is there for measuring performance?
- What is the best way to get an idea of how Linux allocates buffers among the Ethernet ring buffer, TCP buffers, socket buffers, and other buffers?
- What else could affect throughput that I have not considered above?
- Is there any documentation explaining how Linux divides memory among user space, the kernel, device drivers, and the network stack?
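For the socket-buffer question above, this is the kind of thing I had in mind (a rough sketch; the field layout is what I understand /proc/net/tcp to use, and the iperf port is a guess):

```python
# Poll /proc/net/tcp and report per-socket queue occupancy. In each data row,
# field 4 is "tx_queue:rx_queue" in hex: bytes queued for transmit (not yet
# acked) and bytes received but not yet read by the application.
# Assumption: the iperf connections use the default port 5001.
import time

IPERF_PORT = 5001  # default iperf port

def dump_queues():
    with open("/proc/net/tcp") as f:
        lines = f.readlines()[1:]  # skip the header line
    for line in lines:
        fields = line.split()
        local, remote = fields[1], fields[2]
        tx_q, rx_q = (int(x, 16) for x in fields[4].split(":"))
        ports = (int(local.split(":")[1], 16), int(remote.split(":")[1], 16))
        if IPERF_PORT in ports:
            print("local=%s remote=%s tx_queue=%d rx_queue=%d"
                  % (local, remote, tx_q, rx_q))

while True:
    dump_queues()
    print("---")
    time.sleep(1)
```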
Any recommendations are deeply appreciated.