I am building an embedded Linux real-time application that handles a lot of network traffic. Of all that traffic, two connections are time critical: one is the data input and the other is the data output. My application needs this traffic to take precedence over the other, non-time-critical traffic.
Two things matter to me:
- Minimize the number of packets dropped due to congestion on these two connections.
- Minimize the latency through the device (input to output) on these two connections.
I've come up to speed (somewhat!) on Linux traffic control, and I understand that it applies primarily to egress traffic, since the remote device is responsible for the priority of the data it sends to me. I have set my application up as a real-time process and have worked through the issues of what priority to run it at.
Now I'm starting on the tc setup. For my test case, here is what I use:
tc qdisc add dev eth0 root handle 1: prio bands 3 priomap 2 2 2 2 2 2 2 0 2 2 2 2 2 2 2 2
tc qdisc add dev eth0 parent 1:1 handle 10: pfifo
tc qdisc add dev eth0 parent 1:2 handle 20: pfifo
tc qdisc add dev eth0 parent 1:3 handle 30: pfifo
Basically I'm saying: send all priority 7 traffic over band 0, and send all other traffic over band 2 (the priomap entries map skb priorities 0-15 to bands, and only entry 7 points at band 0). Once I have this simple test working, I'll do a better job of handling the rest of the traffic.
First, let me check my expectations: I expect that any traffic with priority 7 should always go out ahead of traffic with any other priority. This should make the latency on that traffic relatively unaffected by other traffic on the box, no? My mtu is set to 1500, and I get about 10 MB/s through the interface. The maximum additional latency on band 0 caused by band 2 traffic should be one packet (<= 1500 bytes), or 150 us (1500 bytes / 10 MB/s = 150 us).
Here is my test setup:
Two Linux boxes. Box 1 runs a TCP server that echoes its input. Box 2 connects to box 1, sends packets over TCP, and measures the latency (time sent to time received).
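Roughly, the client side of the measurement looks like this (a simplified sketch using plain sockets rather than my real application code; the server address, port, probe size, and packet rate are placeholders, and error handling is left out for brevity):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>
#include <cstring>

int main()
{
    const char* server_ip = "192.168.0.1";   /* placeholder: box 1's address */
    const int server_port = 50000;           /* placeholder port             */

    int fd = socket(AF_INET, SOCK_STREAM, 0);

    int so_priority = 7;                     /* same priority as the real app */
    setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &so_priority, sizeof(so_priority));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(server_port);
    inet_pton(AF_INET, server_ip, &addr.sin_addr);
    connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    char buf[64];
    std::memset(buf, 0, sizeof(buf));
    for (;;) {
        auto t0 = std::chrono::steady_clock::now();
        send(fd, buf, sizeof(buf), 0);              /* small probe packet     */
        recv(fd, buf, sizeof(buf), MSG_WAITALL);    /* wait for the full echo */
        auto t1 = std::chrono::steady_clock::now();
        long long us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        std::printf("round trip: %lld us\n", us);
        usleep(10000);                              /* ~100 probes per second */
    }
}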
I use the same tc setup on both Linux boxes.
In the applications (both on the server and on the client), I set SO_PRIORITY on the socket as follows:
int so_priority = 7;
setsockopt(m_socket.native(), SOL_SOCKET, SO_PRIORITY, &so_priority, sizeof(so_priority));
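For what it's worth, the value can be read back with getsockopt to confirm the option actually took effect. A minimal sketch, where fd just stands in for m_socket.native():

#include <sys/socket.h>
#include <cstdio>

/* Read SO_PRIORITY back to confirm the earlier setsockopt took effect. */
void check_priority(int fd)
{
    int prio = -1;
    socklen_t len = sizeof(prio);
    if (getsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, &len) == 0)
        std::printf("SO_PRIORITY is %d\n", prio);   /* expect 7 */
    else
        std::perror("getsockopt(SO_PRIORITY)");
}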
I use tc to verify that my traffic is going over band 0 and that all other traffic is going over band 2:
tc -s qdisc ls dev eth0
Here's the rub: when there is no other traffic, I see latencies in the 500 us range. When there is other traffic (for example, an scp job copying a 100 MB file), the latency jumps to 10+ ms. What is really strange is that NONE of the tc work I did has any effect. In fact, if I swap the bands (so that all my traffic goes over the lower-priority band 2 and the other traffic goes over band 1), I see no difference in latency.
What I expected is that when there is other traffic on the network, I would see an increase in latency of about 150 us, not 10 ms! As an aside, I verified that loading the box with other, non-real-time processes does not affect the latency, and neither does traffic on other interfaces.
One other item of note: if I drop the mtu to 500 bytes, the latency drops to about 5 ms. Still, this is an order of magnitude worse than the unloaded case (at 10 MB/s, a 500-byte packet should only add about 50 us). Also: why does changing the mtu affect the latency so much, while using tc to set up priority queuing has no effect???
Why doesn't tc help me? What am I missing?
Thanks!
Eric