Cable quality is usually a red herring. I'd think more about hooking up a network analyzer to check whether you're getting enough retransmissions to matter. If you're getting a lot, try to isolate where they occur and replace the cable(s) causing the problem. If you're not getting errors that result in retransmissions, then the cable has (practically) no effect on latency.
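A packet capture tool (Wireshark, tcpdump) will show you exactly where retransmissions happen; for a quick first check, though, the kernel's own counters are often enough. Here's a minimal sketch, assuming Linux and its /proc/net/snmp TCP counters (the path and field names are Linux-specific; other systems expose the same numbers through netstat -s):

```
# Minimal sketch (Linux-only): read TCP retransmission counters from
# /proc/net/snmp to see whether retransmissions happen often enough
# to be worth chasing down.

def tcp_retrans_stats(path="/proc/net/snmp"):
    with open(path) as f:
        lines = [l.split() for l in f if l.startswith("Tcp:")]
    header, values = lines[0], lines[1]  # first Tcp: line is names, second is numbers
    stats = dict(zip(header[1:], map(int, values[1:])))
    return stats["OutSegs"], stats["RetransSegs"]

out_segs, retrans = tcp_retrans_stats()
rate = retrans / out_segs if out_segs else 0.0
print(f"{retrans} retransmitted of {out_segs} segments sent ({rate:.4%})")
```

If that percentage is a tiny fraction of a percent, the cabling is almost certainly not your problem.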
Larger buffers on network adapters and (especially) switches do not, by themselves, reduce latency. In fact, to truly minimize latency you generally want the smallest buffers you can get away with, not larger ones. Data sitting in a buffer instead of being processed immediately adds delay. Honestly, this is rarely worth worrying about, but still: if you really want to minimize latency (and care much less about bandwidth), you'd be better off with a hub than a switch (hubs are somewhat hard to find these days, but they certainly have low latency, as long as network utilization stays low).
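The same "smaller buffers, less waiting" trade-off applies at the socket level. A minimal sketch of what that looks like in practice, using standard socket options (the 4096-byte value is just an illustrative choice, and the kernel may round whatever you request):

```
# Minimal sketch: trading buffering for latency on a TCP socket.
# SO_SNDBUF/SO_RCVBUF request smaller kernel buffers, and TCP_NODELAY
# disables Nagle's algorithm so small writes go out immediately
# instead of waiting to be coalesced into larger packets.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)   # small send buffer
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)   # small receive buffer
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # no send coalescing

# Read back what the kernel actually granted (Linux often doubles it).
print("SO_SNDBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("SO_RCVBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```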
Multiple network adapters can increase bandwidth considerably, but their effect on latency is generally minimal at best.
Edit: My main advice, though, would be to get a sense of scale. Shortening a network cable by a foot saves you about a nanosecond, which is on roughly the same order as speeding up packet processing by a couple of assembly-language instructions.
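To make the scale concrete, here's the back-of-the-envelope arithmetic behind that nanosecond figure (the 0.66 velocity factor is an assumption for typical twisted-pair cable; real cables vary a bit):

```
# Propagation delay of one foot of copper cable, assuming a signal
# speed of roughly two-thirds the speed of light.

C = 299_792_458            # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66     # assumed for typical twisted pair
FOOT = 0.3048              # meters

delay_s = FOOT / (C * VELOCITY_FACTOR)
print(f"~{delay_s * 1e9:.2f} ns per foot of cable")   # ~1.5 ns
```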
Bottom line: like any other optimization, to get very far you need to measure where your latency is coming from before you can do much to reduce it. In most cases, reducing the length of the wire (to use one example) won't make enough difference to notice, simply because it is fast to start with. If something starts out taking 10 microseconds, nothing you can do will speed it up by more than 10 microseconds, so unless your system is already so fast that 10 µs is a significant percentage of its total time, that piece isn't worth attacking.
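Measuring doesn't have to be elaborate. A minimal sketch of timing a round trip with a high-resolution clock, so you know your baseline before touching anything (HOST and PORT are hypothetical placeholders for whatever service you're actually tuning):

```
# Minimal sketch of "measure first": time request/response round trips
# over TCP and report median and tail latency.

import socket
import time

HOST, PORT = "192.0.2.1", 7    # hypothetical echo service
N = 1000

samples = []
with socket.create_connection((HOST, PORT)) as sock:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    for _ in range(N):
        t0 = time.perf_counter()
        sock.sendall(b"x")
        sock.recv(1)
        samples.append(time.perf_counter() - t0)

samples.sort()
print(f"median RTT: {samples[N // 2] * 1e6:.1f} us")
print(f"p99 RTT:    {samples[int(N * 0.99)] * 1e6:.1f} us")
```

Once you have numbers like these, you can tell whether a proposed fix could even plausibly move them.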