Unix Domain Sockets are usually faster than loopback TCP sockets: a Unix Domain Socket typically has an average latency of about 2 microseconds, while a loopback TCP socket averages about 6 microseconds.
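To get a feel for the difference on your own machine, here is a minimal, self-contained sketch (not part of the original benchmark) that measures round-trip latency over a trivial echo connection for both transports. The socket path /tmp/echo.sock, the round count, and the use of Python are my own assumptions; the absolute numbers include Python's overhead, so treat them as relative rather than absolute.

```python
# Rough latency comparison: loopback TCP vs. Unix domain socket, using a trivial
# echo ping-pong. Purely illustrative; numbers include Python's own overhead.
import os
import socket
import threading
import time

def echo_server(server_sock):
    # Accept one connection and echo everything back until the client disconnects.
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

def measure(client_sock, rounds=50_000):
    # Average microseconds per send/receive round trip.
    start = time.perf_counter()
    for _ in range(rounds):
        client_sock.sendall(b"x")
        client_sock.recv(64)
    return (time.perf_counter() - start) / rounds * 1e6

def bench_tcp():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    cli.connect(srv.getsockname())
    return measure(cli)

def bench_uds(path="/tmp/echo.sock"):  # assumed path, adjust as needed
    if os.path.exists(path):
        os.unlink(path)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(path)
    return measure(cli)

print(f"TCP loopback      : {bench_tcp():.1f} us per round trip")
print(f"Unix domain socket: {bench_uds():.1f} us per round trip")
```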
If I run redis-benchmark with default settings (no pipelining), I see about 160 thousand requests per second, mainly because the single-threaded Redis server is limited by the TCP socket: 160k requests per second works out to an average response time of roughly 6 microseconds per request.
Using Unix Domain Sockets, Redis reaches 320K SET/GET requests per second.
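If you want to reproduce the unpipelined comparison without redis-benchmark, a client-side sketch along these lines works too. It assumes Redis is listening on the default TCP port and that the unixsocket option in redis.conf points to /tmp/redis.sock (both are assumptions, adjust for your setup); a single Python client adds its own overhead, so the absolute numbers will be well below redis-benchmark's, but the TCP-versus-UDS gap should still show.

```python
# Illustrative sketch: measure unpipelined GET throughput over TCP loopback and
# over a Unix domain socket with redis-py. Requires: pip install redis.
import time
import redis

def throughput(client, n=50_000):
    client.set("bench:key", "value")
    start = time.perf_counter()
    for _ in range(n):
        client.get("bench:key")  # one full round trip per request
    return n / (time.perf_counter() - start)

tcp_client = redis.Redis(host="127.0.0.1", port=6379)
uds_client = redis.Redis(unix_socket_path="/tmp/redis.sock")  # assumed socket path

print(f"GET over TCP loopback      : {throughput(tcp_client):,.0f} req/s")
print(f"GET over Unix domain socket: {throughput(uds_client):,.0f} req/s")
```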
But there is a limit, and at Torusware we actually measured it with our Speedus product, a high-performance TCP socket implementation with an average latency of 200 nanoseconds (ping us at info@torusware.com to ask about the Extreme Performance version). With transport latency close to zero, redis-benchmark reaches about 500 thousand requests per second, so we can say that the Redis server itself spends on average about 2 microseconds per request (1 / 500,000 ≈ 2 µs).
If you want to respond as quickly as possible and your load is below the maximum throughput of the Redis server, it is better to avoid pipelining. However, if you need to handle higher throughput, you can pipeline requests: each response takes a little longer, but you can serve more requests on the same hardware.
Thus, in the previous scenario, with a pipeline of 32 requests (buffering 32 commands before actually sending them through the socket), you can process up to 1 million requests per second over the loopback interface. In this case the advantage of UDS is not as large, mainly because processing the pipelined commands, rather than the transport, becomes the performance bottleneck. In fact, 1M requests per second with a pipeline of 32 is only about 31 thousand "actual" socket round trips per second, and we already saw that the Redis server can handle 160 thousand unpipelined requests per second.
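For readers unfamiliar with Redis pipelining, here is a minimal sketch of what "a pipeline of 32 requests" means on the client side, using redis-py (my choice of client, not something the benchmark itself uses): the 32 commands are buffered locally, flushed to the socket together, and all 32 replies are read back in one go.

```python
# Minimal pipelining sketch with redis-py: 32 commands share a single socket
# round trip instead of paying one round trip each.
import redis

r = redis.Redis(host="127.0.0.1", port=6379)  # assumes a local Redis on the default port

pipe = r.pipeline(transaction=False)  # plain pipelining, no MULTI/EXEC wrapper
for i in range(32):
    pipe.set(f"key:{i}", i)           # buffered client-side, nothing sent yet
replies = pipe.execute()              # one write, one read, 32 replies
print(replies)                        # [True, True, ..., True]
```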
With pipelining, Unix Domain Sockets handle around 1.1M SET and 1.7M GET requests per second, while TCP loopback handles about 1M SET and 1.5M GET requests per second.
With pipelining, the bottleneck moves from the transport protocol to the processing of the pipelined commands.
This is consistent with the figures mentioned in the redis-benchmark documentation.
However, pipelining significantly increases response time. Without pipelining, 100% of operations typically complete in less than 1 millisecond. With a pipeline of 32 requests, the maximum response time is around 4 milliseconds on a high-performance server, and tens of milliseconds if the Redis server is running on another machine or in a virtual machine.
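The latency side of that trade-off is easy to observe directly. The sketch below (again using redis-py against an assumed local Redis on the default port, with Python client overhead included) compares the total time of 32 individual SETs, where each reply arrives after one short round trip, with the time of one 32-command pipeline, where every reply has to wait for the whole batch.

```python
# Illustrative latency comparison: 32 individual SETs vs. one 32-command pipeline.
import time
import redis

r = redis.Redis(host="127.0.0.1", port=6379)  # assumed local Redis

# Unpipelined: each SET is a full round trip, each reply comes back quickly.
t0 = time.perf_counter()
for i in range(32):
    r.set(f"k:{i}", i)
unpipelined = time.perf_counter() - t0

# Pipelined: the 32 SETs share one round trip, but no reply arrives before the
# whole batch has been sent and processed.
pipe = r.pipeline(transaction=False)
for i in range(32):
    pipe.set(f"k:{i}", i)
t0 = time.perf_counter()
pipe.execute()
pipelined = time.perf_counter() - t0

print(f"32 individual SETs : {unpipelined * 1e3:.2f} ms total, ~{unpipelined / 32 * 1e3:.3f} ms per reply")
print(f"one 32-SET pipeline: {pipelined * 1e3:.2f} ms total, every reply waits for the batch")
```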
So you have to trade response time against maximum throughput.