ZeroMQ: using the Majordomo broker with asynchronous clients

While reading the ZeroMQ guide, I came across client code that sends 100k requests in a loop and then receives the replies in a second loop.

#include "../include/mdp.h" #include <time.h> int main (int argc, char *argv []) { int verbose = (argc > 1 && streq (argv [1], "-v")); mdp_client_t *session = mdp_client_new ("tcp://localhost:5555", verbose); int count; for (count = 0; count < 100000; count++) { zmsg_t *request = zmsg_new (); zmsg_pushstr (request, "Hello world"); mdp_client_send (session, "echo", &request); } printf("sent all\n"); for (count = 0; count < 100000; count++) { zmsg_t *reply = mdp_client_recv (session,NULL,NULL); if (reply) zmsg_destroy (&reply); else break; // Interrupted by Ctrl-C printf("reply received:%d\n", count); } printf ("%d replies received\n", count); mdp_client_destroy (&session); return 0; } 

I added a counter for the number of replies that the worker (test_worker.c) sends to the broker, and another counter in mdp_broker.c for the number of replies the broker sends to the client. Both of them count up to 100k, but the client receives only about 37k replies.

If the number of client requests is reduced to about 40k, the client receives all the replies. Can someone please tell me why messages are lost when a client sends more than 40k asynchronous requests?

I tried setting the HWM to 100k on the broker socket, but the problem persists:

    static broker_t *
    s_broker_new (int verbose)
    {
        broker_t *self = (broker_t *) zmalloc (sizeof (broker_t));
        int hwm = 100000;           //  ZMQ_SNDHWM / ZMQ_RCVHWM take an int in ZeroMQ 3.x+,
                                    //  not an int64_t, or zmq_setsockopt fails with EINVAL

        //  Initialize broker state
        self->ctx = zctx_new ();
        self->socket = zsocket_new (self->ctx, ZMQ_ROUTER);
        zmq_setsockopt (self->socket, ZMQ_SNDHWM, &hwm, sizeof (hwm));
        zmq_setsockopt (self->socket, ZMQ_RCVHWM, &hwm, sizeof (hwm));
        self->verbose = verbose;
        self->services = zhash_new ();
        self->workers = zhash_new ();
        self->waiting = zlist_new ();
        self->heartbeat_at = zclock_time () + HEARTBEAT_INTERVAL;
        return self;
    }
+8
distributed-computing zeromq
3 answers

Without configuring the HWM and with the default TCP settings, message loss already occurred at only 50k messages.

The following steps helped reduce message loss at the broker:

  • Setting the HWM on the ZeroMQ sockets.
  • Increasing the TCP send / receive buffer sizes (a minimal sketch of both settings follows this list).
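
As a rough illustration, here is a minimal sketch of applying both settings to a broker-side ROUTER socket before binding it. It assumes libzmq 4.x and plain libzmq calls rather than the CZMQ wrappers; the values are illustrative rather than tuned recommendations, and on Linux the kernel may additionally cap the TCP buffers at net.core.rmem_max / wmem_max:

    #include <zmq.h>
    #include <assert.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();
        void *router = zmq_socket (ctx, ZMQ_ROUTER);

        int hwm = 100000;           //  ZMQ_SNDHWM / ZMQ_RCVHWM take an int in 3.x+
        int buf = 1024 * 1024;      //  kernel send / receive buffer size, in bytes
        zmq_setsockopt (router, ZMQ_SNDHWM, &hwm, sizeof (hwm));
        zmq_setsockopt (router, ZMQ_RCVHWM, &hwm, sizeof (hwm));
        zmq_setsockopt (router, ZMQ_SNDBUF, &buf, sizeof (buf));
        zmq_setsockopt (router, ZMQ_RCVBUF, &buf, sizeof (buf));

        assert (zmq_bind (router, "tcp://*:5555") == 0);

        zmq_close (router);
        zmq_ctx_term (ctx);
        return 0;
    }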

This only helped up to a point. With two clients, each sending 100k messages, the broker coped perfectly. But when the number of clients was increased to three, they stopped receiving all the replies.

What finally helped me avoid message loss was changing the design of the client code as follows:

  • A client can have at most N messages in flight at a time. The client's RCVHWM and the broker's SNDHWM must be high enough to hold N messages in total.
  • After that, for every reply the client receives, it sends two requests (a sketch of this windowed design follows this list).
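
A minimal sketch of this windowed design, assuming the same asynchronous mdp_client API used in the question (mdp_client_send / mdp_client_recv) and a hypothetical WINDOW constant; here the pipeline is simply topped up with one new request per reply, whereas the answer above refills it with up to two requests per reply:

    #include "../include/mdp.h"

    #define TOTAL  100000
    #define WINDOW 1000             //  must stay below client RCVHWM and broker SNDHWM

    static void s_send_one (mdp_client_t *session)
    {
        zmsg_t *request = zmsg_new ();
        zmsg_pushstr (request, "Hello world");
        mdp_client_send (session, "echo", &request);
    }

    int main (void)
    {
        mdp_client_t *session = mdp_client_new ("tcp://localhost:5555", 0);
        int sent = 0, received = 0;

        //  Prime the pipeline with at most WINDOW requests
        while (sent < WINDOW && sent < TOTAL) {
            s_send_one (session);
            sent++;
        }
        //  For every reply, send the next request to keep the window full
        while (received < TOTAL) {
            zmsg_t *reply = mdp_client_recv (session, NULL, NULL);
            if (!reply)
                break;              //  Interrupted by Ctrl-C
            zmsg_destroy (&reply);
            received++;
            if (sent < TOTAL) {
                s_send_one (session);
                sent++;
            }
        }
        printf ("%d replies received\n", received);
        mdp_client_destroy (&session);
        return 0;
    }
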
+3

You send 100k messages and only then start receiving them, so up to 100k messages have to be held in buffers. When the buffer capacity is exhausted and no more messages can be stored, you hit ZeroMQ's high water mark. The behaviour at the high water mark is specified in the ZeroMQ documentation.

In the case of the above code, the broker may drop some messages, since the Majordomo broker uses a ROUTER socket. One solution would be to split the send / receive directions into separate threads.

+1

Why are the messages lost?

In ZeroMQ v2.1, the default value of ZMQ_HWM was INF (infinite), which let the quoted test appear somewhat meaningful, but at a high risk of an out-of-memory crash, because the buffer allocation policy was not limited or controlled by anything short of the machine's physical memory.

Since ZeroMQ v3.0+, ZMQ_SNDHWM / ZMQ_RCVHWM default to 1000, and can be set to other values.
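
A quick way to confirm this on your build is to read the option back from a freshly created socket; a minimal sketch, assuming libzmq 4.x (the socket type here is arbitrary):

    #include <zmq.h>
    #include <stdio.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();
        void *s = zmq_socket (ctx, ZMQ_DEALER);

        int hwm = 0;
        size_t len = sizeof (hwm);
        zmq_getsockopt (s, ZMQ_SNDHWM, &hwm, &len);
        printf ("default ZMQ_SNDHWM = %d\n", hwm);   //  prints 1000 on v3.x/v4.x

        hwm = 100000;                                //  override before connect/bind
        zmq_setsockopt (s, ZMQ_SNDHWM, &hwm, sizeof (hwm));

        zmq_close (s);
        zmq_ctx_term (ctx);
        return 0;
    }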

You can also read the explicit warning that

ØMQ does not guarantee that the socket will accept as many as ZMQ_SNDHWM messages, and the actual limit may be as much as 60-70% lower depending on the flow of messages on the socket.

Would splitting the send / receive parts into separate threads help?

No.

Quick fix?

Yes, for demonstration experiments you can set infinite high water marks again, but be careful to avoid this practice in any production-grade software.
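
For completeness, a sketch of that quick fix, assuming libzmq 3.x/4.x where an HWM of 0 means "no limit"; demo use only, since without an HWM the process can queue messages until it exhausts memory:

    #include <zmq.h>

    //  Remove the high water marks on a socket (call before connect/bind).
    static void s_set_unlimited_hwm (void *socket)
    {
        int unlimited = 0;          //  0 == no limit in ZeroMQ 3.x+
        zmq_setsockopt (socket, ZMQ_SNDHWM, &unlimited, sizeof (unlimited));
        zmq_setsockopt (socket, ZMQ_RCVHWM, &unlimited, sizeof (unlimited));
    }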

Why test the performance of ZeroMQ this way?

As noted above, the original demo test only made some sense under the v2.1 implementation.

ZeroMQ has changed a lot since those days. A very enjoyable read for your particular interest in performance envelopes, which may help you understand this domain, is the walkthrough with code examples in the ZeroMQ guide's evaluation of protocol overhead / performance for large file transfers:

... we already have a problem: if we send too much data to the ROUTER socket, we can easily overflow it. The simple but stupid solution is to put an infinite high-water mark on the socket. It's stupid because we now have no protection against exhausting the server's memory. Yet without an infinite HWM, we risk losing chunks of large files.

Try this: set the HWM to 1,000 (in ZeroMQ v3.x this is the default) and then reduce the chunk size to 100K so we send 10K chunks in one go. Run the test, and you'll see it never finishes. As the zmq_socket() man page says with cheerful brutality, for the ROUTER socket: "ZMQ_HWM option action: Drop".

We have to control the amount of data the server sends up-front. There's no point in sending more than the network can handle. Let's try sending one chunk at a time. In this version of the protocol, the client will explicitly say, "Give me chunk N", and the server will fetch that specific chunk from disk and send it.

The best part, to my knowledge, is the commented progression of the resulting performance towards the "model 3" flow control, and you can learn a lot from the wonderful chapters and real-world remarks in the ZeroMQ guide.

+1
