Yes, as it stands this does not seem to be a very powerful use of ZeroMQ's capabilities. The good news is that you can re-factor the solution to bring it closer to best practices.
MOTIVATION
ZeroMQ can be a very powerful and very smart tool for designing, building and managing distributed systems. There are many resources published on the best practices of ZeroMQ system design.
Lightweight does not mean a silver bullet or a perpetual-motion machine with zero overhead.
ZeroMQ still consumes additional resources on the target ecosystem. Especially on hosts with minimal resources (hidden hypervisor restrictions on some single-vCPU / few-vCPU-core VPS systems, to name just one vivid example), you have to weigh the benefits against the costs of concurrency: each Context() instance consumes (1+) additional ZeroMQ I/O threads.
Exception handling, or rather preventing exceptions and preventing blocking, is the alpha / omega of a production-grade, continuous, distributed processing system. The experience may be bitter at times, but you will learn a lot about the craft of developing software with ZeroMQ. One such lesson to learn is resource management and graceful termination: each process is responsible for freeing all resources it has allocated, so any port held by a respective .bind() should be systematically and explicitly released.
(Plus, you will soon realize that releasing a port is not instantaneous, due to operating-system overheads that are outside your code's control, so do not rely on a just-released port being immediately ready for the next .bind() to re-use (you can find many posts here on ports blocked this way).)
Resource Envelope Facts [FIRST]:
While quantitative data on the processing-performance / resource-use envelopes are not available yet, the images below help show why such knowledge is important.

[image] vCPU-workload envelope after markets started the next 24/5 week on Sunday 22:00 GMT+0000

[image] The same, with +55% more CPU-power available: vCPU-workload and other resource-use envelopes

Cron Queues and a Relative-Priority Hack [NEXT]:
Without detailed information on whether the 75-minute WORK-UNIT downtime is CPU-bound or I/O-bound, a system-configuration change can lower the relative priority of the cron jobs, so that system performance stays "focused" on the principal tasks during peak hours. It is possible to create a separate queue with an adapted nice priority. A fine trick for this was introduced by @Pederabo:
cron normally runs with nice 2, but this is controlled by the queuedefs file. Queues a, b and c are for at, batch and cron respectively.
- you should be able to add a line for a new queue, say Z, which defines a special queue and sets the nice value to zero.
- you should then be able to run the script from the console with at -q Z ...
- if that works well, put the at command into the crontab.
The at command itself will run with cron's default priority, but it only takes a few seconds; the job it creates will then run with whatever you set in the queuedefs file for queue Z.
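A sketch of the trick, assuming the classic queuedefs(4) syntax (the exact file location and field format differ per OS, and the paths below are placeholders, so treat every value here as an assumption to verify on your system):

```shell
# queuedefs entry (classic format: queue.[Njobs]j[NICE]n[Nwait]w):
#   Z.4j0n        -> queue Z: at most 4 concurrent jobs, nice value 0
#
# 1) test from a console first:
#      echo "/path/to/work_unit.sh" | at -q Z now
# 2) if it behaves well, move the at command into crontab:
#      0 22 * * 0  echo "/path/to/work_unit.sh" | at -q Z now
```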
Avoid unnecessary overhead [ALWAYS]:
There is always a reason not to waste CPU-clocks, especially on minimalist systems. Using the tcp:// transport class on the very same localhost may be acceptable as a PoC practice during the prototyping phase, but it should never go into a 24/7 production phase. Avoid all services that are never used: why climb up to L3, consuming even more operating-system resources (ZeroMQ is not zero-copy at this phase, so duplicate buffer allocations appear here), when you deliver only within the same localhost? The ipc:// and inproc:// transport classes are much better suited to this modus operandi (see also the remark below for the case of truly distributed processing).
The Main Point (re-designing the processing with ZeroMQ tools)
Based on this high-level description of the intent, there seems to be a way to avoid the cron mechanism altogether and let the whole fetch / distribute / collect pipeline become a continuous, distributed ZeroMQ processing system, alongside which you can build a standalone CLI (a keyboard terminal for ad-hoc interaction with the continuous processing system), so as to:
- remove the dependency on operating-system functions / limitations.
- reduce the overall overhead of repeatedly spawning and servicing processes at the system level.
- share one central Context() (thus paying the minimum cost of just one additional I/O thread), because the processing does not seem to be messaging-intensive or ultra-low-latency sensitive.
Your ZeroMQ ecosystem can later help you add a scalable, or even adaptively scaling, capacity feature, because distributed processing does not limit you to your VPS localhost device (in case your VPS hypervisor restrictions do not allow such processing to keep pace with your 24/7 stream of WORK-UNIT-s).
Just changing the respective transport class from ipc:// to tcp:// lets you distribute tasks (WORK-UNIT-s) literally around the globe to any processing node you care to "plug in" to increase your computing power, all without a single SLOC of source-code change.

Worth reviewing your design strategy once more, isn't it?