Reliable / persistent outbound sockets: what are the options?

I have a Scala application that maintains (or tries to maintain) TCP connections to various servers for several hours (possibly more than 24) at a time. Each server sends a short message of roughly 30 characters about twice a second. These messages are handed off to be parsed and ultimately trigger state changes in a database.

If any of these connections fails for any reason, my application should keep trying to reconnect unless I tell it otherwise. Losing messages is bad. I do not control the servers I connect to, or the protocols they use.

You can assume there will be around 300 such connections at once. This is definitely not a high-load scenario, so I don't think NIO is required, although it might be nice to have? Other parts of the application are under high load.

I am looking for some kind of connection/socket manager that can keep these connections up as reliably as possible. Right now I am rolling my own blocking controller, but since I am inexperienced with socket programming (and all the various settings, options, timeouts, and so on), I doubt it will achieve the best possible uptime. I may also need SSL support at some point down the road.

Will NIO offer any real benefits?

Would Netty be the best choice here? I saw its uptime example and considered simply adapting it, but as a newcomer to lower-level networking I was not sure whether there were better options.
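For concreteness, the kind of hand-rolled blocking controller mentioned above might look like the sketch below. The host, port, timeout values, and handler are all placeholders, not from any particular library; it covers the basic socket settings in question (connect timeout, read timeout, keep-alive, capped exponential backoff on reconnect) but makes no attempt to recover messages lost while disconnected.

```scala
import java.io.{BufferedReader, InputStreamReader}
import java.net.{InetSocketAddress, Socket}

// Hypothetical blocking connection controller: one thread per server,
// reconnecting forever with capped exponential backoff.
class PersistentReader(host: String, port: Int, handle: String => Unit) extends Runnable {
  @volatile var running = true

  def run(): Unit = {
    var backoffMs = 500L
    while (running) {
      try {
        val socket = new Socket()
        socket.setKeepAlive(true)       // eventually detect dead peers
        socket.setSoTimeout(5000)       // fail reads that stall too long
        socket.connect(new InetSocketAddress(host, port), 5000)
        val in = new BufferedReader(new InputStreamReader(socket.getInputStream))
        backoffMs = 500L                // reset backoff after a successful connect
        var line = in.readLine()
        while (running && line != null) {
          handle(line)                  // hand the message off for parsing
          line = in.readLine()          // null means the peer closed the stream
        }
        socket.close()
      } catch {
        case _: java.io.IOException =>  // refused, reset, read timeout, ...
          Thread.sleep(backoffMs)
          backoffMs = math.min(backoffMs * 2, 30000L) // cap the backoff at 30s
      }
    }
  }
}
```

With ~300 connections this means ~300 mostly-blocked threads, which a JVM handles comfortably at that scale; NIO or Netty would trade those threads for callbacks rather than change what is fundamentally possible here.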

1 answer

However, I am unsure of the best strategies for losing as few packets as possible, and assumed this would be a "solved" problem in some library.

It is. JMS, for example.

I take it many of them come down to a timeout-guessing strategy? Close and reopen the socket too soon and you have lost whatever packets were in flight.

Correct. That approach will not be reliable, especially if connections are constantly bouncing up and down.

The real solution requires the other end to keep track of what it has received, and to tell the sender when the connection is re-established. If that cannot be done, you have no real way of controlling how much is lost. (This is what reliable messaging services do...)
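What such reliable messaging layers do can be sketched in miniature: the sender keeps every message in an unacknowledged buffer, the receiver acknowledges by sequence number, and after a reconnect the sender replays everything past the last acknowledged number. The names below are illustrative, not from JMS or any other library, and this only shows the sender-side bookkeeping; the point is that the receiving end must cooperate by sending acks back.

```scala
import scala.collection.mutable

// Illustrative sender-side bookkeeping for at-least-once delivery.
class AckingSender {
  private var nextSeq = 0L
  private val unacked = mutable.LinkedHashMap[Long, String]()

  // Assign a sequence number and hold the message until it is acked.
  def send(msg: String): Long = {
    val seq = nextSeq
    nextSeq += 1
    unacked(seq) = msg
    seq
  }

  // Peer confirms everything up to and including `seq`.
  def ack(seq: Long): Unit =
    unacked.keys.filter(_ <= seq).toList.foreach(unacked.remove)

  // After a reconnect, replay whatever was never acknowledged.
  def pendingReplay: List[(Long, String)] = unacked.toList
}
```

Without this cooperation from the far end, a bare TCP stream gives the sender no way to know which of the in-flight messages actually arrived before the connection dropped.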

I have no control over the servers I connect to, so unless there is some way of adapting JMS to a plain TCP stream, I do not think it will work.

Right. And the same applies if you try to implement this by hand: the other end has to cooperate.

I suppose you could build something where you run (say) a JMS endpoint on each remote server, and have the endpoint use UNIX domain sockets or loopback (i.e. 127.0.0.1) to talk to the server. But you would still have the potential for lost messages.
