How to schedule global work in Akka across multiple processes?

I use Akka in the Play Framework, instead of Jobs, to schedule code to run every X seconds. I have something like a cluster (it runs on Heroku, currently on 1 dyno, but at times there can be several instances running simultaneously).

Is there an easy way to have the "work" execute every N seconds globally, i.e. once across the entire cluster? I know that Quartz, for example, supports out-of-process storage/synchronization mechanisms such as a database. Can I use something like that from Scala?

This is the actor setup, run at Play startup:

    import scala.concurrent.duration._
    import akka.actor.Props
    import play.api.{Application, GlobalSettings}
    import play.api.libs.concurrent.Akka
    import play.api.libs.concurrent.Execution.Implicits._
    import play.api.Play.current

    object Global extends GlobalSettings {
      override def onStart(app: Application) {
        val monitorActor = Akka.system.actorOf(Props[MonitorLoadJob], name = "monitorLoad")
        Akka.system.scheduler.schedule(0.seconds, 10.seconds, monitorActor, Tick)
      }
    }
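For context, the `MonitorLoadJob` actor and `Tick` message referenced above are not shown in the question; a minimal sketch of what they might look like (the actual actor body is an assumption):

```scala
import akka.actor.Actor

// The message the scheduler sends every 10 seconds (assumed shape).
case object Tick

// A minimal sketch of the actor that does the periodic work.
class MonitorLoadJob extends Actor {
  def receive = {
    case Tick =>
      // Do the periodic "work" here, e.g. sample and report load metrics.
      println("Tick received, monitoring load")
  }
}
```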
3 answers

Check out the ClusterSingletonManager.

For some use cases it is convenient, and sometimes also mandatory, to ensure that you have exactly one actor of a certain type running somewhere in the cluster.

Some examples:

  • a single point of responsibility for certain cluster-wide consistent decisions, or coordination of actions across the cluster system.

This requires running Akka Cluster, but it is made for exactly this kind of scenario.
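A minimal sketch of wiring the scheduled actor through a ClusterSingletonManager, using the `akka-cluster-tools` module (`MonitorLoadJob` and `Tick` are the names from the question; the rest is illustrative). The manager guarantees at most one instance in the cluster, and the proxy routes messages to wherever that instance currently lives:

```scala
import scala.concurrent.duration._
import akka.actor.{Actor, ActorSystem, PoisonPill, Props}
import akka.cluster.singleton.{
  ClusterSingletonManager, ClusterSingletonManagerSettings,
  ClusterSingletonProxy, ClusterSingletonProxySettings}

case object Tick

class MonitorLoadJob extends Actor {
  def receive = {
    case Tick => println("doing the cluster-wide periodic work")
  }
}

object SingletonScheduler {
  def start(system: ActorSystem): Unit = {
    // Exactly one MonitorLoadJob runs on the oldest cluster node;
    // if that node dies, it is recreated on another node.
    system.actorOf(
      ClusterSingletonManager.props(
        singletonProps = Props[MonitorLoadJob],
        terminationMessage = PoisonPill,
        settings = ClusterSingletonManagerSettings(system)),
      name = "monitorLoadSingleton")

    // A proxy that always points at the current singleton instance.
    val proxy = system.actorOf(
      ClusterSingletonProxy.props(
        singletonManagerPath = "/user/monitorLoadSingleton",
        settings = ClusterSingletonProxySettings(system)),
      name = "monitorLoadProxy")

    import system.dispatcher
    // Every node may run this schedule; only the single instance receives the Ticks.
    system.scheduler.schedule(0.seconds, 10.seconds, proxy, Tick)
  }
}
```

Note that with this setup every node sends Ticks through its proxy, so the singleton receives one Tick per node per interval; in practice you would either run the schedule on the singleton itself or deduplicate.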


One possible way would be to run an actor on each node that waits for a notification message to start the task, and have a single scheduler send messages to those actors.


Or you could use a worker dyno dedicated to your tasks, using a standard main method to schedule the work with Akka.

You can check this link (it is for Java, but you will get the idea for Scala).
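A minimal sketch of that idea in Scala: a plain main method running on a dedicated worker dyno, with no Play involved (names and the Procfile entry are illustrative):

```scala
import scala.concurrent.duration._
import akka.actor.{Actor, ActorSystem, Props}

case object Tick

class WorkActor extends Actor {
  def receive = {
    case Tick => println("doing the scheduled work")
  }
}

// Run this as the worker dyno's process, e.g. via a Procfile `worker:` entry,
// so only this single process ever runs the schedule.
object Worker extends App {
  val system = ActorSystem("worker")
  val worker = system.actorOf(Props[WorkActor], name = "work")
  import system.dispatcher
  system.scheduler.schedule(0.seconds, 10.seconds, worker, Tick)
}
```

Since web dynos scale independently of the worker dyno, the schedule stays global as long as you run exactly one worker.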

