The general scheme is to use separate execution contexts: one for accessing the synchronous database via JDBC, one for your reactive processing. See also the Akka futures documentation.
When you create an actor system, it creates its own execution context; this is the one you use for your usual reactive processing with actors. You need to create a second execution context for the JDBC calls and pass it to the future factory, as shown in the Akka documentation.
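A minimal sketch of that setup, using a dedicated Akka dispatcher as the second execution context. The dispatcher name `jdbc-dispatcher`, the pool size, and `findUserById` are placeholder assumptions you would adapt to your own application:

```scala
import akka.actor.ActorSystem
import scala.concurrent.{ExecutionContext, Future}

object JdbcContexts {
  // In application.conf, a dedicated bounded dispatcher for blocking JDBC work:
  //
  // jdbc-dispatcher {
  //   type = Dispatcher
  //   executor = "thread-pool-executor"
  //   thread-pool-executor { fixed-pool-size = 10 }
  // }

  val system: ActorSystem = ActorSystem("app")

  // The second execution context, looked up by name and passed explicitly to
  // the Future factory so the blocking call never runs on the actor system's
  // default dispatcher.
  val jdbcEc: ExecutionContext = system.dispatchers.lookup("jdbc-dispatcher")

  def findUserById(id: Long): String = ??? // stand-in for a real JDBC query

  def loadUser(id: Long): Future[String] =
    Future(findUserById(id))(jdbcEc)
}
```

Sizing the pool to roughly match your JDBC connection pool keeps blocked threads bounded without idle capacity.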
To be notified when the future completes, you can (optionally) use the pipe pattern (shown at the same link, one section earlier in the documentation). The pipe pattern takes the value the future completes with, whose type is the future's type parameter (for example, the result of your query), and sends it as a message to the specified actor's mailbox.
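A minimal sketch of the pipe pattern with a classic Akka actor; `findUserById` again stands in for the real JDBC query:

```scala
import akka.actor.Actor
import akka.pattern.pipe
import scala.concurrent.{ExecutionContext, Future}

class UserLookupActor(jdbcEc: ExecutionContext) extends Actor {
  import context.dispatcher // implicit context used by pipeTo itself

  private def findUserById(id: Long): String = ??? // stand-in JDBC query

  def receive: Receive = {
    case id: Long =>
      // Run the blocking query on the JDBC context, then pipe the completed
      // value back into this actor's mailbox as an ordinary message.
      Future(findUserById(id))(jdbcEc).pipeTo(self)

    case user: String =>
      // The future's result arrives here like any other message.
      println(s"loaded: $user")
  }
}
```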
The code executed by the future must not modify, or even read, any mutable state belonging to the initiating actor (or any actor, for that matter). You will want to tag the future's result so that, when it arrives in the actor's mailbox, the actor can associate it with the original JDBC request. Your actor then receives the result as an ordinary message and can continue processing it (subject to Akka's at-most-once message delivery guarantee).
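One way to do that tagging is to wrap the result in a message that carries the originating request's id. A sketch, where `RunQuery`, `QueryResult`, and `runQuery` are illustrative names rather than any Akka API:

```scala
import akka.actor.Actor
import akka.pattern.pipe
import scala.concurrent.{ExecutionContext, Future}

// The result message carries the id of the request that produced it, so the
// actor can match each reply to its original JDBC request.
final case class RunQuery(requestId: Long, sql: String)
final case class QueryResult(requestId: Long, rows: List[String])

class DbActor(jdbcEc: ExecutionContext) extends Actor {
  import context.dispatcher // implicit context used by pipeTo itself

  private def runQuery(sql: String): List[String] = ??? // stand-in JDBC call

  def receive: Receive = {
    case RunQuery(id, sql) =>
      // The future closes over immutable values only (id, sql), never over
      // the actor's mutable state, and tags its result with the request id.
      Future(QueryResult(id, runQuery(sql)))(jdbcEc).pipeTo(self)

    case QueryResult(id, rows) =>
      // Back in the actor's mailbox: safe to touch actor state again.
      println(s"request $id returned ${rows.size} rows")
  }
}
```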
Note that you do not strictly need two execution contexts: a single one will work, but you then run the risk that your database queries consume all available threads in that context and starve your actors.