In fact, nothing works well for low-latency, high-bandwidth applications such as real-time online games, at least not as generic, off-the-shelf middleware.
The Darkstar project made a valiant attempt at this convenience-versus-complexity trade-off, but found (not surprisingly) that it did not scale.
Ultimately, this is a hard (though not intractable) problem, and there is no solution that is anywhere near universally applicable. In particular, you usually face a trade-off between acting on stale game data on the one hand, and the need to constantly exchange shared data on the other. Loss of correctness or an exponential growth in complexity... pick your poison.
It is worth noting - especially if your application domain is not real-time games - that you often don't mind working with stale data, as long as it is recent enough. In such cases, simple caching systems like memcached work great. Similarly, if you need stronger guarantees but don't have extreme throughput requirements, something like Hazelcast (mentioned in another answer) may be great; but for most online games large enough to need load balancing, "thousands of operations/sec" simply isn't enough.
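To make the "stale is fine if it's recent enough" point concrete, here is a minimal sketch of the caching idea in Python. The class name, TTL value, and the injectable clock are all invented for illustration; a real system would use memcached itself rather than an in-process dict.

```python
import time

class TTLCache:
    """Toy illustration of the memcached idea: reads may return stale
    values, but never older than the staleness we agreed to tolerate."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self._store = {}            # key -> (value, stored_at)

    def put(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key, load):
        """Return the cached value if it is fresh enough; otherwise call
        load() (standing in for a hit on the authoritative store)."""
        entry = self._store.get(key)
        if entry is not None:
            value, stored_at = entry
            if self.clock() - stored_at <= self.ttl:
                return value        # possibly stale, but recent enough
        value = load()
        self.put(key, value)
        return value
```

The trade-off is visible in the TTL: a larger value means fewer hits on the authoritative store but a longer window in which players act on outdated data.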
Some MMO technologies attempt to distribute the application by partitioning it geographically, which means there really isn't much shared state at all, but this requires that the partitioning scheme make sense within the game world and its fiction.
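A sketch of what geographic partitioning means in practice: cut the world into a grid of zones and make one server the owner of each zone, so that nearby players (who interact most) land on the same server and little state is shared across servers. The zone size, function names, and assignment scheme here are assumptions for illustration, not any particular engine's API.

```python
ZONE_SIZE = 100  # world units per zone edge (arbitrary assumption)

def zone_of(x, y):
    """Map world coordinates to a discrete zone id."""
    return (int(x // ZONE_SIZE), int(y // ZONE_SIZE))

def server_for(zone, servers):
    """Deterministically assign each zone to exactly one server,
    so that zone's state has a single owner."""
    return servers[hash(zone) % len(servers)]
```

Two players standing near each other resolve to the same zone, hence the same server, and their interaction needs no cross-server coordination; the hard cases are entities crossing zone boundaries.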
Another approach is to partition by service and implement most services with your favorite off-the-shelf RPC approach. This scales fairly easily if your services are independent, but any dependencies between services put you right back at square one.
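The dependency problem can be sketched like this. Both services and their methods are invented names; in a real deployment the `inventory` reference would be an RPC stub rather than a local object, which is exactly where the trouble starts.

```python
class InventoryService:
    """Owns all inventory state; scales independently on its own."""

    def __init__(self):
        self._items = {}  # player -> set of item names

    def grant(self, player, item):
        self._items.setdefault(player, set()).add(item)

    def take(self, player, item):
        """Remove the item if the player owns it; report success."""
        owned = self._items.get(player, set())
        if item in owned:
            owned.remove(item)
            return True
        return False


class TradeService:
    """A second service that depends on InventoryService: every trade
    requires a round trip to it, reintroducing the coupling that
    service partitioning was supposed to remove."""

    def __init__(self, inventory):
        self.inventory = inventory  # stand-in for an RPC stub

    def trade(self, seller, buyer, item):
        # Cross-service dependency: correctness now hinges on these
        # two remote calls behaving atomically, which plain RPC
        # does not guarantee.
        if not self.inventory.take(seller, item):
            return False
        self.inventory.grant(buyer, item)
        return True
```

As long as trades never touch inventory, the two services scale independently; the moment they interact, you need cross-service transactions or compensation logic, i.e. square one.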