I think you need to do one of two things:
- Reduce the number of queries you make.
- Reduce the cost of each query.
As btilly points out, you're probably best off prefetching the related nodes for each visible node, so that when a node is clicked the visualization responds immediately - you don't want request time plus transition time showing up as response lag.
However, if you still feel a pressing need to reduce load, that suggests the requests themselves are too expensive, since the overall load is requestCost * numRequests. Consider pre-computing the set of nodes associated with each node, so that each query becomes a simple read from a cache rather than a full database search. If that sounds complicated, it is essentially what Google does every time you search for something new: they can't scan the Internet each time you start typing, so they do the work ahead of time and cache the results.
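A minimal sketch of the pre-computation idea, assuming an in-memory edge list; the names `edges` and `build_neighbor_cache` are illustrative, not from any particular library:

```python
from collections import defaultdict

def build_neighbor_cache(edges):
    """Precompute the neighbor set of every node once, up front."""
    cache = defaultdict(set)
    for a, b in edges:
        cache[a].add(b)
        cache[b].add(a)  # undirected graph assumed
    return cache

edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
cache = build_neighbor_cache(edges)

# Answering "who is connected to C?" is now a single dictionary lookup,
# not a scan over the whole edge table.
print(sorted(cache["C"]))  # ['A', 'B', 'D']
```

In a real deployment the cache would live in the database (a precomputed adjacency table) or a key-value store, but the shape of the trade is the same: pay the cost once at build time, read cheaply thereafter.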
This may mean some degree of denormalization: once you have both a cache and the underlying data, there is no guarantee the two stay in sync. The real question is how often your data set changes - is it write-once, read-many?
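One simple way to keep a denormalized neighbor cache honest is to stamp it with a version and rebuild only when the source data has changed; if the data is write-once, read-many, the rebuild happens exactly once. This is a hedged sketch, and every name in it (`NeighborCache`, `load_edges`) is illustrative:

```python
class NeighborCache:
    def __init__(self, load_edges):
        # load_edges is a callable returning (version, edge_list)
        self._load_edges = load_edges
        self._version = None
        self._cache = {}

    def neighbors(self, node):
        version, edges = self._load_edges()
        if version != self._version:  # cache is stale: rebuild once
            self._cache = {}
            for a, b in edges:
                self._cache.setdefault(a, set()).add(b)
                self._cache.setdefault(b, set()).add(a)
            self._version = version
        return self._cache.get(node, set())

store = (1, [("A", "B"), ("B", "C")])
cache = NeighborCache(lambda: store)
print(sorted(cache.neighbors("B")))  # ['A', 'C']
```

Checking a version number is far cheaper than recomputing the neighbor sets, so reads stay fast while staleness is bounded to a single stamp comparison.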
To minimize the search space over all these nodes and their relationships, look at how particle-interaction simulations handle the same problem: partition the space so that nodes fall into groups, precompute each group's aggregate neighbors, and store that. Each query then filters a small group instead of scanning the full database. If the query is O(n log n) and you make n a hundred times smaller, it becomes more than 100 times faster.
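The grouping idea can be sketched with a uniform grid, which stands in for whatever spatial index you actually use; 2D node positions and the cell size `CELL` are assumptions for illustration:

```python
from collections import defaultdict

CELL = 100.0  # grid cell size - an assumed tuning parameter

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

def build_grid(nodes):
    """Group nodes by grid cell so queries only touch nearby cells."""
    grid = defaultdict(list)
    for name, (x, y) in nodes.items():
        grid[cell_of(x, y)].append(name)
    return grid

def nearby(grid, x, y):
    """Candidate neighbors within one cell in each direction of (x, y)."""
    cx, cy = cell_of(x, y)
    return [n for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            for n in grid.get((cx + dx, cy + dy), [])]

nodes = {"A": (10, 10), "B": (30, 40), "C": (500, 500)}
grid = build_grid(nodes)
print(sorted(nearby(grid, 20, 20)))  # ['A', 'B']
```

Each lookup inspects at most nine cells, so the work per query scales with local density rather than with the total number of nodes.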