You should only cache when serving a result from the cache is cheaper than generating it from scratch.
This cost depends on such things as:
- the processing power of your various servers and software. You may have limited capacity on your DB server but spare capacity on another server.
- money: is it cheaper to buy more powerful hardware than to build and run a caching system?
- the CPU cost of generating results from scratch versus the operational cost of the cache. DB servers are typically CPU-bound, while cache servers are memory-bound; it is up to you to decide which is cheaper to upgrade in your case.
- the cache retrieval rate versus the DB retrieval rate. If, as you say, the queries are expensive and fetching results from the cache is cheaper, caching will speed up your requests.
- how often your cached items need to be updated. If they are valid for only a few seconds, caching may not be worth the hassle.
- a method for expiring and updating cached items. Cache invalidation is a notoriously difficult problem.
- the technical knowledge and time needed to manage the additional complexity.
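The expiry and invalidation points above are where most of the complexity hides. As a minimal in-process sketch (all class and key names hypothetical), a TTL cache with explicit invalidation might look like this:

```python
import time

class TTLCache:
    """Minimal in-process cache with per-item expiry (illustration only)."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None  # never cached
        value, expires_at = item
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: drop it and report a miss
            return None
        return value

    def invalidate(self, key):
        # Every code path that writes to the DB must also call this,
        # or readers see stale data until the TTL runs out. That
        # bookkeeping is the "additional complexity" mentioned above.
        self._store.pop(key, None)
```

The TTL bounds how stale a value can get; explicit invalidation on writes is what keeps it fresh in between.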
But always start at the source. Have you studied the MySQL slow query log to find out which queries are expensive? It can show you where you are missing important indexes and which queries take unexpectedly long. pt-query-digest from the Percona Toolkit can help by summarizing this log file. Optimize your database before you start caching.
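For reference, enabling the slow query log is a small config change; the log path here is just an example for a typical Linux install:

```ini
# my.cnf — log queries slower than 1 second
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 1
```

After some traffic has accumulated, summarize it with `pt-query-digest /var/log/mysql/mysql-slow.log` to see which queries cost the most in aggregate.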
Looking at your query types, it seems to me that caching results, and even pre-warming the cache, would be worth it.
Choosing a cache matters, of course. I assume you are already using MySQL's built-in query cache? Make sure it is turned on and has enough memory assigned to it. Simple queries such as the "SELECT username" one are cheap anyway, and are also easily cached by MySQL itself. However, the query cache has many limitations, and there are many reasons why queries are not cached or why cached results get invalidated. For example, queries using functions (like your location-based queries) are simply skipped. Read the docs.
Using a cache such as Redis gives you much more control over what you cache, for how long, and how you expire it. There are many ideas on how to implement this, and they depend on your application. Look around the net.
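One common implementation is the cache-aside pattern: check the cache, fall back to the DB on a miss, then populate the cache. A sketch, assuming a redis-py-style client (`get`/`setex`) and a `db_lookup` function of your own (the key format and TTL are arbitrary choices):

```python
import json

def get_user_profile(cache, db_lookup, user_id, ttl_seconds=300):
    """Cache-aside read: try the cache first, fall back to the DB,
    then store the result so the next read is cheap."""
    key = f"user:profile:{user_id}"   # key naming scheme is up to you
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)     # cache hit: no DB query at all
    profile = db_lookup(user_id)      # cache miss: run the expensive query
    cache.setex(key, ttl_seconds, json.dumps(profile))
    return profile
```

The same shape works with Memcached; only the client calls differ. Writes to the profile should delete or overwrite the key, otherwise readers see stale data until the TTL expires.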
I would suggest turning on the query cache, simply because it is easy, cheap, and somewhat helpful, and I would definitely look into implementing an in-memory cache layer for your database. You might also want to look at a search server such as Solr, which has built-in support for location-based queries. We use it alongside MySQL.
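To give an idea of what Solr offers here, its built-in `geofilt` filter does radius search natively. A sketch of building such a query URL (the base URL and the `location` field name are assumptions about your schema):

```python
from urllib.parse import urlencode

def solr_geo_query_url(base_url, lat, lon, radius_km, field="location"):
    """Build a Solr select URL that filters documents to within
    radius_km of the given point, using Solr's geofilt parser."""
    params = {
        "q": "*:*",
        # {!geofilt} needs the spatial field, a center point, and a distance
        "fq": f"{{!geofilt sfield={field} pt={lat},{lon} d={radius_km}}}",
        "wt": "json",
    }
    return f"{base_url}/select?{urlencode(params)}"
```

Compared with doing the haversine math in SQL, the index server answers this from a purpose-built spatial index.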
Memcached and Redis are both good choices for caching. I would personally pick Redis, because it covers more use cases and offers optional persistence to disk, but that is entirely up to you. Your framework of choice may already have components you can use in your application.
One more tip: measure everything. You only know what needs to be optimized or cached once you know where the time goes. Likewise, the results of your optimizations only become clear if you measure again. Add something like statsd and record various counters and timings in your application. Better too much than not enough. Graph the results and analyze them over time. You will be surprised at what turns up.
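Getting timings into statsd takes very little code, because statsd speaks a tiny plain-text protocol over UDP (e.g. `db.query_time:123|ms` for a timing). A stdlib-only sketch (host, port, and metric names are placeholders):

```python
import socket
import time
from contextlib import contextmanager

def send_metric(payload, host="127.0.0.1", port=8125):
    """Fire-and-forget UDP send of one statsd metric line."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload.encode("ascii"), (host, port))
    finally:
        sock.close()

@contextmanager
def timed(metric_name):
    """Measure a block of code and report its duration in milliseconds."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = int((time.perf_counter() - start) * 1000)
        send_metric(f"{metric_name}:{elapsed_ms}|ms")

# usage:
# with timed("db.user_profile_query"):
#     run_expensive_query()
```

Because it is UDP, the send never blocks or fails your request path, which is why it is safe to sprinkle these timers generously.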