Tuning PostgreSQL is about much more than setting up caches. The primary high-level knobs are shared_buffers (think of it as the main cache of table and index data) and work_mem.
shared_buffers serves both reads and writes. You want to give it a decent size, but it applies to the entire cluster: you cannot configure it per table, let alone per query. The important thing is that it does not store query results; it stores pages of tables, indexes, and other data. In an ACID-compliant database, caching query results is neither very efficient nor very useful.
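A minimal sketch of raising it (the 4GB value is purely illustrative; a common starting point is around a quarter of the machine's RAM, but it depends on your workload):

    -- shared_buffers can only be changed cluster-wide and needs a restart
    ALTER SYSTEM SET shared_buffers = '4GB';
    -- equivalently, edit postgresql.conf: shared_buffers = 4GB
    -- then restart the server and verify:
    SHOW shared_buffers;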
"Work_mem" is used to sort the query results in memory, and you do not need to resort to writing to disk. Depending on your request, this area can be as important as the buffer cache, and it is easier to configure. Before executing the query, which should perform a larger sort, you can execute the set command, such as "SET work_mem = '256MB';"
As others have suggested, you can figure out WHY a query is slow using EXPLAIN. I would personally suggest learning to read the "access path" PostgreSQL uses to reach your data. That is more complicated, but frankly a better use of your time than thinking about "caching results".
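A minimal example (EXPLAIN ANALYZE actually executes the query, and BUFFERS shows how much came from shared_buffers versus disk; the query itself is hypothetical):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM orders
    WHERE customer_id = 42;
    -- look for Seq Scan vs. Index Scan nodes, row-estimate mismatches,
    -- and "Sort Method: external merge  Disk: ..." lines that signal spills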
Honestly, the biggest improvements often come from data design, using features such as partitioning, expression (functional) indexes, and similar techniques.
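Hedged sketches of both, with invented table and column names:

    -- expression index: lets WHERE lower(email) = ... use an index
    CREATE INDEX customers_lower_email_idx ON customers (lower(email));

    -- range partitioning (PostgreSQL 10+): queries filtered on logdate
    -- only touch the relevant partition
    CREATE TABLE measurements (
        logdate  date NOT NULL,
        reading  numeric
    ) PARTITION BY RANGE (logdate);

    CREATE TABLE measurements_2024 PARTITION OF measurements
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');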
Another thing is that you can improve performance by writing better queries. Things like WITH clauses (CTEs) can act as optimization fences and keep the planner from fully optimizing a query (before PostgreSQL 12 they were always materialized). The planner itself also has parameters you can adjust so the database spends more (or less) time planning a query before it executes ... which can make a difference.
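A sketch of both ideas (the values are examples, not recommendations):

    -- planner knobs: higher limits let the planner consider more join orders
    -- (more planning time, potentially better plans)
    SET join_collapse_limit = 12;
    SET from_collapse_limit = 12;

    -- PostgreSQL 12+: ask the planner to inline a CTE instead of materializing it
    WITH recent AS NOT MATERIALIZED (
        SELECT * FROM orders WHERE created_at > now() - interval '7 days'
    )
    SELECT count(*) FROM recent;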
You can also write queries in ways that help the optimizer. One of these techniques is using bind variables (PostgreSQL uses $1-style placeholders; the colon syntax is Oracle's): the server then sees the same statement again and again with different values passed in, so the statement's structure does not have to be parsed and planned from scratch each time. Query plans can be cached this way.
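A sketch using PREPARE/EXECUTE, which most client drivers do for you behind the scenes (the table name is hypothetical):

    PREPARE orders_by_customer (int) AS
        SELECT * FROM orders WHERE customer_id = $1;

    EXECUTE orders_by_customer(42);
    EXECUTE orders_by_customer(99);   -- same statement, different value; the plan can be reused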
Without seeing some of your queries, your table and index design, and the EXPLAIN output, it is difficult to give specific recommendations.
In general, you need to find the queries that are not as efficient as you expect and work out where the contention is. It is probably disk access, but the underlying cause is ultimately the most important part ... do I have to go to disk to sort? Did the planner choose a poor access path, so that it reads data that could have been filtered out at an earlier stage of the query? (The pg_stat_statements sketch below is one way to start looking.)

I have been an Oracle Certified Administrator for over 20 years, and PostgreSQL is definitely different; however, many of the same techniques apply when diagnosing query performance problems. Although you really can't provide hints, you can still rewrite queries or tune certain parameters to get better performance ... in general, I have found PostgreSQL easier to tune in the long run.

If you can provide some details, such as a query and its EXPLAIN output, I would be happy to give you specific recommendations. Unfortunately, "cache settings" alone are unlikely to give you the speed you want.
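For finding those expensive queries, a hedged starting point is the pg_stat_statements contrib extension (it must be listed in shared_preload_libraries before it can be created):

    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- the ten queries consuming the most total execution time
    -- (columns are named total_exec_time/mean_exec_time in PostgreSQL 13+;
    --  older versions call them total_time/mean_time)
    SELECT query, calls, total_exec_time, mean_exec_time
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 10;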