Postgres low cache hit ratio - data size or something else?

I just upgraded my Heroku postgres database from the Kappa plan (800MB RAM, postgres 9.1) to the Ronin plan (1.7GB RAM, postgres 9.2), but the performance has worsened.

Following the guide here, I checked, and the cache hit rate is now even lower than it was on our Kappa database (~57% now vs. ~69% before). Our app design should be reasonable, since we previously saw a cache hit rate of ~99%.
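
For reference, the hit rate was measured with a query along these lines; the statistics view is standard PostgreSQL, though the exact query in the linked guide may differ slightly:

    -- Table (heap) cache hit ratio from PostgreSQL's statistics collector.
    -- The ::float cast avoids integer division truncating the ratio to 0.
    SELECT sum(heap_blks_read) AS heap_read,
           sum(heap_blks_hit)  AS heap_hit,
           sum(heap_blks_hit)::float
             / (sum(heap_blks_hit) + sum(heap_blks_read)) AS cache_hit_ratio
    FROM pg_statio_user_tables;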

The recommendation there is that the data set should fit in memory, which should no longer be a problem - our data is 1.27 GB in size (and at least most of it should fit comfortably).
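
As a quick way to double-check the on-disk size competing for that 1.7 GB of RAM, this standard query reports the current database size (not from the original post, just a common way to verify the 1.27 GB figure):

    -- On-disk size of the currently connected database, in human-readable form.
    SELECT pg_size_pretty(pg_database_size(current_database()));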

Is the low cache hit rate due to data size, or is there something else I should look into? Or is it just a case of the database cache not being fully warmed up yet? (It has been almost 2 days.)

1 answer

If you have a large amount of memory and aren't running much else on the db, one setting worth changing is shared_buffers. What shared_buffers does is cache frequently accessed data, which provides the most benefit in cases where not all of the database fits in memory.

Unfortunately, this cache does not perform the same way as the OS cache. If your data fits easily into memory, make sure effective_cache_size is high enough and then try reducing shared_buffers.
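
As a rough sketch of what that looks like on a self-managed PostgreSQL instance (on Heroku Postgres these settings are managed for you; the values below are only illustrative guesses for a ~1.7 GB box, not recommendations from this answer):

    -- Inspect the current values (standard PostgreSQL settings).
    SHOW shared_buffers;
    SHOW effective_cache_size;

    -- effective_cache_size is only a planner hint and can be changed at runtime.
    -- The value here is illustrative.
    SET effective_cache_size = '1200MB';

    -- shared_buffers must be set in postgresql.conf and requires a server
    -- restart, e.g.:
    --   shared_buffers = 200MB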

Please note that this is not a magic bullet. The appropriate size for shared_buffers depends on how much data you have, how much space it takes up, the kinds of queries you run, how much memory goes toward things like work_mem, and so on. You can expect to tweak it from time to time to find the sweet spot for your current setup and database.

