How much faster does the database work in RAM?

I want to run PostgreSQL in RAM to improve performance. The database is under 1 GB and shouldn't grow much beyond 5 GB. Is it worth doing? Are there any benchmarks?

My second major concern: how easy is it to maintain when it runs exclusively in RAM? Is it like using RAM as a tier 1 HD, or is it much more complicated?

+7
optimization database
4 answers

This can be useful if your database is I/O-bound. If it is CPU-bound, a RAM disk will make no difference.

But first, make sure your database is tuned correctly; you can get huge performance gains without giving up any guarantees. Even a RAM-resident database will not perform well if it is misconfigured. See the PostgreSQL wiki on performance tuning, mainly shared_buffers, effective_cache_size, checkpoint_*, and default_statistics_target.
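As a starting point, a postgresql.conf fragment along these lines covers the parameters mentioned above. The values are illustrative assumptions for a machine with roughly 8 GB of RAM, not recommendations; tune them to your own hardware and workload:

```ini
# postgresql.conf -- illustrative values, assuming ~8 GB of RAM
shared_buffers = 2GB                # ~25% of RAM is a common starting point
effective_cache_size = 6GB          # planner hint: what the OS page cache can hold
checkpoint_timeout = 15min          # fewer, larger checkpoints
checkpoint_completion_target = 0.9  # spread checkpoint I/O over the interval
default_statistics_target = 100     # detail of planner statistics
```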

Second, if you want to avoid syncing buffers to disk on every commit (as codeka explained in his comment), disable the synchronous_commit configuration parameter. If the machine loses power, you will lose a few recent transactions, but the database will still be 100% consistent. In this mode, RAM is used to buffer all writes, including writes to the transaction log. So, with very infrequent checkpoints and large shared_buffers and wal_buffers, it can actually approach the speed of a RAM drive.
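For example, on PostgreSQL 9.4 or later (where ALTER SYSTEM exists) and assuming superuser access, disabling synchronous commit looks like this:

```sql
-- Asynchronous commit: COMMIT returns before the WAL is flushed to disk.
-- A crash can lose the last few transactions but never corrupts the database.
ALTER SYSTEM SET synchronous_commit = off;
SELECT pg_reload_conf();  -- apply without restarting the server

-- It can also be toggled per transaction, so only low-value
-- writes give up durability (must run inside a transaction block):
BEGIN;
SET LOCAL synchronous_commit = off;
-- ... non-critical writes here ...
COMMIT;
```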

Hardware can also make a huge difference. 15,000 rpm drives can in practice be 3 times faster than cheap drives for database workloads. RAID controllers with a battery-backed cache also make a significant difference.

If that is still not enough, then it makes sense to consider putting the database in volatile memory.

+4

Whether to keep your database in memory depends on its size and performance, and also on how durable your writes need to be. I assume you are writing to your database and want to preserve the data in case of a failure.

Personally, I would not worry about this optimization until I ran into performance issues. It just seems risky to me.

If you do a lot of reads and very few writes, a cache might serve your purpose. Many ORMs come with one or more caching mechanisms.
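As a minimal sketch of the idea, here is a hypothetical read-through cache: reads are served from memory after the first lookup, and writes invalidate the cached entry. Real ORM caches (Hibernate's second-level cache, SQLAlchemy with dogpile.cache, etc.) are far more sophisticated; the class and names below are invented for illustration.

```python
from typing import Any, Callable, Dict


class ReadThroughCache:
    """Serve repeated reads from memory, falling back to a loader on a miss."""

    def __init__(self, loader: Callable[[Any], Any]):
        self._loader = loader      # the expensive database read
        self._store: Dict[Any, Any] = {}
        self.hits = 0
        self.misses = 0

    def get(self, key: Any) -> Any:
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        value = self._loader(key)  # only touch the database on a miss
        self._store[key] = value
        return value

    def invalidate(self, key: Any) -> None:
        # Call this after a write so readers never see stale data.
        self._store.pop(key, None)


# Simulated "database" table standing in for a real query
db = {1: "alice", 2: "bob"}
cache = ReadThroughCache(loader=db.__getitem__)
cache.get(1)  # miss: hits the "database"
cache.get(1)  # hit: served from memory
```

The design point is the invalidation hook: a cache like this only works for read-mostly workloads, because every write path must remember to invalidate.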

As for performance, clustering across a network to another DBMS that does all the disk writes seems far less efficient than just running a regular DBMS and giving it as much RAM as you like.

+4

Actually... as long as you have enough memory, your database will already be running entirely from RAM. Your filesystem caches all the data, so a RAM disk will not make much difference.
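You can check whether this is already the case. This query against the standard pg_stat_database view gives a rough buffer-cache hit ratio; a value close to 1.0 means reads are already served from shared_buffers (it cannot see the OS page cache, so it understates the true ratio):

```sql
SELECT blks_hit::float / NULLIF(blks_hit + blks_read, 0) AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```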

But... of course there is always a bit of overhead, so you can still try running it all from a RAM drive.

As for backups, it works like any other database: you can use the regular PostgreSQL dump utilities to back up your system. Or, even better, replicate to another server as a backup.
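A minimal backup-and-restore sketch with the standard utilities (the database name mydb is a placeholder; run as a user with access to the server):

```shell
# Logical backup in the compressed custom format
pg_dump -Fc mydb -f mydb.dump

# Restore into a freshly created database
createdb mydb_restored
pg_restore -d mydb_restored mydb.dump
```

The custom format (-Fc) is worth preferring over plain SQL dumps because pg_restore can then restore selectively and in parallel.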

+1

In-memory DBMSs can be 5 to 40 times faster than disk-resident DBMSs. Check out the Gartner Magic Quadrant for Operational Database Management Systems, 2013. Gartner shows who is strong and, more importantly, notes serious caveats: bugs, errors, lack of support, and vendors' products that are difficult to use.

+1
