This can help if your database workload is I/O-bound. If it is CPU-bound, a RAM disk will make no difference.
But first of all, make sure your database is tuned correctly; you can get a huge performance gain without sacrificing any guarantees. Even a RAM-backed database will perform poorly if it is misconfigured. See the PostgreSQL wiki on performance tuning, mainly shared_buffers, effective_cache_size, checkpoint_*, default_statistics_target.
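As a rough sketch, those settings live in postgresql.conf. The values below are illustrative starting points only (assuming a dedicated server with about 8 GB of RAM); the right numbers depend on your hardware and workload:

```ini
# Illustrative postgresql.conf tuning sketch -- values are assumptions,
# not recommendations for any specific machine.
shared_buffers = 2GB                 # ~25% of RAM is a common starting point
effective_cache_size = 6GB           # planner's estimate of total cache (PG + OS)
checkpoint_completion_target = 0.9   # spread checkpoint I/O over the interval
default_statistics_target = 100      # detail level of planner statistics
```

A configuration reload (or restart, for shared_buffers) is needed for these to take effect.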
Secondly, if you want to avoid syncing the write-ahead log to disk on every commit (as codeka explained in his comment), disable the synchronous_commit configuration parameter. If your machine loses power, you will lose some of the most recent transactions, but your database will still be 100% consistent. In this mode, RAM is used to buffer all writes, including writes to the transaction log. So with very infrequent checkpoints and large shared_buffers and wal_buffers, it can actually approach speeds close to those of a RAM drive.
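A minimal sketch of that trade-off in postgresql.conf (the wal_buffers and checkpoint_timeout values are assumptions for illustration):

```ini
# synchronous_commit = off: a crash may lose the last few committed
# transactions, but the database is never corrupted (unlike fsync = off,
# which is dangerous and should stay on).
synchronous_commit = off
wal_buffers = 16MB          # more WAL buffering in RAM (assumed value)
checkpoint_timeout = 30min  # less frequent checkpoints (assumed value)
```

Note that synchronous_commit can also be changed per session or per transaction (`SET LOCAL synchronous_commit TO OFF;`), so you can relax durability only for the transactions that can afford it.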
Hardware can also make a huge difference. 15,000 RPM disks can, in practice, be 3x faster than cheap drives for database workloads. RAID controllers with a battery-backed write cache also make a significant difference.
If that is still not enough, then it may make sense to consider turning to volatile storage.
intgr