This is a complex topic, and to some extent a matter of opinion. It's worth explaining why PostgreSQL does not support this, as well as what you can do in recent versions to achieve what you are trying to do.
PostgreSQL takes a fairly good approach to caching many different datasets for many users. In general, you do not want the programmer to have to declare that a temporary table must be kept in memory, only for it to then grow very large. Instead, temporary tables are managed quite differently from regular tables.
In practice this means that temporary tables usually generate very little disk I/O. They are not written to WAL, and they are cached in session-local buffers (sized by the temp_buffers setting), so they do not compete for shared_buffers. Data is written to disk only occasionally, and only when memory is needed for other (usually more common) work. In other words, you only force writes to disk, and reads back from disk, when something else needs the memory.
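To make this concrete, here is a minimal sketch of how a session can size its local buffer pool and create a temporary table. The table and column names are illustrative; temp_buffers is a real setting, but it must be set before the session first touches any temporary table.

```sql
-- Sketch: session-local memory for temporary tables is controlled by
-- temp_buffers. Set it early in the session, before the first use of
-- a temporary table. Names here (big_temp) are hypothetical.
SET temp_buffers = '256MB';

CREATE TEMPORARY TABLE big_temp (
    id   integer,
    data text
) ON COMMIT PRESERVE ROWS;

-- Rows stay in session-local buffers until the table outgrows
-- temp_buffers, at which point pages spill to per-session files.
INSERT INTO big_temp
SELECT g, md5(g::text)
FROM generate_series(1, 100000) AS g;
```

Because the table is session-local and not WAL-logged, this insert avoids most of the I/O a regular table would incur.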
In the end, you mostly do not need to worry about this. PostgreSQL is already doing much of what you are asking for, and temporary tables incur far less disk I/O than regular tables do. That does not mean the tables are pinned in memory, though: if they grow large enough, their pages can be evicted from the OS cache and ultimately end up on disk. This is an important property, because it ensures that performance degrades gracefully when many users create many large temporary tables.
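If you want to see this local-buffer behavior for yourself, one way (a sketch, with an illustrative table name) is EXPLAIN with the BUFFERS option: I/O on temporary tables is reported under the "local" counters rather than the shared-buffer counters.

```sql
-- Sketch: temporary-table I/O appears as "local" buffer activity in
-- EXPLAIN (ANALYZE, BUFFERS) output. demo_temp is a hypothetical name.
CREATE TEMPORARY TABLE demo_temp AS
SELECT g AS id FROM generate_series(1, 10000) AS g;

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM demo_temp;
-- The plan's "Buffers:" line shows local hit/read figures for the
-- temporary table, confirming it is served from session-local buffers.
```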