You can use SET LOCAL to change a setting for the current transaction only. To quote the manual:
The effects of SET LOCAL persist only until the end of the current transaction, regardless of whether it is committed or not.
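For illustration, a minimal sketch of what that looks like (enable_seqscan and the query are just placeholders; use whichever parameter actually needs overriding):

```sql
BEGIN;
-- Applies only inside this transaction and reverts
-- automatically on COMMIT or ROLLBACK.
SET LOCAL enable_seqscan = off;

SELECT *                       -- hypothetical query
FROM   big_table
WHERE  indexed_col = 42;

COMMIT;  -- the setting reverts here
```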
But that is like taking antibiotics every time you get sick instead of finding the cause. There is usually a reason why the planner chooses a suboptimal plan, and you should find and fix it. Find out more in the answer to this question:
Keep PostgreSQL from choosing a bad query plan
In particular, I suspect that lowering the random_page_cost parameter might be a good idea. The default value (4) is often too conservative (too high). If most or all of your database is cached (the OS file cache and shared_buffers keep frequently used data in RAM to be reused), random_page_cost can be almost as low as (in extreme cases as low as) seq_page_cost. random_page_cost is the major cost factor in the planner's decision whether to use an index.
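A sketch of how you might lower it; the value 1.1 is an assumption for a mostly cached database, and mydb is a placeholder:

```sql
-- Try it per session first to observe the effect on plans:
SET random_page_cost = 1.1;

-- Then persist it for one database (new sessions pick it up):
ALTER DATABASE mydb SET random_page_cost = 1.1;
```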
Also make sure autovacuum is running and configured properly (it takes care of VACUUM and ANALYZE). The planner needs up-to-date statistics to plan queries properly.
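One way to check (a sketch against the standard statistics view pg_stat_user_tables):

```sql
-- Tables that autovacuum has never touched float to the top:
SELECT relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM   pg_stat_user_tables
ORDER  BY last_autovacuum NULLS FIRST;
```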
And effective_cache_size is regularly set too low out of the box as well.
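A sketch for checking and raising it; '6GB' assumes a machine with roughly 8 GB of RAM mostly available for caching, and mydb is again a placeholder:

```sql
SHOW effective_cache_size;     -- current value

-- Raise it for one database (new sessions pick it up):
ALTER DATABASE mydb SET effective_cache_size = '6GB';
```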
Exceptions apply, and sometimes the query planner just does not get it right, especially with older versions. Which brings me to another point: upgrade to a recent version of PostgreSQL. The current stable release is 9.2, and the query planner has been improved considerably since Postgres 8.4.
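To check what you are currently running:

```sql
SELECT version();       -- full version string
SHOW server_version;    -- just the version number
```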