pgsql-performance is a great mailing list to ask such questions.
You seem to have two problems:
1) You want an index on updated_on, but when that index exists, PostgreSQL chooses the wrong plan.
My first wild guess is that PostgreSQL overestimates the number of tuples matching the predicate `(responses.contest_id = 17469) AND (user_id IS NOT NULL)`. If it applies this predicate first, it must then sort the matching rows to satisfy the ORDER BY. You say about 1000 tuples match; if PostgreSQL thinks 100000 match, it may conclude that scanning in order via the updated_on index is cheaper than filtering and sorting. Another factor could be your configuration: if work_mem is set too low, the sort will look more expensive than it really is.
You really need to post the EXPLAIN ANALYZE output of the slow query so we can see why the planner chooses an index scan on updated_on.
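For example, something along these lines, reconstructed from the predicates you quoted (the exact SELECT list, ORDER BY direction, and LIMIT are assumptions, since the full query isn't shown):

```sql
-- Try a larger sort budget for this session before re-testing
SET work_mem = '64MB';

EXPLAIN ANALYZE
SELECT *
FROM responses
WHERE responses.contest_id = 17469
  AND user_id IS NOT NULL
ORDER BY updated_on
LIMIT 10;
```

In the output, compare the planner's estimated row counts against the actual ones; if they are far apart, run `ANALYZE responses;` so the statistics are fresh.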
2) Even when updated_on is not indexed, the query sometimes takes a long time, but you can't tell why, because whenever you run it manually it is fast.
Use the auto_explain contrib module, new in 8.4. It logs the EXPLAIN ANALYZE output of queries that exceed a duration threshold. Merely logging the query text would leave you exactly where you are now: every time you run the query by hand, it is fast, so the log alone tells you nothing about the slow executions.
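A minimal sketch of the setup, using the module's documented settings (the 2-second threshold is just an illustrative value):

```ini
# postgresql.conf — load auto_explain and capture plans of slow queries
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '2s'   # log any statement slower than 2 s
auto_explain.log_analyze = on          # include actual row counts and timings
```

You can also enable it for a single session with `LOAD 'auto_explain';` followed by `SET auto_explain.log_min_duration = '2s';`, which is handy for testing before changing the server config. Note that log_analyze adds per-node timing overhead to every statement, so weigh that on a busy server.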