The reason you are getting sequential scans is that Postgres believes it will read fewer disk pages that way than by using indexes. It is probably right. Consider: if you use a non-covering index, you have to read all the matching index pages, and those essentially just list row identifiers. The database engine then has to read each matching data page as well.
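If you want to see the planner's reasoning for yourself, you can compare its cost estimates with the sequential scan artificially discouraged. This is only a sketch; the filter below is a stand-in, since I don't have your exact query:

    EXPLAIN SELECT flightkey, time, lon
    FROM position_2012_09_12
    WHERE lon BETWEEN -10 AND 10;   -- stand-in filter

    SET enable_seqscan = off;       -- for experimentation only
    EXPLAIN SELECT flightkey, time, lon
    FROM position_2012_09_12
    WHERE lon BETWEEN -10 AND 10;

    RESET enable_seqscan;

If the index plan's estimated cost comes out higher, that is the planner telling you it expects to touch more pages that way.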
Your position table uses 71 bytes per row, plus whatever a geometry type takes (I will assume 16 bytes for illustration), making 87 bytes. A Postgres page is 8192 bytes, so you have roughly 90 rows per page.
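You can cross-check that estimate against the planner's own statistics, since relpages and reltuples in pg_class are kept up to date by ANALYZE:

    SELECT relpages, reltuples,
           reltuples / NULLIF(relpages, 0) AS rows_per_page
    FROM pg_class
    WHERE relname = 'position_2012_09_12';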
Your query matches 3950815 of 5563070 rows, about 70% of the total. Assuming the data is randomly distributed with respect to your filters, the chance that a given data page contains no matching row at all is roughly 0.3^90, which is essentially zero. So no matter how good your indexes are, you will still have to read all the data pages. And if you have to read all the pages anyway, a table scan is usually a good approach.
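To put a number on it, you can let the database do the arithmetic (the 90 is the rows-per-page estimate from above):

    SELECT power(1 - 3950815.0 / 5563070, 90) AS p_page_without_match;
    -- comes out around 4e-49: effectively every page contains a match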
The one way out here is what I hinted at above: a covering index. If you are willing to create indexes that can answer the queries by themselves, you can skip fetching the data pages entirely, and you are back in the game. I would suggest the following:
    flight_2012_09_12 (flightkey, departure, arrival)
    position_2012_09_12 (flightkey, time, lon, ...)
    position_2012_09_12 (lon, time, flightkey, ...)
    position_2012_09_12 (time, lon, flightkey, ...)
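As DDL, with hypothetical index names, these would look like the following; you'd append the rest of your selected columns to each position index before running it:

    CREATE INDEX flight_cover_idx ON flight_2012_09_12 (flightkey, departure, arrival);

    -- pick ONE of these for position, depending on which plan wins:
    CREATE INDEX position_cover_a ON position_2012_09_12 (flightkey, time, lon);
    CREATE INDEX position_cover_b ON position_2012_09_12 (lon, time, flightkey);
    CREATE INDEX position_cover_c ON position_2012_09_12 (time, lon, flightkey);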
The dots here stand for the rest of the columns that you select. You only need one of the indexes on position, but it is hard to say which will turn out to be the best. The first may allow a merge join on presorted data, at the cost of reading the entire second index for filtering. The second and third allow the data to be pre-filtered, but require a hash join. Given how much of your cost appears to be in the hash join, the merge join could be a good option.
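You can make the planner show you the alternative by disabling the hash join for a single test run. The join below is only a guess at your query's shape:

    SET enable_hashjoin = off;   -- session-local, testing only
    EXPLAIN ANALYZE
    SELECT p.flightkey, p.time, p.lon
    FROM flight_2012_09_12 f
    JOIN position_2012_09_12 p USING (flightkey);
    RESET enable_hashjoin;

Compare the actual times against your original plan to see whether the merge join pays off.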
Since your query needs 52 of the 87 bytes per row, and indexes have their own overhead, you will be hard pressed to get a covering index that takes much, if any, less space than the table itself.
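You can verify this after building the index:

    SELECT pg_size_pretty(pg_table_size('position_2012_09_12'))   AS table_size,
           pg_size_pretty(pg_indexes_size('position_2012_09_12')) AS indexes_size;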
Another approach is to attack the "randomly distributed" side by looking into clustering.
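For example, clustering the table on an index over your most selective filter column packs matching rows onto fewer pages, so a scan of the matching range touches far less of the table. The index name here is hypothetical; note that CLUSTER takes an exclusive lock and is a one-time reorder, so it needs re-running as the data churns:

    CREATE INDEX position_time_idx ON position_2012_09_12 (time);
    CLUSTER position_2012_09_12 USING position_time_idx;
    ANALYZE position_2012_09_12;  -- refresh statistics after the rewrite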