First of all, you need to run the query several times at each isolation level and average the results, discarding the slowest run. This eliminates the effect of buffer warm-up: you want every measured run to hit a warm cache, rather than having one run pay the penalty of warming the cache and skew the comparison.
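A minimal sketch of that timing discipline (the `run_query` callback is a placeholder; in practice it would execute your query through whatever database driver you use, with the isolation level under test already set on the connection):

```python
import time
import statistics

def benchmark(run_query, runs=5):
    """Time several executions of run_query, discard the slowest
    run (the cold-cache outlier), and average the rest."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        run_query()
        timings.append(time.perf_counter() - start)
    timings.remove(max(timings))  # drop the warm-up / worst-case run
    return statistics.mean(timings)
```

Repeating this once per isolation level gives you warm-cache averages that are actually comparable to each other.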
Then you need to make sure you are measuring a realistic concurrency scenario. If your workload sees updates/inserts/deletes in real life, add them to the test, because they will greatly affect read performance at the different isolation levels. The last thing you want is to conclude that serializable reads are the fastest, deploy them everywhere, and then watch the system melt down in production because everything serializes.
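One way to approximate that mixed workload is to run reader and writer threads concurrently while you measure, so the numbers reflect lock and versioning contention rather than solo read speed. This is only an illustrative harness (the `read_query` and `write_query` callbacks are hypothetical; each would use its own connection at the isolation level under test):

```python
import threading
import time

def mixed_workload(read_query, write_query, readers=4, writers=2, duration=1.0):
    """Drive reader and writer threads concurrently for `duration`
    seconds and report how many of each operation completed."""
    deadline = time.monotonic() + duration
    counts = {"reads": 0, "writes": 0}
    counts_lock = threading.Lock()

    def loop(fn, key):
        while time.monotonic() < deadline:
            fn()
            with counts_lock:
                counts[key] += 1

    threads = [threading.Thread(target=loop, args=(read_query, "reads"))
               for _ in range(readers)]
    threads += [threading.Thread(target=loop, args=(write_query, "writes"))
                for _ in range(writers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counts
```

If read throughput collapses once writers are present at a given isolation level, that tells you far more than any single-connection timing would.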
That said, the only isolation level that is legitimately faster is dirty reads (READ UNCOMMITTED), because it does not acquire locks. READ COMMITTED with snapshot (which you did not measure) also acquires no read locks, but it affects overall performance through the overhead of row versioning.
Remus Rusanu