In your scenario, I would recommend explicitly setting the isolation level to SNAPSHOT. This stops readers from blocking writers (inserts and updates) and writers from blocking readers, yet the reads are still consistent (i.e., no dirty reads; this is not the same as NOLOCK).
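A minimal sketch of enabling and using snapshot isolation (the database name `MyDb` and the `dbo.Orders` table are hypothetical, used only for illustration):

```sql
-- One-time setup: enable row versioning at the database level.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then, per session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;
    -- Reads see a consistent, committed snapshot of the data as of the
    -- start of the transaction, without taking shared locks, so they
    -- neither block writers nor are blocked by them.
    SELECT OrderId, Status FROM dbo.Orders WHERE CustomerId = 42;
COMMIT TRANSACTION;
```

Note that row versioning carries tempdb overhead, so it is worth load-testing before enabling it in production.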
As a rule, when I run into blocking problems with my queries, I manage the locking manually. For example, I would do updates with row-level lock hints to avoid page/table-level locks, and add READPAST to my reads (assuming it is acceptable to skip locked rows, which is fine in some scenarios).
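The two hints mentioned above can be sketched like this (the `dbo.WorkQueue` table is hypothetical):

```sql
-- ROWLOCK asks for row-level locks on the update, so a page- or
-- table-level lock is not taken up front:
UPDATE dbo.WorkQueue WITH (ROWLOCK)
SET Status = 'Processing'
WHERE ItemId = 1001;

-- READPAST makes the read skip rows currently locked by other
-- transactions instead of waiting on them; acceptable only when
-- missing some rows in the result is OK:
SELECT ItemId, Payload
FROM dbo.WorkQueue WITH (READPAST)
WHERE Status = 'Pending';
```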
EDIT - consolidating all the comments into this answer
As part of its optimization process, SQL Server avoids taking read locks on a page that it knows has not changed, and automatically falls back to a less restrictive locking strategy. In your case, SQL Server drops from a serializable read down to a repeatable read.
Q: Thank you for the helpful information on reducing isolation levels. Can you think of any reason why it would use the Serializable isolation level in the first place, given that we are not using an explicit transaction for the SELECT? We understood that an implicit transaction would use Read Committed.
A: By default, SQL Server will use Read Committed if that is your default isolation level, but if you have not also specified a locking strategy in your query, you are basically telling SQL Server "do what you think is best, but my preference is Read Committed". Since SQL Server is free to choose, it does so in order to optimize the query. (The optimization algorithm in SQL Server is very complicated, and I do not fully understand it myself.) A statement that is not explicitly wrapped in a transaction does not change the isolation level SQL Server uses.
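To see what the session is actually running at, and to pin down locking behavior for a single statement rather than leaving it to the optimizer, you can do something like the following (the `dbo.Orders` table is hypothetical):

```sql
-- Shows the session's options, including the current isolation level:
DBCC USEROPTIONS;

-- A table hint overrides the locking choice for this one statement;
-- HOLDLOCK requests serializable semantics for this table access only:
SELECT OrderId
FROM dbo.Orders WITH (HOLDLOCK)
WHERE CustomerId = 42;
```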
Q: Lastly, is it reasonable that SQL Server would raise the isolation level (and, presumably, the number of locks) in order to optimize a query? I also wonder whether reuse of the merge-join plan has an impact, i.e., whether it inherits the last-used isolation level?
A: SQL Server will do this as part of a process called Lock Escalation. From http://support.microsoft.com/kb/323630 I quote: "Microsoft SQL Server dynamically determines when to perform lock escalation. When making this decision, SQL Server takes into account the number of locks that are held on a particular scan, the number of locks that are held by the whole transaction, and the memory that is being used for locks in the system as a whole. Typically, SQL Server's default behavior results in lock escalation occurring only at those points where it would improve performance or when you must reduce excessive system lock memory to a more reasonable level. However, some application or query designs may trigger lock escalation at a time when it is not desirable, and the escalated table lock may block other users."
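If escalation is hurting concurrency, it can be influenced. A sketch of the two common knobs (the `dbo.WorkQueue` table is hypothetical; the per-table option requires SQL Server 2008 or later):

```sql
-- SQL Server 2008+: keep locks at row/page granularity for this table.
-- DISABLE still permits a table lock when one is required for correctness;
-- the other options are AUTO and TABLE (the default):
ALTER TABLE dbo.WorkQueue SET (LOCK_ESCALATION = DISABLE);

-- Older versions: trace flag 1224 disables escalation that is driven by
-- the number of locks (escalation can still occur under memory pressure):
DBCC TRACEON (1224, -1);
```

Disabling escalation trades memory (many fine-grained locks) for concurrency, so it should be measured rather than applied blindly.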
Although lock escalation is not quite the same thing as changing the isolation level a query runs at, this surprises me, because I would not have expected SQL Server to take more locks than the default isolation level calls for.