Each batch (request) submitted to SQL Server creates a task. The task is queued for execution and picked up by a worker. A worker is very similar to a thread. The task stays with its worker until it finishes, and only then is the worker freed to pick up another task. The system runs a limited number of workers, configured with sp_configure 'max worker threads'. At a minimum there are 256 workers, roughly 35 of which are system workers. A worker needs a scheduler to run on, and there is one scheduler per processor core. Workers cooperate in sharing a scheduler (cooperative, non-preemptive scheduling).
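Not part of the original answer, but as a small sketch of how that setting can be inspected (0, the default, lets SQL Server size the worker pool automatically):

    -- Make advanced options visible
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Current 'max worker threads' setting (0 = sized automatically)
    EXEC sp_configure 'max worker threads';

    -- Actual number of workers allowed on this instance
    SELECT max_workers_count FROM sys.dm_os_sys_info;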
Some tasks spawn sub-tasks, for example parallel queries. These sub-tasks are also queued for execution and need a worker to run. A task that spawns sub-tasks cannot complete until all the sub-tasks it spawned have completed.
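As an illustration (my example, not from the original answer), the sub-tasks of a parallel query show up in sys.dm_os_tasks: the parent task has exec_context_id = 0 and the sub-tasks it spawned have higher context ids. Assuming the parallel query runs on session 53 (a hypothetical session id):

    -- One row per task: the parent (exec_context_id = 0) plus one task
    -- per parallel worker thread spawned for session 53
    SELECT session_id, exec_context_id, task_state,
           scheduler_id, worker_address, parent_task_address
    FROM sys.dm_os_tasks
    WHERE session_id = 53          -- hypothetical session id
    ORDER BY exec_context_id;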
There are also system tasks created on behalf of user actions, such as the login handshake. When a client opens a new connection, the authentication/authorization and login handshake are handled by a task, which requires a worker.
When 1000 requests arrive at the server, 1000 tasks are created and queued for execution. Free workers pick up tasks and start executing them. As each worker finishes a task, it picks up the next one, until all the tasks created by the 1000 requests have completed.
The DMVs that show what is happening are sys.dm_os_tasks, sys.dm_os_workers, sys.dm_os_schedulers and sys.dm_exec_requests.
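A minimal sketch of how these DMVs can be combined to follow the task → worker → scheduler chain and to see how many tasks are queued per scheduler (the column choice is mine, not from the original answer):

    -- Which request/task is running on which worker and scheduler
    SELECT r.session_id, r.status, r.command, r.wait_type,
           t.task_state, t.exec_context_id,
           w.state AS worker_state,
           s.scheduler_id, s.cpu_id
    FROM sys.dm_exec_requests AS r
    JOIN sys.dm_os_tasks      AS t ON t.task_address   = r.task_address
    JOIN sys.dm_os_workers    AS w ON w.worker_address = t.worker_address
    JOIN sys.dm_os_schedulers AS s ON s.scheduler_id   = t.scheduler_id;

    -- How many tasks are queued, runnable or running on each scheduler
    SELECT scheduler_id, current_tasks_count, runnable_tasks_count,
           current_workers_count, active_workers_count, work_queue_count
    FROM sys.dm_os_schedulers
    WHERE status = 'VISIBLE ONLINE';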
These details are described in SQL Server Batch or Task Scheduling and on Slava Oks' blog.
Back to the task: once a worker picks it up, the request is compiled. Compilation first looks up the query text in memory to find an existing compiled plan for an identical query text. You can read my answer Dynamically created SQL vs Parameters in SQL Server for a more detailed look at how this happens. Also see Execution Plan Caching and Reuse.

Once the plan is created (or an existing one is found), it is launched into execution. A query like SELECT ... FROM table produces a trivial plan with only a few operators that basically fetch each row and push it into the TDS stream back to the client. A query plan is a tree of operators, and a query is always executed by asking the root of the tree for the next row, in a loop, until the root returns EOF. The operators in the tree get more specific further down, until the bottom operators physically access the chosen access path (the index or heap the optimizer selected to satisfy the request). See Processing SQL Queries.

Access to an index or heap always requests the data from the buffer pool, never directly from disk. When the buffer pool does not have the page cached, a PAGEIOLATCH is placed on the page and a request to read the page is issued to the I/O subsystem. Subsequent requests for the same page wait for this I/O to complete, and as soon as the page is in the buffer pool all other requests that need it are served from the buffer pool. Unused pages are evicted when the buffer pool needs free pages, but if the system has enough RAM a page is never evicted once it has been loaded. Index and heap scan operators issue read-ahead requests, anticipating that the pages ahead of the current page in the page link chain will be needed. Read-ahead is limited to contiguous index fragments, and this is where index fragmentation enters the picture, because it reduces the size of the read-ahead requests; see Understanding Pages and Extents.
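As a sketch (my example, not part of the original text), the plan-cache lookup and reuse described above can be observed through the plan cache DMVs, and page reads waiting on the I/O subsystem show up as PAGEIOLATCH waits:

    -- Cached plans and how often they have been reused (usecounts)
    SELECT cp.usecounts, cp.cacheobjtype, cp.objtype,
           st.text AS query_text, qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE st.text LIKE N'%SomeTable%';   -- hypothetical filter on the query text

    -- Cumulative waits on physical page reads (buffer pool misses)
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE N'PAGEIOLATCH%';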
Another aspect of query execution is the locking of logical rows. For stability, reads may place row locks or range locks, depending on the isolation model, on the rows they read, to prevent concurrent updates while the query traverses them. Under the SNAPSHOT isolation level the query requests no locks at all; instead a version mark is used and the requested data is served, possibly from the version store (see SQL Server 2005 Row Versioning-Based Transaction Isolation). Under READ UNCOMMITTED (or when using the nolock hint) the query does not request locks on the rows it reads, but the reads are inconsistent if concurrent updates occur (uncommitted rows may be read, the same row may be read twice, or an existing row may not be read at all).
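To make the isolation-level behavior concrete, here is a small sketch (the database and table names are hypothetical); SNAPSHOT must first be allowed at the database level:

    -- Allow the SNAPSHOT isolation level (MyDb is a hypothetical database name)
    ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Versioned reads: no shared locks, rows come from the version store if needed
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
    SELECT * FROM dbo.SomeTable WHERE Id = 42;   -- hypothetical table
    COMMIT;

    -- Dirty reads: no shared locks either, but results may be inconsistent
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT * FROM dbo.SomeTable;
    -- ...or per table, via the hint
    SELECT * FROM dbo.SomeTable WITH (NOLOCK);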