There are several performance issues to solve ...
Do not access the same table more than once if possible
Do not use a subquery for criteria that can be satisfied without referencing an additional copy of the same table. A subquery is acceptable when you need data from a copy of the table because of aggregate functions (MAX, MIN, etc.), although analytic functions (ROW_NUMBER, RANK, etc.) may be more convenient (if they are supported).
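As an illustration, here is a minimal sketch of the two approaches, assuming a hypothetical Orders(OrderID, CustomerID, OrderDate, Total) table and the task "latest order per customer":

-- Subquery with an aggregate: the Orders table is referenced twice.
SELECT o.OrderID, o.CustomerID, o.OrderDate, o.Total
FROM Orders o
JOIN (SELECT CustomerID, MAX(OrderDate) AS MaxDate
      FROM Orders
      GROUP BY CustomerID) m
  ON m.CustomerID = o.CustomerID
 AND m.MaxDate    = o.OrderDate;

-- Analytic function: the Orders table is referenced only once.
SELECT OrderID, CustomerID, OrderDate, Total
FROM (SELECT OrderID, CustomerID, OrderDate, Total,
             ROW_NUMBER() OVER (PARTITION BY CustomerID
                                ORDER BY OrderDate DESC) AS rn
      FROM Orders) x
WHERE x.rn = 1;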
Do not compare what you do not need
If a parameter is NULL, meaning you will accept any value in the column you are comparing against, do not include the filtering criteria at all. Statements such as:
WHERE a.Name LIKE '%' + ISNULL(@Name, N'') + '%'
... ensure that the optimizer has to compare values for the name column, wildcarded or not. Worse still, with LIKE, wildcarding the left side of the comparison guarantees that an index cannot be used even if one exists on the column being searched.
A more efficient approach:
IF @Name IS NOT NULL
BEGIN
  SELECT ...
  FROM ...
  WHERE a.name LIKE '%' + @Name + '%'
END
ELSE
BEGIN
  SELECT ...
  FROM ...
END
SQL performs best when it is tailored to the situation - this is why you should consider dynamic SQL when a query has two or more independent criteria.
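As a sketch of the dynamic SQL pattern, assuming a hypothetical dbo.Accounts table and optional @Name and @City procedure parameters: only the criteria that were actually supplied end up in the WHERE clause, so the optimizer sees a query shaped for each combination.

DECLARE @sql NVARCHAR(MAX) =
    N'SELECT AccountID, Name, City FROM dbo.Accounts WHERE 1 = 1';

IF @Name IS NOT NULL
    SET @sql += N' AND Name LIKE ''%'' + @Name + ''%''';

IF @City IS NOT NULL
    SET @sql += N' AND City = @City';

-- Values are passed as parameters to sp_executesql, not concatenated,
-- to avoid SQL injection and allow plan reuse per criteria combination.
EXEC sys.sp_executesql
     @sql,
     N'@Name NVARCHAR(100), @City NVARCHAR(100)',
     @Name = @Name,
     @City = @City;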
Use the right tool
The LIKE operator is not very efficient for searching text when you are checking whether a string occurs within text data. Full Text Search (FTS) was developed to address LIKE's shortcomings:
IF @Name IS NOT NULL
BEGIN
  SELECT ...
  FROM ...
  WHERE CONTAINS(a.name, @Name)
END
ELSE
BEGIN
  SELECT ...
  FROM ...
END
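For context, a hedged sketch of what using CONTAINS implies, assuming a hypothetical dbo.Accounts table with a primary key index named PK_Accounts: a full-text catalog and a full-text index on the column must exist before CONTAINS can be used, and the SQL Server Full-Text Search feature must be installed.

-- One-time setup (object names here are assumptions for illustration).
CREATE FULLTEXT CATALOG AccountsCatalog;

CREATE FULLTEXT INDEX ON dbo.Accounts (Name)
    KEY INDEX PK_Accounts          -- unique key index on the table
    ON AccountsCatalog;

-- Querying: CONTAINS uses the full-text index instead of scanning with LIKE.
SELECT AccountID, Name
FROM dbo.Accounts
WHERE CONTAINS(Name, @Name);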
Always check and compare
I agree with LittleBobbyTables - the answer ultimately comes down to checking the query / execution plan for all of the alternatives, because table design and data can affect the optimizer's decisions and performance. In SQL Server, the most efficient version is the one with the lowest subtree cost, but that can change over time if table statistics and indexes are not maintained.
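As a quick sketch of how to compare alternatives (the SET options are standard SQL Server session settings; the query itself is a placeholder):

-- Report I/O and timing for each candidate query run in this session.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Run each alternative and compare logical reads / elapsed time,
-- or enable "Include Actual Execution Plan" in SSMS and compare subtree costs.
SELECT AccountID, Name
FROM dbo.Accounts
WHERE Name LIKE '%' + @Name + '%';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;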
OMG Ponies