Why does a stored procedure perform differently when executed remotely vs. locally?

We have a stored procedure that builds some dynamic SQL and executes it via a parameterized call to sp_executesql.

Under normal conditions this works wonderfully and has greatly improved the procedure's run time (from ~8 seconds to ~1 second). However, under some unknown conditions something strange happens and performance degrades completely (~31 seconds), but only when executed via RPC (i.e., called from a .NET application with SqlCommand.CommandType of CommandType.StoredProcedure, or as a remote query from a linked server). If executed as a SQL batch from SQL Server Management Studio, we see no performance degradation.

Changing the whitespace in the generated SQL and recompiling the stored procedure seems to fix the problem, at least in the short term, but we would like to understand the cause, or find ways to force the execution plans for the generated SQL to be rebuilt; at the moment I'm not sure how to proceed.


To illustrate, the stored procedure looks something like this:

    CREATE PROCEDURE [dbo].[usp_MyObject_Search]
        @IsActive AS BIT = NULL,
        @IsTemplate AS BIT = NULL
    AS
    DECLARE @WhereClause NVARCHAR(MAX) = ''

    IF @IsActive IS NOT NULL
    BEGIN
        SET @WhereClause += ' AND (svc.IsActive = @xIsActive) '
    END

    IF @IsTemplate IS NOT NULL
    BEGIN
        SET @WhereClause += ' AND (svc.IsTemplate = @xIsTemplate) '
    END

    DECLARE @Sql NVARCHAR(MAX) = '
        SELECT svc.[MyObjectId], svc.[Name], svc.[IsActive], svc.[IsTemplate]
        FROM dbo.MyObject svc WITH (NOLOCK)
        WHERE 1=1 ' + @WhereClause + '
        ORDER BY svc.[Name] Asc'

    EXEC sp_executesql @Sql,
        N'@xIsActive BIT, @xIsTemplate BIT',
        @xIsActive = @IsActive,
        @xIsTemplate = @IsTemplate

With this approach, a query plan is cached for each NULL/not-NULL permutation, and we get the benefit of cached query plans. What I don't understand is why it would use a different query plan when executed remotely vs. locally after "something happens"; I also don't understand what that "something" is.

I understand that I could move away from parameterization, but then we would lose the advantage of cached (and usually good) execution plans.
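For reference, one way to force a rebuild of just the generated statement's plan (a sketch; the LIKE filter is illustrative, and SQL Server 2008+ is assumed for evicting a single plan by handle):

    -- Find the cached plan for the generated statement and evict only it.
    DECLARE @plan_handle VARBINARY(64);

    SELECT TOP (1) @plan_handle = cp.plan_handle
    FROM sys.dm_exec_cached_plans cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
    WHERE st.text LIKE '%FROM dbo.MyObject svc WITH (NOLOCK)%'
        AND st.text NOT LIKE '%dm_exec_cached_plans%'; -- exclude this lookup itself

    IF @plan_handle IS NOT NULL
        DBCC FREEPROCCACHE (@plan_handle); -- evicts just this plan, not the whole cache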

2 answers

I would suspect parameter sniffing. If you are on SQL Server 2008, you can try the OPTIMIZE FOR UNKNOWN query hint to reduce the likelihood that, when it generates a plan, it does so for atypical parameter values.
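For example (a sketch against the question's procedure; the hint requires SQL Server 2008+), the hint can simply be appended to the generated statement:

    -- Build the dynamic statement as before, but ask the optimizer to use
    -- average density statistics instead of the sniffed parameter values.
    DECLARE @Sql NVARCHAR(MAX) = '
        SELECT svc.[MyObjectId], svc.[Name], svc.[IsActive], svc.[IsTemplate]
        FROM dbo.MyObject svc WITH (NOLOCK)
        WHERE 1=1 ' + @WhereClause + '
        ORDER BY svc.[Name] Asc
        OPTION (OPTIMIZE FOR UNKNOWN)'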

RE: "What I don't understand is why it would use a different query plan when executed remotely vs. locally after 'something happens'"

When you run it in SSMS, it will not reuse the same bad plan because of different SET options (e.g. SET ARITHABORT ON), so it compiles a new plan that works well for the parameter values you are currently testing.
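A quick way to test this theory (a sketch; ADO.NET connections typically run with ARITHABORT OFF, while SSMS defaults it to ON):

    -- Match the application's session settings before executing in SSMS.
    -- If the slow plan now reproduces, mismatched SET options are confirmed.
    SET ARITHABORT OFF;
    EXEC dbo.usp_MyObject_Search @IsActive = 1, @IsTemplate = NULL;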

You can see these plans with

    SELECT usecounts, cacheobjtype, objtype, text, query_plan, value AS set_options
    FROM sys.dm_exec_cached_plans
        CROSS APPLY sys.dm_exec_sql_text(plan_handle)
        CROSS APPLY sys.dm_exec_query_plan(plan_handle)
        CROSS APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
    WHERE text LIKE '%FROM dbo.MyObject svc WITH (NOLOCK)%'
        AND attribute = 'set_options'
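The set_options value is a bitmask; per the sys.dm_exec_plan_attributes documentation, the ARITHABORT bit is 4096, so a returned value can be decoded like this (the example value is hypothetical):

    DECLARE @set_options INT = 4347; -- hypothetical value from the query above
    SELECT CASE WHEN @set_options & 4096 = 4096
                THEN 'ARITHABORT ON'
                ELSE 'ARITHABORT OFF'
           END AS arithabort_setting;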

Edit

The next bit is only in response to badbod99's answer.

    create proc #foo @mode bit, @date datetime
    as
    declare @Sql nvarchar(max)

    if (@mode = 1)
        set @Sql = 'select top 0 * from sys.objects where create_date < @date /*44FC79BD-2AF5-4774-9674-04D6C3D4B228*/'
    else
        set @Sql = 'select top 0 * from sys.objects where modify_date < @date /*44FC79BD-2AF5-4774-9674-04D6C3D4B228*/'

    EXEC sp_executesql @Sql, N'@date datetime', @date = @date
    go

    declare @d datetime
    set @d = getdate()
    exec #foo 0, @d
    exec #foo 1, @d

    SELECT usecounts, cacheobjtype, objtype, text, query_plan, value AS set_options
    FROM sys.dm_exec_cached_plans
        CROSS APPLY sys.dm_exec_sql_text(plan_handle)
        CROSS APPLY sys.dm_exec_query_plan(plan_handle)
        CROSS APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
    WHERE text LIKE '%44FC79BD-2AF5-4774-9674-04D6C3D4B228%'
        AND attribute = 'set_options'

Returns

(Screenshot: the query output showing the cached compiled plans, one per generated statement, with their set_options values.)


Recompilation

Any time execution of the SP can differ significantly because of conditional statements, the execution plan cached from a previous call may not be optimal for the current one.

It's all about when SQL Server compiles the execution plan for the SP. This is the key section on stored procedure compilation from Microsoft Docs:

... this optimization occurs automatically the first time a stored procedure is run after SQL Server is restarted. It also occurs if an underlying table used by the stored procedure changes. But if a new index is added from which the stored procedure might benefit, optimization does not occur until the next time the stored procedure is run after SQL Server is restarted. In this situation, it can be useful to force the stored procedure to recompile the next time it executes.
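For example, sp_recompile flags the procedure so a fresh plan is compiled on its next execution (note: the plan for the statement run via sp_executesql is cached separately by its text, so it may survive this):

    EXEC sp_recompile N'dbo.usp_MyObject_Search';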

SQL Server sometimes recompiles execution plans on its own; from Microsoft Docs:

SQL Server automatically recompiles stored procedures and triggers when it is beneficial.

... but it will not do this on every call (unless WITH RECOMPILE is used), so if each execution can produce different SQL, you may be stuck with the same old plan for at least one call.

RECOMPILE query hint

The RECOMPILE query hint takes your parameter values into account when checking what needs to be recompiled, and it works at the statement level.
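Applied to the question's procedure (a sketch), the hint is appended to the generated statement, so only that statement gets a fresh plan on each call:

    -- Statement-level recompile: the rest of the procedure keeps its cached plan.
    SET @Sql += ' OPTION (RECOMPILE)';

    EXEC sp_executesql @Sql,
        N'@xIsActive BIT, @xIsTemplate BIT',
        @xIsActive = @IsActive,
        @xIsTemplate = @IsTemplate;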

WITH RECOMPILE option

WITH RECOMPILE (see section F) will compile a new execution plan on every call, so you will never have a sub-optimal plan, but you will pay the compilation overhead each time.
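It can also be requested per call, without changing the procedure definition:

    -- Forces a one-off recompile of the procedure's plan for this execution only.
    EXEC dbo.usp_MyObject_Search @IsActive = 1, @IsTemplate = NULL WITH RECOMPILE;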

Refactoring into several SPs

Looking at your specific case, the execution plan for the proc itself never changes, and the two SQL statements should each have a pre-prepared execution plan.

I would suggest that restructuring the code into separate SPs, rather than conditionally generating the SQL, would simplify things and ensure you always get an optimal execution plan without any magic SQL sauce.
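A sketch of that restructuring (the procedure names are hypothetical; each proc contains one static statement and so gets its own stable cached plan):

    CREATE PROCEDURE dbo.usp_MyObject_Search_ByActive
        @IsActive BIT
    AS
        SELECT svc.[MyObjectId], svc.[Name], svc.[IsActive], svc.[IsTemplate]
        FROM dbo.MyObject svc WITH (NOLOCK)
        WHERE svc.IsActive = @IsActive
        ORDER BY svc.[Name] Asc;
    GO

    CREATE PROCEDURE dbo.usp_MyObject_Search_ByTemplate
        @IsTemplate BIT
    AS
        SELECT svc.[MyObjectId], svc.[Name], svc.[IsActive], svc.[IsTemplate]
        FROM dbo.MyObject svc WITH (NOLOCK)
        WHERE svc.IsTemplate = @IsTemplate
        ORDER BY svc.[Name] Asc;
    GO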

