How does DATEADD affect SQL query performance?

Say, for example, I am joining against a numbers table to perform some operation between two dates in a subquery, like this:

select n,
       (select avg(col1)
        from table1
        where timestamp between dateadd(minute, 15*n, @ArbitraryDate)
                            and dateadd(minute, 15*(n+1), @ArbitraryDate))
from numbers
where n < 1200

Would the query perform better if, for example, I built the dates by concatenating varchars and casting, rather than using the DATEADD function?
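For reference, the varchar-concatenation alternative being asked about might look something like this sketch (the @n value, the style-120 formatting and the zero-padding are just illustrative assumptions, and it only holds together while the offset stays inside a single day):

declare @ArbitraryDate datetime, @n int;
set @ArbitraryDate = '20090101';
set @n = 7;

-- DATEADD arithmetic, as in the original query:
select dateadd(minute, 15 * @n, @ArbitraryDate) as via_dateadd;

-- Hypothetical varchar-concatenation variant: build 'yyyy-mm-dd hh:mi:00' as
-- text and cast it back to DATETIME. It pays for datetime -> varchar -> datetime
-- conversions and breaks once 15 * @n exceeds 24 hours.
select cast(convert(varchar(10), @ArbitraryDate, 120) + ' '
            + right('0' + cast((15 * @n) / 60 as varchar(2)), 2) + ':'
            + right('0' + cast((15 * @n) % 60 as varchar(2)), 2) + ':00'
            as datetime) as via_varchar_concat;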

+7
performance sql sql-server tsql
6 answers

Doing the arithmetic on datetime values with DATEADD is likely to be faster.

Check this question: Most efficient way in SQL Server to get date from date+time?

The accepted answer (not mine!) demonstrates DATEADD beating string conversions. I saw a similar comparison many years ago that showed the same thing.
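For context, that linked question compares the pure date arithmetic idiom against a string round-trip for stripping the time portion; a minimal sketch of the two styles:

-- Arithmetic only: count whole days since day 0 and add them back.
-- Stays in DATETIME the whole time.
select dateadd(day, datediff(day, 0, getdate()), 0) as midnight_via_dateadd;

-- String round-trip: format as yyyymmdd text, then cast back to DATETIME.
-- Same result, but pays for a datetime -> varchar -> datetime conversion.
select cast(convert(varchar(8), getdate(), 112) as datetime) as midnight_via_varchar;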

+4

Be careful with dates, see How to work with dates in SQL Server?

I once tuned a query down from 24 minutes to 36 seconds. Just don't apply date or conversion functions to the column itself; see here: Only in a database can you get a 1000%+ improvement by changing a few lines of code
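The point of that advice is sargability: once the column is wrapped in a function, an index on it can no longer be used for a seek. A minimal sketch of the two forms, with illustrative dates:

-- Non-sargable: the function is applied to the column, so an index on
-- timestamp cannot be seeked; expect a scan of every row.
select avg(col1)
from table1
where convert(varchar(10), timestamp, 120) = '2009-06-15';

-- Sargable: the column is left alone and the constants carry the work,
-- so an index on timestamp can be used for a range seek.
select avg(col1)
from table1
where timestamp >= '20090615'
  and timestamp <  '20090616';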

To see which query performs better, execute both and compare the execution plans. You can also use STATISTICS IO and STATISTICS TIME to get the number of reads and the time taken to run each query.

+4

I would not go with varchar concatenation.

DATEADD will perform better than building a string and casting it to DATETIME.

As always, it would be best to profile the two options and determine which actually performs better, since results can vary from database to database.

+3

Most likely there will be no real difference either way. I would run this:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

followed by both versions of your query, so that you can see and compare the real execution costs.
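In practice the session looks something like this sketch; the two variants are placeholders for whichever versions of the query you are comparing:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... run the DATEADD version of the query here ...
-- ... then run the varchar-concatenation version here ...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

-- The Messages tab reports logical reads, CPU time and elapsed time
-- per statement, which is what you compare between the two versions.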

+3

As long as the calculations in your predicate do not reference columns of the table you are querying, your choice shouldn't matter much either way.

If you were computing something from a table1 column inside that calculation, I would watch out for a table scan or index scan, since the predicate may no longer be sargable.
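To illustrate the distinction, a small sketch (the created_at column is hypothetical):

-- Sargable: DATEADD only touches the variable and a constant, so the
-- comparison is still "timestamp BETWEEN two computed values"; an index
-- on table1(timestamp) can be seeked.
select avg(col1)
from table1
where timestamp between dateadd(minute, 15 * 60, @ArbitraryDate)
                    and dateadd(minute, 15 * 61, @ArbitraryDate);

-- Not sargable: here a table1 column is wrapped in DATEADD, so every row
-- has to be evaluated; expect a table or index scan.
select avg(col1)
from table1
where dateadd(minute, 15, created_at) <= @ArbitraryDate;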

In any case, test it (or post your results!)

+2

Why would you use a correlated subquery at all? That will slow you down far more than DATEADD will. Correlated subqueries are like cursors: they run row by row. Would something like this work?

 select n.n, t.avgcol1
 from numbers n
 left outer join (
     -- pre-aggregate once per 15-minute slot instead of once per outer row
     select num.n, avg(t1.col1) as avgcol1
     from numbers num
     join table1 t1
       on t1.timestamp between dateadd(minute, 15 * num.n, @ArbitraryDate)
                           and dateadd(minute, 15 * (num.n + 1), @ArbitraryDate)
     where num.n < 1200
     group by num.n
 ) t on n.n = t.n
 where n.n < 1200
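Another way to decorrelate it, sketched under the same assumptions about the tables, is to bucket the table1 rows directly with DATEDIFF and aggregate once per bucket (note these buckets are half-open, unlike BETWEEN's inclusive endpoints):

 select n.n, t.avgcol1
 from numbers n
 left outer join (
     -- assign each table1 row to a 15-minute bucket relative to @ArbitraryDate
     select datediff(minute, @ArbitraryDate, timestamp) / 15 as n,
            avg(col1) as avgcol1
     from table1
     where timestamp >= @ArbitraryDate
       and timestamp <  dateadd(minute, 15 * 1200, @ArbitraryDate)
     group by datediff(minute, @ArbitraryDate, timestamp) / 15
 ) t on n.n = t.n
 where n.n < 1200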
+2
