Is there a way to calculate correlation in T-SQL using OVER clauses instead of a CTE?

Say you have a table with columns, Date, GroupID, X and Y.

CREATE TABLE #sample
  (
     [Date]  DATETIME,
     GroupID INT,
     X       FLOAT,
     Y       FLOAT
  )

DECLARE @date DATETIME = getdate()

INSERT INTO #sample VALUES(@date, 1, 1,3)
INSERT INTO #sample VALUES(DATEADD(d, 1, @date), 1, 1,1)
INSERT INTO #sample VALUES(DATEADD(d, 2, @date), 1, 4,2)
INSERT INTO #sample VALUES(DATEADD(d, 3, @date), 1, 3,3)
INSERT INTO #sample VALUES(DATEADD(d, 4, @date), 1, 6,4)
INSERT INTO #sample VALUES(DATEADD(d, 5, @date), 1, 7,5)
INSERT INTO #sample VALUES(DATEADD(d, 6, @date), 1, 1,6)

and you want to calculate the correlation of X and Y for each group. I am currently using CTEs, which are a bit confusing:

;WITH DataAvgStd
     AS (SELECT GroupID,
                AVG(X)   AS XAvg,
                AVG(Y)   AS YAvg,
                STDEV(X) AS XStdev,
                STDEV(Y) AS YSTDev,
                COUNT(*) AS SampleSize
         FROM   #sample
         GROUP  BY GroupID),
     ExpectedVal
     AS (SELECT s.GroupID,
                SUM(( X - XAvg ) * ( Y - YAvg )) AS ExpectedValue
         FROM   #sample s
                JOIN DataAvgStd das
                  ON s.GroupID = das.GroupID
         GROUP  BY s.GroupID)
SELECT das.GroupID,
       ev.ExpectedValue / ( das.SampleSize - 1 ) / ( das.XStdev * das.YSTDev )
       AS
       Correlation
FROM   DataAvgStd das
       JOIN ExpectedVal ev
         ON das.GroupID = ev.GroupID

DROP TABLE #sample  
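For reference, the arithmetic the two CTEs perform can be sketched outside SQL. This Python sketch (illustrative only) uses the group-1 values from the inserts above, dates omitted, and reproduces the final `Correlation` expression; `statistics.stdev` is the sample standard deviation, matching T-SQL's `STDEV`:

```python
import statistics

# Group-1 (X, Y) pairs from the #sample inserts above
xs = [1, 1, 4, 3, 6, 7, 1]
ys = [3, 1, 2, 3, 4, 5, 6]

# First CTE (DataAvgStd): averages, sample stdevs, and sample size
x_avg, y_avg = statistics.mean(xs), statistics.mean(ys)
x_std, y_std = statistics.stdev(xs), statistics.stdev(ys)
n = len(xs)

# Second CTE (ExpectedVal): sum of products of deviations from the means
expected_value = sum((x - x_avg) * (y - y_avg) for x, y in zip(xs, ys))

# Final SELECT: sample Pearson correlation
correlation = expected_value / (n - 1) / (x_std * y_std)
print(correlation)
```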

There seems to be a way to use OVER and PARTITION BY to do it in one fell swoop without any subqueries. Ideally, T-SQL would have a function so you could write:

SELECT GroupID, CORR(X, Y) OVER(PARTITION BY GroupID)
FROM #sample
GROUP BY GroupID

There is no built-in correlation aggregate you can use with over(). You also can't collapse it into a single pass, because an expression like sum(x - avg(x)) nests one aggregate inside another, which isn't allowed. So you still need a with (CTE), but window functions let you drop the joins and the GROUP BYs.

Here it is:

;WITH DataAvgStd
     AS (SELECT GroupID,
                STDEV(X) over(partition by GroupID) AS XStdev,
                STDEV(Y) over(partition by GroupID) AS YSTDev,
                COUNT(*) over(partition by GroupID) AS SampleSize,
                ( X - AVG(X) over(partition by GroupID)) * ( Y - AVG(Y) over(partition by GroupID)) AS ExpectedValue
         FROM   #sample s)         
SELECT distinct GroupID,
       SUM(ExpectedValue) over(partition by GroupID) / (SampleSize - 1 ) / ( XStdev * YSTDev ) AS Correlation
FROM DataAvgStd 

And the equivalent single-pass formula from Wikipedia:

SELECT GroupID,
       Correlation=(COUNT(*) * SUM(X * Y) - SUM(X) * SUM(Y)) / 
                   (SQRT(COUNT(*) * SUM(X * X) - SUM(X) * SUM(X))
                    * SQRT(COUNT(*) * SUM(Y * Y) - SUM(Y) * SUM(Y)))
FROM #sample s
GROUP BY GroupID;
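As a sanity check, the single-pass formula can be replayed on the question's group-1 values; this Python sketch (illustrative, not part of the T-SQL) computes the same raw sums the GROUP BY query aggregates:

```python
import math

# Group-1 (X, Y) pairs from the question's #sample inserts
xs = [1, 1, 4, 3, 6, 7, 1]
ys = [3, 1, 2, 3, 4, 5, 6]
n = len(xs)

# Raw sums, mirroring SUM(X), SUM(Y), SUM(X*X), SUM(Y*Y), SUM(X*Y)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
syy = sum(y * y for y in ys)
sxy = sum(x * y for x, y in zip(xs, ys))

# Single-pass Pearson formula, as in the GROUP BY query
r = (n * sxy - sx * sy) / (math.sqrt(n * sxx - sx * sx)
                           * math.sqrt(n * syy - sy * sy))
print(r)
```

It produces the same value as the two-CTE version, since both compute the sample Pearson coefficient.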

Another approach:

You can compute both Pearson correlation coefficients (population and sample) in a single SELECT. The following example also generates its own sample data, so you can run it as-is and tweak the correlation:

-- Methods for calculating the two Pearson correlation coefficients
SELECT  
        -- For Population
        (avg(x * y) - avg(x) * avg(y)) / 
        (sqrt(avg(x * x) - avg(x) * avg(x)) * sqrt(avg(y * y) - avg(y) * avg(y))) 
        AS correlation_coefficient_population,
        -- For Sample
        (count(*) * sum(x * y) - sum(x) * sum(y)) / 
        (sqrt(count(*) * sum(x * x) - sum(x) * sum(x)) * sqrt(count(*) * sum(y * y) - sum(y) * sum(y))) 
        AS correlation_coefficient_sample
    FROM (
        -- The following generates a table of sample data containing two columns with a luke-warm and tweakable correlation 
        -- y = x for 0 thru 99, y = x - 100 for 100 thru 199, etc.  Execute it as a stand-alone to see for yourself
        -- x and y are CAST as DECIMAL to avoid integer math, you should definitely do the same
        -- Try TOP 100 or less for full correlation (y = x for all cases), TOP 200 for a PCC of 0.5, TOP 300 for one near 0.33, etc.
        -- The superfluous "+ 0" is where you could apply various offsets to see that they have no effect on the results
        SELECT TOP 200
                CAST(ROW_NUMBER() OVER (ORDER BY [object_id]) - 1 + 0 AS DECIMAL) AS x, 
                CAST((ROW_NUMBER() OVER (ORDER BY [object_id]) - 1) % 100 AS DECIMAL) AS y 
            FROM sys.all_objects
    ) AS a

As I already noted in the comments, you can try the example with TOP 100 or less for full correlation (y = x for all cases); TOP 200 gives a coefficient very close to 0.5; TOP 300, about 0.33; and so on. There is a place ("+ 0") to add an offset if you like; spoiler warning: it has no effect. Make sure you CAST your values as DECIMAL, since integer math can significantly affect these calculations.
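Worth noting: for the Pearson coefficient the two expressions are algebraically identical (the count/sum form is the average form multiplied through by n squared), so they yield the same value on the same data. A small Python cross-check of that claim, using the group-1 values from the question:

```python
import math

# Group-1 (X, Y) pairs from the question's #sample inserts
xs = [1.0, 1.0, 4.0, 3.0, 6.0, 7.0, 1.0]
ys = [3.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
n = len(xs)

def avg(vs):
    return sum(vs) / len(vs)

# "Population" form: moments via averages
pop = (avg([x * y for x, y in zip(xs, ys)]) - avg(xs) * avg(ys)) / (
    math.sqrt(avg([x * x for x in xs]) - avg(xs) ** 2)
    * math.sqrt(avg([y * y for y in ys]) - avg(ys) ** 2)
)

# "Sample" form: counts and raw sums
sx, sy = sum(xs), sum(ys)
samp = (n * sum(x * y for x, y in zip(xs, ys)) - sx * sy) / (
    math.sqrt(n * sum(x * x for x in xs) - sx * sx)
    * math.sqrt(n * sum(y * y for y in ys) - sy * sy)
)

print(pop, samp)  # both ≈ 0.2774
```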
