Python pandas function applied to all pairwise string combinations

I am trying to run a function (a correlation) over all pairwise combinations of rows (labelled by strings) in a pandas DataFrame:

    import itertools
    from scipy.stats import pearsonr

    stats = dict()
    for l in itertools.combinations(dat.index.tolist(), 2):
        stats[l] = pearsonr(dat.loc[l[0], :], dat.loc[l[1], :])  # stores (r, p)

Of course, this is pretty slow, and I am wondering how to do the equivalent with something like apply(), or in some other faster way.

Note: I know that I can compute the correlations directly with the pandas corr() method, but it does not return the associated p-values (which I need for filtering).

1 answer

This should give you some speedup. Define a Pearson function that takes an already-computed correlation coefficient and returns it together with its p-value; the formula below is adapted from the one used inside scipy.stats.pearsonr:

    from scipy.special import betainc as betai  # regularized incomplete beta function
                                                # (called betai in older SciPy releases)

    def Pearson(r, n=len(dat)):
        # n = number of observations (rows of dat), fixed at definition time
        r = max(min(r, 1.0), -1.0)
        df = n - 2
        if abs(r) == 1.0:
            prob = 0.0
        else:
            t_squared = r**2 * (df / ((1.0 - r) * (1.0 + r)))
            prob = betai(0.5*df, 0.5, df/(df + t_squared))
        return (r, prob)
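
As a quick check (using the dat frame from the question, and assuming its rows are the observations), the p-value this returns should match what scipy.stats.pearsonr reports for the same pair of columns, up to floating-point error:

    from scipy.stats import pearsonr

    c1, c2 = dat.columns[:2]                         # any two columns
    r_scipy, p_scipy = pearsonr(dat[c1], dat[c2])    # scipy's (r, p)
    r_ours, p_ours = Pearson(dat.corr().loc[c1, c2]) # p recomputed from r alone

    print(r_scipy, p_scipy)
    print(r_ours, p_ours)   # should agree up to floating-point error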

Use applymap, which operates elementwise on the result of dat.corr(). Each correlation coefficient r is passed to Pearson, which only has to add the p-value. (Note that dat.corr() correlates columns; if your variables sit in the rows, as in the question, run it on dat.T instead.)

    np.random.seed(10)
    dat = pd.DataFrame(np.random.randn(5, 5))
    dat[0] = np.arange(5)  # seed two correlated cols
    dat[1] = np.arange(5)  # ^^^

    dat.corr().applymap(Pearson)

       0                                   1                                   2                                   3                                   4
    0  (1.0, 0.0)                          (1.0, 0.0)                          (0.713010395675, 0.176397305541)   (0.971681374885, 0.00569624513678)  (0.0188249871501, 0.97603269768)
    1  (1.0, 0.0)                          (1.0, 0.0)                          (0.713010395675, 0.176397305541)   (0.971681374885, 0.00569624513678)  (0.0188249871501, 0.97603269768)
    2  (0.713010395675, 0.176397305541)    (0.713010395675, 0.176397305541)    (1.0, 0.0)                         (0.549623945218, 0.337230071385)    (-0.280514871109, 0.647578381153)
    3  (0.971681374885, 0.00569624513678)  (0.971681374885, 0.00569624513678)  (0.549623945218, 0.337230071385)   (1.0, 0.0)                          (0.176622737448, 0.77629170593)
    4  (0.0188249871501, 0.97603269768)    (0.0188249871501, 0.97603269768)    (-0.280514871109, 0.647578381153)  (0.176622737448, 0.77629170593)     (1.0, 0.0)
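
Since the goal is to filter on the p-value, one possible way to flatten that tuple-valued frame into a long table and keep only the significant pairs (a sketch; the 0.05 threshold is just an example, and the a < b trick assumes comparable labels like the integers here):

    res = dat.corr().applymap(Pearson)
    pairs = res.stack()                               # MultiIndex (col1, col2) -> (r, p)
    pairs = pairs[[a < b for a, b in pairs.index]]    # drop the diagonal and symmetric duplicates
    table = pd.DataFrame(pairs.tolist(), index=pairs.index, columns=['r', 'p'])
    significant = table[table['p'] < 0.05]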

You see a speedup with this method when dat is large, but it is still fairly slow because the work is done elementwise in Python.

    np.random.seed(10)
    dat = pd.DataFrame(np.random.randn(100, 100))

    %%timeit
    dat.corr().applymap(Pearson)

    10 loops, best of 3: 118 ms per loop

    %%timeit
    stats = dict()
    for l in combinations(dat.index.tolist(), 2):
        stats[l] = pearsonr(dat.loc[l[0], :], dat.loc[l[1], :])

    1 loops, best of 3: 1.56 s per loop
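
If that is still too slow, the elementwise applymap can be avoided altogether: the p-value is a deterministic function of r and n, so it can be computed for the whole correlation matrix in one vectorized pass with scipy.special.betainc. A rough sketch (the name corr_with_pvalues is just illustrative; same assumptions as Pearson above, i.e. a fixed number of observations per column and no missing data):

    import numpy as np
    import pandas as pd
    from scipy.special import betainc

    def corr_with_pvalues(frame):
        """Return (r_matrix, p_matrix) as DataFrames, fully vectorized."""
        r = frame.corr()
        n = len(frame)                     # observations per column
        dof = n - 2                        # degrees of freedom
        rv = r.values.clip(-1.0, 1.0)
        with np.errstate(divide='ignore', invalid='ignore'):
            t_squared = rv**2 * (dof / ((1.0 - rv) * (1.0 + rv)))
            p = betainc(0.5 * dof, 0.5, dof / (dof + t_squared))
        p[np.abs(rv) == 1.0] = 0.0         # same convention as Pearson above
        return r, pd.DataFrame(p, index=r.index, columns=r.columns)

This skips the Python-level loop entirely, and filtering becomes a plain boolean mask, e.g. r.where(p < 0.05) after r, p = corr_with_pvalues(dat).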