How to calculate Jaccard similarity with pandas

I have a dataframe of shape (1510, 1399). Columns represent products, rows represent the values (0 or 1) assigned by a user to each product. How can I calculate jaccard_similarity_score?


I created a placeholder dataframe listing product vs. product:

data_ibs = pd.DataFrame(index=data_g.columns,columns=data_g.columns) 

I'm not sure how to iterate through data_ibs to calculate the similarities.

    for i in range(0, len(data_ibs.columns)):
        # Loop through the columns for each column
        for j in range(0, len(data_ibs.columns)):
            .........
1 answer

Short, vectorized (fast) answer:

Use "hamming" from pairwise scikit learn distances:

    from sklearn.metrics.pairwise import pairwise_distances

    jac_sim = 1 - pairwise_distances(df.T, metric="hamming")
    # optionally convert it to a DataFrame
    jac_sim = pd.DataFrame(jac_sim, index=df.columns, columns=df.columns)
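Applied to the question's setup, a minimal sketch (assuming data_g is the (1510, 1399) frame of 0/1 values from the question) produces the product-by-product matrix directly, with no need to fill the empty data_ibs placeholder by hand:

    import pandas as pd
    from sklearn.metrics.pairwise import pairwise_distances

    # columns of data_g are products, so transpose to compare products
    jac_sim = 1 - pairwise_distances(data_g.T, metric="hamming")
    # resulting matrix is (1399, 1399): product vs. product
    jac_sim = pd.DataFrame(jac_sim, index=data_g.columns, columns=data_g.columns)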

Explanation:

Suppose this is your dataset:

    import pandas as pd
    import numpy as np

    np.random.seed(0)
    df = pd.DataFrame(np.random.binomial(1, 0.5, size=(100, 5)), columns=list('ABCDE'))
    print(df.head())

       A  B  C  D  E
    0  1  1  1  1  0
    1  1  0  1  1  0
    2  1  1  1  1  0
    3  0  0  1  1  1
    4  1  1  0  1  0

Using sklearn's jaccard_similarity_score, the similarity between columns A and B is:

    from sklearn.metrics import jaccard_similarity_score
    print(jaccard_similarity_score(df['A'], df['B']))

    0.43

This is the fraction of rows that have the same value in both columns, out of the total number of rows, 100.
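You can check that number directly with plain pandas (using the df defined above):

    # fraction of the 100 rows where columns A and B agree
    print((df['A'] == df['B']).mean())

    0.43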

As far as I know, there is no pairwise version of jaccard_similarity_score, but there are pairwise versions of distances.

However, SciPy defines the Jaccard distance as follows:

Given two vectors, u and v, the Jaccard distance is the proportion of those elements u[i] and v[i] that disagree where at least one of them is non-zero.

So it excludes rows where both columns have 0 values, while jaccard_similarity_score does not. The Hamming distance, on the other hand, matches the similarity definition:

The proportion of those vector elements between two n-vectors u and v which disagree.
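To make the difference concrete, here is a small added illustration (not part of the original answer) using SciPy's jaccard and hamming functions on two short vectors that share some zero positions:

    from scipy.spatial.distance import jaccard, hamming
    import numpy as np

    u = np.array([1, 1, 0, 0])
    v = np.array([1, 0, 0, 0])

    # Jaccard ignores the two positions where both vectors are zero:
    # 1 disagreement out of 2 positions with at least one non-zero value
    print(jaccard(u, v))   # 0.5

    # Hamming counts all 4 positions: 1 disagreement out of 4
    print(hamming(u, v))   # 0.25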

So, if you want to calculate jaccard_similarity_score, you can use 1 - hamming:

    from sklearn.metrics.pairwise import pairwise_distances

    1 - pairwise_distances(df.T, metric="hamming")

    array([[ 1.  ,  0.43,  0.61,  0.55,  0.46],
           [ 0.43,  1.  ,  0.52,  0.56,  0.49],
           [ 0.61,  0.52,  1.  ,  0.48,  0.53],
           [ 0.55,  0.56,  0.48,  1.  ,  0.49],
           [ 0.46,  0.49,  0.53,  0.49,  1.  ]])
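As a quick added sanity check (assuming the df defined above), the (A, B) entry of this matrix matches the single-pair score computed earlier:

    import numpy as np

    sim = 1 - pairwise_distances(df.T, metric="hamming")
    # True: same value as jaccard_similarity_score(df['A'], df['B'])
    print(np.isclose(sim[0, 1], 0.43))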

In DataFrame format:

    jac_sim = 1 - pairwise_distances(df.T, metric="hamming")
    jac_sim = pd.DataFrame(jac_sim, index=df.columns, columns=df.columns)
    # jac_sim = np.triu(jac_sim) to zero out the lower triangle
    # jac_sim = np.tril(jac_sim) to zero out the upper triangle

          A     B     C     D     E
    A  1.00  0.43  0.61  0.55  0.46
    B  0.43  1.00  0.52  0.56  0.49
    C  0.61  0.52  1.00  0.48  0.53
    D  0.55  0.56  0.48  1.00  0.49
    E  0.46  0.49  0.53  0.49  1.00

You can do the same thing by iterating over column combinations, but that will be much slower.

    import itertools

    sim_df = pd.DataFrame(np.ones((5, 5)), index=df.columns, columns=df.columns)
    for col_pair in itertools.combinations(df.columns, 2):
        sim_df.loc[col_pair] = sim_df.loc[tuple(reversed(col_pair))] = \
            jaccard_similarity_score(df[col_pair[0]], df[col_pair[1]])
    print(sim_df)

          A     B     C     D     E
    A  1.00  0.43  0.61  0.55  0.46
    B  0.43  1.00  0.52  0.56  0.49
    C  0.61  0.52  1.00  0.48  0.53
    D  0.55  0.56  0.48  1.00  0.49
    E  0.46  0.49  0.53  0.49  1.00
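If you prefer SciPy, a similar vectorized result can be had from pdist plus squareform (an added sketch, not from the original answer; pdist computes only the condensed upper triangle, which can save memory on wide frames):

    from scipy.spatial.distance import pdist, squareform

    # condensed pairwise distances over columns, expanded to a square matrix;
    # squareform puts 0 on the diagonal, so 1 - ... gives 1.0 self-similarity
    jac_sim = 1 - squareform(pdist(df.T.values, metric="hamming"))
    jac_sim = pd.DataFrame(jac_sim, index=df.columns, columns=df.columns)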
