I was intrigued by this, so I ran some tests; my code is below.
The plots show that the first KernelPCA component is the best discriminator for this dataset. However, when the explained variance ratios are computed following @EelkeSpaak's explanation, the first component accounts for only about 50% of the variance, which does not match how well it separates the classes. This inclines me to agree with @Krishna Kalyan's explanation instead.
    # get data
    from sklearn.datasets import make_moons
    import numpy as np
    import matplotlib.pyplot as plt

    x, y = make_moons(n_samples=100, random_state=123)
    plt.scatter(x[y==0, 0], x[y==0, 1], color='red', marker='^', alpha=0.5)
    plt.scatter(x[y==1, 0], x[y==1, 1], color='blue', marker='o', alpha=0.5)
    plt.show()
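For reference, here is a minimal sketch of the ratio calculation I am referring to, assuming (per @EelkeSpaak's explanation) that each "explained variance ratio" is a kernel eigenvalue divided by the sum of all kernel eigenvalues. The `gamma=15` value is just an illustrative choice for this dataset, not taken from the original question.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.decomposition import KernelPCA

    x, y = make_moons(n_samples=100, random_state=123)

    # keep all components so the eigenvalue sum covers the full kernel spectrum
    kpca = KernelPCA(n_components=None, kernel='rbf', gamma=15)
    x_kpca = kpca.fit_transform(x)

    # eigenvalues of the centered kernel matrix, sorted in decreasing order
    # (this attribute was named lambdas_ in scikit-learn < 1.0)
    eigenvalues = kpca.eigenvalues_
    explained_variance_ratio = eigenvalues / eigenvalues.sum()
    print(explained_variance_ratio[:2])

With this definition the first component's ratio sits far below what its visual class separation suggests, which is the discrepancy described above.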
Faz