1. I don’t understand why PCA is needed for dimensionality reduction.

An eigenvalue is nothing but the variance along the direction of its eigenvector. So why don’t we just compute the variance of each column (dimension) and drop a column from the dataset if its variance is low?

Is it because eigenvalues capture variation not only along the x, y, z axes but also along arbitrary directions?
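To make the question concrete, here is a minimal sketch (with made-up synthetic data) of the situation I have in mind: two correlated columns whose per-column variances look similar, so neither is an obvious candidate to drop, yet the covariance eigenvalues reveal one dominant direction and one near-negligible one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two strongly correlated features: most of the variance lies along
# the diagonal direction, not along either original axis.
x = rng.normal(size=1000)
data = np.column_stack([x, x + rng.normal(scale=0.1, size=1000)])

# Per-column variance is roughly the same for both features,
# so column-wise variance gives no reason to drop either one.
print(data.var(axis=0))

# Eigen-decomposition of the covariance matrix finds the directions
# of maximum and minimum variance (the principal axes).
eigvals, eigvecs = np.linalg.eigh(np.cov(data.T))
print(eigvals)  # one eigenvalue is large, the other is near zero
```

Is this the right intuition: dropping a raw column throws away an axis-aligned direction, while PCA can discard a low-variance direction that is a combination of several columns?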

2. In the 3-D example, we transformed the data onto 2 new axes by removing the third eigenvector (the one with a 0 eigenvalue). So what exactly are these 2 new dimensions? And how can we tell which direction the zero-eigenvalue eigenvector represents?
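Here is my attempt at a sketch of that 3-D case (synthetic data of my own, not from the original example): points lying exactly in the plane z = x + y, so one direction has zero variance, and the two "new dimensions" come out as the coordinates along the two remaining eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# 3-D points that actually lie in a 2-D plane (z = x + y),
# so one direction carries zero variance.
xy = rng.normal(size=(500, 2))
data = np.column_stack([xy, xy.sum(axis=1)])
data -= data.mean(axis=0)  # center before PCA

eigvals, eigvecs = np.linalg.eigh(np.cov(data.T))
print(eigvals)  # smallest eigenvalue is ~0: the discarded direction

# The 2 new dimensions are each point's coordinates along the two
# remaining eigenvectors (each a linear combination of x, y, z).
projected = data @ eigvecs[:, 1:]  # drop the ~0-eigenvalue column
print(projected.shape)             # (500, 2)
```

Am I right that the zero-eigenvalue eigenvector is identified simply by pairing each eigenvalue with its eigenvector column, and that the new dimensions have no fixed meaning in terms of the original x, y, z?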

Could you please clarify these two points?
