I am trying to achieve a better understanding of the relationship between different uses of eigenvectors, in particular how network applications (eigenvector centrality, PageRank) relate to dimension-reduction applications (like principal components analysis). A geometric interpretation would be helpful.

I have (I think) a reasonable geometric understanding of eigenvectors as a rotation of data points (or a covariance matrix) in PCA. I also understand the general idea of eigenvector centrality and its approximation as the result of a recursive multiplication of an adjacency matrix. And I understand that there is an interpretation in terms of random walks through the network.

But I have a hard time connecting these different facts back to the more concrete geometrical interpretation of PCA. Is there a way to visualize eigenvector centrality in terms of rotation, analogous to PCA?
My question is related to this older question but narrower in scope: I am not asking about the general principle underlying many applications of eigenvectors, but trying to 'translate' what I understand about PCA to eigenvector centrality in networks, to gain a better understanding of the latter, ideally with a geometric interpretation I can visualize (while recognizing that this involves more than three dimensions for nontrivial networks, so that will be tricky).
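For concreteness, here is what I mean by the "recursive multiplication" approximation of eigenvector centrality, as a minimal NumPy sketch (the 4-node adjacency matrix is a hypothetical example of my own, not from any particular source): repeated multiplication by the adjacency matrix, with renormalization, converges to the leading eigenvector, which is exactly the centrality vector.

```python
import numpy as np

# Hypothetical 4-node undirected network (adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# Power iteration: repeatedly multiply by A and renormalize.
# The vector converges to the leading eigenvector of A,
# i.e. the eigenvector centrality scores.
v = np.ones(A.shape[0])
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)

# Cross-check against a direct eigendecomposition.
w, V = np.linalg.eigh(A)
lead = np.abs(V[:, np.argmax(w)])
print(np.allclose(v, lead, atol=1e-6))  # the two agree
```

So algebraically the connection is clear to me: both PCA and centrality extract leading eigenvectors of a symmetric matrix (covariance there, adjacency here). What I am missing is the geometric picture on the network side.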
Yes, you can create some analogies, but I'm not sure that provides insight/intuition. – Michael T Jan 17 '25 at 06:27