I’m not sure what you mean by “plot a Markov graph as vectors.” The way that vectors enter the picture is that when you have a finite number $n$ of states, the probabilities that the system is in each state at time $k$ can be collected into a state vector: a row vector $\boldsymbol\pi_k\in[0,1]^n$ with the additional constraint that its elements sum to $1$. In a discrete-time process, successive state vectors are related by a transition matrix $P$ via $\boldsymbol\pi_{k+1}=\boldsymbol\pi_kP$. Geometrically, these state vectors all lie on the hyperplane $\pi_1+\pi_2+\cdots+\pi_n=1$, which is at a distance of $1/\sqrt n$ from the origin.
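To make the iteration concrete, here’s a minimal sketch in Python; the $2\times2$ transition matrix is a made-up example, not anything from your question:

```python
import numpy as np

# Hypothetical 2-state chain. Row i holds the probabilities of
# transitioning from state i to each state, so each row sums to 1
# (a row-stochastic matrix).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

pi = np.array([1.0, 0.0])   # start with all probability mass in state 0
for k in range(6):
    print(f"pi_{k} = {pi}")
    pi = pi @ P             # pi_{k+1} = pi_k P
```

Each printed vector has nonnegative entries summing to $1$, and the sequence visibly settles toward a fixed vector, which is exactly the stationary distribution discussed next.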
A stationary distribution of $P$ is simply a state vector $\boldsymbol\pi$ that remains the same after a transition, i.e., $\boldsymbol\pi P=\boldsymbol\pi$. In other words, it’s a fixed point of the transformation represented by $P$. This equation is just an instance of the general eigenvector equation $\mathbf v P=\lambda\mathbf v$ with $\lambda=1$, so a stationary distribution of the process represented by the transition matrix $P$ is a left eigenvector of $P$ with eigenvalue $1$. Fundamentally, an eigenvector of a matrix corresponds to a line that is mapped to itself by the transformation that the matrix represents. That the stationary distribution is a *left* eigenvector is just an artifact of using row vectors: other sources use column vectors instead, and there stationary distributions are, naturally, right eigenvectors (of the transposed, column-stochastic matrix $P^\top$).
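Correspondingly, you can compute a stationary distribution numerically by extracting a left eigenvector of $P$ for eigenvalue $1$, i.e., a right eigenvector of $P^\top$, and rescaling it to sum to $1$. A sketch, reusing the made-up matrix from above:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Left eigenvectors of P are right eigenvectors of P^T, so diagonalize
# P^T and pick out the eigenvalue (numerically) closest to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1))
pi = np.real(eigvecs[:, idx])
pi /= pi.sum()              # eigenvectors have arbitrary scale; normalize

print(pi)                   # [0.8333..., 0.1666...], i.e., (5/6, 1/6)
print(pi @ P)               # same vector back: pi P = pi
```

The normalization step matters because an eigenvector is only determined up to a scalar multiple; dividing by the sum of the entries picks out the unique representative that is an actual probability distribution.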