1

A scalar $\lambda$ is called an eigenvalue of a matrix $A$ if there is a nontrivial solution $\vec x$ of $A\vec x = \lambda \vec x$; such an $\vec x$ is called an eigenvector corresponding to $\lambda$.

What I don't understand is why there should be a nontrivial solution at all. If the solution is unique, $\lambda$ would still exist, right? Suppose that for a matrix $A$ and a value of $\lambda$, say $3$, the equation $A \vec x - 3 \vec x = \vec 0$ (which is essentially the same as the one above) has only the trivial solution; then there would be just one single vector that satisfies the equation. Why can't we say that this unique vector is an eigenvector, and that $\lambda = 3$ is an eigenvalue?

2 Answers

5

Note that $$ \boldsymbol A \boldsymbol 0 = \lambda \boldsymbol 0 $$ for all $\lambda$; hence if the solution is unique, it must be $\boldsymbol 0$, which is not of much interest.
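For a concrete illustration (with a matrix chosen here purely as an example), take
$$ A = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}. $$
For $\lambda = 3$ the system $(A - 3I)\vec x = \vec 0$ reads $-x_1 = 0$ and $-x_2 = 0$, so the solution is unique, but it is exactly $\vec x = \vec 0$, which every $\lambda$ admits; the value $3$ tells us nothing about $A$. For $\lambda = 2$, by contrast, $(A - 2I)\vec x = \vec 0$ is satisfied by every vector, so there are nontrivial solutions and $\lambda = 2$ genuinely is an eigenvalue.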

xbh
  • 9,033
  • 1
    I think I've figured out why scientists go mad. I'm not a scientist, but when one studies maths for 8 hours straight, he begins to forget what he read an hour ago. But this is the perfect answer. – mathmaniage Aug 22 '18 at 11:13
  • 1
    @mathmaniage Thanks. Glad to help. – xbh Aug 22 '18 at 11:14
2

You can also think of it this way: matrices correspond to some kind of transformation.

If you consider $2$-dimensional vectors, then matrices will transform, say, a square by enlarging it, rotating it, shearing it, and so on. The eigenvectors are the vectors that are only changed by a scalar multiple when you apply the transformation to them. That is, they're the vectors whose direction stays the same; only their length changes.
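For instance (with a matrix picked here just for concreteness), the scaling matrix
$$ A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} $$
stretches the plane horizontally: it sends $(1,0)$ to $(2,0)$ and leaves $(0,1)$ where it is, so $(1,0)$ is an eigenvector with eigenvalue $2$ and $(0,1)$ is an eigenvector with eigenvalue $1$, whereas a vector like $(1,1)$ gets sent to $(2,1)$ and so changes direction, i.e. it is not an eigenvector.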

Now, why are there always non-trivial eigenvectors? It's a slightly tricky question, which I can only answer part of: the eigenvalues $\lambda$ are found by solving $$ \det(A-\lambda I_{n}) = 0 $$ for $\lambda$. Such a $\lambda$ must always exist (it's not always real; it can be complex), since $\det(A-\lambda I_{n})$ is a polynomial of degree $n$ in $\lambda$ and every nonconstant polynomial has a complex root (this is the Fundamental Theorem of Algebra). Note that if the eigenvalue is not real, then the eigenvector will not be real either. See here for more.
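To see the determinant recipe in action (again with an illustrative matrix), take the $90^\circ$ rotation
$$ A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad \det(A - \lambda I_{2}) = \lambda^{2} + 1. $$
Setting this to zero gives $\lambda = \pm i$: the eigenvalues exist, as the Fundamental Theorem of Algebra promises, but they are not real, and the corresponding eigenvectors $(1, \mp i)$ are not real either. Geometrically this matches the intuition above: a rotation leaves no real direction unchanged.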

Bilbottom
  • 2,718
  • Another question came to mind that I didn't want to make a post of: are the basis vectors of an eigenspace themselves eigenvectors? By analogy I'd say yes, but is there a proof of this? – mathmaniage Aug 22 '18 at 11:45
  • Absolutely. In fact, one constructs the eigenspace by taking linear combinations of the eigenvectors, so your question is true by definition once one shows that the eigenvectors are linearly independent (I'm not sure where to find such a proof, though). – Bilbottom Aug 22 '18 at 11:55
  • ... and is linear independence affected by transposition? And how do I prove this? (I didn't really come across these proofs in the book.) – mathmaniage Aug 22 '18 at 12:22
  • What do you mean by transposition? It's not a term that I'm familiar with. – Bilbottom Aug 22 '18 at 12:29
  • transpose of the matrix A – mathmaniage Aug 22 '18 at 12:41
  • For a square matrix $A$, no. This follows from the fact that the row rank equals the column rank of any matrix (see the Wiki article), so if a square matrix has linearly independent rows (or columns), then it must also have linearly independent columns (or rows). – Bilbottom Aug 22 '18 at 12:45