
I was wondering why there can't be a nilpotent matrix whose index of nilpotency is greater than its number of rows. For example, why does there not exist a nilpotent matrix of index 3 in $M_{2\times 2}(F)$?

When I looked this up, the explanations said it is related to ring theory, where an element $x$ is nilpotent if $x^n = 0$. I have only a very basic understanding of rings, but I do know group theory, so I reasoned that for $x^n = 0$ the exponent $n$ must be less than or equal to the order of the group, and that this explains the bound. Am I right?
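As a concrete sanity check on the $2\times 2$ case, the standard $2\times 2$ Jordan block is nilpotent of index exactly 2, matching the bound in question; a minimal numerical sketch (using numpy, with this particular matrix chosen only for illustration):

```python
import numpy as np

# The standard 2x2 nilpotent Jordan block: N != 0 but N^2 = 0,
# so its index of nilpotency is exactly 2 -- the size of the matrix.
N = np.array([[0, 1],
              [0, 0]])

print(np.any(N != 0))                            # True: N itself is nonzero
print(np.array_equal(N @ N, np.zeros((2, 2))))   # True: N^2 = 0
```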

  • If $N^m\neq0$ and $N^{m+1}=0$, then there is a vector $v$ such that $N^mv\neq0$. Show that $v,Nv,N^2v,\dots,N^mv$ are linearly independent. To show this, assume there is a linear dependence $a_0v+a_1Nv+\dots+a_mN^mv=0$. Applying $N$ to both sides you get a new linear dependence $a_0Nv+a_1N^2v+\dots+a_{m-1}N^{m}v=0$. Keep applying $N$ until you get to $a_0N^mv=0$, which implies $a_0=0$. Then the previous equation you obtained implies $a_1=0$, and so on: all the $a_k$ must be zero. – MoonLightSyzygy Jan 03 '20 at 18:59
  • Therefore, $m$ is at most the maximum number of linearly independent vectors that you can find in your space. – MoonLightSyzygy Jan 03 '20 at 19:03
  • But an $m\times m$ matrix space has dimension $m^2$, so it must have $m^2$ linearly independent vectors? – Ankush Kothiyal Jan 03 '20 at 19:14
  • The dimension of the vector space, not the dimension of the space of matrices. Also, $m$, in my argument, is not the size of the matrix, but the maximum power, of the nilpotent matrix $N$, that is not zero (the order of nilpotency of the matrix). – MoonLightSyzygy Jan 03 '20 at 19:15
  • Okay got it, thanks. – Ankush Kothiyal Jan 03 '20 at 19:38

1 Answer


The characteristic polynomial of any $n\times n$ matrix has degree $n$, and by the Cayley–Hamilton theorem the matrix is a root of that polynomial.

Since the minimal polynomial of a nilpotent matrix must divide $x^N$ for some $N$, and it also divides the characteristic polynomial, the minimal polynomial is of the form $x^k$ for some $1\leq k\leq n$. In particular, the $n$-th power of the matrix is already zero.
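This characteristic-polynomial argument can be illustrated symbolically; a small sketch with sympy, using a $3\times 3$ nilpotent matrix chosen for illustration:

```python
import sympy as sp

# A 3x3 nilpotent matrix; its characteristic polynomial is x^3,
# so Cayley-Hamilton forces N^3 = 0: the index can't exceed the size.
x = sp.symbols('x')
N = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [0, 0, 0]])

p = N.charpoly(x)              # characteristic polynomial as a PurePoly
print(p.as_expr())             # x**3
print(N**3 == sp.zeros(3, 3))  # True: N is a root of its char. polynomial
```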


Geometrically, another way to look at it is that, viewing a nilpotent matrix $T$ as a linear transformation of $V=F^n$, $V\supseteq T(V)\supseteq T^2(V)\supseteq\cdots\supseteq \{0\}$ is a descending chain of subspaces of $V$.

Now, it cannot happen that $T^k(V)=T^{k+1}(V)$ before the chain reaches zero: applying $T$ to both sides would give $T^{k+1}(V)=T^{k+2}(V)$ as well, so the chain would stabilize at that subspace and never reach zero no matter how high $k$ goes, contradicting nilpotency.

So the chain is a strictly descending chain of subspaces of the $n$-dimensional space $V$ until it reaches zero. At each link the dimension must drop by at least one, and no strictly descending chain in $V$ has more than $n$ links, so you are guaranteed it will take no more than $n$ applications of $T$ to reach zero.
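The strictly descending chain of images can be seen directly by computing $\dim T^k(V)=\operatorname{rank}(T^k)$; a minimal sketch with numpy, using a $4\times 4$ Jordan block as the example:

```python
import numpy as np

# For nilpotent T on F^n, dim T^k(V) = rank(T^k) strictly decreases
# until it hits 0, so at most n applications of T give the zero map.
T = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])  # 4x4 Jordan block, nilpotent of index 4

ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(T, k)) for k in range(5)]
print(ranks)  # [4, 3, 2, 1, 0]: the dimension drops by one at each link
```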

rschwieb
  • Thank you so much. – Ankush Kothiyal Jan 03 '20 at 19:19
  • @MoonLightSyzygy I'll give you that the first one is a sketch, but I'd wager the second argument would be considered complete in an undergraduate ring theory course. Personally, I think the details in your version obscure the big picture. But now all versions are available, so it is a win for all readers. – rschwieb Jan 03 '20 at 20:54