I have come across the following theorem (see below) in my linear algebra notes and am slightly confused. I feel I understand both the statement and the proof; my confusion arises from the fact that, for (1), E being minimal (with respect to inclusion) amongst generating sets is already equivalent to it being a basis, and similarly, for (2), L being maximal amongst linearly independent sets means it is a basis. My question is why the proof has been framed this way, and why L and E need to be related by inclusion at all. Why is this a helpful way of proving the characterisation of bases (c.o.b.)?

(A useful variant on the Characterisation of Bases). Let V be a vector space.

(1) If L ⊂ V is a linearly independent subset and E is minimal amongst all generating sets of our vector space with the property that L ⊆ E, then E is a basis.

(2) If E ⊆ V is a generating system and if L is maximal amongst all linearly independent subsets of the vector space with the property that L ⊆ E, then L is a basis.

PROOF. (1) If E were not a basis, then there would be a non-trivial relation between its vectors $\lambda_1\vec v_1 + \cdots + \lambda_r\vec v_r = \vec 0$ with $r \ge 1$, the $\vec v_i \in E$ pairwise distinct, and all $\lambda_i \ne 0$. Not all the vectors $\vec v_i$ could belong to L because L is linearly independent. Thus there is a $\vec v_i$ belonging to $E \setminus L$ and it can be written as a linear combination of the other elements of E. But then $E \setminus \{\vec v_i\}$ is also a generating system that contains L and so E was not minimal.

(2) If L were not a basis, then L could not be a generating set and so there would necessarily be a vector $\vec v \in E$ that didn't lie in the subspace generated by L. If we add that vector to L then we obtain a bigger linearly independent subset contained in E and so L was not maximal.
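
To check that I follow both arguments, here is a quick numerical sketch in $\mathbb{R}^3$ (the NumPy rank checks are my own, not from the notes): for (1), removing a redundant vector of $E \setminus L$ leaves a generating set containing $L$; for (2), a vector of $E$ outside $\text{span}(L)$ enlarges $L$ to a bigger independent subset of $E$.

```python
import numpy as np

def rank(vectors):
    """Rank of the matrix whose columns are the given vectors."""
    return np.linalg.matrix_rank(np.column_stack(vectors))

e1, e2, e3 = np.eye(3)

# (1): E generates R^3 and contains L = {e1}, but it carries the dependency
#      e1 + e2 - (e1 + e2) = 0, which involves the vector e1 + e2 lying outside L.
L = [e1]
E = [e1, e2, e3, e1 + e2]
E_smaller = [e1, e2, e3]                    # E with the redundant vector e1 + e2 removed
print(rank(E) == 3, rank(E_smaller) == 3)   # True True: E \ {e1+e2} still generates, so E was not minimal

# (2): e3 ∈ E lies outside span(L2) = span{e1, e2}, so L2 ∪ {e3} is a bigger
#      linearly independent subset of E, i.e. L2 was not maximal.
L2 = [e1, e2]
print(rank(L2 + [e3]) == 3)                 # True: the three vectors are still independent
```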

  • In general, if several properties are equivalent that is a justification to define the concept in the first place. – Hagen von Eitzen Jul 31 '15 at 11:46
  • Sorry I don't understand your point, perhaps you could elaborate? My query is why the proof doesn't simply say 'E is minimal among generating sets so it is a basis'. – Calvin Nesbitt Jul 31 '15 at 11:52
  • Could you start by stating what definition of "basis" you're starting with, to get us oriented? (The wikipedia entry defines a basis of V as "a set of vectors B such that every element of V may be written in a unique way as a finite linear combination of elements of B", but that definition doesn't make your excerpt make much sense.) I suspect the answer is "a linearly independent set of vectors whose span is the entire space V". – Don Hatch Jan 06 '25 at 07:08

2 Answers


The idea is that a basis is sort of a "perfect" generating set.

If a generating (spanning) set $E$ is not linearly independent, it means we have some "redundancy" among its elements: we have more than we need to generate the space $V$. For example, if $v_3 = v_1 + v_2$, then we do not need all three of $v_1, v_2, v_3$; we can do without $v_3$ as long as we have $v_1$ and $v_2$. So, basically, we keep removing elements of $E$ until it becomes linearly independent.
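
As a quick numeric sanity check of that redundancy (an illustrative sketch of mine, with arbitrarily chosen vectors):

```python
import numpy as np

# If v3 = v1 + v2, dropping v3 does not shrink the span: same rank with or without it.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2

rank_with_v3    = np.linalg.matrix_rank(np.column_stack([v1, v2, v3]))
rank_without_v3 = np.linalg.matrix_rank(np.column_stack([v1, v2]))
print(rank_with_v3, rank_without_v3)   # 2 2 -- v3 is redundant
```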

On the other hand, a linearly independent set is, de facto, a basis for the subspace it spans. However, this span may not be all of $V$, so we need to "add to it". Adding anything that is already in $\text{span}(L)$ doesn't help our cause; we need to add something outside $\text{span}(L)$ to "get more".
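
Concretely (an illustrative rank check of mine; the helper `in_span` is made up for this sketch): a vector already in $\text{span}(L)$ changes nothing, while a vector outside it gives a larger independent set.

```python
import numpy as np

def in_span(v, vectors):
    """True if v lies in the span of `vectors` (adding v does not raise the rank)."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)

L = [np.array([1.0, 0.0, 0.0])]
u = np.array([2.0, 0.0, 0.0])   # already in span(L): adding it only creates a dependency
w = np.array([0.0, 1.0, 0.0])   # outside span(L): L ∪ {w} is a larger independent set

print(in_span(u, L))  # True  -- adding u does not "get more"
print(in_span(w, L))  # False -- adding w enlarges the span
```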

It turns out that linear independence and spanning are sort of "dual" concepts, just like injective/surjective are for mappings. In fact, they correspond:

A linear map $T: V \to W$ is injective if and only if it maps linearly independent sets to linearly independent sets (it preserves linear independence).

A linear map $T: V \to W$ is surjective if and only if it maps generating sets of $V$ to generating sets of $W$.
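
A small numeric illustration of this correspondence (the matrices below are arbitrary choices of mine, just to see it happen):

```python
import numpy as np

T_inj = np.array([[1.0, 0.0],        # a map R^2 -> R^3 with full column rank, hence injective
                  [0.0, 1.0],
                  [1.0, 1.0]])
T_sur = np.array([[1.0, 0.0, 2.0],   # a map R^3 -> R^2 with full row rank, hence surjective
                  [0.0, 1.0, 1.0]])

indep  = np.eye(2)            # columns: a linearly independent set in R^2
images = T_inj @ indep        # their images in R^3
print(np.linalg.matrix_rank(images) == 2)    # True: the images are still independent

gen     = np.eye(3)           # columns: a generating set of R^3
images2 = T_sur @ gen         # their images in R^2
print(np.linalg.matrix_rank(images2) == 2)   # True: the images generate R^2
```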

So, given a set $S \subset V$, we want to know $2$ things:

$1$) Does it span $V$?

$2$) Is it linearly independent?

The neat thing is that if we know only one of the two, and we also know that $|S| = \dim V$ (with $V$ finite-dimensional), we can deduce the other: a set of exactly $\dim V$ vectors spans if and only if it is linearly independent. This is a real time-saver.
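
For instance (a quick check of mine): in $\mathbb{R}^3$, a single rank computation on three vectors settles both questions at once.

```python
import numpy as np

# In R^3, a set S of exactly 3 vectors spans iff it is linearly independent,
# so one rank computation answers both questions.
S = [np.array([1.0, 0.0, 0.0]),
     np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 1.0, 1.0])]

print(np.linalg.matrix_rank(np.column_stack(S)) == 3)   # True: S is a basis of R^3
```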

– David Wheeler
  • So I definitely agree with your thoughts regarding the nature of a basis. Thank you also for the insight into maps; that looks really neat. My question was why the proofs don't simply say that the generating set is minimal / the l.i. set is maximal ... and hence they are bases. The part about L in (1) seems redundant, as does that of E in (2). – Calvin Nesbitt Jul 31 '15 at 12:04

You don't define the notion "characterization of bases". At any rate such a characterization would be a definition, and as such needs no proof.

Given that, a proof per se is not "useful" (apart from didactical purposes); it can only be correct or wrong.

It seems that your actual problem is: Why are the following statements useful when dealing with bases?

Concerning (1): Of course all minimal generating sets are bases, as you note in your comment. But statement (1) involves additional data, namely a given linearly independent set $L$ (e.g., a basis of some subspace of $V$), and only generating sets $E$ containing $L$ as a subset are considered. When $L=\emptyset$ we are back to the plain statement that a minimal generating set is a basis. The main application of (1) is the following principle:

Given $L$ and any generating set $G$ you can enlarge $L$ to a basis for the whole space by selecting suitable elements from $G$.

Proof. Apply (1) to $\tilde G:=G\cup L$. A minimal generating set $E$ with $L\subset E\subset\tilde G$ is then a basis.
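
To see the selection at work, here is a greedy numerical sketch of that principle (one concrete way to realize the choice of elements; the function `extend_to_basis` and the rank checks are my own, not part of the statement):

```python
import numpy as np

def extend_to_basis(L, G):
    """Greedily add vectors of G to L whenever they enlarge the span;
    since G generates the space, the result is a basis containing L."""
    basis = list(L)
    for g in G:
        old_rank = np.linalg.matrix_rank(np.column_stack(basis))
        new_rank = np.linalg.matrix_rank(np.column_stack(basis + [g]))
        if new_rank > old_rank:
            basis.append(g)   # g lies outside the current span, so keep it
    return basis

L = [np.array([1.0, 1.0, 0.0])]     # linearly independent (e.g. a basis of a line in R^3)
G = [np.array([1.0, 0.0, 0.0]),     # a generating set of R^3
     np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]

B = extend_to_basis(L, G)
print(len(B), np.linalg.matrix_rank(np.column_stack(B)))   # 3 3 -- a basis of R^3 containing L
```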

  • Apologies, by c.o.b. I meant the following theorem: (Characterisation of Bases). The following are equivalent for a subset of a vector space: (1) Our subset is a basis, i.e. a linearly independent generating set; (2) Our subset is minimal among all generating sets; (3) Our subset is maximal among all linearly independent subsets. As you say, I indeed meant 'useful' in a didactical sense (should have pointed this out). Thanks very much, I think the instructor was hinting at what you say at the end. This makes sense. – Calvin Nesbitt Jul 31 '15 at 13:18