
As an example, in $\mathbb{R}^3$, the canonical basis is: \begin{align} B &= \left(\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\right) \end{align}

So it is a set of 3 vectors. There are also other bases of $\mathbb{R}^3$, but they are always sets of 3 vectors. I am wondering: why couldn't \begin{align} A &= \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \end{align}

on its own be considered a basis? Couldn't the vector $A$, multiplied by any scalar, reproduce the whole vector space $\mathbb{R}^3$?

Ravindra
Jason

4 Answers


Any set of linearly independent vectors is a basis of the subspace it spans. Since $A$ is a (one-element) set of linearly independent vectors, it is in fact a basis. But it is neither a basis of $R^3$ nor a basis of $R^1$, as you might think; it is a basis of a one-dimensional subspace of $R^3$.
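
Concretely, the subspace that $A$ spans is just the line through the origin and the point $(1,1,1)^T$: $$\{\lambda A:\lambda\in R\}=\left\{\left( \begin{array}{c} \lambda\\ \lambda\\ \lambda\\ \end{array}\right):\lambda\in R\right\}$$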

Since $R^3$ is defined as the set of all triples $(x,y,z)^T$ with $x,y,z\in R$, it is trivial to see that $R^3$ is just the set of all linear combinations of the basis vectors from $B$: $$\left( \begin{array}{c} x\\ y\\ z\\ \end{array}\right)=x\cdot\left( \begin{array}{c} 1\\ 0\\ 0\\ \end{array}\right)+y\cdot\left( \begin{array}{c} 0\\ 1\\ 0\\ \end{array}\right)+z\cdot\left( \begin{array}{c} 0\\ 0\\ 1\\ \end{array}\right)$$

Now suppose that $A$ were also a basis of $R^3$. Then it would have to be possible to write every vector of $R^3$ as a linear combination of vectors from $A$. Since the vectors of $B$ are themselves vectors of $R^3$, they too would have to be expressible in the 'basis' $A$. Hence you would need to be able to write, for example, $$\left( \begin{array}{c} 1\\ 0\\ 0\\ \end{array}\right)=\lambda \left( \begin{array}{c} 1\\ 1\\ 1\\ \end{array}\right)$$ and the same for the other basis vectors from $B$.

Splitting this equation into components leads to equations for $\lambda$, namely $$\lambda=1$$ and $$\lambda=0$$ which is an obvious logical contradiction. Hence, $A$ cannot be a basis of $R^3$.

You can extend this line of thought to arbitrary finite dimension: any basis must be expressible in terms of any other basis, and you can show that two bases with different numbers of vectors would contradict the assumption that the basis vectors are linearly independent (which must always hold for a basis). The conclusion of this argument is that if a vector space has a finite basis, then every other basis of that space has the same number of basis vectors. This number is called the 'dimension' of the vector space.
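
For instance, here is a sketch of how a hypothetical two-element 'basis' $\{v_1,v_2\}$ of $R^3$ runs into exactly this contradiction. Each canonical basis vector would have to be expressible as $e_i=a_i v_1+b_i v_2$ for $i=1,2,3$. The three coefficient pairs $(a_i,b_i)$ admit a non-trivial relation $c_1(a_1,b_1)+c_2(a_2,b_2)+c_3(a_3,b_3)=(0,0)$, because two homogeneous linear equations in the three unknowns $c_1,c_2,c_3$ always have a non-trivial solution. But then $$c_1e_1+c_2e_2+c_3e_3=\Big(\sum_i c_ia_i\Big)v_1+\Big(\sum_i c_ib_i\Big)v_2=0$$ with the $c_i$ not all zero, contradicting the linear independence of $e_1,e_2,e_3$.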

By the way, there are also infinite-dimensional vector spaces, where things are complicated by the fact that, in the case of a countable basis, you can number the elements (basis vectors) in different ways, and there are even uncountable sets (the real numbers are an example of an uncountable set, although they are of course not a basis of anything) which cannot be numbered at all.

oliver

The vector $(1,1,1)$ clearly doesn’t span all of $\mathbb R^3$ since you can only generate vectors of the form $(a,a,a)$ from it: remember that when you multiply a vector by a scalar, you multiply all of the components by the same value.
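
Written out, for an arbitrary scalar $a$: $$a\,(1,1,1)=(a,a,a),$$ so a vector like $(1,0,0)$, whose components are not all equal, can never be produced this way.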

You do ask an important question earlier, though: does every basis of $\mathbb R^3$ have exactly three vectors? For that, we have to look into the definition of the dimension of a (finite-dimensional) vector space more closely. You’ve probably learned it as the number of vectors in a basis for the space, but how do we know that this number is well-defined? There are two key theorems that bear on this which are presented in more rigorous expositions of the material before defining dimension. I won’t prove them here, but they basically say that there’s a maximum number of vectors that can form a linearly-independent set, and that if you have a linearly-independent set of vectors that spans a space, then no smaller set of vectors can span it. Applying the second theorem to your question in particular, since the standard basis consists of three vectors, this means that we can never find a set of fewer than three vectors that will span all of $\mathbb R^3$.

Taken together, these two theorems say that the number of linearly-independent vectors that it takes to span a space is unique, and we call that number the dimension of the space.

amd

"Couldn't vector A, multiplied with any scalar reproduce the whole vector space $R^3$? "

Not by multiplying with a scalar, but by multiplying with an arbitrary diagonal matrix: then, yes, you can obtain all vectors in $R^3$.
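
Explicitly, for arbitrary $x,y,z\in R$: $$\left(\begin{array}{ccc} x & 0 & 0\\ 0 & y & 0\\ 0 & 0 & z \end{array}\right)\left(\begin{array}{c} 1\\ 1\\ 1 \end{array}\right)=\left(\begin{array}{c} x\\ y\\ z \end{array}\right),$$ so every vector of $R^3$ is reached.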

G Cab

No, it cannot. The dimension of a vector space is an invariant in the sense that every basis of an $n$-dimensional vector space consists of $n$ elements.

Your vector $A$ is a linear combination of the canonical basis $B=\{e_1,e_2,e_3\}$: $A = e_1 + e_2 + e_3$, but not every vector in ${\Bbb R}^3$ is a scalar multiple of $A$.
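
For instance, $e_1$ is not a scalar multiple of $A$: the equation $\lambda A = e_1$ would force $\lambda=1$ in the first component and $\lambda=0$ in the other two.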

Wuestenfux