Something came to mind while I was working with normed vector spaces over $\mathbb{R}$: are bases relevant when calculating certain norms?
For example, if we take the Euclidean norm on $\mathbb{R}^2$, we know that $\|(0,1)\|=1$. However, one automatically assumes that we are talking about the canonical basis here. But what if we take the basis $B=\{(1,0),(1,1)\}$? Then $(0,1)=(-1,1)_B$, but applying the same formula to the coordinates gives $\|(-1,1)_B\|=\sqrt2$, so we get two different "sizes" for essentially the same vector in $\mathbb{R}^2$.
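To make the discrepancy concrete, here is a quick numerical check (a sketch using NumPy, which is of course not part of the question itself): the vector $(0,1)$ has Euclidean norm $1$, while its coordinate vector with respect to $B$ has norm $\sqrt2$.

```python
import numpy as np

# Columns of B are the basis vectors (1,0) and (1,1).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

coords_B = np.array([-1.0, 1.0])  # (0,1) = -1*(1,0) + 1*(1,1)
v = B @ coords_B                  # recover the canonical coordinates (0,1)

print(np.linalg.norm(v))         # norm of the vector itself: 1.0
print(np.linalg.norm(coords_B))  # norm of its B-coordinates: sqrt(2) ~ 1.414
```

So the formula $\sqrt{a_1^2+a_2^2}$ applied blindly to coordinates really does depend on which basis produced those coordinates.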
Now, I know this example might seem contrived, since the Euclidean norm is used with the canonical basis all the time, but I was just trying to make a point about whether the choice of a basis is ever relevant when calculating the norm of a vector, in any vector space. Surely not all norms are defined via the coefficients of some linear combination, but when they are, how do we deal with the choice of basis?
I ask this mainly because of this thread: Proof that every finite dimensional normed vector space is complete, where the proof makes use of the equivalence between whatever norm is defined on $V$ and the $\|\cdot\|_1$ norm. An arbitrary basis is chosen, and it is observed that the coefficients of the vectors in a Cauchy sequence themselves form Cauchy sequences. These coefficients would change if a different basis were chosen, so how can we be sure there is nothing wrong or shady about this procedure?
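To spell out the dependence I am worried about (a sketch of my own, so I may be missing something): if $B=\{b_1,\dots,b_n\}$ and $B'=\{b'_1,\dots,b'_n\}$ are two bases of $V$, related by an invertible change-of-basis matrix $P$ via $b'_j=\sum_i P_{ij}\,b_i$, and a vector has the two coordinate expansions $x=\sum_i a_i b_i=\sum_j a'_j b'_j$, then $a_i=\sum_j P_{ij}\,a'_j$, so
$$\sum_i |a_i| \;\le\; \Big(\max_j \sum_i |P_{ij}|\Big)\sum_j |a'_j|,$$
and symmetrically with $P^{-1}$ in the other direction. So the two coordinate $\ell^1$-norms bound each other by fixed constants, and it would seem that a sequence of coefficient vectors is Cauchy with respect to one basis if and only if it is with respect to the other; is this the reason the choice of basis in that proof is harmless?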
I'm not sure if my question is clear, but I hope someone will get the idea!