Now, the most natural thing in the world is to call the matrix of this operator the change of basis matrix from one basis to the other.
It seems the most natural thing in the world to you. Interesting.
(And let me say bluntly that calling it the change of basis matrix with the target and source bases reversed seems to me completely illogical!)
But wait: what matrix, exactly? Since $V$ is an abstract vector space (v.s.), $T$ is an endomorphism that admits one possible matrix representation for each choice of a pair of bases of $V$. Which one are you picking?
(If $V$ were $R^3$, instead, then there would exist one intrinsic matrix representation of $T$. The idea is that the vector $(1,2,3)\in R^3$ has one intrinsic “name,” $(1,2,3)$, and infinitely many “nicknames,” one for each possible basis of $R^3$, given by the components of $(1,2,3)$ in the considered basis. And the same applies to $T$.)
Actually, the name “change of basis matrix from $u$ to $w$” should be reserved for something completely different.
Even though $V$ is a completely abstract v.s., there actually is just one matrix that serves as a change of basis. It's definable by observing that
$$
w_i=c_{ji}\,u_j~,
$$
(please, not the other way round, as some claim: that would be complete nonsense, because in mathematics new things are defined in terms of old ones, not vice versa!)
where on the r.h.s. I assume an implied summation over the repeated index $j$ (as in the Einstein notation). Since we clearly have $n^2$ coefficients $c_{ji}$, it's also clear that they can be arranged in an $n\times n$ matrix, paying attention to the fact that the first index, $j$, must be a row index, while the second, $i$, a column index.
Doing so, you end up with the matrix $C=[c_{ji}]$. There you have it: the change of basis matrix from $u$ to $w$.
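To make the index convention concrete, here is a minimal numerical sketch. The choice $V=R^2$ and the particular bases and coefficients are hypothetical, picked only for illustration; the point is that column $i$ of $C$ holds the $u$-coordinates of $w_i$.

```python
# Hypothetical instance: V = R^2, old basis u, new basis w.
u1, u2 = (1.0, 0.0), (1.0, 1.0)   # old basis u (chosen for illustration)
# Suppose w_1 = 3 u_1 + 1 u_2 and w_2 = 2 u_1 + 4 u_2,
# i.e. w_i = c_{ji} u_j with c_{11}=3, c_{21}=1, c_{12}=2, c_{22}=4.
C = [[3.0, 2.0],   # C[j-1][i-1] = c_{ji}: j is the row index,
     [1.0, 4.0]]   # i is the column index

def combine(coeffs, vectors):
    """Linear combination sum_j coeffs[j] * vectors[j], componentwise."""
    return tuple(sum(c * v[k] for c, v in zip(coeffs, vectors))
                 for k in range(len(vectors[0])))

# Column i of C holds the u-coordinates of w_i:
w1 = combine([C[0][0], C[1][0]], [u1, u2])   # 3*u1 + 1*u2
w2 = combine([C[0][1], C[1][1]], [u1, u2])   # 2*u1 + 4*u2
print(w1, w2)   # (4.0, 1.0) (6.0, 4.0): the new basis vectors in R^2
```

Note how the summed index $j$ runs down a column of $C$, exactly as in $w_i=c_{ji}\,u_j$.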
N.B.: Only in the case $V=F^n$ ($F$ being a field, e.g. $R$ or $C$) can the previous equation be put in matrix form. (But $C$ is a matrix in any case.) Calling $U$ the matrix obtained by putting the $n$ vectors $u_i$ side by side as columns (we can do that because, in this case, the $u_i$ are $n$-tuples!), and $W$ the matrix obtained by putting the $n$ vectors $w_i$ side by side, the previous equation takes the form
$$
W=U\,C~,
$$
in this order.
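This matrix form can be checked numerically. Again the bases are hypothetical (the same illustrative $R^2$ data as above): stack the $u_i$ as columns of $U$, multiply by $C$ on the right, and the columns of the product are the $w_i$.

```python
# Hypothetical instance in F^n = R^2: basis vectors side by side as columns.
U = [[1.0, 1.0],    # columns: u1 = (1, 0), u2 = (1, 1)
     [0.0, 1.0]]
C = [[3.0, 2.0],    # C[j-1][i-1] = c_{ji}
     [1.0, 4.0]]

def matmul(A, B):
    """Plain matrix product, (A B)_{rc} = sum_k A_{rk} B_{kc}."""
    return [[sum(A[r][k] * B[k][c] for k in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

W = matmul(U, C)    # in this order: W = U C, not C U
print(W)            # columns are w1 = (4, 1), w2 = (6, 4)
```

Multiplying on the right by $C$ combines the columns of $U$, which is exactly the statement $w_i=c_{ji}\,u_j$.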
Would the advocates of what I called nonsense please consider the following example.
Suppose you have an object of length 10 cm. Now I ask you to measure it in terms of a new unit: the "pippolo." (It sounds like a very funny name to an Italian ear.) You'd ask me, in turn, "How long is a pippolo, in cm?" Right? (Would you really ask instead, "How many pippolos is a cm long?" Come on!) Well: 1 pp (the new unit) = 2 cm (the old unit).
Now, since 1pp = 2cm, you know that the object is 10/2 = 5pp.
OK: multiplication by 2, in this case, is the change of basis matrix, while division by 2 (its inverse!) is the matrix of the change of representation.
This is a silly example, but the inverse relation it shows is the core feature of the problem: components transform with the inverse of the change of basis matrix.
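The pippolo example is one-dimensional, so the change of basis "matrix" is just the number 2, and the inverse relation can be sketched in two lines:

```python
# 1-D "pippolo" example: the change of basis matrix is [2]
# (1 pp = 2 cm expresses the new unit in terms of the old one),
# while the components transform with its inverse.
C = 2.0                      # change of basis: new unit in terms of old
length_cm = 10.0             # representation in the old basis (cm)
length_pp = length_cm / C    # new representation = C^{-1} * old one
print(length_pp)             # 5.0 pippolos
```

Same object, two nicknames: 10 in the cm basis, 5 in the pp basis, related by $C^{-1}$.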