What is the reason we define the matrix-vector product (a transformation) this way:
$$\begin{pmatrix} a_1 & a_2 \\ a_3 & a_4 \\ \end{pmatrix}\cdot \begin{pmatrix} b_1\\ b_2\\ \end{pmatrix} = \begin{pmatrix} a_1\cdot b_1 + a_2\cdot b_2 \\ a_3\cdot b_1 + a_4\cdot b_2\\ \end{pmatrix}.$$
I know that when we want to represent a transformation, we care about the values by which we multiply the vector's components, but why do we use a linear-combination mechanism to combine those components?
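To make explicit what I mean by the "linear-combination mechanism": unpacking the definition above, the result is a combination of the matrix's columns weighted by the vector's entries,
$$\begin{pmatrix} a_1 & a_2 \\ a_3 & a_4 \end{pmatrix}\cdot \begin{pmatrix} b_1\\ b_2 \end{pmatrix} = b_1\cdot\begin{pmatrix} a_1\\ a_3 \end{pmatrix} + b_2\cdot\begin{pmatrix} a_2\\ a_4 \end{pmatrix},$$
and each entry of the output is a linear combination of $b_1$ and $b_2$ with coefficients taken from one row. That row-by-row combining is the pattern I'm asking about.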
I think this may be one good reason:
- We can create new vectors with more or fewer dimensions than the original vector, just by giving the matrix more (or fewer) rows, as in the example below.
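For instance (the entries $c_1,\dots,c_6$ here are just placeholders I'm using for illustration), a matrix with three rows sends a 2-dimensional vector to a 3-dimensional one by the same rule:
$$\begin{pmatrix} c_1 & c_2 \\ c_3 & c_4 \\ c_5 & c_6 \end{pmatrix}\cdot \begin{pmatrix} b_1\\ b_2 \end{pmatrix} = \begin{pmatrix} c_1\cdot b_1 + c_2\cdot b_2 \\ c_3\cdot b_1 + c_4\cdot b_2 \\ c_5\cdot b_1 + c_6\cdot b_2 \end{pmatrix}.$$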
PS: I can't just say that this follows from the matrix-matrix product, because that product is itself derived from the definition of the matrix-vector product, so any "proof" or reason that relies on the matrix-matrix product definition would be circular.