This is my first post here, so I hope I'm doing everything right. I watched the Essence of Linear Algebra series by 3Blue1Brown and have been thinking a lot about the idea of reading a vector's coordinates as scaling factors, i.e. how many times you walk along the corresponding basis vector of each dimension.
For example, in $2$D space there are $\hat\imath$ and $\hat\jmath$, with the clear representations $\smash{\left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]}$ and $\smash{\left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]}$. A matrix then basically records the new locations where $\hat\imath$ and $\hat\jmath$ land, and from those two new locations we can derive where any other vector lands. So multiplying a $2$D vector by a $2\times2$ matrix makes total sense to me (geometrically).
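To make concrete what I mean by that, I picture the multiplication as a combination of the matrix's columns (the landing spots of $\hat\imath$ and $\hat\jmath$):
$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = x \begin{bmatrix} a \\ c \end{bmatrix} + y \begin{bmatrix} b \\ d \end{bmatrix}. $$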
Now take the example of a $1\times3$ matrix $$ A = \begin{bmatrix} 3 & 1 & 4 \end{bmatrix}, $$ which by that logic would represent $3$ basis vectors $\begin{bmatrix} \hat\imath & \hat\jmath & \hat k \end{bmatrix}$ with only $1$ dimension each, and multiply it with the matrix $$ B = \begin{bmatrix} 4 & 3 \\ 2 & 5 \\ 6 & 8 \end{bmatrix}, $$ which represents a transformation of two basis vectors $\begin{bmatrix} \hat\imath & \hat\jmath \end{bmatrix}$ with $3$ dimensions each.
This concept makes no geometrical sense anymore, or am I mistaken here?
I don't even know if it is supposed to, but I can't wrap my head around what it would mean geometrically to multiply $$ A B = \begin{bmatrix} 3 & 1 & 4 \end{bmatrix} \begin{bmatrix} 4 & 3 \\ 2 & 5 \\ 6 & 8 \end{bmatrix}. $$
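Mechanically the rows-times-columns rule gives me
$$ AB = \begin{bmatrix} 3\cdot4 + 1\cdot2 + 4\cdot6 & \; 3\cdot3 + 1\cdot5 + 4\cdot8 \end{bmatrix} = \begin{bmatrix} 38 & 46 \end{bmatrix}, $$
so the computation itself isn't the problem; it's the picture behind it that I'm missing.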
So is the rows-times-columns rule here just a formal definition, with no further meaning you can picture as an image in your head?
Also, I'm mainly learning this to understand matrix multiplication in machine learning and to become more comfortable thinking about tensor shapes.
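If it helps to see the kind of shape bookkeeping I mean, here is a tiny NumPy sketch using just the matrices from above (nothing beyond the standard `@` matrix product):

```python
import numpy as np

# A has shape (1, 3): one row, three columns
A = np.array([[3, 1, 4]])

# B has shape (3, 2): three rows, two columns
B = np.array([[4, 3],
              [2, 5],
              [6, 8]])

# Matrix product: (1, 3) @ (3, 2) -> (1, 2)
AB = A @ B
print(AB)        # [[38 46]]
print(AB.shape)  # (1, 2)
```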
I'd be glad for any thoughts on this question; I hope these simple examples make clear what I'm asking.
Cheers!