I don't think this is necessarily a property of the SVD but more one of matrix representation and similarity. Imagine the simple case of a real symmetric positive semidefinite matrix $A$ with eigendecomposition (equal to the SVD in this case, since the eigenvalues are nonnegative) $A = U\Lambda U^T$, and denote the eigenvectors $u_1,\dots,u_n$. Let us also specify that the matrix $A$ represents some operator $\mathcal{A}$ with respect to the standard basis $\{e_1,\dots,e_n\}$.
Now, eigenvalues and eigenvectors exist independently of a basis representation. They are defined by the linear operator via $\mathcal{A}u_i=\lambda_iu_i$; we don't need a basis until we want to express them in components. However, the matrix $A$ is necessarily given in components. If we want to express the eigendecomposition of $\mathcal{A}$ in terms of the matrix $A$, we need to express the eigenvectors in the same basis and use the change-of-basis formula. The eigenvectors, written in the standard basis, form the columns of $U$, so any vector $x$ represented in the canonical basis can be transformed to the eigenbasis via $\tilde{x} = U^Tx$. Notice that we are using the fact that $U^{-1} = U^T$, since $U$ is orthogonal. Similarly, we can transform back via the inverse, $x = U\tilde{x}$.
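To make the change-of-basis step concrete, here is a minimal numpy sketch; the example matrix, the random vector, and all variable names are my own illustration, not part of the argument above:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T                              # symmetric positive semidefinite example

_, U = np.linalg.eigh(A)                 # columns of U are an orthonormal eigenbasis
x = rng.standard_normal(4)

x_tilde = U.T @ x                        # components of x in the eigenbasis
assert np.allclose(U.T @ U, np.eye(4))   # U^{-1} = U^T (U is orthogonal)
assert np.allclose(U @ x_tilde, x)       # transforming back recovers x
```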
Applying $A$ to a vector $x$ can now be read, right to left, as a sequence of operations involving the eigendecomposition. $$Ax = U\Lambda U^Tx$$
- Represent $x$ in the eigenbasis, $\tilde{x} = U^Tx$.
- Scale each component of $\tilde{x}$ by the corresponding eigenvalue, since each component is now the coefficient of an eigenvector: $\tilde{y} = \Lambda\tilde{x}$.
- Transform from the eigenbasis back to the canonical basis via the inverse basis transformation, $y = U\tilde{y}$.
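As a sanity check, here is a small numpy sketch of these three steps (the matrix, the vector, and the variable names are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T                        # symmetric positive semidefinite example

lam, U = np.linalg.eigh(A)         # A = U @ np.diag(lam) @ U.T
x = rng.standard_normal(4)

x_tilde = U.T @ x                  # 1. represent x in the eigenbasis
y_tilde = lam * x_tilde            # 2. scale each component by its eigenvalue
y = U @ y_tilde                    # 3. transform back to the canonical basis

assert np.allclose(y, A @ x)       # agrees with applying A directly
```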
This shows why, even in the simplest case of the SVD, two rotations are needed. One transforms into a basis where the components line up with the correct singular values (eigenvalues, in this case); the other transforms back to the canonical basis in the codomain.
Another way to think about it is like this. Let $A\in\mathbb{R}^{n\times m}$, so it maps vectors in $\mathbb{R}^m$ to vectors in $\mathbb{R}^n$. Say, for the sake of argument, that the SVD contained only a single orthogonal matrix, so that $A = \Sigma V^T$. In this formulation, since $A$ maps between different spaces, $V$ is a rotation on the domain $\mathbb{R}^m$, and there is no rotation in the codomain $\mathbb{R}^n$. Now consider a second matrix $\tilde{A} = QA$, where $Q\in\mathbb{R}^{n\times n}$ is orthogonal. This matrix has the same singular values and right singular vectors as $A$, since $\tilde{A}^T\tilde{A} = A^TQ^TQA = A^TA$, so the one-rotation SVD would be the same; but $A$ and $\tilde{A}$ are not equal in general. We need the second rotation of the SVD to account for rotations in the codomain, while the first rotation accounts for rotations in the domain.
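As a quick numerical check of this argument, here is a hedged numpy sketch; the dimensions, the random matrices, and all names are my own illustration. It builds a random orthogonal $Q$ (via a QR factorization) and verifies that $QA$ shares $A$'s singular values and right singular vectors, while $Q$ shows up only in the left factor:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))                    # maps R^5 -> R^3
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal Q

U1, s1, Vt1 = np.linalg.svd(A)
U2, s2, Vt2 = np.linalg.svd(Q @ A)

assert np.allclose(s1, s2)                         # same singular values
# The leading right singular vectors match up to sign (the singular values
# of a random A are distinct almost surely), while the left factor absorbs Q.
assert np.allclose(np.abs(Vt1[:3]), np.abs(Vt2[:3]))
assert np.allclose(np.abs(U2), np.abs(Q @ U1))     # Q appears only on the left
```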