I'm going to provide a full characterisation of the set of SVDs for a given matrix $A$, using two different (but of course ultimately equivalent) kinds of formalisms. First a standard matrix formalism, and then using dyadic notation.
The TL;DR is that if
$$A=UDV^\dagger=\tilde U \tilde D\tilde V^\dagger$$
for $U,\tilde U,V,\tilde V$ isometries and $D,\tilde D>0$ square strictly positive diagonal matrices, then we can safely assume $D=\tilde D$ by trivially rearranging the basis for the underlying space, and $\tilde V=V W$ and $\tilde U=U W$, with $W$ a block-diagonal unitary matrix such that $WD W^\dagger = D$.
In particular, this means that $W$ can only mix subspaces that correspond to equal singular values. This characterises all and only the possible SVDs. So given any SVD, the freedom in the choice of other SVDs corresponds to the freedom in choosing these unitaries $W$ (how much freedom is that, depends in turn on the degeneracy of the singular values of $A$).
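This claim is easy to check numerically. Below is a minimal NumPy sketch (the matrices and variable names are my own illustration): we build an $A$ with a repeated singular value, pick a block-diagonal unitary $W$ satisfying $WDW^\dagger=D$, and verify that $UW$, $VW$ still give an SVD of $A$.

```python
import numpy as np

rng = np.random.default_rng(0)
# Build A = U0 D V0^† with singular values (2, 1, 1): the value 1 is repeated.
U0, _ = np.linalg.qr(rng.standard_normal((4, 3)))       # random 4x3 isometry
V0, _ = np.linalg.qr(rng.standard_normal((3, 3)))       # random 3x3 unitary
D = np.diag([2.0, 1.0, 1.0])
A = U0 @ D @ V0.conj().T

# W: trivial 1x1 block for the singular value 2, and an arbitrary
# 2x2 unitary block mixing the subspace of the repeated value 1.
B, _ = np.linalg.qr(rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))
W = np.block([[np.eye(1), np.zeros((1, 2))],
              [np.zeros((2, 1)), B]])

assert np.allclose(W @ D @ W.conj().T, D)    # W preserves D
Ut, Vt = U0 @ W, V0 @ W
assert np.allclose(Ut @ D @ Vt.conj().T, A)  # (UW) D (VW)^† is still an SVD of A
```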
Unitaries vs isometries in the SVD
Note that I'm using here a convention where $D$ is square and $U,V$ are isometries. This might appear slightly different from the other common convention of taking $D$ generally rectangular and $U,V$ unitary, but it's essentially equivalent. If you take $D$ rectangular and allow zeros on its diagonal, the columns of $U$ corresponding to those zero rows are irrelevant, and therefore we might just as well remove them, making $U$ an isometry and $D$ square with strictly positive diagonal.
For a toy example of what I mean, consider the following SVD, written in the standard notation with unitaries and rectangular $D$:
$$\begin{pmatrix}1 & 0 \\ 0 & 1 \\ 1 & 1\end{pmatrix} =
\underbrace{\frac1{\sqrt6}\begin{pmatrix}1 & -\sqrt3 & -\sqrt2 \\ 1 & \sqrt3 & -\sqrt2 \\ 2 & 0 & \sqrt2\end{pmatrix}}_{U}
\underbrace{\begin{pmatrix}\sqrt3 & 0\\0 & 1 \\ 0&0\end{pmatrix}}_{D}
\underbrace{\frac{1}{\sqrt2}\begin{pmatrix}1&1\\-1&1\end{pmatrix}}_{V^\dagger}.
$$
Clearly, the third column of $U$, which multiplies the zero third row of $D$, is inconsequential. We can therefore more economically write the SVD as
$$\begin{pmatrix}1 & 0 \\ 0 & 1 \\ 1 & 1\end{pmatrix} =
\underbrace{\frac1{\sqrt6}\begin{pmatrix}1 & -\sqrt3 \\ 1 & \sqrt3 \\ 2 & 0\end{pmatrix}}_{U}
\underbrace{\begin{pmatrix}\sqrt3 & 0\\0 & 1\end{pmatrix}}_{D}
\underbrace{\frac{1}{\sqrt2}\begin{pmatrix}1&1\\-1&1\end{pmatrix}}_{V^\dagger},
$$
by simply allowing $U$ to be an isometry rather than a unitary, and enforcing a strictly positive square diagonal $D$.
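For what it's worth, this is also the convention NumPy uses when asked for the "thin" decomposition; a quick sketch of the toy example above:

```python
import numpy as np

# The 3x2 toy example above, decomposed with the "thin" SVD convention:
# U comes out as a 3x2 isometry and the singular values are strictly positive.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
U, s, Vh = np.linalg.svd(A, full_matrices=False)

assert np.allclose(s, [np.sqrt(3), 1.0])        # singular values √3 and 1
assert np.allclose(U @ np.diag(s) @ Vh, A)      # A = U D V^†
assert np.allclose(U.conj().T @ U, np.eye(2))   # U^† U = I: isometry, not unitary
```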
Regular notation
Consider the SVD of a given matrix $A$, in the form $A=UDV^\dagger$ with $D>0$ a square diagonal matrix with strictly positive entries, and $U,V$ isometries.
The question is, if you have
$$A = UDV^\dagger = \tilde U \tilde D \tilde V^\dagger,$$
with $U,\tilde U,V,\tilde V$ isometries, and $D,\tilde D>0$ square diagonal matrices, what does this imply for $\tilde U,\tilde D,\tilde V$? More specifically, can we somehow find an explicit relationship between them?
The first easy observation is that you must have $D=\tilde D$, modulo permutations of equal singular values (i.e. trivial rearrangements of the basis elements).
This follows from $AA^\dagger=UD^2 U^\dagger = \tilde U \tilde D^2 \tilde U^\dagger$, which shows that $D^2$ and $\tilde D^2$ both contain the nonzero eigenvalues of $AA^\dagger$. Since the spectrum is determined by $AA^\dagger$ alone, and singular values are by definition positive reals, we must have $D=\tilde D$ up to permutation.
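In code, this is the familiar fact that the singular values are the square roots of the nonzero eigenvalues of $AA^\dagger$; a small NumPy check with an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))

s = np.linalg.svd(A, compute_uv=False)         # singular values, descending
evals = np.linalg.eigvalsh(A @ A.conj().T)     # eigenvalues of A A^†, ascending

# The 3 largest eigenvalues of A A^† are the squared singular values;
# the remaining one is 0 (up to numerical noise).
top = np.sort(evals)[::-1][:s.size]
assert np.allclose(np.sqrt(np.clip(top, 0.0, None)), s)
```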
The above reduces the question to: if $UDV^\dagger=\tilde U D\tilde V^\dagger$, with $U,\tilde U,V,\tilde V$ isometries, what can we say about $\tilde U,\tilde V$? To this end, we observe that the freedom in the choice of $U,V$ amounts to the different possible ways to decompose the subspaces associated to each distinct singular value.
More precisely, consider the subspace $V^{(d)}\equiv \{v: \, \|Av\|=d\|v\|\}$ corresponding to a singular value $d$, i.e. the eigenspace of $A^\dagger A$ with eigenvalue $d^2$. We can then uniquely decompose the matrix as $A=\sum_d A_d$, where $A_d\equiv A \Pi_d$, $\Pi_d$ is the projection onto $V^{(d)}$, and the sum runs over the distinct nonzero singular values $d$ of $A$. We can now observe that any and all SVDs of $A$ correspond to a choice of orthonormal basis for each $V^{(d)}$. Namely, for any such basis $\{\mathbf v_k\}$ we associate the partial isometry $V_d\equiv \sum_k \mathbf v_k \mathbf e_k^\dagger$. The corresponding orthonormal basis for the image of $A_d$ is then determined as $\mathbf u_k= A \mathbf v_k/d$, and we then define the partial isometry $U_d\equiv \sum_k \mathbf u_k \mathbf e_k^\dagger$.
Here, $\mathbf e_k$ denotes the computational basis vectors labelling the diagonal entries of $D$ equal to $d$.
This procedure provides a decomposition $A_d= U_d D V_d^\dagger = d\, U_d V_d^\dagger$, and therefore an SVD for $A$ itself by summing over $d$.
Any SVD can be constructed this way.
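The construction just described can be sketched in NumPy: take any orthonormal eigenbasis of $A^\dagger A$ for the $\mathbf v_k$, set $\mathbf u_k = A\mathbf v_k/d$, and an SVD comes out (a generic full-column-rank $A$ is assumed for simplicity):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))                 # generic full-column-rank matrix

evals, V = np.linalg.eigh(A.conj().T @ A)       # columns of V: a choice of v_k
d = np.sqrt(evals)                              # the singular values
U = (A @ V) / d                                 # u_k = A v_k / d, column by column

assert np.allclose(U.conj().T @ U, np.eye(3))   # the u_k come out orthonormal
assert np.allclose(U @ np.diag(d) @ V.conj().T, A)
```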
In conclusion, the freedom in choosing an SVD is entirely in the choice of bases $\{\mathbf v_k\}$ above. In matrix language, this means that given any SVD $A=UDV^\dagger$, all other SVDs can be written as $A=(UW) D (VW)^\dagger$ for some unitary $W$ that commutes with $D$. This commutation property is a concise way to state that $W$ is only allowed to mix vectors corresponding to the same singular value, that is, to the same eigenspace of $D$.
Toy example #1
Let
$$H \equiv \begin{pmatrix}1&1\\1&-1\end{pmatrix}.$$
This is a somewhat trivial example because $H$ is Hermitian and unitary. A standard SVD reads
$$ H =
\underbrace{\frac{1}{\sqrt2}\begin{pmatrix}1&1\\-1&1\end{pmatrix} }_{\equiv U}
\underbrace{\begin{pmatrix}\sqrt2 & 0\\0&\sqrt2\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}0&1\\1&0\end{pmatrix}}_{\equiv V^\dagger}.$$
In this case, we have two identical singular values. According to our discussion above, this means that we can apply a(ny) unitary transformation to the columns of $V$ and still obtain another SVD. That is, given any unitary $W$, $\tilde V\equiv V W$ gives another SVD for $H$. In this simple case, you can also observe this directly, as $D=\sqrt2 I$, and therefore
$$H = UDV^\dagger= UD W \tilde V^\dagger
= (UW) D \tilde V^\dagger,$$
hence $\tilde U\equiv UW$, $\tilde V\equiv VW$ give the alternative SVD $H=\tilde U D\tilde V^\dagger$, and all SVDs have this form.
For example, taking $W=V$ (which here squares to the identity), we get $\tilde V=VW=I$, and thus the decomposition
$$H =
\underbrace{\frac{1}{\sqrt2}\begin{pmatrix}1&1\\1&-1\end{pmatrix} }_{\equiv \tilde U}
\underbrace{\begin{pmatrix}\sqrt2 & 0\\0&\sqrt2\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}1&0\\0&1\end{pmatrix}}_{\equiv \tilde V^\dagger}.$$
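Numerically, one can confirm that an arbitrary unitary $W$ works here (a sketch; the random $W$ is my own choice):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]])
U = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)
D = np.sqrt(2) * np.eye(2)
Vh = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(U @ D @ Vh, H)               # the SVD written above

# Because D ∝ I, ANY unitary W yields another SVD of H.
rng = np.random.default_rng(3)
W, _ = np.linalg.qr(rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))
Ut, Vt = U @ W, Vh.conj().T @ W                 # V is (V^†)^†
assert np.allclose(Ut @ D @ Vt.conj().T, H)
```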
Toy example #2
Consider a simple non-square case. Let
$$A \equiv \begin{pmatrix}1&1\\1 & \omega \\ 1 & \omega^2\end{pmatrix},
\qquad \omega\equiv e^{2\pi i/3}.$$
This is again almost trivial because $A$ is an isometry, up to a constant. Still, we can write its SVD as
$$A =
\underbrace{\frac{1}{\sqrt3}\begin{pmatrix}1&1\\\omega&1\\\omega^2&1\end{pmatrix}}_{\equiv U}
\underbrace{\begin{pmatrix}\sqrt3 & 0 \\0&\sqrt3\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}0&1\\1&0\end{pmatrix}}_{\equiv V^\dagger}.
$$
Notice that now $U,V$ are isometries but $U$ is not unitary, and that $D>0$ is square. Per our results above, any SVD will have the form $A=\tilde U D \tilde V^\dagger$ with $\tilde V=V W$ for some unitary $W$.
For example, taking $W=V$ (we can do this because here $V$ is also unitary), we get the alternative SVD $A=\tilde U D \tilde V^\dagger$ with $\tilde V=VW=VV=I$ and $\tilde U= UW=UV=\frac1{\sqrt3}\begin{pmatrix}1&1&1\\1&\omega&\omega^2\end{pmatrix}^T$, that is,
$$A = \underbrace{\frac{1}{\sqrt3}\begin{pmatrix}1&1\\1&\omega\\1&\omega^2\end{pmatrix}}_{\equiv \tilde U}
\underbrace{\begin{pmatrix}\sqrt3 & 0 \\0&\sqrt3\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}1&0\\0&1\end{pmatrix}}_{\equiv \tilde V^\dagger}.$$
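A quick numerical check of this example (matrices restated inline; note that multiplying by $V^\dagger$ on the right swaps the columns of $\sqrt3\,U$):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
A = np.array([[1, 1], [1, w], [1, w**2]])
U = np.array([[1, 1], [w, 1], [w**2, 1]]) / np.sqrt(3)
D = np.sqrt(3) * np.eye(2)
Vh = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(U @ D @ Vh, A)               # the original SVD

Ut = U @ Vh.conj().T                            # W = V swaps the columns of U
assert np.allclose(Ut @ D, A)                   # tilde-V = I, so A = Ut D
```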
Toy example #3
Let's do an example with non-degenerate singular values. Let
$$A = \begin{pmatrix}1& 2 \\ 1 & 2\omega \\ 1 & 2\omega^2\end{pmatrix},
\qquad \omega\equiv e^{2\pi i/3}.$$
This time the singular values are $\sqrt3$ and $2\sqrt3$.
One SVD is easily derived as
$$A = \underbrace{\frac{1}{\sqrt3}\begin{pmatrix}1&1\\1&\omega\\1&\omega^2\end{pmatrix}}_{\equiv U}
\underbrace{\begin{pmatrix}\sqrt3 & 0 \\0&2\sqrt3\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}1&0\\0&1\end{pmatrix}}_{\equiv V^\dagger}.$$
However, in this case there is much less freedom in choosing other SVDs, because these must correspond to $\tilde V=VW$ where $W$ only mixes columns of $V$ corresponding to the same values of $D$. In this case $D$ is non-degenerate, thus $W$ must be diagonal with unimodular entries, and therefore the full set of SVDs must correspond to
$W=\begin{pmatrix}e^{i\alpha}&0\\0&e^{i\beta}\end{pmatrix}$, that is,
$$A = \underbrace{\frac{1}{\sqrt3}\begin{pmatrix}e^{i\alpha}&e^{i\beta}\\e^{i\alpha}&\omega e^{i\beta}\\e^{i\alpha}&\omega^2 e^{i\beta}\end{pmatrix}}_{\equiv \tilde U}
\underbrace{\begin{pmatrix}\sqrt3 & 0 \\0&2\sqrt3\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}e^{-i\alpha}&0\\0&e^{-i\beta}\end{pmatrix}}_{\equiv \tilde V^\dagger}.$$
All SVDs will look like this, for some $\alpha,\beta\in\mathbb{R}$.
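Sketching this numerically with arbitrary phases (the values of $\alpha,\beta$ are my own):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
A = np.array([[1, 2], [1, 2 * w], [1, 2 * w**2]])
U = np.array([[1, 1], [1, w], [1, w**2]]) / np.sqrt(3)
D = np.diag([np.sqrt(3), 2 * np.sqrt(3)])
assert np.allclose(U @ D, A)                    # the SVD above, with V = I

alpha, beta = 0.7, -1.3                         # arbitrary phases
W = np.diag([np.exp(1j * alpha), np.exp(1j * beta)])
assert np.allclose(W @ D @ W.conj().T, D)       # diagonal W commutes with D
assert np.allclose((U @ W) @ D @ W.conj().T, A) # tilde-U = UW, tilde-V^† = W^†
```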
By permuting the elements of $D$ we can obtain SVDs that look superficially different but are ultimately equivalent to the above.
In dyadic notation
SVD in dyadic notation removes "trivial" redundancies
The SVD of an arbitrary matrix $A$ can be written in dyadic notation as
$$A=\sum_k s_k u_k v_k^*,\tag A$$
where $s_k>0$ are the (nonzero) singular values, and $\{u_k\}_k$ and $\{v_k\}_k$ are orthonormal sets of vectors spanning $\mathrm{im}(A)$ and $\ker(A)^\perp$, respectively.
The connection between this and the more standard way of writing the SVD of $A$ as $A=UDV^\dagger$ is that $u_k$ is the $k$-th column of $U$, and $v_k$ is the $k$-th column of $V$.
Global phase redundancies are always present
If the singular values of $A$ are nondegenerate, the only freedom in the choice of the vectors $u_k,v_k$ is their global phase: replacing $u_k\mapsto e^{i\phi_k}u_k$ and $v_k\mapsto e^{i\phi_k}v_k$, with the same phase $\phi_k$ on both members of the $k$-th pair, does not affect $A$.
Degeneracy gives more freedom
On the other hand, when there are repeated singular values, there is additional freedom in the choice of $u_k,v_k$, similarly to how there is more freedom in the choice of eigenvectors corresponding to degenerate eigenvalues.
More precisely, note that (A) implies
$$AA^\dagger=\sum_k s_k^2 \underbrace{u_k u_k^*}_{\equiv\mathbb P_{u_k}},
\qquad
A^\dagger A=\sum_k s_k^2 \mathbb P_{v_k}.$$
This tells us that, whenever there are degenerate singular values, the corresponding set of principal components is defined up to a unitary rotation in the corresponding degenerate eigenspace.
In other words, the set of vectors $\{u_k\}$ in (A) can be chosen as any orthonormal basis of the eigenspace $\ker(AA^\dagger-s_k^2)$, and similarly $\{v_k\}_k$ can be any orthonormal basis of $\ker(A^\dagger A-s_k^2)$.
However, note that a choice of $\{v_k\}_k$ determines $\{u_k\}$ via $u_k=Av_k/s_k$, and vice-versa (otherwise $A$ wouldn't be well-defined and injective outside its kernel).
Summary
A choice of $U$ uniquely determines $V$, so we can restrict ourselves to reasoning about the freedom in the choice of $U$. There are two main sources of redundancy:
- Global (columnwise) phase changes: The vectors can always be rescaled by phase factors: $u_k\mapsto e^{i\phi_k}u_k$ and $v_k\mapsto e^{i\phi_k}v_k$. In matrix notation, this corresponds to changing $U\mapsto U \Lambda$ and $V\mapsto V\Lambda$ for an arbitrary diagonal unitary matrix $\Lambda$.
- Degenerate sub-block mixing: When there are "degenerate singular values" $s_k$ (that is, singular values corresponding to degenerate eigenvalues of $A^\dagger A$), there is additional freedom in the choice of $U$, which can be chosen as any matrix whose columns form a basis for the eigenspace $\ker(AA^\dagger-s_k^2)$.
Finally, we should note that the former point is included in the latter, which therefore encodes all of the freedom allowed in choosing the vectors $\{u_k\}$. This is because multiplying the elements of an orthonormal basis by phases does not affect its being an orthonormal basis.