I don't think there is an answer in terms of symmetry, per se. But I can present the proof given by MikeEarnest in an intuitive way that I think helps show roughly why the whole "product of expectations" thing works out here, even though it shouldn't in general.
A simpler question is to find the expected length of the shortest segment of a circle, after it has been subdivided by $n$ randomly placed cuts. This value is $1/n^2$, as a fraction of the circle's circumference. Again, one can "compute" this by noting that the expected size of each particular segment is $1/n$, and the chance that a given segment is the shortest is also $1/n$.
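That $1/n^2$ value is easy to confirm numerically. Here's a quick Monte Carlo sketch (my own check, not part of any proof; the name `mean_min_arc` is just mine), taking the circumference to be $1$:

```python
import numpy as np

# Cut a unit-circumference circle at n uniform points and average the
# shortest arc length over many trials; it should come out near 1/n^2.
rng = np.random.default_rng(0)

def mean_min_arc(n, trials=200_000):
    cuts = np.sort(rng.random((trials, n)), axis=1)  # n cut positions per trial
    gaps = np.diff(cuts, axis=1)                     # the n-1 arcs between adjacent cuts
    wrap = 1 - cuts[:, -1] + cuts[:, 0]              # the arc that wraps past position 0
    arcs = np.column_stack([gaps, wrap])             # all n arc lengths
    return arcs.min(axis=1).mean()

for n in (3, 5, 10):
    print(n, mean_min_arc(n), 1 / n**2)
```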
But multiplying those two numbers together is not how probability theory works, and for an actual proof we need a different technique. It turns out to be useful to think about transforming the space of outcomes.
Let's say we cut the circle and find that segment $j$ is the minimum, with size $l_\text{min}$. Well, we can subtract $l_\text{min}$ from every other segment and give all of that length, a total of $(n-1)l_\text{min}$, to segment $j$, so its new size is $l_\text{min} + (n-1)l_\text{min} = nl_\text{min}$. Since $l_\text{min}$ was the shortest length, all the other segments still have non-negative length. And since the minimum segment can have length at most $1/n$, the new length satisfies $nl_\text{min} \leq 1$. Such a transformation is invertible. We can start from a completely random division of the circle and divide segment $j$ into $n$ equal pieces: it keeps one of them, while the rest are handed out to the other segments, one each. Segment $j$ is now guaranteed to be the minimum, since its new length $l_j/n$ was also added to every other segment. If it's clearer in equations:
$$
f_j(l_1, l_2, \dots, l_n) = \bigl(l_1 - l_j,\ l_2 - l_j,\ \dots,\ \underbrace{nl_j}_{\text{entry } j},\ \dots,\ l_{n-1} - l_j,\ l_n - l_j\bigr)
$$
$$
f_j^{-1}(l_1, l_2, \dots, l_n) = \Bigl(l_1 + \tfrac{l_j}{n},\ l_2 + \tfrac{l_j}{n},\ \dots,\ \underbrace{\tfrac{l_j}{n}}_{\text{entry } j},\ \dots,\ l_{n-1} + \tfrac{l_j}{n},\ l_n + \tfrac{l_j}{n}\Bigr)
$$
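Here is the same pair of maps as a code sketch (the names `f_j` and `f_j_inv` are mine, and `j` is a 0-based index); the asserts check that the two maps undo each other and that $f_j^{-1}$ really does make segment $j$ the minimum:

```python
import numpy as np

# `lengths` holds the n segment lengths, which are nonnegative and sum to 1.
def f_j(lengths, j):
    out = lengths - lengths[j]           # subtract l_j from every segment...
    out[j] = len(lengths) * lengths[j]   # ...and give segment j the new length n*l_j
    return out

def f_j_inv(lengths, j):
    n = len(lengths)
    out = lengths + lengths[j] / n       # add l_j/n to every segment...
    out[j] = lengths[j] / n              # ...while segment j keeps only l_j/n
    return out

# Quick check on a random division of the circle into n segments.
rng = np.random.default_rng(1)
n = 6
lengths = np.diff(np.concatenate(([0.0], np.sort(rng.random(n - 1)), [1.0])))
j = 2
assert np.allclose(f_j(f_j_inv(lengths, j), j), lengths)  # the maps invert each other
assert np.argmin(f_j_inv(lengths, j)) == j                # after f_j^{-1}, segment j is minimal
```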
So far, I've described a transformation $f_j$ and its inverse. If $C$ is the set of all possible cuts of the circle, and $J_j$ is the subset of cuts where the shortest segment is segment $j$, then we have:
$$
f_j: J_j \hookrightarrow C
$$
$$
f_j^{-1}: C \hookrightarrow J_j
$$
Since $f_j$ and $f_j^{-1}$ undo each other, these two injections together tell us that $f_j$ is a bijection between these sets. (Also, $f_j$ is linear, so it doesn't do anything bad to the measure: its Jacobian is constant, so the uniform distribution on $J_j$ maps to the uniform distribution on $C$.)
$$
f_j: J_j \leftrightarrow C
$$
So we can sample from $C$ according to the original distribution by picking a uniformly random $j$ to be the minimum, sampling $c\in C$ from the original distribution, and then taking our output to be $f^{-1}_j(c)$. The virtue of this is that we now actually know that segment $j$ is definitely the minimal one. So we can compute $\langle l_\text{min}\rangle$ as follows:
- When sampling $c$, we have $\langle l_j \rangle = \frac{1}{n}$, just like any other segment.
- After applying $f_j^{-1}$, we have $l_\text{min} = \frac{1}{n}l_j$
- So $\langle l_\text{min}\rangle = \frac{1}{n^2}$
The first factor of $1/n$ comes from the expected length of a segment, just as you say. The second doesn't come directly from the probability of a segment being minimal, though. It comes from the fact that the transformation $f_j^{-1}$ splits the length of one segment among $n$ different segments.
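As a sanity check on this resampling picture (again just a sketch of mine, not part of the proof), one can draw uniform divisions, pick $j$ uniformly, and average the resulting minimum $l_j/n$:

```python
import numpy as np

# Breaking [0,1] at n-1 uniform points gives the same length distribution as
# cutting the circle at n points, so it serves as a uniform sample from C.
rng = np.random.default_rng(2)
n, trials = 6, 200_000
pts = np.sort(rng.random((trials, n - 1)), axis=1)
edges = np.hstack([np.zeros((trials, 1)), pts, np.ones((trials, 1))])
c = np.diff(edges, axis=1)            # n uniform segment lengths per trial
j = rng.integers(n, size=trials)      # the segment chosen to become the minimum
l_min = c[np.arange(trials), j] / n   # the minimum of f_j^{-1}(c) is l_j / n
print(l_min.mean(), 1 / n**2)         # the two numbers should be close
```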
Generalizing to your particular problem should now be possible: there the expected segment length is $1/(n+1)$, while the transformation analogous to $f_j^{-1}$ splits one segment into $n-1$ pieces.
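For a concrete check of that last claim, here is a simulation sketch under my reading of the problem (an assumption on my part, not something stated above): $n$ points dropped uniformly on $[0,1]$, with the minimum taken over the $n-1$ gaps between adjacent points. The same product of factors would then predict $\frac{1}{(n+1)(n-1)}$:

```python
import numpy as np

# Assumed setup (my interpretation): n uniform points on [0, 1]; we look at the
# minimum of the n-1 gaps between adjacent points. The product 1/(n+1) * 1/(n-1)
# predicts the mean of that minimum.
rng = np.random.default_rng(3)
n, trials = 6, 200_000
pts = np.sort(rng.random((trials, n)), axis=1)
min_gap = np.diff(pts, axis=1).min(axis=1)      # shortest of the n-1 adjacent gaps
print(min_gap.mean(), 1 / ((n + 1) * (n - 1)))  # simulated mean vs. predicted value
```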