The fact that, in convex optimization, the dual of the dual problem is (usually) the same as the primal problem can seem like a total surprise, and it is only rarely explained. But there is a nice, enlightening explanation that I learned from reading Ekeland and Temam (Convex Analysis and Variational Problems). This material can also be found in the book Variational Analysis by Rockafellar and Wets, starting on p. 502.
The ideas are clearest when we work at an appropriate level of generality: we don't obtain a dual problem until we specify how to perturb the primal problem.
Suppose that the primal problem is
$$
\operatorname{minimize}_x \,\phi(x,0),
$$ where $\phi:\mathbb R^m \times \mathbb R^n \to \mathbb R \cup \{ \infty \}$ is a jointly convex function. For a given $y \in \mathbb R^n$, the problem of minimizing $\phi(x,y)$ with respect to $x$ can be viewed as a "perturbed" version of the primal problem. Let's introduce the "value function" $h(y) = \inf_x \, \phi(x,y)$, so that the primal problem is to evaluate $h(0)$. If we have a basic understanding of the Fenchel conjugate, then we know that $h(0) \geq h^{**}(0)$ always, with equality essentially whenever $h$ is lower semicontinuous at $0$ (note that $h$ is automatically convex because $\phi$ is jointly convex). The dual problem is simply to evaluate $h^{**}(0)$.
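Since everything here is a one-dimensional inf or sup, it can be checked numerically by brute force. The bifunction $\phi(x,y) = (x-1)^2 + (x-y)^2$ below and the grid choices are my own, not from the text; they are only meant to illustrate the value function and its biconjugate at $0$:

```python
import numpy as np

# A toy jointly convex bifunction (my own choice, for illustration only):
# phi(x, y) = (x - 1)^2 + (x - y)^2.
def phi(x, y):
    return (x - 1.0) ** 2 + (x - y) ** 2

xs = np.linspace(-10, 10, 2001)
ys = np.linspace(-10, 10, 2001)
zs = np.linspace(-5, 5, 1001)

# Value function h(y) = inf_x phi(x, y), by brute force on a grid.
X, Y = np.meshgrid(xs, ys, indexing="ij")
h = phi(X, Y).min(axis=0)                     # h[j] ~ h(ys[j])

# Conjugate h*(z) = sup_y <y, z> - h(y).
h_star = (np.outer(zs, ys) - h).max(axis=1)   # h_star[k] ~ h*(zs[k])

# Biconjugate at 0: h**(0) = sup_z <0, z> - h*(z) = -min_z h*(z).
h_bistar_0 = -h_star.min()

primal = h[np.argmin(np.abs(ys))]             # h(0), the primal optimal value
print(primal, h_bistar_0)                     # both should be close to 0.5
```

For this smooth, jointly convex $\phi$, the value function works out to $h(y) = (y-1)^2/2$, so $h(0) = h^{**}(0) = 1/2$, and the two printed numbers agree up to grid resolution.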
Let's try to write the dual problem more explicitly.
First of all,
\begin{align}
h^*(z) &= \sup_y \, \langle y, z \rangle - h(y) \\
&= \sup_y \, \langle y, z \rangle - \inf_x \, \phi(x,y) \\
&= \sup_y \, \langle y, z \rangle + \sup_x - \phi(x,y) \\
&= \sup_{x,y} \, \langle x, 0 \rangle + \langle y, z \rangle - \phi(x,y) \\
&= \phi^*(0,z).
\end{align}
It follows that
\begin{align}
h^{**}(0) &= \sup_z \, \langle 0, z \rangle - \phi^*(0,z) \\
&= - \inf_z \, \phi^*(0,z).
\end{align}
So the dual problem, written as a minimization problem, is
$$
\operatorname{minimize}_z \, \phi^*(0,z),
$$
and $h^{**}(0)$ is the negative of its optimal value.
Look at the beautiful similarity between the primal and dual problems.
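The identity $h^*(z) = \phi^*(0,z)$ derived above can also be checked numerically. The bifunction $\phi(x,y) = (x-1)^2 + (x-y)^2$ and the grids below are my own choices, for illustration only:

```python
import numpy as np

# A toy jointly convex bifunction (my own choice, for illustration only).
def phi(x, y):
    return (x - 1.0) ** 2 + (x - y) ** 2

xs = np.linspace(-10, 10, 801)
ys = np.linspace(-10, 10, 801)
zs = np.linspace(-5, 5, 401)
X, Y = np.meshgrid(xs, ys, indexing="ij")
P = phi(X, Y)

# Route 1: conjugate of the value function, h*(z) = sup_y <y, z> - h(y).
h = P.min(axis=0)                              # h(y) = inf_x phi(x, y)
h_star = (np.outer(zs, ys) - h).max(axis=1)

# Route 2: phi*(0, z) = sup_{x,y} <y, z> - phi(x, y), directly.
phi_star_0 = np.array([(z * Y - P).max() for z in zs])

print(np.max(np.abs(h_star - phi_star_0)))     # ~0: the two routes agree
```

The agreement is exact on the grid, since taking the sup over $(x,y)$ jointly is the same as taking the inf over $x$ first and the sup over $y$ afterward.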
We did not obtain a dual problem until we specified how to perturb the primal problem. So, what if we now perturb the dual problem in the obvious way? A perturbed dual problem is
$$
\operatorname{minimize}_z \, \phi^*(w,z).
$$
Now that we have specified how to perturb the dual problem, we can obtain a dual for the dual problem, in exactly the same manner as above. And you can see immediately what the dual of the dual problem will be, without doing any work.
The dual of the dual problem is:
$$
\operatorname{minimize}_x \, \phi^{**}(x,0).
$$
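To spell out the symmetry: set $\psi(z,w) = \phi^*(w,z)$, so that the dual problem is to minimize $\psi(z,0)$ over $z$, perturbed in its second argument exactly as the primal was. Its conjugate is
\begin{align}
\psi^*(u,v) &= \sup_{z,w} \, \langle z, u \rangle + \langle w, v \rangle - \phi^*(w,z) \\
&= \phi^{**}(v,u),
\end{align}
so the recipe above produces $\operatorname{minimize}_x \, \psi^*(0,x) = \operatorname{minimize}_x \, \phi^{**}(x,0)$.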
But typically we have $\phi^{**} = \phi$, in which case the dual of the dual problem is exactly the primal problem.
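The identity $\phi^{**} = \phi$ holds whenever $\phi$ is a closed (lower semicontinuous) proper convex function; this is the Fenchel–Moreau theorem. A quick one-dimensional check — the test function $f(x) = |x| + x^2/2$ and the grids are my own choices:

```python
import numpy as np

# A closed convex test function (my own choice): f(x) = |x| + x^2 / 2.
def f(x):
    return np.abs(x) + 0.5 * x ** 2

xs = np.linspace(-8.0, 8.0, 1601)
zs = np.linspace(-6.0, 6.0, 1201)

# f*(z) = sup_x <x, z> - f(x), by brute force on the grid.
f_star = (np.outer(zs, xs) - f(xs)).max(axis=1)

# f**(x) = sup_z <x, z> - f*(z).
f_bistar = (np.outer(xs, zs) - f_star).max(axis=1)

# On a region safely inside both grids, f** should match f.
mask = np.abs(xs) <= 4.0
err = np.max(np.abs(f_bistar[mask] - f(xs[mask])))
print(err)  # a tiny number (grid/rounding error only)
```

The comparison is restricted to $|x| \leq 4$ because near the edges of a finite grid the brute-force sup is truncated and the conjugates are no longer accurate.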
You might wonder how this dual problem construction connects to the standard dual problem construction (where you first form the Lagrangian, etc.). Suppose the primal problem is
\begin{align}
\text{minimize} & \quad f(x) \\
\text{subject to} & \quad g(x) \leq 0.
\end{align}
A perturbed problem is
\begin{align}
\text{minimize} & \quad f(x) \\
\text{subject to} & \quad g(x) + y \leq 0.
\end{align}
Having specified how to perturb the primal problem, we now obtain a dual problem, and if you work out the details, it turns out to be exactly the Lagrangian dual problem that you would expect. I gave more details here:
Please explain the intuition behind the dual problem in optimization.
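Here is a sketch of those details. Encode the constrained problem with the bifunction $\phi(x,y) = f(x)$ if $g(x) + y \leq 0$ and $\phi(x,y) = +\infty$ otherwise. Then
\begin{align}
\phi^*(0,z) &= \sup_x \, \sup_{y \,\leq\, -g(x)} \, \langle y, z \rangle - f(x) \\
&= \begin{cases} \sup_x \, -\langle g(x), z \rangle - f(x) & \text{if } z \geq 0, \\ +\infty & \text{otherwise}, \end{cases}
\end{align}
because $\sup_{y \leq -g(x)} \langle y, z \rangle$ equals $-\langle g(x), z \rangle$ when $z \geq 0$ and $+\infty$ otherwise. So minimizing $\phi^*(0,z)$ amounts to maximizing $\inf_x \, f(x) + \langle z, g(x) \rangle$ over $z \geq 0$, which is exactly the usual Lagrangian dual.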