There is a class of problems in multivariable calculus exemplified by the following: $$w = x^3 y - z^2 t,\qquad xy = zt,\qquad \text{find } \left(\frac{\partial w}{\partial z}\right)_{x,\,y}.$$ If I were to rewrite it in a form spiritually closer to a constrained optimization problem, I might eliminate $w$: $$\left(\frac{\partial}{\partial z}\right)_{x,\,y} \left(x^3 y - z^2 t\right)$$ $$\text{s.t.}\quad xy = zt$$ Two solution methods are available for this specific problem, but my focus is on this class of problems in general; if fewer variables would be more illustrative, answers should feel free to ignore this example. The most unexpected part of the solution process is that $\frac{\partial t}{\partial z}$ is not $0$.
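For concreteness, here is a sketch of the two computations I have in mind (assuming $z \neq 0$, so the constraint can be solved for $t$). Eliminating $t$ first:

$$t = \frac{xy}{z} \implies w = x^3 y - xyz \implies \left(\frac{\partial w}{\partial z}\right)_{x,\,y} = -xy.$$

Alternatively, differentiating through the constraint, with $\left(\frac{\partial t}{\partial z}\right)_{x,\,y} = -\frac{xy}{z^2} \neq 0$:

$$\left(\frac{\partial w}{\partial z}\right)_{x,\,y} = \frac{\partial w}{\partial z} + \frac{\partial w}{\partial t}\left(\frac{\partial t}{\partial z}\right)_{x,\,y} = -2zt - z^2\left(-\frac{xy}{z^2}\right) = -2zt + xy = -xy,$$

where the last step uses $zt = xy$.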
I understand how to solve these problems, but I don't have a good sense of what they are, either geometrically or in relation to constrained optimization problems. The most natural interpretation of the term "constrained partial derivative" seems trivial upon further inspection, at least for analytic solutions. I wanted to view these as a sort of precursor to, or generalization of, constrained optimization problems. Picture optimization as taking place across a surface in $\mathbb{R}^3$: each constraint progressively restricts focus to smaller and smaller subsets of points in the input space, and setting the gradient of the objective to $0$, or proportional to the gradients of the constraints, acts as a sort of "quasi-constraint" restricting focus to a still smaller subset of input points, at least in an extremely crude sense. And it doesn't take too much imagination to suppose that if one can take constrained partial derivatives then one can take constrained gradients. So perhaps, the thought went, constrained derivatives are a generalization of constrained optimization, in which we care about derivative information everywhere consistent with the constraints, not just at critical points. But this seems trivial, as mentioned, because an analytic solution to a derivative problem holds for all points in the input space, so in particular it holds on any subset of points defined by constraints.
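To make that "trivial" reading concrete in the example: treating $x, y, z, t$ as independent gives $$\frac{\partial w}{\partial z} = -2zt \quad \text{everywhere in } \mathbb{R}^4,$$ and this formula does remain valid at every point satisfying $xy = zt$. But there it disagrees with $\left(\frac{\partial w}{\partial z}\right)_{x,\,y} = -xy$ (the two agree only where $xy = 0$), so whatever the constrained derivative measures, it is not simply the unconstrained derivative restricted to the constraint set; this is perhaps another way of phrasing my confusion.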
Another idea is that this class of problems is about taking derivatives over input spaces in which at least one dimension has been (possibly nonlinearly) transformed relative to the standard orthonormal basis, which might explain why $\frac{\partial t}{\partial z}$ is not $0$ above. If so, constrained optimization may or may not be related.
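A sketch of what this picture might mean in the example (my own attempt, again assuming $z \neq 0$): holding $x$ and $y$ fixed, the constraint set is traced out by the curve $$\gamma(z) = \left(x, y, z, \tfrac{xy}{z}\right), \qquad \gamma'(z) = \left(0, 0, 1, -\tfrac{xy}{z^2}\right),$$ so the constrained partial derivative looks like a directional derivative of $w$ along $\gamma'$, a direction whose $t$-component is exactly the nonzero $\frac{\partial t}{\partial z}$: $$\left(\frac{\partial w}{\partial z}\right)_{x,\,y} = \nabla w \cdot \gamma'(z) = -2zt + xy = -xy.$$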
How can the geometry of this class of problems be understood? How is it related to constrained optimization, if at all?