When constructing minimax (sup-norm) polynomial approximations of real-valued functions, the classical equioscillation theorem of Chebyshev says (roughly speaking) that optimal solutions are characterized by having equi-oscillatory errors. Are there generalisations of this result to other kinds of approximations?
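For reference, the precise one-dimensional statement I have in mind is: a polynomial $p^*$ of degree at most $n$ is the best sup-norm approximation to a continuous $f$ on $[a,b]$ if and only if the error attains its maximal modulus at $n+2$ points $a \le t_0 < t_1 < \cdots < t_{n+1} \le b$ with alternating signs,
$$f(t_i) - p^*(t_i) = \sigma(-1)^i\,\|f - p^*\|_\infty, \qquad i = 0,\dots,n+1,$$
for some fixed sign $\sigma \in \{-1,+1\}$.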
I'm especially interested in minimax approximations of curves in two or three dimensions. Take for example the circle $x^2 + y^2 = 1$, or its first quadrant. I have constructed very good approximations using polynomials $P(t) = (x(t), y(t))$, and I find that they are equi-oscillatory, in the sense that the successive extrema of the error function $x(t)^2 + y(t)^2 - 1$ have equal magnitude and alternating signs. I'd like to know if there's any theory that supports this experimental finding. A stripped-down version of the experiment is sketched below.
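Here is a crude illustration of the kind of experiment I mean. The degree, the parameter interval $t \in [0,1]$, the starting guess, and the direct Nelder–Mead search for the sampled minimax are all illustrative choices, not my actual construction:

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 400)  # curve parameter samples

def curve(coeffs):
    # Split the flat coefficient vector into the two coordinate polynomials.
    cx, cy = np.split(coeffs, 2)
    return np.polyval(cx, t), np.polyval(cy, t)

def sup_error(coeffs):
    # Sampled sup-norm of the implicit error x(t)^2 + y(t)^2 - 1.
    x, y = curve(coeffs)
    return np.max(np.abs(x**2 + y**2 - 1.0))

# Start from least-squares fits to (cos(pi t / 2), sin(pi t / 2)),
# degree 3 in each coordinate, then minimize the sampled sup-norm directly.
cx0 = np.polyfit(t, np.cos(np.pi * t / 2), 3)
cy0 = np.polyfit(t, np.sin(np.pi * t / 2), 3)
res = minimize(sup_error, np.concatenate([cx0, cy0]),
               method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-12})

x, y = curve(res.x)
err = x**2 + y**2 - 1.0
print("sup-norm of implicit error:", np.max(np.abs(err)))
print("sign changes in the residual:", int(np.count_nonzero(np.diff(np.sign(err)))))
```

Plotting `err` against `t` is the quickest way to see the oscillation pattern of the residual.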
Of course, I could just write the circle quadrant as $x=\cos t$, $y=\sin t$, and approximate the sine and cosine functions separately. But that is a different problem, and it gives circle approximations that are significantly inferior (as measured by the same implicit error) to the ones I constructed; see the comparison sketch below. So, decomposing the 2D problem into two 1D ones is not what I'm after.
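For comparison, the component-wise route, again only a sketch: here interpolation at Chebyshev nodes stands in for the true minimax approximations of $\cos$ and $\sin$, which it comes close to.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Near-minimax component-wise approximations of cos and sin on [0, pi/2]
# (interpolation at Chebyshev points), same degree as above.
fx = Chebyshev.interpolate(np.cos, 3, domain=[0, np.pi / 2])
fy = Chebyshev.interpolate(np.sin, 3, domain=[0, np.pi / 2])

s = np.linspace(0.0, np.pi / 2, 400)
err = fx(s)**2 + fy(s)**2 - 1.0
print("component-wise sup-norm of implicit error:", np.max(np.abs(err)))
```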
In three dimensions, my "curve" would be given by a pair of equations $f(x,y,z)=0$ and $g(x,y,z)=0$. In this case, I don't even know how to define "equi-oscillatory", or for that matter "oscillation", since the error along $P(t)$ now has two components, $f(P(t))$ and $g(P(t))$.
I asked this question on Math.StackExchange and got no response.