
When constructing minimax (sup-norm) polynomial approximations of real-valued functions, well-known results say (roughly speaking) that optimal solutions are characterized by the fact that they have equi-oscillatory errors.

Can this be generalised to cover approximation of 2D or 3D curves?

The simplest example is the first quadrant of the circle $x^2 + y^2 = 1$. I have constructed very good approximations using polynomials $P(t) = (u(t), v(t))$, and I find that they are equi-oscillatory, in the sense that the error function $u(t)^2 + v(t)^2 -1$ oscillates equally about zero. I'd like to know if there's any theory that supports this experimental finding.

Of course, I could just write the circle quadrant as $x=\cos t$, $y=\sin t$, and approximate the sine and cosine functions by polynomials on $[0, \tfrac12 \pi]$. But this is a different problem, and this approach gives circle approximations that are significantly inferior to the ones I constructed. So, decomposing the 2D problem into two 1D ones is not what I'm after.
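To make the comparison concrete, here is a small sketch of the componentwise route, measuring the radial error it produces. The degree, the node choice, and the use of Chebyshev interpolation as a stand-in for true minimax approximation of each coordinate are all illustrative assumptions on my part:

```python
import numpy as np

# Componentwise route: approximate cos and sin separately on [0, pi/2].
# Assumption: degree-5 Chebyshev interpolation as a near-minimax proxy
# for each coordinate (not the true minimax polynomials).
n = 6  # number of nodes -> degree-5 interpolants
k = np.arange(n)
# Chebyshev nodes on [-1, 1], mapped affinely to [0, pi/2]
nodes = (np.pi / 4) * (1 + np.cos((2 * k + 1) * np.pi / (2 * n)))
u = np.polynomial.Polynomial.fit(nodes, np.cos(nodes), deg=5)
v = np.polynomial.Polynomial.fit(nodes, np.sin(nodes), deg=5)

t = np.linspace(0, np.pi / 2, 10001)
coord_err = max(np.abs(u(t) - np.cos(t)).max(),
                np.abs(v(t) - np.sin(t)).max())
# Error measured against the circle itself: distance of (u, v) from radius 1
radial_err = np.abs(np.hypot(u(t), v(t)) - 1.0).max()
print(coord_err, radial_err)
```

The point of the comparison is that the radial error here is essentially inherited from the two coordinate errors, whereas optimizing $u$ and $v$ jointly against the distance to the circle can trade coordinate accuracy for a smaller maximum radial error.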

In three dimensions, my "curve" would be given by a pair of equations $f(x,y,z)=0$ and $g(x,y,z)=0$. In this case, I don't even know how to define "equi-oscillatory" or even "oscillation".

I asked this question on Math.Stackexchange, and got zero response.

For circles, specifically, there are some good results in the answers to this related question, but no progress on more general curves or any underlying theory.

Edit: Here is a more formal/rigorous statement of the 2D problem. We are given a function $f:\mathbf{R}^2 \to \mathbf{R}$, and we regard the set $F = \{(x,y) \in \mathbf{R}^2 : f(x,y)=0\}$ as a planar curve. Let $P_0 = (x_0, y_0)$ and $P_1 = (x_1, y_1)$ be two points in $F$ (i.e. two points on our curve). We can assume that $P_0$ and $P_1$ belong to the same connected component of $F$. For each $(x,y) \in \mathbf{R}^2$, let $d(x,y)$ denote the Euclidean distance from $(x,y)$ to the set $F$. I want to find polynomials $u:[0,1] \to \mathbf{R}$ and $v:[0,1] \to \mathbf{R}$ such that $$ u(0) = x_0 \quad ; \quad u(1) = x_1 \\ v(0) = y_0 \quad ; \quad v(1) = y_1 $$ and such that $$ \max\big\{ d\big(u(t),v(t)\big) : t \in [0,1]\big\} $$ is minimised. In other words, I want the curve $t \mapsto \big( u(t), v(t) \big)$ to be an optimal approximation of the portion of my original curve lying between the points $P_0$ and $P_1$. And I'm interested in knowing whether this optimal solution is equi-oscillatory, in some sense.

A simple concrete example is $f(x,y) = x^2y^2 + (x-1)(x-2)$, with $P_0 = (0,0)$ and $P_1=(2,0)$. I want an optimal approximation of the piece where $y \ge 0$. It looks like this: [image: an egg-shaped oval]
