
Let $B$ and $S:=\partial B$ denote, respectively, the unit ball and the unit sphere with respect to some norm $\|\cdot\|$ on $\Bbb R^2$. For a fixed real $r\in(0,1]$ and all $u\in S$, let $$D_u:=(B+ru)\cap(B-ru).$$ For each $u\in S$, let $M_u$ stand for the set of all maximizers of $\|x\|$ over all $x\in D_u$.

Is it then always true that for each $u\in S$ there is some $x_u\in M_u$ such that the family $(x_u)_{u\in S}$ is continuous?

If desired, one may assume here that the norm is a smooth or even polynomial function and/or that the ball $B$ is strictly convex; alternatively, one may instead assume that $B$ is a polytope.
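For concreteness, here is a minimal numerical sketch of these definitions, approximating $M_u$ on a grid. The Euclidean norm is used as a stand-in and the function names are only illustrative; any norm on $\Bbb R^2$ could be swapped in:

```python
import numpy as np

def norm(p):
    # Euclidean norm as a stand-in; any norm on R^2 can be substituted.
    return np.linalg.norm(p, axis=-1)

def approx_maximizers(u, r, n=801, tol=1e-3):
    """Grid approximation of M_u, the maximizers of ||x|| over D_u."""
    u = np.asarray(u, dtype=float)
    xs = np.linspace(-1.0, 1.0, n)
    P = np.stack(np.meshgrid(xs, xs), axis=-1)            # candidate points
    in_Du = (norm(P - r * u) <= 1.0) & (norm(P + r * u) <= 1.0)
    vals = np.where(in_Du, norm(P), -np.inf)              # objective on D_u
    return P[vals >= vals.max() - tol]

# For the Euclidean norm, u = (1, 0), r = 1/2, the returned points
# cluster around (0, +-sqrt(3)/2).
print(approx_maximizers((1.0, 0.0), 0.5))
```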

  • This question is a specific version of the previous one. (Commented Mar 28 at 11:39)
  • So now, this is always true for the Euclidean norm, right? But for a general norm it is unclear. (Commented Mar 28 at 11:46)
  • @BenjaminVejnar : Right, the problem is with norms other than the Euclidean norm. (Commented Mar 28 at 12:01)

1 Answer


I prove the result only for strictly convex norms.

THEOREM If $\|\cdot\|$ is a strictly convex norm, then there exist two (one if $r=1$) continuous families $(x_u)_{u\in S}$ of maximisers of $\|\cdot\|$ constrained to $D_u$.

Let us start with a pictorial representation of the setting, for the case of the strictly convex norm $$\|(x,y)\| := \sqrt{x^2+0.5y^2 + xy} + 0.5|x| + |y|,$$ whose level curves are shown below. This gives a visual idea of the line of the proof.

[Figure: general illustration of the setting]
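A short sketch that would reproduce such a picture (assuming matplotlib; the contour levels are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

# Level curves of the example norm ||(x,y)|| above; the quadratic form
# x^2 + 0.5 y^2 + xy is positive definite, so the square root is real.
def example_norm(x, y):
    return np.sqrt(x**2 + 0.5 * y**2 + x * y) + 0.5 * np.abs(x) + np.abs(y)

xs = np.linspace(-1.5, 1.5, 601)
X, Y = np.meshgrid(xs, xs)
plt.contour(X, Y, example_norm(X, Y), levels=np.linspace(0.2, 1.4, 7))
plt.gca().set_aspect("equal")
plt.show()
```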

The case $r = 1$ is trivially true, since $D_u = \{\mathbf{0}\}$ for all $u\in S$. The continuous family of maximisers is $\{\mathbf{0}\}_{u\in S}$. Let us now consider the case $r \in ]0,1[$.

LEMMA 1 If $\|\cdot\|$ is strictly convex, then $\partial (B + ru)$ and $\partial (B - ru)$ intersect at exactly two distinct points $\{P,Q\}$ that vary continuously with $u$.

Proof. Without loss of generality, fix $u$ and choose a reference system such that $u = \frac{[1,0]}{\|[1,0]\|} \in S$. From convexity, the curve $S$ is piecewise differentiable and the outward direction $\widehat{n}(x)$, $x\in S$, is well-defined even at corner points (in the non-smooth case, we use the notion of tangent cone, and the proof still works). From strict convexity, the angle $\theta(x)$ between $u$ and $\widehat{n}(x)$ is strictly monotonic (and multivalued in the non-smooth case) as $x$ moves along $S$. Therefore, there exists exactly one point $P_1$ such that $\theta(P_1) = \frac{\pi}{2}$ and exactly one point $P_2$ such that $\theta(P_2) = \frac{3\pi}{2}$. The points $P_1$ and $P_2$ split $S$ into a right and a left part, defined as $$S_R := \{x\in S \ | \ \widehat{n}(x)\cdot u > 0\};\\ S_L := \{x\in S \ | \ \widehat{n}(x)\cdot u < 0\}.$$

[Figure: illustration of Lemma 1]

From strict convexity, $S_R$ and $S_L$ can be viewed as the graphs of two continuous and piecewise differentiable functions $x = f_R(y)$ and $x = f_L(y)$, respectively, for $y \in [y_{P_2}, y_{P_1}]$. These functions fulfil:

  • $f_R$ is a strictly concave function
  • $f_L$ is a strictly convex function
  • $f_R(y_{P_1}) = f_L(y_{P_1}) = x_{P_1}$
  • $f_R(y_{P_2}) = f_L(y_{P_2}) = x_{P_2}$

The intersections between $\partial (B + ru)$ and $\partial (B - ru)$ are clearly given by the solutions of the equation $f_R(y)-r u_x = f_L(y) +r u_x$, i.e. $$f_R(y) - f_L(y) -2ru_x = 0. \label{inters_equation}\tag{1}$$

The function $g(y) := f_R(y) - f_L(y) - 2ru_x$ fulfils:

  • $g$ is a strictly concave function
  • $g(y_{P_1}) = g(y_{P_2}) = - 2ru_x < 0$
  • there exists $\overline{y} \in [y_{P_2}, y_{P_1}]$ such that $g(\overline{y}) > 0$. This is true because $\mathring{D_u} \neq \emptyset$ since $r \in ]0,1[$.

These properties imply that \eqref{inters_equation} has exactly two solutions, say $y_Q < y_P$. Let $P := (f_R(y_P)-ru_x, y_P)$ and $Q := (f_R(y_Q)-ru_x, y_Q)$.
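As a sanity check, here is a small numerical sketch of equation \eqref{inters_equation} for the Euclidean norm with $u = (1,0)$, so that $f_R(y) = \sqrt{1-y^2}$ and $f_L(y) = -\sqrt{1-y^2}$ (assuming SciPy; the two roots of $g$ are the heights of $P$ and $Q$):

```python
import numpy as np
from scipy.optimize import brentq

# g(y) = f_R(y) - f_L(y) - 2 r u_x for the Euclidean norm, u = (1, 0):
# g is strictly concave, negative at the endpoints, positive at y = 0,
# hence it has exactly two roots, as in the proof of Lemma 1.
r = 0.5
g = lambda y: 2.0 * np.sqrt(1.0 - y**2) - 2.0 * r

y_P = brentq(g, 0.0, 1.0)    # root above the midpoint
y_Q = brentq(g, -1.0, 0.0)   # root below the midpoint
print(y_Q, y_P)              # ~ -+ sqrt(1 - r^2) = -+ 0.8660...
```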

Now for the continuity w.r.t. $u$. Passing to the reference system used in the proof amounts to applying a rotation that depends continuously on $u$ (even for non-smooth norms). Therefore, from strict convexity, $P_1$ and $P_2$ in turn vary continuously with such a rotation, and so do $f_L$, $f_R$, $g$, and the zeros of $g$. This completes the proof.

[Figure: illustration of Lemma 1]

LEMMA 2 The following sets are symmetric w.r.t. the origin:

  • $D_u$ and $\partial D_u$
  • $\partial (B+ru) \cap \partial (B-ru) = \{P,-P\}$, i.e. the points $P,Q$ of Lemma 1 are opposite to each other.

Proof. First, since every norm is an even function, $B$ and $S$ are both symmetric w.r.t. the origin. Let $x \in D_u$. Then $x$ can be expressed as $x = y+ru = z-ru$, where $y,z\in B$. But from the symmetry of $B$, also $-y, -z \in B$. Therefore also $-y-ru = -z+ru = -x \in D_u$, and this proves that $D_u$ is symmetric. If, in addition, $x\in\partial D_u$, then either $\|y\| = 1$ or $\|z\| = 1$, hence $\|-y\| = 1$ or $\|-z\| = 1$, and therefore $-x\in\partial D_u$. If, in addition, $x\in \partial (B+ru) \cap \partial (B-ru)$, then $\|y\| = \|z\| = 1$, which implies $\|-y\| = \|-z\| = 1$, i.e. $-x\in \partial (B+ru) \cap \partial (B-ru)$, and this completes the proof.
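A quick randomized check of this symmetry, again with the Euclidean norm as a stand-in:

```python
import numpy as np

# Randomized check of Lemma 2: whenever x lies in D_u, so does -x.
# Euclidean norm as a stand-in; any norm can be substituted.
rng = np.random.default_rng(0)
u, r = np.array([1.0, 0.0]), 0.5

def in_Du(x):
    return np.linalg.norm(x - r * u) <= 1.0 and np.linalg.norm(x + r * u) <= 1.0

samples = rng.uniform(-1.0, 1.0, size=(20_000, 2))
assert all(in_Du(-x) for x in samples if in_Du(x))
print("symmetry check passed")
```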

LEMMA 3 It holds that $D_u \subset (B+tu) \cap (B-tu)$ for all $t \in [-r,r]$. In particular (the case $t=0$), $D_u \subset B$.

Proof. Let $x \in D_u$. By definition of $D_u$, there exist $y,z\in B$ such that $x = y+ru = z-ru$, which yields $z = y+2ru$. From the convexity of $B$ (the segment between $y$ and $z = y+2ru$ lies in $B$) we have that $y+2tu \in B$ for all $t\in [0,r]$. Analogously, $z-2tu \in B$ for all $t\in [0,r]$. This means that $x$ can be expressed as $x = \overline{y} + tu = \overline{z} - tu$, where $\overline{y} := y + (r-t)u \in B$ and $\overline{z} := z -(r-t)u \in B$. Therefore, $x\in (B+tu) \cap (B-tu)$ for all $t\in [0,r]$; since $(B+tu)\cap(B-tu)$ is unchanged under $t\mapsto -t$, the same holds for $t\in[-r,0]$, and this completes the proof.

LEMMA 4 All maximisers of $\|x\|$ constrained to $D_u$ must actually lie on $\partial D_u$.

Proof. If $x\in\mathring{D_u}$, there exists a ball $B_\ell (x) \subset D_u$, therefore (assuming $x\neq\mathbf{0}$; otherwise any nonzero point of $B_\ell(x)$ already does better) $\overline{x} := \frac{\|x\| + \ell}{\|x\|}x$ fulfils $\|\overline{x}-x\| = \ell$ and $\|\overline{x}\| = \|x\|+\ell > \|x\|$. Since $D_u$ is closed, $\overline{x} \in \overline{B_\ell(x)} \subset D_u$, and so $x$ does not maximise $\|\cdot\|$ over $D_u$. This completes the proof.

LEMMA 5 If $\|\cdot\|$ is strictly convex, there exists $0<\overline{\alpha} < 1$ such that $D_u\subset B_{\alpha}(\mathbf{0})$ if and only if $\alpha \ge \overline{\alpha}$.

Proof. We start by proving that there exists $0<\alpha<1$ such that $D_u \subset B_\alpha(\mathbf{0})$. We will use the notation introduced in the proof of Lemma 1. By construction, it follows that $$S_L \cap (S_L+ru) = \emptyset;\\ S_R \cap (S_R-ru) = \emptyset.$$ These two conditions imply that $$S_L \cap D_u = S_R \cap D_u = \emptyset.\label{empty1}\tag{2}$$

Now suppose by contradiction that $P_1 \in D_u$. Then there exists $x_1\in B$ such that $P_1 = x_1+ru$, which yields $x_1 = P_1-ru \in B$, which is not possible thanks to the strict convexity of $S$ (the tangent line to $S$ at $P_1$ is parallel to $u$, so $P_1-ru$ lies outside $B$). We reason similarly for $P_2$ and we get $$P_1\notin D_u, \quad P_2\notin D_u. \label{empty2}\tag{3}$$ Since $S = S_L \cup S_R \cup \{P_1,P_2\}$, from \eqref{empty1} and \eqref{empty2} we finally have $$S \cap D_u = \emptyset.$$

Since $S=\partial B$ and $D_u$ are compact and disjoint, and $D_u \subset B$ from Lemma 3, $\mathring{B}\setminus D_u$ contains an open stripe of width $\delta > 0$ w.r.t. the norm $\|\cdot\|$ (this is true since all norms are equivalent in $\mathbb{R}^2$). Therefore, it is possible to shrink the radius of $B$ by $\delta$, i.e. $$D_u \subset B_{1-\delta}(\mathbf{0}).$$

We are now allowed to define $$\overline{\alpha} := \inf \{0 < \alpha < 1 \mid D_u \subset B_\alpha(\mathbf{0})\}.$$ Since the balls $B_\alpha(\mathbf{0})$ are nested, $$D_u \subset B_\alpha (\mathbf{0}) \qquad \forall \alpha > \overline{\alpha}.$$ We are left to show that $D_u \subset B_{\overline{\alpha}}(\mathbf{0})$. To this end, $$D_u \subset \bigcap_{\alpha > \overline{\alpha}} B_\alpha(\mathbf{0}) = B_{\overline{\alpha}}(\mathbf{0}),$$ and this completes the proof.
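A small numerical sketch of Lemma 5, again with the Euclidean norm as a stand-in: since $D_u \subset B_\alpha(\mathbf{0})$ if and only if $\alpha \ge \max_{x\in D_u}\|x\|$, a grid maximum approximates $\overline{\alpha}$.

```python
import numpy as np

# Estimate alpha_bar = max ||x|| over D_u on a grid (Euclidean stand-in).
u, r = np.array([1.0, 0.0]), 0.5
xs = np.linspace(-1.0, 1.0, 1201)
P = np.stack(np.meshgrid(xs, xs), axis=-1)
in_Du = (np.linalg.norm(P - r * u, axis=-1) <= 1.0) & \
        (np.linalg.norm(P + r * u, axis=-1) <= 1.0)
alpha_bar = np.linalg.norm(P[in_Du], axis=-1).max()
print(alpha_bar)   # ~ sqrt(1 - r^2) = 0.8660... for this norm and r
```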

LEMMA 6 The only points of $\partial D_u$ that also lie on the optimal sphere $\partial B_{\overline{\alpha}}(\mathbf{0})$ are the points $\{P,-P\}$ of Lemma 2. Therefore, $P$ and $-P$ are the only maximisers of $\|\cdot\|$ constrained to $D_u$.

[Figure: illustration of Lemma 6]

Proof. We use the notation of Lemma 1. Since all level sets of $\|\cdot\|$ are rescalings of $S$, we can parametrise $\partial B_{\overline{\alpha}}(\mathbf{0})$ similarly to $S$, i.e. with two curves $$x = \widetilde{f}_L(y) := \overline{\alpha}f_L\left(\frac{y}{\overline{\alpha}}\right) \text{ and } x = \widetilde{f}_R(y) := \overline{\alpha}f_R\left(\frac{y}{\overline{\alpha}}\right), \quad y \in [\overline{\alpha}y_{P_2}, \overline{\alpha}y_{P_1}].\label{parametr_alpha}\tag{4}$$ Suppose by contradiction that there exists $T \in \partial B_{\overline{\alpha}}(\mathbf{0}) \cap (\partial D_u \setminus \{-P,P\})$. There are two cases:

  1. $T \in S_L+ru$;
  2. $T \in S_R-ru$.

We will address case 1) since case 2) is similar.

We start by proving that $\widetilde{f}'_L(y_T) = f'_L(y_T)$. Since $T\notin \{-P,P\}$ (i.e. $T$ is neither the lowest nor the highest point of $D_u$), $\widetilde{f}'_L(y_T) \neq f'_L(y_T)$ would imply that $\partial B_{\overline{\alpha}}(\mathbf{0})$ crosses $\partial D_u$ at $T$, and therefore $B_{\overline{\alpha}}(\mathbf{0}) \not\supseteq D_u$, a contradiction. Therefore $$\widetilde{f}'_L(y_T) = f'_L(y_T),\label{same_tangent}\tag{5}$$ i.e. both curves have the same tangent at $y_T$.

By differentiating \eqref{parametr_alpha} and evaluating at $y = y_T$, we have $$\widetilde{f}'_L(y_T) = f'_L\left(\frac{y_T}{\overline{\alpha}}\right).\label{first_der}\tag{6}$$ By combining \eqref{same_tangent} and \eqref{first_der} we get $$f'_L(y_T) = f'_L\left(\frac{y_T}{\overline{\alpha}}\right).$$ From strict convexity, $f'_L$ is strictly increasing, so $y_T = \frac{y_T}{\overline{\alpha}}$, which, using $\overline{\alpha} < 1$, gives $$y_T = 0.\label{yt0}\tag{7}$$

Differentiating \eqref{parametr_alpha} twice and using \eqref{yt0} gives $$\widetilde{f}''_L(y_T) = \frac{1}{\overline{\alpha}}f''_L\left(\frac{y_T}{\overline{\alpha}}\right) = \frac{1}{\overline{\alpha}} f''_L(0) = \frac{1}{\overline{\alpha}} f''_L(y_T).$$ So $\widetilde{f}_L$ has a larger (in absolute value) second derivative than $f_L + ru_x$ at $y_T$ (or, in the non-smooth case, limit of the second derivative as $y\rightarrow y_T$, which always exists for convex functions thanks to Alexandrov's theorem). Since the two curves pass through $T$ with the same tangent, this means that $\partial B_{\overline{\alpha}}(\mathbf{0})$ ventures inside $D_u$, so $B_{\overline{\alpha}}(\mathbf{0}) \not\supseteq D_u$, a contradiction. This completes the proof.
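The scaling identities used in \eqref{first_der} and in the second-derivative step can be checked symbolically, here with an arbitrary smooth test function in place of $f_L$ (assuming SymPy):

```python
import sympy as sp

# If f~(y) = a*f(y/a), then f~'(y) = f'(y/a) and f~''(y) = f''(y/a)/a,
# which are the identities used in the proof of Lemma 6.
y, a = sp.symbols('y alpha', positive=True)
f = sp.Lambda(y, y**3 + sp.sin(y))           # arbitrary smooth test function
fp = sp.Lambda(y, sp.diff(f(y), y))          # f'
fpp = sp.Lambda(y, sp.diff(f(y), y, 2))      # f''

ft = a * f(y / a)                            # rescaled curve
assert sp.simplify(sp.diff(ft, y) - fp(y / a)) == 0
assert sp.simplify(sp.diff(ft, y, 2) - fpp(y / a) / a) == 0
print("scaling identities verified")
```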

P.S. I believe the result holds also for non-strictly convex norms (such as the 1-norm or the infinity norm), but it is harder to prove since there can be more than two constrained maximisers.

  • Thank you for your answer. I will be looking forward to its completion. (Commented Mar 30 at 0:11)
  • @PietroMajer If I am not missing anything, for the infinity norm, $D_u$ is always a rectangle, and $M_u$ is composed of two whole opposite edges of $D_u$, because the infinity norm itself has to be maximised, not the Euclidean norm. More specifically, when $u$ makes an angle of $\pi/4 + k \pi/2$ with the $x$-axis, $M_u$ is the whole boundary of $D_u$. So you can choose a continuous selection by moving continuously on the ($u$-dependent) boundary of $D_u$. I believe this idea works for the 1-norm too, as well as for any non-strictly convex norm. (Commented Mar 31 at 21:11)
  • @PietroMajer Since $u = (1-|\epsilon|,\epsilon)$ does not belong to the unit sphere in the infinity norm, I assume you are considering the 1-norm. If this is the case, I don't agree with your characterisation of the set $M_u$. As shown in this picture, $M_u$ is composed of two whole edges for $\epsilon >0$, and the whole $\partial D_u$ for $\epsilon = 0$. You can easily operate a continuous selection. I believe you are still trying to maximise the Euclidean norm on $D_u$. (Commented Apr 1 at 13:26)
  • Sorry, that was a typo for $u:=(1,\epsilon)$, and I meant the infinity norm (thank you for the picture!), but I totally misunderstood the definition of $D_u$ (intersection of the balls, not just the spheres!). Now it's clear, thanks. (Commented Apr 1 at 18:12)
  • @MathMax : This conversation has been going on for a while now, hopefully with more and more points fixed or clarified. Could you please take a sufficient amount of time (say a few days) to re-check your proof very carefully, to make sure that all is correct and no details are missing? And then alert me? Thank you. (Commented Apr 4 at 18:45)
