Consider the parameterized optimization problem: \begin{align} \boldsymbol{s}(p) = &\arg \min_{\boldsymbol{x}} \quad g(\boldsymbol{x})\\ \text{s.t. } & \boldsymbol{A}(p) \boldsymbol{x} = \boldsymbol{b}(p)\\ & \boldsymbol{x}_{\min} \preccurlyeq \boldsymbol{x} \preccurlyeq \boldsymbol{x}_{\max}, \end{align}
where $\boldsymbol{A}(p) \in \mathbb{R}^{m\times n}$, $\boldsymbol{x} \in \mathbb{R}^{n} $, $ \boldsymbol{b}(p) \in \mathbb{R}^{m}$, and $ \preccurlyeq $ denotes element-wise inequality.
Under the following assumptions:
A.1a. $ {g}(\boldsymbol{x}) $ is strictly convex and twice continuously differentiable (the Hessian appears in Step 2).
A.1b. $ {g}(\boldsymbol{x}) $ is well behaved $^*$
A.2. $ \boldsymbol{A}(p) $ has full row rank for all $p$.
A.3. The elements of $ \boldsymbol{A}(p) $ and $ \boldsymbol{b}(p) $ are $ \mathcal{C}^1 $ (continuously differentiable) in $p$.
A.4. $\boldsymbol{A}(p) \boldsymbol{x} = \boldsymbol{b}(p)$ is feasible for some $\boldsymbol{x}$ with $\boldsymbol{x}_{\min} \preccurlyeq \boldsymbol{x} \preccurlyeq \boldsymbol{x}_{\max}$.
I want to prove continuity of $ \boldsymbol{s}(p) $.
I think I have a working draft, but could really use some input on making it more precise and verifying it.
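To have something concrete to test against, here is a minimal numerical sanity check of the claim (not part of the proof), on a hypothetical instance chosen to satisfy (A.1)-(A.4): $g(\boldsymbol{x}) = \tfrac{1}{2}\boldsymbol{x}^{\top}\boldsymbol{Q}\boldsymbol{x}$ with $\boldsymbol{Q} \succ 0$, $\boldsymbol{A}(p) = [1,\ p]$, $\boldsymbol{b}(p) = p$, and box constraints $[-0.5, 0.5]^2$ (feasible for $0 \le p < 1$):

```python
import numpy as np
from scipy.optimize import minimize

Q = np.diag([1.0, 2.0])   # positive definite -> strictly convex, smooth g

def s(p):
    # s(p) = argmin 1/2 x^T Q x  s.t.  x_1 + p*x_2 = p,  -0.5 <= x <= 0.5
    cons = {"type": "eq", "fun": lambda x: np.array([x[0] + p * x[1] - p])}
    res = minimize(lambda x: 0.5 * x @ Q @ x, np.zeros(2), method="SLSQP",
                   bounds=[(-0.5, 0.5)] * 2, constraints=[cons])
    return res.x

ps = np.linspace(0.0, 0.95, 200)
S = np.array([s(p) for p in ps])
print(np.abs(np.diff(S, axis=0)).max())   # stays small: no jump in s(p)
```

The active set changes inside this sweep (the bound $x_1 \le 0.5$ becomes active at $p = 2-\sqrt{2}$), yet the solution path shows no jump, which is exactly the behavior I want to prove.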
Step 1 (KKT conditions and existence of an optimal solution)
The following two facts hold [1]:
1: The linearity constraint qualification is met (all constraints are affine), so the KKT conditions are necessary at any local optimum.
2: By convexity of the problem (A.1a), the KKT conditions are also sufficient, so any KKT point is a global optimum.
With the constraint qualification satisfied, if $ \boldsymbol{x}^* $ is a local optimum, then there exist KKT multipliers $ \boldsymbol{\lambda}\in \mathbb{R}^{m} $ and $ \boldsymbol{\mu} \in \mathbb{R}^{2n} $ such that \begin{equation}\label{KKTPOINKKT} \boldsymbol{R}( \boldsymbol{z},p) = {\left[\begin{matrix} \nabla_{\boldsymbol{x}} g( \boldsymbol{x}^*) + \boldsymbol{A}(p)^{\top} \boldsymbol{\lambda} + {\left[\begin{matrix} \boldsymbol{I} & -\boldsymbol{I} \end{matrix}\right]} \boldsymbol{\mu} \\ \boldsymbol{A}(p) \boldsymbol{x}^* - \boldsymbol{b}(p) \\ {\mu}_1 h_1( \boldsymbol{x}^*) \\ {\mu}_2 h_2( \boldsymbol{x}^*) \\ \vdots \\ {\mu}_{2n} h_{2n}( \boldsymbol{x}^*) \end{matrix}\right]} = \boldsymbol{0} \end{equation}
for \begin{align} h( \boldsymbol{x}^*) &\preccurlyeq 0,\\ \boldsymbol{\mu} &\succcurlyeq 0, \end{align} where \begin{align} h( \boldsymbol{x}) =& {\left[\begin{matrix} \boldsymbol{x} - \boldsymbol{x}_{\text{max}} \\ \boldsymbol{x}_{\text{min}} - \boldsymbol{x} \end{matrix}\right]},\\ \boldsymbol{z} =& {\left[\begin{matrix} \boldsymbol{x}^{\top} & \boldsymbol{\lambda}^{\top} & \boldsymbol{\mu}^{\top} \end{matrix}\right]^{\top}}. \end{align}
By (A.4) the feasible set is nonempty, and since it is also compact (closed, and bounded by the box constraints) and $g$ is continuous, a minimizer $ \boldsymbol{x}^* $ satisfying the above exists (Weierstrass). By strict convexity (A.1a) this minimizer is unique, so $ \boldsymbol{s}(p) = \boldsymbol{x}^* $ is well defined.
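As a quick check of Step 1 on the instance above: at a $p$ where no box constraint is active we have $\boldsymbol{\mu} = \boldsymbol{0}$, $\boldsymbol{\lambda}$ follows from stationarity, and the residual $\boldsymbol{R}(\boldsymbol{z},p)$ indeed vanishes:

```python
import numpy as np

Q = np.diag([1.0, 2.0])
p = 0.3
A = np.array([[1.0, p]]); b = np.array([p])

# Box constraints are inactive at this p, so mu = 0 and (x, lambda) solve
# the equality-constrained KKT system [Q A^T; A 0][x; lam] = [0; b].
K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(2), b]))
x, lam = sol[:2], sol[2:]
mu = np.zeros(4)

h = np.concatenate([x - 0.5, -0.5 - x])   # h(x) = [x - x_max; x_min - x]
H = np.vstack([np.eye(2), -np.eye(2)])    # constraint Jacobian of h
R = np.concatenate([Q @ x + A.T @ lam + H.T @ mu,   # stationarity
                    A @ x - b,                      # primal feasibility
                    mu * h])                        # complementarity
print(np.abs(R).max())                    # ~0 up to round-off
assert (h <= 0).all() and (mu >= 0).all() # primal/dual inequality conditions
```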
Step 2 (Show differentiability given that the active set does not change)
Let the active set $ {{\mathbb{A}}} $ be the set of constraints that are strictly active
(i.e. $h_i(\boldsymbol{x}^*) = 0$ and $\mu_i > 0$ $\implies i \in {{\mathbb{A}}}$).
Let $\boldsymbol{R}_{{\mathbb{A}}}( \boldsymbol{z},p)$ be $\boldsymbol{R}( \boldsymbol{z},p)$ restricted to the active constraints in ${{\mathbb{A}}}$ (the complementarity rows of inactive constraints, together with their multipliers $\mu_i = 0$, are removed).
For a fixed active set $ {{\mathbb{A}}} $, $ \boldsymbol{z} $ is implicitly defined by $ \boldsymbol{R}_{{\mathbb{A}}}( \boldsymbol{z},p) =0 $: \begin{equation}\label{SENSITIIVITY} \frac{\partial \boldsymbol{z}}{\partial p} = - \left( \frac{\partial \boldsymbol{R} _{\mathbb{A}} }{\partial \boldsymbol{z} } \right)^{-1} \frac{\partial \boldsymbol{R} _{\mathbb{A}}}{\partial p}. \end{equation}
$\boldsymbol{R}_{\mathbb{A}}( \boldsymbol{z},p)$ is continuously differentiable w.r.t. $ \boldsymbol{z} $, with Jacobian
\begin{equation}\label{key} \frac{\partial \boldsymbol{R}_{\mathbb{A}}( \boldsymbol{z},p)}{\partial \boldsymbol{z}} = {\left[\begin{matrix} \nabla^2_{\boldsymbol{x}} g( \boldsymbol{x}) & \boldsymbol{A}^{\top}(p) & \boldsymbol{H}_{\mathbb{A}}^{\top} \\ \boldsymbol{A}(p) & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{H}_{\mathbb{A}} & \boldsymbol{0} & \boldsymbol{0} \end{matrix}\right]}, \end{equation}
where $\boldsymbol{H}_{\mathbb{A}}$ denotes the rows of $\nabla_{\boldsymbol{x}} h = {\left[\begin{matrix} \boldsymbol{I} \\ -\boldsymbol{I} \end{matrix}\right]}$ indexed by $\mathbb{A}$ (for $i \in \mathbb{A}$ the complementarity row $\mu_i h_i = 0$ is equivalent to $h_i(\boldsymbol{x}) = 0$, since $\mu_i > 0$). This is a standard KKT matrix; it is invertible provided $\nabla^2_{\boldsymbol{x}} g( \boldsymbol{x})$ is positive definite and the stacked constraint Jacobian ${\left[\begin{matrix} \boldsymbol{A}(p) \\ \boldsymbol{H}_{\mathbb{A}} \end{matrix}\right]}$ has full row rank. (A.2) gives full row rank of $\boldsymbol{A}(p)$ alone, so the active-bound rows $\boldsymbol{H}_{\mathbb{A}}$ must additionally be linearly independent of the rows of $\boldsymbol{A}(p)$ (a LICQ-type condition on the active set), as illustrated below.
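To make this rank caveat concrete, here is a (deliberately degenerate, hypothetical) case where $\boldsymbol{A}$ has full row rank but the KKT matrix is still singular, because an active bound duplicates a row direction of $\boldsymbol{A}$:

```python
# Illustration of the invertibility caveat (hypothetical numbers).
import numpy as np

Q = np.diag([1.0, 2.0])
A = np.array([[1.0, 0.0]])      # full row rank on its own (A.2 holds)
H_A = np.array([[1.0, 0.0]])    # x_1 at a bound: same row direction as A
K = np.block([[Q, A.T, H_A.T],
              [A, np.zeros((1, 2))],
              [H_A, np.zeros((1, 2))]])
print(np.linalg.matrix_rank(K), K.shape[0])   # 3 < 4: singular KKT matrix
```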
The derivative with respect to $p$,
\begin{equation}\label{END} \frac{\partial \boldsymbol{R}_{\mathbb{A}}( \boldsymbol{z},p)}{\partial p} = {\left[\begin{matrix} \left( \frac{\partial \boldsymbol{A}(p)}{\partial p} \right)^{\top} \boldsymbol{\lambda} \\ \frac{\partial \boldsymbol{A}(p)}{\partial p} \boldsymbol{x} - \frac{\partial \boldsymbol{b}(p)}{\partial p} \\ \boldsymbol{0} \end{matrix}\right]}, \end{equation}
exists and is continuous by (A.3). (Note that the stationarity block also depends on $p$ through $\boldsymbol{A}(p)^{\top} \boldsymbol{\lambda}$, so its entry is not zero in general.)
Therefore, by the implicit function theorem, given a fixed active set $ {{\mathbb{A}}} $, for every pair $(\bar{ \boldsymbol{z}}, \bar{p})$ satisfying $ \boldsymbol{R}_{\mathbb{A}}( \bar{ \boldsymbol{z}}, \bar{p}) = \boldsymbol{0} $ there exists, in a neighborhood of $ \bar{p} $, a unique continuously differentiable function $ \vartheta(p) $ with $ \vartheta(\bar{p}) = \bar{ \boldsymbol{z}} $ and $ \boldsymbol{R}_{\mathbb{A}}(\vartheta(p), p) = \boldsymbol{0} $. Therefore, as long as the active set does not change, setting $ \boldsymbol{z}(p) = \vartheta(p) $ shows that $ \boldsymbol{z} $ (and hence $ \boldsymbol{s}(p) $) is continuously differentiable w.r.t. $ p $, with sensitivity given by the formula above.
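Numerically, the sensitivity from the formula above matches a finite difference of the KKT point as long as the active set stays fixed. On the same hypothetical instance, at $p = 0.8$ the active set is $\{x_1 \le 0.5\}$ (so $\boldsymbol{H}_{\mathbb{A}} = [1,\ 0]$):

```python
import numpy as np

Q = np.diag([1.0, 2.0])

def kkt_solve(p):
    # Fixed active set {x_1 <= 0.5}: R_A(z,p) = 0 is linear in z = (x, lam, mu)
    A = np.array([[1.0, p]])
    H_A = np.array([[1.0, 0.0]])
    K = np.block([[Q, A.T, H_A.T],
                  [A, np.zeros((1, 2))],
                  [H_A, np.zeros((1, 2))]])
    rhs = np.array([0.0, 0.0, p, 0.5])
    return K, np.linalg.solve(K, rhs)

p = 0.8
K, z = kkt_solve(p)
x, lam = z[:2], z[2]

# dR_A/dp at fixed z: (dA/dp)^T lam in the stationarity block,
# (dA/dp) x - db/dp in the feasibility block, 0 in the active-bound row.
dA = np.array([[0.0, 1.0]])                # since A(p) = [1, p]; db/dp = 1
dR_dp = np.concatenate([dA.T @ [lam], dA @ x - 1.0, [0.0]])
dz_dp = -np.linalg.solve(K, dR_dp)

eps = 1e-6                                 # central finite difference in p
fd = (kkt_solve(p + eps)[1] - kkt_solve(p - eps)[1]) / (2 * eps)
print(np.abs(dz_dp - fd).max())            # ~0: the two derivatives agree
```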
Step 3 (From piecewise $ \mathcal{C}^1 $ to $ \mathcal{C}^0 $ continuity)
It is now (presumably) established that for a fixed active set we have $ \mathcal{C}^1 $ continuity. I now want to establish $ \mathcal{C}^0 $ continuity of $ \boldsymbol{s}(p) $ for the whole problem, including points where the active set changes.
$ \frac{\partial \boldsymbol{z}}{\partial p} $ does not exist at points where the active set changes. However, the left- and right-hand derivatives do exist, each given by the sensitivity formula for the corresponding fixed active set. Thus $ \boldsymbol{z} $ (and $ \boldsymbol{s}(p) $) is piecewise continuously differentiable. Combined with uniqueness of the optimum at every $p$ (Step 1), which forces the one-sided limits to agree at a switch point, it follows that $ \boldsymbol{s}(p) $ is continuous.
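On the test instance this is visible directly: the active set switches at $p = 2 - \sqrt{2}$ (where $x_1$ reaches its bound), and evaluating both fixed-active-set systems at that point gives the same $\boldsymbol{s}(p)$ from the left and from the right, while the one-sided derivatives differ (a kink, not a jump):

```python
import numpy as np

Q = np.diag([1.0, 2.0])
p = 2.0 - np.sqrt(2.0)           # switch point: x_1 reaches its bound 0.5
A = np.array([[1.0, p]])
dA = np.array([[0.0, 1.0]])      # dA/dp for A(p) = [1, p]; db/dp = 1

# Left regime: no box constraint active, z = (x, lam)
K_L = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
z_L = np.linalg.solve(K_L, np.array([0.0, 0.0, p]))
x_L, lam_L = z_L[:2], z_L[2]
dz_L = -np.linalg.solve(K_L, np.concatenate([dA.T @ [lam_L], dA @ x_L - 1.0]))

# Right regime: {x_1 <= 0.5} active, z = (x, lam, mu), H_A = [1, 0]
H = np.array([[1.0, 0.0]])
K_R = np.block([[Q, A.T, H.T],
                [A, np.zeros((1, 2))],
                [H, np.zeros((1, 2))]])
z_R = np.linalg.solve(K_R, np.array([0.0, 0.0, p, 0.5]))
x_R, lam_R = z_R[:2], z_R[2]
dz_R = -np.linalg.solve(K_R,
                        np.concatenate([dA.T @ [lam_R], dA @ x_R - 1.0, [0.0]]))

print(np.abs(x_L - x_R).max())   # ~0: s(p) agrees from both sides
print(dz_L[:2], dz_R[:2])        # one-sided dx/dp differ: kink, not jump
```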
Some specific questions:
-Do the arguments work out / is anything missing? I am especially uncertain about the "active set" argument (it appears to make sense to me).
-Is strict convexity of $ {g}(\boldsymbol{x}) $ sufficient, or do we need strong convexity?
-I asserted (A.1b) to ensure that $ {g}(\boldsymbol{x}) $ attains its infimum rather than only approaching it in the limit. Is this needed, or does the same follow from (A.1a)?
PS:
This is a continuation of a question I asked previously (for which I got a good answer). I now want to expand it and be more precise.