We consider an estimation problem where the parameter $\theta$ is assigned a Gaussian prior with mean $0$ and variance $\sigma_0^2$, whose negative log-density we denote $g_{\sigma_0}$, and where the negative log-likelihood of the observations $x$ is $f(x \mid \theta)$, assumed to be convex in $\theta$.

The MAP estimator is given by: $$\theta_{\sigma_0}^* (x) = \arg\min_{\theta \in \mathbb{R}^n} f (x \mid \theta) + g_{\sigma_0}(\theta),$$ which exists and is unique since the objective is strongly convex ($g_{\sigma_0}$ is a positive-definite quadratic). If we assume that $f$ is known but $\sigma_0^2$ is not, then for any $\sigma^2$ we can consider the mismatched MAP estimator $$\theta_{\sigma}^* (x) = \arg\min_{\theta \in \mathbb{R}^n} f (x \mid \theta) + g_{\sigma}(\theta).$$
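For concreteness, here is a minimal numerical sketch of computing $\theta_\sigma^*(x)$ for a scalar $\theta$. The pseudo-Huber negative log-likelihood `f` below is a hypothetical stand-in for a smooth, convex, non-Gaussian likelihood; any convex `f` could be substituted:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical convex negative log-likelihood (pseudo-Huber), used here
# only as a smooth non-Gaussian example; substitute your own f.
def f(x, theta):
    return np.sum(np.sqrt(1.0 + (x - theta) ** 2) - 1.0)

def map_estimate(x, sigma):
    """theta*_sigma(x) = argmin_theta f(x | theta) + theta^2 / (2 sigma^2).

    The Gaussian negative log-prior is a positive-definite quadratic, so
    the objective is strongly convex and the minimizer is unique.
    """
    obj = lambda theta: f(x, theta[0]) + theta[0] ** 2 / (2.0 * sigma ** 2)
    return minimize(obj, x0=[0.0]).x[0]

x = np.array([0.3, -0.1, 0.8])
print(map_estimate(x, sigma=1.0))
```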

We denote the MSE of an estimator $\hat{\theta}(x)$ by $\mathrm{MSE} (\hat{\theta}(x)) = \mathbb{E}_{\theta} [ \mathbb{E}_{x \mid \theta} [ \| \hat{\theta}(x) - \theta \|^2 ]]$. Do we have in general that $$\mathrm{MSE} (\theta_{\sigma_0}^*(x)) \leq \mathrm{MSE}(\theta_{\sigma}^*(x))?$$ The result holds when the likelihood $f$ is Gaussian, since the MAP estimator then coincides with the MMSE estimator. What about the general case of convex $f$?
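As a sanity check of the Gaussian case, here is a minimal Monte Carlo sketch, assuming a scalar $\theta$ and a (hypothetical) known noise standard deviation $\sigma_x = 0.5$, where the MAP estimator has the closed form $\theta_\sigma^*(x) = \frac{\sigma^2}{\sigma^2 + \sigma_x^2}\, x$. The empirical MSE should be minimized at $\sigma = \sigma_0$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma0, sigma_x = 1.0, 0.5          # true prior std, known noise std
n_trials = 200_000

theta = rng.normal(0.0, sigma0, n_trials)       # theta ~ N(0, sigma0^2)
x = theta + rng.normal(0.0, sigma_x, n_trials)  # x | theta ~ N(theta, sigma_x^2)

def mse(sigma):
    # Closed-form Gaussian MAP: theta* = sigma^2 / (sigma^2 + sigma_x^2) * x
    shrink = sigma ** 2 / (sigma ** 2 + sigma_x ** 2)
    return np.mean((shrink * x - theta) ** 2)

for s in [0.5, 0.8, sigma0, 1.5, 2.0]:
    print(f"sigma = {s:.2f}  MSE = {mse(s):.4f}")
# The minimum appears at sigma = sigma0, consistent with MAP = MMSE
# in the Gaussian case; the open question is whether this extends to
# general convex f.
```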
