MAP estimator and MSE for Gaussian prior

We consider an estimation problem where the parameter $\theta \in \mathbb{R}^n$ is assigned a Gaussian prior with mean $0$ and variance $\sigma_0^2$, whose negative log-density is $g_{\sigma_0}$, and where the negative log-likelihood of the observations $x$ is $f(x|\theta)$, assumed to be convex in $\theta$.

The MAP estimator is given by $$\theta_{\sigma_0}^* (x) = \arg\min_{\theta \in \mathbb{R}^n} f (x | \theta) + g_{\sigma_0}(\theta),$$ which exists and is unique since the objective is strongly convex: $f$ is convex in $\theta$ and $g_{\sigma_0}$ is quadratic. If we assume that $f$ is known but $\sigma_0^2$ is not, then, for any $\sigma^2$, we can consider the mismatched MAP estimator $$\theta_{\sigma}^* (x) = \arg\min_{\theta \in \mathbb{R}^n} f (x | \theta) + g_{\sigma}(\theta).$$
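
For concreteness, $\theta_{\sigma}^*(x)$ can be computed numerically for any smooth convex $f$. Here is a minimal sketch (my own illustration, not part of the setup above), assuming a hypothetical Poisson-type negative log-likelihood $f(x|\theta) = \sum_i (e^{\theta_i} - x_i \theta_i)$, which is convex in $\theta$; the helper `map_estimate` is invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def map_estimate(x, sigma, neg_log_lik):
    """Minimize f(x | theta) + ||theta||^2 / (2 sigma^2) over theta."""
    obj = lambda theta: neg_log_lik(x, theta) + theta @ theta / (2.0 * sigma**2)
    return minimize(obj, x0=np.zeros_like(x), method="BFGS").x

# Hypothetical convex negative log-likelihood: Poisson counts with a log link.
poisson_nll = lambda x, theta: np.sum(np.exp(theta) - x * theta)

x = np.array([3.0, 0.0, 7.0])
print(map_estimate(x, sigma=1.0, neg_log_lik=poisson_nll))
```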

We denote the MSE of an estimator $\hat{\theta}(x)$ by $\mathrm{MSE} (\hat{\theta}(x)) = \mathbb{E}_{\theta} [ \mathbb{E}_{x|\theta} [ \| \hat{\theta}(x) - \theta \|^2 ]]$. Do we have in general that $$\mathrm{MSE} (\theta_{\sigma_0}^*(x)) \leq \mathrm{MSE}(\theta_{\sigma}^*(x))?$$ The result is true when the likelihood is Gaussian, since the MAP estimator at $\sigma = \sigma_0$ is then the posterior mean, i.e. the MMSE estimator: for instance, if $x|\theta \sim \mathcal{N}(\theta, \sigma_n^2 \mathrm{I})$, then $\theta_{\sigma}^*(x) = \frac{\sigma^2}{\sigma^2 + \sigma_n^2}\, x$, which for $\sigma = \sigma_0$ is exactly $\mathbb{E}[\theta \mid x]$. What about the general case of a convex $f$?
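
For what it's worth, the conjecture can be probed numerically in a simple non-Gaussian case. A minimal Monte Carlo sketch, under my own assumptions: scalar $\theta$, unit-scale Laplace noise so that $f(x|\theta) = |x - \theta|$, for which the first-order conditions give the MAP estimate in closed form as $\mathrm{clip}(x, -\sigma^2, \sigma^2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma0, n_trials = 1.0, 500_000

theta = rng.normal(0.0, sigma0, n_trials)    # theta ~ N(0, sigma0^2)
x = theta + rng.laplace(0.0, 1.0, n_trials)  # unit-scale Laplace noise

# For f(x|theta) = |x - theta| and an N(0, sigma^2) prior, the scalar MAP
# estimate is theta* = clip(x, -sigma^2, sigma^2).
for sigma in [0.5, 0.75, 1.0, 1.25, 1.5, 2.0]:
    theta_hat = np.clip(x, -sigma**2, sigma**2)
    mse = np.mean((theta_hat - theta) ** 2)
    print(f"sigma = {sigma:4.2f}  estimated MSE = {mse:.4f}")
```

Of course this only estimates the MSE curve for one particular $f$; it does not settle whether $\sigma = \sigma_0$ is the minimizer in general.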