We consider an estimation problem where the parameter $\theta$ is drawn from a prior with negative log-density $g_\alpha$, depending on a hyperparameter $\alpha$ (e.g. the variance of a Gaussian prior), and the observation $x$ has negative log-likelihood $f(x \mid \theta)$. The MAP estimator is then given by: $$\theta_\alpha^* (x) = \arg\min_{\theta \in \mathbb{R}^n} f (x \mid \theta) + g_\alpha(\theta).$$ We assume that $f$ and $g_\alpha$ are well behaved, e.g. such that the objective above is strongly convex in $\theta$ and admits a unique minimizer.
I am interested in known properties of the MSE of the MAP estimator, given by
$$\mathrm{MSE}_\alpha (\theta_\alpha^*(x)) = \mathbb{E}_{\theta_\alpha} [ \mathbb{E}_{x|\theta_\alpha} [ \| \theta_\alpha^*(x) - \theta_\alpha\|^2 ]],$$
where $\theta_\alpha$ follows the prior $g_\alpha$. In particular, if one uses the MAP estimator associated with a mismatched prior $g_\beta$ instead of the true $g_\alpha$, does it necessarily hold that $$\mathrm{MSE}_\alpha (\theta_\alpha^*(x)) \leq \mathrm{MSE}_\alpha (\theta_\beta^*(x)),$$ that is, does a mismatched prior always degrade performance in terms of MSE?
The result is true in the Gaussian case, since there the MAP estimator coincides with the MMSE (posterior mean) estimator. What about the general case?
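To make the Gaussian case concrete, here is a small Monte Carlo sketch under illustrative assumptions: $\theta \sim \mathcal{N}(0, \alpha)$, $x \mid \theta \sim \mathcal{N}(\theta, \sigma^2)$, so that with negative log-prior $g_\beta(t) = t^2/(2\beta)$ the MAP estimator is the linear shrinkage $\theta_\beta^*(x) = \frac{\beta}{\beta + \sigma^2}\, x$. The specific values of $\alpha$, $\beta$, $\sigma^2$ and the sample size are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, sigma2 = 1.0, 4.0, 1.0   # true prior variance, mismatched one, noise variance
n = 200_000                            # number of Monte Carlo draws

theta = rng.normal(0.0, np.sqrt(alpha), size=n)       # theta ~ true prior N(0, alpha)
x = theta + rng.normal(0.0, np.sqrt(sigma2), size=n)  # x | theta ~ N(theta, sigma2)

def map_estimate(x, prior_var, noise_var):
    """MAP estimator for the Gaussian prior/likelihood pair: pure shrinkage of x."""
    return prior_var / (prior_var + noise_var) * x

# Empirical MSE under the true prior, for the matched and mismatched estimators.
mse_matched = np.mean((map_estimate(x, alpha, sigma2) - theta) ** 2)
mse_mismatched = np.mean((map_estimate(x, beta, sigma2) - theta) ** 2)

print(mse_matched, mse_mismatched)
```

With shrinkage factor $c = \beta/(\beta+\sigma^2)$, the MSE under the true prior is $(1-c)^2\alpha + c^2\sigma^2$, which is minimized exactly at $c = \alpha/(\alpha+\sigma^2)$, i.e. at $\beta = \alpha$; the simulation reproduces this (matched MSE $\approx 0.5$ vs. mismatched $\approx 0.68$ for the values above).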