I have $N$ i.i.d. random vectors $\{X_k\}_{k=1}^N$ in $\mathbb{R}^n$ whose entries are bounded and positive. I construct the matrix \begin{align} M_N=\frac{1}{N}\sum_{k=1}^N X_kX_k^T, \end{align} i.e., the sample estimate of the (uncentered) covariance matrix. Define \begin{align} M=\mathbb{E}\big[X_kX_k^T\big], \end{align} so that $M=\mathbb{E}[M_N]$. The matrix $M$ is symmetric, and I know that it is positive definite.
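For concreteness, here is a minimal numerical sketch of the setup; the entrywise Uniform(0,1) distribution is only a hypothetical example of bounded, positive entries:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 200  # dimension n and sample size N (here N = 4n)

# Hypothetical bounded, positive entries: each X_k has i.i.d. Uniform(0, 1) coordinates
X = rng.uniform(0.0, 1.0, size=(N, n))  # rows are the vectors X_k

# Sample matrix M_N = (1/N) * sum_k X_k X_k^T
M_N = (X.T @ X) / N

# For this toy distribution, M = E[X_k X_k^T] is explicit:
# E[x_i x_j] = 1/4 off the diagonal and E[x_i^2] = 1/3 on the diagonal
M = np.full((n, n), 0.25) + (1.0 / 3.0 - 0.25) * np.eye(n)

print(np.linalg.norm(M_N - M, ord=2))  # operator-norm estimation error
```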
The problem: given a particular vector $v\in \mathbb{R}^n$, I want to show that there is a constant $C$ (independent of $N$ and $n$) such that, with high probability, \begin{align} v^T(M_N)^{-1}v\leq C\, v^TM^{-1}v; \end{align} the probability will of course depend on $N$ and $n$. I want to know whether this is possible under the assumption $N=\kappa n$, where $\kappa>1$ is a constant.
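As a quick sanity check (not a proof), the ratio $v^T(M_N)^{-1}v \,/\, v^TM^{-1}v$ can be simulated in the proportional regime $N=\kappa n$; the distribution and the choice of $v$ below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, n = 4, 50          # proportional regime N = kappa * n
N = kappa * n
v = np.ones(n)            # hypothetical stand-in for the particular vector v

# M for the toy Uniform(0,1) model above
M = np.full((n, n), 0.25) + (1.0 / 3.0 - 0.25) * np.eye(n)
denom = v @ np.linalg.solve(M, v)  # v^T M^{-1} v

ratios = []
for _ in range(200):
    X = rng.uniform(0.0, 1.0, size=(N, n))
    M_N = (X.T @ X) / N
    ratios.append(v @ np.linalg.solve(M_N, v) / denom)

# If the claimed bound holds, these quantiles should stay below some constant C
print(np.quantile(ratios, [0.5, 0.99]))
```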
I can proceed in this way: \begin{align} v^T(M_N)^{-1}v=v^T\big((M_N)^{-1}-M^{-1}\big)v+v^TM^{-1}v, \end{align} so if I can show that $v^T\big((M_N)^{-1}-M^{-1}\big)v\leq C\, v^TM^{-1}v$ with high probability, the result follows with constant $C+1$.
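One standard way to expose the fluctuation $M_N-M$ hidden behind the inverses is the resolvent identity (a sketch of the intermediate step): \begin{align} (M_N)^{-1}-M^{-1}=(M_N)^{-1}(M-M_N)M^{-1}, \end{align} so the error term equals $v^T(M_N)^{-1}(M-M_N)M^{-1}v$, which involves $M_N$ only through the difference $M_N-M$ and through $(M_N)^{-1}$.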
I also know that $v^TM^{-1}v\lesssim n$, if that is of any help. The difficulty is the inverses in these expressions. Maybe it is possible to apply some sort of concentration inequality.
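To make the concentration idea concrete, one hedged route (a sketch under the assumption that an operator-norm bound on the whitened error is available, e.g. from a matrix concentration inequality, which the boundedness of the entries supports) is \begin{align} \big\|M^{-1/2}M_NM^{-1/2}-I\big\|\leq \epsilon<1 \;\Longrightarrow\; (M_N)^{-1}\preceq \frac{1}{1-\epsilon}M^{-1} \;\Longrightarrow\; v^T(M_N)^{-1}v\leq \frac{1}{1-\epsilon}\,v^TM^{-1}v. \end{align} The remaining question would then be whether $\|M^{-1/2}M_NM^{-1/2}-I\|\leq\epsilon$ holds with high probability for some fixed $\epsilon<1$ when $N=\kappa n$, which is exactly where the proportional regime and the conditioning of $M$ enter.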