
Consider an $n \times n$ matrix $A$. I'm interested in algorithms that can verify whether the largest singular value of $A$, i.e., its spectral norm $\| A \|_2$, is less than or equal to $1$ or not. So the algorithm should return true if $\| A \|_2 \le 1$ and false otherwise. One way to do this is to simply compute the full SVD. This, however, has $O \left( n^3 \right)$ complexity. My questions:

  1. Are there algorithms to check whether $\| A \|_2 \le 1$ that work in $O \left( n^2 \right)$ steps?

  2. There are upper bounds on $\| A \|_2$ that are computable in $O \left( n^2 \right)$, e.g., the Frobenius norm: $\| A \|_2 \le \| A \|_F$. However, this bound can be loose by a factor of $\sqrt{n}$; e.g., for the identity matrix $A = I_n$ we have $\| A \|_F = \sqrt{n} \, \| A \|_2$. Are there functions $\alpha(A)$ that can be computed in $O \left( n^2 \right)$ and upper bound $\| A \|_2$ more tightly than the Frobenius norm does, i.e., $\| A \|_2 \le \alpha(A)$ with $\max\limits_{A} \frac{\alpha(A)}{\|A\|_2} < \sqrt{n}$?

  3. And a final question. Suppose you can transform $A$ by arbitrary non-degenerate diagonal matrices, $A'=\Lambda_1 A \Lambda_2$, whose diagonal entries all exceed one in absolute value: $|(\Lambda_1)_{ii}|>1$, $|(\Lambda_2)_{ii}|>1$. Is it possible to find a tight lower bound on $\|A'\|_2$ in the sense of question 2, if we are allowed to use any $\Lambda_1,\Lambda_2$ computable in $O(n^2)$ steps?
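As a quick numerical illustration of the bound in question 2 (a minimal sketch using NumPy; the identity matrix attains the worst-case $\sqrt{n}$ gap):

```python
import numpy as np

# The Frobenius norm is an O(n^2)-computable upper bound on the spectral norm:
#   ||A||_2 <= ||A||_F <= sqrt(n) * ||A||_2,
# and the identity matrix attains the sqrt(n) gap.
n = 16
A = np.eye(n)
spec = np.linalg.norm(A, 2)      # spectral norm: largest singular value = 1
frob = np.linalg.norm(A, 'fro')  # Frobenius norm = sqrt(n) = 4
print(spec, frob)                # ratio frob/spec equals sqrt(n)
```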

My motivation for these questions comes from physics, and I suspect all of them have negative answers, i.e., that no such algorithms or bounds can be obtained using only $O \left(n^2\right)$ computations. However, I wasn't able to find the relevant results in the literature.

Any related comments are highly appreciated, e.g. if there exist algorithms of complexity intermediate between $O(n^2)$ and $o(n^3)$, or perhaps if there are approximate or randomized algorithms that work in $O(n^2)$. Restricting the class of matrices $A$ may also be interesting.

  • For symmetric PSD matrices, the eigenvalues and the singular values are the same. Thus, for these matrices, the trace and the nuclear norm are the same. Commented May 26 at 12:10
  • @RodrigodeAzevedo Well, the trace (of $A^*A$) is more like the Frobenius norm (squared: the sum of the singular values squared), and it only gives a loose bound on the largest singular value. Commented May 26 at 12:25
  • Are you interested in theory or practice? Do you happen to have an upper bound on the spectral gap of $A^* A$ (or $A$ in the symmetric PSD case)? Commented May 26 at 12:41
  • @ I'm primarily interested in the answer, not in the proof. I have a class of algorithms that take a matrix $A$ as input and work (don't raise an error) if and only if the largest singular value of $A$ is less than or equal to one. I believe that no such algorithm can run in $O(n^2)$ time and work for arbitrary matrices (I'm not sure whether restricting to PSD helps). I'm mainly interested in knowing whether this is correct, and in appropriate references, if any. For background: the algorithms I consider should configure certain physical systems in a certain way, which I believe cannot be done in $O(n^2)$. Commented May 26 at 12:51
  • @RodrigodeAzevedo Yes, it is important in my context. My goal is not to bound the norms of any particular matrices, but to bound the complexity of the algorithms that work with them (please see my other comment). Because these algorithms are agnostic to the properties of the input matrices, it is crucial that the bounds apply generally. Commented May 26 at 13:34

1 Answer


I don't think there are algorithms to check deterministically whether $\|A\|_2 < 1$ in less than matrix multiplication cost.

There are norm estimators for the 2-norm, for instance normest in MATLAB, but as far as I know they all give guarantees on the lower bound (which is easier, because it suffices to exhibit a unit vector $v$ such that $\|Av\|$ is large) and no guarantees on the upper bound (which is harder, because one has to certify a "for all vectors" property).

There are some probabilistic/randomized estimators; see e.g.

Hochstenbach, Michiel E., Probabilistic upper bounds for the matrix two-norm, J. Sci. Comput. 57, No. 3, 464-476 (2013). ZBL1292.65037.

A very simple randomized estimator can be obtained by running a few steps of the power method on $A^*A$.
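For concreteness, here is a minimal NumPy sketch of such an estimator (my own illustration, not the method of the reference above). Each iteration costs two matrix-vector products, i.e. $O(n^2)$, and the returned value is always a valid lower bound on $\|A\|_2$ because it equals $\|Av\|$ for some unit vector $v$:

```python
import numpy as np

def spectral_norm_lower_bound(A, iters=30, rng=None):
    """Randomized lower bound on ||A||_2 via power iteration on A^* A.

    Each iteration costs O(n^2) (two matrix-vector products). The result
    is always a valid lower bound, since it equals ||A v|| for a unit v.
    """
    rng = np.random.default_rng(rng)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A.conj().T @ (A @ v)   # one power-iteration step on A^* A
        nw = np.linalg.norm(w)
        if nw == 0.0:              # v happened to lie in the null space
            return 0.0
        v = w / nw
    return np.linalg.norm(A @ v)   # ||A v|| <= ||A||_2 for unit v
```

With a random start vector the estimate converges geometrically at rate $(\sigma_2/\sigma_1)^2$, so it is sharp when the top singular value is well separated; but by its nature it can never certify the "for all vectors" upper bound $\|A\|_2 \le 1$.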

