
In applications one often encounters very large matrices that barely fit in computer memory, if at all. Naturally one wishes to represent those matrices as compactly as possible. Sometimes one even sacrifices accuracy for compactness, as in L-BFGS for example.

On the other hand, often enough a linear problem naturally arrives with a matrix that already admits a relatively compact presentation: a band matrix, a Toeplitz matrix, etc.
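A Toeplitz matrix illustrates the point: it is determined by its first column and first row ($2n-1$ numbers instead of $n^2$), and even matrix–vector products run in $O(n \log n)$ via circulant embedding and the FFT. A minimal sketch with numpy/scipy (the sizes and the helper name are my own choices for illustration):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n = 512
c = rng.standard_normal(n)                                  # first column
r = np.concatenate(([c[0]], rng.standard_normal(n - 1)))    # first row

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix (c, r) by x in O(n log n) time,
    storing only the 2n-1 defining entries."""
    n = len(c)
    v = np.concatenate((c, r[:0:-1]))        # first column of the (2n-1)-circulant embedding
    xp = np.concatenate((x, np.zeros(n - 1)))
    y = np.fft.ifft(np.fft.fft(v) * np.fft.fft(xp))
    return y[:n].real

x = rng.standard_normal(n)
dense = toeplitz(c, r) @ x                   # O(n^2) reference computation
assert np.allclose(toeplitz_matvec(c, r, x), dense)
```

So the compact presentation here costs nothing in accuracy, only in the choice of algorithm.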

Let's call an $n\times n$ matrix presentation "compact" if it requires no more than $O(n \log^{\mathrm{const}} n)$ entries (the "quasilinear" rule of thumb for tractability of large problems). Is there a way to tell whether a family of matrices is presentable in a "compact" form? And if it is, what would be a computationally feasible algorithm for computing that presentation?

Or, to put it differently, has something like Kolmogorov complexity been studied for matrices? Here by "Kolmogorov complexity" one may understand a map $M_{n\times n}\to \mathbb{N}$ mapping a matrix to the length of its shortest presentation.

This is different from sparsity of matrices, as is evident from Toeplitz matrices, which are compact yet dense.
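Low-rank matrices are another compact-yet-dense family: a rank-$k$ matrix is stored in $O(kn)$ numbers via a truncated SVD. A sketch, assuming $k \ll n$ (the sizes are mine, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 5
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))  # dense but rank k

# Truncated SVD: keep k singular triplets -> 2nk + k numbers instead of n^2.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k]

assert np.allclose(Uk * sk @ Vtk, A)   # exact here, since rank(A) = k
assert Uk.size + sk.size + Vtk.size < A.size
```

When the matrix is only approximately low-rank, truncating the SVD trades accuracy for compactness, in the same spirit as L-BFGS above.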

  • Comment (Nov 19, 2022): How about low-rank approximations?

1 Answer


Kolmogorov's complexity for positive definite matrices

Based on Kolmogorov's idea, a complexity of positive definite matrices with respect to a unit vector can be defined. The paper of the above title shows that the range of this complexity coincides with the logarithm of the matrix's spectrum, and that the order induced by the complexity is equivalent to the spectral order; this order in turn implies the reverse of the order induced by the operator entropy.

