
I know that the computational complexity of inverting a general $n \times n$ matrix $A$ is $O(n^{2.373})$ and that multiplying it by an $n \times m$ matrix costs $O(n^2m)$. Moreover, I've seen that special matrices, such as upper-triangular matrices, admit much faster algorithms for these basic operations.

In my case, I have an $n\times n$ symmetric matrix $A$ with two entries $\alpha$ and $\beta$ such that $A_{i,i}=\alpha$ for $i=1,\dots,n$ and $A_{i,j}=\beta$ for $i,j=1,\dots,n$ with $i\neq j$. That is, $$ A = \begin{pmatrix} \alpha &\beta &\dots& \beta \\ \beta & \alpha & \dots &\beta \\ \vdots & & \ddots & \\ \beta & \dots & & \alpha \end{pmatrix}. $$ My intuition is that, for this matrix, these operations should have near-linear complexity in $n$, since they should depend only on $n$ and on basic operations with the two constants $\alpha$ and $\beta$.
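For concreteness, here is a small NumPy sketch (my own illustration, not from the question) of how such a matrix can be built from just $(n,\alpha,\beta)$:

```python
import numpy as np

n, alpha, beta = 5, 4.0, 1.0
# Double-constant matrix: alpha on the diagonal, beta everywhere else
A = (alpha - beta) * np.eye(n) + beta * np.ones((n, n))
assert A[0, 0] == alpha and A[0, 1] == beta
```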

So my question is: For such a matrix, what are the most efficient known algorithms for computing matrix inversion and multiplication?


1 Answer


When your data have such strong structure, questions of "computational complexity" can depend drastically on how you encode the input and output, and also on which parameters you hold constant.

Inversion

The actual calculation (see below) can be done in a constant number of scalar operations, so can we simply say the time complexity is constant?

First consider the input method. Since we know the structure, you might give the input as the triple $(n,\alpha,\beta)$. Or, if you present the input matrix explicitly, for example in row-major form, it suffices to read the first two entries to find $\alpha$ and $\beta$. But you also need to know $n$. Was it a constant, given as a separate input, or do you need to read a full row of input to find out how long it is?

Then consider the actual calculation. Knowing $(\alpha, \beta, n)$, we only need a constant number of scalar arithmetic operations. We observe that $$A = (\alpha-\beta) I + \beta uu^T,$$ where $I$ is the $n \times n$ identity matrix and $u$ is the $n \times 1$ vector of ones. Then, by the Sherman–Morrison formula and some simplification, every diagonal entry of $A^{-1}$ is $$ c - c^2 \beta / (1+nc\beta) $$ and every off-diagonal entry is $$ - c^2 \beta / (1+nc\beta), $$ where we have written $c = 1/(\alpha-\beta)$ for brevity. (It might simplify further; I didn't check.)
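A quick numerical check of the closed-form inverse (a sketch of my own; the variable names are not from the answer), verifying that the two computed entries really give $A^{-1}$:

```python
import numpy as np

n, alpha, beta = 6, 3.0, 0.5
c = 1.0 / (alpha - beta)
d = c**2 * beta / (1 + n * c * beta)  # shared correction term from Sherman-Morrison

diag_val = c - d   # every diagonal entry of A^{-1}
off_val = -d       # every off-diagonal entry of A^{-1}

A = (alpha - beta) * np.eye(n) + beta * np.ones((n, n))
A_inv = diag_val * np.eye(n) + off_val * (np.ones((n, n)) - np.eye(n))

# The product should be the identity (up to floating-point error)
assert np.allclose(A @ A_inv, np.eye(n))
```

The formula requires $\alpha \neq \beta$ and $1 + nc\beta \neq 0$, i.e. $A$ must actually be invertible.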

Then consider the output method. We know from above that the output matrix has just two distinct values: one on-diagonal and one off-diagonal. We computed them in constant time (assuming constant-time scalar arithmetic). Can we output them in constant time? Yes, if we are only required to output the two values and outputting a scalar is constant-time. But if an explicit $n \times n$ output matrix is required, that alone lower-bounds the output time at $\Omega(n^2)$. Also, note that the size of the scalars depends on the values of $\alpha$, $\beta$ and $n$. Do you need to output them as exact rationals? As floating-point approximations?

Multiplication

You did not stipulate any special structure in the second ($n \times m$) matrix. Assuming constant-time scalar operations, $O(nm)$ is optimal: just outputting the $n \times m$ result takes that long, and the structure of $A$ lets us match this lower bound.

From the inversion section we recall $$A = (\alpha-\beta) I + \beta uu^T.$$ Multiplying by the first term is multiplying your second matrix by a scalar. Multiplying by the second term is taking column sums and multiplying by scalar. Each part, and adding them, is easily done in $O(nm)$ time.
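The two-term decomposition above can be sketched in NumPy as follows (my own illustration; `B` stands for the unstructured second matrix). The key point is that $\beta uu^T B$ has every row equal to $\beta$ times the column sums of $B$:

```python
import numpy as np

n, m = 5, 3
alpha, beta = 2.0, 0.7
rng = np.random.default_rng(0)
B = rng.standard_normal((n, m))

# A @ B without ever forming A:
# (alpha - beta) I @ B  -> scalar multiple of B
# beta u u^T @ B        -> beta * column sums of B, broadcast over rows
col_sums = B.sum(axis=0)                         # shape (m,), O(nm)
product = (alpha - beta) * B + beta * col_sums   # broadcasting, O(nm)

# Check against the explicit O(n^2 m) product
A = (alpha - beta) * np.eye(n) + beta * np.ones((n, n))
assert np.allclose(product, A @ B)
```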

Appendix

This kind of matrix (constant value on main diagonal, another constant everywhere else) has been called double-constant matrix, at least by Ben O'Neill in his 2021 arXiv manuscript (The Double-Constant Matrix, Centering Matrix and Equicorrelation Matrix: Theory and Applications). This may lead to further information. On page 9 there is a formula for the inverse matrix, similar to mine above.

Another name seems to be completely symmetric matrix; see e.g. Henderson (1981), "C85. On the inverse of a completely symmetric matrix", Journal of Statistical Computation and Simulation 12: 138–139.
