To show this limit is the growth rate on the infinite graph, consider the algorithm for generating $r(n)/r(n-1)$ for arbitrary $n$, based on the above-noted agreement between $r$ and $r_{k}$. Choose $k=n+1$. Then $r(n)=e_{1}^{T}\left( A_{k}\right)^{n}e$; that is, $r(n)$ is the first entry of $\left(A_{k}\right)^{n}e=u_{n}$, and similarly for $r(n-1)$ (using the same $k$). In the power method for finding the largest eigenvalue, one chooses an initial vector $x_{0}$ and iterates $x_{j+1}:=A_{k}x_{j}$; the iterates converge to the eigenvector corresponding to $\lambda_{\max}$. If we choose $x_{0}=e$ and omit the customary renormalization during the iterations, then $r(n-1)$ and $r(n)$ are the first entries of the $(n-1)$th and $n$th iterates $u_{n-1}$ and $u_{n}$. Since $u_{n}=A_{k}u_{n-1}\approx\lambda_{\max}u_{n-1}$, $r(n)/r(n-1)$ is an estimate of $\lambda_{\max}$ for $A_{n+1}$. The value of $r(500)/r(499)$ was given above; $\lambda_{\max}$ for $A_{501}$ is $2.482146221045796473951094505$ (a difference of $10^{-25}$).
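
For concreteness, here is a minimal Python sketch of this computation. It assumes a hypothetical helper `build_Ak(k)` (not shown here) that returns the truncated adjacency matrix $A_{k}$ as a nested list of nonnegative integers; constructing $A_{k}$ follows the earlier part of the answer.

```python
from fractions import Fraction

def growth_estimate(n, build_Ak):
    """Estimate r(n)/r(n-1) by the unnormalized power iteration described above.

    build_Ak(k) is a hypothetical helper assumed to return A_k as a list of k
    lists of nonnegative integers.
    """
    k = n + 1                                  # truncation size k = n + 1
    A = build_Ak(k)
    u = [1] * k                                # x_0 = e, the all-ones vector
    r_prev = None
    for _ in range(n):                         # u_j = A_k u_{j-1}, no renormalization
        r_prev = u[0]                          # first entry of u_{j-1}, i.e. r(j-1)
        u = [sum(A[i][j] * u[j] for j in range(k)) for i in range(k)]
    return Fraction(u[0], r_prev)              # r(n)/r(n-1), an estimate of lambda_max(A_{n+1})
```

Since the iteration is carried out in exact integer arithmetic, the returned ratio is exact; only its comparison with $\lambda_{\max}$ involves approximation.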

So the sequence of $r(n)/r(n-1)$ that gives the growth rate on the infinite graph is essentially the same as the sequence of (approximate) maximum eigenvalues of the $A_{k}$, and has the same limit.

  1. The argument needs work for a formal proof. The approximations to the maximum eigenvalues presumably become sufficiently accurate at higher $n$.

  2. The power method requires the maximum eigenvalue to be greater in magnitude than the others. In addition to irreducibility, the nonzero diagonal entries make the matrix primitive as well, which is sufficient for this. The initial vector $x_{0}$ must not be orthogonal to the left eigenvector for $\lambda_{\max}$; since both of these vectors are positive, this condition is also satisfied. (A small numerical check of primitivity is sketched after this list.)

  3. The success of the method relies on two key facts: the adjacency matrix is lower Hessenberg, and one cannot progress beyond vertex $k$ with a walk of length less than $k$ on $G$. Neither of these is true for the graph $H$ given by the OP.
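
Note 2's primitivity condition can be checked numerically: a nonnegative matrix is primitive exactly when some power of it is entrywise positive, and by Wielandt's bound the exponent $(n-1)^{2}+1$ suffices for an $n\times n$ matrix. A minimal sketch, again assuming the hypothetical `build_Ak` helper from the sketch above:

```python
import numpy as np

def is_primitive(A):
    """Test primitivity of a nonnegative square matrix via Wielandt's bound:
    A is primitive iff A**((n-1)**2 + 1) is entrywise positive.  Only the
    zero/nonzero pattern matters, so intermediate products are thresholded
    back to 0/1 to keep entries small."""
    n = A.shape[0]
    e = (n - 1) ** 2 + 1                    # Wielandt's exponent bound
    B = (A > 0).astype(np.int64)            # 0/1 pattern of A
    R = np.eye(n, dtype=np.int64)           # accumulator, pattern of A**0
    while e:                                # exponentiation by squaring on patterns
        if e & 1:
            R = (R @ B > 0).astype(np.int64)
        B = (B @ B > 0).astype(np.int64)
        e >>= 1
    return bool(R.min() > 0)

# Under note 2's assumptions (irreducibility plus nonzero diagonal), this should
# return True for the truncated matrices, e.g. is_primitive(np.array(build_Ak(10))).
```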

Edit: Here's an elaboration about note 3.

We want to know the maximum power to which we can raise $A_{k}$ and still have the first row agree with the number of walks on the infinite lattice. For $G$, one cannot progress beyond vertex $k$ with a walk of length less than $k$, and any walk of length less than $k$ from $1$ to $m$, $m<k$, is the same on $G_{k}$ as on the infinite graph; the number of these walks is correctly given by the first row of $(A_{k})^{k-1}$. This is related to the fact that the matrix is lower Hessenberg: the furthest forward one can step from vertex $i$ is to vertex $i+1$. The first row of $(A_{k})^{k}$ fails to count the walk $1\rightarrow 2\rightarrow\ldots\rightarrow k+1$ on $G$. Therefore, to calculate $r(n)/r(n-1)$ we need $A_{k}$ with $k>n$. If we choose $k=n+1$, then we evaluate $\left( A_{n+1}\right)^{n}$ and use the $n$th iterate to approximate the eigenvector in the power method. We could choose a larger $k$, but then $\left(A_{k}\right)^{n}$ would give a poorer estimate of the eigenvector.

For $H$, the largest vertex we can reach with a walk of length $n$ is $2^{n}$, via the walk $1\rightarrow 2\rightarrow 4\rightarrow 8\rightarrow\ldots\rightarrow 2^{n}$. Therefore, to calculate $r(n)/r(n-1)$ we need $A_{k}$ with $k>2^{n}$. If we choose $k=2^{n}+1$, then we evaluate $\left( A_{2^{n}+1}\right)^{n}$ and use $n$ iterations to approximate the eigenvector in the power method. However, the $n$th iterate is likely a poor estimate of the eigenvector for a matrix of size $2^{n}+1$, and this estimate certainly gets worse in the limit of large $n$.
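
This escape-length argument can also be checked numerically by comparing the walk counts $r_{k}(m)=e_{1}^{T}(A_{k})^{m}e$ against those from a much larger truncation, which stands in for the infinite graph. A minimal sketch, using the same hypothetical `build_Ak`:

```python
def first_undercount(k, K, build_Ak):
    """Smallest walk length m at which the truncated count r_k(m) = e_1^T (A_k)^m e
    falls below the count on a larger truncation A_K (standing in for the infinite
    graph, so K should be comfortably larger than k).

    For G the argument above predicts m = k exactly; for a graph like H the walk
    1 -> 2 -> 4 -> ... escapes the truncation after only about log2(k) steps.
    build_Ak is the same hypothetical constructor as in the earlier sketches."""
    A, B = build_Ak(k), build_Ak(K)
    ua, ub = [1] * k, [1] * K                  # start from e on both truncations
    for m in range(1, K):
        ua = [sum(A[i][j] * ua[j] for j in range(k)) for i in range(k)]   # (A_k)^m e
        ub = [sum(B[i][j] * ub[j] for j in range(K)) for i in range(K)]   # (A_K)^m e
        if ua[0] != ub[0]:                     # r_k(m) != r_K(m): truncation too small
            return m
    return None
```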
