Neural networks might help to speed up computations in the monster group $\mathbb{M}$, the largest of the sporadic finite simple groups. Such a network would be, in some sense, a (rather large) cousin of the neural network for Rubik's cube mentioned by the OP.

Elements of $\mathbb{M}$ are usually represented as words of sparse matrices in $\mathrm{GL}_n(\mathbb{F}_k)$, with $196882 \leq n \leq 196884$ and $k = 2, 3$. There is an effective algorithm for checking whether two such words represent the same element, see [1]. Reducing a word to a shorter word may take several minutes on a computer; see e.g. [2] for an overview. I am currently working on accelerating this reduction. I plan to exploit geometric information contained in the images of certain vectors in $\mathbb{F}_k^n$; as far as I know, nobody has used this information before. (For experts: I focus on the images of vectors called 2A-axes.) Here a neural network might be better at learning how to use this information than I am; a toy sketch of this learning idea follows the list of benefits below.
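To give a concrete feel for these black-box computations, here is a minimal toy sketch in Python, far removed from the actual algorithm of [1]: equality of two words of sparse matrices over $\mathbb{F}_2$ is tested by comparing the images of random vectors (a Freivalds-style check) instead of multiplying the huge matrices out. All sizes, generators, and names below are illustrative choices of mine; the real test for $\mathbb{M}$ uses specially chosen vectors rather than random ones.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n = 1000  # toy stand-in for n ~ 196882

def perm_matrix(p):
    """Sparse permutation matrix over F_2 sending e_j to e_{p[j]}."""
    data = np.ones(n, dtype=np.int8)
    return sp.csr_matrix((data, (p, np.arange(n))), shape=(n, n))

# A few sparse generators: two random permutations, and an involution c
# (a single transposition), so that the relation c * c = 1 holds.
a = perm_matrix(rng.permutation(n))
b = perm_matrix(rng.permutation(n))
swap = np.arange(n); swap[[0, 1]] = [1, 0]
c = perm_matrix(swap)

def image(word, v):
    """Apply a word of matrices (leftmost letter acts last) to v over F_2."""
    for m in reversed(word):
        v = m.dot(v) % 2
    return v

def probably_equal(w1, w2, trials=20):
    """Freivalds-style test: if the two words disagree as matrices, a random
    F_2 vector detects this with probability >= 1/2, so `trials` independent
    vectors miss a difference with probability <= 2**(-trials)."""
    for _ in range(trials):
        v = rng.integers(0, 2, size=n, dtype=np.int8)
        if not np.array_equal(image(w1, v), image(w2, v)):
            return False
    return True

print(probably_equal([a, b], [a, b, c, c]))  # True: c has order 2
print(probably_equal([a, b], [b, a]))        # almost surely False
```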

The mathematical benefits of such a project are:

  1. Regarding computations, the monster group is the most difficult finite simple group to deal with. If we can compute in the monster, then we can compute in any finite group, provided we have enough information about the group in question and enough computer memory.

  2. We can probably finish the classification of the maximal subgroups of the monster group.
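
As a toy illustration of the learning idea above (with everything about $\mathbb{M}$ replaced by random permutations), the following sketch tracks the images of a few distinguished basis vectors, a crude stand-in for the 2A-axes, under a short word of generators, and trains a small network to predict which generator was applied last. A predictor of this kind is the basic building block of a Rubik's-cube-style greedy reduction: predict the last letter of a word and undo it. All parameters are arbitrary toy choices; nothing here is claimed to transfer to $\mathbb{M}$.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, n_gens, n_axes, max_len = 50, 4, 8, 4
gens = [rng.permutation(n) for _ in range(n_gens)]  # toy generators

def features(word):
    """Positions of the first n_axes basis vectors after applying the word
    (rightmost letter acts first), normalised to [0, 1)."""
    pos = np.arange(n_axes)
    for i in reversed(word):
        pos = gens[i][pos]
    return pos / n

# Training data: random short words; the label is the letter applied last
# (the leftmost one), i.e. the letter a greedy reduction step should undo.
words = [list(rng.integers(0, n_gens, size=int(rng.integers(1, max_len + 1))))
         for _ in range(5000)]
X = np.array([features(w) for w in words])
y = np.array([w[0] for w in words])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
# Should be well above the chance level of 1/n_gens = 0.25.
print("held-out accuracy:", clf.score(X_te, y_te))
```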

References

[1] R. A. Wilson. Computing in the Monster. In Groups, Combinatorics & Geometry (Durham, 2001), pages 327–337. World Scientific Publishing, 2003.

[2] R. A. Wilson. The Monster and black-box groups.
