
bamina bertin


The Polynomial Mirror: Can We Understand Neural Networks with Algebra?

Neural networks are powerful.
They make decisions in finance, medicine, language, and vision, yet we often can’t explain why they work. We trust them, but we don’t understand them.

That’s why I created a theoretical framework called The Polynomial Mirror.
It’s a way to look inside a trained neural network, rather than just observing it from the outside, and to rewrite its behavior as a composition of polynomials.

Imagine taking a fully trained neural network.
Instead of modifying it or retraining it, you leave everything as it is, but you replace each activation function with a polynomial approximation (such as a Chebyshev series). You also keep the affine transformations (matrix multiplies and biases), which are already polynomial.
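
To make that concrete, here is a minimal sketch of the fitting step (my own illustration, not code from the paper): sample an activation on the interval where inputs are assumed bounded, fit a Chebyshev series to it, and use that polynomial as the drop-in replacement.

```python
import numpy as np

# Fit a Chebyshev series to an activation on [-1, 1], then use the
# resulting polynomial wherever that activation appeared in the network.
def activation(x):
    return np.tanh(x)  # tanh chosen here purely for illustration

xs = np.linspace(-1.0, 1.0, 2001)
ys = activation(xs)

# Degree-9 fit; higher degrees trade formula size for accuracy.
cheb = np.polynomial.Chebyshev.fit(xs, ys, deg=9, domain=[-1, 1])

# The affine maps Wx + b stay untouched: they are already polynomial.
max_err = np.max(np.abs(cheb(xs) - ys))
print(f"max |tanh - fit| on [-1, 1]: {max_err:.2e}")
```

The only assumption is that the inputs to each activation stay inside the fitting interval, which is why the mirror works best when activations are bounded.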

The result?
You get a new version of the network — what I call a Polynomial Mirror — that mimics its behavior but in symbolic algebraic form.

It’s like holding up a mirror to the black box and seeing a shape you can understand and analyze mathematically.

Each neuron becomes a symbolic function of the input. You can inspect it, analyze it, and understand it layer by layer.
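
As a toy illustration of what "symbolic" means here (my own example, not one from the paper), take a single hidden neuron with two inputs and a quadratic stand-in for its activation:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

# Affine part of the neuron: already a polynomial in the inputs.
z = 0.7 * x1 - 1.2 * x2 + 0.3

# Quadratic polynomial standing in for the activation.
p = 0.5 + 0.25 * z + 0.1 * z**2

# Expanding gives the neuron as an explicit polynomial in x1 and x2.
print(sp.expand(p))
```

Every coefficient in the expanded expression traces back to specific weights, which is exactly the kind of inspection a black-box activation does not permit.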

Since each activation is a polynomial, you can fine-tune the coefficients to control each neuron’s behavior — something standard networks don’t allow.
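
One hypothetical way to expose those coefficients as trainable parameters is a small PyTorch module like the sketch below (the class name and degree are my own choices, not the paper's):

```python
import torch
import torch.nn as nn

class PolyActivation(nn.Module):
    """Polynomial activation with one learnable coefficient per power of x."""
    def __init__(self, init_coeffs):
        super().__init__()
        self.coeffs = nn.Parameter(torch.tensor(init_coeffs, dtype=torch.float32))

    def forward(self, x):
        # Evaluate c0 + c1*x + c2*x**2 + ... elementwise.
        powers = torch.stack([x**k for k in range(len(self.coeffs))], dim=-1)
        return powers @ self.coeffs

# Start from fitted coefficients, then fine-tune them like any other parameter.
act = PolyActivation([0.0, 0.5, 0.25])
print(act(torch.linspace(-1.0, 1.0, 5)))
```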

The mirror is built after training. You don’t have to change the model or architecture.

Here’s how we approximate a common activation function like ReLU using a polynomial:

ReLU(x) ≈ 0.0278 + 0.5x + 1.8025x² − 5.9964x⁴ + 12.2087x⁶ − 11.8118x⁸ + 4.2788x¹⁰

This function behaves almost identically to ReLU on the interval [−1,1], making it ideal for symbolic substitution in networks where activations are bounded.
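
A quick numerical check of that claim (my own verification sketch) is to evaluate the polynomial against ReLU on a dense grid over [−1, 1]:

```python
import numpy as np

# Coefficients quoted above, indexed by the power of x they multiply.
coeffs = {0: 0.0278, 1: 0.5, 2: 1.8025, 4: -5.9964,
          6: 12.2087, 8: -11.8118, 10: 4.2788}

def poly_relu(x):
    return sum(c * x**k for k, c in coeffs.items())

xs = np.linspace(-1.0, 1.0, 2001)
err = np.abs(poly_relu(xs) - np.maximum(xs, 0.0))
print(f"max deviation from ReLU on [-1, 1]: {err.max():.4f}")
```

Note that a fit like this is only meaningful on the interval it was built for; the polynomial grows quickly outside [−1, 1].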

The Polynomial Mirror raises a deep and exciting question:

Can neural networks truly be captured by polynomials, or is their power irreducible to classical algebra?

Whether the answer is yes or no, the exploration helps us understand what makes AI learn — and what makes it mysterious.

I’ve published the full research paper, including examples, theory, and open problems, here on Zenodo: https://zenodo.org/records/15673070.

I welcome feedback, questions, collaboration, or simply your thoughts.
Let’s open the black box — symbolically.
