Given a function $f : \mathbb{R}^n \to \mathbb{R}$, we call it algebraic if there is a nonzero polynomial $g(y, x_1, \dots, x_n)$ such that $g(f(x), x) = 0$ holds identically in $x$.
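For example, $f(x) = \sqrt{1 + x_1^2 + \cdots + x_n^2}$ is algebraic: taking $g(y, x) = y^2 - (1 + x_1^2 + \cdots + x_n^2)$ gives $g(f(x), x) = 0$ for all $x \in \mathbb{R}^n$.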
This is analogous to an algebraic number being a root of a univariate polynomial with rational coefficients. A useful property of a set of algebraic numbers $\{a_i\}_{i=1}^m$ is that there exists an algebraic number $b$ such that each of the $a_i$ can be expressed as a polynomial (with rational coefficients) in $b$. Such a number $b$ is called a primitive element.
This fact is a consequence of the primitive element theorem, which is typically stated in more abstract terms ("every finite separable field extension is simple, i.e. generated by a single element"). It is useful in computer algebra, for example to compute the rational lift of a polynomial with algebraic coefficients. Many computer algebra systems have algorithms that can compute a primitive element $b$ for a set of algebraic numbers, as well as the polynomial representation of each of the $a_i$ in terms of $b$.
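To make the number-field case concrete, here is a minimal sketch using SymPy's `primitive_element` (the function exists in SymPy, but the exact import path and the format of the `ex=True` output may vary between versions):

```python
from sympy import sqrt, primitive_element
from sympy.abc import x

# Find a primitive element b for Q(sqrt(2), sqrt(3)), i.e. a single
# algebraic number generating a field containing both generators.
# With ex=True, SymPy also returns the representation of each
# generator as a polynomial in b.
f, coeffs, reps = primitive_element([sqrt(2), sqrt(3)], x, ex=True)

print(f)       # minimal polynomial of b, e.g. x**4 - 10*x**2 + 1
print(coeffs)  # integer weights: b = coeffs[0]*sqrt(2) + coeffs[1]*sqrt(3)
print(reps)    # coefficient lists expressing sqrt(2) and sqrt(3) as
               # polynomials in b with rational coefficients
```

Here $b = \sqrt{2} + \sqrt{3}$ works, with $\sqrt{2} = (b^3 - 9b)/2$ and $\sqrt{3} = (11b - b^3)/2$. My question is whether an analogue of this computation exists for algebraic functions.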
Can we say something similar for algebraic functions? Are there algorithms that can find such a primitive element?
I'm interested in this for the purpose of solving optimization problems with algebraic objectives or constraints. If it were possible to compute such a primitive element, we could rewrite an algebraic optimization problem as a polynomial optimization problem, amenable to sum-of-squares programming (in the sense of, e.g., Parrilo), by introducing a single new variable and a single new constraint (the primitive element and its minimal polynomial). (Although I haven't thought this idea through fully.)
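To sketch the single-function case of this reduction (a toy illustration of the standard lift, not taken from any reference): minimizing the algebraic function $\sqrt{1 + x^2}$ can be rewritten as the polynomial problem

$$\min_{x,\, y} \; y \quad \text{subject to} \quad y^2 = 1 + x^2, \;\; y \ge 0,$$

where the sign constraint selects the correct branch. The hope is that a primitive element would let several algebraic functions share a single such lifted variable $y$.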
I would appreciate any pointers to relevant references!