
Normally, a 2-sided error randomized algorithm has some constant error $\varepsilon < 1/2$. We know that the error term can be replaced by any inverse polynomial, and that the inverse polynomial can in turn be replaced by an inverse exponential. Say we have an algorithm $A$ with $\varepsilon_A = 1/p(n)$ for some polynomial $p$ that runs in $T(n)$ steps; by repeating the algorithm $O(\log \frac{1}{\varepsilon})$ times and taking the majority answer, we obtain an algorithm $B$ with success probability close to 1, but with a logarithmic overhead.
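For concreteness, here is a minimal sketch of the repeat-and-take-majority amplification (a Python illustration; `amplify` and `noisy_oracle` are hypothetical names standing in for $A$, not part of the question):

```python
import random
from collections import Counter

def amplify(A, x, reps):
    """Run the 2-sided error algorithm A on input x `reps` times and
    return the majority answer. If each run errs independently with
    probability eps < 1/2, a Chernoff bound makes the majority's
    error probability exponentially small in reps."""
    votes = Counter(A(x) for _ in range(reps))
    return votes.most_common(1)[0][0]

# Hypothetical stand-in for A: answers correctly with probability 2/3.
def noisy_oracle(x):
    return x if random.random() < 2/3 else not x

print(amplify(noisy_oracle, True, reps=101))  # True with overwhelming probability
```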

My questions are:

(1) If the error already decreases polynomially fast, do we still, for practical purposes, need to repeat the algorithm several times? If we do, we incur a logarithmic factor in the running time (which is undesirable), whereas leaving the algorithm as it is still gives a success probability close to 1 for sufficiently large $n$.

(2) What about an error that decreases exponentially fast? Here it seems that we don't need to repeat the algorithm at all.

The same questions apply to 1-sided and 0-sided error algorithms.
