
The article "The promise of quantum computing supremacy is bunk" by Colin Earl cites

  1. Polynomial Time and Extravagant Models by Leonid A. Levin
  2. On Quantum Computing by Oded Goldreich
  3. Note (d) Quantum computers for Chapter 12.8: Undecidability and Intractability by Stephen Wolfram

as examples of doubts about QC expressed by mathematicians, computer scientists, and physicists.


Details for 1: L. Levin does not attack QC itself, only its proposed application to breaking RSA encryption:

The closed-minded cryptographers, however, were not convinced and this result brought a dismissal of the unit-cost model, not RSA.

Another, not dissimilar, attack is raging this very moment. ... Peter Shor ... factors integers in polynomial time using an imaginary analog device, Quantum Computer (QC), inspired by the laws of quantum physics taken to their formal extreme.

His main objection is that Shor's algorithm would require extremely accurate amplitudes:

All such problems, however, are peanuts. The major problem is the requirement that basic quantum equations hold to multi-hundredth if not millionth decimal positions where the significant digits of the relevant quantum amplitudes reside. We have never seen a physical law valid to over a dozen decimals.
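
Levin's decimal counts are easy to reproduce with a back-of-the-envelope sketch (my illustration, not taken from his paper): factoring an L-bit modulus with Shor's algorithm uses a period-finding register of roughly 2L qubits, and the uniform superposition over 2^(2L) values has amplitudes of magnitude 2^(-L).

```python
# Rough sketch (not from Levin's text) of where the significant digits of
# Shor-algorithm amplitudes live. Assumption: factoring an L-bit integer
# uses a period-finding register of about 2L qubits, prepared in a uniform
# superposition over 2**(2L) values, so each amplitude has magnitude 2**(-L).

import math

for L in (64, 512, 2048):                  # modulus sizes in bits
    qubits = 2 * L                         # size of the period-finding register
    # decimal position of the leading significant digit of 2**(-L)
    decimal_position = L * math.log10(2)
    print(f"L = {L:5d} bits: ~{qubits} qubits, "
          f"amplitudes ~ 1e-{decimal_position:.0f}")
```

For a 2048-bit RSA modulus the amplitudes are of order 10^-617, i.e. their significant digits indeed sit multiple hundreds of decimal positions deep.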


Details for 2: O. Goldreich attacks the assumption that arbitrary unitary transformations can be applied to the quantum state:

QM says that certain things are not impossible, but it does not say that every thing that is not impossible is indeed possible. For example, it says that you cannot make non-Unitary transformations, but this by itself does not mean that you can effect any Unitary transformation that you want.
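
One standard way to make this concrete is a dimension count (my sketch, not Goldreich's): the group U(2^n) of n-qubit unitaries has 4^n real parameters, while a circuit of g two-qubit gates has at most 16g, so generic unitaries require exponentially many gates no matter how the circuit is wired.

```python
# Parameter-counting sketch (my illustration, not Goldreich's) of why
# "any unitary you want" is too much to ask: U(2**n) has 4**n real
# parameters, while each two-qubit gate is an element of U(4) with only
# 16 real parameters, so a circuit of g such gates can reach at most a
# 16*g-dimensional family of unitaries.

for n in (2, 10, 50):
    dim_unitary_group = 4 ** n                    # real dimension of U(2**n)
    min_two_qubit_gates = dim_unitary_group // 16  # 16*g >= 4**n
    print(f"n = {n:2d} qubits: dim U(2^n) ~ {dim_unitary_group:.3g}, "
          f"generic unitaries need >= {min_two_qubit_gates:.3g} two-qubit gates")
```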

He also attacks the unit-cost assumption for the basic operations from which those arbitrary transformations are composed:

As far as I am concerned, the QC model consists of exponentially-long vectors and some uniform operations on such vectors. The key point is that the associated complexity measure postulates that each such operation can be effected at unit cost. My main concern is with this postulate. My own intuition is that the cost of such an operation or of maintaining such vectors should be linearly related to the amount of non-degeneracy of these vectors, where the non-degeneracy may vary from a constant to linear in the length of the vector.
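
A minimal simulation sketch (mine, assuming the usual state-vector picture) shows what those exponentially-long vectors look like: an n-qubit state holds 2^n complex amplitudes, and even a single one-qubit gate touches every one of them, yet the QC complexity measure charges this step at unit cost.

```python
# Minimal sketch (mine, not Goldreich's) of the "exponentially-long
# vectors": an n-qubit state is a vector of 2**n complex amplitudes, and
# even one single-qubit gate touches every entry when simulated explicitly.

import numpy as np

n = 12                                          # 2**12 = 4096 amplitudes
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # |00...0>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

# Apply H to the first qubit: reshape so that qubit sits on its own axis,
# then contract -- the work scales with the full length-2**n vector.
state = (H @ state.reshape(2, -1)).reshape(-1)

print(f"vector length touched by one gate: {state.size}")   # 4096
```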

However, in the end he concedes that he simply does not believe in such exponential speed-ups:

  • I believe that our universe is not a miraculous one, allowing exponential speed-ups over the natural model of computation.
  • ... in my opinion, those believing that a QC can manipulate or maintain huge objects free of cost should provide a convincing explanation to this fantastic speculation. Being skeptic of this speculation seems to be the default and natural position.

Details for 3: S. Wolfram stresses that only very few problems are known for which QC offers exponential speed-ups:

But in fact only a very few other examples were found—all ultimately based on very much the same ideas as factoring.
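
The shared idea is quantum period finding: Shor's factoring, the discrete-logarithm algorithm, and Simon's problem are all commonly viewed as instances of the hidden subgroup problem. The following classical sketch (a standard textbook reduction in my own words, not Wolfram's) shows that once the period is known, factoring is elementary; the quantum part only supplies the period.

```python
# Classical half of Shor's algorithm (standard textbook reduction, not
# Wolfram's text): factoring N reduces to finding the period r of
# f(x) = a**x mod N. Here the period is found by brute force, which is
# exactly the exponential step a QC is supposed to shortcut.

from math import gcd

def period(a: int, N: int) -> int:
    """Smallest r > 0 with a**r == 1 (mod N), by brute force."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N: int, a: int) -> int:
    """Return a nontrivial factor of N, given a lucky base a."""
    assert gcd(a, N) == 1
    r = period(a, N)                 # a quantum computer finds r quickly
    assert r % 2 == 0                # need an even period
    y = pow(a, r // 2, N)
    assert y != N - 1                # need a**(r/2) != -1 (mod N)
    return gcd(y - 1, N)             # shares a nontrivial factor with N

print(shor_classical(15, 7))         # period of 7 mod 15 is 4 -> factor 3
```

Here the base a = 7 happens to be lucky (even period, a^(r/2) not congruent to -1 mod N); the full algorithm retries with random bases until these conditions hold.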

He adds that even for those few problems where exponential speed-ups are expected, it remains unclear which idealizations have been used, and what might actually be needed to perform those quantum computations:

And even in the case of factoring there are questions about the idealizations used. It does appear that only modest precision is needed for the initial amplitudes. And it seems that perturbations from the environment can be overcome using versions of error-correcting codes. But it remains unclear just what might be needed actually to perform for example the final measurements required.