*(OK, technically it was in the Eisenhower Executive Office Building, which is not exactly the White House itself, but is adjacent to the West Wing in the White House complex. And President Obama wasn’t there—maybe, like Justin Trudeau, he already knows everything about quantum computing? But lots of people from the Office of Science and Technology Policy were! And some of us talked with Valerie Jarrett, Obama’s adviser, when she passed us on her way to the West Wing.*

*The occasion was a Quantum Information Science policy workshop that OSTP held, and which the White House explicitly gave us permission to discuss on social media. Indeed, John Preskill already tweeted photos from the event. Besides me and Preskill, others in attendance included Umesh Vazirani, Seth Lloyd, Yaoyun Shi, Rob Schoelkopf, Krysta Svore, Hartmut Neven, Stephen Jordan…*

*I don’t know whether this is the first time that the polynomial hierarchy, or the notion of variation distance, were ever invoked in a speech at the White House. But in any case, I was proud to receive a box of Hershey Kisses bearing the presidential seal. I thought of not eating them, but then I got hungry, and realized that I could simply refill the box later if desired.*

*For regular readers of Shtetl-Optimized, my talk won’t have all that much that’s new, but in any case it’s short.*

*Incidentally, during the workshop, a guy from OSTP told me that, when he and others at the White House were asked to prepare materials about quantum computing, posts on Shtetl-Optimized (such as Shor I’ll Do It) were a huge help. Honored though I was to have “served my country,” I winced, thinking about all the puerile doofosities I might’ve self-censored had I had any idea who might read them. I didn’t dare ask whether anyone at the White House also reads the comment sections!*

*Thanks so much to all the other participants and to the organizers for a great workshop. –SA)*

**Quantum Supremacy**

by Scott Aaronson (UT Austin)

October 18, 2016

Thank you; it’s great to be here. There are lots of directions that excite me enormously right now in quantum computing theory, which is what I work on. For example, there’s the use of quantum computing to get new insight into classical computation, into condensed matter physics, and recently, even into the black hole information problem.

But since I have five minutes, I wanted to talk here about one particular direction—one that, like nothing else that I know of, bridges theory and experiment in the service of what we hope will be a spectacular result in the near future. This direction is what’s known as “Quantum Supremacy”—John [Preskill], did you help popularize that term? [John nods yes]—although some people have been backing away from the term recently, because of the campaign of one of the possible future occupants of this here complex.

But what quantum supremacy means to me is demonstrating a quantum speedup for *some* task as confidently as possible. Notice that I didn’t say a *useful* task! I like to say that for me, the #1 application of quantum computing—more than codebreaking, machine learning, or even quantum simulation—is just disproving the people who say quantum computing is impossible! So, quantum supremacy targets *that* application.

What *is* important for quantum supremacy is that we solve a clearly defined problem, with some relationship between inputs and outputs that’s independent of whatever hardware we’re using to solve the problem. That’s part of why it doesn’t cut it to point to some complicated, hard-to-simulate molecule and say “aha! quantum supremacy!”

One discovery, which I and others stumbled on 7 or 8 years ago, is that quantum supremacy seems to become much easier to demonstrate if we switch from problems with a single valid output to *sampling* problems: that is, problems of sampling exactly or approximately from some specified probability distribution.

Doing this has two advantages. First, we no longer need a full, fault-tolerant quantum computer—in fact, very rudimentary types of quantum hardware appear to suffice. Second, we can design sampling problems for which we can arguably be *more* confident that they really are hard for a classical computer, than we are that (say) factoring is classically hard. I like to say that a fast classical factoring algorithm might collapse the world’s electronic commerce, but as far as we know, it wouldn’t collapse the polynomial hierarchy! But with sampling problems, at least with exact sampling, we *can* often show the latter implication, which is about the best evidence you can possibly get for such a problem being hard in the present state of mathematics.

One example of these sampling tasks that we think are classically hard is BosonSampling, which Alex Arkhipov and I proposed in 2011. BosonSampling uses a bunch of identical photons that are sent through a network of beamsplitters, then measured to count the number of photons in each output mode. Over the past few years, this proposal has been experimentally demonstrated by quantum optics groups around the world, with the current record being a 6-photon demonstration by the O’Brien group in Bristol, UK. A second example is the IQP (“Instantaneous Quantum Polynomial-Time”) or Commuting Hamiltonians model of Bremner, Jozsa, and Shepherd.
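(An illustrative aside, not part of the talk: the source of BosonSampling’s hardness is that each photon-counting outcome’s probability is proportional to the squared permanent of a submatrix of the interferometer’s unitary, and the permanent is #P-hard in general. A toy sketch of Ryser’s formula—the best-known style of exact algorithm—makes the exponential classical cost concrete:)

```python
from itertools import combinations

def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula: O(2^n * n^2).
    BosonSampling outcome probabilities are proportional to squared
    permanents, and no known classical algorithm beats this exponential
    scaling in general -- that is the source of the hardness."""
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# Sanity check: perm([[a, b], [c, d]]) = a*d + b*c
print(permanent([[1, 2], [3, 4]]))  # -> 10.0
```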

A third example—no doubt the simplest—is just to sample from the output distribution of a random quantum circuit, let’s say on a 2D square lattice of qubits with nearest-neighbor interactions. Notably, this last task is one that the Martinis group at Google is working toward achieving right now, with 40-50 qubits. They say that they’ll achieve it in as little as one or two years, which, translated from experimental jargon, means maybe five years? But not infinity years.
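(Another aside for the curious: here is a toy statevector simulation of the random-circuit sampling task, on a 1D line rather than a 2D lattice, with a gate set of my own choosing—nothing here reflects the Martinis group’s actual circuits. The point is just the scaling: at 40-50 qubits the state vector below would need ~2^50 amplitudes, which is exactly why a successful experiment at that size would be hard to match classically.)

```python
import numpy as np

def sample_random_circuit(n_qubits=4, depth=5, shots=10, seed=0):
    """Toy simulation of random-circuit sampling: layers of random
    single-qubit rotations plus CZ gates between nearest neighbors on a
    line, then bitstring samples drawn from the output distribution."""
    rng = np.random.default_rng(seed)
    dim = 2 ** n_qubits
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0  # start in |00...0>

    def apply_1q(psi, u, q):
        # Contract the 2x2 unitary u into qubit q's axis of the tensor.
        t = psi.reshape([2] * n_qubits)
        t = np.moveaxis(np.tensordot(u, np.moveaxis(t, q, 0), axes=1), 0, q)
        return t.reshape(dim)

    for _ in range(depth):
        for q in range(n_qubits):
            theta, phi = rng.uniform(0, 2 * np.pi, size=2)
            u = np.array([[np.cos(theta), -np.exp(1j * phi) * np.sin(theta)],
                          [np.exp(-1j * phi) * np.sin(theta), np.cos(theta)]])
            state = apply_1q(state, u, q)
        for q in range(n_qubits - 1):
            # CZ on neighbors (q, q+1): phase of -1 when both qubits are 1.
            t = state.reshape([2] * n_qubits).copy()
            idx = [slice(None)] * n_qubits
            idx[q], idx[q + 1] = 1, 1
            t[tuple(idx)] *= -1
            state = t.reshape(dim)

    probs = np.abs(state) ** 2
    probs /= probs.sum()  # guard against float rounding
    return rng.choice(dim, size=shots, p=probs)

print(sample_random_circuit(n_qubits=3, depth=4, shots=5, seed=1))
```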

The challenges on the experimental side are clear: get enough qubits with long enough coherence times to achieve this. But there are also some huge theoretical challenges remaining.

A first theoretical challenge is: can we still solve classically hard sampling problems even in the presence of realistic experimental imperfections? Arkhipov and I already thought about that problem—in particular, about sampling from a distribution that’s merely *close* in variation distance to the BosonSampling one—and got results that admittedly weren’t as satisfactory as the results for exact sampling. But I’m delighted to say that, just within the last month or two, there have been some excellent new papers on the arXiv that tackle exactly this question, with both positive and negative results.

A second theoretical challenge is, how do we verify the results of a quantum supremacy experiment? Note that, as far as we know today, verification could itself require classical exponential time. But that’s not the showstopper that some people think, since we could target the “sweet spot” of 40-50 qubits, where classical verification is difficult (and in particular, clearly “costlier” than running the experiment itself), but also far from impossible with cluster computing resources.
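(One more aside: to make the verification question concrete, here is one simple toy statistic of my own devising for illustration—along the general lines of statistical benchmarks people have proposed, but not anything specified in the talk. It checks how often the device’s samples land on outcomes that the *ideal* distribution rates as heavier than the median; computing those ideal probabilities is precisely the step that can take classical exponential time.)

```python
import numpy as np

def heavy_output_score(samples, ideal_probs):
    """Fraction of samples landing on 'heavy' outcomes: those whose
    ideal probability exceeds the median ideal probability.  A device
    faithfully sampling the ideal distribution scores well above 1/2;
    a uniform (or badly decohered) sampler scores about 1/2."""
    ideal_probs = np.asarray(ideal_probs, dtype=float)
    median = np.median(ideal_probs)
    return float(np.mean(ideal_probs[np.asarray(samples)] > median))

rng = np.random.default_rng(42)
p = rng.exponential(size=256)           # toy stand-in for ideal output probabilities
p /= p.sum()
good = rng.choice(256, size=5000, p=p)  # "honest" sampler
unif = rng.integers(0, 256, size=5000)  # uniform impostor
print(heavy_output_score(good, p), heavy_output_score(unif, p))
```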

If I have any policy advice, it’s this: recognize that a clear demonstration of quantum supremacy is at least as big a deal as (say) the discovery of the Higgs boson. After this scientific milestone is achieved, I predict that the whole discussion of commercial applications of quantum computing will shift to a new plane, much like the Manhattan Project shifted to a new plane after Fermi built his pile under the Chicago stadium in 1942. In other words: at this point, the most “applied” thing to do might be to set applications aside temporarily, and just achieve this quantum supremacy milestone—i.e., build the quantum computing Fermi pile—and thereby show the world that quantum computing speedups are a reality. Thank you.