If you are following the advances in the field of quantum computing, and even if you aren’t, chances are you have come across the news about Google reaching the long-awaited quantum supremacy. But what does this really mean for the field of quantum computing?
As with everything in life, there are conflicting opinions about the achievement, and the hype around quantum doesn’t help the objectivity of the analyses. After the paper’s release, the first tweet on the topic from the “Quantum Bullshit Detector” (a Twitter account I really enjoy reading, as it “classifies” as bullshit or not bullshit every article, paper, PR, and opinion that is published or shared related to quantum) was:
I know, I know. The Quantum Bullshit Detector may not be the best source for forming a reasoned opinion on the relevance of the discovery. So, in order to build my own opinion about the matter and understand whether quantum supremacy was actually reached, I decided to go to one of my favorite eminences in the field of theoretical quantum computing, Scott Aaronson.
Scott dedicated an AMAZING blog post to analyzing Google’s achievement, and IBM’s response to Google’s claim. If you love this field as much as I do, I really encourage you to read the blog post, along with Scott’s Supreme Quantum Supremacy FAQ!, to get a deep understanding of what Google’s paper means.
The classically intractable algorithm
So what’s the classically intractable problem that Google claimed to have solved faster using their quantum processor? The simulation of random quantum circuits. But is this problem useful for our daily lives, or something we will be running once we have the first commercially-available quantum computers? Probably not. My opinion is that this problem was “artificially designed” to show that quantum supremacy is possible. Nonetheless, it opens the door to the resolution of other hard problems, and it was a great excuse to build a really advanced quantum processor.
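To make the task concrete, here is a minimal brute-force state-vector simulator for random circuits, written as an illustrative sketch (the gate set and layout are simplified assumptions of mine; Google’s actual experiment uses a specific two-dimensional layout and gate set). The point is that the simulator must track all 2^n complex amplitudes, which is exactly what makes the problem classically intractable as n grows:

```python
# Sketch of brute-force random-circuit simulation: layers of random
# single-qubit rotations plus CZ gates applied to |0...0>.
# The full 2^n state vector is stored in memory, so cost doubles per qubit.
import numpy as np

rng = np.random.default_rng(0)

def apply_1q(state, u, q, n):
    """Apply a 2x2 unitary u to qubit q of an n-qubit state vector."""
    state = state.reshape([2] * n)
    state = np.tensordot(u, state, axes=([1], [q]))
    state = np.moveaxis(state, 0, q)
    return state.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    state = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    state[tuple(idx)] *= -1
    return state.reshape(-1)

def random_circuit_state(n_qubits, depth):
    """Run `depth` layers of random rotations + CZ gates on |0...0>."""
    state = np.zeros(2 ** n_qubits, dtype=np.complex64)
    state[0] = 1.0
    for _ in range(depth):
        for q in range(n_qubits):
            theta = rng.uniform(0, 2 * np.pi)
            u = np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]],
                         dtype=np.complex64)
            state = apply_1q(state, u, q, n_qubits)
        for q in range(0, n_qubits - 1, 2):
            state = apply_cz(state, q, q + 1, n_qubits)
    return state

psi = random_circuit_state(10, 8)
probs = np.abs(psi) ** 2  # sampling bitstrings from this is the task
```

Ten qubits is trivial on a laptop; at 53 qubits the same array would need tens of petabytes, which is precisely what the IBM-vs-Google dispute below is about.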
To put things in perspective, let’s see what my friend (I wish) Scott thinks about this matter:
Q9. Does sampling-based quantum supremacy have any applications in itself?
When people were first thinking about this subject, it seemed pretty obvious that the answer was “no”! (I know because I was one of the people.) Recently, however, the situation has changed—for example, because of my certified randomness protocol, which shows how a sampling-based quantum supremacy experiment could almost immediately be repurposed to generate bits that can be proven to be random to a skeptical third party (under computational assumptions). This, in turn, has possible applications to proof-of-stake cryptocurrencies and other cryptographic protocols. I’m hopeful that more such applications will be discovered in the near future.
See? These artificially designed problems, such as Scott’s certified random numbers, could still open the door to real applications. What the hell! I may be using Scott’s certified quantum random numbers for my next blockchain application.
IBM’s response
So what do Google’s competitors in the quantum wars think about this achievement? IBM recently published a paper responding to Google’s quantum supremacy claim:
They [IBM] argue that, by commandeering the full attention of Summit at Oak Ridge National Lab, the most powerful supercomputer that currently exists on Earth—one that fills the area of two basketball courts, and that (crucially) has 250 petabytes of hard disk space—one could just barely store the entire quantum state vector of Google’s 53-qubit Sycamore chip in hard disk. And once one had done that, one could simulate the chip in ~2.5 days, more-or-less just by updating the entire state vector by brute force, rather than the 10,000 years that Google had estimated on the basis of my and Lijie Chen’s “Schrödinger-Feynman algorithm” (which can get by with less memory).
So what IBM is saying is that the classical simulation of Google’s quantum random circuits could be done with the current most powerful supercomputer in about 2.5 days, not 10,000 years. But wait a minute: Google’s 53-qubit Sycamore chip solves the problem in a few minutes, so there are still a few orders of magnitude of improvement from using Google’s quantum processor over the most powerful supercomputer ever built. Moreover, we are talking about a supercomputer the size of a soccer field against a really small processor. Call it quantum supremacy or not, but I think the improvement is evident. So IBM is not actually responding to Google’s achievement of quantum supremacy, but to the assumption they used to show the classical hardness of the problem.
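A quick back-of-the-envelope check of those “few orders of magnitude”: taking the roughly 200-second sampling time reported in Google’s paper (the “few minutes” above) against IBM’s 2.5-day estimate:

```python
# Rough speedup estimate: IBM's 2.5-day classical estimate vs. the
# ~200-second sampling time reported by Google (both figures approximate).
sycamore_seconds = 200             # ~3.3 minutes on the Sycamore chip
summit_seconds = 2.5 * 24 * 3600   # IBM's 2.5-day Summit estimate

speedup = summit_seconds / sycamore_seconds
print(f"~{speedup:.0f}x faster")   # roughly three orders of magnitude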
Greg Kuperberg puts this matter elegantly in a quote shared by Scott on his blog:
I’m not entirely sure how embarrassed Google should feel that they overlooked this. I’m sure that they would have been happier to anticipate it, and happier still if they had put more qubits on their chip to defeat it. However, it doesn’t change their real achievement.
I respect the IBM paper, even if the press along with it seems more grouchy than necessary. I tend to believe them that the Google team did not explore all avenues when they said that their 53 qubits aren’t classically simulable. But if this is the best rebuttal, then you should still consider how much Google and IBM still agree on this as a proof-of-concept of QC. This is still quantum David vs classical Goliath, in the extreme. 53 qubits is in some ways still just 53 bits, only enhanced with quantum randomness. To answer those 53 qubits, IBM would still need entire days of computer time with the world’s fastest supercomputer, a 200-petaflop machine with hundreds of thousands of processing cores and trillions of high-speed transistors. If we can confirm that the Google chip actually meets spec, but we need this much computer power to do it, then to me that’s about as convincing as a larger quantum supremacy demonstration that humanity can no longer confirm at all.
Honestly, I’m happy to give both Google and IBM credit for helping the field of QC, even if it is the result of a strange dispute.
The fight is not over!
We are witnessing an impressive intellectual battle between outstanding research teams, companies, and individuals (that I wish I was smart enough to take part in) to achieve quantum supremacy, and to achieve practical results in the field of quantum computing after more than 20 years of active research.
My bet is that in the next few months we will see more of these battles in the media, with, most probably, IBM and others aiming to find better classical algorithms to show that some of the supposedly “quantumly” superior problems are solvable classically. But I am not the only one thinking this:
Designing better classical simulations is precisely how IBM and others should respond to Google’s announcement, and how I said a month ago that I hoped they would respond. If we set aside the pass-the-popcorn PR war (or even if we don’t), this is how science progresses.
[…]
For by definition, quantum supremacy is all about beating something—namely, classical computation—and the latter can, at least for a while, fight back.
In conclusion, Google may have used a really artificial problem to show its quantum supremacy, and some of their assumptions may not have been as right as they thought. Either way, this paper shows an advancement, from an engineering and practical point of view, for the field of quantum computing, and we should celebrate it. Moreover, these bold claims may serve to tease others into fighting back in the battle for quantum supremacy.
In the present case, while increasing the circuit depth won’t evade IBM’s “store everything to hard disk” strategy, increasing the number of qubits will. If Google, or someone else, upgraded from 53 to 55 qubits, that would apparently already be enough to exceed Summit’s 250-petabyte storage capacity. At 60 qubits, you’d need 33 Summits. At 70 qubits, enough Summits to fill a city … you get the idea.
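The arithmetic behind those numbers is worth making explicit. Assuming one single-precision complex amplitude (8 bytes) per basis state, a quick sketch of how state-vector storage grows with qubit count (the byte-per-amplitude figure is my assumption; exact counts depend on the precision used, so the Summit multiples are approximate):

```python
# Storage needed for a full n-qubit state vector, assuming 8 bytes
# (one single-precision complex amplitude) per basis state.
def state_vector_petabytes(n_qubits, bytes_per_amplitude=8):
    return (2 ** n_qubits) * bytes_per_amplitude / 1e15

SUMMIT_PB = 250  # Summit's quoted hard-disk capacity in petabytes

for n in (53, 55, 60, 70):
    pb = state_vector_petabytes(n)
    print(f"{n} qubits: {pb:,.0f} PB (~{pb / SUMMIT_PB:,.1f} Summits)")
```

With these assumptions, 53 qubits (~72 PB) squeezes into Summit’s 250 PB, 55 qubits (~288 PB) already does not, and every extra qubit doubles the requirement from there, which is why the brute-force strategy dies so quickly.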
Meanwhile, while the fight for quantum supremacy keeps its course, we common mortals (without the resources of companies such as IBM and Google to build quantum processors) should keep working on finding better quantum-inspired classical algorithms, understanding the value and limitations of quantum computing, and using all this knowledge to solve and improve our current problems, use cases and needs. These are exciting times to be in the field of science and computing. We must go on.
If you liked this post, do not hesitate to share it, subscribe to this newsletter, or buy me a beer :)
Some references
In this post I just wanted to summarize what is going on around Google’s new quantum supremacy paper, but if you want to go a bit deeper into the matter, I really recommend reading some of the following references (especially the ones from Scott Aaronson and Gil Kalai, to get a grasp of the battle being fought between great minds with opposing opinions in the field of quantum computing).
Google’s blog post: https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html?m=1
The article: https://www.nature.com/articles/d41586-019-02936-3