The Google team claims that a completely new type of computer – a quantum computer – has irrevocably surpassed classical computers in some respects. This is probably true, even though quantum computers are still in their infancy – and even though, as it turns out, Google's own estimate may be off by almost ten thousand years.
Google had a media bomb ready for October 23 this year. On that day, Nature published an article in which the company's team argues that the moment has come when, for some tasks, today's classical electronic computers can no longer keep up with quantum computers.
Through the looking glass
Quantum computers already have a rich history on paper. They were "invented" by the famous physicist Richard Feynman in the early 1980s. He was among the first to realize that, using the laws of quantum mechanics, it should be theoretically possible to build a device with enormous computing power.
This is possible thanks to a combination of several seemingly strange laws of the quantum world. The first is quantum superposition of states: a classical bit can be either 1 or 0, but a quantum bit (a qubit) can be both at the same time.
For a single qubit, the difference from a classical bit is not dramatic – it can carry only twice as much information. But, as we said, ordinary rules do not apply in the quantum world. Besides the "undecidedness" of superposition, quantum computers also exploit the seemingly "magical" connection known as quantum entanglement.
So when you entangle two quantum bits – two qubits – they can together carry four times as much information. In theory, there is no reason not to string together more: three, ten, or a hundred. And here the growth is incredibly fast: the amount of information in the system grows as two to the power of the number of entangled qubits. For two qubits it is 2², i.e. 4. For five it is 2⁵, i.e. 32. And for 53 qubits it is already 2⁵³, i.e. 9,007,199,254,740,992 (about nine quadrillion).
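That exponential growth is easy to check for yourself; here is a minimal Python sketch (the function name is ours, chosen for illustration – it does not come from the Nature paper):

```python
# A classical register of n bits is in exactly one of its 2**n states,
# while n entangled qubits are described by 2**n complex amplitudes
# at once -- this is the exponential growth described above.

def amplitude_count(n_qubits):
    """Number of complex amplitudes needed to describe n entangled qubits."""
    return 2 ** n_qubits

for n in (1, 2, 5, 53):
    print(f"{n:>2} qubits -> {amplitude_count(n):,} amplitudes")
```

For 53 qubits this prints 9,007,199,254,740,992 – the number quoted above.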
Unfortunately, implementation is not easy. Creating an entangled system of qubits and running the computation is only half the battle. You then have to read the result out of the computer without destroying it. If you simply looked at the state of a qubit during the computation, it would collapse into a boring classical particle carrying a single bit of information. The magic of the quantum computer would instantly disappear.
Therefore, special techniques (based on the principle of quantum interference) are used to extract the result. As the name implies, the individual "variants" of the result either reinforce each other or, on the contrary, weaken each other. Some become more probable because they are "supported," while others become less probable because other variants "cancel" them out. A properly constructed quantum computer (running a properly tuned algorithm) gradually "amplifies" the correct solution until little else is left at the end of the calculation.
Quantum computers will never be faster at every type of problem; such parallel computation is not always a great advantage. But there are tasks at which we are now certain they will be the best. Probably the most famous one concerns computer encryption and is named after its discoverer, the mathematician Peter Shor – Shor's algorithm.
It was published in 1994 and was the first example of a practical use for quantum computers. To a layman it looks like a completely uninteresting recipe for finding the prime factors of large numbers; Peter Shor showed, however, that quantum computers should be able to break several ciphers which were then (and still are) in use precisely because factoring is believed to be hard for classical machines.
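Shor's algorithm needs the quantum computer for only one step: finding the order of a number modulo N. The rest is classical number theory, which we can sketch in plain Python. In this sketch the order is found by brute force – which is exactly the exponentially slow step a quantum computer would replace:

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r % n == 1 -- found by brute force here;
    this is the step Shor's algorithm speeds up on quantum hardware."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Classical post-processing of Shor's algorithm: from the order r of
    a modulo n, derive factors of n (works when r is even and
    a**(r//2) is not -1 mod n)."""
    r = order(a, n)
    assert r % 2 == 0, "need an even order; try another a"
    half = pow(a, r // 2, n)
    return gcd(half - 1, n), gcd(half + 1, n)

# Order of 7 mod 15 is 4, so gcd(7**2 - 1, 15) and gcd(7**2 + 1, 15)
# yield the prime factors of 15.
print(shor_factor(15, 7))  # -> (3, 5)
```

On a real quantum computer only `order` would run as a quantum circuit; the `gcd` arithmetic stays classical.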
There are other well-known ideas for using quantum computers on tasks that are inaccessible to classical computers (notably Grover's algorithm for searching databases), but 1994 was the breakthrough. Since then, quantum computers have been awaited with a mixture of trepidation and enthusiasm. For a whole decade these were, of course, platonic feelings, because it was clear that quantum computers were a long way off. But as (among other things) the Google result shows, the waiting is coming to an end.
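The interference-based "amplification" described earlier can be sketched with plain lists of amplitudes. This toy version of Grover's search looks through a 4-item "database" (2 qubits); with one marked item, a single Grover iteration happens to amplify the answer to certainty:

```python
# Toy Grover search: amplitudes that survive interference are the answer.

def grover_iteration(amps, marked):
    # Oracle: flip the sign of the marked item's amplitude.
    amps = [(-a if i == marked else a) for i, a in enumerate(amps)]
    # Diffusion: reflect every amplitude about the mean. The amplitude
    # the oracle "pushed down" is thereby reinforced; the rest cancel.
    mean = sum(amps) / len(amps)
    return [2 * mean - a for a in amps]

marked = 2
amps = [0.5, 0.5, 0.5, 0.5]      # uniform superposition over 4 states
amps = grover_iteration(amps, marked)
probs = [a * a for a in amps]    # measurement probabilities
print(probs)                     # item 2 now has probability 1.0
```

For larger databases of size N, roughly √N such iterations are needed – still far fewer than the N/2 lookups a classical search expects.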
You may have noticed that we chose a rather odd number – 53 – for our example of the extremely rapid growth of quantum computing power. As you may have realized, or learned from other sources, this is no coincidence.
It was with 53 qubits that the Sycamore processor worked, with which the Google team announced in Nature that "quantum supremacy" had arrived – i.e., let us repeat, the moment when quantum computers can handle at least some calculations faster than even the fastest classical computers.
Sycamore is a device whose qubits are built from loops of superconducting wire, a technology favored by several teams today (despite the need to cool the chip with helium to temperatures near absolute zero, we might add). This particular device was developed directly by Google, and it is a flat lattice of superconductors with 54 "nodes" – the qubits. Yes, 54 nodes: one qubit was defective, so only 53 could be used. As you can see, quantum computers are not the most reliable of devices just yet.
The least interesting part of the whole report is perhaps the very problem on which Google demonstrated quantum supremacy. It "was carefully chosen just so that a quantum computer could prove its superiority on it," wrote John Preskill of the California Institute of Technology, who coined the term "quantum supremacy," in Quanta Magazine, adding that it is otherwise not a problem of any particular practical interest.
Simply put, the computer executed a randomly generated sequence of quantum-gate instructions and the scientists then measured the result. The calculation has no obvious application; it serves mainly as proof that the team has "mastered" quantum hardware well enough to perform, relatively reliably, a calculation that classical computers simply cannot.
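To get a feel for why simulating such circuits is hard for a classical machine, here is a minimal statevector simulator sketch (the circuit here – Hadamard gates on three qubits – is purely illustrative, not Google's actual random circuit). The key point: applying even a single one-qubit gate means updating all 2ⁿ amplitudes:

```python
import math

def apply_gate(state, gate, target):
    """Apply a 2x2 gate to one qubit of a statevector: every one of the
    2**n amplitudes is touched, which is why the cost explodes with n."""
    new = state[:]
    step = 1 << target
    for i in range(len(state)):
        if not (i & step):            # each pair (i, i|step) is one qubit
            a0, a1 = state[i], state[i | step]
            new[i] = gate[0][0] * a0 + gate[0][1] * a1
            new[i | step] = gate[1][0] * a0 + gate[1][1] * a1
    return new

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # Hadamard gate

n = 3
state = [0.0] * (1 << n)
state[0] = 1.0                        # start in |000>
for q in range(n):                    # Hadamard on every qubit
    state = apply_gate(state, H, q)
# Uniform superposition: each of the 8 outcomes has probability 1/8.
print([round(abs(a) ** 2, 3) for a in state])
```

At 3 qubits the list has 8 entries; at 53 it would have nine quadrillion, and every gate in the circuit would have to sweep through all of them.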
“The Google team has shown that today we can build a quantum machine large and accurate enough to solve a problem we couldn’t solve before,” Preskill wrote in the article.
IBM Strikes Back
According to Google, the gap between the quantum computer and a classical computer was crushing. The quantum Sycamore handled the calculation in about two hundred seconds; the fastest modern supercomputers would need about ten thousand years to solve the same problem.
However, between the article leaking and its official publication, the IBM team intervened. On October 21, before the competitor's work officially appeared in Nature, IBM published a technical paper and a post on the company blog arguing that Google had made a very significant miscalculation. According to IBM, the most powerful classical computer in the world could solve the problem much faster – in a few days instead of ten thousand years.
IBM was referring to one particular machine: Summit at Oak Ridge National Laboratory, currently the most powerful computer in the world (about 200 petaflops), which – crucially in this case – also has 250 petabytes of storage. That storage can (simply put) hold a complete list of all possible states of the quantum 53-qubit Sycamore chip that Google used. As a result, the simulation could run far faster than Google's estimate assumed.
Thus, IBM estimates that the computation would take the fastest classical computer today not ten thousand years, but about two and a half days. And that, the company says, is the worst case – it should in fact be faster. For now it is just an estimate: the calculation has not yet been run, and although the IBM team is said to have the test software ready, there is no free slot in Summit's schedule for it yet. Incidentally, as Scott Aaronson points out, if machine time is freed up, the Google team will surely be thrilled – in their paper they did not expect to be able to verify their results directly, that is, by actually running the full classical calculation. Their estimates were based on modeling and on checks against a small sample of the computation.
Experts commenting on the situation agree fairly unequivocally that IBM's proposal seems rational and practically feasible; the Google team apparently overlooked a rather obvious workaround. The question is whether it makes any difference. It does not: quantum supremacy seems inevitable, at least for this type of task.
Why? As we described, the Sycamore chip works with 53 qubits, and its simulation "fits," with a small margin, into the 250 petabytes of Summit's storage. But the cost of simulating a quantum computer does not grow linearly. If Sycamore had, say, 55 qubits, this trick could no longer be used – the simulation would not fit into the disk array. And with 60 qubits, it would need roughly 33 times as much storage as the 55-qubit case. It seems we really are right around the point where quantum computers start to pull away from classical ones too quickly.
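IBM's storage argument is back-of-the-envelope arithmetic: one complex amplitude per basis state. Assuming 16 bytes per amplitude (double precision – our assumption for illustration, the real implementation may compress this):

```python
# Storage needed to hold a full statevector of n qubits, in petabytes,
# assuming one double-precision complex amplitude (16 bytes) per state.

def statevector_petabytes(n_qubits, bytes_per_amp=16):
    return 2 ** n_qubits * bytes_per_amp / 1e15

print(round(statevector_petabytes(53), 1))  # ~144 PB: fits in Summit's 250 PB
print(round(statevector_petabytes(55), 1))  # ~576 PB: no longer fits
print(round(statevector_petabytes(60), 0))  # ~18447 PB: 32x the 55-qubit case
```

Every two extra qubits quadruple the requirement, which is why the trick stops working almost immediately past 53.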
On the other hand, the case shows that classical computers have not had the last word. Several people and teams can now think harder about algorithms that could solve the same problem more elegantly and faster even on a classical machine – and maybe they will succeed. But the advantage of quantum technology seems so great that, on a time horizon of years and decades, it will be hard for anyone else to keep up.
Added to that, there are skeptics. If you wish, you can look, for example, at the blog of the very active mathematician Gil Kalai, who has long questioned the possibility of quantum supremacy and has been searching very hard (though so far quite unsuccessfully) for errors in the Google team's recently published work.
Mr. Shor’s second idea
The Google machine has also done nothing to solve the big problem of quantum computers: their error rate. Modern qubits are very sensitive and easily disturbed by outside influences. And as soon as an error appears in a quantum computation, it needs to be fixed quickly, because its impact grows as the computation proceeds, and the whole computation can soon collapse. So far, it seems this problem can never be eliminated entirely.
It is a consequence of the quantum entanglement of qubits with their environment: they have the unpleasant habit of occasionally "establishing" the same kind of relationship with their surroundings as they do with each other. Quantum computation thus inevitably accumulates errors that, by the very nature of the thing, cannot be avoided. With classical technology, too, you had to be careful not to put a magnet on a hard disk or a floppy disk on a loudspeaker, but otherwise you could be fairly sure your data was protected from the outside world and would not simply change. (A classical logical one is separated from a zero by a potential difference of several volts, a threshold hard to cross under normal conditions.) A quantum system is far more "fragile" because we simply do not know how to isolate it completely from its surroundings. We cannot build quantum walls.
Nevertheless, there is a solution today that is generally considered feasible, although unfortunately (computationally) expensive. It was devised by the selfsame Peter Shor who made all the cryptographers so nervous about quantum computers. He proposed a way to correct a quantum computation using redundantly stored backup information. At first sight this seems natural: why not simply keep a copy, right? It probably crosses everyone's mind…
Unfortunately, in the quantum world this is not possible. To make a copy you would have to read the original, and if you "read" a quantum computation exactly, it collapses. Shor – and after him a number of other scientists – therefore perfected a fairly sophisticated scheme that backs up the computation into additional qubits of the system without damaging the original. It even works in experiments (see for example this paper), but so far only small ones.
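The flavor of this backup-without-copying can be shown with the three-qubit bit-flip code, a building block of Shor's nine-qubit code. The sketch below simulates it with a plain list of eight amplitudes; the point is that the "syndrome" parities identify which qubit flipped without ever reading the encoded amplitudes alpha and beta (in a real device, the parities would be measured via extra ancilla qubits, not by inspecting the state as we do here):

```python
# Three-qubit bit-flip code: alpha|000> + beta|111>, simulated classically.

def flip(state, qubit):
    """Apply a bit-flip (X) error to one qubit of the 3-qubit register."""
    return [state[i ^ (1 << qubit)] for i in range(8)]

def syndrome(state):
    """Parities Z0Z1 and Z1Z2, read off any basis state in the support --
    they locate the error without revealing alpha or beta."""
    i = next(k for k, a in enumerate(state) if abs(a) > 1e-12)
    bits = [(i >> q) & 1 for q in range(3)]
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(state):
    """Map the syndrome to the flipped qubit and undo the error."""
    fix = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(state)]
    return state if fix is None else flip(state, fix)

alpha, beta = 0.6, 0.8
encoded = [0.0] * 8
encoded[0b000], encoded[0b111] = alpha, beta   # alpha|000> + beta|111>

damaged = flip(encoded, 1)                     # error hits the middle qubit
recovered = correct(damaged)
print(recovered == encoded)                    # True: state restored
```

The price of the protection is visible even here: three physical qubits carry one logical qubit, and the real, fully quantum version of Shor's code needs nine.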
So far, working continuous error correction has never been demonstrated in a system as large as the one Google used to prove quantum supremacy – that is, a computer with 50 or more qubits. It would require several times as many "checking" qubits to reach the necessary level of error protection, and nobody has that many today.
With the possible emergence of large computers in the future, the problem will become even more pronounced. To break a significant portion of today's ciphers, for example, you would need a computer working with thousands of qubits. But if the computation is to be protected against errors arising from the spontaneous entanglement of qubits with their environment, experts estimate that millions of additional "checking" qubits would be required – something hard to imagine at today's level of technology.
Inspired by an article from the Czech Republic