Google’s quantum computer suggests that wormholes are real


Perhaps wormholes will no longer be relegated to the realm of science fiction.

In the context of general relativity, a wormhole is the one way that immediate transport between two distant, disconnected events in spacetime could occur. For now, these “bridges” are mathematical curiosities only; no physical wormhole has ever been found or created, but if one were discovered, it could immediately test the predictions of general relativity, as well as those of any competing alternatives. Image credit: Adobe Stock / Nikita Kuzmenkov

Albert Einstein is rightfully considered one of the most impactful physicists of all time. He created his theories of relativity, which govern the behavior of matter moving at tremendous speeds, and reimagined the force of gravity as the bending of space and time. He also wrote prodigiously on the idiosyncrasies of quantum mechanics, rejecting it as fundamentally incorrect even as he explored the theory’s implications.

While Einstein’s reputation as a genius is secure, a little extra validation never hurts, especially when it revolves around one of his most exotic predictions: wormholes, or tunnels through space.

A consortium of researchers from Caltech, Google, Fermilab, MIT, and Harvard used the Sycamore quantum processor, a quantum computer developed by Google, to generate and control what is equivalent to a wormhole. How does this work? It comes down to intricate interconnections between two of Einstein’s ideas.

Wormholes and quantum entanglement

In 1935, Einstein was working with his student Nathan Rosen on ways to extend his theory of gravity, called the theory of general relativity, into a theory of everything. One problem was that the theory predicted infinities at the centers of black holes. These infinities arose where the entire mass of a dead star collapsed into a point of zero size, called a singularity.

Rosen and Einstein played around with other possible solutions, including using some creative mathematics to replace two singularities with a tube connecting them. These tubes are called Einstein-Rosen bridges or, more colloquially, wormholes. In principle, an object could enter one mouth of a wormhole and exit through the other, even though the two mouths are separated by vast distances. The object would have traveled through extra dimensions. This work is known as ER theory, after the two authors’ initials.

Wormholes are favorites of science fiction writers, as they provide the possibility of faster-than-light travel. Spacecraft could travel large distances in zero time. While there are many practical problems involved with making wormholes, an especially important one is that they are unstable unless stabilized by large amounts of negative energy.

That same year, Einstein and Rosen also worked on a topic in quantum mechanics, this time with a third physicist, Boris Podolsky. This topic was quantum entanglement, which concerns the behavior of two objects that were initially in contact with one another, so that their properties became intertwined. While the properties of neither object are determined before measurement (that’s part of the craziness of quantum mechanics), the fact that they are the opposite of one another is “baked in” at the outset.

The tricky business was that even if you separated the two objects by enormous distances and measured the properties of one of them, you instantly knew the properties of the other, despite the properties of neither being determined until a measurement was performed. This was called the EPR paradox, after the researchers’ initials.
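To make those correlations concrete, here is a minimal sketch in Python (a classical numpy simulation, not a real quantum experiment; the state and names are illustrative) of an EPR-style measurement. The entangled pair is prepared so that opposite outcomes are baked in: either particle alone reads like a coin flip, yet the two readings always disagree.

```python
# Classical simulation of EPR correlations between two entangled particles.
# The entangled state (|01> + |10>) / sqrt(2) encodes "opposite outcomes"
# without fixing which particle gets which value.
import numpy as np

rng = np.random.default_rng(seed=0)

# Joint state written in the basis {|00>, |01>, |10>, |11>}.
state = np.array([0, 1, 1, 0]) / np.sqrt(2)

# Born rule: each joint outcome occurs with probability |amplitude|^2.
probs = np.abs(state) ** 2
outcomes = rng.choice(["00", "01", "10", "11"], size=10_000, p=probs)

# Either particle on its own looks like a fair coin flip...
a = np.array([int(o[0]) for o in outcomes])
print("Particle A reads 1 with frequency:", a.mean())  # close to 0.5

# ...but the pair always disagrees, so measuring one particle instantly
# tells you the other's value, however far away it is.
print("Fraction of opposite outcomes:", np.mean([o[0] != o[1] for o in outcomes]))  # 1.0
```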

ER = EPR

Both ER theory and the EPR paradox were considered curiosities for a long time, but over the last decade scientists began to understand that the two ideas have deeper connections. In fact, it has become clear that the two ideas are, in many ways, functionally identical. Two physicists, Juan Maldacena and Leonard Susskind, are often credited with the most crucial contributions to this realization, and it was Maldacena who coined the succinct representation of the observation: “ER = EPR.”

If it is indeed true that ER = EPR, then we are in luck, because, while we cannot yet create wormholes, we certainly can perform EPR measurements. We have been doing measurements like that for decades.

Wormholes might be real

This is where the new announcement enters the picture. In a paper in Nature, researchers developed a simplified version of the problem and modeled wormhole behavior on the quantum computer. They found that the result was exactly as theory predicts. They were even able to simulate conditions in which the theoretical wormhole was governed by positive or negative energy, and discovered that, while the positive option was unstable, the negative one was stable, just as ER theory suggests.

To the extent that EPR and ER are mathematically the same, this work implies that wormholes are not just theoretical curiosities.

It is important to note that the researchers did not generate a physical wormhole. No objects were transferred through extra dimensions. Instead, what was demonstrated was quantum behavior. However, since the mathematics of ER and EPR are deeply intertwined, the new result suggests that wormholes are at least a possibility.

Quantum gravity

The deeper implications of this work are that it provides researchers with a laboratory to explore not only ER theory and the EPR paradox but also a theory called quantum gravity, which extends gravity to the world of the super small. A successful theory of quantum gravity has eluded the scientific community for nearly a century, so this new capability may help illuminate a path forward. Indeed, quantum computing now makes it possible to test ideas that were untestable just a few years ago.

Nobel Prize in Chemistry 2023


The 2023 Nobel Prize in Chemistry has been awarded to Moungi G. Bawendi, Louis E. Brus and Alexei I. Ekimov for the discovery and synthesis of quantum dots. Quantum dots are semiconductor nanocrystals whose properties can be tuned by their physical size. While Brus and Ekimov independently created quantum dots and linked their nanometre size to their observed optical quantum properties, Bawendi developed the synthesis process that yields nanoparticles of uniform size and quality. Quantum dots are now widely employed, for example, in computer and television screens based on QLED (quantum light-emitting diode) technology, in biochemistry to track cells, and in medicine to identify tumour tissue within the body. The discoveries of this year’s Chemistry Nobel Prize Laureates have paved the way for advances in nanotechnology and continue to have repercussions not just for the wider research community but also for society at large, given the wide applicability of quantum dots.

In this Collection, Nature Portfolio recognizes the achievements of the Laureates in a selection of featured content, articles from the winners, related research papers, and reviews, news and opinion pieces that highlight the development of quantum dots over the past three decades.

The AI–quantum computing mash-up: will it revolutionize science?


Scientists are exploring the potential of quantum machine learning. But whether there are useful applications for the fusion of artificial intelligence and quantum computing is unclear.

The Sycamore quantum computer. Google is exploring whether quantum computers can help with machine learning. Credit: Rocco Ceselin for Nature

Call it the Avengers of futuristic computing. Put together two of the buzziest terms in technology — machine learning and quantum computers — and you get quantum machine learning. Like the Avengers comic books and films, which bring together an all-star cast of superheroes to build a dream team, the result is likely to attract a lot of attention. But in technology, as in fiction, it is important to come up with a good plot.

If quantum computers can ever be built at large-enough scales, they promise to solve certain problems much more efficiently than can ordinary digital electronics, by harnessing the unique properties of the subatomic world. For years, researchers have wondered whether those problems might include machine learning, a form of artificial intelligence (AI) in which computers are used to spot patterns in data and learn rules that can be used to make inferences in unfamiliar situations.

Now, with the release of the high-profile AI system ChatGPT, which relies on machine learning to power its eerily human-like conversations by inferring relationships between words in text, and with the rapid growth in the size and power of quantum computers, both technologies are making big strides forwards. Will anything useful come of combining the two?

Booming interest

Many technology companies, including established corporations such as Google and IBM, as well as start-up firms such as Rigetti in Berkeley, California, and IonQ in College Park, Maryland, are investigating the potential of quantum machine learning. There is strong interest from academic scientists, too.

CERN, the European particle-physics laboratory outside Geneva, Switzerland, already uses machine learning to look for signs that certain subatomic particles have been produced in the data generated by the Large Hadron Collider. Scientists there are among the academics who are experimenting with quantum machine learning.

“Our idea is to use quantum computers to speed up or improve classical machine-learning models,” says physicist Sofia Vallecorsa, who leads a quantum-computing and machine-learning research group at CERN.

The big unanswered question is whether there are scenarios in which quantum machine learning offers an advantage over the classical variety. Theory shows that for specialized computing tasks, such as simulating molecules or finding the prime factors of large whole numbers, quantum computers will speed up calculations that could otherwise take longer than the age of the Universe. But researchers still lack sufficient evidence that this is the case for machine learning. Others say that quantum machine learning could spot patterns that classical computers miss — even if it isn’t faster.

Researchers’ attitudes towards quantum machine learning shift between two extremes, says Maria Schuld, a physicist based in Durban, South Africa. Interest in the approach is high, but researchers seem increasingly resigned about the lack of prospects for short-term applications, says Schuld, who works for quantum-computing firm Xanadu, headquartered in Toronto, Canada.

Some researchers are beginning to shift their focus to the idea of applying quantum machine-learning algorithms to phenomena that are inherently quantum. Of all the proposed applications of quantum machine learning, this is “the area where there’s been a pretty clear quantum advantage”, says physicist Aram Harrow at the Massachusetts Institute of Technology (MIT) in Cambridge.

Do quantum algorithms help?

Over the past 20 years, quantum-computing researchers have developed a plethora of quantum algorithms that could, in theory, make machine learning more efficient. In a seminal result in 2008, Harrow, together with MIT physicists Seth Lloyd and Avinatan Hassidim (now at Bar-Ilan University in Ramat Gan, Israel), invented a quantum algorithm [1] that is exponentially faster than a classical computer at solving large sets of linear equations, a challenge that lies at the heart of machine learning.
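To see concretely why linear systems sit at the heart of machine learning, consider that fitting a ridge-regression model reduces to solving a single system of linear equations. The sketch below (plain numpy; it illustrates the classical task a quantum linear-system solver would target, not the quantum algorithm itself, and the sizes and names are arbitrary) solves the normal equations with a dense classical solver, the step whose cost grows with the size of the system.

```python
# Ridge regression as a linear-system problem: fit weights w by solving
# (X^T X + lam*I) w = X^T y. A quantum linear-system solver would, in
# principle, replace the classical np.linalg.solve step.
import numpy as np

rng = np.random.default_rng(seed=1)

n_samples, n_features = 200, 5
X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.1 * rng.normal(size=n_samples)  # noisy labels

lam = 1e-3  # ridge regularization strength
A = X.T @ X + lam * np.eye(n_features)
b = X.T @ y

w = np.linalg.solve(A, b)  # classical cost grows quickly with system size
print("Recovered weights close to the truth:", np.allclose(w, true_w, atol=0.05))
```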

But in some cases, the promise of quantum algorithms has not panned out. One high-profile example occurred in 2018, when computer scientist Ewin Tang found a way to beat a quantum machine-learning algorithm [2] devised in 2016. The quantum algorithm was designed to provide the type of suggestion that Internet shopping companies and services such as Netflix give to customers on the basis of their previous choices — and it was exponentially faster at making such recommendations than any known classical algorithm.

Tang, who at the time was an 18-year-old undergraduate student at the University of Texas at Austin (UT), wrote an algorithm that was almost as fast, but could run on an ordinary computer. Quantum recommendation was a rare example of an algorithm that seemed to provide a significant speed boost in a practical problem, so her work “put the goal of an exponential quantum speed-up for a practical machine-learning problem even further out of reach than it was before”, says UT quantum-computing researcher Scott Aaronson, who was Tang’s adviser. Tang, who is now at the University of California, Berkeley, says she continues to be “pretty sceptical” of any claims of a significant quantum speed-up in machine learning.
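For context, both the 2016 quantum algorithm and Tang’s classical counterpart exploit the same structural assumption: the user-item preference matrix is approximately low rank. The sketch below (plain numpy; a generic illustration of that assumption, not either published algorithm) shows how a truncated SVD recovers hidden preference structure and scores items a user has not yet rated.

```python
# Low-rank structure behind recommendation systems: users' tastes are
# described by a few hidden factors, so a truncated SVD of noisy ratings
# recovers enough structure to rank unseen items.
import numpy as np

rng = np.random.default_rng(seed=2)

n_users, n_items, rank = 100, 50, 3
# Hidden low-rank preferences: a few latent "taste" factors per user/item.
prefs = rng.normal(size=(n_users, rank)) @ rng.normal(size=(rank, n_items))
ratings = prefs + 0.1 * rng.normal(size=prefs.shape)  # noisy observations

# Keep only the top singular components to denoise and generalize.
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Recommend user 0 the item with the highest reconstructed score.
print("Top recommendation for user 0:", int(np.argmax(approx[0])))
```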

A potentially even bigger problem is that classical data and quantum computation don’t always mix well. Roughly speaking, a typical quantum-computing application has three main steps. First, the quantum computer is initialized, which means that its individual memory units, called quantum bits or qubits, are placed in a collective entangled quantum state. Next, the computer performs a sequence of operations, the quantum analogue of the logical operations on classical bits. In the third step, the computer performs a read-out, for example by measuring the state of a single qubit that carries information about the result of the quantum operation. This could be whether a given electron inside the machine is spinning clockwise or anticlockwise, say.
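Here is a minimal numpy sketch of those three steps for a two-qubit computation (a classical simulation; no quantum hardware or SDK is assumed, and the circuit is a generic example rather than any specific application). It also shows the repetition-and-averaging overhead that the probabilistic read-out forces.

```python
# The three steps of a typical quantum computation, simulated classically.
import numpy as np

rng = np.random.default_rng(seed=3)

# Step 1: initialize. Both qubits start in |0>, so the joint state is |00>.
state = np.zeros(4)
state[0] = 1.0

# Step 2: operate. Gates are unitary matrices; a Hadamard on qubit 0
# followed by a CNOT entangles the pair.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = CNOT @ np.kron(H, I2) @ state

# Step 3: read out. A single run yields one random outcome, so the whole
# procedure is repeated many times and the statistics are averaged.
probs = np.abs(state) ** 2
shots = rng.choice(4, size=1000, p=probs)
counts = np.bincount(shots, minlength=4)
print("Outcome frequencies for |00>, |01>, |10>, |11>:", counts / 1000)
```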

The thinnest of straws

Algorithms such as the one by Harrow, Hassidim and Lloyd promise to speed up the second step — the quantum operations. But in many applications, the first and third steps could be extremely slow and negate those gains [3]. The initialization step requires loading ‘classical’ data on to the quantum computer and translating it into a quantum state, often an inefficient process. And because quantum physics is inherently probabilistic, the read-out often has an element of randomness, in which case the computer has to repeat all three stages multiple times and average the results to get a final answer.

Once the data have been processed into a final quantum state, it could take a long time to get an answer out, too, according to Nathan Wiebe, a quantum-computing researcher at the University of Washington in Seattle. “We only get to suck that information out of the thinnest of straws,” Wiebe said at a quantum machine-learning workshop in October.

“When you ask almost any researcher what applications quantum computers will be good at, the answer is, ‘Probably, not classical data,’” says Schuld. “So far, there is no real reason to believe that classical data needs quantum effects.”

Vallecorsa and others say that speed is not the only metric by which a quantum algorithm should be judged. There are also hints that a quantum AI system powered by machine learning could learn to recognize patterns in the data that its classical counterparts would miss. That might be because quantum entanglement establishes correlations among quantum bits and therefore among data points, says Karl Jansen, a physicist at the DESY particle-physics lab in Zeuthen, Germany. “The hope is that we can detect correlations in the data that would be very hard to detect with classical algorithms,” he says.

A candidate particle-collision event featuring two high-energy photons (depicted in red). Quantum machine learning could help to make sense of particle collisions at CERN, the European particle-physics laboratory near Geneva, Switzerland. Credit: CERN/CMS Collaboration; Thomas McCauley, Lucas Taylor (CC BY 4.0)

But Aaronson disagrees. Quantum computers follow well-known laws of physics, and therefore their workings and the outcome of a quantum algorithm are entirely predictable by an ordinary computer, given enough time. “Thus, the only question of interest is whether the quantum computer is faster than a perfect classical simulation of it,” says Aaronson.

Fundamental quantum change

Another possibility is to sidestep the hurdle of translating classical data altogether, by using quantum machine-learning algorithms on data that are already quantum.

Throughout the history of quantum physics, a measurement of a quantum phenomenon has been defined as taking a numerical reading using an instrument that ‘lives’ in the macroscopic, classical world. But a nascent technique known as quantum sensing allows the quantum properties of a system to be measured using purely quantum instrumentation. Load those quantum states directly on to a quantum computer’s qubits, and quantum machine learning could then be used to spot patterns without any interface with a classical system.

When it comes to machine learning, that could offer big advantages over systems that collect quantum measurements as classical data points, says Hsin-Yuan Huang, a physicist at MIT and a researcher at Google. “Our world inherently is quantum-mechanical. If you want to have a quantum machine that can learn, it could be much more powerful,” he says.

Huang and his collaborators have run a proof-of-principle experiment on one of Google’s Sycamore quantum computers [4]. They devoted some of its qubits to simulating the behaviour of a kind of abstract material. Another section of the processor then took information from those qubits and analysed it using quantum machine learning. The researchers found the technique to be exponentially faster than classical measurement and data analysis.

Is it a superconductor?

Doing the collection and analysis of data fully in the quantum world could enable physicists to tackle questions that classical measurements can only answer indirectly, says Huang. One such question is whether a certain material is in a particular quantum state that makes it a superconductor — able to conduct electricity with practically zero resistance. Classical experiments require physicists to prove superconductivity indirectly, for example by testing how the material responds to magnetic fields.

Particle physicists are also looking into using quantum sensing to handle data produced by future particle colliders, such as at LUXE, a DESY experiment that will smash electrons and photons together, says Jansen — although the idea is still at least a decade away from being realized, he adds. Astronomical observatories far apart from each other might also use quantum sensors to collect data and transmit them — by means of a future ‘quantum internet’ — to a central lab for processing on a quantum computer. The hope is that this could enable images to be captured with unparalleled sharpness.

If such quantum-sensing applications prove successful, quantum machine learning could then have a role in combining the measurements from these experiments and analysing the resulting quantum data.

Ultimately, whether quantum computers will offer advantages to machine learning will be decided by experimentation, rather than by giving mathematical proofs of their superiority — or lack thereof. “We can’t expect everything to be proved in the way we do in theoretical computer science,” says Harrow.

“I certainly think quantum machine learning is still worth studying,” says Aaronson, whether or not there ends up being a boost in efficiency. Schuld agrees. “We need to do our research without the confinement of proving a speed-up, at least for a while.”