The Power of Complexity and Entanglement, from Thousands of Miles Away

by Siobhan Roberts (Journalist in Residence, Simons Institute)

In January 2014, during an open problems session in the auditorium at the Simons Institute, the computer scientist Thomas Vidick posed a question that he expected would go nowhere.

The research program on Quantum Hamiltonian Complexity had just commenced — probing techniques from both quantum complexity theory and condensed matter physics and asking questions such as: Is the scientific method sufficiently powerful to understand general quantum systems? Is materials science about to hit a computational barrier? 

Vidick’s questions waded further into the weeds. 

“A central conjecture, the so-called quantum PCP conjecture, crystallizes many of these issues, and the conjecture was hotly debated throughout the semester,” recounted Vidick, a professor of computing and mathematical sciences at Caltech, in his research vignette published later that year.

Two of the program’s organizers, Umesh Vazirani of UC Berkeley and Dorit Aharonov at the Hebrew University of Jerusalem, encouraged him to formulate a new variant of the conjecture, which (for those readers at least somewhat in the know) he described as follows:

“This formulation of the PCP theorem gives rise to a quantum analogue whose relation to the quantum PCP conjecture is a priori unclear. The new variant asks the following: Do all languages in QMA admit an efficient verification procedure of the above form, in which the verifier and communication may be quantum? Here it is important to also allow the provers to be initialized in an arbitrary entangled state, as this may be necessary to encode the QMA witness whose existence the provers are supposed to convey to the verifier.”

Vidick admitted the problem was tantalizing, yet he believed it would lead to a dead end.

Six years later, however, quite the contrary has proved to be the case: that dead-end question ultimately led to a breakthrough result.

The day before the 2020 spring term began at the Simons Institute — with a timely pairing of two interconnected programs, Lattices: Algorithms, Complexity, and Cryptography and The Quantum Wave in Computing — Vidick and his collaborators1 posted a 165-page paper to arXiv titled “MIP*=RE.”

It had been a long time coming. And during the home stretch, another team of researchers seemed to have proved the opposite result — via a very different language and approach — but a gap emerged with a lemma that could not be fixed.

Before the month of January was out, after some double- and triple-checking of the proof, Henry Yuen, a Simons Institute research fellow visiting that term from the University of Toronto, and the designated front man, wowed workshop participants with a two-part lecture unpacking the nuances of MIP* = RE.

“It’s a very surprising result,” said Vazirani.

At its core, he noted, the natural hypothesis would have been that, both mathematically and computationally, there should be a simple route through, with simple definitions — that was the starting place. But it gradually became clear that this wasn’t so. What the result actually reveals is that you cannot get away with it. “You have to pin down the role of entanglement very carefully,” said Vazirani. “You cannot cut any corners.”

Mysteriously interactive tales

Weeks later, when the coronavirus pandemic hit, Yuen returned to Toronto, and most program participants followed a similar path — heading homeward across the United States and around the globe to Chennai, Paris, Tel Aviv, London, Amsterdam, Berlin, and beyond.

Toward the end of April, Yuen delivered a more popular account of the team’s research during his Richard M. Karp Distinguished Lecture, titled “A Tale of Turing Machines, Quantum-Entangled Particles, and Operator Algebras.” This gathering, of course, transpired via Zoom webinar, which by then was serving as auditorium, teatime lounge, and cookout channel (Léo Ducas demonstrated a recipe for a rigorously layered tomato-zucchini tian that concluded: “Any resemblance to lattice-based crypto is purely coincidental”). Zoom also became the locus for the reading group called Quantum for Crypto Dummies.

The simplified upshot of Yuen’s tale went as follows:

MIP* = RE is an equation showing that games with quantum particles can be extraordinarily complex — so complex that it is impossible to figure out the optimal strategies for winning. And surprisingly, the MIP* = RE equation revealed — simultaneously — answers to seemingly unrelated mysteries and guiding questions in computer science, physics, and mathematics.

Wading into his lecture, Yuen laid out the guiding ideas in three fields that are very different in character.

In quantum physics, a big question asks whether quantum entanglement can be explained by classical physics.

In the theory of operator algebras, the defining query ponders the mathematical structure of operators on Hilbert space.

In the theory of computing, the crucial mystery is whether it’s harder for a computer to search for a solution than to verify a solution.

“It might be hard to imagine how these questions could remotely be related to each other,” said Yuen — but the MIP* = RE paper, he noted, claims the equivalence of two different complexity classes.

Computer scientists classify computational problems into complexity classes based on the difficulty of solving them. “For example,” Yuen said, “the famous complexity class P refers to all problems that can be solved efficiently on a deterministic computer. The class MIP* is the set of all problems that can be solved by playing games with entangled provers, and RE is the set of all problems that can be solved if one had the ability to solve the halting problem (i.e., tell if a given computer program will stop or not).” 

MIP* = RE thereby represents a convergence of the ideas from these three seemingly unrelated fields.

These three story lines had a common starting point nearly a century ago. In the interim, they diverged and flourished before converging with MIP* = RE.

First, the quantum mechanics tale.

In the 1930s, amid the Great Depression and swing jazz, quantum mechanics took shape, describing how the universe works on the smallest scales. Physicists grappled with its counterintuitive features — for instance, the wave-particle duality and the uncertainty principle. And, perhaps weirdest of all, the phenomenon of quantum entanglement: imagine two particles that once interacted and are now separated in galaxies light-years apart, yet upon local measurement they still display uncanny correlations — almost as if one particle could instantaneously influence the other. The particles are quantum entangled.

Albert Einstein called this “spooky action at a distance.” As a remedy of sorts, Einstein, Boris Podolsky, and Nathan Rosen published their famous 1935 paper proposing a thought experiment arguing that quantum mechanics is incomplete and that there should be a nicer theory of physics that reproduces the same predictions as quantum mechanics but with a local classical formulation. What they hoped for, sometimes called the EPR dream, was a deterministic model of nature that doesn’t allow instantaneous action at a distance.

But nearly 30 years later, John Bell proved that the EPR dream was impossible. He established what is now known as Bell’s theorem with an ingenious thought experiment, which can be reformulated as what is called the “CHSH game” (after Clauser, Horne, Shimony, and Holt, the four physicists who in 1969 generalized Bell’s original argument).

“In this game, there’s someone called ‘the verifier’ who’s in the middle — and a verifier is a completely classical being, like you or me,” explained Yuen. “And the verifier interacts with two players that are cooperating with each other — since I’m a computer scientist, I’m going to call these players Alice and Bob.”

Alice and Bob cannot communicate with one another over the course of the game; this is enforced through spatial separation. But they can each communicate with the verifier in the middle. The verifier interrogates Alice and Bob with random bits, sending bit x to Alice and bit y to Bob. Alice and Bob win if the sum (modulo 2) of their answer bits equals the product of their question bits. Alice and Bob want to optimize their strategy and maximize their chances of winning, which depend on which model of physics they follow.

If the players behave according to a local classical theory of physics, then their optimal winning probability is at most 75 percent. If, however, Alice and Bob take advantage of quantum physics, leveraging the power of quantum entanglement, they can increase their probability of winning to about 85 percent. Quantum physics predicts a higher winning probability than any local classical theory of physics.
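
For readers who want to check these two numbers, here is a minimal sketch (in Python, not from the lecture) that brute-forces every deterministic classical strategy — randomized classical strategies do no better — and prints the quantum value, cos²(π/8), which an entangled strategy can attain:

    from itertools import product
    from math import cos, pi

    def chsh_classical_value():
        """Best winning probability over all deterministic classical strategies."""
        best = 0.0
        # A deterministic strategy: Alice answers a[x] on question x, Bob answers b[y] on y.
        for a0, a1, b0, b1 in product([0, 1], repeat=4):
            a, b = (a0, a1), (b0, b1)
            wins = sum((a[x] ^ b[y]) == (x & y) for x, y in product([0, 1], repeat=2))
            best = max(best, wins / 4)    # the question bits x and y are uniformly random
        return best

    print(chsh_classical_value())          # 0.75
    print(cos(pi / 8) ** 2)                # ~0.8536, the quantum (Tsirelson) value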

“So, put together — this is how John Bell deduced his theorem,” said Yuen. “Local classical physics cannot reproduce all predictions of quantum mechanics.”

And, as he noted, the CHSH game is not just a thought experiment; it has been experimentally tested many times over the years — a high-profile example of this is an experiment that was conducted at TU Delft in the Netherlands.

“This simple game gives a way to verify the quantum nature of our world,” Yuen said.

He added that since the days of Einstein and Bell, various natural phenomena, from the strange to the mundane, have been revealed to be fundamentally due to large numbers of entangled particles — for instance, superconductivity, black hole evaporation, and perhaps even the simple process of photosynthesis, which converts sunlight into chemical energy.

“The point is that the world we live in seems to be governed by this very nonclassical theory of nature,” said Yuen. “And John Bell’s thought experiment gave us a very solid proof that this is the case.”

Secondly, Yuen outlined the mathematical thread: operator algebras.

In 1932, John von Neumann developed a rigorous mathematical foundation for quantum mechanics — still used by physicists and quantum information scientists today — whereby quantum states are defined by vectors in a complex Hilbert space and measurements are described by linear operators on that space. Inspired by this framework, von Neumann and his collaborator Francis Murray over the next decade published a series of papers that launched the field of operator algebras.

“The field of operator algebras is a very deep subject in pure mathematics revealing a zoo of objects with immense complexity and structure,” said Yuen. “And just as zoologists try to classify the diversity of organisms, operator algebraists also want to classify all these abstract animals that roam around the mathematical landscape.”

One of the most important goals of the field was to classify the so-called von Neumann factors. Broadly, there are three species — type I, type II, and type III — and within each there are subspecies. Type I subspecies were classified by Murray and von Neumann, leaving open the question of how type II and type III behave.

Alain Connes won a Fields Medal in part for classifying the type III factors.

As for type II factors, a throwaway line in Connes’s paper published in 1976 introduced what became known as the Connes embedding conjecture, asking whether the type II factors — which in general are abstract infinite-dimensional objects — could be approximately understood in terms of more concrete finite-dimensional matrices. Over the years, the Connes embedding conjecture became more and more significant as researchers discovered many equivalent formulations and consequences across different areas of mathematics.

Finally, Yuen turned to the computer science tale.

In 1936, Alan Turing proposed the concept of a universal computing machine — a universal Turing machine, a machine that could simulate any other computing device. “It’s one of the most powerful ideas in science — the idea that from a few simple rules, you can obtain universal computation,” said Yuen.

Turing also showed there is a problem that this universal machine, or indeed any algorithm, cannot solve: determining which Turing machines halt and which run forever. This built upon Kurt Gödel’s incompleteness theorems, which show that there are mathematical statements that can be neither proved nor disproved within a given formal system.
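
To give a flavor of Turing’s argument, here is the standard self-referential sketch in Python. The halts function below is purely hypothetical, and the contradiction the sketch sets up is precisely why it cannot exist:

    # A standard miniature of Turing's diagonal argument (a sketch, not code from any paper).
    # Suppose, hypothetically, that halts(prog, data) could always decide whether the
    # program text `prog` halts when run on the input `data`.

    def halts(prog: str, data: str) -> bool:
        """Hypothetical halting oracle; Turing's theorem says no such function can exist."""
        raise NotImplementedError

    def contrarian(prog: str) -> None:
        """Do the opposite of whatever halts() predicts the program would do on itself."""
        if halts(prog, prog):
            while True:        # predicted to halt, so loop forever
                pass
        return                 # predicted to loop forever, so halt immediately

    # Feeding contrarian its own source code yields a contradiction either way: if halts()
    # says it halts, it loops; if halts() says it loops, it halts. Hence no halts() exists.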

In 1971, Stephen Cook asked a follow-up question to Turing’s: If a statement is guaranteed to have a checkable proof, can the proof be found efficiently? This motivated Cook’s discovery of NP-completeness and the P vs. NP problem. “At its heart, this is about the difference between checking a proof versus finding one,” said Yuen. “I think many of us have this gut feeling that there is a huge difference — it’s much easier to verify a solution to a problem than to find the solution yourself.”
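
A toy illustration of that gap, using Boolean satisfiability (the little formula below is made up for illustration): verifying a proposed assignment takes one quick pass, while finding one by brute force can mean trying all 2^n assignments:

    from itertools import product

    # A made-up formula in conjunctive normal form; each clause lists (variable, negated?) pairs.
    # Clause 1: x0 OR NOT x1; Clause 2: x1 OR x2; Clause 3: NOT x0 OR NOT x2.
    formula = [[(0, False), (1, True)], [(1, False), (2, False)], [(0, True), (2, True)]]

    def verify(assignment):
        """Checking a proposed solution: one quick pass over the formula."""
        return all(any(assignment[v] != neg for v, neg in clause) for clause in formula)

    def find(num_vars):
        """Finding a solution: brute force over all 2**num_vars assignments in the worst case."""
        for bits in product([False, True], repeat=num_vars):
            if verify(bits):
                return bits
        return None

    print(verify((True, True, False)))    # fast check of a proposed solution: True
    print(find(3))                        # exhaustive search: (False, False, True)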

In 1972, Richard Karp (who would later become the founding director of the Simons Institute), in his famous paper about 21 NP-complete problems, showed that Cook’s question about proof checking versus proof finding is a much bigger phenomenon than just mathematical proofs; NP-completeness and the P vs. NP question pop up everywhere in computing. And in due course, a dazzling array of new ideas, concepts, and models ensued.

“People kept asking, Well, what is a proof? What does it mean?” said Yuen. Traditionally, a proof was thought of as a piece of static text that is sequentially checked using formal deterministic rules — the types of statements that can be verified with this traditional notion of a proof are the solutions to the problems in the class NP.

“But starting in the 1980s, computer scientists began boldly reconsidering what a proof could be,” said Yuen. It could be interactive; it could be zero-knowledge; it could be probabilistically checkable; it could be quantum; it could be economically rational. And it could be a game, essentially.

The notion of an interactive proof was co-invented by Simons Institute Director Shafi Goldwasser, along with Silvio Micali and Charlie Rackoff — with the motivation deriving from our common experience that it is much easier to learn something complicated if you interactively talk to a teacher than try to learn by reading a book or a paper alone.

In one model of an interactive proof, there is a verifier who wants to determine if statement X is true. A static proof might be too long to check. But by interrogating someone who is all-powerful and all-knowing — the prover — it is possible for the verifier to learn whether statement X is true or not: if statement X is indeed true, then the prover can convince the verifier of this fact in polynomial time and with high probability.
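
One standard textbook illustration of this model (not an example from the lecture) is an interactive proof that two graphs are not isomorphic; the verifier’s private randomness, combined with an all-powerful prover — simulated here by brute force — does all the work:

    import random
    from itertools import permutations

    def relabel(edges, perm):
        """Apply a vertex relabeling to an edge list."""
        return frozenset(frozenset((perm[u], perm[v])) for u, v in edges)

    def came_from(g, challenge, n):
        """Unbounded-prover subroutine: is the challenge some relabeling of graph g?"""
        return any(relabel(g, perm) == challenge for perm in permutations(range(n)))

    def one_round(g0, g1, n):
        secret = random.randrange(2)                              # verifier's private coin flip
        challenge = relabel((g0, g1)[secret], random.sample(range(n), n))
        guess = 0 if came_from(g0, challenge, n) else 1           # the prover's answer
        return guess == secret                                    # verifier checks the answer

    # If g0 and g1 are not isomorphic, the prover answers correctly every round; if they are
    # isomorphic, the challenge reveals nothing about the coin, so a cheating prover succeeds
    # only half the time per round, and repeated rounds expose the false claim.
    path = [(0, 1), (1, 2), (2, 3)]    # a path on 4 vertices
    star = [(0, 1), (0, 2), (0, 3)]    # a star on 4 vertices, not isomorphic to the path
    print(all(one_round(path, star, 4) for _ in range(20)))       # True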

One of the most important results in complexity theory, Yuen said, is the IP = PSPACE theorem — IP stands for interactive proof, and PSPACE is the class of problems that are solvable in polynomial space, including NP-complete problems as well as problems believed to be much harder.

“The theorem demonstrates that interactivity is a powerful resource for verification,” said Yuen.

In another model of interactive proof, the verifier can interrogate multiple provers — say, two provers — who are all-powerful. There is a constraint between them: they can’t communicate. But the verifier can cross-interrogate and play the provers against each other. This gives the verifier extra leverage against false claims.

And a similarly important theorem demonstrated that the class MIP — the set of problems that can be solved in the multiprover proof model — is equal to the class NEXP, the problems that can be solved in nondeterministic exponential time (equivalently, those that admit exponentially long proofs verifiable in exponential time).

“In other words,” said Yuen, “you can verify in polynomial time solutions to exponentially large NP problems.”

In the 1990s, these two results, IP = PSPACE and MIP = NEXP, led to what Yuen deemed perhaps one of the most wonderful results in computer science: the probabilistically checkable proofs theorem, or the PCP theorem.

“It states that if someone discovers a superlong mathematical proof — say for the Riemann hypothesis or the ABC conjecture — in principle you can ask them to format this proof in such a way that you can verify its correctness by looking at only three random locations in the proof,” he said.
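
The full PCP machinery is far beyond a sketch, but one of its classic ingredients, the Blum-Luby-Rubinfeld linearity test, already shows the flavor: a huge table of values can be spot-checked by reading just three random locations per trial. A minimal, simplified sketch (illustration only, not the construction in the theorem itself):

    import random

    N_BITS = 8    # the table f maps 8-bit strings (written as integers) to {0, 1}

    def linearity_test(f, trials=200):
        """Accept if f looks linear over GF(2); each trial reads f at just three points."""
        for _ in range(trials):
            x = random.randrange(2 ** N_BITS)
            y = random.randrange(2 ** N_BITS)
            if f(x) ^ f(y) != f(x ^ y):     # the three queries: f(x), f(y), f(x XOR y)
                return False                # caught an inconsistency
        return True                         # f is probably close to some linear function

    parity = lambda x: bin(x).count("1") % 2          # a genuinely linear function
    corrupted = lambda x: parity(x) ^ (x % 3 == 0)    # wrong on roughly a third of inputs

    print(linearity_test(parity))       # True
    print(linearity_test(corrupted))    # almost certainly False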

Having covered the sweeping historical background, Yuen brought these three intellectual tales together with an overview of the MIP* = RE result.

In 2004, computer scientists realized that Bell’s thought experiment looks awfully like an interactive proof.

“Your first reaction should be to start rubbing your chin and wonder if there’s a deeper connection between these two settings,” said Yuen.

As a result, CHSH games began to be thought of in the context of interactive proofs, motivating a model of quantum interactive provers, the class MIP* — that is, problems with solutions verifiable by interactive proofs with entangled provers.

This in turn led to a crucial question: What is the complexity of MIP* — in other words, how powerful is the model of quantum interactive provers?

And a succession of queries thereafter: Is there a general algorithm to compute the success probability even approximately? How many qubits do Alice and Bob need for optimal play in this game? 

In fact, puzzlingly, it seemed impossible to come up with even an inefficient algorithm to approximate this optimal winning probability.

And about 10 years ago it became apparent that this question connects — due to something of a domino effect of results in different fields — to the Connes embedding conjecture and that this conjecture in turn implies the existence of an algorithm. The algorithm consists of two procedures: the so-called “search from below” procedure and the “search from above” procedure. Run in parallel, together these procedures provide a candidate algorithm — if the Connes embedding conjecture is true.
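
Schematically, and with placeholder helper functions standing in for the real machinery (finite_dim_best_value for optimizing over entangled strategies of a fixed dimension, sdp_upper_bound for a level of a semidefinite-programming relaxation — both names are assumptions for this sketch), the candidate algorithm would look like this:

    def approximate_game_value(game, epsilon, finite_dim_best_value, sdp_upper_bound):
        """Run the two semi-procedures in parallel until their bounds meet (if they ever do)."""
        lower, upper, level = 0.0, 1.0, 1
        while upper - lower > epsilon:
            # "Search from below": better and better finite-dimensional entangled strategies
            # give increasing lower bounds on the optimal winning probability.
            lower = max(lower, finite_dim_best_value(game, dim=level))
            # "Search from above": tighter and tighter relaxations give decreasing upper bounds.
            upper = min(upper, sdp_upper_bound(game, level=level))
            level += 1
        return (lower + upper) / 2

    # If the Connes embedding conjecture were true, the two sequences would squeeze together.
    # MIP* = RE shows that for some games this loop can never terminate.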

“So, given this, you might just say, Let’s try to prove the Connes embedding conjecture, and then we have an algorithm,” said Yuen.

The MIP* = RE result, however, shows that there is a computable reduction from Turing machines to nonlocal games: an algorithm that could even approximately compute the optimal winning probability of such games would also decide the halting problem. Since Turing showed that no algorithm can do that, no such approximation algorithm exists, and the Connes embedding conjecture must be false.

Yuen very briefly described the proof, “from a thousand miles away”:

“Recall that the PCP theorem says that proofs of mathematical statements X can be written in a way that are checkable by examining only a constant number of random locations in a proof,” he said.

“The idea behind MIP* = RE is to recursively iterate the PCP theorem an arbitrary number of times,” he said.

This result is something of a tongue twister:

“In the proof of MIP* = RE, the idea is to combine quantum entanglement with the PCP theorem so that one can quickly verify that an even more complex verifier has verified an even more complex verifier that has … and so on,” said Yuen.

“This is impossible classically, because the first verifier runs out of random bits,” said Yuen.

This is where quantum mechanics comes in: games like CHSH can be used to certify the generation of fresh random bits, which can then feed the recursive verification procedure.

In the world of operator algebras, the proof implies that the type II von Neumann factors can’t, in general, be approximated by finite-dimensional matrices; they are irreducibly complex objects.

From a mathematical physics point of view, the proof suggests that there exists, in principle, an experiment to verify whether nature is infinite-dimensional. “It doesn’t spell out how to perform this experiment,” said Yuen. “But it says it is not a priori impossible.”

And interpreted through the lens of complexity theory, this result indicates that there is an efficient interactive proof for the halting problem — “Which is kind of wild,” said Yuen, who concluded his account with a parable:

“In this parable, there’s a group of blind men who encounter an elephant for the very first time,” he said. “The first blind man, who has his hand on the elephant’s side, says that it’s like an enormous wall. The second blind man, who’s wrapping his arms around one of the elephant’s legs, exclaims that surely it must be a gigantic tree. The third blind man, feeling the elephant’s tail, thinks that it’s a giant rope. These blind men argue with each other and they vehemently disagree, but after a while they come to realize that while each person is partially correct, they were only seeing local views of something much more complicated and interesting.”

In Yuen’s view, there is something similar going on with MIP* = RE. Researchers in the fields of quantum mechanics, operator algebras, and computer science have been exploring a giant, mysterious elephant, calling what they found variously the Connes embedding conjecture, the complexity of quantum multiprover interactive proofs, or Tsirelson’s problem.2

“MIP* = RE is showing us that all of these views are connected by the outlines of this mysterious elephant,” said Yuen, “and that there is certainly something much more waiting to be seen.”

Karp, the former director of the Simons Institute, watching from home in San Francisco, deemed Yuen’s lecture “masterful” and the result itself “remarkable.”

“He showed how ideas from computational complexity theory, quantum computation, and operator algebras can be combined to establish the immense power of multiprover quantum computers with entangled initial states,” said Karp.

Shooting for the moon

The result, of course, was a robustly collaborative and time-consuming effort — just as the intellectual themes converged over history, MIP* = RE coalesced through a fortuitous constellation of people and ideas.

During that seemingly dead-end open problems session in 2014, there had been some exciting suggestions from Joseph Fitzsimons, visiting from the Singapore University of Technology and Design. He and Vidick investigated during the spring term and followed up over Skype. In 2015, they published the paper “A Multiprover Interactive Proof System for the Local Hamiltonian Problem.”

In 2018, they published another piece of the puzzle, “Quantum proof systems for iterated exponential time, and beyond,” with Zhengfeng Ji, now with the University of Technology Sydney, and Yuen.

And running roughly parallel in 2018, Anand Natarajan, now at Caltech, and John Wright, now at Caltech and UT Austin, were working on a paper about NEEXP — building on an idea Yuen had devised about “introspection”: a way to use self-testing to prove a version of compressing games/protocols.3

“Like a lot of research projects, ours began as a series of embarrassing misadventures,” recalled Wright. “For example, by the end of the second day, we mistakenly thought that we had already proven MIP* = RE! Then on the third day, we realized we were totally wrong and had to start over from scratch.”

Once they had the “NEEXP in MIP*” result, MIP* = RE presented itself as a potential sequel of sorts — the former result naturally generated a question as to whether the construction could be “iterated.”

In January 2019, Ji visited Caltech, and the basic ideas crystallized with Vidick and Natarajan. Wright joined the project shortly after. And then Vidick sent an invitation to Yuen, who had expressed the same ideas about iterating. “Given the technical difficulty of the RE project, we all felt like we wanted him on board,” said Vidick.

“The five of us are geographically pretty far apart, so the few occasions when we were together in the same place are pretty memorable,” said Natarajan. “One such instance was at the QIP conference in Boulder in January 2019 — I remember trying to work out ideas in a little notebook with John and Zhengfeng during the rump session, while gorging on tortilla chips.” 

By May 2019, they had an outline. They worked nonstop, and by late summer they were almost done. It was a lot of technical work. The main challenge was to come up with the right representation for verifiers and their distributions of questions. This required ingenuity, but there was no big eureka moment.

“For such a ‘major’ result, you might imagine that there needs to be a sudden deep insight, but I don’t think there was,” said Vidick. “It’s more representative of how science is truly done: by slowly building understanding one step at a time, over multiple years and intermediate results.”

“A few years back, no one could have imagined that the problem could be resolved via this approach,” said Ji. “It is also intriguing to see that all the different techniques we used, including quantum soundness of low-degree tests, the rigidity of nonlocal games, introspection, and compression of interactive proofs, can work together in harmony.”

For Wright, the cherry on top was that in addition to the MIP* = RE proof, they also got a disproof of the Connes embedding conjecture from the seemingly unrelated area of operator theory. “There’s a Star Trek episode where Geordi’s VISOR, which was built to help him see, turns out to contain technology that can save some planet from a giant meteor strike,” said Wright. “This kind of reminds me of that episode: tools developed in one area are finding application in another. My hope is that this will become a continued fruitful exchange between these two areas.”

Most surprising, for Vidick, was simply the result itself.

“I still don’t understand the result,” he said. “It’s amazing that you can iterate this process.” 

And it was an amazing place to land after believing he’d hit a dead end.

Looking back on that state of mind, Vidick recalled that it didn’t seem reasonable to place a “quantum” complexity class inside a model that seemed all but classical.

“The entanglement between the provers allows them to cheat. How can the verifier leverage that to their advantage, when they have such bounded power? I never thought you’d be able to test for such complex quantum behavior using so limited means. I was also strongly influenced by the classical result MIP = NEXP and earlier work with Tsuyoshi Ito showing that NEXP is in MIP*.”

“In retrospect, my imagination was too limited,” he said. “I’m that kind of researcher. I can work hard, but I don’t shoot for the moon, unless someone else suggests it. Then I’m game.”

  1. The authors are Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen.
  2. Tsirelson’s problem was a question raised by the late Boris Tsirelson in the early 2000s about whether two mathematical models of quantum correlations were the same. It was realized around 2010 by people working at the intersection of mathematics and quantum information theory that resolving Tsirelson’s problem is equivalent to resolving the Connes embedding conjecture. This established a bridge between this famous problem in operator algebras and a very natural problem in quantum information theory. The way MIP* = RE resolves the Connes embedding conjecture is by first solving Tsirelson’s problem and then using this existing bridge to go the rest of the way.
  3. NEEXP is the complexity class of problems that can be solved in nondeterministic doubly exponential time. Just as NEXP (mentioned earlier) is the class of exponential-sized (2^n) NP problems, NEEXP is the astronomically bigger class of doubly exponential-sized (2^(2^n)) NP problems. Its relevance to the story is that Natarajan and Wright showed a precursor to MIP* = RE by figuring out that MIP* contains NEEXP, which then paved the way to showing that MIP* = RE. (A cartoonish explanation of how MIP* = RE is proved is that one shows MIP* contains NEXP, NEEXP, NEEEXP, NEEEE ….. EEEEXP for any number of E’s.)
