Research Vignette: Lower Bounds in Computational Complexity

by Rahul Santhanam (University of Oxford)

Computational complexity theory studies the possibilities and limitations of algorithms. Over the past several decades, we have learned a lot about the possibilities of algorithms. We live today in an algorithmic world, where the ability to reliably and quickly process vast amounts of data is crucial. Along with this social and technological transformation, there have been significant theoretical advances, including the discovery of efficient algorithms for fundamental problems such as linear programming and primality.

We know far less about the limitations of algorithms. From an empirical point of view, certain important problems seem inherently hard to solve. These include the satisfiability problem in logic, the Traveling Salesman Problem in graph theory, the integer linear programming problem in optimization, the equilibrium computation problem in game theory, and the protein folding problem in computational biology. What all these problems have in common is that it is easy to verify if a given solution is correct, but it seems hard to compute solutions. This phenomenon is encapsulated in the celebrated NP vs. P question, which asks if all problems with solutions verifiable in polynomial time (NP) can also be solved in polynomial time (P). The NP vs. P problem is important both mathematically, as evidenced by its inclusion in the list of Millennium Prize Problems by the Clay Mathematics Institute, and scientifically, since natural problems that arise in a variety of scientific contexts are in NP but not known to be in P. To make progress on NP vs. P and related questions, we need to show complexity lower bounds (i.e., prove that a given computational problem cannot be solved efficiently).
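To make the verify-versus-solve contrast concrete, here is a small illustrative Python sketch (my own example, not code from the article) for the satisfiability problem: checking a proposed assignment takes time polynomial in the size of the formula, while the naive solver below tries all 2^n assignments.

```python
from itertools import product

# A CNF formula is a list of clauses; each clause is a list of nonzero
# integers, where i stands for variable x_i and -i for its negation.
# Example: (x1 OR NOT x2) AND (x2 OR x3) is written [[1, -2], [2, 3]].

def verify(formula, assignment):
    """Check a proposed assignment: time polynomial in the formula size."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in formula
    )

def solve_brute_force(formula, num_vars):
    """Search for a satisfying assignment by trying all 2^n candidates."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if verify(formula, assignment):
            return assignment
    return None  # no satisfying assignment exists

if __name__ == "__main__":
    formula = [[1, -2], [2, 3], [-1, -3]]
    print(solve_brute_force(formula, num_vars=3))
```

The verifier runs through the clauses once, but no method essentially faster than the exhaustive search above is known in general; the NP vs. P question asks whether this gap is inherent.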

Complexity lower bounds are interesting for many reasons. First, from a pragmatic point of view, they map the boundaries of what is possible and, hence, save us from wasting our efforts on finding efficient solutions to problems beyond those boundaries. Second, lower bounds can be exploited algorithmically to design secure cryptographic protocols and efficiently convert randomized algorithms into deterministic ones. Thus, intriguingly, proving new limits on the power of computation also opens up new possibilities for computation! Third, and perhaps most importantly, lower bounds provide a deeper understanding of the nature of computation. An efficient solution to an algorithmic problem tells us something new about that specific problem, while a lower bound often tells us something new about the computational model and hence about algorithms in general.

The Fall 2018 Simons Institute program on Lower Bounds in Computational Complexity gathered together researchers working on different aspects of complexity lower bounds, in Boolean, algebraic, and interactive settings, with the goal of making progress on the major open problems concerning lower bounds. In some of these cases, such as the setting of interactive computation, good lower bounds are known for various problems. In other cases, such as the case of general Boolean circuits, very little is known. In the remainder of this article, I will briefly survey what is known about Boolean circuit lower bounds and describe a phenomenon called hardness magnification that provides some new insights. This line of work was developed partly during the course of the Lower Bounds program with my collaborators Igor Oliveira and Ján Pich.

The Current State of Circuit Lower Bounds
The standard approach to the NP vs. P question is via the analysis of Boolean circuits, which are networks of logic gates computing finite Boolean functions. It is well known that any problem solvable in polynomial time can be computed by a sequence of Boolean circuits of polynomial size (as a function of the input length). Thus, to separate NP from P, it suffices to show a superpolynomial circuit lower bound for some NP problem.
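In the standard notation, where P/poly denotes the class of problems decidable by polynomial-size circuit families, this reasoning can be summarized as follows (a textbook fact, included here only for reference):

```latex
% P is contained in P/poly, so a superpolynomial circuit lower bound
% for any single NP problem would separate NP from P.
\[
  \mathsf{P} \subseteq \mathsf{P/poly},
  \qquad\text{and hence}\qquad
  \mathsf{NP} \not\subseteq \mathsf{P/poly}
  \;\Longrightarrow\;
  \mathsf{NP} \neq \mathsf{P}.
\]
```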

This question seems very hard for general Boolean circuits. To make progress, complexity theorists have considered restricted circuit models and attempted to show lower bounds for such models. In the 1980s, there was a series of successes in this direction, with exponential-size lower bounds shown for various problems in restricted circuit models, including the class AC0 of constant-depth circuits, the class AC0[p] of constant-depth circuits with prime modular gates, and the class of monotone Boolean circuits. This inspired the hope that lower bounds for general Boolean circuits were within reach.

However, despite much effort, such lower bounds have remained elusive. We can’t even rule out the possibility that every problem in NP can be solved with circuits of size 6n (where n is the input length), even though we believe that exponentially large circuits are required! Indeed, a picture has emerged where there is a clear distinction between weak and strong models. For weak models such as AC0, AC0[p], and monotone circuits, very good lower bounds are known, and it appears that we understand these models well. For strong models such as Boolean circuits or Boolean formulas, the lower bounds known are very poor. Our understanding seems primitive, and there have been no clear signs of progress in the past couple of decades.

This picture of weak and strong models is reinforced by the celebrated “natural proofs” barrier of Razborov and Rudich. In an attempt to understand the lack of progress in circuit lower bounds following the breakthroughs of the 1980s, they analyzed the techniques used to prove circuit lower bounds. On the one hand, they found that virtually all known lower bound proofs for a circuit model can be made “natural” in the sense that they provide an efficient procedure for distinguishing between random functions and functions of low circuit complexity in the model. On the other hand, they showed that under standard cryptographic assumptions, namely the existence of one-way functions, there are no natural proofs showing lower bounds for general Boolean circuits. Thus weak circuit models such as AC0 and AC0[p] admit natural proofs, while strong models such as Boolean formulas and general Boolean circuits do not admit natural proofs under standard hardness assumptions.
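For reference, here is an informal paraphrase of the natural proofs framework; the precise quantitative parameters vary across presentations, so this should be read as a sketch rather than as the exact definition from the Razborov-Rudich paper.

```latex
% Informal paraphrase of natural properties (parameters simplified).
A property $\Phi$ of $n$-variable Boolean functions is a \emph{natural
property useful against} a circuit class $\mathcal{C}$ if it is
\begin{itemize}
  \item \textbf{constructive}: given the $2^n$-bit truth table of $f$,
        whether $\Phi(f)$ holds can be decided in time $2^{O(n)}$;
  \item \textbf{large}: $\Phi$ holds for at least a $2^{-O(n)}$ fraction
        of all $n$-variable Boolean functions;
  \item \textbf{useful}: every $f$ satisfying $\Phi$ requires
        superpolynomial size in $\mathcal{C}$.
\end{itemize}
```

The Razborov-Rudich theorem then says, roughly, that if sufficiently strong one-way functions (and hence pseudorandom functions) exist, there is no natural property useful against general polynomial-size Boolean circuits.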

A Magnification Phenomenon
In my paper with Oliveira that was presented at FOCS during the Lower Bounds program, we showed a hardness magnification phenomenon that is surprisingly widespread. For many natural problems, moderate lower bounds for weak models or slightly nontrivial lower bounds for strong models imply excellent lower bounds for strong models. This challenges and complicates the conventional picture of weak versus strong circuit models.

Magnification is most easily explained with an example. Consider the vertex cover (VC) problem, where we are given a graph G on n vertices and a parameter k and asked whether G has a vertex cover of size at most k, i.e., a set of at most k vertices touching every edge. There is a strong belief in the complexity community that VC does not have polynomial-size formulas even when the parameter k is restricted to be polylogarithmic in n. Indeed, this is necessarily the case if exponential formula lower bounds hold for NP. Oliveira and I noticed that in this parameter regime, even slightly superlinear lower bounds for constant-depth circuits solving the problem would imply that NP does not have polynomial-size Boolean formulas! The constant-depth circuit model is believed to be very well understood, and exponential lower bounds are known even for simple functions such as the parity function. However, it is unclear how to use the known techniques to show the desired lower bounds for the VC problem.
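Stated a little more formally, the implication described above can be paraphrased as follows; here the notation VC_{n,k} for the parameterized problem and the reading of "slightly superlinear" as n^{1+\varepsilon} are my own choices for concreteness, and the exact bounds in the paper may differ.

```latex
% Informal paraphrase of the magnification statement for vertex cover.
\[
  \exists\, \varepsilon > 0:\;
  \mathsf{VC}_{n,k} \text{ with } k = \mathrm{polylog}(n)
  \text{ has no constant-depth } (\mathsf{AC}^0) \text{ circuits of size } n^{1+\varepsilon}
  \;\Longrightarrow\;
  \mathsf{NP} \text{ has no polynomial-size Boolean formulas}.
\]
```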

A similar phenomenon connecting weak and strong circuit lower bounds holds for several other fundamental problems, including satisfiability and compression problems such as variants of the Minimum Circuit Size Problem (MCSP). There has been a lot of recent work on MCSP, and there was a reading group devoted entirely to the problem during the Lower Bounds program. In my original paper with Oliveira, we could only show magnification for average-case versions of MCSP, but in follow-up work with Oliveira and Pich during the program, we gave magnification results for the standard version of the problem. In this follow-up work, we also used magnification to partly explain currently known lower bounds for intermediate circuit models such as formulas and branching programs.
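For readers who have not encountered it, MCSP can be stated as follows (a standard formulation; details such as the encoding of the size parameter vary across papers):

```latex
% The Minimum Circuit Size Problem, stated as a decision problem.
\[
  \mathsf{MCSP} \;=\; \bigl\{\, (T_f, s) \;:\;
  T_f \in \{0,1\}^{2^n} \text{ is the truth table of an $n$-variable
  Boolean function $f$ that has a circuit of size at most } s \,\bigr\}.
\]
```

Note that MCSP is in NP, since a candidate circuit of size at most s can be checked against the truth table directly.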

Magnification indicates that the gap between weak and strong circuit models is sometimes illusory. When analyzing circuit models, we cannot leave the computational problem out of the picture — our current lower bound techniques work well for some problems that seem fairly easy and fail to work for others that seem harder. We need to revise our intuitions and consider afresh the question of whether there are fundamental difficulties in proving circuit lower bounds.

Several directions remain to be explored. For which computational problems and circuit models does magnification occur? Can magnification be used to prove new circuit lower bounds? Is there something distinctive about MCSP that can be exploited in proving lower bounds? Does magnification indicate that there are further barriers to proving lower bounds, beyond the natural proofs barrier of Razborov and Rudich, and if so, what is the best way to formulate them? Whatever the answers to these questions, they seem to promise at least some progress in our understanding of circuit lower bounds.
