Theory at the Institute and Beyond, September 2021

by Prasad Raghavendra (Simons Institute)

Being in one of the talks in the Simons Institute auditorium, witnessing live and lively interaction with the speaker, feels like the closest thing to normal since the start of the pandemic. There is a tangible sense of joy among the participants just to be sharing the same physical space, let alone the fantastic environs of the Institute. The renewed energy is all there to witness in this semester's programs on Computational Complexity of Statistical Inference (CCSI) and Geometric Methods in Optimization and Sampling (GMOS), both of which are now in full swing. Although masking is maintained, it doesn't seem to change the quintessential Simons program experience even a little bit. I am referring, of course, to the constant feeling of missing out on all the incredibly interesting activities going on, much of which one simply cannot fit into one's schedule.

At least some of the palpable energy can be attributed to over 40 postdocs and research fellows who have arrived at the Institute this semester, many of whom will stay on for a year or two. This extraordinary group of young researchers covers the whole gamut of topics, ranging from cryptography, quantum computing, and fairness to machine learning, data structures, algorithms, and complexity theory. Each of these postdocs and fellows gave a 10-minute presentation at the “Meet the Fellows” welcome event that the Institute held on September 8 and 9. Check out their talks for glimpses of the cutting edge in all these subfields of theory.

An advance in algebraic circuit complexity
This time around, there is some good news from the front lines of circuit complexity, one of the most challenging arenas within theoretical computer science.

An algebraic circuit consists of gates, each of which carries out either addition or multiplication over some field, say, the real numbers. The depth of the circuit is the length of the longest path from the output to one of its inputs. Naturally, an algebraic circuit computes a polynomial in its inputs.
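
For a concrete example, the polynomial \(x_1 x_2 + x_3 x_4\) is computed by a depth-two circuit of size three: two multiplication gates, computing \(x_1 \cdot x_2\) and \(x_3 \cdot x_4\), feed a single addition gate at the output,

\[ f(x_1, x_2, x_3, x_4) \;=\; (x_1 \cdot x_2) + (x_3 \cdot x_4). \]

The size (number of gates) and the depth of such circuits are precisely the resources at stake in the lower bounds below.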

In the world of Boolean circuits with AND/OR/NOT gates, lower bounds against constant-depth circuits, aka \(AC^0\) circuit lower bounds, have been known since the 1980s and are among the most influential results in complexity theory. For general algebraic circuits over a large field (say, the reals), even superpolynomial lower bounds for depth-three circuits had remained elusive. In a remarkable paper, Nutan Limaye, Srikanth Srinivasan, and Sébastien Tavenas have obtained the first superpolynomial lower bounds against general algebraic circuits of all constant depths over fields of characteristic zero (say, the reals). Furthermore, the lower-bound result is shown for a simple polynomial known as “iterated matrix multiplication,” whose input consists of \(d\) matrices \(X_1,\ldots,X_d\) of dimension \(n \times n\), and the goal is to compute a fixed entry of their product \(X_1 \cdot X_2 \cdots X_d\). The same work also obtains a depth hierarchy theorem for algebraic circuits, showing that for every depth \(D\), there is an explicit polynomial that can be computed by a depth-\(D\) circuit of size \(s\) but requires circuits of size superpolynomial in \(s\) if the depth is \(D-1\).
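
Concretely, taking the output to be, say, the \((1,1)\) entry of the product, this is the polynomial

\[ \mathrm{IMM}_{n,d}(X_1,\ldots,X_d) \;=\; (X_1 X_2 \cdots X_d)_{1,1} \;=\; \sum_{i_1,\ldots,i_{d-1} \in [n]} (X_1)_{1,i_1}\,(X_2)_{i_1,i_2}\cdots (X_d)_{i_{d-1},1}, \]

a sum of \(n^{d-1}\) degree-\(d\) monomials in the \(dn^2\) matrix entries (the notation \(\mathrm{IMM}_{n,d}\) is the one commonly used in the literature).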

Remarkable work of Matthew Brennan
The theory community suffered a terrible loss this year with the tragic and untimely passing of one of our rising stars, Matthew Brennan. While still a graduate student at MIT, Matthew almost single-handedly pushed forward an ambitious research program at the intersection of computational complexity and statistics. Here we will try to give a glimpse of Matthew’s extensive body of research.

To set the context, traditionally the field of computational complexity has been concerned with characterizing how much memory or computational power is needed, while statistics sheds light on how much data are needed. The interplay between computational and statistical resources and their underlying trade-offs is only recently coming into focus.

Many statistical estimation tasks exhibit information-computation gaps wherein the accuracy obtained by efficient algorithms is strictly worse than the estimate that one could in principle extract from the data if there were no computational limitations. In other problems, there is an apparent trade-off between the number of samples needed for an estimation task and the computational power of the estimation algorithm.

Identifying and rigorously establishing the presence of these information-computation trade-offs is a fundamental challenge that has drawn much interest in theoretical computer science and statistics. To rigorously establish an information-computation gap, one would need to prove lower bounds against efficient algorithms for statistical estimation problems.

From here on, there are two routes that one could take. First, one can prove limitations for specific classes of algorithms like MCMC, spectral methods, semidefinite programming, statistical queries, and so on. This approach has been highly successful, yielding precise lower bounds in many of these models and uncovering numerous relations between them.

Alternatively, one can take the grander route: assume the intractability of a well-studied computational problem and use reductions to establish hardness of the statistical estimation problem at hand. In a seminal work, Berthet and Rigollet showed a reduction from the planted clique problem to sparse principal component analysis (sparse PCA), thereby establishing the presence of an information-computation gap for a version of sparse PCA.

In its most canonical setup, the input to the sparse PCA problem consists of samples from a \(d\)-dimensional Gaussian with covariance \(\Sigma = I_d + \theta vv^T\), where \(v\) is a \(k\)-sparse unit vector and \(\theta\) denotes the signal strength. A simple hypothesis testing variant of the problem asks one to distinguish \(n\) samples drawn from this spiked-covariance Gaussian from \(n\) samples drawn from the standard Gaussian distribution.
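
To make the setup concrete, here is a minimal sketch in Python (assuming NumPy; the parameter values and the simple diagonal-thresholding-style test statistic are illustrative choices, not taken from the works discussed here) that samples from the two distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: dimension, number of samples, sparsity, signal strength.
d, n, k, theta = 200, 500, 10, 2.0

# A k-sparse unit vector v with randomly chosen support and signs.
v = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
v[support] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)

# Null hypothesis: n samples from N(0, I_d).
null_samples = rng.multivariate_normal(np.zeros(d), np.eye(d), size=n)

# Planted (spiked) hypothesis: n samples from N(0, I_d + theta * v v^T).
spiked_cov = np.eye(d) + theta * np.outer(v, v)
planted_samples = rng.multivariate_normal(np.zeros(d), spiked_cov, size=n)

def sparse_variance_statistic(samples, k):
    """A simple statistic in the spirit of diagonal thresholding: restrict the
    empirical covariance to the k coordinates of largest variance and return
    the top eigenvalue of that submatrix."""
    emp_cov = samples.T @ samples / samples.shape[0]
    top_coords = np.argsort(np.diag(emp_cov))[-k:]
    sub = emp_cov[np.ix_(top_coords, top_coords)]
    return np.linalg.eigvalsh(sub)[-1]

# With enough samples, the statistic is typically larger under the planted model.
print("null   :", sparse_variance_statistic(null_samples, k))
print("planted:", sparse_variance_statistic(planted_samples, k))
```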

Notice that statistical estimation problems like sparse PCA are most interesting when their inputs are drawn from some natural distribution like the Gaussian distribution. Therefore, a reduction from a hard problem A to a statistical estimation problem B will not only have to map instances of A to instances of B, but also have to produce the right distribution over instances of B. For example, ideally a reduction from planted clique to sparse PCA must map instances of planted clique (random graphs with cliques) to random vectors drawn from an appropriate Gaussian measure. How could a reduction ever do this, one might ask.

In fact, this is why one might decide not to take the grand route of using reductions to understand the complexity of statistical problems. While reductions have been exceptionally successful in establishing tight hardness results for NP-complete optimization problems, building a similar web of reductions for statistical problems like sparse PCA seemed hopeless to me. Not so for Matthew, who courageously took this challenge head-on and succeeded!

Despite efforts following the seminal work of Berthet and Rigollet, showing a tight hardness result for sparse PCA in its most canonical Gaussian setup seemed out of reach. In a series of works, Matthew and coauthors developed a set of simple reduction tools that when put together led to a tight characterization of the complexity of the sparse PCA problem in its most canonical setup, over the entire range of parameters!

More recently, Matthew and his advisor, Guy Bresler, proposed a generalized planted clique conjecture referred to as “secret leakage planted clique” that can serve as a starting point for a web of reductions encompassing a variety of problems such as tensor PCA, robust sparse mean estimation, and semirandom community recovery. Not only have these works developed a set of useful reduction techniques for statistical problems, but most importantly, they have demonstrated that reductions can indeed be harnessed toward tight hardness results for statistical problems. Matthew was a recipient of the best student paper award at the Conference on Learning Theory (COLT) on two separate occasions for this line of work.

Matthew’s work is one of the central themes of this semester’s program on Computational Complexity of Statistical Inference. His work has laid the foundations of a theory of reductions for statistical problems, something for us to build on over the coming years. While his ideas are ever present this semester, his presence at the program is sorely missed.
