Blockchains and decentralized ledgers are creating a new reality for modern society. A major scientific impact of this new reality is that it creates an interdisciplinary arena where traditionally independent research areas, such as cryptography and game theory/economics, need to work together to address the relevant questions. The Simons Institute’s Fall 2019 program on Proofs, Consensus, and Decentralizing Society fostered much-needed cross-disciplinary collaboration by bringing together researchers from these disciplines.
An interdisciplinary research arena
The study of the interplay of cryptography and game theory/economics within the blockchain consists of several subareas. Here we discuss three such areas, with a focus on Bitcoin, one of the most widely studied and adopted blockchains and cryptocurrencies to date. We stress that the relevant literature is vast and that discussing it all here is beyond the scope of this research vignette.
Cryptographic security and economic robustness. The works of Garay et al. [12] and Pass et al. [17] initiated the rigorous cryptographic study of the Bitcoin blockchain protocol and proved that under common assumptions about the underlying hash function and assuming the majority of the hashing power in the system is invested in executing the Bitcoin protocol (rather than attacking it), the protocol achieves a number of basic properties, such as common prefix (corresponding to the traditional notion of safety), chain growth (corresponding to liveness), and chain quality (ensuring that honest parties get to contribute blocks at an acceptable frequency). Follow-up work by Badertscher et al. [4] defined the functionality offered by the Bitcoin ledger and proved the protocol’s security under the above honest-majority assumption in a composable framework. This enables using the ledger functionality directly — without worrying about implementation details — within higher-level protocols. These initial works have ignited a number of follow-ups aiming to relax the underlying assumptions and/or tune the protocol abstraction to better capture reality.
Parallel to the cryptographic security of blockchain protocols, their economic robustness — i.e., their resilience to incentive-driven attacks/misbehavior — has been extensively investigated. A classical example here is the work of Eyal and Sirer [9], which abstracted the protocol execution as a simple mining game and demonstrated that by withholding mined blocks and strategically releasing them, attackers with favorable network conditions might be incentivized to deviate from the Bitcoin protocol, even when they control only a minority of the hashing power. Note that such deviations cannot affect the worst-case security guarantees established in the cryptographic security proofs, but they can push them to their limits. For instance, although a selfish miner cannot break common prefix and chain quality, they can temporarily create the longest-allowed forks and/or minimize the number of blocks contributed by honest miners to the worst-case value allowed by chain quality.
Economics on blockchains. Independently of the questions about their economic robustness, blockchains and their associated cryptocurrencies have created a new scientific playground for economists and game theorists to develop and test new theories and confirm old ones. Indicatively, Huberman et al. [15] investigated how an ideal abstraction of Bitcoin yields a new market design paradigm where market forces do not control the functionality of the underlying payment system. They showed how analyzing this paradigm can explain behavioral aspects of Bitcoin and pointed to interesting modifications that can affect the efficiency of the protocol itself. Similarly, Benigno et al. [5] demonstrated how under common economics assumptions, equipping a two-country economy with a global cryptocurrency can create market forces driving the nominal interest rates to be equal in the two countries and yielding a rate relation between the two national currencies; and Prat and Walter [18] proposed a model using the Bitcoin-U.S. dollar exchange rate to forecast the computing power of the Bitcoin network.
Blockchain-induced incentives on cryptographic protocols. A third type of blockchain-related problem where cryptography and game theory meet is the design of more efficient and resilient cryptographic protocols using incentives induced by blockchains and cryptocurrencies. The classical example here is fair multiparty computation (in short, fair MPC). In MPC, n parties wish to run a protocol to jointly compute a function on their private data. Fairness in this context requires that if a worst-case adversary — controlling and coordinating the (malicious) parties attacking the protocol — learns (any information on) the output, then the honest parties should also learn it. Cleve’s well-known impossibility result [8] mandates that if the adversary controls a majority of parties, then fairness is impossible. Intuitively, the reason is that as the protocol has to tolerate any adversarial coalition, the actual corrupted parties might be the ones to first jointly learn such information; once they do, they can stop playing, thereby preventing other parties from also learning it. In [2, 6], it was shown that using the Bitcoin blockchain as an automated escrow mechanism, one can enforce a version of fairness based on collaterals where either nobody learns any information or if someone does and prevents others from also learning it, then he loses his collateral to them. Assuming the adversary values his collateral higher than breaking fairness, this mechanism induces a fair evaluation. These results were subsequently extended to ensure robustness [16], i.e., ensure that the protocol will not fail and either it will fairly conclude or the adversary will lose his collateral.
A blockchain-based payment system uses a decentralized network of computers (miners) to verify transactions and maintain the ledger containing the transaction history. The blockchain design is governed by a computer protocol and does not require trust in any centralized organization. The system supports only transactions of its native coins, such as bitcoins, which have value because payment recipients have confidence in their future usefulness. The design of such a system was proposed in [11]. It is of interest to study the mechanism design of the blockchain payment system and related questions on transparency and fairness [1, 6, 7, 13].
This project is connected with what was explored during the Simons Institute’s Fall 2019 program on Proofs, Consensus, and Decentralizing Society, especially with the algorithms and economics questions that emerged from blockchain-based systems.
The blockchain payment system can be modeled as a two-sided market that intermediates between users and miners [5, 9]. The rise in Bitcoin transaction volume, coupled with the limited capacity of blocks that can be posted to the blockchain, increases transaction delays and motivates users to pay transaction fees for service priority. We want to understand how transaction fees influence users’ willingness to participate in the payment system. Our analysis suggests a simple modification of the design: adding a statistical learning-aided recommendation of transaction fees for users. In particular, we build confidence bands for the regression function \(f(\cdot)\) in the model below.
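One illustrative way to write such a model (with \(Y_i\) denoting the confirmation delay of transaction \(i\), \(X_i\) its fee, and \(Z_i\) the vector of control variables; the notation here is illustrative, not necessarily the exact specification used in the cited analysis) is
\[Y_i = f(X_i) + g(Z_i) + \varepsilon_i, \qquad \mathbb{E}[\varepsilon_i \mid X_i, Z_i] = 0.\]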
Here, the control variable includes the number of pending transactions, fees of pending transactions, the current infrastructure level, the rate of block addition to the ledger, and the history of random arrivals of transactions, among others [12]. The function \(f(\cdot)\) is the main nonparametric regression function that we want to infer, and the function \(g(\cdot)\) is a nuisance parameter. The dimension of the control vector can be large relative to the sample size. We develop a simple procedure to construct an honest confidence band for \(f(\cdot)\), a method that builds on a recently developed framework (e.g., [2, 3]). The confidence band covers the true \(f(\cdot)\) at the nominal level (say, the 95% level) uniformly over a large function space, given a large sample of data. The notion of honesty is closely related to the use of the worst-case criterion that is necessary for good finite-sample performance [10].
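In symbols, one standard way to formalize honesty at nominal level \(1-\alpha\) over a function class \(\mathcal{F}\) (illustrative notation: \(C_n(\cdot)\) denotes the band built from \(n\) observations) is
\[\liminf_{n \to \infty}\; \inf_{f \in \mathcal{F}}\; \mathbb{P}_f\big(f(x) \in C_n(x) \ \text{for all } x\big) \;\geq\; 1-\alpha.\]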
This statistical learning-aided recommendation could provide the relationship between the transaction fees and the predicted transaction delay time, together with confidence bands. This leads to several implications. First, the recommendation enables the transparency of the blockchain payment system mechanism. It allows a direct comparison with a traditional payment system operated by a centralized firm like Visa or PayPal. The firm can reverse transactions in case of error or fraud, a property that is lacking in the blockchain payment system. On the other hand, users benefit from the transparency since they can decide which payment system is better for processing a transaction and how to balance the trade-off between transaction fees and transaction delay. Second, the blockchain payment system enjoys the Vickrey-Clarke-Groves (VCG) property in that each user pays a transaction fee equal to the externality imposed on other users by delaying their transactions [4, 8, 9, 14]. The recommendation also adds a fairness guarantee: with high probability, the transaction fee is close to the actual value of service priority, and users avoid paying extra.
The research I embarked on was to find conditions under which the following anomaly is eliminated.
In synchronous systems whose processors use signatures and may behave in a Byzantine (malicious) manner, consensus can be reached as long as fewer than half of the processors are malicious. In contrast, when the system is asynchronous but eventually settles into synchronous behavior, consensus can be reached only under the more stringent condition that fewer than a third of the processors, rather than fewer than half, are malicious.
This anomaly, namely that synchronous systems and eventually synchronous systems do not have the same capability of reaching consensus, is exhibited only with the introduction of cryptographic signatures.
We resolve this anomaly by proposing that the definition of asynchrony, while practically pleasing, be substituted by a more elegant and mathematically appealing one: view asynchrony as the traditional synchronous system, but with the faulty behavior exhibited by different sets of processors from round to round, giving rise to the pair mobile/stationary rather than asynchronous/synchronous.
This shift raises two questions: one about the meaning of mobile Byzantine and the other about the meaning of possibly all signatures forged over the rounds.
We resolve the first question by attributing the faulty behavior solely to the communication subsystem, considering processors to always behave correctly. We resolve the second question by abstracting signatures as just a constraint on which forwarded messages can be forged and which cannot. (We leave open the question of implementing the constraint.)
With these resolutions, a stationary system with the signature constraint can reach consensus in the face of malicious faults as long as no more than half of the processors fail in a round. The notion of eventual stationarity is now the elegant notion that the mobility of the system freezes. The system was hot and is now cooled. This supports the observation that, mathematically, the correct pair to consider is mobile/stationary rather than asynchrony/synchrony.
One of the most fascinating things about cryptography is its deeply paradoxical nature: among its early successes is the ability for two strangers to meet, generate a shared secret key, and thereon communicate privately — all of these performed entirely within a crowd. Over the last several decades, we have witnessed ever more surprising cryptographic protocols, achieving increasingly paradoxical properties, ranging from zero knowledge to fully homomorphic encryption. This article will focus on one such paradox, a new technical approach that has been revealing surprising power, opening up avenues for hitherto unrealized cryptography. This is the unlikely friendship of two seemingly disparate mathematical structures and conjectured hard problems on these, namely, the hardness of finding short independent vectors in high-dimensional lattices and hardness assumptions on elliptic curves over finite fields. This topic is connected with what was explored during the Simons Institute’s Spring 2020 program on Lattices: Algorithms, Complexity, and Cryptography — the interface of lattices with other mathematical structures and new cryptography that can emerge from it.
Our assumptions
In more detail, our first assumption on high-dimensional lattices is the so-called learning with errors (LWE) assumption proposed by Oded Regev in 2005 [21]. LWE conjectures that it is hard to recover a secret \(\mathbf{s}\in \mathbb{Z}_q^n\) from a sequence of noisy linear equations on \(\mathbf{s}\). Concretely, given a large prime \(q\) and polynomially many pairs \((\mathbf{a}_i, \langle \mathbf{a}_i,\;\mathbf{s}\rangle + e_i)\), where \(\mathbf{a}_i \in \mathbb{Z}_q^n\) are random and \(e_i \in \mathbb{Z}_q\) are “small” perturbations chosen from some appropriate distribution, the LWE problem asks to find \(\mathbf{s}\). For our second assumption, we require three cyclic groups of large prime order (\(q\), say), \(\mathbb{G}_1\), \(\mathbb{G}_2\), and \(\mathbb{G}_T\). Let \(e:\mathbb{G}_1\times \mathbb{G}_2 \to \mathbb{G}_T\) be a nondegenerate bilinear map or “pairing,” and \(g_1\) and \(g_2\) be the generators of \(\mathbb{G}_1\) and \(\mathbb{G}_2\), respectively. Then, by the bilinearity of the pairing, we have that \(e (\;g_1^a, \; g_2^b \;) = e(\;g_1,\; g_2\;)^{ab}\) for any \(a, b \in \mathbb{Z}_q\). We require that the group operations in \(\mathbb{G}_1\), \(\mathbb{G}_2\), and \(\mathbb{G}_T\) as well as the bilinear map \(e\) can be efficiently computed. In terms of hardness, we will roughly assume that the only information the adversary can learn is via legitimate group and pairing computations on the elements she receives. This strong security guarantee can be formalized in the so-called generic group model [20, 22]. We may think of \(a\) as a secret “message” that is encoded in the exponent of a group element \(g_1^a\). Both LWE and bilinear map–based assumptions have been studied extensively, and we have significant confidence in their hardness. These are also two of the most versatile tools in cryptography and have been used to construct breakthrough applications such as identity-based encryption [5] and fully homomorphic encryption [14], among others. Traditionally, any given construction relied on one of these assumptions to the exclusion of the other, but recently we have been learning to combine them in novel non-black-box ways to yield surprising results. In this article, I will describe a recent construction of the primitive of broadcast encryption in a joint work with Shota Yamada [2] where we leveraged a serendipitous interplay of these structures to obtain optimal parameters.
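To make the search version of LWE concrete, here is a minimal Python sketch of how the samples are generated (toy, insecure parameters chosen only for illustration; the error distribution here is a crude stand-in for the appropriate one):

```python
import random

# Toy LWE parameters -- far too small to be secure; for illustration only.
n, q, num_samples, error_bound = 8, 3329, 16, 2

secret = [random.randrange(q) for _ in range(n)]           # s in Z_q^n

def lwe_sample():
    """Return one pair (a, <a, s> + e mod q) with a small error term e."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-error_bound, error_bound)
    b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
    return a, b

samples = [lwe_sample() for _ in range(num_samples)]
# The (search) LWE problem: given only `samples`, recover `secret`.
```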
This is the third and last entry in a series of posts about lattice block reduction. See here and here for the first and second parts, respectively. In this post I will assume you have read the other parts.
In the first two parts we looked at BKZ and Slide reduction, the former being the oldest and most useful in practice, while the latter achieves the best provable bounds and has the cleaner analysis. While BKZ is a natural generalization of LLL, we have seen that the analysis of LLL does not generalize well to BKZ. One can view Slide reduction as a different generalization of LLL with the goal of also naturally generalizing its analysis. As we mentioned in the first part, there is another analysis technique based on dynamical systems, introduced in [HPS11]. Unfortunately, as applied to BKZ, there are some cumbersome technicalities and the resulting bounds on the output quality are not as tight as we would like them to be (i.e. as for Slide reduction). One can view the algorithm we are considering today – SDBKZ [MW16] – as a generalization of LLL that lends itself much more easily to this dynamical systems analysis: it is simpler, cleaner, and yields better results. Since part of the goal of today’s post is to demonstrate this very useful analysis technique, SDBKZ is a natural candidate.
SDBKZ
Recall the two tools we’ve been relying on in the first two algorithms, SVP and DSVP reduction of projected subblocks:
Effect of a call to the DSVP oracle. GSO log norms of the input in black, of the output in blue. Note that the sum of the GSO log norms is a constant, so increasing the length of the last vector, decreases the (average of the) remaining vectors.
We will use both of them again today. Like BKZ, a tour of SDBKZ starts by calling the SVP oracle on successive blocks of our basis. However, when we reach the end of the basis, we will not decrease the size of the window, since this is actually quite inconvenient for the analysis. Instead, we will keep the size of the window constant but switch to DSVP reduction, i.e. at the end of the BKZ tour we DSVP reduce the last block. This will locally maximize the last GSO vector in the basis, just as the first SVP call locally minimized the first vector of the basis. Then we will move the window successively backwards, mirroring a BKZ tour, but using DSVP reduction, until we reach the beginning of the basis again. At this point, we switch back to SVP reduction and move the window forward, etc. So SDBKZ runs in forward and backward tours.
SDBKZ in one picture: apply the SVP oracle to the projected blocks from start to finish, and when you reach the end, apply the DSVP oracle to the projected blocks from finish to start. Repeat.
A nice observation here is that the backward tour can be viewed equivalently as: 1) compute the reversed dual basis (i.e. the dual basis with reversed columns), 2) run a forward tour, 3) compute the primal basis again. The first of these two steps is self-inverse: computing the reversed dual basis of the reversed dual basis yields the original primal basis. This means step 3) is actually the same as step 1). So in effect, one can view SDBKZ as simply repeating the following two steps: 1) run a forward tour, 2) compute the reversed dual basis. So it doesn’t matter if we use the primal or the dual basis as input, the operations of the algorithm are the same. This is why it is called Self-Dual BKZ.
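In code, the overall structure might look like the following sketch. Here svp_reduce and dsvp_reduce are assumed black-box oracles that (D)SVP-reduce the projected block \({\mathbf{B}}_{[i,j]}\) in place and leave the rest of the basis unchanged; the basis type and oracle interfaces are placeholders, not a real implementation.

```python
def sdbkz(basis, n, k, tours, svp_reduce, dsvp_reduce):
    """Sketch of SDBKZ: alternate forward (SVP) and backward (DSVP) tours.

    svp_reduce(basis, i, j) / dsvp_reduce(basis, i, j) are assumed to
    (D)SVP-reduce the projected block B_[i, j] (1-indexed, inclusive).
    """
    for _ in range(tours):
        # Forward tour: SVP-reduce the blocks [1, k], [2, k+1], ..., [n-k+1, n].
        for i in range(1, n - k + 2):
            svp_reduce(basis, i, i + k - 1)
        # Backward tour: DSVP-reduce the blocks [n-k+1, n], ..., [1, k].
        for i in range(n - k + 1, 0, -1):
            dsvp_reduce(basis, i, i + k - 1)
    return basis
```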
There is one caveat with this algorithm: it is not clear when one should terminate. In BKZ and Slide reduction one can formulate clear criteria for when the algorithm makes no more progress. In SDBKZ this is not the case, but the analysis will show that we can bound the number of required tours ahead of time.
The Analysis
We will start by analyzing the effect of a forward tour. Let \({\mathbf{B}}\) be our input basis. The first call to the SVP oracle in a forward tour replaces \({\mathbf{b}}_1\) with the shortest vector in \({\mathbf{B}}_{[1,k]}\). This means that the new basis \({\mathbf{B}}'\) satisfies \(\| {\mathbf{b}}'_1 \| \leq \sqrt{\gamma_k} (\prod_{i=1}^k \|{\mathbf{b}}_i^* \|)^{1/k}\) by Minkowski’s bound. Equivalently, this can be written as \[\log \| {\mathbf{b}}'_1 \| \leq \log \sqrt{\gamma_k} + \frac1k \left(\sum_{i=1}^k \log \|{\mathbf{b}}_i^* \|\right).\] So if we consider the \(\log \|{\mathbf{b}}_i^*\|\) as variables, it seems like linear algebra could be useful here. So far, so good. The second step is more tricky though. We know that the next basis \({\mathbf{B}}''\), i.e. after the call to the SVP oracle on \({\mathbf{B}}'_{[2,k+1]}\), satisfies \({\mathbf{b}}''_1 = {\mathbf{b}}'_1\) and \(\| ({\mathbf{b}}''_2)^* \| \leq \sqrt{\gamma_k} (\prod_{i=2}^{k+1} \|({\mathbf{b}}'_i)^* \|)^{1/k}\). Unfortunately, we have no control over \(\|({\mathbf{b}}'_i)^* \|\) for \(i \in \{2,\dots,k\}\), since we do not know how the SVP oracle in the first call changed these vectors. However, we do know that the lattice \({\mathbf{B}}_{[1,k+1]}\) did not change in that call. So we can write \[\prod_{i=2}^{k+1} \|({\mathbf{b}}'_i)^* \| = \frac{\prod_{i=1}^{k+1} \|{\mathbf{b}}_i^* \|}{\| {\mathbf{b}}'_1 \|}\] and thus we obtain \[\log \| ({\mathbf{b}}''_2)^* \| \leq \log \sqrt{\gamma_k} + \frac1k \left(\sum_{i=1}^{k+1} \log \|{\mathbf{b}}_i^* \| - \log \|{\mathbf{b}}'_1 \|\right).\] Again, this looks fairly “linear algebraicy”, so it could be useful. But there is another issue now: in order to get an inequality purely in the input basis \({\mathbf{B}}\), we would like to use our inequality for \(\log \|{\mathbf{b}}'_1 \|\) in the one for \(\log \| ({\mathbf{b}}''_2)^* \|\). But the coefficient of \(\log \|{\mathbf{b}}'_1 \|\) is negative, so we would need a lower bound for \(\log \|{\mathbf{b}}'_1 \|\). Furthermore, we would like to use upper bounds for our variables later, since the analysis of a tour will result in upper bounds and we would like to apply it iteratively. For this, negative coefficients are a problem. So, we need one more modification: we will use a change of variable to fix this. From now on, we write \({\mathbf{b}}'_i\) for the \(i\)-th basis vector at the end of the forward tour; note that once the window has moved past index \(i\), neither \({\mathbf{b}}_i\) nor \(\|{\mathbf{b}}_i^*\|\) changes anymore. Instead of considering the variables \(\log \| {\mathbf{b}}_i^* \|\), we let the input variables to our forward tour be \(x_i = \sum_{j < k+i} \log \|{\mathbf{b}}^*_j \|\) and the output variables \(y_i = \sum_{j \leq i} \log \|({\mathbf{b}}'_j)^* \|\) for \(i \in \{1,\dots,n-k\}\). Clearly, we can now write our upper bound on \(\log \|({\mathbf{b}}'_1)^*\|\) as \[y_1 \leq \log \sqrt{\gamma_k} + \frac{x_1}{k}.\] More generally, we have \[\|({\mathbf{b}}'_i)^* \| \leq \sqrt{\gamma_k} \left(\frac{\prod_{j=1}^{i+k-1} \|{\mathbf{b}}_j^* \|}{\prod_{j=1}^{i-1} \|({\mathbf{b}}'_j)^* \|} \right)^{\frac1k},\] which means for our variables \(x_i\) and \(y_i\) that \[y_i = y_{i-1} + \log \| ({\mathbf{b}}'_i)^* \| \leq y_{i-1} + \log \sqrt{\gamma_k} + \frac{x_i - y_{i-1}}{k} = \left(1-\frac1k\right) y_{i-1} + \frac1k x_i + \log \sqrt{\gamma_k}.\]
Note that we can write each \(y_i\) in terms of \(x_i\) and the previous \(y_{i-1}\) with only positive coefficients. So now we can apply induction to write each \(y_i\) only in terms of the \(x_j\)’s, which shows that \[y_i \leq \frac1k \sum_{j=1}^i \omega^{i-j} x_j + (1-\omega^i) k \alpha,\] where we simplified notation a little by defining \(\alpha = \log \sqrt{\gamma_k}\) and \(\omega = 1-\frac1k\). By collecting the \(x_i\)’s and \(y_i\)’s in a vector each, we have the vectorial inequality \[{\mathbf{y}} \leq {\mathbf{A}} {\mathbf{x}} + {\mathbf{b}}\] where \[{\mathbf{b}} = \alpha k \left[
\begin{array}{c}
1 – \omega \\
\vdots \\
1 – \omega^{n-k}
\end{array}\right]
\qquad\qquad
{\mathbf{A}} = \frac1k
\left[
\begin{array}{cccc}
1 & & & \\
\omega & 1 & & \\
\vdots & \ddots & \ddots & \\
\omega^{n-k-1} & \cdots & \omega & 1
\end{array}
\right].\]
Now recall that after a forward tour, SDBKZ computes the reversed dual basis. Given the close relationship between the primal and the dual basis and their GSO, one can show that simply reversing the vector \({\mathbf{y}}\) will yield the right variables \({\mathbf{x}}'_i\) to start the next “forward tour” (which is actually a backward tour, but on the dual). That is, after reversing \({\mathbf{y}}\), the variables represent the logarithm of the corresponding subdeterminants of the dual basis. (For this we assume for convenience and w.l.o.g. that the lattice has determinant 1; otherwise, there would be a scaling factor involved in this transformation.)
In summary, the effect on the vector \({\mathbf{x}}\) of executing once the two steps, 1) forward tour and 2) computing the reversed dual basis, can be described as \[{\mathbf{x}}' \leq {\mathbf{R}} {\mathbf{A}} {\mathbf{x}} + {\mathbf{R}} {\mathbf{b}}\] where \({\mathbf{R}}\) is the reversed identity matrix (i.e. the identity matrix with reversed columns). Iterating the two steps simply means we will be iterating the vectorial inequality above. So analyzing the affine dynamical system \[{\mathbf{x}} \mapsto {\mathbf{R}} {\mathbf{A}} {\mathbf{x}} + {\mathbf{R}} {\mathbf{b}}\] will allow us to deduce information about the basis after a certain number of iterations.
Small Digression: Affine Dynamical Systems
Consider some dynamical system \({\mathbf{x}} \mapsto {\mathbf{A}} {\mathbf{x}} + {\mathbf{b}} \) and assume it has exactly one fixed point, i.e. \({\mathbf{x}}^*\) such that \({\mathbf{A}} {\mathbf{x}}^* + {\mathbf{b}} = {\mathbf{x}}^* \). We can write any input \({\mathbf{x}}’\) as \({\mathbf{x}}’ = {\mathbf{x}}^* + {\mathbf{e}}\) for some “error vector” \({\mathbf{e}}\). When applying the system to it, we get \({\mathbf{x}}’ \mapsto {\mathbf{A}} {\mathbf{x}}’ + {\mathbf{b}} = {\mathbf{x}}^* + {\mathbf{A}} {\mathbf{e}}\). So the error vector \({\mathbf{e}}\) is mapped to \({\mathbf{A}} {\mathbf{e}}\). Applying this \(t\) times maps \({\mathbf{e}}\) to \({\mathbf{A}}^t {\mathbf{e}}\), which means after \(t\) iterations the error vector has norm \(\|{\mathbf{A}}^t {\mathbf{e}} \|_{p} \leq \|{\mathbf{A}}^t \|_{p} \| {\mathbf{e}} \|_{p} \) (where \(\| \cdot \|_{p}\) is the matrix norm induced by the vector \(p\)-norm). If we can show that \(\|{\mathbf{A}} \|_p \leq 1 – \epsilon\), then \(\|{\mathbf{A}}^t \|_p \leq \|A \|^t \leq (1-\epsilon)^t \leq e^{-\epsilon t}\), so the error vector will decay exponentially in \(t\) with base \(e^{-\epsilon}\) and the algorithm converges to the fixed point \({\mathbf{x}}^*\).
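Concretely, if \({\mathbf{I}}-{\mathbf{A}}\) is invertible, this fixed point has the closed form \[{\mathbf{x}}^* = ({\mathbf{I}}-{\mathbf{A}})^{-1}{\mathbf{b}};\] for the concrete system above, simply replace \({\mathbf{A}}\) and \({\mathbf{b}}\) by \({\mathbf{R}}{\mathbf{A}}\) and \({\mathbf{R}}{\mathbf{b}}\).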
Back to our concrete system above. As we just saw, we can analyze its output quality by computing its fixed point and its running time by computing \(\|{\mathbf{R}} {\mathbf{A}} \|_p\) for some induced matrix \(p\)-norm. Since this has been a lengthy post already, I hope you’ll trust me that our system above has a fixed point \({\mathbf{x}}^*\), which can be written out explicitly in closed form. As a teaser, its first coordinate is \[x^*_1 = \frac{(n-k)k}{k-1} \alpha.\] This means that if the algorithm converges, it will converge to a basis such that \(\sum_{j \leq k}\log \| {\mathbf{b}}_j^*\| \leq \frac{(n-k)k}{k-1} \log \sqrt{\gamma_k}\). Applying Minkowski’s Theorem to the first block \({\mathbf{B}}_{[1,k]}\) now shows that the shortest vector in this block satisfies \(\lambda_1({\mathbf{B}}_{[1,k]}) \leq \sqrt{\gamma_k}^{\frac{n-1}{k-1}}\). Note that the next forward tour will find a vector of at most that length. Recall that we assumed that our lattice has determinant 1, so this is exactly the Hermite factor achieved by Slide reduction, but for arbitrary block size (we do not need to assume that \(k\) divides \(n\)) and better than what we can achieve for BKZ (even using the same technique). Moreover, the fixed point actually gives us more information: the other coordinates (that I have omitted here) allow us control over all but \(k\) GSO vectors, and by terminating the algorithm at different positions, we can choose which vectors we want control over.
It remains to show that the algorithm actually converges and to figure out how fast. It is fairly straightforward to show that \[\|{\mathbf{R}} {\mathbf{A}}\|_{\infty} = \|{\mathbf{A}}\|_{\infty} = 1 - \omega^{n-k} \approx 1 - e^{-\frac{n-k}{k}}.\] (Consider the last row of \({\mathbf{A}}\).) This is always smaller than 1, so the algorithm does indeed converge. For \(k = \Omega(n)\) this is bounded far enough away from 1 that the system will converge to the fixed point up to an arbitrary constant in a number of SVP calls that is polynomial in \(n\). Using another change of variable [N16] or considering the relative error instead of the absolute error [MW15], one can show that this also holds for smaller \(k\).
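To see the analysis in action, here is a small numpy sketch that builds \({\mathbf{A}}\), \({\mathbf{b}}\), and \({\mathbf{R}}\) as above, iterates the map \({\mathbf{x}} \mapsto {\mathbf{R}}{\mathbf{A}}{\mathbf{x}} + {\mathbf{R}}{\mathbf{b}}\) (treating the inequality as an equality, as one does when using the system for predictions), and compares the first coordinate of the result with the closed-form fixed-point value quoted above. The dimensions and the stand-in for Hermite’s constant are arbitrary illustrative choices.

```python
import numpy as np

n, k = 100, 40                       # illustrative dimensions
gamma_k = k / (2 * np.pi * np.e)     # rough stand-in for Hermite's constant
alpha = np.log(np.sqrt(gamma_k))
omega = 1.0 - 1.0 / k
m = n - k                            # number of variables x_1, ..., x_{n-k}

# Build A, b and the reversed identity R exactly as in the text.
A = np.array([[omega ** (i - j) / k if j <= i else 0.0
               for j in range(m)] for i in range(m)])
b = alpha * k * np.array([1.0 - omega ** (i + 1) for i in range(m)])
R = np.eye(m)[::-1]

x = np.zeros(m)                      # start, e.g., from a basis with all GSO norms 1
for _ in range(50):                  # a few dozen "tours"
    x = R @ A @ x + R @ b

print("x_1 after iterating:    ", x[0])
print("closed-form fixed point:", (n - k) * k * alpha / (k - 1))
```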
As mentioned before, this type of analysis was introduced in [HPS11] and has inspired new ideas even in the heuristic analysis of BKZ. In particular, one can predict the behavior of BKZ by simply running such a dynamical system on typical inputs (and making some heuristic assumptions). This idea has been and is being used extensively in cryptanalysis and in optimizing parameters of state-of-the-art algorithms.
Finally, a few last words on SDBKZ: we have seen that it achieves a good Hermite factor, but what can we say about the approximation factor? I actually do not know if the algorithm achieves a good approximation factor and also do not see a good way to analyze it. However, there is a reduction [L86] from achieving approximation factor \(\alpha\) to achieving Hermite factor \(\sqrt{\alpha}\). So SDBKZ can be used to achieve approximation factor \(\gamma_k^{\frac{n-1}{k-1}}\). This is a little unsatisfactory in two ways: 1) the reduction results in a different algorithm, and 2) the bound is a little worse than the factor achieved by Slide reduction, which is \(\gamma_k^{\frac{n-k}{k-1}}\). On a positive note, a recent work [ALNS20] has shown that, due to the strong bound on the Hermite factor, SDBKZ can be used to generalize Slide reduction to arbitrary block size \(k\) in a way that achieves the approximation factor \(\gamma_k^{\frac{n-k}{k-1}}\). Another recent work [ABFKSW20] exploited the fact that SDBKZ allows one to heuristically predict large parts of the basis to achieve better bounds on the running time of the SVP oracle.
[L86] Lovász. An Algorithmic Theory of Numbers, Graphs and Convexity. 1986.
by Siobhan Roberts (Journalist in Residence, Simons Institute)
In January 2014, during an open problems session in the auditorium at the Simons Institute, the computer scientist Thomas Vidick posed a question that he expected would go nowhere.
The research program on Quantum Hamiltonian Complexity had just commenced — probing techniques from both quantum complexity theory and condensed matter physics and asking questions such as: Is the scientific method sufficiently powerful to understand general quantum systems? Is materials science about to hit a computational barrier?
Vidick’s questions waded further into the weeds.
“A central conjecture, the so-called quantum PCP conjecture, crystallizes many of these issues, and the conjecture was hotly debated throughout the semester,” recounted Vidick, a professor of computing and mathematical sciences at Caltech, in his research vignette published later that year.
Two of the program’s organizers, Umesh Vazirani of UC Berkeley and Dorit Aharonov at the Hebrew University of Jerusalem, encouraged him to formulate a new variant of the conjecture, which (for those readers at least somewhat in the know) he described as follows:
“This formulation of the PCP theorem gives rise to a quantum analogue whose relation to the quantum PCP conjecture is a priori unclear. The new variant asks the following: Do all languages in QMA admit an efficient verification procedure of the above form, in which the verifier and communication may be quantum? Here it is important to also allow the provers to be initialized in an arbitrary entangled state, as this may be necessary to encode the QMA witness whose existence the provers are supposed to convey to the verifier.”
Vidick admitted the problem was tantalizing, yet he believed it would lead to a dead end.
Six years later, however, quite the contrary has proved to be the case: that dead-end question ultimately led to a breakthrough result.
It had been a long time coming. And during the home stretch, another team of researchers seemed to have proved the opposite result — via a very different language and approach — but a gap emerged with a lemma that could not be fixed.
Amid the biggest pandemic in a century, one that has disrupted lives and livelihoods the world over, it is gratifying to see how people in different walks of life have found ways to cope and carry on. Within the realm of theory research, the pursuit to better understand the foundations of computation and their implications doesn’t seem to have slowed down even a bit.
Make no mistake, the pandemic has disrupted our traditional modes of operation. A typical theorist might spend several hours each day brainstorming in a group, often over a beverage. For most theorists, this is the single most productive and enjoyable activity each day. As these meetings move online, they remain a shadow of what they used to be. Normally, surprising results often arise out of chance encounters between researchers from very different areas. As conferences and workshops shift online, however, these chance encounters become very rare. Finally, on most days, we are struggling along an unforgiving trail in an attempt to scale a seemingly insurmountable peak. Doubts — such as, Are we on the right trail? Even if we are on the right trail, are we strong enough to get through it? — often linger and can easily set one up for failure. Sharing these “theoretical” struggles with other researchers over lunch or in the corridors can be critical to keep us going. Sadly, these opportunities are rare these days.
Yet theory research does not seem to have missed a beat. The ACM STOC conference was held online for the first time. Despite the limitations of the medium, there are many silver linings to an online conference. Participation at the conference nearly doubled from last year, with 606 participants from 33 countries, many of which had not been represented at typical STOC conferences before. The videos of the conference talks are all available online, a fantastic resource for researchers going forward. Finally, as Ronen Eldan’s talk at the conference beautifully demonstrated, video is a really effective medium to communicate a research work in broad strokes in a very short time. In fact, this year’s online conference seemed to have so many advantages that the PC chair, Julia Chuzhoy, suggested holding the STOC conference twice each year, once online and once offline.
As weekly seminars at most universities move online, they have begun to attract participants from across the world. CS Theory Online Talks maintains a list of theory talks that are available online (also see here and here). The PIMS-CRM summer school on probability has morphed into a great set of online courses that I have really enjoyed. The Oxford-Warwick Complexity Meetings are an online lecture series dedicated to complexity theory, while we at the Simons Institute are also hosting a lecture series, on Boolean function analysis (more on this later). This flurry of online activity catalyzed by the pandemic is promising to make theory research broadly accessible to graduate students and undergraduates across the globe.
Meanwhile, fantastic new results keep pouring in. The biggest breakthrough this summer is the work of Karlin, Klein, and Oveis Gharan on the metric traveling salesman problem (metric TSP). They have posted a (1.5 − ε)-approximation algorithm for metric TSP for some constant ε > 10⁻³⁶. Metric TSP is a fundamental combinatorial optimization problem wherein the inputs consist of a network of cities and the distances between them. The goal is to find the shortest-length route that visits each city exactly once and returns to the starting point. This problem is called metric TSP if the distances between cities are assumed to satisfy the triangle inequality, namely the distance from City A to City C is at most the sum of the distances from City A to City B and from City B to City C.
The practice of deep learning has attracted new attention to several basic questions in statistical machine learning. One such question is how fitting machine learning models to relatively small “training” data sets can lead to accurate predictions on new data. In machine learning jargon, this is the question of generalization.
The conventional wisdom in machine learning offers the following about generalization:
A model that is too simple will underfit the true patterns in the training data, and thus, it will predict poorly on new data.
A model that is too complicated will overfit spurious patterns in the training data; such a model will also predict poorly on new data.
Consequently, one should choose a model that balances these concerns of underfitting and overfitting [30]. A textbook corollary is that a model that exactly fits — i.e., interpolates — the training data “is overfit to the training data and will typically generalize poorly” [17].
Recent machine learning practice appears to eschew this conventional wisdom in a dramatic fashion. For instance, a common starting point for training a neural network is to find a model that exactly fits the training data [27]. (Typically, the model is subsequently fine-tuned using different criteria, but the starting point is already nontrivial.) While this may be difficult or impossible with small neural networks, it becomes substantially easier after making the network very large. It is remarkable that this practice is compatible with generalization, despite the concerns about overfitting.
This apparent contradiction of conventional wisdom is not new. In the last century, machine learning practitioners were already aware of the efficacy (and advantages) of using “oversized” neural networks where the number of model parameters exceeds the number of training data [9, 20]. Similar observations were made about large ensembles of decision trees: it was found that increasing the number of decision trees in an ensemble could improve accuracy, even beyond the point at which the ensemble exactly fit the training data [28]. These empirical observations motivated the development of a new theory that greatly sharpens the existing body of knowledge in statistical machine learning [1, 28], and this theory has been recently extended to deep neural networks [2, 15, 26].
Figure 1: Leo Breiman’s list of important questions regarding neural networks (reproduced from [9])
However, the new theory does not fully explain recent observations about interpolating neural networks. First, in more recent experiments, neural networks are trained to interpolate noisy data [34]. By noisy data, we mean data in which the prediction targets (labels) may be erroneous. In fact, experiments have been conducted in which noise is deliberately injected in the data by assigning random labels to subsets of the training data. Neural networks that interpolate these data were nevertheless found to have nontrivial accuracy on new data. Second, similar phenomena have been observed in the context of predicting real numerical targets. (The aforementioned theoretical and empirical studies mostly focused on classification targets.) When predicting real numbers, it is almost always a given that training data will have noisy labels, so the thought of interpolating training data seems even more preposterous. But again, in recent experimental studies, interpolating (or nearly interpolating) models were found to have nontrivially accurate predictions on new data [4, 7].
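As a tiny illustration of this phenomenon (a toy sketch, not an experiment from the studies cited above), one can fit a deliberately oversized scikit-learn network to training data whose labels have been partially randomized and still observe nontrivial accuracy on clean test data; all parameters below are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately inject label noise: flip 20% of the training labels.
flip = rng.rand(len(y_tr)) < 0.2
y_noisy = np.where(flip, 1 - y_tr, y_tr)

# An oversized network with no regularization; with enough capacity and
# iterations it typically drives training error to (near) zero, i.e. it
# (nearly) interpolates the noisy labels.
clf = MLPClassifier(hidden_layer_sizes=(512, 512), alpha=0.0,
                    max_iter=5000, random_state=0).fit(X_tr, y_noisy)

print("train accuracy (noisy labels):", clf.score(X_tr, y_noisy))
print("test accuracy (clean labels): ", clf.score(X_te, y_te))
```

The exact numbers depend on the data and the optimizer; the point is only that (near-)interpolation of noisy labels does not by itself destroy test accuracy.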
An emerging theme in several disciplines of science over the last few decades is the study of local-to-global phenomena. Let me elucidate with a few examples. In biology, one tries to understand the global properties of an organism by studying local interactions at a cellular level. The Internet graph is impossible to predict or control at a global level; what one can at best do is understand its behavior by making changes at a very local level. Moving to examples closer to theoretical computer science, the pioneering works in computation theory due to Turing and others define the computation of any global function as one that can be broken down into a sequence of local computations. The seminal work on NP-completeness due to Cook, Levin, and Karp demonstrates that any verification task can in fact be reduced to a conjunction of very local checks.
Given these examples, an intriguing question that arises in multiple disciplines is to identify which objects support such local-to-global phenomena. This question as stated is an ill-formed one, but one can attempt to make it well-defined in a variety of contexts. Let me illustrate by giving three specific instances.
Mixing time in graphs: Which graphs have a global mixing property that can be inferred from the mixing properties of several related smaller graphs?
Local testing and decoding (in coding theory): Which error-correcting codes have the property of being able to be tested or decoded by looking at the code word or received word very locally (i.e., at very few locations)?
Topologically expanding graphs: Which graphs or hypergraphs have global topological properties that can be inferred from their local topological behavior?
These questions, seemingly disparate and arising in very different contexts and communities (in particular, approximate sampling, coding theory, and topology), surprisingly all led to the investigation of a similar expanding object: the high-dimensional expander. The Simons Institute 2019 summer cluster on Error-Correcting Codes and High-Dimensional Expansion brought together researchers from these diverse communities to study high-dimensional expanders and their potential applications to mathematics and theoretical computer science.
What is a high-dimensional expander?
High-dimensional expanders (HDXs) are a high-dimensional analogue of expander graphs. An expander graph, loosely speaking, is an extremely well-connected graph. Analytically, this is best captured via the second-largest eigenvalue (in absolute value) of the normalized adjacency matrix of the graph. More precisely, a graph G is said to be a λ-expander (for λ ∈ [0,1]) if the second-largest eigenvalue (in absolute value) of the normalized adjacency matrix \(A_G\) of G is bounded above by λ. The smaller the λ, the better the expander. An equivalent definition is in terms of how well the random walk on the vertices induced by the graph mixes: G is a λ-expander if the spectral norm of the difference of the normalized adjacency matrix \(A_G\) and the matrix \(\frac1n J\), where \(J\) is the all-ones matrix (so \(\frac1n J\) is the normalized adjacency matrix of the complete graph with self-loops), is bounded above by λ (i.e., \[\left\| A_G - \frac1n J \right\| \leq \lambda,\]
where n is the number of vertices of the graph G).
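As a concrete sanity check of the equivalence of these two definitions for regular graphs, the following numpy/networkx sketch computes both quantities for a small random regular graph (an illustrative choice of graph and parameters, not taken from the text) and observes that they coincide.

```python
import numpy as np
import networkx as nx

# Small d-regular graph (illustrative parameters).
n, d = 200, 6
G = nx.random_regular_graph(d, n, seed=0)
A = nx.to_numpy_array(G) / d              # normalized adjacency matrix A_G
J = np.ones((n, n)) / n                   # (1/n) * all-ones matrix

eigs = np.linalg.eigvalsh(A)              # eigenvalues of the symmetric A_G, ascending
second_largest = max(abs(eigs[0]), abs(eigs[-2]))   # second-largest |eigenvalue|

lam = np.linalg.norm(A - J, 2)            # spectral norm ||A_G - (1/n) J||
print(second_largest, lam)                # the two notions of lambda agree
```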
This is the second entry in a series of posts about lattice block reduction. See here for the first part. In this post I will assume you have read the first one, so if you haven’t, continue at your own risk. (I suggest reading at least the first part for context, notations and disclaimers.)
Last time we focused on BKZ which applies SVP reduction to successive projected subblocks. In this post we consider slide reduction, which allows for a much cleaner and nicer analysis. But before we can do that, we need a little more background.
A New Tool: Dual SVP Reduction
As you hopefully know, duality is a very useful concept in lattice theory (and in mathematics more generally). It allows us to pair up lattices that are related in a well-defined way. Similarly, we can pair up the different bases of two dual lattices to obtain dual bases. I’ll skip the definition of these two concepts since we will not need them. It is sufficient to know that we can compute the dual basis from the primal basis efficiently. One very cool feature of dual bases is that the last vector in the dual basis has a length that is inverse to the length of the last GSO vector of the primal basis. In math: if \({\mathbf{B}}\) and \({\mathbf{D}}\) are dual bases, then \(\| {\mathbf{b}}_n^* \| = \| {\mathbf{d}}_n \|^{-1}\). (If you want to see why this is true at least for full rank lattices, use the fact that in this case \({\mathbf{D}} = {\mathbf{B}}^{-T}\), and the QR-factorization.) It follows that if \({\mathbf{d}}_n\) happens to be the shortest vector in the dual lattice, then \({\mathbf{b}}_n^*\) is as long as possible, since the existence of any basis \(\bar{{\mathbf{B}}}\) where \(\|\bar{{\mathbf{b}}}_n^*\| > \|{\mathbf{b}}_n^*\|\) would imply that there exists a dual basis \(\bar {{\mathbf{D}}}\) such that \(\| \bar{{\mathbf{d}}}_n\| < \| {\mathbf{d}}_n\| \). By analogy to SVP reduction, we call a basis where \(\| {\mathbf{b}}^*_n \|\) is maximized dual SVP reduced (DSVP reduced). This gives us a new tool to control the size of the GSO vectors: we can apply an SVP algorithm to the dual basis of a projected subblock \({\mathbf{B}}_{[i,j]}\). This will yield a shortest vector in the dual of this projected sublattice. Then we can compute a dual basis, which contains this shortest vector in the last position, and finally compute a new primal basis for this projected subblock, which now locally maximizes \(\|{\mathbf{b}}_j^* \|\). As we did for primal SVP reduction in the last post, we will assume access to an algorithm that, given a basis \({\mathbf{B}}\) and indices \(i,j\), will return a basis such that \({\mathbf{B}}_{[i,j]}\) is DSVP reduced and the rest of the basis is unchanged. We will call such an algorithm a DSVP oracle. It may sound like this should be somewhat less efficient than SVP reduction, since we have to switch between the dual and the primal bases (which, when done explicitly, requires matrix inversion), but this is not actually the case. In fact, one can implement a DSVP reduction entirely without having to explicitly compute (any part of) the dual basis, as shown in [GN08,MW16].
Effect of a call to the DSVP oracle. GSO log norms of the input in black, of the output in blue. Note that the sum of the GSO log norms is a constant, so increasing the length of the last vector, decreases the (average of the) remaining vectors.
I hope this figure provides some intuition that such an oracle can be useful. Now let us quantify how much this DSVP oracle helps us. Recall that in the primal SVP reduction we used Minkowski’s theorem to bound the length of the first vector. Since we are now applying the SVP algorithm to the dual, it should come as no surprise that we will use Minkowski’s theorem on the dual lattice, which tells us that \[\lambda_1(\widehat{\Lambda}) \leq \sqrt{\gamma_n} \det(\widehat{\Lambda})^{1/n} = \sqrt{\gamma_n} \det(\Lambda)^{-1/n}\] where \(\widehat{\Lambda}\) is the dual lattice, i.e. the lattice generated by the dual basis. Furthermore, by exploiting the above fact that for a basis \({\mathbf{B}}\) and its dual \({\mathbf{D}}\) we have \(\| {\mathbf{b}}_n^* \| = \|{\mathbf{d}}_n \|^{-1}\), this shows that if \({\mathbf{B}}\) is DSVP reduced, i.e. \({\mathbf{d}}_n\) is a shortest vector in the dual lattice, then \[\| {\mathbf{b}}_n^* \| = \|{\mathbf{d}}_n \|^{-1} = \lambda_1(\widehat{\Lambda})^{-1} \geq \frac{\det(\Lambda)^{1/n}}{\sqrt{\gamma_n}}.\] So after we’ve applied the DSVP oracle to a projected block \({\mathbf{B}}_{[i-k+1,i]}\), we have \[\|{\mathbf{b}}^*_i \| \geq \frac{\left(\prod_{j = i-k+1}^{i} \|{\mathbf{b}}_j^* \| \right)^{1/k}}{\sqrt{\gamma_{k}}}.\]
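As a quick numerical sanity check of the relation \(\| {\mathbf{b}}_n^* \| = \| {\mathbf{d}}_n \|^{-1}\) used above, here is a small numpy sketch on a toy full-rank basis (basis vectors as columns; the numbers are arbitrary).

```python
import numpy as np

# Basis vectors as the columns of B (toy full-rank example).
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])

# Dual basis: D = B^{-T}; its columns are the dual basis vectors.
D = np.linalg.inv(B).T

# GSO norms via QR: ||b_i*|| = |R[i, i]|.
_, R = np.linalg.qr(B)
b_n_star = abs(R[-1, -1])

d_n_norm = np.linalg.norm(D[:, -1])     # length of the last dual basis vector
print(b_n_star, 1.0 / d_n_norm)         # these two numbers agree
```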
Slide Reduction
Now we have all the tools we need to describe slide reduction [GN08]. One of the major hurdles in applying an LLL-style running time analysis to BKZ seems to be that the projected subblocks considered in that algorithm are maximally overlapping. So slide reduction takes a different route: it applies primal and dual SVP reduction to minimally overlapping subblocks, which still allows us to prove nice bounds on the output quality (in fact, even better than for BKZ), but also on the running time via a generalization of the LLL analysis. More specifically, let \({\mathbf{B}}\) be the given lattice basis of an \(n\)-dimensional lattice and \(k\) be the blocksize. We require that \(k\) divides \(n\). (We’ll come back to that restriction later.) Instead of applying our given SVP oracle to successive projected subblocks, we apply it to disjoint projected subblocks, i.e. to the blocks \({\mathbf{B}}_{[1,k]}\), \({\mathbf{B}}_{[k+1,2k]}\), etc. So we locally minimize the GSO vectors \({\mathbf{b}}^*_{ik + 1}\) for \(i \in \{0,\cdots,n/k - 1\}\). (Technically, we iterate this step with a subsequent LLL reduction until there is no more change, which is important for the runtime analysis, but let’s ignore this for now.) So now we have a basis where these disjoint projected subblocks are SVP-reduced. In the next step we shift the blocks by 1 and apply our DSVP oracle to them. (Note that the last block now extends beyond the basis, so we ignore this block.) This has the effect of locally maximizing the vectors \({\mathbf{b}}^*_{ik + 1}\) for \(i \in \{1,\cdots,n/k - 1\}\). This might seem counter-intuitive at first, but note that the optimization context for \({\mathbf{b}}^*_{ik + 1}\) changes between the SVP reduction and the DSVP reduction: \({\mathbf{b}}^*_{ik + 1}\) is first minimized with respect to the block \({\mathbf{B}}_{[ik+1,(i+1)k]}\) and then maximized with respect to the block \({\mathbf{B}}_{[(i-1)k+2,ik+1]}\). So one can view this as using the block \({\mathbf{B}}_{[ik+2, (i+1)k]}\) as a pivot to lower the ratio between the lengths of the GSO vectors \({\mathbf{b}}^*_{ik+1}\) and \({\mathbf{b}}^*_{(i+1)k+1}\). This view is reminiscent of the proof of Mordell’s inequality \(\gamma_n^{\frac{1}{n-1}} \leq \gamma_{n-1}^{\frac{1}{n-2}}\), which explains the title of the paper [GN08]. The idea of slide reduction is to simply iterate these two steps until there is no more change.
Slide reduction in one picture: apply the SVP oracle to the disjoint projected blocks in parallel, then shift the blocks by 1 and apply the DSVP oracle. Repeat.
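As with BKZ, the control flow is easy to sketch in code. The sketch below again assumes black-box svp_reduce and dsvp_reduce oracles acting on projected blocks, and it omits the interleaved LLL calls and the termination check; it is purely illustrative, not a real implementation.

```python
def slide_reduction_tour(basis, n, k, svp_reduce, dsvp_reduce):
    """One iteration of slide reduction; assumes k divides n.

    svp_reduce(basis, i, j) / dsvp_reduce(basis, i, j) are assumed to
    (D)SVP-reduce the projected block B_[i, j] (1-indexed, inclusive).
    """
    p = n // k
    # Step 1: SVP-reduce the disjoint blocks [1, k], [k+1, 2k], ..., [n-k+1, n].
    for i in range(p):
        svp_reduce(basis, i * k + 1, (i + 1) * k)
    # Step 2: shift by one and DSVP-reduce [2, k+1], ..., [n-2k+2, n-k+1].
    # (The shifted block that would extend beyond the basis is skipped.)
    for i in range(p - 1):
        dsvp_reduce(basis, i * k + 2, (i + 1) * k + 1)
    return basis
```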
Let’s dive into the analysis.
The Good
When the algorithm terminates, we are guaranteed that the following conditions hold simultaneously:
The blocks \({\mathbf{B}}_{[ik+1, (i+1)k]}\) are SVP reduced for all \(i \in \{0,\cdots,n/k - 1\}\) (the primal conditions), which implies \[\|{\mathbf{b}}^*_{ik+1} \|^{k-1} \leq \gamma_k^{k/2} \prod_{j=ik+2}^{(i+1)k} \|{\mathbf{b}}^*_j \|\] (Note that we raised Minkowski’s bound to the \(k\)-th power and canceled one factor of \(\|{\mathbf{b}}^*_{ik+1} \|\) on both sides.)
The blocks \({\mathbf{B}}_{[ik+2, (i+1)k+1]}\) are DSVP reduced for all \(i \in \{0,\cdots,n/k – 2\}\) (the dual conditions), which implies \[\gamma_{k}^{k/2} \|{\mathbf{b}}^*_{(i+1)k+1} \|^{k-1} \geq \prod_{j = ik+2}^{(i+1)k} \|{\mathbf{b}}_j^* \|\]
(Technically, there is a constant slack factor \(>1\) involved, which can be set arbitrarily close to 1 but is important for the running time. We’ll sweep this under the rug for simplicity.)
Just by staring at the two inequalities, you will notice that they can easily be combined to yield: \[\|{\mathbf{b}}^*_{ik+1} \| \leq \gamma_k^{\frac{k}{k-1}} \|{\mathbf{b}}^*_{(i+1)k+1} \|\] for all \(i \in \{0,\dots,n/k-2\}\) and in particular \[\|{\mathbf{b}}^*_{1} \| \leq \gamma_k^{\frac{k}{k-1} (\frac{n}{k}-1)} \|{\mathbf{b}}^*_{n-k+1} \|
= \gamma_k^{\frac{n-k}{k-1}} \|{\mathbf{b}}^*_{n-k+1} \|.\] By a similar trick as last time we can assume that \(\lambda_1({\mathbf{B}}) \geq \|{\mathbf{b}}^*_{n-k+1} \|\), because the last block is SVP-reduced, which shows that slide reduction achieves an approximation factor \[\|{\mathbf{b}}^*_{1} \| \leq \gamma_k^{\frac{n-k}{k-1}} \lambda_1({\mathbf{B}}).\] Done! Yes, it is really that simple. With (very) little more work one can similarly show a bound on the Hermite factor \[\|{\mathbf{b}}^*_{1} \| \leq \gamma_k^{\frac{n-1}{2(k-1)}} \det({\mathbf{B}})^{\frac1n}.\] Simply reuse the bounds on the ratios of \(\| {\mathbf{b}}^*_1 \|\) and \(\| {\mathbf{b}}^*_{ik+1} \|\) in combination with Minkowski’s bound for each block. (You guessed it: Homework!) Note that both of them are better than what we were able to obtain for BKZ in our last blog post. And in contrast to BKZ one can easily bound the number of calls to the SVP oracle by a polynomial in \(n, k\) and the bit size of the original basis. The analysis is similar to the one of LLL with a modified potential function: we let \(P({\mathbf{B}}) = \prod_{i=1}^{n/k-1} \det({\mathbf{B}}_{[1,ik]})^2\). If the basis \({\mathbf{B}}\) consists of integer coefficients only, this potential is also integral. Furthermore, one can show that if an iteration of slide reduction modifies the basis, it will decrease this potential by at least a constant factor (by using the slack factor we brushed over). This shows that while the basis is modified, the potential decreases exponentially, which results in a polynomial number of calls to the (D)SVP oracle.
The Bad
We just sketched a complete and elegant analysis of the entire algorithm and it checks all the boxes: best known approximation factor, best known Hermite factor, a polynomial number of calls to its (D)SVP oracle. So what could possibly be bad about it? Remember that we required that the blocksize \(k\) divides the dimension \(n\). It seems like it should be easy to get rid of this restriction, for example one could artificially increase the dimension of the lattice to assure that the blocksize divides it. Unfortunately, this and similar approaches will degrade the bound on the output quality – there will be a rounding-up operator in the exponent [LW13]. For small \(k\) this might not be too much of an issue, but as \(k\) grows, this results in a significant performance hit. Luckily, a recent work [ALNS19] shows that one can avoid this degradation by combining slide reduction with yet another block reduction algorithm: SDBKZ, which will be the topic of the next post.
The Ugly
Slide reduction is beautiful and there is little one can find ugly about it in theory. Unfortunately, experimental studies have so far concluded that this algorithm is significantly inferior to BKZ, which (at least to me) is puzzling. This is often attributed to the fact that BKZ uses maximally overlapping blocks, which seems to allow it to obtain stronger reduction notions (even though we cannot prove it). So, one could wonder if there is an algorithm that uses maximally overlapping blocks (and is thus hopefully competitive in practice), but allows for a clean analysis. It turns out that the topic of the next post (SDBKZ) is such an algorithm.
[GN08] Gama, Nguyen. Finding short lattice vectors within Mordell’s inequality. STOC 2008.
[LW13] Li, Wei. Slide reduction, successive minima and several applications. Bulletin of the Australian Mathematical Society, 2013.