The historical perspective on the Simons Institute’s past programs, such as Real Analysis in Computer Science in 2013, underscores the institute’s role in shaping and advancing various fields of computer science. The evolution of the field over the past decade, with the emergence of new themes like global hypercontractivity and spectral independence, as well as the incorporation of innovative methodologies, demonstrates the dynamic nature of computer science research.

Furthermore, the focus on Quantum Computing and its emphasis on noisy intermediate-scale quantum (NISQ) devices is particularly intriguing. NISQ devices are at the forefront of quantum technology, and the pursuit of quantum advantage in the absence of error correction presents unique challenges and exciting opportunities for theoretical research.

In summary, the Simons Institute’s dedication to facilitating groundbreaking research in Analysis, TCS, and Quantum Computing is commendable, and it’s evident that these programs are contributing significantly to the advancement of their respective fields. The institute’s role in fostering collaboration between theory and practice is vital, especially in areas as cutting-edge and rapidly evolving as quantum computing.

I have no clue how to go about proving this, though.

Hi Thijs,

Re: Frodo, thanks for the correction! I’ve updated the post.

I’m not at all sure about kissing numbers in l_p norms, but the maximum number of closest vectors is known and is fairly straightforward for all l_p norms: it is 2^n for 1 < p < infinity, and it is unbounded for l_1 and l_infinity (even in two dimensions).
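The unboundedness for l_infinity can be seen concretely by compressing the integer lattice in one coordinate: in the lattice Z x (1/m)Z, the target (1/2, 0) has 2(m+1) closest vectors for even m, which grows without bound. (The l_1 case follows by rotating this picture 45 degrees.) A small brute-force sketch, with the function name and window size my own choices:

```python
from itertools import product

def closest_count_linf(m, window=3):
    """Count closest vectors in the lattice Z x (1/m)Z to the target
    (1/2, 0) under the l_infinity norm (m assumed to be even)."""
    best, count = None, 0
    # Enumerate lattice points (a, b/m) in a window large enough to
    # contain every candidate closest vector.
    for a, b in product(range(-window, window + 1),
                        range(-window * m, window * m + 1)):
        d = max(abs(a - 0.5), abs(b) / m)
        if best is None or d < best - 1e-12:
            best, count = d, 1
        elif abs(d - best) < 1e-12:
            count += 1
    return count

# The count 2*(m+1) grows without bound as m increases:
for m in (2, 4, 8):
    print(m, closest_count_linf(m))  # -> 6, 10, 18
```

Here every point (a, b/m) with a in {0, 1} and |b| <= m/2 sits at l_infinity distance exactly 1/2 from the target, since no integer a can get closer than 1/2 in the first coordinate.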

For 1 < p < infinity, the integer lattice together with the all-(1/2)s vector as a target yields a lower bound of 2^n, and the upper bound follows by a pigeonhole/coset averaging argument. Namely, suppose that there are 2^n + 1 closest lattice vectors to a given target. Since there are only 2^n cosets of the lattice mod twice the lattice, there must be two such closest vectors v, w that lie in the same coset. But then (v + w)/2 is also a lattice vector, and, by the strict convexity of the l_p norm for 1 < p < infinity, it is strictly closer to the target than v or w, which is a contradiction.
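The lower-bound construction is easy to check by brute force: in Z^n with target (1/2, ..., 1/2), all 2^n points of {0,1}^n are equidistant from the target in any l_p norm, and everything else is strictly farther. A minimal sketch (the function name and the enumeration window are mine; the window {-1, ..., 2}^n safely contains every candidate closest vector):

```python
from itertools import product

def count_closest(n, p, t, coords):
    """Count lattice points of Z^n (restricted to the finite window
    coords^n) achieving the minimum l_p distance to target t."""
    def dist(v):
        return sum(abs(v[i] - t[i]) ** p for i in range(n)) ** (1.0 / p)
    pts = list(product(coords, repeat=n))
    dmin = min(dist(v) for v in pts)
    return sum(1 for v in pts if abs(dist(v) - dmin) < 1e-9)

# Target: the all-(1/2)s vector. Any point with a coordinate outside
# {0, 1} is at per-coordinate distance >= 3/2, hence strictly farther.
for n in (2, 3):
    for p in (2, 3, 4):
        assert count_closest(n, p, [0.5] * n, range(-1, 3)) == 2 ** n
```

One can also check numerically that adding any lattice point beyond {0,1}^n never ties the minimum, matching the strict-convexity upper bound above.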

Accordingly, it might be reasonable to guess that the “right” time complexity of CVP_p for 1 < p < infinity is 2^n. This would match our lower bound for all such p that are not even integers.
