This is the home page for NYU's Computer Science Theory seminar, hosted jointly by the Courant Theoretical Computer Science Group and the Tandon Algorithms and Foundations Group.


Time and Location:

Thursday, 11AM
Warren Weaver Hall
251 Mercer Street
Room 517

You can sign up for the Theory Seminar mailing list here. For more information, or if you would like to invite a speaker or give a talk, please contact Christopher Musco.


Fall 2022

Sept. 22
  • Vladimir Podolskii (NYU)
  • Abstract: We consider sorting networks that are constructed from comparators of arity k>2. That is, in our setting the arity of the comparators — or, in other words, the number of inputs that can be sorted at unit cost — is a parameter. We study its relationship with two other parameters — n, the number of inputs, and d, the depth. This model has received considerable attention. Partly, its motivation is to better understand the structure of sorting networks. In particular, sorting networks with large arity are related to recursive constructions of ordinary sorting networks. Additionally, studies of this model have a natural correspondence with a recent line of work on constructing circuits for majority functions from majority gates of lower fan-in.

    We obtain the first lower bounds on the arity of constant-depth sorting networks. More precisely, we consider sorting networks of depth \(d\) up to 4, and determine the minimal \(k\) for which there is such a network with comparators of arity \(k\). For depths \(d=1,2\) we observe that \(k=n\). For \(d=3\) we show that \(k=n/2\). For \(d=4\) the minimal arity becomes sublinear: \(k=\Theta(n^{2/3})\). This contrasts with the case of majority circuits, in which \(k=O(n^{2/3})\) is achievable already for depth \(d=3\). (A toy implementation of this comparator model appears at the end of this entry.)

    Joint work with Natalia Dobrokhotova-Maikova and Alexander Kozachinskiy: https://eccc.weizmann.ac.il/report/2022/116/

    Bio: Vladimir Podolskii defended his PhD thesis in 2009 at Moscow State University. His research areas are computational complexity, tropical algebra, and applications of complexity theory to databases augmented with logical theories. Until recently he worked at the Steklov Mathematical Institute (Moscow) and HSE University (Moscow).
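
    As a toy illustration of the comparator model in this abstract (our own sketch, not code from the paper), the following verifies small networks of arity-k comparators via the 0-1 principle:

    ```python
    from itertools import product

    def apply_layer(values, layer):
        # One depth unit: a set of disjoint comparators, each a tuple of wire
        # indices whose k values it sorts in place at unit cost.
        out = list(values)
        for comparator in layer:
            for i, v in zip(comparator, sorted(out[i] for i in comparator)):
                out[i] = v
        return out

    def sorts_all_inputs(network, n):
        # 0-1 principle: a comparator network sorts every input iff it sorts
        # every 0/1 input, so 2^n tests suffice.
        for bits in product([0, 1], repeat=n):
            vals = list(bits)
            for layer in network:
                vals = apply_layer(vals, layer)
            if vals != sorted(bits):
                return False
        return True

    # Depth 1 with arity k = n trivially sorts.
    assert sorts_all_inputs([[(0, 1, 2, 3)]], n=4)

    # A depth-2 attempt with arity-3 comparators on n = 4 wires fails
    # (e.g., on input 1,1,1,0), consistent with the talk's claim that
    # k = n is required for depth d <= 2.
    assert not sorts_all_inputs([[(0, 1, 2)], [(1, 2, 3)]], n=4)
    ```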
Sept. 29
  • Sepehr Assadi (Rutgers)
  • Abstract: Recent breakthroughs in graph streaming have led to the design of single-pass semi-streaming algorithms for various graph coloring problems such as (Δ+1)-coloring, Δ-coloring, degeneracy-coloring, coloring triangle-free graphs, and others. These algorithms are all randomized in crucial ways and whether or not there is any deterministic analogue of them has remained an important open question in this line of work.

    In this talk, we will discuss our recent result that fully resolves this question: there is no deterministic single-pass semi-streaming algorithm that given a graph G with maximum degree Δ, can output a proper coloring of G using any number of colors which is sub-exponential in Δ. The proof is based on analyzing the multi-party communication complexity of a related communication game using elementary random graph theory arguments.

    Based on joint work with Andrew Chen (Cornell) and Glenn Sun (UCLA): https://arxiv.org/abs/2109.14891.
Oct. 6
  • Peng Zhang (Rutgers)
  • Abstract: Marcus, Spielman, and Srivastava (Annals of Mathematics, 2015) solved the Kadison-Singer Problem by proving a strong form of Weaver’s discrepancy conjecture. They showed that for all \(\alpha > 0\) and all lists of vectors of norm at most \(\sqrt{\alpha}\) whose outer products sum to the identity, there exists a signed sum of those outer products with operator norm at most \(\sqrt{8 \alpha} + 2 \alpha\). Besides its relation to the Kadison-Singer problem, Weaver’s discrepancy problem has applications in graph sparsification and randomized experimental design. In this talk, we will prove that it is NP-hard to distinguish such a list of vectors for which there is a signed sum that equals the zero matrix from those in which every signed sum has operator norm at least \(k \sqrt{\alpha}\), for some absolute constant \(k > 0\). Thus, it is NP-hard to construct a signing that is a constant factor better than that guaranteed to exist. This result is joint work with Daniel Spielman. (A small numerical illustration of the discrepancy question appears below.)
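
    As a small numerical illustration (ours, not the paper's reduction): the sketch below builds vectors whose outer products sum to the identity, brute-forces all \(2^m\) signings, and checks the Marcus-Spielman-Srivastava guarantee \(\sqrt{8\alpha}+2\alpha\).

    ```python
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(0)
    d, m = 3, 8
    A = rng.normal(size=(d, m))

    # Whitening: B = (A A^T)^{-1/2} A makes the outer products of B's columns
    # sum to the d x d identity, as in Weaver's setup.
    w, V = np.linalg.eigh(A @ A.T)
    B = V @ np.diag(w ** -0.5) @ V.T @ A
    alpha = max(np.linalg.norm(B[:, i]) ** 2 for i in range(m))  # max squared norm

    # Exhaust all 2^m signings of the outer products; keep the best operator norm.
    best = min(
        np.linalg.norm(
            sum(s * np.outer(B[:, i], B[:, i]) for i, s in enumerate(signs)), 2
        )
        for signs in product([1, -1], repeat=m)
    )

    # MSS guarantee: some signing achieves operator norm sqrt(8*alpha) + 2*alpha.
    print(best, np.sqrt(8 * alpha) + 2 * alpha)
    assert best <= np.sqrt(8 * alpha) + 2 * alpha
    ```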
Oct. 13
  • Dominik Kempa (Stony Brook)
  • Abstract: Lempel-Ziv (LZ77) compression is one of the most commonly used lossless compression algorithms. The basic idea is to greedily break the input string into blocks (called phrases), every time forming as a phrase the longest prefix of the unprocessed part that has an earlier occurrence. In 2010, Kreft and Navarro introduced a variant of LZ77 called LZ-End, which requires the previous occurrence of each phrase to end at the boundary of an already existing phrase. They conjectured that it achieves a compression that is always close to LZ77. In this talk, we: (1) present the first proof of this conjecture; (2) discuss the first data structure implementing fast random access to the original string using space linear in the size of the LZ-End parsing. We will also give a broad overview of the increasingly popular field of compressed data structures and algorithms. (A toy sketch of the greedy parsing rule appears at the end of this entry.)

    This is joint work with Barna Saha (UC San Diego) and was presented at SODA 2022.
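
    A toy sketch of the greedy parsing rule (one common LZ77 convention, with self-referential occurrences allowed; not the authors' implementation):

    ```python
    def lz77_parse(s):
        # Greedy LZ77: each phrase is the longest prefix of the unprocessed part
        # that also occurs starting at an earlier position, plus one new character.
        phrases, i, n = [], 0, len(s)
        while i < n:
            longest = 0
            for j in range(i):  # earlier starting positions (may overlap phrase)
                L = 0
                while i + L < n and s[j + L] == s[i + L]:
                    L += 1
                longest = max(longest, L)
            end = min(i + longest + 1, n)  # extend by one literal character
            phrases.append(s[i:end])
            i = end
        return phrases

    # LZ-End (Kreft-Navarro) additionally requires the earlier occurrence of each
    # phrase to end exactly at a boundary of an already existing phrase.
    print(lz77_parse("abababab"))  # ['a', 'b', 'ababab']
    ```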
Oct. 18 (this is a Tuesday!), Room 202
  • Yuval Rabani (Hebrew University of Jerusalem)
  • Abstract: We prove new bounds, mostly lower bounds, on the competitive ratio of a few online problems, including the \(k\)-server problem and other related problems. In particular:

    1. The randomized competitive ratio of the \(k\)-server problem is at least \(\Omega(\log^2 k)\) in some metric spaces. This refutes the long-standing randomized \(k\)-server conjecture that the competitive ratio is \(O(\log k)\) in all metric spaces.

    2. Consequently, there is a lower bound of \(\Omega(\log^2 n)\) on the competitive ratio of metrical task systems in some \(n\)-point metric spaces, refuting an analogous conjecture that in all \(n\)-point metric spaces the competitive ratio is \(O(\log n)\). This lower bound matches asymptotically the best previously known universal upper bound.

    3. The randomized competitive ratio of traversing width-\(w\) layered graphs is \(\Theta(w^2)\). The lower bound improves slightly the previously best lower bound. The upper bound improves substantially the previously best upper bound.

    4. The \(k\)-server lower bounds imply improved lower bounds on the competitive ratio of distributed paging and metric allocation.

    5. The universal lower bound on the randomized competitive ratio of the \(k\)-server problem is \(\Omega(\log k)\). Consequently, the universal lower bound for \(n\)-point metrical task systems is \(\Omega(\log n)\). These bounds improve the previously known universal lower bounds, and they match asymptotically existential upper bounds.

    The talk is based on two papers, both joint work with Sebastien Bubeck and Christian Coester: "Shortest paths without a map, but with an entropic regularizer" (the upper bound in 3, to appear in FOCS 2022), and "The randomized \(k\)-server conjecture is false!" (all the lower bounds, manuscript).
Oct. 27
  • Jessica Sorrell (University of Pennsylvania)
  • Abstract: Reproducibility is vital to ensuring scientific conclusions are reliable, but failures of reproducibility have been a major issue in nearly all scientific areas of study in recent decades. A key issue underlying the reproducibility crisis is the explosion of methods for data generation, screening, testing, and analysis, where, crucially, only the combinations producing the most significant results are reported. Such practices (also known as p-hacking, data dredging, and researcher degrees of freedom) can lead to erroneous findings that appear to be significant, but that don’t hold up when other researchers attempt to replicate them.

    In this talk, we introduce a new notion of reproducibility for randomized algorithms. This notion ensures that, with high probability, an algorithm returns exactly the same output when run on two samples from the same distribution. Despite this exceedingly strong demand, there are efficient reproducible algorithms for several fundamental problems in statistics and learning, including statistical queries, approximate heavy-hitters, medians, and halfspaces. We also discuss connections to other well-studied notions of algorithmic stability, such as differential privacy. (A minimal sketch of one route to reproducibility appears at the end of this entry.)

    This talk is based on prior and ongoing work with Mark Bun, Marco Gaboardi, Max Hopkins, Russell Impagliazzo, Rex Lei, Toniann Pitassi, and Satchit Sivakumar.
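
    A minimal sketch of one standard route to reproducibility (a random-rounding trick, illustrative only, not necessarily the paper's algorithms): both runs share their internal randomness and snap the empirical estimate to a randomly shifted grid, so two close estimates usually produce bit-identical outputs.

    ```python
    import random

    def reproducible_mean(sample, shared_seed, cell=0.1):
        # The internal randomness (shared_seed) is shared across both runs.
        # Snap the empirical mean to a grid with a random offset: two estimates
        # falling in the same cell yield exactly the same output.
        offset = random.Random(shared_seed).uniform(0, cell)
        est = sum(sample) / len(sample)
        return round((est - offset) / cell) * cell + offset

    # Two independent samples from the same distribution, same internal coins:
    gen1, gen2 = random.Random(1), random.Random(2)
    s1 = [gen1.gauss(0.0, 1.0) for _ in range(4000)]
    s2 = [gen2.gauss(0.0, 1.0) for _ in range(4000)]
    print(reproducible_mean(s1, 42) == reproducible_mean(s2, 42))
    # True with high probability: both empirical means lie within ~0.02 of zero,
    # so they rarely straddle a boundary of the width-0.1 shifted grid.
    ```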
Nov. 3
  • Riko Jacob (IT University of Copenhagen)
  • Authors: Michael T. Goodrich, Riko Jacob, and Nodari Sitchinava

    Abstract: We prove an \(\Omega(\log n \log \log n)\) lower bound for the span of implementing the \(n\)-input, \(\log n\)-depth FFT circuit (also known as the butterfly network) in the nonatomic binary fork-join model. In this model, memory-access synchronizations occur only through fork operations, which spawn two child threads, and join operations, which resume a parent thread when its child threads terminate. Our bound is asymptotically tight for the nonatomic binary fork-join model, which has been of interest of late due to its conceptual elegance and ability to capture asynchrony. Our bound implies a super-logarithmic lower bound in the nonatomic binary fork-join model for implementing the butterfly merging networks used, e.g., in Batcher's bitonic and odd-even mergesort networks. This lower bound also implies an asymptotic separation result for the atomic and nonatomic versions of the fork-join model since, as we point out, FFT circuits can be implemented in the atomic binary fork-join model with span equal to their circuit depth.
Nov. 10
  • Huacheng Yu (Princeton University)
  • Abstract: In this talk, we show a strong XOR lemma for bounded-round two-player randomized communication. For a function \(f:X\times Y\rightarrow\{0,1\}\), the \(n\)-fold XOR function \(f^{\oplus n}:X^n\times Y^n \rightarrow\{0,1\}\) maps \(n\) input pairs \((x_1,\ldots,x_n), (y_1,\ldots,y_n)\) to the XOR of the \(n\) output bits \(f(x_1,y_1)\oplus \ldots \oplus f(x_n, y_n)\). We prove that if every \(r\)-round communication protocol that computes \(f\) with probability 2/3 uses at least \(C\) bits of communication, then any \(r\)-round protocol that computes \(f^{\oplus n}\) with probability \(1/2+\exp(-O(n))\) must use \(n(r^{-O(r)}C-1)\) bits. When \(r\) is a constant and \(C\) is sufficiently large, this is \(\Omega(nC)\) bits. It matches the communication cost and the success probability of the trivial protocol that computes the \(n\) bits \(f(x_i,y_i)\) independently and outputs their XOR, up to a constant factor in \(n\) (see the calculation below). A similar XOR lemma has been proved for \(f\) whose communication lower bound can be obtained via bounding the discrepancy [Shaltiel03]. By the equivalence between the discrepancy and the correlation with 2-bit communication protocols, our new XOR lemma implies the previous result.
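
    For intuition, here is the standard parity ("piling-up") calculation, added here for completeness, showing the trivial protocol's success probability when each of the \(n\) independent runs errs with probability exactly 1/3:

    \[
    \Pr[\text{XOR is correct}]
    = \Pr[\#\{\text{erring runs}\} \text{ is even}]
    = \frac{1}{2} + \frac{1}{2}\prod_{i=1}^{n}\bigl(1 - 2\Pr[\text{run } i \text{ errs}]\bigr)
    = \frac{1}{2} + \frac{1}{2}\cdot 3^{-n}.
    \]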
Nov. 17
  • Sophie Huiberts (Columbia University)
  • Abstract: Explaining why the simplex method is fast in practice, despite it taking exponential time in the theoretical worst case, continues to be a challenge. Smoothed analysis is a paradigm for addressing this question. During my talk I will present an improved upper bound on the smoothed complexity of the simplex method, as well as prove the first non-trivial lower bound on the smoothed complexity. This is joint work with Yin Tat Lee and Xinzhi Zhang.
Dec. 1
  • Roei Tell (Institute for Advanced Study/DIMACS)
  • Abstract: In the first half of the talk I'll set up some background, by describing two recent directions in the study of derandomization: A non-black-box algorithmic framework, which replaces the classical PRG-based paradigm; and free lunch results, which eliminate randomness with essentially no runtime overhead.

    In the second half we'll see one result along these directions: Under hardness assumptions, every doubly efficient proof system with constantly many rounds of interaction can be simulated by a deterministic NP-type verifier, with essentially no runtime overhead, such that no efficient adversary can mislead the deterministic verifier. Consequences include an NP-type verifier of this type for #SAT, running in time \(2^{\epsilon n}\) for an arbitrarily small constant \(\epsilon > 0\); and a complexity-theoretic analysis of the Fiat-Shamir heuristic in cryptography.

    The talk is based on a joint work with Lijie Chen (UC Berkeley).
Dec. 8
  • Michael Chapman (NYU)
  • Abstract: A property is testable if there exists a probabilistic test that distinguishes between objects satisfying this property and objects that are far away from satisfying the property. In other words, if an object passes the test with high probability, it is close to an object that satisfies the test with probability 1 (namely, an object with the desired property). Property testing results are useful for many TCS applications, mainly in error correction, probabilistically checkable proofs and hardness of approximation.

    In this talk we are going to discuss two property testing problems that arise naturally in group theory. (All relevant group theoretic notions will be defined during the talk).

    1. The first is due to Ulam, who in 1940 asked: Given an almost homomorphism between two groups, is it close to an actual homomorphism between the groups? The answer depends on the choice of groups, as well as what we mean by "almost" and "close". We will present some classical and recent results in TCS in this framework, specifically the BLR test and quantum soundness of 2-player games. We will discuss some recent developments and open problems in this field. (A toy version of the BLR test is sketched after this list.)

    2. The second problem is the following: Is being a proper subgroup a testable property? We will discuss partial results and a very nice open problem in this direction.
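
    A toy version of the BLR linearity test from item 1 (our own sketch): over \(\mathbb{F}_2^n\), linear functions always pass the check \(f(x)\oplus f(y)=f(x\oplus y)\), while functions far from linear fail a constant fraction of random checks.

    ```python
    import random

    def blr_test(f, n_bits, trials=200, seed=0):
        # BLR test: sample random x, y in F_2^n, check f(x)+f(y) = f(x+y) mod 2.
        rng = random.Random(seed)
        for _ in range(trials):
            x, y = rng.getrandbits(n_bits), rng.getrandbits(n_bits)
            if f(x) ^ f(y) != f(x ^ y):
                return False  # caught a violated linearity constraint
        return True

    parity = lambda x: bin(x).count("1") % 2         # linear: always passes
    majority = lambda x: int(bin(x).count("1") > 8)  # far from linear on 16 bits
    print(blr_test(parity, 16), blr_test(majority, 16))  # True False (whp)
    ```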
Dec. 15
  • Aaron Sidford (Stanford)
  • Abstract: TBA

Spring 2023

Feb. 2
Feb. 9
Feb. 16
Feb. 23
  • Spencer Peters (Cornell University)
  • Abstract: TBA
Mar. 2
Mar. 9 (Chris away)
Mar. 23
Mar. 30
Apr. 6
Apr. 13
Apr. 20
Apr. 27
May 4

Spring 2019

May 21
  • Jeroen Zuiddam (IAS)
  • Abstract: We give a dual description of the Shannon capacity of graphs. The Shannon capacity of a graph is the rate of growth of the independence number under taking the strong power; in different language, it is the maximum rate at which information can be transmitted over a noisy communication channel. Our dual description expresses the Shannon capacity as a minimization over the "asymptotic spectrum of graphs", which as a consequence unifies previous results and naturally gives rise to new questions. Besides a gentle introduction to the asymptotic spectrum of graphs, we will discuss, if time permits, Strassen's general theory of "asymptotic spectra" and the asymptotic spectrum of tensors. (A small worked example of capacity growth under the strong power appears below.)
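
    A small worked example of these quantities (our own illustration): the 5-cycle \(C_5\) has independence number 2, yet its strong square contains an independent set of size 5, so the Shannon capacity of \(C_5\) is at least \(\sqrt{5}\) (Lovász showed this is tight).

    ```python
    from itertools import combinations

    def related(a, b):
        # Equal or adjacent in the 5-cycle C5.
        return a == b or (a - b) % 5 in (1, 4)

    def strong_adjacent(p, q):
        # In the strong product C5 x C5, distinct pairs are adjacent iff they
        # are equal-or-adjacent in every coordinate.
        return p != q and related(p[0], q[0]) and related(p[1], q[1])

    # Shannon's independent set of size 5 > alpha(C5)^2 = 4 in the strong square:
    S = [(i, (2 * i) % 5) for i in range(5)]
    assert not any(strong_adjacent(p, q) for p, q in combinations(S, 2))
    print("alpha(C5 strong-square) >= 5  =>  Shannon capacity of C5 >= 5**0.5")
    ```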
May 16
  • Omri Weinstein (Columbia University)
  • Abstract: I will talk about new connections between arithmetic data structures, circuit lower bounds and pseudorandomness. As the main result, we show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of \(t \geq \omega(\log^2 n)\) on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space (\(s = (1+\varepsilon)n\)), would already imply a semi-explicit (\(P^{NP}\)) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy, and Yekhanin, 2009). Our result further asserts that polynomial (\(t \geq n^{\delta}\)) data structure lower bounds against near-maximal space would imply super-linear circuit lower bounds for log-depth linear circuits (which would close a four-decade open question). In the succinct space regime (\(s = n+o(n)\)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our main result relies on a new connection between the inner and outer dimensions of a matrix (Paturi and Pudlak, 2006), and on a new worst-to-average case reduction for rigidity, which is of independent interest.

    Based mostly on joint work with Zeev Dvir (Princeton) and Alexander Golovnev (Harvard).
May 2
  • Noah Stephens-Davidowitz (MIT)
  • Abstract: We show that, assuming a common conjecture in complexity theory, there are "no non-trivial algorithms" for the two most important problems in coding theory: the Nearest Codeword Problem (NCP) and the Minimum Distance Problem (MDP). Specifically, for any constant \(\varepsilon > 0\), there is no \(N^{1-\varepsilon}\)-time algorithm for codes with \(N\) codewords. In fact, the NCP result even holds for a family of codes with a single code of each cardinality, and our hardness result therefore also applies to the preprocessing variant of the problem.

    These results are inspired by earlier work showing similar results for the analogous lattice problems (joint works with three other NYU alums: Huck Bennett and Sasha Golovnev, and with Divesh Aggarwal), but the proofs for coding problems are far simpler. As in those works, we also prove weaker hardness results for approximate versions of these problems (showing that there is no \(N^{o(1)}\)-time algorithm in this case). (The trivial brute-force algorithm that these results show is essentially optimal is sketched at the end of this entry.)

    Based on joint work with Vinod Vaikuntanathan.
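
    For reference, a sketch of the trivial brute-force algorithm that these hardness results show is essentially optimal (the toy encoding of codes as tuples is ours):

    ```python
    def nearest_codeword(code, target):
        # Brute force over all N codewords: the N * poly(n)-time baseline that
        # the talk's results show cannot be improved to N^{1-eps}.
        hamming = lambda u, v: sum(a != b for a, b in zip(u, v))
        return min(code, key=lambda c: hamming(c, target))

    code = [(0, 0, 0, 0), (1, 1, 1, 1), (1, 0, 1, 0), (0, 1, 1, 0)]
    print(nearest_codeword(code, (1, 1, 0, 1)))  # (1, 1, 1, 1), at distance 1
    ```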
Apr. 11
  • Li-Yang Tan (Stanford University)
  • Abstract: We prove a new Littlewood--Offord-type anticoncentration inequality for m-facet polytopes, a high-dimensional generalization of the classic Littlewood--Offord theorem. Our proof draws on and extends techniques from Kane's bound on the Boolean average sensitivity of m-facet polytopes. Joint work with Ryan O'Donnell and Rocco Servedio; manuscript available at https://arxiv.org/abs/1808.04035. (A quick numerical check of the classic bound appears below.)
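
    A quick numerical check of the classic bound that this result generalizes (our own addition): with all weights equal to 1, the most likely value of a uniformly random signed sum has probability \(\binom{n}{n/2}/2^n \approx \sqrt{2/(\pi n)}\), which tends to 0.

    ```python
    from math import comb, pi, sqrt

    # Classic Littlewood-Offord, simplest case: for random signs eps_i, the most
    # likely value of eps_1 + ... + eps_n (n even) is 0, hit with probability
    # C(n, n/2) / 2^n, matching the ~ sqrt(2 / (pi n)) anticoncentration bound.
    n = 1000
    print(comb(n, n // 2) / 2 ** n)  # ~0.02523
    print(sqrt(2 / (pi * n)))        # ~0.02523
    ```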
Mar. 21
  • Sahil Singla (Princeton)
  • Abstract: In the classical secretary problem, a sequence of n elements arrives in a uniformly random order. The goal is to maximize the probability of selecting the largest element (or to maximize the expected value of the selected item). This model captures applications like online auctions, where we want to select the highest bidder. In many such applications, however, one may expect a few outlier arrivals that are adversarially placed in the arrival sequence. Can we still select a large element with good probability? Dynkin’s popular 1/e-secretary algorithm is sensitive to even a single adversarial arrival: if the adversary gives one large bid at the beginning of the stream, the algorithm does not select any element at all (see the sketch at the end of this entry).

    In this work we introduce the Byzantine Secretary problem, where we have two kinds of elements: green (good) and red (bad). The green elements arrive uniformly at random. The reds arrive adversarially. The goal is to find a large green element. It is easy to see that selecting the largest green element is not possible even when a small fraction of the arrivals is red, i.e., we cannot do much better than random guessing. Hence we introduce the second-max benchmark, where the goal is to select the second-largest green element or better. This dramatically improves our results.

    We study both the probability-maximization and the value-maximization settings. For probability-maximization, we show the existence of a good randomized algorithm, using the minimax principle. Specifically, we give an algorithm for the known-distribution case, based on trying to guess the second-max in hindsight, and using this estimate as a good guess for the future. For value-maximization, we give an explicit \(\mathrm{poly}(\log^* n)\)-competitive algorithm, using a multi-layered bucketing scheme that adaptively refines our estimates of the second-max as we see more elements.

    For the multiple secretary problem, where we can pick up to r secretaries, we show constant competitiveness as long as r is large enough. For this, we give an adaptive thresholding algorithm that raises and lowers thresholds depending on the quality and the quantity of recently selected elements.
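
    A minimal implementation (ours) of Dynkin's rule and the single-outlier failure mode described above:

    ```python
    import math
    import random

    def dynkin(stream):
        # Observe the first n/e elements; then accept the first element that
        # beats everything observed so far (selecting nothing if none does).
        cutoff = int(len(stream) / math.e)
        threshold = max(stream[:cutoff], default=float("-inf"))
        for x in stream[cutoff:]:
            if x > threshold:
                return x
        return None

    rng = random.Random(0)
    values = [rng.random() for _ in range(100)]
    trials = 2000
    wins = sum(
        dynkin(rng.sample(values, len(values))) == max(values)
        for _ in range(trials)
    )
    print(wins / trials)  # roughly 1/e ~ 0.37 on uniformly random orders

    # One adversarial arrival breaks it: a huge red bid placed first becomes
    # the threshold, so the algorithm never selects any element at all.
    print(dynkin([10**9] + values))  # None
    ```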
Mar. 14
  • Mert Saglam (University of Washington)
  • Abstract: We answer a 1982 conjecture of Erdős and Simonovits about the growth of the number of \(k\)-walks in a graph, which incidentally was studied even earlier by Blakley and Dixon in 1966. We prove this conjecture in a more general setup than the earlier treatment; furthermore, through a refinement and strengthening of this inequality, we resolve two related open questions in complexity theory: the communication complexity of the \(k\)-Hamming distance is \(\Omega(k \log k)\), and consequently any property tester for \(k\)-linearity requires \(\Omega(k \log k)\) queries.
Jan. 24
  • Mika Göös (IAS)
  • Abstract: Separations: We introduce a monotone variant of XOR-SAT and show it has exponential monotone circuit complexity. Since XOR-SAT is in NC^2, this improves qualitatively on the monotone vs. non-monotone separation of Tardos (1988). We also show that monotone span programs over R can be exponentially more powerful than over finite fields. These results can be interpreted as separating subclasses of TFNP in communication complexity.

    Characterizations: We show that the communication (resp. query) analogue of PPA (a subclass of TFNP) captures span programs over F_2 (resp. Nullstellensatz degree over F_2). Previously, it was known that communication FP captures formulas (Karchmer--Wigderson, 1988) and that communication PLS captures circuits (Razborov, 1995). Joint work with Pritish Kamath, Robert Robere, and Dmitry Sokolov.

Related Seminars at NYU