We show the first poly-time algorithm for partitioning a simple polygon P into a minimum number of star-shaped polygons. A polygon is called star-shaped if there is an interior point which can see all of it. This is a cute and long-standing open problem in computational geometry.
No background in computational geometry is necessary to follow the talk :)
The question of whether such an algorithm exists has been open for more than four decades [Avis and Toussaint, Pattern Recognit., 1981] and it has been repeated frequently, for example in O'Rourke's famous book [Art Gallery Theorems and Algorithms, 1987]. In addition to its strong theoretical motivation, the problem is also motivated by practical domains such as CNC pocket milling, motion planning, and shape parameterization. The covering variant in which the pieces are star-shaped but allowed to overlap---known as the Art Gallery Problem---was recently shown to be \exists\mathbb{R}-complete and is thus likely not in NP [Abrahamsen, Adamaszek and Miltzow, STOC 2018 & J. ACM 2022]; this is in stark, and perhaps surprising, contrast to our result.
Joint work with: Mikkel Abrahamsen, Andre Nusser, and Hanwen Zhang
The paper is in submission and will appear on arXiv "soon".
- Monday, April 24th, Artur Riazanov Decision Depth Lower Bounds for Sampling
A circuit samples a distribution X with error \eps if the statistical distance between the output of the circuit on a uniform input and X is at most \eps. Studying the hardness of sampling distributions was proposed by Viola [RANDOM 2012] and has since found many nontrivial applications and relations with other areas of complexity, such as succinct data structure lower bounds (Viola [RANDOM 2012]; Viola [SIAM J. Comput. 2020]) and 2-source extractor constructions (Viola [FOCS 2011]; Chattopadhyay, Zuckerman [STOC 2016]). We study the hardness of sampling by decision forests (each output bit is computed by a decision tree). Our main focus is uniform distributions over strings of a fixed Hamming weight; for Hamming weight o(n), we show polynomially optimal decision depth lower bounds for sampling these distributions.
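In symbols: a circuit C with m input bits samples X with error \eps if d_{TV}(C(U_m), X) = \max_{S \subseteq \{0,1\}^n} |\Pr[C(U_m) \in S] - \Pr[X \in S]| \le \eps, where U_m is uniform over \{0,1\}^m; in the decision-forest model, each of the n output bits of C is computed by its own depth-bounded decision tree querying the random input bits.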
Joint work with Yuval Filmus, Itai Leigh, and Dmitry Sokolov
- Monday, April 17th, Michael Kapralov A Quasi-Monte Carlo Data Structure for Smooth Kernel Evaluations
In the kernel density estimation (KDE) problem one is given a kernel K(x, y) and a dataset P of points in a high dimensional Euclidean space, and must prepare a small space data structure that can quickly answer density queries: given a point q, output a (1+\e)-approximation to \mu:=\frac1{|P|}\sum_{p\in P} K(p, q). The classical approach to KDE (and the more general problem of matrix vector multiplication for kernel matrices) is the celebrated fast multipole method of Greengard and Rokhlin [1983]. The fast multipole method combines a basic space partitioning approach with a multidimensional Taylor expansion, which yields a \approx \log^d (n/\e) query time (exponential in the dimension). A recent line of work initiated by Charikar and Siminelakis'17 achieved polynomial dependence on d via a combination of random sampling and randomized space partitioning, with Backurs et al'18 giving an efficient data structure with query time \approx \poly{\log(1/\mu)}/\e^2 for smooth kernels.
Quadratic dependence on \e, inherent to the sampling (i.e., Monte Carlo) methods above, is prohibitively expensive for small \e. This is a classical issue addressed by quasi-Monte Carlo methods in numerical analysis. The high level idea in quasi-Monte Carlo methods is to replace random sampling with a discrepancy based approach. I will show how to get a data structure with \approx \poly{\log(1/\mu)}/\e query time for smooth kernel KDE. Our main insight is a new way to combine discrepancy theory with randomized space partitioning inspired by, but significantly more efficient than, that of the fast multipole methods. We hope that our techniques will find further applications to linear algebra for kernel matrices.
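For orientation, here is the naive Monte Carlo baseline that the quasi-Monte Carlo approach improves upon (my own sketch, not the talk's data structure; the Gaussian kernel and the a priori lower bound mu_lower on \mu are assumptions of the example):

    import numpy as np

    def kde_exact(P, q):
        # mu = (1/|P|) * sum_p K(p, q) with the Gaussian kernel
        return np.mean(np.exp(-np.sum((P - q) ** 2, axis=1)))

    def kde_monte_carlo(P, q, eps, mu_lower, rng):
        # Uniform sampling: ~1/(mu * eps^2) samples give a (1+eps)-
        # approximation w.h.p.; this 1/eps^2 factor is the quadratic
        # dependence that is prohibitive for small eps.
        m = int(np.ceil(1.0 / (mu_lower * eps ** 2)))
        S = P[rng.integers(0, len(P), size=m)]
        return np.mean(np.exp(-np.sum((S - q) ** 2, axis=1)))

    rng = np.random.default_rng(0)
    P = rng.normal(size=(100_000, 16), scale=0.3)
    q = np.zeros(16)
    print(kde_exact(P, q), kde_monte_carlo(P, q, eps=0.2, mu_lower=0.01, rng=rng))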
Joint work with Moses Charikar and Erik Waingarten.
- Thursday, December 8th, Nathaniel Harms Distribution Testing under the Parity Trace
We introduce a distribution testing task called distribution testing under the parity trace, in which the testing algorithm receives the parity trace of a sample instead of the sample itself. For a sample S drawn from a distribution over N, the parity trace of S is the ordered sequence of parities of the sample points x \in S. This prevents the algorithm from computing the histogram of the sample, which is usually the first step of a distribution tester. Distribution testing under the parity trace is essential for three natural types of testing problems:
1. It is necessary for understanding several important property testing problems in the distribution-free sample-based testing model. It is essentially equivalent to testing certain joint properties of function–distribution pairs, including the basic property of k-alternating functions, among others.
2. We initiate the study of property testing in the well-studied trace reconstruction setting, and show that distribution testing under the parity trace is central to these problems. This leads to a number of results on property testing for trace reconstruction, including a surprising non-constructive `testing-by-learning' reduction to standard PAC learning.
3. We define the closely-related task of distribution testing with a confused collector, where the collector of the sample cannot perfectly distinguish between nearby elements of the domain, and therefore systematically mislabels the sample.
The most fundamental distribution testing problem is testing uniformity. Our main technical result is a tight \widetilde{\Theta}((n/\epsilon)^{4/5} + \sqrt{n}/\epsilon^2) bound on the sample complexity of testing uniformity of a distribution over domain [n] under the parity trace, which implies related results for the three types of testing problems above.
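To make the access model concrete, a minimal sketch of the parity trace (my own illustration, reading the sample in sorted order):

    def parity_trace(sample):
        """Return the ordered sequence of parities of the sample points.

        A tester under the parity trace sees only this bit string, so it
        cannot recover the histogram of the sample, which is usually the
        first step of a standard distribution tester.
        """
        return [x % 2 for x in sorted(sample)]

    # A sample of 5 points from a distribution over the natural numbers:
    print(parity_trace([7, 2, 2, 10, 5]))  # [0, 0, 1, 1, 0]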
- Thursday, December 1st, Mikhail Makarov Expander Decomposition in Dynamic Streams
I will talk about expander decompositions of a graph G = (V, E) in the streaming model of computation. The goal is to find a partitioning C of the vertices V such that the subgraphs of G induced by the clusters of C are good expanders, while keeping the number of intercluster edges small. Expander decompositions are classically constructed by recursively applying balanced sparse cuts to the input graph. We give the first implementation of such a recursive sparsest cut process using small space in the dynamic streaming model.
Our main algorithmic tool is a new type of cut sparsifier that we refer to as a power cut sparsifier: it preserves cuts in any given vertex-induced subgraph (or, any cluster in a fixed partition of V) to within a (\delta, \eps)-multiplicative/additive error with high probability.
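To fix ideas, here is the classical offline recursion in a brute-force toy form (my sketch; real constructions use balanced sparse cuts found by approximation algorithms, and the talk's contribution is emulating this recursion in small space in a dynamic stream):

    from itertools import combinations

    def sparse_cut(adj, phi):
        """Brute force: return a cut (S, V \ S) of conductance < phi,
        or None if the graph is a phi-expander (tiny graphs only)."""
        V = list(adj)
        for r in range(1, len(V) // 2 + 1):
            for S in combinations(V, r):
                S = set(S)
                cross = sum(1 for u in S for v in adj[u] if v not in S)
                vol = min(sum(len(adj[v]) for v in S),
                          sum(len(adj[v]) for v in V if v not in S))
                if vol > 0 and cross / vol < phi:
                    return S, set(V) - S
        return None

    def expander_decompose(adj, phi):
        """Recursively cut along sparse cuts until every cluster induces
        a phi-expander; intercluster edges are the ones cut on the way."""
        cut = sparse_cut(adj, phi)
        if cut is None:
            return [set(adj)]
        clusters = []
        for side in cut:
            sub = {v: [u for u in adj[v] if u in side] for v in side}
            clusters += expander_decompose(sub, phi)
        return clusters

    # Two triangles joined by a single edge decompose into the triangles:
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
           3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    print(expander_decompose(adj, phi=0.2))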
Based on a joint work with Michael Kapralov and Arnold Filtser.
- Thursday, November 17th in INJ114, Kshiteej Sheth Toeplitz Low-Rank Approximation with Sublinear Query Complexity
In scientific computing, engineering, and signal processing, highly structured matrices such as circulant and Toeplitz matrices arise in many problems, often due to the discretization of an underlying physical system.
Very fast, near-linear time algorithms have long existed for fundamental problems of numerical linear algebra on Toeplitz matrices. In this work we show how to find a good low-rank approximation of a Toeplitz matrix using sublinear query complexity and runtime, i.e., without even reading a single row of the matrix in its entirety. Our main conceptual contribution is to show that a good low-rank approximation that is itself Toeplitz exists; this basic structural theorem was not known before. At a technical level, the theorem is achieved by combining the classical Vandermonde decomposition of Toeplitz matrices with techniques from the Sparse Fourier Transform literature.
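For context, the classical decomposition mentioned above (stated here for the positive semidefinite case): any PSD Toeplitz matrix T \in \mathbb{C}^{n \times n} of rank r can be written as T = V D V^*, where D is an r \times r positive diagonal matrix and V is the n \times r Fourier-Vandermonde matrix with entries V_{j,k} = e^{2\pi i f_k j} for frequencies f_1, ..., f_r. Locating the few frequencies that matter is where the Sparse Fourier Transform techniques enter.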
This is joint work with Michael Kapralov (EPFL), Hannah Lawrence (MIT), Mikhail Makarov (EPFL), Cameron Musco (UMass Amherst) to appear in SODA 2023.
- Thursday, November 10th, in INJ114, Alexandros Hollender Pure-Circuit: Strong Inapproximability for PPAD
The current state-of-the-art methods for showing inapproximability in PPAD arise from the \eps-Generalized-Circuit (\eps-GCircuit) problem. Rubinstein (2018) showed that there exists a small unknown constant \eps for which \eps-GCircuit is PPAD-hard, and subsequent work has shown hardness results for other problems in PPAD by using \eps-GCircuit as an intermediate problem.
We introduce Pure-Circuit, a new intermediate problem for PPAD, which can be thought of as \eps-GCircuit pushed to the limit as \eps \to 1, and we show that the problem is PPAD-complete. We then prove that \eps-GCircuit is PPAD-hard for all \eps<0.1 by a reduction from Pure-Circuit, and thus strengthen all prior work that has used GCircuit as an intermediate problem from the existential-constant regime to the large-constant regime.
We show that stronger inapproximability results can be derived by reducing directly from Pure-Circuit. In particular, we prove tight inapproximability results for computing \eps-well-supported Nash equilibria in two-action polymatrix games, as well as for finding approximate equilibria in threshold games.
Joint work with Argyrios Deligkas, John Fearnley, and Themistoklis Melissourgos.
- Thursday, October 27th, Weiqiang Yuan, The Exact Bipartite Matching Polytope Has Exponential Extension Complexity
Given a graph with edges colored red or blue and an integer k, the exact perfect matching problem asks if there exists a perfect matching with exactly k red edges. There exists a randomized polylogarithmic-time parallel algorithm to solve this problem, dating back to the eighties, but no deterministic polynomial-time algorithm is known, even for bipartite graphs. Some previous work suggests that understanding the structure of the exact bipartite matching polytope might be a key step to derandomizing the algorithm.
We prove that the exact bipartite matching polytope has exponential extension complexity. In fact, we prove something stronger: the convex hull of perfect matchings with an odd number of red edges has exponential extension complexity. In this talk, I will mainly focus on the results and tools in extension complexity developed in the past 30 years, as well as our linear inequality relaxation of the odd-parity matching polytope (we leave showing whether it is tight as an open problem). I will also briefly talk about some key steps of our proof.
Based on joint work with Xinrui Jia and Ola Svensson.
- Thursday, October 20th, in GA331, Goran Zuzic, Universal optimality in distributed computing and its connections to diverse areas of theoretical computer science
The modern computation and information processing systems shaping our world have become massively distributed, and a fundamental understanding of distributed algorithmics has never been more important. At the same time, despite 40 years of intense study, we often do not have an adequate understanding of the fundamental barriers that rule out the existence of ultra-fast distributed algorithms. This is true even for the most well-studied problems in computer science---including the shortest path, minimum spanning tree, minimum cut, and many other well-known tasks.
In this talk, I will present a high-level overview of a sequence of papers that give a near-complete answer to the above question. Its culmination is the following pie-in-the-sky result called *universal optimality*: for all of the tasks mentioned, there exists a single distributed algorithm that, when run on any communication network G, is provably competitive with the fastest algorithm on G.
The pursuit of universal optimality has led to the development of many new connections between distributed computing and other seemingly unrelated areas of theoretical computer science. Curiously, these connections have been mutually-beneficial and have already led to many breakthroughs not only in distributed computing, but also in the theory of metric embedding, information theory, and oblivious packet routing. I will briefly explore these connections.
- Thursday, October 13th, Gilbert Maystre, An inner-optimal composition theorem for randomised query complexity
Given two boolean functions f and g, is there a nice way to express the complexity of the composed function f.g in terms of the respective complexities of f and g? This is a basic question one can ask about any computational model. In the context of decision tree complexity, the question has long been settled for the deterministic and quantum cases (the relation is multiplicative). In stark contrast, the case of randomized query complexity has proven to be far more intricate, as demonstrated by a long line of work in the past decade. In this talk, we will introduce LR, a new "linearized randomized" complexity measure, and show that R(f.g) \geq R(f)LR(g). LR is the best such measure one can hope for, for general f and g, making the new theorem optimal.
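Here f.g denotes block composition: for f on n bits and g on m bits, (f.g)(x_1, ..., x_n) = f(g(x_1), ..., g(x_n)) where each x_i \in \{0,1\}^m. The settled cases mentioned above read D(f.g) = D(f) D(g) for deterministic and Q(f.g) = \Theta(Q(f) Q(g)) for quantum query complexity, whereas for randomized query complexity R(f.g) can be asymptotically smaller than R(f) R(g).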
Based on joint work with Shalev Ben-David, Eric Blais and Mika Göös.
- Thursday, October 6th, Yu Chen, Weighted Graph Sparsification by Linear Sketching
A seminal work of [Ahn-Guha-McGregor, PODS'12] showed that one can compute a cut sparsifier of an unweighted undirected graph by taking a near-linear number of linear measurements on the graph. Subsequent works studied computing other graph sparsifiers using linear sketching, obtaining near-linear upper bounds for spectral sparsifiers [Kapralov-Lee-Musco-Musco-Sidford, FOCS'14] and the first non-trivial upper bounds for spanners [Filtser-Kapralov-Nouri, SODA'21]. All these linear sketching algorithms, however, only work on unweighted graphs and are extended to weighted graphs by weight grouping, a non-linear operation not implementable in, for instance, general turnstile streams.
In this work, we initiate the study of weighted graph sparsification by linear sketching. We give an algorithm that computes a (1+\varepsilon)-cut sparsifier using \tilde{O}(n) linear measurements and an algorithm that computes a (1+\varepsilon)-spectral sparsifier using \tilde{O}(n^{6/5}) linear measurements. In this talk, I'll focus on the main difference between the weighted and unweighted cases: sampling low-strength edges for the cut sparsifier and sampling high-effective-resistance edges for the spectral sparsifier.
- Thursday, September 29th, Dmitry Sokolov Resolution Complexity of Nisan--Wigderson Generators
Following the paper of Alekhnovich, Ben-Sasson, Razborov, and Wigderson, we call a pseudorandom generator (PRG) $F\colon \{0, 1\}^n \to \{0, 1\}^m$ hard for a propositional proof system $P$ if $P$ cannot efficiently certify the (properly encoded) statement $b \notin Img(F)$ for any string $b \in \{0, 1\}^m$. The study of the proof complexity of PRGs is motivated by various problems in mathematical logic as well as circuit complexity. In this talk, we consider some recent developments in proving lower bounds on the Nisan--Wigderson pseudorandom generator, and we discuss related open problems in proof complexity.
- Thursday, September 22nd, Greg Gluch, Nonlocality vs Entanglement - Quantum Mechanics meets Computational Complexity
Axioms of quantum theory define states of a quantum system to be vectors in a Hilbert space. These states, \rho_{AB}, shared between two parties A and B, can be naturally split into two sets. The first one is the set of separable states, i.e., states that can be written as a convex combination of tensor product states. All other states are called entangled. This separation relies purely on mathematical formalism. The question of Entanglement = Nonlocality asks whether there is an operational (theory-independent) distinction between separable and entangled states. More formally, we ask if it is true that for every entangled state \rho_{AB} there exists a game between A, B and a verifier V that distinguishes between two cases: A and B sharing \rho_{AB} versus A and B sharing only a separable state. In this talk I will explore the fascinating history of this problem and present new results linking it with computational complexity. I will show, for instance, that Entanglement = Nonlocality under cryptographic assumptions.
- Thursday, September 15th, Ido Nachum, On the Geometrical Representation Induced by Batch Normalization
Batch normalization (BN) is a successful practice in neural network training, yet it is not well understood. BN consists of two stages, each computed over the batch: mean subtraction and normalization by the standard deviation. It is typically applied after the matrix multiplication and before the application of the non-linearity in each layer.
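A minimal numpy sketch of the two stages for one layer (my illustration; the learned scale-and-shift parameters of standard BN are omitted):

    import numpy as np

    def batch_norm(Z, eps=1e-5):
        """Batch normalization of pre-activations Z (batch size x width):
        stage 1 subtracts the per-feature mean over the batch, stage 2
        normalizes by the per-feature standard deviation."""
        Z = Z - Z.mean(axis=0)                   # mean subtraction
        return Z / np.sqrt(Z.var(axis=0) + eps)  # std normalization

    def layer(X, W):
        # BN is applied after the matrix multiplication and before ReLU.
        return np.maximum(batch_norm(X @ W), 0.0)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(128, 64))               # a batch of 128 inputs
    W = rng.normal(size=(64, 64)) / 8.0
    print(layer(X, W).shape)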
This work follows the work of Hadi Daneshmand, Amir Joudaki, and Francis Bach [NeurIPS'21], where they study a deep neural network at initialization with no non-linearity and only the normalization part of BN. Here, we present a complementary picture where we study the mean subtraction followed by the ReLU non-linearity.
Our work has two parts. First, we show empirically that ReLU deep networks with BN no longer orthogonalize the representation, and that the representation induced by complete BN differs from the one obtained using only variance normalization. Second, we provide a theoretical explanation for this observed behavior.
- Monday, July 4th, Billy Jin, Tight Robustness-Consistency Tradeoffs for Two-Stage Bipartite Matching
We study the two-stage vertex-weighted online bipartite matching problem of Feng, Niazadeh, and Saberi (SODA'21) in a setting where the algorithm has access to a suggested matching that is recommended in the first stage. We evaluate an algorithm by its robustness $R$, which is its performance relative to that of the optimal offline matching, and its consistency $C$, which is its performance when the advice or prediction given is correct. We characterize for this problem the Pareto-efficient frontier between robustness and consistency, a characterization that is rare in the literature on advice-augmented algorithms, yet necessary for certifying such an algorithm as optimal. Specifically, we propose an algorithm that is $R$-robust and $C$-consistent for any $(R,C)$ with $0 \leq R \leq \frac{3}{4}$ and $\sqrt{1-R} + \sqrt{1-C} = 1$, and prove that no other algorithm can achieve a better tradeoff.
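To read the tradeoff curve: at the endpoint R = 3/4, the constraint \sqrt{1-R} + \sqrt{1-C} = 1 gives \sqrt{1-C} = 1/2, i.e. C = 3/4, so maximal robustness caps consistency at 3/4 as well; at the other endpoint, full consistency C = 1 forces \sqrt{1-R} = 1, i.e. R = 0, the regime of blindly following the advice.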
This is joint work with Will Ma from Columbia University.
- Thursday, June 16th, Siqi Liu, Testing thresholds for high-dimensional sparse random geometric graphs
In the random geometric graph model, we identify each of our n vertices with an independently and uniformly sampled vector from the d-dimensional unit sphere, and we connect pairs of vertices whose vectors are "sufficiently close", such that the marginal probability of an edge is p. We investigate the problem of distinguishing an Erdos-Renyi graph from a random geometric graph. When p = c / n for constant c, we prove that if d \geq poly log n, the total variation distance between the two distributions is close to 0; this improves upon the best previous bound of Brennan, Bresler, and Nagaraj (2020), which required d >> n^{3/2}. Moreover, our result is nearly tight, resolving a conjecture of Bubeck, Ding, Eldan, and Racz (2016) up to logarithmic factors. We also obtain improved upper bounds on the statistical indistinguishability thresholds in d for the full range of p satisfying 1/n \leq p \leq 1/2, improving upon the previous bounds by polynomial factors.
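A quick simulation of the two distributions (my sketch; "sufficiently close" is implemented by thresholding inner products, with the threshold set empirically so that the edge density is p):

    import numpy as np

    def sample_graphs(n, d, p, rng):
        """Adjacency matrices of an Erdos-Renyi graph and a random
        geometric graph on the unit sphere, both with edge density ~p."""
        A_er = np.triu(rng.random((n, n)) < p, 1)
        A_er = A_er | A_er.T

        X = rng.normal(size=(n, d))                 # uniform on S^{d-1}
        X /= np.linalg.norm(X, axis=1, keepdims=True)
        G = X @ X.T                                 # pairwise inner products
        tau = np.quantile(G[np.triu_indices(n, 1)], 1 - p)
        A_geo = np.triu(G >= tau, 1)
        return A_er, A_geo | A_geo.T

    rng = np.random.default_rng(1)
    A_er, A_geo = sample_graphs(n=500, d=64, p=10 / 500, rng=rng)
    print(A_er.sum() // 2, A_geo.sum() // 2)        # similar edge counts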
In this talk, we will discuss the key ideas in our proof, which include:
- Analyzing the Belief Propagation algorithm to characterize the distributions of (subsets of) the random vectors conditioned on producing a particular graph.
- Sharp estimates for the area of the intersection of a random sphere cap with an arbitrary subset of the sphere, which are proved using optimal transport maps and entropy-transport inequalities on the unit sphere.
Based on joint work with Sidhanth Mohanty, Tselil Schramm, and Elizabeth Yang.
- Thursday, May 19th, Siddhartha Jain, TFNP: Collapses, separations, and characterisations
We discuss the class of total search problems with short certificates, TFNP (Total Function Nondeterministic Polynomial). We will start with a quick reminder on the commonly studied subclasses of TFNP, after which we explain our contributions. Our results include new collapses, black-box separations and characterisations of the black-box/query analogues using propositional proof systems. Finally, we will focus on our collapse result with a (perhaps surprisingly) short proof: EoPL = PLS \cap PPAD.
Joint work with more than enough coauthors for a volleyball team: Alexandros Hollender, Mika Göös, Gilbert Maystre, William Pires, Robert Robere, and Ran Tao.
- Thursday, May 12th, Xinrui Jia, Recent Work on Explainable Clustering
Many classic clustering algorithms use the feature values of the points in a complicated way to output the clusters. Hence, the inclusion of any particular point in its cluster may not be easy to interpret. In this talk I will discuss the work of Dasgupta, Frost, Moshkovitz, and Rashtchian (Explainable k-Means and k-Medians Clustering, ICML'20) that defines an explainable clustering as a clustering where each cluster corresponds to a leaf of a decision tree. Each node in the decision tree is an axis-aligned cut, i.e., a split at some value of a single dimension. This work of DFMR is the first to look at the cost of explainability of clustering from a theoretical perspective and obtains approximation ratios of O(k) for k-medians and O(k^2) for k-means.
I will also present a joint work with Buddhima, Adam, and Ola (Nearly-Tight and Oblivious Algorithms for Explainable Clustering, NeurIPS'21) that improves the approximation ratios to O(log^2 k) for k-medians and O(k log^2 k) for k-means along with a lower bound of \Omega(k) for k-means. Lastly, I will mention the recent work of Makarychev and Shan (Explainable k-means. Don't be greedy, plant bigger trees!, STOC'22) that gives a bi-criteria approximation for explainable k-means which opens (1 + \delta)k centers at a cost of \tilde O(log^2 k).
- Thursday, April 14th, Mikhail Makarov Traversing the FFT Computation Tree for Dimension-Independent Sparse Fourier Transforms
We are interested in the well-studied Sparse Fourier Transform problem, where one aims to quickly recover an approximately Fourier $k$-sparse vector $\widehat{x} \in \mathbb{C}^{n^d}$ from observing its time domain representation $x$. In the exact $k$-sparse case, the previously best known dimension-independent algorithm runs in time near-cubic in $k$. We show how to improve upon it and achieve an almost quadratic in $k$ upper bound through a reduction to a purely discrete problem of tree exploration.
- Thursday, March 24th, Aleksander Lukasiewicz Cardinality estimation using Gumbel distribution
Cardinality estimation is the task of approximating the number of distinct elements in a large dataset with possibly repeating elements. LogLog and HyperLogLog (cf. Durand and Flajolet [ESA 2003], Flajolet et al. [Discrete Math. Theor. Comput. Sci. 2007]) are small-space sketching schemes for cardinality estimation, which have both strong theoretical guarantees of performance and are highly effective in practice. This makes them a highly popular solution with many implementations in big-data systems (e.g. Algebird, Apache DataSketches, BigQuery, Presto and Redis). However, despite their simple and elegant formulation, the analyses of both LogLog and HyperLogLog are extremely involved, spanning tens of pages of analytic combinatorics and complex function analysis. We propose a modification to both LogLog and HyperLogLog that replaces the discrete geometric distribution with a continuous Gumbel distribution. This leads to a very short, simple and elementary analysis of the estimation guarantees, and to smoother behavior of the estimator.
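A toy version of the idea (my own sketch, not the authors' exact estimator): hash each element to a pseudorandom uniform in (0,1), so that duplicates collide, transform it into a Gumbel variate, and keep one maximum per bucket. If a bucket has seen n_j distinct elements, its maximum M_j satisfies exp(-M_j) ~ Exp(n_j), which yields a simple estimate of the total count n.

    import hashlib, math

    def uniform_hash(x, salt="gumbel"):
        # Deterministic hash to (0,1): duplicates map to the same value.
        h = hashlib.sha256((salt + repr(x)).encode()).digest()
        return (int.from_bytes(h[:8], "big") + 1) / (2 ** 64 + 2)

    def gumbel_estimate(stream, m=256):
        maxima = [-math.inf] * m
        for x in stream:
            g = -math.log(-math.log(uniform_hash(x)))  # Gumbel variate
            j = int.from_bytes(hashlib.sha256(repr(x).encode()).digest()[:4],
                               "big") % m              # bucket choice
            maxima[j] = max(maxima[j], g)
        # sum_j exp(-M_j) is approximately Gamma(m, rate n/m), hence:
        s = sum(math.exp(-M) for M in maxima)
        return m * (m - 1) / s

    print(gumbel_estimate(i % 10_000 for i in range(100_000)))  # ~10000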
Joint work with Przemysław Uznański.
- Thursday, March 31st, Chris Jones Sum-of-Squares Lower Bounds for Sparse Independent Set
The Sum-of-Squares (SoS) hierarchy of semidefinite programs is a powerful algorithmic paradigm which captures state-of-the-art algorithmic guarantees for a wide array of problems. In the average case setting, SoS lower bounds provide strong evidence of algorithmic hardness or information-computation gaps. Prior to this work, SoS lower bounds had been obtained for problems in the "dense" input regime, where the input is a collection of independent Rademacher or Gaussian random variables, while the sparse regime had remained out of reach. We make the first progress in this direction by obtaining strong SoS lower bounds for the problem of Independent Set on sparse random graphs. We prove that with high probability over an Erdos-Renyi random graph G \sim G(n,d) with average degree d > log n, degree-d^{\eps} SoS fails to refute the existence of an independent set of size approximately k = \Omega(n/\sqrt{d}) in G, whereas the true size of the largest independent set in G is O(n \log d / d). Our proof involves several significant extensions of the pseudocalibration + graph matrix techniques developed by Barak et al. [FOCS'16].
Based on joint work with Aaron Potechin, Goutham Rajendran, Madhur Tulsiani, and Jeff Xu.
- Thursday, March 17th, Marina Drygala An improved approximation algorithm for TSP in the half integral case
We review the work of Karlin, Klein and Gharan which gives a (3/2-\epsilon) approximation algorithm for TSP in the half-integral case, for a universal constant \epsilon. In, 1976 the classic work of Christofides gave a 3/2-approximation, while the best-known inapproximability result given by Karpinski et al. shows that it is NP-hard to approximate TSP within any constant less than 123/122 . It had been a long-standing open problem to close this gap. Recently, the same authors, were able to close this gap slightly to 3/2 - \epsilon, for an absolute constant \epsilon > 0. The techniques presented in this work helped develop the machinery to solve the general case. In particular, the authors use exploit structural properties of the maximum entropy spanning tree distribution and integrality of the T-Join polytope to show the in expectation the cost of the solution outputted by their algorithm is bounded below 1/2.
- Thursday, March 10th, Freya Behrens Phase Transitions in (dis)assortative partitions on random regular graphs
We study the problem of assortative and disassortative partitions on random d-regular graphs. Nodes in the graph are partitioned into two non-empty groups. In the assortative partition, every node requires at least H of its neighbors to be in its own group. In the disassortative partition, every node requires fewer than H neighbors to be in its own group. Using the cavity method based on an analysis of the Belief Propagation algorithm, we establish for which combinations of parameters (d,H) these partitions exist with high probability and for which they do not. For H > ceil(d/2), we establish that the structure of solutions to the assortative partition problem corresponds to the so-called frozen-1RSB. This entails a conjecture of algorithmic hardness of finding these partitions efficiently. For H <= ceil(d/2), we argue that the assortative partition problem is algorithmically easy on average for all d. Further, we provide arguments about the asymptotic equivalence between the assortative partition problem and the disassortative one, going through a close relation to the problem of single-spin-flip-stable states in spin glasses. In the context of spin glasses, our results on algorithmic hardness imply a conjecture that gapped single-spin-flip-stable states are hard to find, which may be a universal reason behind the observation that physical dynamics in glassy systems display convergence to marginal stability.
-
Thursday, March 3rd, Dragoljub Duric (ON ZOOM) Double Choco is NP-complete
In the Nikoli pencil-and-paper game Double Choco, a puzzle consists of an m x n grid of cells of white or gray color, separated by dotted lines, where each cell possibly contains an integer. The goal is to partition the grid into blocks by drawing solid lines over the dotted lines, where every block must contain a pair of areas of white and gray cells having the same form (size and shape). An integer indicates the number of cells of that color in its block; a block can contain any number of cells with integers. We prove this puzzle NP-complete, establishing a Nikoli gap of 2 years.
-
Thursday, December 16th, Adam Polak Memoryless Worker-Task Assignment with Polylogarithmic Switching Cost
I'll talk about the basic problem of assigning memoryless workers to tasks with dynamically changing demands. Given a set of w workers and a multiset T of w tasks from [t], a memoryless worker-task assignment function is any function f that assigns the workers [w] to the tasks T based only on the current value of T. The assignment function f is said to have switching cost at most k if, for every task multiset T, changing the contents of T by one task changes f(T) by at most k worker assignments. The problem of memoryless worker-task assignment is to construct an assignment function with the smallest possible switching cost.
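For tiny parameters, the definition can be checked by brute force (my sketch; multisets are encoded as sorted tuples, and "changing T by one task" is interpreted as replacing one task by another):

    from itertools import combinations_with_replacement

    def switching_cost(f, w, t):
        """Largest number of workers reassigned by f when one task of the
        multiset T (size w, tasks from range(t)) is replaced."""
        worst = 0
        for T in combinations_with_replacement(range(t), w):
            base = f(T)
            for i in range(w):
                for new in range(t):
                    T2 = tuple(sorted(T[:i] + T[i + 1:] + (new,)))
                    moved = sum(a != b for a, b in zip(base, f(T2)))
                    worst = max(worst, moved)
        return worst

    # A naive assignment function: worker i handles the i-th smallest task.
    # One replacement can shift several workers at once.
    print(switching_cost(lambda T: T, w=4, t=3))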
Past work has posed as an open question what the optimal switching cost is: there are no known sub-linear upper bounds, and after considerable effort, the best known lower bound remains 4.
In this talk, I'll show that it is possible to achieve polylogarithmic O(log(w)log(wt)) switching cost. I'll also prove a super-constant Omega(log*(t)) lower bound. Finally, I'll present an application of the worker-task assignment problem to a metric embedding problem into Hamming space.
The talk will be based on joint work with Aaron Berger, William Kuszmaul, Jonathan Tidor, and Nicole Wein.
-
Thursday, December 9th, Weronika Wrzos-Kaminska Online Stochastic Max-Weight Bipartite Matching: Beyond Prophet Inequalities
The literature on online Bayesian selection typically focuses on prophet inequalities, which compare the gain of an online algorithm to that of a "prophet" who knows the future. An equally natural, though significantly less well-studied, benchmark is the optimum online algorithm. What is the computational complexity of the optimum online algorithm? How well can a polynomial-time algorithm approximate it?
In this talk, I will present a recent paper by Christos Papadimitriou, Tristan Pollner, Amin Saberi and David Wajc (https://arxiv.org/abs/2102.10261) in which they consider the online stochastic maximum-weight matching problem under vertex arrivals. In the paper, they give a polynomial-time algorithm which approximates the optimal online algorithm within a factor of 0.51, beating the best-possible prophet inequality for this problem. In contrast, they also show that it is PSPACE-hard to approximate this problem within some constant \alpha<1.
-
Thursday, December 2nd, Mikhail Makarov Motif Cut Sparsifiers
A motif M is a frequently occurring subgraph of a given directed or undirected graph G. Motifs capture higher-order organizational structure of G beyond edge relationships and have therefore found wide applications, for example in graph clustering, community detection, and the analysis of biological and physical networks.
In this talk, we will show the possibility of constructing a motif cut sparsifier. More precisely, we will show that one can compute in polynomial time a sparse weighted subgraph G' of G with only \tilde{O}(n/\epsilon^2) edges such that for every cut, the weighted number of copies of M crossing the cut in G' is within a 1+\epsilon factor of the number of copies of M crossing the cut in G, for every constant size motif M.
In addition, we will present a strong lower bound ruling out a similar result for sparsification with respect to induced occurrences of motifs.
This is a joint work with Michael Kapralov, Sandeep Silwal, Christian Sohler and Jakab Tardos.
-
Thursday, November 25th, Weiqiang Yuan Log-rank and Lifting for AND-Functions
Let f: {0, 1}^n -> {0, 1} be a boolean function, and let f_/\(x, y) = f(x /\ y) denote the AND-function of f, where x /\ y denotes bit-wise AND. We study the deterministic communication complexity of f_/\ and show that, up to a log n factor, it is bounded by a polynomial in the logarithm of the real rank of the communication matrix of f_/\. This comes within a log n factor of establishing the log-rank conjecture for AND-functions with no assumptions on f. Our result stands in contrast with previous results on special cases of the log-rank conjecture, which needed significant restrictions on f such as monotonicity or low F_2-degree. Our techniques can also be used to prove (within a log n factor) a lifting theorem for AND-functions, stating that the deterministic communication complexity of f_/\ is polynomially related to the AND-decision tree complexity of f.
The results rely on a new structural result regarding boolean functions f: {0, 1}^n -> {0, 1} with a sparse polynomial representation, which may be of independent interest. We show that if the polynomial computing f has few monomials then the set system of the monomials has a small hitting set, of size poly-logarithmic in its sparsity. We also establish extensions of this result to multi-linear polynomials f: {0, 1}^n -> R with a larger range.
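The objects in the statement are easy to experiment with at small n (my sketch; numpy's matrix_rank computes the real rank):

    import numpy as np
    from itertools import product

    def and_matrix(f, n):
        """Communication matrix M[x, y] = f(x AND y) of the AND-function f_/\."""
        xs = list(product([0, 1], repeat=n))
        return np.array([[f(tuple(a & b for a, b in zip(x, y)))
                          for y in xs] for x in xs])

    f = lambda x: int(any(x))        # f = OR on 3 bits, as an example
    M = and_matrix(f, 3)
    print(M.shape, np.linalg.matrix_rank(M))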
-
Thursday, November 18th, Marina Drygala A Simple LP-Based Approximation Algorithm for the Matching Augmentation Problem
The Matching Augmentation Problem (MAP) has recently received significant attention as an important step towards better approximation algorithms for finding cheap 2-edge connected subgraphs. This has culminated in a 5/3-approximation algorithm. However, the algorithm and its analysis are fairly involved and do not compare against the problem's well-known LP relaxation called the cut LP.
We propose a simple algorithm that, guided by an optimal solution to the cut LP, first selects a DFS tree and then finds a solution to MAP by computing an optimum augmentation of this tree.
Using properties of extreme point solutions, we show that our algorithm always returns (in polynomial time) a better than 2-approximation when compared to the cut LP. We thereby also obtain an improved upper bound on the integrality gap of this natural relaxation.
-
Thursday, November 11th, Gilbert Maystre TFNP meets proof complexity
Subclasses of TFNP aim at classifying the difficulty of search problems that have no satisfactory decision counterpart. For instance, given a graph where every vertex has degree 3 and some Hamiltonian circuit in it, deciding whether another, distinct Hamiltonian circuit exists is a trivial task, because its existence is guaranteed by a combinatorial lemma. On the other hand, actually finding this circuit seems intuitively hard. Proof complexity addresses a different question: given a boolean formula which is unsatisfiable, is there a succinct way to prove that it admits no satisfying assignment? While proof complexity and TFNP seem to share little common ground at first sight, surprisingly strong connections have been exhibited, allowing transfer of results from one domain to the other.
In this talk, I will give an overview of both areas, exemplify the existing connections by showing how PLS is related to resolution width, and discuss some exciting open problems. The content is borrowed from known results and motivated by ongoing work with Mika Göös, Alexandros Hollender, Siddhartha Jain, William Pires, Robert Robere, and Ran Tao.
-
Thursday, November 4th, Agastya Vibhuti Approximating the Center Ranking under Ulam
We study the problem of approximating a center under the Ulam metric. The Ulam metric, defined over the set of permutations of [n], is the minimum number of move operations (a deletion plus an insertion) needed to transform one permutation into another. The Ulam metric is a simpler variant of the general edit distance metric and provides a measure of dissimilarity over rankings/permutations. In the center problem, given a set of permutations, we are asked to find a permutation (not necessarily from the input set) that minimizes the maximum distance to the input permutations. This problem is also referred to as maximum rank aggregation under Ulam. So far, we only know of a folklore 2-approximation algorithm for this NP-hard problem. Even for constantly many permutations, we do not know anything better than an exhaustive search over all n! permutations.
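The Ulam distance itself is easy to compute: it equals n minus the longest common subsequence of the two permutations, which reduces to a longest increasing subsequence after relabelling (a standard O(n log n) sketch of mine):

    from bisect import bisect_left

    def ulam(p, q):
        """Ulam distance between permutations p, q of 0..n-1:
        n minus the LCS, computed as an LIS after relabelling."""
        pos = {v: i for i, v in enumerate(p)}
        seq = [pos[v] for v in q]
        tails = []                        # patience-sorting LIS
        for x in seq:
            i = bisect_left(tails, x)
            if i == len(tails):
                tails.append(x)
            else:
                tails[i] = x
        return len(p) - len(tails)

    print(ulam([0, 1, 2, 3], [1, 0, 2, 3]))  # 1: one move suffices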
We achieve a (3/2 - 1/(3m))-approximation of the Ulam center in time O(n^(m^2 \ln m)), for m input permutations over [n]. We therefore get a polynomial time bound while achieving better than a 3/2-approximation for constantly many permutations. This problem is of special interest even for constantly many permutations because under certain dissimilarity measures over rankings, even for four permutations, the problem is NP-hard.
In proving our result, we establish a surprising connection between the approximate Ulam center problem and the closest string with wildcard problem (the center problem over the Hamming metric, allowing wildcards). We further study the closest string with wildcards problem and show that there cannot exist any $(2-\epsilon)$-approximation algorithm (for any $\epsilon >0$) for it unless P = NP. This inapproximability result is in sharp contrast with the same problem without wildcards, where we know of a PTAS.
This is joint work with Dr. Diptarka Chakraborty and Dr. Kshitij Gajjar.
-
Thursday, October 28th, Jakab Tardos Spectral Hypergraph Sparsifiers of Nearly Linear Size
Graph sparsification has been studied extensively over the past two decades, culminating in spectral sparsifiers of optimal size (up to constant factors). Spectral hypergraph sparsification is a natural analogue of this problem, for which optimal bounds on the sparsifier size were unknown until recently, mainly because the hypergraph Laplacian is non-linear, and thus lacks the linear-algebraic structure and tools that have been so effective for graphs.
In the talk I will introduce our algorithm for constructing \eps-spectral sparsifiers for hypergraphs with O_*(n) hyperedges; the size of the sparsifier is independent of the rank of the hypergraph (the maximum cardinality of a hyperedge), and is essentially best possible. This result is obtained by introducing two new tools. First, a new proof of spectral concentration bounds for sparsifiers of graphs which avoids linear-algebraic methods, replacing the usual application of the matrix Bernstein inequality, and therefore applies to the (non-linear) hypergraph setting. Second, an extension of the weight assignment techniques of Chen, Khanna and Nagda [FOCS'20] to the spectral sparsification setting.
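Concretely, the non-linear object here is the quadratic form Q_G(x) = \sum_{e} w_e \max_{u,v \in e} (x_u - x_v)^2 for x \in \mathbb{R}^n, which for ordinary graphs reduces to x^T L x; an \eps-spectral sparsifier G' must satisfy (1-\eps) Q_G(x) \le Q_{G'}(x) \le (1+\eps) Q_G(x) for all x.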
-
Thursday, October 14th, Alexandra Lassota Tightness of Sensitivity and Proximity Bounds for Integer Linear Programs
We consider Integer Linear Programs (ILPs). The distance between an optimal fractional solution and an optimal integral solution (called the proximity) is an important measure. A classical result by Cook et al. (Math. Program., 1986) shows that it is at most n*subDet(A), where subDet(A) is the largest subdeterminant of the constraint matrix A and n is the number of columns of A. Another important measure studies the change in an optimal solution if the right-hand side b is replaced by another right-hand side b'. The distance between an optimal solution x w.r.t. b and an optimal solution x' w.r.t. b' (called the sensitivity) is similarly bounded by ||b-b'||_1 * n * subDet(A), as also shown by Cook et al. (Math. Program., 1986). Even after more than thirty years, these bounds are essentially the best known for these measures.
While some lower bounds are known for these measures, they either only work for very small values of subDet(A), require negative entries in the constraint matrix, or have fractional right-hand sides. Hence, these lower bounds often do not correspond to instances from algorithmic problems. This work presents, for each n > 0 and arbitrarily large entries in the constraint matrix, ILPs of the above type with non-negative constraint matrices whose proximity and sensitivity is at least n*subDet(A).
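In symbols, writing \Delta = subDet(A): proximity guarantees that for an optimal vertex x* of the LP relaxation there is an optimal integral solution z* with ||x* - z*||_\infty \le n * \Delta, and sensitivity guarantees optimal integral solutions x w.r.t. b and x' w.r.t. b' with ||x - x'||_\infty \le ||b - b'||_1 * n * \Delta; the instances above show that both bounds are attained.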
This is joint work with Sebastian Berndt and Klaus Jansen.
-
Thursday, October 7th, Akash Kumar The complexity of testing all properties of planar graphs and the role of isomorphism
Consider property testing on bounded degree graphs and let \eps > 0 denote the proximity parameter. A remarkable theorem of Newman-Sohler asserts that all properties of planar graphs (more generally, of hyperfinite graphs) are testable with query complexity depending only on \eps.
Recent advances in testing minor-freeness have proven that all additive and monotone properties of planar graphs can be tested in poly(1/\eps) queries. Moreover, a recent work of Levi-Shoshan presents poly(1/\eps) testers for Hamiltonicity under the promise that the graph is planar. Motivated by these results, we ask: can all properties of planar graphs be tested in poly(1/\eps) queries? In this talk, I will show that such hopes unfortunately meet a roadblock.
The natural property of testing isomorphism to a fixed graph requires exp(Omega(1/\eps^2)) queries. We also show, by a straightforward adaptation of the Newman-Sohler analysis, that isomorphism is the "hardest" property of planar graphs: all properties of planar graphs can be tested within a budget of exp(O(1/\eps^2)) queries.
Joint work with Sabyasachi Basu and C. Seshadhri
-
Thursday, September 30th, Kshiteej Sheth Towards Non-Uniform k-Center with Constant Types of Radii
In the Non-Uniform k-Center problem we need to cover a finite metric space using k balls of different radii that can be scaled uniformly. The goal is to minimize the scaling factor. If the number of different radii is unbounded, the problem does not admit a constant-factor approximation algorithm, but it has been conjectured that such an algorithm exists if the number of radii is constant. Yet, this is known only for the case of two radii. Our first contribution is a simple black-box reduction which shows that if one can handle the variant of t-1 radii with outliers, then one can also handle t radii. Together with an algorithm by Chakrabarty and Negahbani for two radii with outliers, this immediately implies a constant-factor approximation algorithm for three radii, thus making further progress on the conjecture. Furthermore, using algorithms for the k-center with outliers problem, that is, the one-radius-with-outliers case, we also get a simple algorithm for two radii.
The algorithm by Chakrabarty and Negahbani uses a top-down approach, starting with the larger radius and then proceeding to the smaller one. Our reduction, on the other hand, looks only at the smallest radius and eliminates it, which suggests that a bottom-up approach is promising. In this spirit, we devise a modification of the Chakrabarty and Negahbani algorithm which runs in a bottom-up fashion, and in this way we recover their result with the advantage of having a simpler analysis.
This is joint work with Xinrui Jia, Lars Rohwedder and Ola Svensson.
-
Friday, June 4th, Gilbert Maystre A majority lemma for randomised query complexity
An interesting challenge in the area of boolean function complexity is to understand how function composition behaves under different models. In this talk, I will recall some recent results that show the intricacy of this question in the randomized query complexity setting, and present a new one: whenever the outer function is fixed to be majority, the naïve algorithm that solves each inner function with reduced error in parallel is essentially optimal.
Joint work with Mika Göös.
-
Friday, May 14th, Haotian Jiang Minimizing Convex Functions with Integral Minimizers
Given a separation oracle SO for a convex function f that has an integral minimizer inside a box with radius R, we show how to find an exact minimizer of f using at most O(n (n + \log(R))) calls to SO and \poly(n, \log(R)) arithmetic operations, or O(n \log(nR)) calls to SO and \exp(O(n)) \poly(\log(R)) arithmetic operations.
When the set of minimizers of $f$ has integral extreme points, our algorithm outputs an integral minimizer of f. This improves upon the previously best oracle complexity of O(n^2 (n + \log(R))) for polynomial time algorithms obtained by [Grötschel, Lovász and Schrijver, Prog. Comb. Opt. 1984, Springer 1988] over thirty years ago.
For the Submodular Function Minimization problem, our result immediately implies a strongly polynomial algorithm that makes at most O(n^3) calls to an evaluation oracle, and an exponential time algorithm that makes at most O(n^2 \log(n)) calls to an evaluation oracle. These improve upon the previously best O(n^3 \log^2(n)) oracle complexity for strongly polynomial algorithms given in [Lee, Sidford and Wong, FOCS 2015] and [Dadush, Végh and Zambelli, SODA 2018], and an exponential time algorithm with oracle complexity O(n^3 \log(n)) given in the former work.
Our result is achieved via a reduction to the Shortest Vector Problem in lattices. We show how an approximately shortest vector of a certain lattice can be used to effectively reduce the dimension of the problem. Our analysis of the oracle complexity is based on a potential function that captures simultaneously the size of the search set and the density of the lattice, which we analyze via convex geometry tools. Paper available at https://arxiv.org/abs/2007.01445.
-
Friday, May 7th, Andreas Maggiori The Primal-Dual method for Learning Augmented Algorithms
The extension of classical online algorithms when provided with ML predictions is a new and active research area. In this area, the goal is to design learning-augmented online algorithms which:
1. incorporate predictions in a black-box manner (without making assumptions regarding the quality of the predictions),
2. outperform any online algorithm when the predictions are accurate, and
3. maintain reasonable worst-case guarantees if the predictions are misleading.
In this talk, I will show how to extend the primal-dual method for online algorithms in the learning augmented setting. Using this method, I will focus on how to design algorithms that use predictions for the ski rental problem and the online set cover problem.
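As a concrete instance of the three desiderata above, consider ski rental (buy skis for B, or rent for 1 per day, given a prediction of the number of ski days). A minimal deterministic sketch in the flavor of learning-augmented algorithms, with a trust parameter lam; this illustrates the robustness/consistency tension, not the primal-dual algorithm of the talk:

    import math

    def buy_day(B, predicted_days, lam=0.5):
        """Day on which to buy (renting before); lam in (0,1] trades the
        two guarantees: trusting an accurate prediction lowers the cost,
        while the worst-case competitive ratio stays bounded in 1/lam."""
        if predicted_days >= B:         # prediction says buying pays off
            return math.ceil(lam * B)   # buy early
        return math.ceil(B / lam)       # prediction says rent: buy late

    def cost(day_bought, true_days, B):
        # Rent until the buy day, then buy if the season lasts that long.
        return true_days if true_days < day_bought else (day_bought - 1) + B

    B = 100
    print(cost(buy_day(B, predicted_days=150), true_days=120, B=B))  # good advice
    print(cost(buy_day(B, predicted_days=5), true_days=120, B=B))    # bad advice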
Co-authors: Etienne Bamas and Ola Svensson
Link to the paper: https://arxiv.org/pdf/2010.11632.pdf (accepted as an oral talk at NeurIPS 2020)
-
Friday, April 30th, David Saulpic New Coreset Framework for Clustering
Given a metric space, the (k,z)-clustering problem consists of finding k centers such that the sum, over all points, of the distance of each point to its closest center, raised to the power z, is minimized. This encapsulates the famous k-median (z=1) and k-means (z=2) clustering problems. Designing small-space sketches of the data that approximately preserve the cost of solutions, also known as coresets, has been an important research direction over the last 15 years.
In this talk, I will present a new, simple coreset framework that simultaneously improves upon the best known bounds for a large variety of settings, ranging from Euclidean spaces and doubling metrics to minor-free metrics and the general metric case.
This is joint work with Vincent Cohen-Addad and Chris Schwiegelshohn.
-
Friday, April 23rd, Buddhima Gamlath Single-Sample Prophet Inequalities
In the classical single choice prophet inequality, n values are independently drawn from n known distributions and revealed to an online algorithm in an adversarial order. The task of the algorithm is to irrevocably pick one of the values, stopping before seeing the remaining values, so as to maximize the picked value. The performance is measured against the maximum value in hindsight, and it is known that the optimal competitive ratio is 1/2.
In the single-sample case, the algorithm does not get to know the distributions but instead gets a single sample drawn from each of the distributions. In the talk, I will first go over a simple and elegant proof (by Rubinstein et al. https://arxiv.org/abs/1911.07945) that the optimal competitive ratio for the single sample case is also 1/2.
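The algorithm behind that proof is strikingly simple: use the maximum of the samples as a threshold. A toy sketch (the uniform distributions are my stand-in for arbitrary D_i):

    import random

    def single_sample_prophet(samples, values):
        """Accept the first value that clears the max of the samples;
        this achieves a 1/2 fraction of max(values) in expectation."""
        threshold = max(samples)
        for v in values:                 # adversarial arrival order
            if v >= threshold:
                return v
        return 0.0

    dists = [random.random for _ in range(5)]   # stand-in distributions
    samples = [D() for D in dists]              # one sample per distribution
    values = [D() for D in dists]               # the actual arriving values
    print(single_sample_prophet(samples, values), max(values))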
Then I will present a recent single-sample prophet inequality (by Dütting et al. https://arxiv.org/abs/2104.02050) for weighted matching in general graphs where each edge weight w_e is independently drawn from a distribution D_e and the algorithm only has one sample value per each edge to begin with. (A similar result was also shown recently by Caramanis et al. https://arxiv.org/abs/2103.13089).
-
Friday, April 16th, Kshiteej Sheth Algorithms for k-median clustering with outliers
Clustering is a fundamental task studied by various communities, and one of its most well-studied formulations is the k-Median problem. Given a set of n clients and facilities in a metric space, the goal is to open a set of k facilities such that the sum over every client of the client's distance to the nearest open facility is minimized. A practically motivated and algorithmically challenging generalization is the k-Median with outliers problem, in which only m (< n) clients need to be connected (m is given in the input). In this talk, I will mainly explain the beautiful iterative rounding framework of Krishnaswamy, Li, and Sandeep that appeared in STOC'18, which gives a constant approximation for this problem. If time permits, I will discuss some ongoing work with Xinrui Jia, Lars Rohwedder and Ola Svensson on extensions of this problem.
-
Friday, April 9th, Jan Hazla On codes decoding errors on the BEC and BSC
I will talk about how to use a recent inequality on Boolean functions by Samorodnitsky to obtain a result in coding theory. Namely, assume that a linear code successfully corrects errors on the binary erasure channel (BEC) under optimal (maximum a posteriori, or MAP) decoding. Then, this code is also successful on a range of other channels with somewhat smaller capacities, including the binary symmetric channel (BSC).
One previously unknown consequence is that constant-rate Reed--Muller codes correct errors on BSC(p) for some constant p > 0.
Joint work with Alex Samorodnitsky and Ori Sberlo.
-
Friday, April 6th, Akash Kumar, Partition oracles for bounded degree planar graphs
Take a planar graph with maximum degree d. These graphs admit hyperfinite decompositions, where, for a sufficiently small \epsilon > 0, one removes \epsilon dn edges to get connected components of size independent of n. An important tool for sublinear algorithms and property testing on such classes is the partition oracle. A partition oracle is a local procedure that gives consistent access to a hyperfinite decomposition, without any preprocessing: given a query vertex v, the partition oracle outputs the component containing v in time independent of n, with all answers consistent with a single hyperfinite decomposition. In this talk, I will describe a partition oracle that runs in time poly(d/\epsilon).
Joint work with C. Seshadhri and Andrew Stolman.
-
Friday, March 19th, Xinrui Jia, Fair Colorful k-Center Clustering
An instance of the colorful k-center problem consists of points in a metric space that are colored red or blue, along with an integer k and a coverage requirement for each color. The goal is to find the smallest radius r such that there exist balls of radius r around k of the points that meet the coverage requirements.
In this talk, I will first present a pseudo-approximation of S. Bandyapadhyay, T. Inamdar, S. Pai, and K. Varadarajan that uses k+1 centers. Then I will explain how we can obtain a true approximation using clustering and subset-sum ideas that arise from the pseudo-approximation.
This is joint work with Kshiteej Sheth and Ola Svensson.
-
Friday, March 12th, Jakab Tardos, Spectral Sparsification of Hypergraphs
Cut and spectral sparsification of graphs have numerous applications, including e.g. speeding up algorithms for cuts and Laplacian solvers. These powerful notions have recently been extended to hypergraphs. In this talk, I will define spectral sparsifiers for hypergraphs, and present a technique for producing spectral sparsifiers of size O*(nr) for hypergraphs with hyperedge size bounded by r. Our main tool is a new concentration inequality for the (non-linear) Laplacian of a subsampled expander hypergraph. On the lower bound side, I will show that the hypergraph cut function of an r-uniform hypergraph with n vertices in general requires \Omega(n r) bits to represent, even approximately.
Joint work with Michael Kapralov, Robert Krauthgamer, and Yuichi Yoshida.
-
Friday, March 5th, Paritosh Garg, Robust Algorithms against Adversarial Injections
I will present a new model, sandwiched somewhere between adversarial-order streams and random-order streams, motivated by the design of robust algorithms. Then, I will present results on two problems in this model, namely maximum matching and submodular maximization. Finally, I will discuss some interesting open problems that arise.
This is a joint work with Sagar Kale, Lars Rohwedder and Ola Svensson.
-
Friday, February 26th, Navid Nouri, Graph Spanners by Sketching in Dynamic Streams and the Simultaneous Communication Model
Graph sketching is a powerful technique in dynamic graph streams that has enjoyed considerable attention in the literature in recent years, and has led to near optimal dynamic streaming algorithms for many fundamental problems such as connectivity, cut and spectral sparsifiers.
In the first part of this talk, we are going to present the first spanner construction that achieves sub-linear stretch in a single pass, using almost linear space. Then, we are going to discuss the space-stretch trade-offs when more space is allowed. Finally, we will talk about the simultaneous communication model, where each player can communicate only small messages, and propose and analyze the guarantees of our protocol.
Joint work with Arnold Filtser and Michael Kapralov
-
Friday, February 12th, Aida Mousavifar, Spectral Clustering Oracles in Sublinear Time
Given a graph G that can be partitioned into k disjoint expanders with outer conductance upper bounded by eps, can we efficiently construct a small-space data structure that allows quickly classifying vertices of G according to the expander (cluster) they belong to? Formally, we would like an efficient local computation algorithm that misclassifies at most an O(eps) fraction of vertices in every expander. We refer to such a data structure as a spectral clustering oracle.
In this talk, I will first explain how to design a sublinear time oracle that provides dot product access to the spectral embedding of G. Then I will show how to design a spectral clustering oracle that runs in sublinear time using the dot product oracle.
-
Friday, February 5th, at 1.15pm (note unusual time), Siddhartha Jain, Unambiguous DNFs from Hex
I'll be talking about my work for my semester project supervised by Mika Göös, in collaboration with Shalev Ben-David & Robin Kothari. We exhibit an unambiguous k-DNF formula that requires CNF width \tilde{\Omega}(k^{1.5}). Our construction is inspired by the board game Hex and it is vastly simpler than previous ones, which achieved at best an exponent of 1.22. Our result is known to imply several other improved separations in query and communication complexity.