EPFL Theory and Combinatorics Seminar
Goal: The goal of this seminar is to host a variety of talks on (broadly understood) theoretical computer science and combinatorics.
Time and place: Unless otherwise noted, this semester we meet on Wednesdays at 16:15 in either INF 211 or MA B1 524 (check below for the correct room).
Mailing list and calendar: There is a mailing list where announcements of upcoming talks, as well as other theory-related information, appear. To subscribe to this list, send an email (it can be empty) to: theoryannouncesubscribe@listes.epfl.ch . There is also the EPFL Theory and Combinatorics calendar that you can subscribe to.
Upcoming Talks:
 No upcoming talks
Past Talks:

Monday, May 26, 2014:
Marek Elias
(Charles University Prague)
Ramsey questions in discrete geometry
Ramsey theory contains many theorems of a similar form: every
sufficiently large structure contains a homogeneous substructure of
nontrivial size. In fact, one of the first results in this field arose
in a geometric context. In 1935 Erdos and Szekeres proved that among n^2
points in the plane, some n of them form a monotone subset, and that every set
of 2^n points contains an n-point convex or an n-point concave subset.
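The monotone-subset statement can be checked directly on small inputs. The sketch below (illustrative only, not part of the talk) finds the longest monotone subsequence of the y-values of points read in x-order with the classic O(n^2) dynamic program, and verifies the Erdos-Szekeres guarantee on random data:

```python
import random

def longest_monotone(ys):
    """Length of the longest monotone (non-decreasing or non-increasing)
    subsequence, via the classic O(n^2) dynamic program."""
    n = len(ys)
    inc = [1] * n  # inc[i]: longest non-decreasing subsequence ending at i
    dec = [1] * n  # dec[i]: longest non-increasing subsequence ending at i
    for i in range(n):
        for j in range(i):
            if ys[j] <= ys[i]:
                inc[i] = max(inc[i], inc[j] + 1)
            if ys[j] >= ys[i]:
                dec[i] = max(dec[i], dec[j] + 1)
    return max(max(inc), max(dec))

# Erdos-Szekeres: any n^2 points (here: y-values of points in x-order)
# contain a monotone subset of n points.
n = 7
ys = [random.random() for _ in range(n * n)]
assert longest_monotone(ys) >= n
```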
Recently, there has been progress in this area concerning semialgebraic
predicates. Conlon, Fox, Pach, Sudakov, and Suk showed that every set of
twr_k(n^c) points in R^d equipped with a (k+1)-ary semialgebraic
predicate contains an n-point set homogeneous according to this
predicate. They also constructed a set S of twr_k(cn) points in
dimension 2^{k-2} and a (k+1)-ary semialgebraic predicate P such that
there is no P-homogeneous subset of S of size n+1. In this talk we
present a result with Matousek, Roldan-Pensado, and Safernova which
lowers the dimension needed for this construction, as well as bounds and
constructions for some specific predicates, e.g. higher-order Erdos-Szekeres
monotonicity and order types.
(joint work with J. Matousek, E. Roldan-Pensado, and Z. Safernova)
Friday, May 23, 2014:
Mikkel Thorup
(AT&T Labs and University of Copenhagen)
Bottom-k and Priority Sampling, Set Similarity and Subset Sums with Minimal Independence
We consider bottom-k sampling for a set X, picking a sample
S_k(X) consisting of the k elements of X that are smallest according to a
given hash function h. With this sample we can estimate the relative
size f = |Y|/|X| of any subset Y as |S_k(X) ∩ Y|/k. A standard
application is the estimation of the Jaccard similarity
f = |A ∩ B|/|A ∪ B| between sets A and B. Given the bottom-k samples from A
and B, we construct the bottom-k sample of their union as
S_k(A ∪ B) = S_k(S_k(A) ∪ S_k(B)), and then the similarity is estimated as
|S_k(A ∪ B) ∩ S_k(A) ∩ S_k(B)|/k.
We show here that even if the hash function is only 2-independent, the
expected relative error is O(1/sqrt(fk)). For fk = Omega(1) this is
within a constant factor of the expected relative error with truly
random hashing.
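The estimator above is easy to simulate. The following toy Python sketch (illustrative only, not from the talk) uses a fixed random lookup table in place of the 2-independent hash family the talk analyzes:

```python
import random

def bottom_k(elements, h, k):
    """Bottom-k sample: the k elements with the smallest hash values."""
    return set(sorted(elements, key=h)[:k])

# Toy hash: a random mapping shared by both parties (in the talk's setting
# h would be drawn from a 2-independent family).
random.seed(0)
universe = range(1000)
h_values = {x: random.random() for x in universe}
h = h_values.__getitem__

A = set(range(0, 600))
B = set(range(400, 1000))
k = 100

SA, SB = bottom_k(A, h, k), bottom_k(B, h, k)
# Bottom-k sample of the union, computed from the two samples alone.
S_union = bottom_k(SA | SB, h, k)
estimate = len(S_union & SA & SB) / k
true_jaccard = len(A & B) / len(A | B)  # = 200/1000 = 0.2
assert abs(estimate - true_jaccard) < 0.15
```

With k = 100 the expected relative error is on the order of 1/sqrt(fk), which is why the tolerance above is generous.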
For comparison, consider the classic approach of k×min-wise hashing, where we
use k independent hash functions h_1,...,h_k, storing the smallest
element under each hash function. For k×min-wise there is an at least
constant bias with constant independence, and it is not reduced with
larger k. Recently, Feigenblat et al. showed that bottom-k circumvents
the bias if the hash function is 8-independent and k is sufficiently
large. We get down to 2-independence for any k. Our result is based on
a simple union bound, transferring generic concentration bounds for
the hashing scheme to the bottom-k sample, e.g., getting stronger
probability error bounds with higher independence.
For weighted sets, we consider priority sampling, which adapts
efficiently to the concrete input weights, e.g., benefiting strongly
from heavy-tailed input. This time the analysis is much more involved,
but again we show that generic concentration bounds can be applied. 
Thursday, May 15, 2014:
Dömötör Pálvölgyi
(ELTE Budapest)
Indecomposable coverings using unit discs
Let C = {C_i} be a collection of sets in R^2. We say that C is an m-fold covering if every point of R^2 is contained in at least m members of C. A 1-fold covering is simply called a covering.
A planar set C is said to be cover-decomposable if there exists a (minimal) constant m = m(C) such that every m-fold covering of the plane with translates of C can be decomposed into two coverings.
The problem of characterizing all cover-decomposable sets in the plane was proposed by Pach in 1980. He conjectured that every planar convex set C is cover-decomposable. We disprove this conjecture by showing that it fails for every convex set with a smooth boundary, for example the unit disc. 
Wednesday, May 7, 2014:
Bartosz Walczak
(Jagiellonian University in Kraków)
Minors and dimension
The dimension of a poset P is the minimum number of linear extensions of P whose intersection is equal to P. Experts say that the dimension plays a similar role for posets as the chromatic number does for graphs. A lot of research has been carried out in order to understand when and why the dimension is bounded. There are constructions of posets with height 2 (but very dense cover graphs) or with planar cover graphs (but unbounded height) that have unbounded dimension. Streib and Trotter proved in 2012 that posets with bounded height and with planar cover graphs have bounded dimension. Recently, Joret et al. proved that the dimension is bounded for posets with bounded height whose cover graphs have bounded treewidth. My current work generalizes both these results, showing that the dimension is bounded for posets of bounded height whose cover graphs exclude a fixed (topological) minor.
In this talk, I will survey results relating the dimension of a poset to structural properties of its cover graph and present some ideas behind the proof of the result on excluded minors. 
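To make the definition of dimension concrete, here is a small brute-force computation (illustrative only, not from the talk), assuming the poset is given as its set of strict comparabilities:

```python
from itertools import permutations, combinations

def dimension(elements, less):
    """Order dimension by brute force: the least t such that some t linear
    extensions realize exactly the relation `less` (a set of (x, y) pairs,
    assumed transitively closed)."""
    def extends(perm):
        pos = {x: i for i, x in enumerate(perm)}
        return all(pos[x] < pos[y] for (x, y) in less)
    exts = [p for p in permutations(elements) if extends(p)]
    pairs = [(x, y) for x in elements for y in elements if x != y]
    for t in range(1, len(exts) + 1):
        for combo in combinations(exts, t):
            # Pairs ordered the same way in every chosen extension.
            realized = {(x, y) for (x, y) in pairs
                        if all(c.index(x) < c.index(y) for c in combo)}
            if realized == set(less):
                return t
    return None

# Standard example S_2: a_i < b_j exactly when i != j; its dimension is 2.
less = {("a1", "b2"), ("a2", "b1")}
assert dimension(["a1", "a2", "b1", "b2"], less) == 2
```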
Tuesday, April 15, 2014:
Laura Sanità
(University of Waterloo)
On the Equivalence of the Bidirected and Hypergraphic Relaxations for Steiner Tree
The bottleneck of the currently best known approximation algorithms for the NP-hard Steiner tree problem is the solution of its large, so-called hypergraphic, linear programming relaxation (HYP). Hypergraphic LPs have integrality gap at most 1.39, but are NP-hard to solve exactly, and it is a formidable computational task even to approximate them sufficiently well.
We focus on another well-studied but poorly understood LP relaxation of the problem: the bidirected cut relaxation (BCR).
This LP can be solved efficiently, and it is widely conjectured to have a small integrality gap, although only a (trivial) upper bound of 2 is known.
We give an efficient constructive proof that BCR and HYP have the same integrality gap in instances that do not contain a Steiner vertex with 3 Steiner neighbours. This is a significant step forward from the previously known equivalence for (so-called quasi-bipartite) instances, in which the Steiner vertices form an independent set.
Joint work with A. Feldmann, J. Koenemann, N. Olver 
Wednesday, April 9, 2014:
Timm Oertel
(ETHZ)
A polyhedral Frobenius theorem with applications to integer optimization
We prove a representation theorem for projections of sets of integer points by an integer matrix W. Our result can be seen as a polyhedral analogue of several classical and recent results related to the Frobenius problem. Our result is motivated by a large class of nonlinear integer optimization problems in variable dimension. Concretely, we aim to optimize f(Wx) over the set of integer points in P, where f is a nonlinear function, P is an n-dimensional polyhedron and W is a d × n matrix. As a consequence of our representation theorem, we obtain a general efficient transformation from the latter class of problems to integer linear programming. Our bounds depend polynomially on various important parameters of the input data, leading, among other things, to the first polynomial-time algorithms for several classes of nonlinear optimization problems.

Thursday, March 20, 2014:
Andrew V. Goldberg
(Principal Researcher, Microsoft Research Silicon Valley)
Recent Results on Hub Labeling
Given a weighted graph, a distance oracle takes as input a pair of vertices and returns the distance between them. The labeling approach to distance oracle design is to precompute a label for every vertex so that distances can be computed from the corresponding labels, without looking at the graph. The hub labeling (HL) algorithm has received a lot of attention recently in both theoretical and experimental contexts. In particular, Cohen et al. (2003) developed an O(log n) approximation algorithm for finding the smallest labeling. Abraham et al. (2012) introduced a special kind of labeling, hierarchical hub labeling (HHL), and showed that it is determined by an ordering of the vertices. They gave a practical algorithm to compute small HHLs on some classes of graphs, including road networks. Akiba et al. (2013) gave a faster algorithm to compute the HHL from an order and applied it to social networks.
We introduce a variant of the Cohen et al. algorithm that is faster both in theory and in practice. We also introduce online tie-breaking, which reduces the label size on networks with multiple shortest paths, such as social networks. We introduce an algorithm to efficiently compute good orders. This algorithm scales to large networks and is more robust than the previous ones. Finally, we introduce a label compression technique that significantly reduces the size of the labels.
Our experimental results show that the new HHL algorithm is practical on a much broader class of graphs. We also show that it usually produces labels that are not much bigger than those for the HL algorithm with the guaranteed approximation ratio.
Joint work with Daniel Delling, Thomas Pajor, Ruslan Savchenko and Renato Werneck 
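A hub-label query itself is simple: the distance between u and v is the minimum of dist(u, h) + dist(h, v) over hubs h common to both labels. A toy Python sketch (the labels below are a hypothetical hand-made labeling for a unit-weight path a - b - c - d, just to illustrate the query):

```python
def hl_distance(label_u, label_v):
    """Distance from hub labels alone: min over common hubs h of
    dist(u, h) + dist(h, v). A label maps hub -> distance to that hub."""
    common = set(label_u) & set(label_v)
    if not common:
        return float("inf")
    return min(label_u[h] + label_v[h] for h in common)

# Hand-made labels for the path a - b - c - d with unit edges; every pair
# of vertices shares a hub on a shortest path between them.
labels = {
    "a": {"a": 0, "b": 1},
    "b": {"b": 0},
    "c": {"b": 1, "c": 0},
    "d": {"b": 2, "c": 1, "d": 0},
}
assert hl_distance(labels["a"], labels["d"]) == 3  # a-b-c-d
assert hl_distance(labels["a"], labels["c"]) == 2
```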
Thursday, March 13, 2014:
Shay Moran
(Technion)
Shattering and Graph Orientations
Consider the following puzzles:
1. [Acyclic orientations]
Let G be a simple undirected graph. Show that the number of orientations
of G that give a directed acyclic graph is at most the number of
spanning subforests of G (= spanning subgraphs of G that are forests).
2. [Strong orientations]
Robbins' theorem states that the graphs that have strong orientations
are exactly the 2-edge-connected graphs. Prove that for every graph
G, the number of strong orientations of G is at least the number of
2-edge-connected spanning subgraphs of G and at most the number of
connected spanning subgraphs of G.
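The first inequality can be verified by brute force on a small graph. A Python sketch (illustrative only, not the talk's proof technique), checking it on K_4:

```python
from itertools import product, combinations

def is_acyclic_directed(n, arcs):
    """DAG check by repeatedly removing sources (Kahn's algorithm)."""
    indeg = [0] * n
    for (u, v) in arcs:
        indeg[v] += 1
    remaining = set(range(n))
    while remaining:
        sources = [v for v in remaining if indeg[v] == 0]
        if not sources:
            return False  # every remaining vertex has an in-arc: a cycle
        for s in sources:
            remaining.discard(s)
            for (u, v) in arcs:
                if u == s:
                    indeg[v] -= 1
    return True

def is_forest(n, edges):
    """An undirected edge set is a forest iff no edge closes a cycle
    (checked here with a simple union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for (u, v) in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

# Puzzle 1, checked by brute force on K_4.
n = 4
E = list(combinations(range(n), 2))
dags = sum(
    is_acyclic_directed(n, [(u, v) if b else (v, u) for (u, v), b in zip(E, bits)])
    for bits in product([0, 1], repeat=len(E)))
forests = sum(is_forest(n, S) for r in range(len(E) + 1) for S in combinations(E, r))
assert dags <= forests  # acyclic orientations of K_4: 24; forests: 38
```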
In this talk I will present a unified approach that gives simple proofs
of the above inequalities and many other inequalities of this flavor.
The approach is based on a connection between two seemingly disparate
fields: VC theory and graph theory. This connection yields natural
correspondences between fundamental concepts in VC theory, such as
shattering and VC dimension, and well-studied concepts of graph theory
related to connectivity, combinatorial optimization, forbidden subgraphs,
and others. The main tool is a generalization of the Sauer-Shelah lemma
[Pajor 85, Bollobas and Radcliffe 95, Dress 97, ...]. Using this tool
we obtain a series of inequalities and equalities related to properties
of orientations of a graph.
My talk will be based on the following papers:
Shattering Extremal Systems [S. Moran]
http://arxiv.org/abs/1211.2980
Shattering, Graph Orientations and Connectivity [L. Kozma, S. Moran]
http://www.combinatorics.org/ojs/index.php/eljc/article/view/v20i3p44 
Monday, February 3, 2014:
Nabil Mustafa
(ESIEE, Paris)
Geometric Separators and their Applications
Separators are by now a widely used tool for designing efficient algorithms. The planar graph separator theorem of Lipton and Tarjan has found many uses in the design of exact and approximate algorithms for optimization problems. Recently, Adamaszek and Wiese proved the existence of a wide variety of separators for geometric objects. Using these, they presented a QPTAS for weighted independent set of polygonal regions. We show that their construction can be made optimal (and more general), thus implying improved running times for their algorithms. We then give further algorithmic applications of these separators. Joint work with Rajiv Raman and Saurabh Ray.

Thursday, January 23, 2014:
Annie Raymond
(Zuse Institute Berlin)
A New Conjecture for Union-Closed Families
The Frankl union-closed sets conjecture states that in any union-closed family of sets there exists an element present in at least half of the sets. We present an integer program to model the problem and discuss some useful additional cuts. The computations done with this program lead to a new conjecture: we claim that the maximum number of sets in a nonempty union-closed family in which each element is present at most a times is independent of the number n of elements spanned by the sets, provided n is at least log_2(a)+1. We prove that this is true when n is at least ~2a. We also discuss the impact that this new conjecture would have on the Frankl conjecture if it turns out to be true.
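The conjecture is easy to spot-check on small families. A Python sketch (illustrative only, not the talk's integer program) that closes random generating families under union and verifies the Frankl property:

```python
from itertools import combinations
import random

def union_closure(sets):
    """Close a family of frozensets under pairwise union."""
    family = set(sets)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(family), 2):
            u = a | b
            if u not in family:
                family.add(u)
                changed = True
    return family

def frankl_holds(family):
    """Frankl's conjecture: some element lies in at least half of the sets."""
    elements = set().union(*family)
    return any(sum(x in s for s in family) * 2 >= len(family) for x in elements)

# Spot-check the conjecture on random union-closed families over {0,...,5}.
random.seed(1)
for _ in range(50):
    gens = [frozenset(random.sample(range(6), random.randint(1, 4)))
            for _ in range(4)]
    assert frankl_holds(union_closure(gens))
```

The conjecture is known to hold for small ground sets, so the check above cannot fail; it only illustrates the statement being modeled.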

Wednesday, December 11, 2013:
Andrew H Suk
(MIT)
Density-type theorems for semialgebraic hypergraphs
A k-ary semialgebraic relation E on R^d is a subset of R^{kd}, the set of k-tuples of points in R^d,
which is determined by a finite number of polynomial equations and inequalities in kd real variables. Given point sets P_1,...,P_k ⊂ R^d and a k-ary semialgebraic relation E on R^d, let H = (P_1,...,P_k,E) be the underlying k-partite k-uniform hypergraph with parts P_1,...,P_k. Fox, Gromov, Lafforgue, Naor, and Pach showed that if H is dense, and if k, d and the complexity of E are fixed, then there are linear-size subsets P'_i ⊂ P_i, 1 ≤ i ≤ k, such that P'_1,...,P'_k induce a complete k-partite k-uniform hypergraph.
The goal of this talk is to improve this density-type result by finding much larger subsets P'_1,...,P'_k that induce a complete k-partite k-uniform hypergraph when k and d are large. As a consequence, our results imply several new bounds for classical problems in discrete geometry relating to same-type transversals, homogeneous selections from hyperplanes, and a Tverberg-type result. This is joint work with Jacob Fox and Janos Pach. 
Wednesday, November 27, 2013:
Imre Barany
(Renyi Institute, Budapest and University College London)
On a geometric Ramsey number
A partial result: if a planar curve intersects every line in
at most 3 points, then it can be partitioned into 4 convex curves.
This result can be extended to R^d, and the extension implies a good,
asymptotically precise, lower bound on a geometric Ramsey number.
Joint result with Jiri Matousek and Attila Por. 
Wednesday, November 20, 2013:
Gabor Tardos
(EPFL and Renyi Institute, Budapest)
On the communication complexity of sparse set disjointness and exists-equal problems
Probably the two most studied problems in communication complexity are the equality and set disjointness problems. In the former, Alice and Bob have to decide whether their inputs are equal; in the latter, they receive subsets of a common ground set and have to decide whether their subsets are disjoint. The complexity of these problems is well established and, in the randomized model, very different.
In this talk I concentrate on variants of these problems. In the sparse set disjointness problem, Alice and Bob receive SMALL subsets of a large ground set. Improving on the Hastad-Wigderson protocol of 1997, we give a protocol with an optimal number of rounds and optimal total communication that not only decides whether the sets are disjoint but also finds their actual intersection.
The lower bound argument is for the related exists-equal problem (the OR of several independent equality problems). It is a round elimination proof that uses a novel kind of isoperimetric inequality on the box with the Hamming distance.
The results are joint with Mert Sağlam and appeared in FOCS 2013.
Yes, this is the same title as that of my October 14 talk, but I will spend less time on the background now and more on the actual proofs. The talk will be self-contained. 
Wednesday, November 13, 2013:
Gabor Simonyi
(Renyi Institute, Budapest)
On the local chromatic number of graphs
The local chromatic number of a graph is the minimum number of colors that
must appear in the most colorful closed neighborhood of a vertex in any proper
coloring (using any number of colors). It was introduced in 1986
by Erdos, Furedi, Hajnal, Komjath, Rodl, and Seress. The chromatic number is a
trivial upper bound for this quantity. With Korner and Pilotto we observed
that the fractional chromatic number is a lower bound. This triggered
investigations of the local chromatic number of graphs with a large gap
between their fractional and ordinary chromatic number.
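The definition above can be evaluated directly on small graphs. A brute-force Python sketch (illustrative only, not from the talk; with n vertices, at most n colors ever appear, so searching colorings with n colors suffices):

```python
from itertools import product

def local_chromatic_number(n, edges):
    """Brute force over all proper colorings with up to n colors: minimize
    the maximum number of distinct colors in a closed neighborhood N[v]."""
    nbrs = {v: {v} for v in range(n)}
    for (u, v) in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    best = n
    for coloring in product(range(n), repeat=n):
        if any(coloring[u] == coloring[v] for (u, v) in edges):
            continue  # not a proper coloring
        local = max(len({coloring[w] for w in nbrs[v]}) for v in range(n))
        best = min(best, local)
    return best

# For the 5-cycle the local chromatic number equals chi(C_5) = 3,
# matching the fractional lower bound ceil(5/2) = 3.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
assert local_chromatic_number(5, c5) == 3
```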
We also introduced a straightforward extension of the local chromatic number
to directed graphs. This motivated investigations of the possible relations
between the directed and undirected version for oriented graphs and their
underlying undirected graph.
In the talk I will try to show some of the results of the above-mentioned
investigations and also mention some open problems. The talk is based on
several joint papers with different subsets of the following people: Janos
Korner, Bojan Mohar, Concetta Pilotto, Gabor Tardos, Sinisa Vrecica, Ambrus
Zsban. 
Wednesday, November 6, 2013:
Moran Feldman
(EPFL)
Fast Submodular Optimization Algorithms
A set function is a function assigning a real value to every subset of a given ground set. Submodular functions form a large natural class of set functions with applications in many fields, including graph theory, economics and game theory. Optimization of a submodular function subject to various constraints has attracted much attention in recent years. Unfortunately, most of the algorithms suggested for these problems are quite slow and impractical for large inputs. In this talk, I will describe two algorithms for natural submodular optimization problems that achieve the state-of-the-art approximation ratios for their respective problems and are much quicker than the "average" algorithm in this field. The problems corresponding to the two algorithms are "unconstrained submodular maximization" and "maximizing a submodular function subject to a cardinality constraint".
Joint work with Niv Buchbinder, Seffi Naor and Roy Schwartz. 
Wednesday, October 16, 2013:
Moran Feldman
(EPFL)
Fast Submodular Optimization Algorithms
A set function is a function assigning a real value to every subset of a given ground set. Submodular functions form a large natural class of set functions with applications in many fields, including graph theory, economics and game theory. Optimization of a submodular function subject to various constraints has attracted much attention in recent years. Unfortunately, most of the algorithms suggested for these problems are quite slow and impractical for large inputs. In this talk, I will describe two algorithms for natural submodular optimization problems that achieve the state-of-the-art approximation ratios for their respective problems and are much quicker than the "average" algorithm in this field. The problems corresponding to the two algorithms are "unconstrained submodular maximization" and "maximizing a submodular function subject to a cardinality constraint".
Joint work with Niv Buchbinder, Seffi Naor and Roy Schwartz. 
Wednesday, October 2, 2013:
Jose Verschae
(Universidad de Chile)
Dual techniques for scheduling on a machine with varying speed
I will present two different models related to scheduling on a machine of varying speed. In the static setting the speed function is given, while in the dynamic model deciding upon the speed is part of the scheduling problem, taking energy constraints into consideration. In both cases we want to minimize the weighted completion time. First I'll show how these problems can be interpreted as minimizing a generalized global cost function on a unit-speed machine. Then I'll present the main ideas needed to obtain a PTAS for both cases. Instead of focusing on the technical details, I'll try to give intuition and explain how the notion of dual schedules can be used to derive such a result. I will also mention other complexity results and more efficient algorithms for special cases.
This is joint work with Nicole Megow. 
Thursday, September 26, 2013:
Satoru Iwata
(University of Tokyo)
Bisubmodular Function Maximization
Bisubmodular functions have been introduced as a generalization of
submodular functions. Examples include the cut capacity functions of
bidirected networks and the rank functions of
delta-matroids. Combinatorial polynomial algorithms for minimizing
bisubmodular functions have been developed by extending those for
submodular function minimization.
In this talk, we present simple greedy approximation algorithms for
maximizing bisubmodular functions, extending the double-greedy
algorithms for submodular function maximization due to Buchbinder,
Feldman, Naor, and Schwartz (2012). Our deterministic algorithm
provides an approximate solution that achieves at least one third of
the optimal value, whereas the output of our randomized algorithm
achieves at least half of the optimal value in expectation.
This is joint work with Shin-ichi Tanigawa and Yuichi Yoshida. 
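For context, the submodular double-greedy algorithm that the talk extends can be sketched in a few lines. Below is an illustrative Python version of the deterministic 1/3-approximation of Buchbinder et al. (2012) on a graph cut function (a standard non-monotone submodular example), verified against the brute-force optimum:

```python
from itertools import combinations

def cut(S, edges):
    """Graph cut function: number of edges with exactly one end in S.
    Cut functions are submodular (and non-monotone)."""
    return sum((u in S) != (v in S) for (u, v) in edges)

def double_greedy(ground, f):
    """Deterministic double-greedy of Buchbinder et al. (2012): a 1/3
    approximation for unconstrained submodular maximization."""
    X, Y = set(), set(ground)
    for i in ground:
        a = f(X | {i}) - f(X)  # marginal gain of adding i to X
        b = f(Y - {i}) - f(Y)  # marginal gain of removing i from Y
        if a >= b:
            X.add(i)
        else:
            Y.remove(i)
    return X  # X == Y at the end

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (1, 4)]
ground = list(range(5))
f = lambda S: cut(S, edges)
opt = max(f(set(c)) for r in range(6) for c in combinations(ground, r))
assert 3 * f(double_greedy(ground, f)) >= opt
```

The talk's bisubmodular algorithms generalize this element-by-element charging scheme; the graph and function above are just a toy test case.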
Wednesday, September 25, 2013:
Vadim Lozin
(University of Warwick)
Boundary properties of the satisfiability problem
The satisfiability problem is known to be NPcomplete in general and
for many restricted instances, such as formulas with at most 3 variables
per clause and at most 3 occurrences per variable, or planar formulas.
The latter example refers to graphs representing satisfiability instances.
These are bipartite graphs with vertices representing clauses and variables,
and edges connecting variables to the clauses containing them.
Finding the strongest possible restrictions under which a problem remains
NPcomplete is important for at least two reasons. First, this can make
it easier to establish the NPcompleteness of new problems by allowing
easier transformations. Second, this can help clarify the boundary between
tractable and intractable instances of the problem. In this talk, we address
the second issue and reveal the first boundary property of graphs
representing satisfiability instances. 
Thursday, September 19, 2013:
Avi Wigderson
(Institute for Advanced Study, Princeton)
Population Recovery and Partial Identification
We study several natural problems in which an unknown distribution over an unknown population of vectors needs to be recovered from partial or noisy samples. Such problems naturally arise in a variety of contexts in learning, clustering, statistics, data mining and database privacy, where loss and error may be introduced by nature, inaccurate measurements, or on purpose. We give fairly efficient algorithms to recover the data under fairly general assumptions, when loss and noise are close to the information theoretic limit (namely, nearly completely obliterate the original data).
Underlying one of our algorithms is a new combinatorial structure we call a partial identification (PID) graph. While standard IDs are subsets of features (vector coordinates) that uniquely identify an individual in a population, partial IDs allow ambiguity (and "imposters"), and thus can be shorter. PID graphs capture this imposter structure. PID graphs yield strategies for dimension reduction of recovery problems, and for the reassembly of these local pieces of statistical information into a global one. The combinatorial heart of this work is proving that every set of vectors admits partial IDs with "cheap" PID graphs (and hence efficient recovery). We further show how to find such near-optimal PIDs efficiently.
Time permitting, I will also describe our original motivation for studying these recovery problems: a new learning model we call "restriction access", introduced in earlier work. This model aims at generalizing the prevailing "black-box" access to functions when trying to learn the "device" (e.g. circuit, decision tree, polynomial, ...) which computes them. We propose a "grey-box" access that allows certain partial views of the device, obtained from random restrictions. Our recovery algorithms allow positive learning results for the PAC-learning analog of our model, for such devices as decision trees and DNFs, which are currently beyond reach in the standard "black-box" version of PAC-learning.
This talk should be accessible to undergrads in CS, EE and Math with some background in algorithms.
Based on joint works with Zeev Dvir, Anup Rao and Amir Yehudayoff. 
Wednesday, September 18, 2013:
Suvrit Sra
(Max-Planck-Institut für intelligente Systeme, Tübingen, and Machine Learning Department, Carnegie Mellon University, Pittsburgh)
Geometric optimisation for data analysis
Modern data analysis is driving a remarkable confluence of disciplines. The
same application may easily require expertise in computer science, statistics,
functional analysis, and optimisation. This interaction arises from two key
aims: scaling algorithms to tackle "big data"; and exploiting structure in
data. Today's talk focuses on the latter aim.
In particular, I shall talk about data analysis problems that rely on the rich
and fascinating geometric "structure" of the set of positive definite
matrices. For these matrices, we present a new distance function and a
nonconvex "matrix geometric mean" based on it. The distance and its associated
geometric mean not only prove crucial to a basic application in computer
vision, but also open up a new vista of optimisation problems. Indeed, the
concrete examples shown today mark the beginning of a broader framework for
"geometric optimisation" that we are developing.
To add perspective to today's material, I shall touch upon some interesting
connections that it has to harmonic analysis on positive matrices, matrix
theory, nonlinear matrix equations, combinatorics, fixedpoint theory, quantum
information theory, and nonconvex global optimisation. 
Tuesday, August 13, 2013:
Aravind Srinivasan
(University of Maryland)
Progress on algorithmic versions of the Lovasz Local Lemma
There has been substantial progress on algorithmic versions and
generalizations of the Lovasz Local Lemma recently, with some of the main
ideas getting simplified as well. I will survey some of the main ideas of
Moser & Tardos, Pegden, and David Harris & myself in this context. 
Monday, July 8, 2013:
Vijay Vazirani
(Georgia Tech)
Matching: A New Proof for an Ancient Algorithm
For all practical purposes, the MicaliVazirani algorithm, discovered in
1980, is still the most efficient known maximum matching algorithm (for
very dense graphs, slight asymptotic improvement can be obtained using fast
matrix multiplication). However, this has remained a "black box" result for
the last 32 years. We hope to change this with the help of a recent paper
giving a simpler proof and exposition of the algorithm:
http://arxiv.org/abs/1210.4594
In the interest of covering all the ideas, we will assume that the audience
is familiar with basic notions such as augmenting paths and the bipartite
matching algorithm. 
Monday, July 8, 2013:
Rico Zenklusen
(Johns Hopkins University)
The Matroid Secretary Problem: Free Order Model and Laminar Case
Secretary problems are basic online selection problems with an interesting connection to online mechanism design. The task is to select, out of a set of weighted elements that appear one by one, a maximum-weight subset fulfilling some constraints. A constraint type that is both rich enough to capture interesting applications and structured enough to be a promising candidate for designing strong algorithms are matroid constraints, leading to the Matroid Secretary Problem (MSP). A famous conjecture claims the existence of an O(1)-competitive algorithm for the MSP. Whereas this remains open, modified forms of it were shown to be true when assuming that the assignment of weights to the secretaries is not adversarial but uniform at random. However, so far, no variant of the MSP with adversarial weight assignment is known to admit an O(1)-competitive algorithm.
We address this point by presenting a 9-competitive algorithm for the "free order model", a model suggested shortly after the introduction of the MSP. The free order model is a relaxed version of the original MSP, the only difference being that one can choose the order in which elements appear.
If time permits, we will also discuss the classical MSP for the special case of laminar matroids. Only recently, a clever but rather involved 16000/3-competitive algorithm was found. We present a considerably simpler and stronger 14.12-competitive algorithm, based on reducing the problem to several classical single-choice secretary problems.
Based on joint work with Patrick Jaillet and José A. Soto.

Tuesday, June 18, 2013:
Laurence Wolsey
(UC Louvain)
Aspects of Inventory Routing
The Inventory Routing Problem (IRP) arises from the integration of two basic components of the logistic supply chain: inventory management and vehicle routing. The IRP involves the distribution of one or more products from a supplier to a set of customers over a discrete planning horizon. Each customer has a known demand to be met in each period and can hold a limited amount of stock. The product is shipped through a distribution network by one or more vehicles of limited capacity. The IRP has found applications in several contexts, such as maritime logistics and the distribution of gas, perishable items, groceries, etc.
We examine the modeling of two such problems as mixed integer programs. The first problem involves a single product, a heterogeneous shipping fleet and multiple production and consumption ports with limited storage capacity. We formulate it as a pure fixed-charge network flow model with side constraints and then show how several lot-sizing subproblems can be used to strengthen the formulation.
The second problem, the so-called Vendor Managed Inventory Routing Problem (VMIRP), is the IRP arising when replenishment policies are decided by the supplier, who selects the customers to be visited in each period and the order in which they are visited. Given constant demands over time and stock upper bounds at each customer, we consider two replenishment policies: an order-up policy, in which the amount delivered must bring the stock level up to the upper bound, and an unconstrained policy. We develop tight formulations for the inventory management, and cutting planes linking routing and inventory.
Computational results are presented for both models. The first is joint work with Agra, Andersson and Christiansen, and the second with Avella and Boccia. 
Monday, June 17, 2013:
Nicolas Bonifas
(Ecole Polytechnique, Paris)
Preemptive reformulations of cumulative constraints in scheduling
We introduce a reformulation of the cumulative resource in scheduling such that the lower bound on the makespan obtained by energetic reasoning is at least as good as that of a preemptive schedule on the original resource.
This reformulation relies on a linear program whose size depends on the capacity of the resource but not on the number of tasks, which makes it possible to precompute the reformulations.
It provides a significant improvement for all algorithms that rely on energetic reasoning, such as edge-finding techniques. 
Monday, June 17, 2013:
Thomas Rothvoss
(MIT)
Approximating Bin Packing within O(log OPT * log log OPT) bins
For bin packing, the input consists of n items
with sizes s_1,...,s_n in [0,1] which have to be assigned to a minimum
number of bins of size 1. The seminal Karmarkar-Karp algorithm from '82
produces a solution with at most OPT + O(log^2 OPT) bins.
We provide the first improvement in three decades and show that
one can find a solution of cost
OPT + O(log OPT * log log OPT) in polynomial time.
This is achieved by rounding a fractional solution to the
Gilmore-Gomory LP relaxation using the Entropy Method from discrepancy theory.
The result is constructive via algorithms of Bansal and Lovett-Meka. 
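The abstract above concerns LP rounding, but the underlying problem is easy to state in code. A minimal sketch of the classical first-fit-decreasing heuristic for bin packing (a simple baseline, not the Karmarkar-Karp or entropy-method machinery of the talk; item sizes are invented for illustration):

```python
import math

def first_fit_decreasing(sizes, capacity=1.0, eps=1e-9):
    """Greedy heuristic: place each item, largest first, into the
    first bin with enough residual capacity; open a new bin otherwise."""
    loads = []
    for s in sorted(sizes, reverse=True):
        for i, load in enumerate(loads):
            if load + s <= capacity + eps:
                loads[i] += s
                break
        else:
            loads.append(s)
    return len(loads)

sizes = [0.6, 0.5, 0.5, 0.4, 0.4, 0.3, 0.3]
print(first_fit_decreasing(sizes))  # 3 bins
print(math.ceil(sum(sizes)))        # trivial volume lower bound: 3
```

On this particular instance the trivial volume bound ceil(sum of sizes) already certifies that 3 bins are optimal.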
Wednesday, June 12, 2013:
Flavia Bonomo
(Universidad de Buenos Aires, Argentina)
Clique-perfectness of complements of line graphs and coloring net-free line graphs
A clique-transversal of a graph G is a subset of vertices that
meets all the maximal cliques of G. A clique-independent set is a
collection of pairwise vertex-disjoint maximal cliques. The
clique-transversal number and clique-independence number of G are
the sizes of a minimum clique-transversal and a maximum
clique-independent set of G, respectively. A graph G is
clique-perfect if these two numbers are equal for every induced
subgraph of G. Unlike the class of perfect graphs, the class of
clique-perfect graphs is not closed under complementation, nor is a
characterization by minimal forbidden induced subgraphs known.
Nevertheless, partial results in this direction have been
obtained, namely characterizations of clique-perfect graphs by a
restricted list of forbidden induced subgraphs when the graph is
known to belong to certain graph classes. For instance, a
characterization of those line graphs that are clique-perfect is
given in terms of minimal forbidden induced subgraphs in a
previous joint work with Chudnovsky and Durán. In this work, we
study clique-perfectness of complements of line graphs. Precisely,
we characterize clique-perfect complements of line graphs in terms
of minimal forbidden induced subgraphs. For this class of graphs,
clique-perfectness can be reformulated in terms of matchings,
and this leads to a classification of graphs containing no
bipartite-claw with respect to edge-coloring (i.e., vertex
coloring of net-free line graphs). In addition, a structure
theorem for (bipartite-claw)-free graphs is described.
This is joint work with Guillermo Durán, Martin Safe and
Annegret Wagler. 
Tuesday, June 11, 2013:
Nguyen Viet Hang
(Laboratoire GSCOP, CNRS, Grenoble)
Inductive construction and decomposition of graded sparse graphs
Sparse graphs are extensively studied for their role in combinatorial rigidity theory. In this work, we consider graded sparse graphs, a generalization of sparse graphs in which different types of edges satisfy different sparsity conditions. We provide an inductive construction of these graphs and a decomposition into pseudoforests. These results generalize the work of Fekete and Szego on the inductive construction of sparse graphs, as well as the work of Streinu and Theran on the decomposition of sparse graphs, and thus generalize the original work of Nash-Williams on the decomposition of graphs into forests. Our results can be used to characterize the underlying graphs of rigid frameworks with mixed constraints. We also study the matroidal structure of the family of graded sparse graphs and reobtain the decomposition from a matroid viewpoint.
Joint work with Bill Jackson, Queen Mary University of London. 
Wednesday, May 22, 2013:
Aditya Bhaskara
(EPFL)
Uniqueness of Tensor Decompositions and the Polynomial Identifiability of Mixtures
I will talk about a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions. Given a tensor whose decomposition satisfies certain "rank" conditions, we will show that it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error.
An important question in machine learning is to learn the parameters of various latent variable models (such as HMMs, topic models, etc.) by observing samples generated according to the model. Our theorem on the uniqueness of tensor decompositions implies that in many of these settings, the parameters can be uniquely identified using polynomially many samples. Such polynomial identifiability is an essential first step towards efficient learning algorithms.
I will compare our results to the recent algorithms based on tensor decompositions which have been used to estimate the parameters of hidden variable models under certain "nondegeneracy" assumptions. Our methods give a way to go beyond this nondegeneracy barrier, and establish polynomial identifiability of the parameters under much milder conditions.
This is joint work with Moses Charikar (Princeton University) and Aravindan Vijayaraghavan (CMU). 
Wednesday, May 15, 2013:
Natan Rubin
(FU Berlin)
On Kinetic Delaunay Triangulations: A Near Quadratic Bound for Unit Speed Motions
Let P be a collection of n points in the plane, each moving along some
straight line at unit speed. Three points of P form a Delaunay
triangle if their circumscribing circle contains no further point of
P. These triangles form the famous
Delaunay triangulation, denoted by DT(P). We obtain an almost tight
upper bound of O(n^{2+eps}), for any eps>0, on the maximum number of
discrete changes that DT(P) experiences during this motion. Our analysis
is cast in a purely topological setting, where we only assume that (i) any
four points can be cocircular at most three times, and (ii) no triple of
points can be collinear more than twice; these assumptions hold for unit
speed motions. The number of such changes determines the complexity of
several fundamental geometric algorithms. 
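The discrete changes discussed above are flips certified by the classical in-circle predicate. A minimal Python sketch of that predicate via the lifted 3x3 determinant (function name and test points are our own illustration):

```python
def in_circle(a, b, c, d):
    """Sign of the lifted determinant: positive iff d lies strictly
    inside the circle through a, b, c (taken in counter-clockwise
    order), zero iff the four points are cocircular."""
    rows = []
    for (px, py) in (a, b, c):
        rows.append((px - d[0], py - d[1],
                     (px - d[0]) ** 2 + (py - d[1]) ** 2))
    (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
    return (ax * (by * cw - bw * cy)
            - ay * (bx * cw - bw * cx)
            + aw * (bx * cy - by * cx))

# Unit circle through (1,0), (0,1), (-1,0): the origin is inside,
# (5,5) is outside, and (0,-1) lies exactly on the circle.
print(in_circle((1, 0), (0, 1), (-1, 0), (0, 0)))   # positive
print(in_circle((1, 0), (0, 1), (-1, 0), (5, 5)))   # negative
print(in_circle((1, 0), (0, 1), (-1, 0), (0, -1)))  # 0
```

With integer coordinates the computation is exact, which is why kinetic Delaunay analyses can count cocircularity events combinatorially.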
Wednesday, May 1, 2013:
David Steurer
(Cornell University)
Approximate Constraint Satisfaction Requires Large LP Relaxations
We prove superpolynomial lower bounds on the size of linear programming relaxations for approximation versions of constraint satisfaction problems.
We show that for these problems, polynomial-sized linear programs are exactly as powerful as programs arising from a constant number of levels of the Sherali-Adams hierarchy.
In particular, our results imply that no polynomial-sized LP relaxation for Max Cut achieves an approximation ratio better than 1/2.
Based on joint work with Siu On Chan, James R. Lee, and Prasad Raghavendra.

Wednesday, April 10, 2013:
Radoslav Fulek
(Charles University)
Extending Partial Representations of Circle Graphs
A circle graph is the intersection graph of a set of chords of a circle.
We call the set of chords of a circle representing a circle graph G
the chord representation of G. It is an old result in algorithmic graph
theory that there exists a polynomial-time algorithm that decides whether
a given abstract graph G belongs to the class of circle graphs.
We generalize this recognition algorithm to the case when a chord
representation of an induced subgraph G' of G is given as part of the
input. Our algorithm decides in polynomial time whether the given partial representation
of G corresponding to G' can be extended to a chord representation of G.
Our algorithm is based on the characterization of circle graphs by Walid
Naji. This solves an open question raised by Klavík et al.
Joint work with Steven
Chaplick and Pavel Klavík. 
Wednesday, March 27, 2013:
Jarosław Byrka
(University of Wrocław)
Approximation Algorithms for the Joint Replenishment Problem with Deadlines
The Joint Replenishment Problem (JRP) is a fundamental
optimization problem in supplychain management, concerned with optimizing
the flow of goods over time from a supplier to retailers. Over time,
in response to demands at the retailers, the supplier sends shipments, via
a warehouse, to the retailers. The objective is to schedule shipments to
minimize the sum of shipping costs and retailers' waiting costs.
We study the approximability of JRP with deadlines, where instead of
waiting costs the retailers impose strict deadlines.
We study the integrality gap of the standard linear programming (LP)
relaxation, giving a lower bound of 1.207, and an upper bound and
approximation ratio of 1.574.
The best previous upper bound and approximation ratio was 1.667; no
lower bound was previously published. For the special case when all demand
periods are of equal length we give an upper bound of 1.5 and a lower
bound of 1.2, and show APX-hardness.
This is a joint work with Marcin Bienkowski, Marek Chrobak, Neil Dobbs,
Tomasz Nowicki, Maxim Sviridenko, Grzegorz Swirszcz, and Neal E. Young.

Tuesday, January 22, 2013:
Dan Suciu
(University of Washington)
Counting Communication Steps for Parallel Query Processing
We consider the following problem. A large number of servers, p, are
used to answer a query on a much larger input database, of size n.
Each server has only O(n/p) space or, in a relaxed model, up to
O(n/p^{1-eps}) space. How many global communication steps are needed
to compute this query? This problem is of critical importance in Big
Data Management, where data is processed and analyzed on large
distributed clusters, because the main bottleneck in these computations
is the number of communication steps and the amount of data being
exchanged.
I will describe some new theoretical results proving both lower and
upper bounds. In the case of a single round of communication, the
lower bound is in the strongest possible model, when arbitrary bits
may be exchanged. Here we prove that if a conjunctive query is
computed in a single step, then it requires Omega(n/p^{1-eps}) space
per server, where eps is expressed in terms of a tight, optimal
fractional edge cover of the query; the proof uses entropy and
Friedgut's inequalities (which I will define in the talk).
Conversely, one can compute the query in one communication round using
O(n/p^{1-eps}) space per server, where eps is expressed in terms of a
tight, optimal fractional vertex packing of the query. In the case of
multiple rounds of communications, we prove a lower bound in terms of
a certain hierarchical decomposition of a query, and an upper bound in
terms of the depth of a query plan. The lower bound is the first such
bound for multiple rounds, but the proof requires a slightly more
restricted model, where the first round can exchange arbitrary bits,
but rounds 2, 3, ... are restricted to exchange only known tuples.
Joint work with P. Beame and P. Koutris. 
Wednesday, December 12, 2012:
Eden Chlamtac
(Ben Gurion University)
Sparse Spanners via Dense Subgraphs
A spanner (of a graph) with stretch s (an s-spanner) is a subset of
edges of the graph for which the shortest path metric is preserved up
to a factor s. Spanners have a number of applications ranging from
parallel and distributed computing to property testing. We focus on
the Lowest-Degree 2-Spanner (LD2S) problem, where we wish to find a
2-spanner of a graph with unit edge lengths, while minimizing the
maximum degree in the spanner. This problem was previously studied by
Kortsarz and Peleg [SODA'94; Siam J. Comp.'98], who gave a
\tilde{O}(Delta^{1/4}) approximation, where Delta is the maximum
degree in the original graph. We improve on this by giving a
\tilde{O}(Delta^c) approximation, where c = 3 - 2 sqrt{2} = 0.17...
Our algorithm for LD2S relies on a minimisation version of Densest
k-Subgraph which we call Smallest m-Edge Subgraph (SmES). For this
problem we give a \tilde{O}(n^c) approximation, using a novel
randomised rounding of the Sherali-Adams LP hierarchy which chooses
both vertices and edges with probabilities proportional to their LP
values. This property proves quite challenging to achieve for
hierarchies, yet is a crucial part of our reduction from LD2S to SmES.
Based on joint work with Michael Dinitz and Robert Krauthgamer. 
Tuesday, December 11, 2012:
Fabrizio Grandoni
(IDSIA)
Improved Distance Sensitivity Oracles via Fast Single-Source Replacement Paths
A distance sensitivity oracle is a data structure which, given two nodes s and t in a directed edge-weighted graph G and an edge e, returns the length of a shortest s-t path not containing e, a so-called replacement path for the triple (s,t,e). Such oracles are used to quickly recover from edge failures.
In this talk we consider the case of integer weights in the interval [-M,M], and present the first distance sensitivity oracle that achieves simultaneously subcubic preprocessing time and sublinear query time. More precisely, for a given parameter alpha in [0,1], our oracle has preprocessing time \tilde{O}(Mn^{omega+1/2}+Mn^{omega+alpha(4-omega)}) and query time \tilde{O}(n^{1-alpha}). Here omega<2.373 denotes the matrix multiplication exponent. For comparison, the previous best oracle for small integer weights has \tilde{O}(Mn^{omega+1-alpha}) preprocessing time and (superlinear) \tilde{O}(n^{1+alpha}) query time [Weimann, Yuster FOCS'10].
The main novelty in our approach is an algorithm to compute all the replacement paths from a given source s, an interesting problem on its own. We can solve the latter single-source replacement paths problem in \tilde{O}(APSP(n,M)) time, where APSP(n,M) < \tilde{O}(M^{0.681}n^{2.575}) [Zwick, JACM'02] is the runtime for computing all-pairs shortest paths in a graph with n vertices and integer edge weights in [-M,M].
For positive weights the runtime of our algorithm reduces to \tilde{O}(Mn^omega). This matches the best known runtime for the simpler replacement paths problem in which both the source s and the target t are fixed [Vassilevska, SODA'11].
(Joint work with Virginia Vassilevska Williams, based on a paper in FOCS'12) 
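The replacement paths problem admits a concrete naive baseline: delete the failed edge and re-run a shortest-path computation per failure. A Python sketch of that baseline (far slower than the \tilde{O}(APSP) algorithm of the talk; the example graph and function names are our own):

```python
from heapq import heappush, heappop

def dijkstra(adj, s, t, banned=None):
    """Shortest s-t distance, optionally skipping one banned edge (u, v)."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, []):
            if (u, v) == banned:
                continue
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heappush(pq, (d + w, v))
    return float('inf')

def replacement_paths(adj, s, t, path_edges):
    """Naive replacement paths: one full Dijkstra run per failed edge."""
    return {e: dijkstra(adj, s, t, banned=e) for e in path_edges}

# Shortest 0-3 path is 0->1->3 of length 2; either of its edges
# failing forces the detour 0->2->3 of length 4.
adj = {0: [(1, 1), (2, 3)], 1: [(3, 1)], 2: [(3, 1)], 3: []}
print(dijkstra(adj, 0, 3))                              # 2
print(replacement_paths(adj, 0, 3, [(0, 1), (1, 3)]))   # {(0, 1): 4, (1, 3): 4}
```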
Thursday, December 6, 2012:
Vahab Mirrokni
(Google Research)
Online Ad Allocation: Adversarial, Stochastic and Simultaneous Approximations
As an important component of any ad serving system, online capacity (or budget) planning is a central problem in online ad allocation. I will survey primal-based and dual-based techniques borrowed from the online stochastic matching literature and report theoretical approximation guarantees and practical evaluations of these algorithms on real data sets. Finally, inspired by practical applications, I will discuss simultaneous approximation results for both adversarial and stochastic models and present some recent theoretical results in this context.

Tuesday, December 4, 2012:
András Sebő
(Grenoble)
Eight-fifth approximation for TSP paths
We prove the approximation ratio 8/5 for the metric s-t path TSP
problem, and more generally for shortest connected T-joins.
The algorithm that achieves this ratio is the simple ``Best of Many''
version of Christofides' algorithm (1976), suggested by An, Kleinberg
and Shmoys (2012), which consists in determining the best Christofides
s-t tour out of those constructed from a family F_{>0} of
trees having a convex combination dominated by an optimal solution x^*
of the fractional relaxation. They give the approximation guarantee
(1+sqrt{5})/2 for such an s-t tour, which is the first
improvement after the 5/3 guarantee of Hoogeveen's Christofides-type
algorithm (1991). Cheriyan, Friggstad and Gao (2012) extended this
result to a 13/8-approximation of shortest connected T-joins, for
|T| >= 4.
The ratio 8/5 is proved by simplifying and improving the approach of
An, Kleinberg and Shmoys, which consists in completing x^*/2 in order to
dominate the cost of "parity correction" for spanning trees. We
partition the edge set of each spanning tree in F_{>0} into an
s-t path (or more generally, into a T-join) and its complement,
which induces a decomposition of x^*. This decomposition can be
refined and then efficiently used to complete x^*/2 without using
linear programming or particular properties of T, but by adding to
each cut deficient for x^*/2 an individually tailored, explicitly given
vector, inherent in the problem.
A simple example shows that the Best of Many Christofides algorithm may
fail to find an s-t tour shorter than 3/2 times the (in this example
coinciding) optima of the problem and of its fractional relaxation. 
Thursday, November 29, 2012:
Parinya Chalermsook
(IDSIA)
Graph Products Revisited: Tight Approximation Hardness of Induced Matching, Poset Dimension, and More
We consider an old-fashioned reduction for proving hardness
of approximation. Suppose we have a reduction from maximum independent
set (or graph coloring) that "almost" works but does not, due to a
relatively large error term. We show how to use graph products to get
rid of such error terms. This technique is illustrated by showing
nearly tight lower bounds for various problems in discrete mathematics
and computer science, including Bipartite Induced Matching, Poset
Dimension, Maximum Feasible Subsystem (of linear inequalities) with
0/1 Coefficients, Unit-demand and Single-minded Pricing problems, and
Boxicity of Graphs.
This work will appear in SODA'13 (joint with Bundit Laekhanukit and
Danupon Nanongkai). 
Tuesday, November 27, 2012:
Monaldo Mastrolilli
(IDSIA)
How to Sell Hyperedges: The Hypermatching Assignment Problem
We are given a set of clients with budget constraints and a set of indivisible items. Each client is willing to buy one or more bundles of at most $k$ items each (bundles can be seen as hyperedges in a $k$-hypergraph). If client $i$ gets a bundle $e$, she pays $c_{i,e}$ and yields a net profit $w_{i,e}$. The Hypermatching Assignment Problem (HAP) is to assign a set of pairwise disjoint bundles to clients so as to maximize the total profit while respecting the budgets.
This problem has various applications in production planning and budget-constrained auctions and generalizes well-studied problems in combinatorial optimization: for example, the weighted (unweighted) $k$-hypergraph matching problem is the special case of HAP with one client having unbounded budget and general (unit) profits; the Generalized Assignment Problem (GAP) is the special case of HAP with $k=1$.
Let $epsilon>0$ denote an arbitrarily small constant. In this paper we obtain the following main results:
1) We give a randomized $(k+1+epsilon)$-approximation algorithm for HAP, which is based on rounding the $1$-round Lasserre strengthening of a novel LP. This is one of a few approximation results based on Lasserre hierarchies and our approach might be of independent interest. We remark that for weighted $k$-hypergraph matching no LP or SDP relaxation is known to have integrality gap better than $k-1+1/k$ for general $k$ [Chan and Lau, SODA'10].
2) For the relevant special case that one wants to maximize the total revenue (i.e., $c_{i,e}=w_{i,e}$), we present a local-search-based $(k+O(sqrt{k}))/2$-approximation algorithm for $k=O(1)$. This almost matches the best known $(k+1+epsilon)/2$-approximation ratio by Berman [SWAT'00] for the (less general) weighted $k$-hypergraph matching problem.
3) For the unweighted $k$-hypergraph matching problem, we present a $(k+1+epsilon)/3$-approximation in quasi-polynomial time.
This improves over the $(k+2)/3$-approximation by Halldórsson [SODA'95] (also in quasi-polynomial time). In particular, this suggests that a $4/3+epsilon$ approximation for $3$-dimensional matching might exist, whereas the currently best known polynomial-time approximation ratio is $3/2$.
(Joint work with Marek Cygan and Fabrizio Grandoni) 
Tuesday, October 2, 2012:
HyungChan An
(EPFL)
Improving Christofides' Algorithm for the s-t Path TSP
We present a deterministic (1+sqrt(5))/2-approximation algorithm for the s-t path TSP for an arbitrary metric. Given a symmetric metric cost on n vertices including two prespecified endpoints, the problem is to find a shortest Hamiltonian path between the two endpoints; Hoogeveen showed that the natural variant of Christofides' algorithm is a 5/3-approximation algorithm for this problem, and this asymptotically tight bound had in fact been the best approximation ratio known until now. We modify this algorithm so that it chooses the initial spanning tree based on an optimal solution to the Held-Karp relaxation rather than a minimum spanning tree; we prove that this simple but crucial modification leads to an improved approximation ratio, surpassing the 20-year-old barrier set by the natural Christofides' algorithm variant. Our algorithm also proves an upper bound of (1+sqrt(5))/2 on the integrality gap of the path-variant Held-Karp relaxation. The techniques devised in this paper can be applied to other optimization problems as well: these applications include improved approximation algorithms and improved LP integrality gap upper bounds for the prize-collecting s-t path problem and the unit-weight graphical metric s-t path TSP.
This is joint work with Bobby Kleinberg and David Shmoys. 
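For very small instances the s-t path TSP objective can be checked exactly by brute force, which makes the problem statement above concrete (this illustrates only the problem definition, not the Held-Karp-based algorithm of the talk; the distance matrix is invented):

```python
from itertools import permutations

def shortest_st_path(dist, s, t):
    """Exact s-t path TSP: try every ordering of the intermediate
    vertices; dist is a symmetric matrix. Feasible only for tiny n."""
    inner = [v for v in range(len(dist)) if v not in (s, t)]
    best = float('inf')
    for perm in permutations(inner):
        path = [s, *perm, t]
        cost = sum(dist[a][b] for a, b in zip(path, path[1:]))
        best = min(best, cost)
    return best

# Four points on a line with distance |i - j|: the path 0-1-2-3 is optimal.
line = [[abs(i - j) for j in range(4)] for i in range(4)]
print(shortest_st_path(line, 0, 3))  # 3
```

Note how forcing both endpoints (s=0, t=1 below in the tests) can make the optimum much longer than the direct distance, which is exactly what makes the path variant harder than the tour variant.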
Wednesday, September 19, 2012:
Piotr Sankowski
(University of Warsaw)
Algorithmic Applications of Baur-Strassen's Theorem: Shortest Cycles, Diameter and Matchings
Consider a directed or an undirected graph with integral edge weights
from the set [-W, W] that does not contain negative weight cycles. In
this paper, we introduce a general framework for solving problems on
such graphs using matrix multiplication. The framework is based on the
usage of Baur-Strassen's theorem and of Storjohann's determinant
algorithm. It allows us to give new and simple solutions to the
following problems:
- Finding Shortest Cycles: We give a simple O(Wn^omega) time
algorithm for finding shortest cycles in undirected and directed
graphs. For directed graphs this matches the time bounds obtained in
2011 by Roditty and Vassilevska Williams. On the other hand, no
algorithm working in $\tilde{O}(Wn^{omega})$ time was previously
known for undirected graphs with negative weights.
- Computing Diameter: We give a simple O(Wn^omega) time algorithm
for computing the diameter of an undirected or directed graph. This
considerably improves the bounds of Yuster from 2010, who was able to
obtain this time bound only in the case of directed graphs with
positive weights. In contrast, our algorithm works in the same time
bound for both directed and undirected graphs with negative weights.
- Finding Minimum Weight Perfect Matchings: We present an
O(Wn^omega) time algorithm for finding minimum weight perfect
matchings in undirected graphs. This resolves an open problem posed
by Sankowski in 2006, who presented such an algorithm only in the
case of bipartite graphs.
We believe that the presented framework can find applications for
solving a larger spectrum of related problems. As an illustrative example
we apply it to the problem of computing a set of vertices that lie on
cycles of length at most t, for some t. We give a simple O(Wn^omega)
time algorithm for this problem that improves over the O(tWn^omega)
time algorithm given by Yuster in 2011.
This is joint work with Marek Cygan and Hal Gabow. 
Tuesday, September 18, 2012:
Christian Sommer
(MIT)
Exact and Approximate ShortestPath Queries
We discuss the problem of efficiently computing a shortest path
between two nodes of a network, a problem with numerous
applications. The shortest-path query problem in particular occurs in
transportation (route planning and navigation, as well as logistics and
traffic simulations), in packet routing, in social networks, and in
many other scenarios.
Strategies for computing answers to shortest-path queries may involve
the use of precomputed data structures (also called distance oracles)
in order to improve the query time. We survey the main ideas and
techniques of shortest-path query methods; some of them are used
in state-of-the-art route planning software such as Microsoft Bing Maps. 
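The precompute-then-query idea described above can be shown in its most naive form: store exact single-source distances from every node, then answer queries in O(1). Real distance oracles trade this exhaustive preprocessing for far more compact structures; the sketch below (graph and names invented) only illustrates the interface:

```python
from heapq import heappush, heappop

def build_oracle(adj):
    """Trivial exact distance oracle: run Dijkstra from every node
    during preprocessing, so each query is a dictionary lookup."""
    oracle = {}
    for s in adj:
        dist = {s: 0}
        pq = [(0, s)]
        while pq:
            d, u = heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heappush(pq, (d + w, v))
        oracle[s] = dist
    return oracle

def query(oracle, s, t):
    return oracle[s].get(t, float('inf'))

# Triangle where the direct 0-2 edge (weight 5) is beaten by 0-1-2.
adj = {0: [(1, 1), (2, 5)], 1: [(0, 1), (2, 1)], 2: [(0, 5), (1, 1)]}
oracle = build_oracle(adj)
print(query(oracle, 0, 2))  # 2
```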
Tuesday, September 11, 2012:
Aditya Bhaskara
(EPFL)
Algorithms for Minimizing the Discrepancy of Set Systems
I will talk about some old and new results on the discrepancy of set systems. We will see an outline of Spencer's "six standard deviations" result, which uses the so-called Entropy Method. Next, I will talk in detail about a recent paper of Lovett and Meka which gives a simpler and "constructive" proof of this theorem. I will end with some related open questions, in particular the Beck-Fiala and Komlos conjectures. If time permits, I will outline Banaszczyk's bound, which is the best known (and non-constructive) upper bound in these settings. Making this argument constructive is also an interesting open question.
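The discrepancy notion behind Spencer's theorem and the Lovett-Meka algorithm can be checked directly on tiny instances by exhaustive search over all ±1 colorings (a toy definition-checker, nothing like the entropy method itself; the set system below is invented):

```python
from itertools import product

def discrepancy(n, sets):
    """Exact discrepancy of a set system over ground set {0,...,n-1}:
    minimize, over all 2^n colorings x in {-1,+1}^n, the largest
    absolute column sum max_S |sum_{i in S} x_i|."""
    best = float('inf')
    for x in product((-1, 1), repeat=n):
        worst = max(abs(sum(x[i] for i in s)) for s in sets)
        best = min(best, worst)
    return best

# All three 2-element subsets of {0,1,2}: by pigeonhole some pair gets
# equal colors, so the discrepancy is 2 rather than 0.
print(discrepancy(3, [[0, 1], [1, 2], [0, 2]]))  # 2
```

Spencer's theorem says that for n sets over n elements the answer never exceeds 6 sqrt(n), far below the sqrt(n log n) of a random coloring.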

diverse speakers
Two-day workshop on Combinatorial Geometry
See website for details