Iddo Tzameret. Sparser Random 3-SAT Refutation Algorithms and the Interpolation Problem
Abstract: We formalize a combinatorial principle, called \textit{the 3XOR principle}, due to Feige, Kim and Ofek \cite{FKO06}, as a family of unsatisfiable propositional formulas for which refutations of small size in any propositional proof system that possesses the feasible interpolation property imply an efficient \emph{deterministic} refutation algorithm for random 3SAT with $n$ variables and $\Omega(n^{1.4})$ clauses. Such small-size refutations would improve the state of the art (with respect to clause density): the best current efficient refutation algorithm works only for $\Omega(n^{1.5})$ clauses \cite{FO07}.
We demonstrate polynomial-size refutations of the 3XOR principle in resolution operating with disjunctions of quadratic equations with small integer coefficients, denoted \RZQ; this is a weak extension of cutting planes with small coefficients. We show that \RZQ\ is weakly automatizable iff \RZ\ is weakly automatizable, where \RZ\ is similar to \RZQ\ but with linear instead of quadratic equations (introduced in \cite{RT07}). This reduces the problem of refuting random 3CNF with $n$ variables and $\Omega(n^{1.4})$ clauses to the interpolation problem of \RZQ\ and to the weak automatizability of \RZ.
Abstract: In this paper, we analyze several variants of a simple method for generating prime numbers with fewer random bits. To generate a prime $p$ less than $x$, the basic idea is to fix a constant $q \propto x^{1-\epsilon}$, pick a uniformly random $a < q$ coprime to $q$, and choose $p$ of the form $a + tq$, where only $t$ is updated if the primality test fails. We prove that variants of this approach provide prime generation algorithms requiring few random bits and whose output distribution is close to uniform, under progressively weaker assumptions: first a relatively strong conjecture by H. Montgomery, made precise by Friedlander and Granville; then the Extended Riemann Hypothesis; and finally fully unconditionally using the Barban-Davenport-Halberstam theorem.
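To make the basic method concrete, here is a minimal Python sketch of the idea described above (our illustration only, not the analyzed algorithm itself; the primality test from sympy and the parameter eps are our assumptions):

    import math
    import random
    from sympy import isprime  # any primality test would do here

    def gen_prime(x, eps=0.1):
        # Fix q ~ x^(1-eps), fix one random a < q coprime to q,
        # then redraw only t until a + t*q passes the primality test.
        q = int(x ** (1 - eps))
        a = random.randrange(1, q)
        while math.gcd(a, q) != 1:          # a must be coprime to q
            a = random.randrange(1, q)
        while True:
            t = random.randrange(1, (x - a) // q + 1)
            p = a + t * q                   # p < x by the choice of t
            if isprime(p):                  # only t is updated on failure
                return p

Note that for illustration purposes this loops over a short progression; the paper's analysis requires $x^{\epsilon}$ to be large enough for the progression to contain primes.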
We argue that this approach has a number of desirable properties compared to previous algorithms. In particular:
* it uses far fewer random bits than both the ``trivial algorithm'' (testing random numbers less than $x$ for primality) and Maurer's almost uniform prime generation algorithm;
* the distance of its output distribution to uniform can be made arbitrarily small, unlike algorithms like PRIMEINC (studied by Brandt and Damgård), which we show exhibit significant biases;
* all quality measures (number of primality tests, output entropy, randomness, etc.) can be obtained under very standard conjectures or even unconditionally, whereas most previous nontrivial algorithms can only be proved based on stronger, less standard assumptions like the Hardy-Littlewood prime tuple conjecture.
Jason Bell, Janusz Brzozowski, Nelma Moreira and Rogério Reis. Symmetric Groups and Quotient Complexity of Boolean Operations
Abstract: The quotient complexity of a regular language L is the number of left quotients of L, which is the same as the state complexity of L. Suppose that L and L' are binary regular languages with quotient complexities m and n, and that the transition semigroups of the minimal deterministic automata accepting L and L' are the symmetric groups S_m and S_n of degrees m and n, respectively. Denote by o any binary boolean operation that is not a constant and not a function of one argument only. For m,n >= 2 with (m,n) not in {(2,2),(3,4),(4,3),(4,4)} we prove that the quotient complexity of L o L' is mn if and only if either (a) m is not equal to n or (b) m = n and the bases (ordered pairs of generators) of S_m and S_n are not conjugate. For (m,n)\in {(2,2),(3,4),(4,3),(4,4)} we give examples to show that this need not hold. In proving these results we generalize the notion of uniform minimality to direct products of automata. We also establish a non-trivial connection between complexity of boolean operations and group theory.
Dmitry Gavinsky and Shachar Lovett. En Route to the log-rank Conjecture: New Reductions and Equivalent Formulations
Abstract: We prove that several measures in communication complexity are equivalent, up to polynomial factors in the logarithm of the rank of the associated matrix: deterministic communication complexity, randomized communication complexity, information cost and zero-communication cost. This shows that in order to prove the log-rank conjecture, it suffices to show that low-rank matrices have efficient protocols in any of the aforementioned measures.
Furthermore, we show that the notion of zero-communication complexity is equivalent to an extension of the common discrepancy bound. Linial et al. [Combinatorica, 2007] showed that the discrepancy of a sign matrix is lower-bounded by an inverse polynomial in the rank of the associated matrix. We show that if these results can be generalized to the extended discrepancy, this will imply the log-rank conjecture.
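For reference, the log-rank conjecture discussed above is the assertion (in its standard form, due to Lovász and Saks) that deterministic communication complexity is polylogarithmic in the real rank:
\[
  D(M) \;\le\; \mathrm{polylog}\big(\mathrm{rank}(M)\big),
\]
where $D(M)$ denotes the deterministic communication complexity of the boolean matrix $M$ and the rank is taken over the reals.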
Pawel Gawrychowski, Shay Mozes and Oren Weimann. Improved Submatrix Maximum Queries in Monge Matrices
Abstract: We present efficient data structures for submatrix maximum queries in Monge matrices and Monge partial matrices. For n x n Monge matrices, we give a data structure that requires O(n) space and answers submatrix maximum queries in O(log n) time. The best previous data structure [Kaplan et al., SODA'12] required O(n log n) space and O(log^2 n) query time. We also give an alternative data structure with constant query time and O(n^{1+epsilon}) construction time and space for any fixed epsilon<1. For n x n partial Monge matrices we obtain a data structure with O(n) space and O(log n \alpha(n)) query time. The data structure of Kaplan et al. required O(n log n \alpha(n)) space and O(log^2 n) query time.
Our improvements are enabled by a technique for exploiting the structure of the upper envelope of Monge matrices to efficiently report column maxima in skewed rectangular Monge matrices. We hope this technique can be useful in obtaining faster search algorithms in Monge partial matrices. In addition, we give a linear upper bound on the number of breakpoints in the upper envelope of a Monge partial matrix. This shows that the inverse Ackermann factor in the analysis of the data structure of Kaplan et al. is superfluous.
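As background, a matrix $M$ is Monge if $M[i][j] + M[i'][j'] \le M[i][j'] + M[i'][j]$ for all $i < i'$ and $j < j'$, and it is a standard fact that checking all adjacent 2x2 submatrices suffices. A minimal Python sketch of this check (our illustration, not part of the paper):

    def is_monge(M):
        # The Monge condition holds globally iff it holds on every
        # adjacent 2x2 submatrix (a standard local-to-global fact).
        rows, cols = len(M), len(M[0])
        return all(M[i][j] + M[i + 1][j + 1] <= M[i][j + 1] + M[i + 1][j]
                   for i in range(rows - 1) for j in range(cols - 1))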
Abstract: Consider the \emph{Ants Nearby Treasure Search (ANTS)} problem introduced by Feinerman, Korman, Lotker, and Sereni (PODC 2012), where $n$ mobile agents, initially placed at the origin of an infinite grid, collaboratively search for an adversarially hidden treasure. In this paper, the model of Feinerman et al.\ is adapted such that the agents are controlled by an asynchronous (randomized) finite state machine: they possess a constant-size memory and are able to communicate with each other through constant-size messages. Despite the restriction to constant-size memory, we show that their collaborative performance remains the same by presenting a distributed algorithm that matches a lower bound established by Feinerman et al.\ on the run-time of any ANTS algorithm.
Abstract: In this paper we consider online buffer scheduling problems in which an online stream of n items (jobs) with different colors (types) has to be processed by a machine with a buffer of size k. In the block-operation model, the machine chooses an active color and can, in each step, process all items of that color in the buffer. In the standard model initially introduced by Räcke, Sohler, and Westermann (ESA 2002), the machine processes items whose color matches the active color until no item in the buffer has the active color (note that the buffer is refilled in each step). Motivated by practical real-world applications, we assume we have prior stochastic information about the input. In particular, we assume that the colors of items are drawn i.i.d. from a possibly unknown distribution, or more generally, that the items arrive in a random order. In the random order setting, an adversary determines the color of each item in advance, but the items then arrive in a random order in the input stream. To the best of our knowledge, this is the first work which considers the reordering buffer problem in stochastic settings.
Our main result is demonstrating constant-competitive online algorithms for both the standard model and the block-operation model in the unknown distribution setting, and more generally in the random order setting. This is a major improvement over the competitiveness achievable in adversarial settings: the best competitive ratio in the adversarial setting is O(log log k) for both the standard and the block-operation models, by Avigdor-Elgrabli and Rabani (FOCS 2013) and Adamaszek et al. (STOC 2012). Along the way, we also show that in the random order setting, the block-operation model and the standard model are equivalent, in the sense that competitive algorithms for one yield competitive algorithms for the other with the same competitive ratio up to constant factors. To the best of our knowledge this is the first result of this type relating an algorithm for the standard model to an algorithm for the block-operation model. Last but not least, we show that in the uniform distribution setting, in which all colors appear with the same probability, a simple greedy algorithm is the best online algorithm in both models.
Felix Reidl, Peter Rossmanith, Fernando Sánchez Villaamil and Somnath Sikdar. A Faster Parameterized Algorithm for Treedepth
Abstract: The width measure treedepth, also known as vertex ranking, centered coloring and elimination tree height, is a well-established notion which has recently seen a resurgence of interest. We present an algorithm which—given as input an n-vertex graph, a tree decomposition of width w, and an integer t—decides Treedepth in time 2^{O(wt)}n. In conjunction with previous results we provide a simple algorithm and a fast algorithm which decide Treedepth in time 2^{O(2^t)}n and 2^{O(t^2)}n, respectively, and which do not require a tree decomposition as part of their input. This answers an open question posed by Ossona de Mendez and Nešetřil as to whether deciding Treedepth admits a simple algorithm. For chordal graphs we can prove a running time of 2^{O(t \log t)}n for the same algorithm.
Mikolaj Bojanczyk. Transducers with origin information
Abstract: Call a string-to-string transducer regular if it can be realised by one of the following equivalent models: MSO transductions, two-way deterministic automata with output, and streaming transducers with registers. This paper proposes to treat origin information as part of the semantics of a regular string-to-string transducer. With such semantics, the model admits a machine-independent characterisation, Angluin-style learning in polynomial time, as well as effective characterisations of natural subclasses such as one-way transducers or first-order definable transducers.
Noga Ron-Zewi, Eli Ben-Sasson, Julia Wolf and Madhur Tulsiani. Sampling-based proofs of almost-periodicity results and algorithmic applications
Abstract: We give new and simple combinatorial proofs of {\em almost-periodicity results} for sumsets of sets with small doubling in the spirit of Croot and Sisask \cite{Croot:2010fk}, whose almost-periodicity lemma has had far-reaching implications in additive combinatorics. We provide an alternative point of view which relies only on Chernoff's bound for sampling, and avoids the need for $L^p$-norm estimates used in the original proof of Croot and Sisask.
We demonstrate the usefulness of our new approach by showing that one can easily deduce from it two significant recent results proved using the Croot-Sisask almost-periodicity lemma -- the {\em quasipolynomial Bogolyubov-Ruzsa lemma} due to Sanders \cite{Sanders10} and a result on large subspaces contained in sumsets of dense sets due to Croot, Laba and Sisask \cite{Croot:2011we}.
We then turn to algorithmic applications, and show that our approach allows for almost-periodicity proofs to be converted in a natural way to probabilistic algorithms that decide membership in almost-periodic sumsets of dense subsets of $\F_2^n$. Exploiting this, we give a new algorithmic version of the quasipolynomial Bogolyubov-Ruzsa lemma.
Together with the results by the last two authors \cite{TulsianiW11}, this implies an algorithmic version of the \emph{quadratic Goldreich-Levin theorem} in which the number of terms in the quadratic Fourier decomposition of a given function, as well as the running time of the algorithm, are quasipolynomial in the error parameter $\epsilon$. The algorithmic version of the quasipolynomial Bogolyubov-Ruzsa lemma also implies an improvement in running time and performance of the self-corrector for the Reed-Muller code of order 2 at distance $1/2-\epsilon$ in \cite{TulsianiW11}.
Abstract: We study the query complexity of Weak Parity: the problem of computing the parity of an n-bit input string, where one only has to succeed on a 1/2+eps fraction of input strings, but must do so with high probability on those inputs where one does succeed. It is well-known that n randomized queries and n/2 quantum queries are needed to compute parity on all inputs. But surprisingly, we give a randomized algorithm for Weak Parity that makes only O(n/log^0.246(1/eps)) queries, as well as a quantum algorithm that makes only O(n/sqrt(log(1/eps))) queries. We also prove a lower bound of Omega(n/log(1/eps)) in both cases; and using extremal combinatorics, prove lower bounds of Omega(log n) in the randomized case and Omega(sqrt(log n)) in the quantum case for any eps>0. We show that improving our lower bounds is intimately related to two longstanding open problems about Boolean functions: the Sensitivity Conjecture, and the relationships between query complexity and polynomial degree.
Abstract: We bring domain theory and quantum logic together by characterising spectral posets, that is, those domains that arise as posets of Boolean subalgebras of a piecewise Boolean algebra. This leads to equivalent descriptions of the category of piecewise Boolean algebras: either as spectral posets equipped with an orientation, or as full structure sheaves on spectral posets.
Mitsuru Kusumoto and Yuichi Yoshida. Testing Forest-Isomorphism in the Adjacency List Model
Abstract: We consider the problem of testing whether two input forests are isomorphic or far from being so. An algorithm is called an $\varepsilon$-tester for forest-isomorphism if, given oracle access to two forests $G$ and $H$ in the adjacency list model, with high probability it accepts when $G$ and $H$ are isomorphic and rejects when at least $\varepsilon n$ edges must be modified to make $G$ isomorphic to $H$. We give an $\varepsilon$-tester for forest-isomorphism with query complexity $\polylog(n)$ and prove a lower bound of $\Omega(\sqrt{\log{n}})$ on the query complexity. Further, with the aid of the tester, we show that every graph property is testable in the adjacency list model with $\polylog(n)$ queries if the input graph is a forest.
Abstract: We consider capacity maximization in wireless networks under adversarial interference conditions. There are $n$ links, each consisting of a sender and a receiver, which repeatedly try to perform a successful transmission. In each time step, the success of attempted transmissions depends on interference conditions, which are captured by an interference model (e.g. the SINR model). Additionally, an adversarial jammer can render a $(1-\delta)$-fraction of time steps unsuccessful. Our main result is an algorithm based on no-regret learning converging to an $O\left(1/\delta\right)$-approximation. It provides even a constant-factor approximation when the jammer exactly blocks a $(1-\delta)$-fraction of time steps. In addition, we consider a stochastic jammer, for which we obtain a constant-factor approximation after a polynomial number of time steps. We also consider more general settings, in which links arrive and depart dynamically.
Jop Briët, Zeev Dvir, Guangda Hu and Shubhangi Saraf. Lower Bounds for Approximate LDCs
Abstract: We study an approximate version of $q$-query LDCs (Locally Decodable Codes) over the real numbers and prove lower bounds on the encoding length of such codes. A $q$-query $(\alpha,\delta)$-approximate LDC is a set $V$ of $n$ points in $\R^d$ so that, for each $i \in [d]$ there are $\Omega(\delta n)$ disjoint $q$-tuples $(\vec{u}_1,\ldots,\vec{u}_q) $ in $V$ so that $\span(\vec{u}_1,\ldots,\vec{u}_q)$ contains a unit vector whose $i$th coordinate is at least $\alpha$. We prove exponential lower bounds of the form $n \geq 2^{\Omega(\alpha \delta \sqrt{d})}$ for the case $q=2$ and, in some cases, stronger bounds (exponential in $d$).
Mohsen Ghaffari. Near-Optimal Distributed Approximation of Minimum-Weight Connected Dominating Set
Abstract: We present a near-optimal distributed approximation algorithm for the minimum-weight connected dominating set (MCDS) problem in the CONGEST model, where in each round each node can send $O(\log n)$ bits to each neighbor.
The presented algorithm finds an $O(\log n)$ approximation in $\tilde{O}(D+\sqrt{n})$ rounds, where $D$ is the network diameter and $n$ is the number of nodes.
MCDS is a classical NP-hard problem and the achieved approximation factor $O(\log n)$ is known to be optimal up to a constant factor, unless P=NP. Furthermore, the $\tilde{O}(D+\sqrt{n})$ round complexity is known to be optimal modulo logarithmic factors (for any approximation), following [Das Sarma et al.---STOC'11].
Abstract: Two mobile agents, starting from different nodes of an unknown network, have to meet at the same node. Agents move in synchronous rounds using a deterministic algorithm. Each agent has a different label, which it can use in the execution of the algorithm, but it does not know the label of the other agent. Agents do not know any bound on the size of the network. In each round an agent decides if it remains idle or if it wants to move to one of the adjacent nodes. Agents are subject to delay faults: if an agent incurs a fault in a given round, it remains in the current node, regardless of its decision. If it planned to move and the fault happened, the agent is aware of it. We consider three scenarios of fault distribution: random (independently in each round and for each agent with constant probability 0 < p < 1), unbounded adversarial (the adversary can delay an agent for an arbitrary finite number of consecutive rounds) and bounded adversarial (the adversary can delay an agent for at most c consecutive rounds, where c is unknown to the agents). The quality measure of a rendezvous algorithm is its cost, which is the total number of edge traversals. For random faults, we show an algorithm with cost polynomial in the size n of the network and polylogarithmic in the larger label L, which achieves rendezvous with very high probability in arbitrary networks. By contrast, for unbounded adversarial faults we show that rendezvous is not feasible, even in the class of rings. Under this scenario we give a rendezvous algorithm with cost O(nl), where l is the smaller label, working in arbitrary trees, and we show that \Omega(l) is a lower bound on the rendezvous cost, even for the two-node tree. For bounded adversarial faults, we give a rendezvous algorithm working for arbitrary networks, with cost polynomial in n and logarithmic in the bound c and in the larger label L.
Abstract: We show that all-instances termination of the chase is undecidable. More precisely, there is no algorithm deciding, for a given set T consisting of Tuple Generating Dependencies (a.k.a. a Datalog^exists program), whether the T-chase on D will terminate for every finite database instance D.
Our method applies to Oblivious Chase and Semi-Oblivious Chase, and -- after a slight modification -- also to Standard Chase. This means that we give a (negative) solution to the all-instances termination problem for all versions of the chase that are usually considered.
The arity we need for our undecidability proof is three. We also show that the problem is EXPSPACE-hard for binary signatures, but decidability for this case is left open.
Both proofs -- for ternary and binary signatures -- are easy, once you know them.
Abstract: The partition of graphs into nice subgraphs is a central algorithmic problem with strong ties to matching theory. We study the partitioning of undirected graphs into stars, a problem known to be NP-complete even for the case of stars on three vertices. We perform a thorough computational complexity study of the problem on subclasses of perfect graphs and identify several polynomial-time solvable and NP-hard cases, for example, on interval graphs, grid graphs, and bipartite permutation graphs.
Abstract: Given a linear recurrence sequence (LRS) over the integers, the Positivity Problem asks whether all terms of the sequence are positive. We show that, for simple LRS (those whose characteristic polynomial has no repeated roots) of order 9 or less, Positivity is decidable, with complexity in the Counting Hierarchy.
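For concreteness, an integer LRS $(u_n)_{n \ge 0}$ of order $k$ obeys a recurrence of the form
\[
  u_{n+k} = a_{k-1} u_{n+k-1} + \cdots + a_1 u_{n+1} + a_0 u_n, \qquad a_0, \ldots, a_{k-1} \in \mathbb{Z},
\]
its characteristic polynomial is $x^k - a_{k-1}x^{k-1} - \cdots - a_1 x - a_0$, and Positivity asks whether $u_n > 0$ for all $n$ (standard definitions, recalled here for convenience).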
Afshin Nikzad and R Ravi. Sending Secrets Swiftly: Approximation Algorithms for Generalized Multicast Problems
Abstract: We consider natural generalizations of the minimum broadcast time problem under the telephone model, where a rumor from a root node must be sent via phone calls to the whole graph in the minimum number of rounds; the telephone model implies that the set of edges involved in communicating in a round form a matching. The extensions we consider involve generalizing the number of calls that a vertex may participate in (the capacitated version), allowing conference calls (the hyperedge version), as well as a new multicommodity version we introduce where the rumors are no longer from a single node but from different sources and intended for specific destinations. Based on ideas from \cite{dirbro0,kortundir}, we present a very simple greedy algorithm for the basic multicast problem with a logarithmic performance guarantee and adapt it to the extensions to design approximation algorithms with typically polylogarithmic guarantees. For the multicommodity version, we give the first approximation algorithm with performance ratio $2^{ O\left(\log\log k \sqrt{\log k}\right)}$ for $k$ source-sink pairs. We provide nearly matching lower bounds for the hypercasting problem. For the multicommodity multicasting problem, we present improved guarantees for other variants involving asymmetric capacities, a small number of terminals, and larger additive guarantees.
Andreas Björklund and Thore Husfeldt. Shortest Two Disjoint Paths in Polynomial Time
Abstract: Given an undirected graph and two pairs of vertices $(s_i,t_i)$ for $i\in\{1,2\}$, we show that there is a polynomial-time Monte Carlo algorithm that finds vertex-disjoint paths of smallest total length joining $s_i$ and $t_i$ for $i\in\{1,2\}$, respectively, or concludes that most likely no such paths exist. This answers an open problem noted by Eilam-Tzoreff in 1998 and by Kobayashi and Sommer in 2010.
Our algorithm is algebraic and uses permanents over the quotient ring $\mathbf{Z}_{4}[X]/(X^m)$ in combination with Mulmuley, Vazirani and Vazirani's isolation lemma to detect a solution. We develop a fast algorithm for permanents over said ring by modifying Valiant's 1979 algorithm for the permanent over $\mathbf{Z}_{2^l}$.
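For reference, the isolation lemma of Mulmuley, Vazirani and Vazirani states (in its standard form): if every element of an $m$-element ground set independently receives a uniform random weight from $\{1, \ldots, W\}$, then for any family $\mathcal{F}$ of subsets,
\[
  \Pr\big[\text{the minimum-weight set of } \mathcal{F} \text{ is unique}\big] \;\ge\; 1 - \frac{m}{W},
\]
so taking $W = 2m$ already isolates a solution with probability at least $1/2$.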
Abstract: In this paper, we initiate the systematic study of solving linear programs under the constraint of differential privacy. The first step in this study is simply to define the problem. To this end, we introduce several natural classes of {\em private linear programs} that capture different ways sensitive data can be incorporated into the specification of a linear program. For each class of linear programs we give an efficient, differentially private solver based on the multiplicative weights framework, or we give an impossibility result.
Gábor Ivanyos, Raghav Kulkarni, Youming Qiao, Miklos Santha and Aarthi Sundaram. On the complexity of trial and error for constraint satisfaction problems
Abstract: In a recent work of Bei, Chen and Zhang (STOC 2013), a trial and error model of computing was introduced and applied to some constraint satisfaction problems. In this model the input is hidden by an oracle which, for a candidate assignment, reveals some information about a violated constraint if the assignment is not satisfying. In this paper we initiate a systematic study of constraint satisfaction problems in the trial and error model. To achieve this, we first adopt a formal framework for CSPs, and based on this framework we define several types of revealing oracles. Our main contribution is to develop a transfer theorem for each type of revealing oracle, under a broad class of parameters. To any hidden CSP with a specific type of revealing oracle, the transfer theorem associates another, potentially harder CSP in the normal setting, such that their complexities are polynomial-time equivalent. This in principle transfers the study of a large class of hidden CSPs, possibly with a promise on the instances, to the study of CSPs in the normal setting. We then apply the transfer theorems to get polynomial-time algorithms or hardness results for hidden CSPs, including satisfaction problems, monotone graph properties, isomorphism problems, and the exact version of the Unique Games problem. Most of the proofs of these results are short and straightforward, which exhibits the power of the transfer theorems.
Merav Parter. Bypassing Erd\H{o}s' Girth Conjecture: Hybrid Stretch and Sourcewise Spanners
Abstract: An $(\alpha,\beta)$-spanner of an $n$-vertex graph $G=(V,E)$ is a subgraph $H$ of $G$ satisfying that $\dist(u, v, H) \leq \alpha \cdot \dist(u, v, G)+\beta$ for every pair $\langle u, v \rangle \in V \times V$, where $\dist(u,v,G')$ denotes the distance between $u$ and $v$ in $G' \subseteq G$. It is known that for every integer $k \geq 1$, every graph $G$ has a polynomially constructible $(2k-1,0)$-spanner of size $O(n^{1+1/k})$. This size-stretch bound is essentially optimal by the girth conjecture. Yet, it is important to note that any argument based on the girth only applies to \emph{adjacent vertices}. It is therefore intriguing to ask if one can ``bypass" the conjecture by settling for a multiplicative stretch of $2k-1$ only for \emph{neighboring} vertex pairs, while maintaining a strictly \emph{better} multiplicative stretch for the rest of the pairs. We answer this question in the affirmative and introduce the notion of \emph{$k$-hybrid spanners}, in which non-neighboring vertex pairs enjoy a \emph{multiplicative} $k$-stretch and the neighboring vertex pairs enjoy a \emph{multiplicative} $(2k-1)$-stretch (hence, tight by the conjecture). We show that for every unweighted $n$-vertex graph $G$ with $m$ edges, there is a (polynomially constructible) $k$-hybrid spanner with $O(k^2 \cdot n^{1+1/k})$ edges. This should be compared against the current best $(\alpha,\beta)$-spanner construction of Baswana et al. that obtains $(k,k-1)$ stretch with $O(k \cdot n^{1+1/k})$ edges.

An alternative natural approach to bypass the girth conjecture is to allow ourselves to take care only of a subset of pairs $S \times V$ for a given subset of vertices $S \subseteq V$, referred to here as \emph{sources}. Spanners in which the distances in $S \times V$ are bounded are referred to as \emph{sourcewise spanners}. Several constructions for this variant are provided (e.g., multiplicative sourcewise spanners, additive sourcewise spanners and more).
David Adjiashvili, Sandro Bosio, Robert Weismantel and Rico Zenklusen. Time-Expanded Packings
Abstract: We introduce a general model for time-expanded versions of packing problems with a variety of applications. Our notion of time-expanded packings, which we introduce in two natural variations, requires elements to be part of the solution for several consecutive time steps. Although the time-expanded counterparts of most combinatorial optimization problems become computationally hard to solve, we present strong approximation algorithms for general independence systems and matroids, respectively, depending on the considered variant. More precisely, for both notions of time-expanded packings that we introduce, the approximation guarantees we obtain are at most a small constant factor worse than the best approximation algorithms for the underlying problem in its non-time-expanded version.
Abstract: We introduce a new structure for a set of points in the plane and an angle $\alpha$, which is similar in flavor to a bounded-degree MST. We name this structure $\alpha$-MST. Let $P$ be a set of points in the plane and let $0 < \alpha \le 2\pi$ be an angle. An $\alpha$-ST of $P$ is a spanning tree of the complete Euclidean graph induced by $P$, with the additional property that for each point $p \in P$, the smallest angle containing all the edges adjacent to $p$ is at most $\alpha$. An $\alpha$-MST of $P$ is then an $\alpha$-ST of $P$ of minimum weight. For $\alpha < \pi/3$, an $\alpha$-ST does not always exist, and, for $\alpha \ge \pi/3$, it always exists~\cite{AGP13,AHHHPSSV13,CKLR11}. In this paper, we study the problem of computing an $\alpha$-MST for several common values of $\alpha$.
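To illustrate the definition, the following Python sketch (ours, not from the paper; the representation of points and edges is an assumption) verifies the $\alpha$-ST condition: at each point, the smallest wedge containing all incident tree edges spans $2\pi$ minus the largest angular gap between consecutive edge directions.

    import math
    from collections import defaultdict

    def is_alpha_st(points, tree_edges, alpha):
        # points: dict mapping node -> (x, y); tree_edges: list of pairs (u, v)
        adj = defaultdict(list)
        for u, v in tree_edges:
            adj[u].append(v)
            adj[v].append(u)
        for p, nbrs in adj.items():
            dirs = sorted(math.atan2(points[q][1] - points[p][1],
                                     points[q][0] - points[p][0]) for q in nbrs)
            # angular gaps between cyclically consecutive edge directions
            gaps = [dirs[(k + 1) % len(dirs)] - dirs[k] for k in range(len(dirs))]
            gaps[-1] += 2 * math.pi   # wrap-around gap
            # smallest wedge containing all edges = 2*pi minus the largest gap
            if 2 * math.pi - max(gaps) > alpha + 1e-12:
                return False
        return True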
Motivated by wireless networks, we formulate the problem in terms of directional antennas. With each point $p \in P$, we associate a wedge $\wedge_p$ of angle $\alpha$ and apex $p$. The goal is to assign an orientation and a radius $r_p$ to each wedge $\wedge_p$, such that the resulting graph is connected and its MST is an $\alpha$-MST. (We draw an edge between $p$ and $q$ if $p \in \wedge_q$, $q \in \wedge_p$, and $|pq| \le r_p, r_q$.) Unsurprisingly, the problem of computing an $\alpha$-MST is NP-hard, at least for $\alpha=\pi$ and $\alpha=2\pi/3$. We present constant-factor approximation algorithms for $\alpha = \pi/2, 2\pi/3, \pi$.
One of our major results is a surprising theorem for $\alpha = 2\pi/3$, which, besides being interesting from a geometric point of view, has important applications. For example, the theorem guarantees that given {\em any} set $P$ of $3n$ points in the plane and {\em any} partitioning of the points into $n$ triplets, one can orient the wedges of each triplet {\em independently}, such that the graph induced by $P$ is connected. We apply the theorem to the {\em antenna conversion} problem.
Abstract: We study two-player (zero-sum) concurrent mean-payoff games played on a finite-state graph. We focus on the important sub-class of ergodic games where all states are visited infinitely often with probability 1. The algorithmic study of ergodic games was initiated in a seminal work of Hoffman and Karp in 1966, but all basic complexity questions have remained unresolved. Our main results for ergodic games are as follows: We establish (1) an optimal exponential bound on the patience of stationary strategies (where the patience of a distribution is the inverse of its smallest positive probability and represents a complexity measure of a stationary strategy); (2) that the approximation problem lies in FNP; (3) that the approximation problem is at least as hard as the decision problem for simple stochastic games (for which NP intersection coNP is the long-standing best known bound). We present a variant of the strategy-iteration algorithm by Hoffman and Karp; show that both our algorithm and the classical value-iteration algorithm can approximate the value in exponential time; and identify a subclass where the value-iteration algorithm is an FPTAS. We also show that the exact value can be expressed in the existential theory of the reals, and establish square-root sum hardness for a related class of games.
Alex Borello, Julien Cervelle and Pascal Vanier. Turing degrees of limit sets of cellular automata
Abstract: Cellular automata are discrete dynamical systems and a model of computation. The limit set of a cellular automaton consists of the configurations having an infinite sequence of preimages. It is well known that these always contain a computable point and that any non-trivial property on them is undecidable. We go one step further in this article by giving a full characterization of the sets of Turing degrees of limit sets of cellular automata: they are the same as the sets of Turing degrees of effectively closed sets containing a computable point.
Abstract: We present the first approximate distance oracle for sparse directed networks with time-dependent arc-travel-times determined by continuous, piecewise linear, positive functions possessing the FIFO property. Our approach precomputes, in subquadratic time and space, $(1+\eps)$-approximate distance summaries from selected vertices to all other vertices in the network, and provides two sublinear time query algorithms that deliver constant and $(1+\sigma)$-approximate shortest-travel-times, respectively, for arbitrary origin-destination pairs in the network. Our oracle is based only on the sparsity of the network, along with two quite natural assumptions about travel-time functions.
Abstract: We investigate online algorithms for maximum (weight) independent set on graph classes with bounded inductive independence number $\rho$ like interval and disk graphs with applications to, e.g., task scheduling, spectrum allocation and admission control. In the online setting, nodes of an unknown graph arrive one by one over time. An online algorithm has to decide whether an arriving node should be included into the independent set.
Traditional (worst-case) competitive analysis yields only devastating results. Hence, we conduct a stochastic analysis of the problem and introduce a generic sampling approach that allows us to devise online algorithms for a variety of input models. It bridges models of quite different nature -- it covers the secretary model, in which an adversarial graph is presented in random order, and the prophet-inequality model, in which a randomly generated graph is presented in adversarial order.
Our first result is an online algorithm for maximum independent set with a competitive ratio of $O(\rho^2)$ in all considered models. It can be extended to maximum-weight independent set by losing only a factor of $O(\log n)$, with $n$ denoting the (expected) number of nodes. This upper bound is complemented by a lower bound of $\Omega(\log n/\log^2 \log n)$ showing that our sampling approach achieves nearly the optimal competitive ratio in all considered models. In addition, we present various extensions, e.g., towards admission control in wireless networks under SINR constraints.
Uriel Feige and Shlomo Jozeph. Demand Queries with Preprocessing
Abstract: Given a set of items and a submodular set-function $f$ that determines the value of every subset of items, a demand query assigns prices to the items, and the desired answer is a set $S$ of items that maximizes the profit, namely, the value of $S$ minus its price. The use of demand queries is well motivated in the context of combinatorial auctions. However, answering a demand query (even approximately) is NP-hard. We consider the question of whether exponential time preprocessing of $f$ prior to receiving the demand query can help in later answering demand queries in polynomial time. We design a preprocessing algorithm that leads to approximation ratios that are NP-hard to achieve without preprocessing. We also prove that there are limitations to the approximation ratios achievable after preprocessing, unless NP $\subset$ P/poly.
Abstract: Holographic algorithms were first introduced by Valiant as a new methodology to derive polynomial time algorithms. The algorithms introduced by Valiant are based on matchgates, which are intrinsically for problems over planar structures. In this paper we introduce two new families of holographic algorithms. These algorithms work over general, i.e., not necessarily planar, graphs. Instead of matchgates, the two underlying families of constraint functions are of the affine type and of the product type. These play the role of Kasteleyn's algorithm for counting planar perfect matchings. The new algorithms are obtained by transforming a problem to one of these two families by holographic reductions.
The tractability of affine and product-type constraint functions is known. The real challenge is to determine when some concrete problem, expressed by its constraint functions, has such a holographic reduction. We present a polynomial time algorithm to decide if a given counting problem has a holographic algorithm using the affine or product-type constraint functions. Our algorithm also finds a holographic transformation when one exists. We exhibit concrete problems that can be solved by the new holographic algorithms. When the constraint functions are symmetric, we further present a polynomial time algorithm for the same decision and search problems, where the complexity is measured in terms of the (exponentially more) succinct presentation of symmetric constraint functions. The algorithm for the symmetric case also shows that the recent dichotomy theorem for Holant problems with symmetric constraints is efficiently decidable. Our proof techniques are mainly algebraic, e.g., stabilizers and orbits of group actions.
Abstract: One of the first results in planar graph drawing was that every planar graph has a planar straight-line drawing. One of the first algorithmic results in graph drawing was how to find such a drawing with vertices at grid points with polynomial coordinates. But not until 2007 was it proved that finding such a grid-drawing with optimal area is NP-hard, and the result was only for disconnected graphs.
In this paper, we show that for graphs with bounded treewidth, we can find area-optimal planar straight-line drawings in one of the following two scenarios: (1) when faces have bounded degree and the planar embedding is fixed, or (2) when we want all faces to be drawn convex. We also give NP-hardness results to show that none of these restrictions can be dropped. In particular, finding area-minimal drawings is NP-hard for triangulated planar graphs minus one edge.
Barnaby Martin, Petar Markovic and Petar Dapic. QCSP on semicomplete digraphs
Abstract: We study the (non-uniform) quantified constraint satisfaction problem QCSP(H) as H ranges over semicomplete digraphs. We obtain a complexity-theoretic trichotomy: QCSP(H) is either in P, is NP-hard or is Pspace-complete. The largest part of our work is the algebraic classification of precisely which semicomplete digraphs enjoy only essentially unary polymorphisms, which is combinatorially interesting in its own right.
Wiebke Höhn, Julian Mestre and Andreas Wiese. How Unsplittable-Flow-Covering helps Scheduling with Job-Dependent Cost Functions
Abstract: Generalizing many well-known and natural scheduling problems, scheduling with job-specific cost functions has gained a lot of attention recently. In this setting, each job incurs a cost depending on its completion time, given by a private cost function, and one seeks to schedule the jobs to minimize the total sum of these costs. The framework captures many important scheduling objectives such as weighted flow time or weighted tardiness. Still, the general case as well as the mentioned special cases are far from being well understood, even on a single machine. Aiming for a better general understanding of this problem, in this paper we focus on the case of uniform job release dates on one machine, for which the state of the art is a 4-approximation algorithm. This is true even for a special case that is equivalent to the covering version of the well-studied and prominent unsplittable flow on a path problem, which is interesting in its own right. For that covering problem, we present a quasi-polynomial time $(1+\epsilon)$-approximation algorithm that yields an $(e+\epsilon)$-approximation for the above scheduling problem. Moreover, for the latter we devise the best possible resource augmentation result regarding speed: a polynomial time algorithm which computes a solution with \emph{optimal} cost at $1+\epsilon$ speedup. Finally, we present an elegant QPTAS for the special case where the cost functions of the jobs fall into at most $\log n$ many classes. This algorithm even allows the jobs to have up to $\log n$ many distinct release dates.
Abstract: We consider the decidability and complexity of the Ultimate Positivity Problem, which asks whether all but finitely many terms of a given rational linear recurrence sequence (LRS) are positive. Using lower bounds in Diophantine approximation concerning sums of $S$-units, we show that for simple LRS (those whose characteristic polynomial has no repeated roots) the Ultimate Positivity Problem is decidable in polynomial space. If we restrict to simple LRS of a fixed order then we obtain a polynomial-time decision procedure. As a complexity lower bound we show that Ultimate Positivity for simple LRS is at least as hard as the decision problem for the universal theory of the reals: a problem that is known to lie between coNP and PSPACE.
Abstract: What can be computed in an anonymous network, where nodes are not equipped with unique identifiers? It turns out that the answer to this question depends on the commitment of the nodes to their first computed output value: Two classes of problems solvable in anonymous networks are defined, where in the first class nodes are allowed to revoke their outputs and in the second class they are not. These two classes are then related to the class of all centrally solvable network problems, observing that the three classes form a strict linear hierarchy, and for 21 classic and/or characteristic problems in distributed computing, we determine the exact class to which they belong.
Does this hierarchy exhibit complete problems? We answer this question in the affirmative by introducing the concept of a distributed oracle, thus establishing a more fine-grained classification for distributed computability, which we apply to our 21 problems. Among our findings is the observation that the three classes are characterized by the three pillars of distributed computing, namely, local symmetry breaking, coordination, and leader election.
Abstract: Canonical orderings [STOC'88, FOCS'92] have been used as a key tool in graph drawing, graph encoding and visibility representations over the last decades. We study a far-reaching generalization of canonical orderings to non-planar graphs that was published by Lee Mondshein in a PhD thesis at M.I.T.\ as early as 1971.
Mondshein proposed to order the vertices of a graph in a sequence such that, for any $i$, the vertices from $1$ to $i$ induce essentially a $2$-connected graph while the remaining vertices from $i+1$ to $n$ induce a connected graph. Mondshein's sequence generalizes canonical orderings and later became known, independently, under the name \emph{non-separating ear decomposition}. Currently, the best known algorithm for computing this sequence achieves a running time of $O(nm)$; the main open problem in Mondshein's and follow-up work is to improve this running time to subquadratic.
In this paper, we present the first algorithm that computes a Mondshein sequence in time and space $O(m)$, improving the previous best running time by a factor of $n$. In addition, we illustrate the impact of this result by deducing linear-time algorithms for several other problems, for which the previous best running times have been quadratic.
In particular, we show how to compute three independent spanning trees in a $3$-connected graph in linear time, improving a result of Cheriyan and Maheshwari [J. Algorithms 9(4)]. Secondly, we improve the preprocessing time for the output-sensitive data structure by Di Battista, Tamassia and Vismara [Algorithmica 23(4)] that reports three internally disjoint paths between any given vertex pair from $O(n^2)$ to $O(m)$. Finally, we show how a very simple linear-time planarity test can be derived once a Mondshein sequence is computed.
Subhash Khot, Madhur Tulsiani and Pratik Worah. The Complexity of Somewhat Approximation Resistant Predicates
Abstract: A boolean predicate $f:\{0,1\}^k\to\{0,1\}$ is said to be {\em somewhat approximation resistant} if for some constant $\tau > \frac{|f^{-1}(1)|}{2^k}$, given a $\tau$-satisfiable instance of the $\maxkcsp(f)$ problem, it is NP-hard to find an assignment that {\it strictly beats} the naive algorithm that outputs a uniformly random assignment. Let $\tau(f)$ denote the supremum over all $\tau$ for which this holds. It is known that a predicate is somewhat approximation resistant precisely when its Fourier degree is at least $3$. For such predicates, we give a characterization of the {\it hardness gap} $(\tau(f) - \frac{|f^{-1}(1)|}{2^k})$ up to a factor of $O(k^5)$. We show that the hardness gap is determined by two factors:
\begin{itemize}
\item The nearest Hamming distance of $f$ to a function $g$ of Fourier degree at most $2$, which is related to the Fourier mass of $f$ on coefficients of degree $3$ or higher.
\item Whether $f$ is monotonically below $g$.
\end{itemize}
When the Hamming distance is small and $f$ is monotonically below $g$, we give an SDP-based approximation algorithm; otherwise, we give hardness results. We also give a similar characterization of the {\it integrality gap} for the natural SDP relaxation of $\maxkcsp(f)$ after $\Omega(n)$ rounds of the Lasserre hierarchy.
Richard Cole and Howard Karloff. Fast Algorithms for Constructing Maximum Entropy Summary Trees
Abstract: Karloff and Shirley recently proposed "summary trees" as a new way to visualize large rooted trees (EuroVis 2013) and gave algorithms for generating a maximum-entropy k-node summary tree of an input n-node rooted tree. However, the algorithm generating optimal summary trees was only pseudo-polynomial (and worked only for integral weights); the authors left open the existence of a polynomial-time algorithm. In addition, the authors provided an additive approximation algorithm and a greedy heuristic, both working on real weights.
This paper shows how to construct maximum entropy k-node summary trees in time O(k^2 n+n log n) for real weights (indeed, as small as the time bound for the greedy heuristic given previously); how to speed up the approximation algorithm so that it runs in time O(n + (k^4/eps) log(k/eps)), and how to speed up the greedy algorithm so as to run in time O(kn + n log n). Altogether, these results make summary trees a much more practical tool than before.
Konstantin Makarychev and Yury Makarychev. Nonuniform Graph Partitioning with Unrelated Weights
Abstract: We give a bi-criteria approximation algorithm for the Minimum Nonuniform Partitioning problem, recently introduced by Krauthgamer, Naor, Schwartz and Talwar (2014). In this problem, we are given a graph G = (V,E) on n vertices and k numbers \rho_1,...,\rho_k. The goal is to partition the graph into k disjoint sets P_1,...,P_k satisfying |P_i| < \rho_i, so as to minimize the number of edges cut by the partition. Our algorithm has an approximation ratio of O(\sqrt{log n log k}) for general graphs, and an O(1) approximation for graphs with excluded minors. This is an improvement upon the O(log n) algorithm of Krauthgamer, Naor, Schwartz and Talwar (2014). Our approximation ratio matches the best known ratio for the Minimum (Uniform) k-Partitioning problem.
We extend our results to the case of "unrelated weights" and to the case of "unrelated d-dimensional weights". In the former case, different vertices may have different weights and the weight of a vertex may depend on the set P_i the vertex is assigned to. In the latter case, each vertex u has a d-dimensional weight r(u,i)=(r_1(u,i),...,r_d(u,i)) if u is assigned to P_i. Each set P_i has a d-dimensional capacity c(i)=(c_1(i),...,c_d(i)). The goal is to find a partition such that \sum_{u\in P_i} r(u,i) < c(i) coordinate-wise.
Radha Jagadeesan and James Riely. Between Linearizability and Quiescent Consistency: Quantitative Quiescent Consistency
Abstract: Linearizability is the de facto correctness criterion for concurrent data structures. Unfortunately, linearizability imposes a performance penalty which scales linearly in the number of contending threads. Quiescent consistency is an alternative criterion which guarantees that a concurrent data structure behaves correctly when accessed sequentially. Yet quiescent consistency says very little about executions that have any contention.
We define quantitative quiescent consistency (QQC), a relaxation of linearizability where the degree of relaxation is proportional to the degree of contention. When quiescent, no relaxation is allowed, and therefore QQC refines quiescent consistency, unlike other proposed relaxations of linearizability. We show that high performance counters and stacks designed to satisfy quiescent consistency continue to satisfy QQC. The precise assumptions under which QQC holds provide fresh insight on these structures. To demonstrate the robustness of QQC, we provide three natural characterizations and prove compositionality.
Abstract: We extend the line of research initiated by Fortnow and Klivans \cite{FortnowKlivans09} that studies the relationship between efficient learning algorithms and circuit lower bounds. In \cite{FortnowKlivans09}, it was shown that if a Boolean circuit class $\M$ has an efficient \emph{deterministic} exact learning algorithm (i.e. an algorithm that uses membership and equivalence queries), then $\EXP^\NP \not \subseteq \ppoly[\M]$ \footnote{$\ppoly[\M]$ stands for the class of all languages that can be computed by polynomial-size circuits from the class $\M$.}. Recently, in \cite{KKO13} $\EXP^\NP$ was replaced by $\DTIME(n^{\omega(1)})$. Yet for the models of \emph{randomized} exact learning or Valiant's $\PAC$ learning, the best result so far is a lower bound against $\BPEXP$ (the exponential-time analogue of $\BPP$). In this paper, we derive stronger lower bounds as well as some other consequences from \emph{randomized} exact learning and $\PAC$ learning algorithms, answering an open question posed in \cite{FortnowKlivans09} and \cite{KKO13}. In particular, we show that
\begin{enumerate} \item If a Boolean circuit class $\M$ has an efficient \emph{randomized} exact learning algorithm or an efficient $\PAC$ learning algorithm \footnote{In fact, our results hold even for a more general model of $\PAC$ learning with membership queries.} then \sloppy $\BPTIME(n^{\omega(1)})/1 \not \subseteq \ppoly[\M]$.
\item If a Boolean circuit class $\M$ has an efficient \emph{randomized} exact learning algorithm then no strong pseudo-random generators exist in $\ppoly[\M]$. \end{enumerate}
We note that in both cases the learning algorithms need not be \emph{proper} \footnote{A learning algorithm is proper if it outputs a hypothesis from the class it learns.}. The extra bit of advice is needed to maintain the promise that the probabilities of acceptance and rejection are bounded away from each other. The exact same problem arises when trying to prove lower bounds for $\BPTIME$ or $\MA$ \cite{Barak02,FS04,vMP07,Santhanam09}. It has been an open problem to remove this bit. We suggest an approach to settle this problem in our case. Finally, we slightly improve the result of \cite{BFT98} by showing a subclass of $\MAEXP$ that requires super-polynomial circuits. Our results combine and extend some of the techniques previously used in \cite{FortnowKlivans09,KKO13} and \cite{Santhanam09}.
Abstract: In this paper, we give the first online algorithms with a poly-logarithmic competitive ratio for the node-weighted prize-collecting Steiner tree and Steiner forest problems. The competitive ratios are optimal up to logarithmic factors. In fact, we give a generic technique for reducing online prize-collecting Steiner problems to the fractional version of their non-prize-collecting counterparts losing only a logarithmic factor in the competitive ratio. This reduction is agnostic to the cost model (edge-weighted or node-weighted) of the input graph and applies to a wide class of network design problems including Steiner tree, Steiner forest, group Steiner tree, and group Steiner forest. Consequently, we also give the first online algorithms for the edge-weighted prize-collecting group Steiner tree and group Steiner forest problems with a poly-logarithmic competitive ratio, since corresponding fractional guarantees for the non-prize-collecting variants of these problems were previously known. Furthermore, we show that it is impossible to generalize these results to guarantee online Lagrangean multiplier preserving (LMP) competitiveness.
For the most fundamental problem in this class, namely the prize-collecting Steiner tree problem, we further improve our results. For the node-weighted prize-collecting Steiner tree problem, we use the generic reduction but improve the best known online Steiner tree result from Naor et al. (FOCS 2011) on two counts. We improve the competitive ratio by a logarithmic factor to make it optimal (up to constants), and also give a new dual-fitting analysis showing that the competitive ratio holds against the fractional optimum. This stronger fractional guarantee is critical in applying the algorithm in the reduction, whereas the logarithmic improvement implies that the algorithm is optimal up to a single logarithmic factor for the prize-collecting Steiner tree problem. While improving the competitive ratio for the integral guarantee is relatively easy, ensuring it for the fractional objective turns out to be much more challenging. To this end, we introduce a new technique that we call dual averaging. We hope that this new analytical tool will be useful for other dual-fitting analyses as well.
In the edge-weighted prize-collecting Steiner tree problem, we match the optimal (up to constants) competitive ratio of O(log n) that was previously achieved by Qian and Williamson (ICALP 2011) but provide a substantially simpler analysis. Our analysis for this problem uses a novel dual-fitting approach and does not rely on the generic reduction framework mentioned above.
Yuval Ishai and Hoeteck Wee. Partial Garbling Schemes and Their Applications
Abstract: Garbling schemes (aka randomized encodings of functions) represent a function $F$ by a “simpler” randomized function $\hat{F}$ such that $\hat{F}(x)$ reveals $F(x)$ and no additional information about $x$. Garbling schemes have found applications in many areas of cryptography. Motivated by the goal of improving the efficiency of garbling schemes, we make the following contributions:
– We suggest a general new notion of *partial garbling* which unifies several previous notions from the literature, including standard garbling schemes, secret sharing schemes, and “conditional disclosure of secrets”. This notion considers garbling schemes in which part of the input is public, in the sense that it can be leaked by $\hat{F}$.
– We present constructions of partial garbling schemes for (boolean and arithmetic) formulas and branching programs which take advantage of the public input to gain better efficiency.
– We demonstrate the usefulness of our new notion by presenting applications to efficient attribute-based encryption, delegation, and secure computation. In each of these applications, we obtain either new schemes for larger classes of functions or efficiency improvements from quadratic to linear. In particular, we obtain the first ABE scheme in bilinear groups for arithmetic formulas, as well as more efficient delegation schemes for boolean and arithmetic branching programs.
Abstract: Fibonacci gate problems have served as computation primitives to solve other problems by holographic algorithms and play an important role in the dichotomy of exact counting for the Holant and CSP frameworks. We generalize them to weighted cases and allow each vertex function to have different parameters, which gives a much broader family that is #P-hard for exact counting. We design a fully polynomial-time approximation scheme (FPTAS) for this generalization by the correlation decay technique. This is the first deterministic FPTAS for approximate counting in the general Holant framework without a degree bound. We also formally introduce holographic reduction in the study of approximate counting, and these weighted Fibonacci gate problems serve as computation primitives for approximate counting. Under holographic reduction, we obtain FPTAS for other Holant problems and spin problems. One important application is developing an FPTAS for a large range of ferromagnetic two-state spin systems. This is the first deterministic FPTAS in the ferromagnetic range for two-state spin systems without a degree bound. Besides these algorithms, we also develop several new tools and techniques to establish the correlation decay property, which are applicable to other problems.
Abstract: The Depth First Search (DFS) tree is a fundamental data structure used in solving various algorithmic problems on graphs. However, very few results are known for maintaining a DFS tree in a dynamic environment, i.e., under insertions or deletions of edges. The only non-trivial result for this problem is by Franciosa et al. [ESA 1994]: they showed that, for a directed acyclic graph on n vertices, a DFS tree can be maintained in O(n) amortized time per edge insertion. They stated it as an open problem to maintain a DFS tree dynamically in an undirected graph or a general directed graph. We present the first algorithm for maintaining a DFS tree of an undirected graph under insertion of edges. For processing any arbitrary online sequence of edge insertions, this algorithm takes O(n^2) total time.
Gillat Kol, Shay Moran, Amir Shpilka and Amir Yehudayoff. Approximate Nonnegative Rank is Equivalent to the Smooth Rectangle Bound
Abstract: We consider two known lower bounds on randomized communication complexity: the smooth rectangle bound and the logarithm of the approximate nonnegative rank. Our main result is that they are the same up to a multiplicative constant and a small additive term. The logarithm of the nonnegative rank is known to be a nearly tight lower bound on the deterministic communication complexity. Our result indicates that proving the analogue for the randomized case, namely that the logarithm of the approximate nonnegative rank is a nearly tight bound on randomized communication complexity, would imply tightness of the information cost bound. Another corollary of our result is the existence of a boolean function with a quasipolynomial gap between its approximate rank and approximate nonnegative rank.
György Dósa and Jiří Sgall. Optimal analysis of Best Fit bin packing
Abstract: In the early seventies it was shown that the {\em asymptotic} approximation ratio of BestFit bin packing is equal to $1.7$. We prove that the {\em absolute} approximation ratio for BestFit bin packing is also exactly $1.7$, improving the previous bound of $1.75$. This means that if the optimum needs $OPT$ bins, BestFit always uses at most $\lfloor 1.7 \cdot OPT \rfloor$ bins. Furthermore, we show matching lower bounds for all values of $OPT$, i.e., we give instances on which BestFit uses exactly $\lfloor 1.7 \cdot OPT \rfloor$ bins. Thus we completely settle the worst-case complexity of BestFit bin packing after more than 40 years of its study.
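To make the heuristic concrete, here is a minimal sketch of Best Fit (our illustration, not code from the paper): each item goes into the fullest bin that can still hold it, and a new bin is opened only when no bin fits. On the toy instance below the optimum uses 3 bins while Best Fit uses 4, consistent with the $\lfloor 1.7 \cdot OPT \rfloor$ bound.

```python
def best_fit(items):
    """Best Fit with unit-capacity bins; returns the number of bins used."""
    bins = []  # remaining capacity of each open bin
    for x in items:
        feasible = [i for i, cap in enumerate(bins) if cap >= x]
        if feasible:
            i = min(feasible, key=lambda b: bins[b])  # fullest feasible bin
            bins[i] -= x
        else:
            bins.append(1.0 - x)  # open a new bin
    return len(bins)

# Optimum: {0.4, 0.6}, {0.4, 0.6}, {0.5, 0.5} -> 3 bins; Best Fit uses 4.
print(best_fit([0.4, 0.4, 0.6, 0.6, 0.5, 0.5]))
```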
Ved Prakash and Hartmut Klauck. An Improved Interactive Streaming Algorithm for the Distinct Elements Problem
Abstract: The exact computation of the number of distinct elements (frequency moment F_0) is a fundamental problem in the study of data streaming algorithms. We denote the length of the stream by n where each symbol is drawn from a universe of size m. While it is well known that the moments F_0,F_1,F_2 can be approximated by efficient streaming algorithms, it is easy to see that exact computation of F_0 and F_2 requires space Omega(m). In previous work, Cormode et al. therefore considered a model where the data stream is also processed by a powerful helper, who provides an interactive proof of the result. They gave such protocols with a polylogarithmic number of rounds of communication between helper and verifier for all functions in NC. This number of rounds (O(log^2 m) in the case of F_0) can quickly make such protocols impractical.
Cormode et al. also gave a protocol with log m + 1 rounds for the exact computation of F_0, where the space complexity is O(log m log n + log^2 m) but the total communication is O(sqrt(n) log m (log n + log m)). They managed to give log m round protocols with polylog(m,n) complexity for many other interesting problems, including F_2, inner product, and range-sum, but computing F_0 exactly with polylogarithmic space and communication and O(log m) rounds remained open.
In this work, we give a streaming interactive protocol with log m rounds for exact computation of F_0 using O(log m (log n + (log m)(loglog m))) bits of space and the communication is O(log m (log n + (log^3 m)(loglog m)^2 )). The update time of the verifier per symbol received is O(log^2 m).
Abstract: This paper introduces a coalgebraic foundation for coinductive types, interpreted as sets of values and extended with set-theoretic union. We give a sound and complete characterization of semantic subtyping in terms of inclusion of maximal traces. Further, we provide a technique for reducing subtyping to inclusion between sets of finite traces, based on approximation. We obtain inclusion of tree languages as a sound and complete method to show semantic subtyping of recursive types with basic types, product and union, interpreted coinductively.
Abstract: We introduce a general approach for solving partition problems where the goal is to represent a given set as a union (either disjoint or not) of subsets satisfying certain properties. Many NP-hard problems can be naturally stated as such partition problems. We show that if one can find a large enough system of so-called families with infants for a given problem, then this problem can be solved faster than by a straightforward algorithm. We use this approach to improve known bounds for several NP-hard problems (the traveling salesman problem, the graph coloring problem, the problem of counting perfect matchings) on graphs of bounded average degree, as well as to simplify the proofs of several known results.
Abstract: We study the problem of determining the majority type in an arbitrary connected network, each vertex of which initially has one of two possible types (states). The vertices may have a few additional possible states and can interact in pairs only if they share an edge. Any (population) protocol is required to stabilize in the initial majority, i.e., its output function must interpret the local state of each vertex so that each vertex outputs the initial majority type. We first provide a protocol with 4 states per vertex that \emph{always} computes the initial majority value, under any fair scheduler. Under the probabilistic scheduler of pairwise interactions, we prove that our protocol stabilizes in expected polynomial time for any network and is quite fast on the clique. As we prove, this protocol is optimal, in the sense that there does not exist any population protocol that always computes majority with fewer than 4 states per vertex. However, this does not rule out the existence of a protocol with 3 states per vertex that is correct with high probability (whp). To this end, we examine an elegant and very natural majority protocol with 3 states per vertex, introduced in \cite{AAE08}, where its performance has been analyzed for the clique graph. In particular, it determines the correct initial majority type in the clique very fast and whp under the probabilistic scheduler. We study the performance of this protocol in arbitrary networks. We prove that, when the two initial states are put uniformly at random on the vertices, the protocol of \cite{AAE08} converges to the initial majority with probability higher than to the initial minority. In contrast, we present an infinite family of graphs on which the protocol of~\cite{AAE08} can fail, i.e., it can converge to the initial minority type whp, even when the difference between the initial majority and the initial minority is $n - \Theta(\ln{n})$. We also present another infinite family of graphs in which the protocol of~\cite{AAE08} takes an expected exponential time to converge. These two negative results build upon a very positive result concerning the robustness of the protocol of~\cite{AAE08} on the clique, namely that if the initial minority is at most $\frac{n}{7}$, the protocol fails with exponentially small probability. Surprisingly, the resistance of the clique to failure causes the failure in general graphs. Our techniques use new domination and coupling arguments for suitably defined processes whose dynamics capture the antagonism between the states involved.
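For concreteness, here is a small simulation of the 3-state protocol in the form it is usually presented (our reconstruction of the \cite{AAE08} rules, on the clique, i.e., with uniformly random interacting pairs): when opposite opinions meet, the responder is wiped to a blank state, and an opinionated initiator recruits a blank responder to its side.

```python
import random

def simulate(n_x, n_y, seed=0):
    """Run the 3-state protocol on a clique until all agents agree."""
    rng = random.Random(seed)
    pop = ['x'] * n_x + ['y'] * n_y            # no blank agents initially
    while len(set(pop)) > 1:                    # not yet in consensus
        i, j = rng.sample(range(len(pop)), 2)   # random initiator/responder
        a, b = pop[i], pop[j]
        if a != 'b' and b != 'b' and a != b:    # x meets y: erase responder
            pop[j] = 'b'
        elif a != 'b' and b == 'b':             # recruit a blank responder
            pop[j] = a
    return pop[0]

print(simulate(60, 40))  # with a clear initial majority, 'x' wins whp
```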
Abstract: Understanding the dynamics of evolving social or infrastructure networks is a challenge in applied areas such as epidemiology, viral marketing, or urban planning. During the past decade, data has been collected on such networks but has yet to be fully analyzed. We propose to use information on the dynamics of the data to find stable partitions of the network into groups. For that purpose, we introduce a time-dependent, dynamic version of the facility location problem that includes a switching cost when a client's assignment changes from one facility to another. This might provide a better representation of an evolving network, emphasizing the abrupt changes of relationships between subjects rather than the continuous evolution of the underlying network. We show that in realistic examples this model indeed yields better-fitting solutions than optimizing every snapshot independently. We present an $O(\log nT)$-approximation algorithm and a matching hardness result, where $n$ is the number of clients and $T$ the number of time steps. We also give another algorithm with approximation ratio $O(\log nT)$ for the variant where one pays at each time step (leasing) for each open facility.
Abstract: We introduce an algorithm which solves mean payoff games in polynomial time on average, assuming the distribution of the games satisfies a flip invariance property on the set of actions associated with every state. The algorithm is a tropical analogue of the shadow-vertex simplex algorithm, which solves mean payoff games via linear feasibility problems over the tropical semiring (R U {-oo}, max, +). The key ingredient in our approach is that the shadow-vertex pivoting rule can be transferred to tropical polyhedra, and that its computation reduces to network flow problems.
Abstract: Contexts are terms with one `hole', i.e., a place in which we can substitute an argument. In context unification we are given an equation over terms with variables representing contexts and ask about the satisfiability of this equation. Context unification is at the same time subsumed by second-order unification, which is undecidable, and subsumes satisfiability of word equations, which is in PSPACE. We show that context unification is in PSPACE, the same as word equations.
This result is obtained by extending the recompression technique, recently developed by the author and used in particular to obtain a new PSPACE algorithm for satisfiability of word equations, to context unification. Recompression is based on applying simple compression rules (replacing pairs of neighbouring function symbols), which are (conceptually) applied to the solution of the context equation, while the equation is modified so that such compression steps can be performed directly on the equation, without knowledge of the actual solution.
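As a toy illustration of pair compression on plain strings (heavily simplified: one pair per round and crude block handling, unlike the actual technique), consider:

```python
def recompress(s):
    """Repeatedly replace one pair of distinct adjacent letters by a fresh letter."""
    fresh = iter('123456789')  # tiny supply of fresh letters; toy-sized inputs only
    while len(s) > 1:
        pairs = {s[i:i + 2] for i in range(len(s) - 1) if s[i] != s[i + 1]}
        if not pairs:          # s is a block a^k: block compression, crudely
            s = s[0]
            continue
        s = s.replace(sorted(pairs)[0], next(fresh))
    return s

print(recompress('abababbaab'))  # the word keeps shrinking until one letter remains
```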
Abstract: We consider problems where $n$ people are communicating and a random subset of them is trying to leak information, without making it clear which people are leaking the information. We introduce a measure of suspicion, and show that the amount of leaked information will always be bounded by the expected increase in suspicion, and that this bound is tight. We ask the question: Suppose a large number of people have some information they want to leak, but they want to ensure that after the communication, an observer will assign probability $\leq c$ to the events that each of them is trying to leak the information. How much information can they reliably leak, per person who is leaking? We show that the answer is $\left(\frac{-\log(1-c)}{c}-\log(e)\right)$ bits.
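The closed-form bound is easy to evaluate; a minimal sketch (the function name is ours):

```python
import math

def leak_bound(c):
    """Bits reliably leaked per leaker when the observer's posterior is at most c."""
    return -math.log2(1 - c) / c - math.log2(math.e)

for c in (0.1, 0.5, 0.9):
    print(f"c = {c}: about {leak_bound(c):.3f} bits per leaker")
```

For instance, at $c = 1/2$ the bound evaluates to $2 - \log_2 e \approx 0.557$ bits.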
Abstract: By classical results of Abadi and Halpern, validity for probabilistic first-order logic of type~2 (ProbFO) is $\Pi^2_1$-complete and thus not recursively enumerable, and even small fragments of ProbFO are undecidable. In temporal first-order logic, which has similar computational properties, these problems have been addressed by imposing monodicity, that is, by allowing temporal operators to be applied only to formulas with at most one free variable. In this paper, we identify a monodic fragment of ProbFO and show that it enjoys favorable computational properties. Specifically, the valid sentences of monodic ProbFO are recursively enumerable and a slight variation of Halpern's axiom system for type-2 ProbFO on bounded domains is sound and complete for monodic ProbFO. Moreover, decidability can be obtained by restricting the FO part of monodic ProbFO to any decidable FO fragment. In some cases, which notably include the guarded fragment, our general constructions result in tight upper complexity bounds.
Abstract: Recent studies reveal that branching bisimilarity is decidable for both nBPP (normed Basic Parallel Processes) and nBPA (normed Basic Process Algebra). These results lead to the question of whether there are any other models in the hierarchy of PRS (Process Rewrite Systems) whose branching bisimilarity is decidable. It is shown in this paper that branching bisimilarity for both nOCN (normed One Counter Nets) and nPA (normed Process Algebra) is undecidable. These results essentially imply that the question has a negative answer.
Abstract: We revisit a classical problem in computational geometry that has been studied since the 1980s: in the \emph{rectangle enclosure} problem we want to report all $k$ enclosing pairs of $n$ input rectangles in 2D. We present the first deterministic algorithm that takes $O(n\log n + k)$ worst-case time and $O(n)$ space in the word-RAM model. This improves previous deterministic algorithms with $O((n\log n + k)\log \log n)$ running time. We achieve the result by derandomizing the algorithm of Chan, Larsen and P\v{a}tra\c{s}cu [SoCG'11] that attains the same time complexity but in expectation.
The 2D rectangle enclosure problem is related to the \emph{offline dominance range reporting} problem in 4D, and our result leads to the currently fastest deterministic algorithm for offline dominance reporting in any constant dimension $d\ge 4$.
A key tool behind Chan et al.'s previous randomized algorithm is \emph{shallow cuttings for 3D dominance ranges}. Recently, Afshani and Tsakalidis [SODA'14] obtained a deterministic $O(n\log n)$-time algorithm to construct such cuttings. We start with an improved deterministic construction algorithm that runs in $O(n\log\log n)$ time in the word-RAM; this result is of independent interest. Many additional ideas are then incorporated, including a linear-time algorithm for \emph{merging} shallow cuttings.
Abstract: An oblivious subspace embedding (OSE) for some $\varepsilon, \delta \in (0,1/3)$ and $d \le m \le n$ is a distribution $D$ over $\mathbb{R}^{m \times n}$ such that for any linear subspace $W$ of $\mathbb{R}^n$ of dimension $d$,
$$ \Pr_{\Pi \sim D}\left(\forall x \in W,\ (1-\varepsilon) \|x\|_2 \le \|\Pi x\|_2 \le (1+\varepsilon)\|x\|_2\right) \ge 1 - \delta. $$
We prove that any OSE with $\delta < 1/3$ must have $m = \Omega((d + \log(1/\delta))/\varepsilon^2)$, which is optimal. Furthermore, if every $\Pi$ in the support of $D$ is sparse, having at most $s$ non-zero entries per column, then we show tradeoff lower bounds between $m$ and $s$.
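As a numerical illustration of the definition (our sketch; the parameters are arbitrary and the matrix is a standard CountSketch-style sparse embedding with $s = 1$, not a construction from the paper), one can check that norms of vectors from a random $d$-dimensional subspace are roughly preserved:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 10, 400

rows = rng.integers(0, m, size=n)             # one nonzero row per column
signs = rng.choice([-1.0, 1.0], size=n)       # random sign per column
Pi = np.zeros((m, n))
Pi[rows, np.arange(n)] = signs

U, _ = np.linalg.qr(rng.standard_normal((n, d)))  # orthonormal basis of W
for _ in range(3):
    x = U @ rng.standard_normal(d)            # a random vector in W
    print(np.linalg.norm(Pi @ x) / np.linalg.norm(x))  # should be close to 1
```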
Abstract: We construct a PCP for NTIME(2^n) with constant soundness, 2^n poly(n) proof length, and poly(n) queries, where the verifier's computation is simple: the queries are a projection of the input randomness, and the computation on the prover's answers is a 3CNF. The previous upper bound for these two computations was polynomial-size circuits. Composing this verifier with a proof oracle increases the circuit-depth of the latter by 2. Our PCP is a simple variant of the PCP by Ben-Sasson, Goldreich, Harsha, Sudan, and Vadhan (CCC 2005). We also give a more modular exposition of the latter, separating the combinatorial from the algebraic arguments.
If our PCP is taken as a black box, we obtain a more direct proof of the result by Williams, later with Santhanam (CCC 2013), that derandomizing circuits on n bits from a class C in time 2^n/n^{\omega(1)} yields that NEXP is not in a related circuit class C'. Our proof yields a tighter connection: C is an And-Or of circuits from C'. Along the way we show that the same lower bound follows if the satisfiability of the And of any 3 circuits from C' can be solved in time 2^n/n^{\omega(1)}.
Egor Derevenetc and Roland Meyer. Robustness against Power is PSpace-complete
Abstract: Power is a RISC architecture developed by IBM, Freescale, and several other companies and implemented in a series of POWER processors. The architecture features a relaxed memory model providing very weak guarantees with respect to the ordering and atomicity of memory accesses.
Due to these weaknesses, some programs that are correct under sequential consistency (SC) show undesirable effects when run under Power. We call these programs non-robust against the Power memory model. Formally, a program is robust if every computation under Power has the same data and control dependencies as some SC computation.
Our contribution is a decision procedure for robustness of concurrent programs against the Power memory model. It is based on three ideas. First, we reformulate robustness in terms of acyclicity of a happens-before relation. Second, we prove that among computations with cyclic happens-before relation there is one in a certain normal form. Finally, we reduce the existence of such a normal-form computation to a language emptiness problem. Altogether, this yields a PSPACE algorithm for checking robustness against Power. We complement it by a matching lower bound to show PSPACE-completeness.
Abstract: Weighted timed automata (WTA) model quantitative aspects of real-time systems like continuous consumption of memory, power or financial resources. They accept quantitative timed languages where every timed word is mapped to a value, e.g., a real number. In this paper, we prove a Nivat theorem for WTA which states that recognizable quantitative timed languages are exactly those which can be obtained from recognizable boolean timed languages with the help of several simple operations. We also introduce a weighted extension of relative distance logic developed by Wilke, and we show that our weighted relative distance logic and WTA are equally expressive. The proof of this result can be derived from our Nivat theorem and Wilke's theorem for relative distance logic. Since the proof of our Nivat theorem is constructive, the translation process from logic to automata and vice versa is also constructive. This leads to decidability results for weighted relative distance logic.
Sayan Bhattacharya, Vahab Mirrokni and Janardhan Kulkarni. Coordination Mechanisms for (Selfish) Routing over Time on a Tree
Abstract: While selfish routing has been studied extensively, the problem of designing better {\em coordination mechanisms} for routing over time has remained open, and only very special networks have been studied. In this paper, we focus on tree networks (single source, multiple destinations) with the goal of minimizing the (weighted) average {\em sojourn time} of jobs, and we provide the first coordination mechanism with a provable price of anarchy for this problem. Interestingly, we achieve our price of anarchy results using simple and strongly local policies such as Shortest Job First and Smith's Rule (also called HDF). In particular, for the case of unweighted jobs, we design a coordination mechanism with polylogarithmic price of anarchy. For weighted jobs, on the other hand, we show that the price of anarchy is a function of the depth of the tree, and we accompany this result by a lower bound on the price of anarchy for Smith's Rule and other common strongly local scheduling policies.
Our price of anarchy results also imply improved approximation algorithms for the underlying optimization problem of routing over a tree. This problem is well motivated by applications of routing in data center networks, where average sojourn time is an important metric.
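As a single-machine illustration of Smith's Rule (our sketch, stripped of the routing and tree structure of the paper): serve jobs in nonincreasing order of weight over processing time, which minimizes total weighted completion time on one machine.

```python
def smith_rule(jobs):
    """jobs: list of (weight, processing_time); returns order and weighted sum."""
    order = sorted(jobs, key=lambda wp: wp[0] / wp[1], reverse=True)
    t, total = 0, 0
    for w, p in order:
        t += p             # completion time of this job
        total += w * t     # its weighted contribution
    return order, total

print(smith_rule([(1, 4), (3, 2), (2, 1)]))  # highest-ratio job (2, 1) goes first
```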
Abstract: Jumbled indexing is the problem of indexing a text $T$ for queries that ask whether there is a substring of $T$ matching a pattern represented as a Parikh vector, i.e., the vector of frequency counts for each character. Jumbled indexing has garnered a lot of interest in the last four years. There is a naive algorithm that preprocesses all answers in $O(n^2|\Sigma|)$ time, allowing quick queries afterwards, and there is another naive algorithm that requires no preprocessing but has $O(n\log|\Sigma|)$ query time. Despite a tremendous amount of effort there has been little improvement over these running times.
In this paper we show that there is good reason for this. We show that, under the 3SUM-Hardness assumption, for alphabets of size $\Omega({\log n / \log\log n})$ one cannot solve jumbled indexing with $O(n^{2-\epsilon})$ preprocessing time and $O(n^{1-\delta})$ query time for any $\epsilon>0$ and $\delta>0$. We further show that for any constant-sized alphabet of size $r>2$ there exist describable fixed constants $\epsilon_r$ and $\delta_r$ such that jumbled indexing over an $r$-sized alphabet requires $\tilde{\Omega}(n^{2-\epsilon_r})$ preprocessing time or $\tilde{\Omega}(n^{1-\delta_r})$ query time, again under the 3SUM-Hardness assumption.
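For reference, the two naive baselines from the first paragraph are easy to sketch (our code; the text `T` and the query are illustrative):

```python
from collections import Counter

def preprocess_all(T):
    """O(n^2 |Sigma|)-style table of Parikh vectors of all substrings of T."""
    table = set()
    for i in range(len(T)):
        c = Counter()
        for j in range(i, len(T)):
            c[T[j]] += 1
            table.add(frozenset(c.items()))
    return table  # afterwards each query is a single lookup

def query_no_preprocessing(T, parikh):
    """Sliding window over T: linear-time scan per query, no preprocessing."""
    k = sum(parikh.values())
    window = Counter(T[:k])
    if window == parikh:
        return True
    for i in range(k, len(T)):
        window[T[i]] += 1
        window[T[i - k]] -= 1
        if window[T[i - k]] == 0:
            del window[T[i - k]]
        if window == parikh:
            return True
    return False

T = "abaacab"
print(frozenset(Counter('aba').items()) in preprocess_all(T))   # True
print(query_no_preprocessing(T, Counter({'a': 2, 'b': 1})))     # True ("aba")
```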
Abstract: In settings where players have limited access to liquidity, represented in the form of budget constraints, efficiency maximization has proven to be a challenging goal. In particular, the social welfare cannot be approximated by a factor better than the number of players. Therefore, the literature has mainly resorted to Pareto-efficiency as a way to achieve efficiency in such settings. While successful in some important scenarios, in many settings it is known that either exactly one truthful auction always outputs a Pareto-efficient solution, or no truthful mechanism always outputs a Pareto-efficient outcome. Moreover, since Pareto-efficiency is a binary property (it is either satisfied or not), it cannot be circumvented as usual by considering approximations. To overcome impossibilities in important settings such as multi-unit auctions with decreasing marginal values and private budgets, we propose a new notion of efficiency, which we call liquid welfare. This is the maximum amount of revenue an omniscient seller would be able to extract from a given instance. For the aforementioned setting, we give a deterministic O(log n)-approximation for the liquid welfare. We also study the liquid welfare in the traditional setting of additive values and public budgets. We present two different auctions that achieve a 2-approximation to the new objective. Moreover, we show that no truthful algorithm can guarantee an approximation factor better than 4/3 with respect to the liquid welfare.
Abstract: This paper is about MSO+U, an extension of monadic second-order logic, which has a quantifier that can express that a property of sets is true for arbitrarily large sets. We conjecture that the MSO+U theory of the complete binary tree is undecidable. We prove a weaker statement: there is no algorithm which decides this theory and has a correctness proof in ZFC. This is because the theory is undecidable under a set-theoretic assumption consistent with ZFC, namely that there exists a projective well-ordering of 2^\omega of type \omega_1. We use Shelah's undecidability proof of the MSO theory of the real numbers.
Andrea Cerone, Alexey Gotsman and Hongseok Yang. Parameterised Linearisability
Abstract: Many concurrent libraries are parameterised, meaning that they implement generic algorithms that take another library as a parameter. In such cases, the standard way of stating the correctness of concurrent libraries via linearisability is inapplicable. We generalise linearisability to parameterised libraries and investigate subtle trade-offs between the assumptions that such libraries can make about their environment and the conditions that linearisability has to impose on different types of interactions with it. We prove that the resulting parameterised linearisability is closed under instantiating parameter libraries and composing several non-interacting libraries, and furthermore implies observational refinement. These results allow modularising the reasoning about concurrent programs using parameterised libraries and confirm the appropriateness of the proposed definitions. We illustrate the applicability of our results by proving the correctness of a parameterised library implementing flat combining.
Mikolaj Bojanczyk. Weak MSO+U with Path Quantifiers over Infinite Trees
Abstract: This paper shows that over infinite trees, satisfiability is decidable for weak monadic second-order logic extended by the unbounding quantifier U and quantification over infinite paths. The proof is by reduction to emptiness for a certain automaton model, while emptiness for the automaton model is decided using profinite trees.
Stefan Kiefer and Björn Wachter. Stability and Complexity of Minimising Probabilistic Automata
Abstract: We consider the state-minimisation problem for weighted and probabilistic automata. We provide a numerically stable polynomial-time minimisation algorithm for weighted automata, with guaranteed bounds on the numerical error when run with floating-point arithmetic. Our algorithm can also be used for ``lossy'' minimisation with bounded error. We show an application in image compression. In the second part of the paper we study the complexity of the minimisation problem for probabilistic automata. We prove that the problem is NP-hard and in PSPACE, improving a recent EXPTIME-result.
Abstract: The internal diffusion limited aggregation (IDLA) process places n particles on the two-dimensional integer grid. The first particle is placed on the origin; every subsequent particle starts at the origin and performs an unbiased random walk until it reaches an unoccupied position. In this work we study the computational complexity of determining the subset that is generated after n particles have been placed. We develop the first algorithm that provably outperforms the naive step-by-step simulation of all particles. In particular, our algorithm has a running time of O(n (log n)^2) and a sublinear space requirement of O(\sqrt{n} log n), both in expectation and with high probability. In contrast to some speedups proposed for similar models in the physics community, our algorithm samples from the exact distribution. To simulate a single particle fast, we have to develop techniques for combining multiple steps of a random walk into large jumps that avoid hitting a forbidden set of grid points. These techniques might be of independent interest for speeding up other problems based on random walks.
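For contrast, the naive step-by-step simulation that the paper provably outperforms fits in a few lines (our sketch):

```python
import random

def idla(n, seed=0):
    """Naive IDLA: each particle walks from the origin until it leaves the cluster."""
    rng = random.Random(seed)
    occupied = set()
    for _ in range(n):
        x, y = 0, 0
        while (x, y) in occupied:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        occupied.add((x, y))
    return occupied

cluster = idla(200)
print(len(cluster), max(abs(x) + abs(y) for x, y in cluster))  # 200 sites, small radius
```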
Abstract: Inspired by social networks and complex systems, we propose a core-periphery network architecture that supports fast computation for many distributed algorithms and is robust and efficient in the number of links. Rather than providing a concrete network model, we take an axiom-based design approach. We provide three intuitive (and independent) algorithmic axioms and prove that any network that satisfies all the axioms enjoys an efficient algorithm for a range of tasks (e.g., MST and sparse matrix multiplication). We also show the minimality of our axiom set: for networks that satisfy any proper subset of the axioms, the same efficiency cannot be guaranteed for any deterministic algorithm.
Abstract: We investigate the quantifier alternation hierarchy in first-order logic on words. Levels in this hierarchy are defined by counting the number of quantifier alternations in formulas. We prove that one can decide membership of a regular language in the levels B-\Sigma_2 (boolean combinations of formulas having only 1 alternation) and \Sigma_3 (formulas having only 2 alternations and beginning with an existential block). Our proof works by considering a deeper problem, called separation, which, once solved for lower levels, allows us to solve membership for higher levels.
Samir Datta, William Hesse and Raghav Kulkarni. Dynamic Complexity of Directed Reachability and Other Problems
Abstract: We study some well-known graph problems, such as reachability and matching, in the context of dynamic complexity. In this model, edges are dynamically added and deleted, and we measure the complexity of each update/query. We propose polynomial-size data structures for such updates for several related problems. The updates are in very low-level complexity classes such as quantifier-free first-order formulas, $\mathsf{AC}^0, \mathsf{AC}^0[2], \mathsf{TC}^0.$ In particular, we show that the following problems are in the indicated classes: \hfill\begin{enumerate} \item[(a)] maximum matching in \DynTCz; \item[(b)] digraph reachability in \DynACzt; \item[(c)] embedded planar digraph reachability in \DynFO. \end{enumerate} Part (c) of our results yields the first non-trivial class of graphs where reachability can be maintained by first-order updates; it is a long-standing open question whether the same holds for general graphs.
We build on the techniques introduced by Hesse for maintaining reachability \cite{hesse} and those by Hesse, Allender, and Barrington \cite{HAB} for polynomial arithmetic. For (a) we show that the technique in \cite{hesse} can in fact be generalized, using \cite{MV} and \cite{HAB}, to maintain the determinant of a matrix in \DynTCz. For (b) we extend this technique with the help of two more ingredients, namely isolation (cf. ARZ) and truncated approximation using rational polynomials, to achieve the \DynACzt\ bound. In fact, our proof yields a \DynACzp\ bound for any prime $p$. For (c) we exploit the duality between cuts and cycles in planar graphs to maintain the number of crossings between carefully chosen primal and dual paths. In order to implement this approach we show several technical lemmas, which might be interesting on their own.
Michael Lampis. Parameterized Approximation Schemes using Graph Widths
Abstract: Combining the techniques of approximation algorithms and parameterized complexity has long been considered a promising research area, but relatively few results are currently known. In this paper we study the parameterized approximability of a number of problems which are known to be hard to solve exactly when parameterized by treewidth or clique-width. Our main contribution is to present a natural randomized rounding technique that extends well-known ideas and can be used for both of these widths. Applying this very generic technique we obtain approximation schemes for a number of problems, evading both polynomial-time inapproximability and parameterized intractability bounds.
Abstract: We study online algorithms for the Canadian Traveller Problem (CTP) introduced by Papadimitriou and Yannakakis in 1991. In this problem, a traveller knows the entire road network in advance, and wishes to travel as quickly as possible from a source vertex $s$ to a destination vertex $t$, but discovers online that some roads are blocked (e.g., by snow) once reaching them. It is PSPACE-complete to achieve a bounded competitive ratio for this problem. Furthermore, if at most $k$ roads can be blocked, then the optimal competitive ratio for a deterministic online algorithm is $2k+1$, while the only randomized result known is a lower bound of $k+1$.
In this paper, we show for the first time that a polynomial-time randomized algorithm can beat the best deterministic algorithms, surpassing the $2k+1$ lower bound by an $o(1)$ factor. Moreover, we give a randomized algorithm achieving a competitive ratio of $(1+ \frac{\sqrt{2}}{2})k +1$ in pseudo-polynomial time. The proposed techniques can also be applied to implicitly represent multiple near-shortest $s$-$t$ paths.
Laura Mancinska and Thomas Vidick. Unbounded entanglement can be needed to achieve the optimal success probability
Abstract: Quantum entanglement is known to provide a strong advantage in many two-party distributed tasks. We investigate the question of how much entanglement is needed to reach optimal performance. For the first time we show that there exists a purely classical scenario for which no finite amount of entanglement suffices. To this end we introduce a simple two-party nonlocal game $H$, inspired by Hardy's paradox. In our game each player has only two possible questions and can provide answers in a countable set. We exhibit a sequence of strategies which use entangled states in increasing dimension $d$ and succeed with probability $1-O(d^{-c})$ for some $c\geq 0.13$. On the other hand, we show that any strategy using an entangled state of local dimension $d$ has success probability at most $1-\Omega(d^{-2})$. In addition, we show that any strategy restricted to producing answers in a set of cardinality at most $d$ has success probability at most $1-\Omega(d^{-2})$.
Omer Reingold, Ron Rothblum and Udi Wieder. Pseudorandom Graphs in Data Structures
Abstract: We prove that the hash functions required for several data structure applications can be instantiated using the hash functions of Celis et al. (SIAM J. Comput., 2013). These functions simultaneously enjoy short description length as well as fast evaluation time. The applications we consider are: (1) Cuckoo Hashing, (2) Cuckoo Hashing with Stash and (3) the Power of Two Choices paradigm for load balancing. Our analysis relies on a notion of sparse pseudorandom graphs that are similar to random graphs in having no large connected component and no dense subgraph. Such graphs may be of independent interest. Relating pseudorandom graphs to the two-choice paradigm relies on a very simple new proof we give (at the price of somewhat worse parameters).
Jiri Fiala, Pavel Klavík, Jan Kratochvil and Roman Nedela. Algorithmic Aspects of Regular Graph Covers with Applications to Planar Graphs
Abstract: A graph G covers a graph H if there exists a locally bijective homomorphism from G to H. We deal with regular covers in which this locally bijective homomorphism is prescribed by an action of a subgroup of Aut(G). Regular covers have many applications in constructions and studies of big objects all over mathematics and computer science.
We study computational aspects of regular covers that have not been addressed before. The decision problem RegularCover asks for two given graphs G and H whether G regularly covers H. When |H|=1, this problem becomes Cayley graph recognition for which the complexity is still unresolved. Another special case arises for |G| = |H| when it becomes the graph isomorphism problem. Therefore, we restrict ourselves to graph classes with polynomially solvable graph isomorphism.
Inspired by Negami, we apply the structural results used by Babai in the 1970's to study automorphism groups of graphs. Our main result is the following FPT meta-algorithm: Let C be a class of graphs such that the structure of automorphism groups of 3-connected graphs in C is simple. Then we can solve RegularCover for C-inputs G in time O*(2^(e(H)/2)) where e(H) denotes the number of edges of H. As one example of C, this meta-algorithm applies to planar graphs. In comparison, testing general graph covers is known to be NP-complete for planar inputs G even for small fixed graphs H such as K_4 or K_5. Most of our results also apply to general graphs, in particular the complete structural understanding of regular covers for 2-cuts.
Michael Elkin, Ofer Neiman and Shay Solomon. Light Spanners
Abstract: A \emph{$t$-spanner} of a weighted undirected graph $G=(V,E)$ is a subgraph $H$ such that $d_H(u,v)\le t\cdot d_G(u,v)$ for all $u,v\in V$. The sparseness of the spanner can be measured by its size (the number of edges) and weight (the sum of all edge weights), both being important measures of the spanner's quality -- in this work we focus on the latter.
Specifically, it is shown that for any parameters $k\ge 1$ and $\eps>0$, any weighted graph $G$ on $n$ vertices admits a $(2k-1)\cdot(1+\eps)$-stretch spanner of weight at most $w(MST(G))\cdot O_\eps(kn^{1/k}/\log k)$, where $w(MST(G))$ is the weight of a minimum spanning tree of $G$. Our result is obtained via a novel analysis of the classic greedy algorithm, and improves previous work by a factor of $O(\log k)$.
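For reference, the classic greedy algorithm whose analysis is refined here can be sketched as follows (our code; `dist` is a plain Dijkstra and the tiny triangle example is only illustrative):

```python
import heapq

def greedy_spanner(n, edges, t):
    """Scan edges by weight; keep an edge iff the spanner distance exceeds t * w."""
    adj = {v: [] for v in range(n)}
    spanner = []
    for w, u, v in sorted(edges):
        if dist(adj, u, v) > t * w:
            adj[u].append((v, w)); adj[v].append((u, w))
            spanner.append((w, u, v))
    return spanner

def dist(adj, s, tgt):
    """Dijkstra on the current spanner; inf if tgt is unreachable."""
    d, pq = {s: 0}, [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if u == tgt:
            return du
        if du > d.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            if du + w < d.get(v, float('inf')):
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return float('inf')

edges = [(1, 0, 1), (1, 1, 2), (1, 2, 0)]   # a unit-weight triangle
print(greedy_spanner(3, edges, t=3))        # with t = 2k-1 = 3, one edge is dropped
```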
Zeev Dvir, Rafael Mendes de Oliveira and Amir Shpilka. Testing Equivalence of Polynomials under Shifts
Abstract: Two polynomials $f, g \in F[x_1, \ldots, x_n]$ are called shift-equivalent if there exists a vector $(a_1, \ldots, a_n) \in F^n$ such that the polynomial identity $f(x_1+a_1, \ldots, x_n+a_n) = g(x_1, \ldots, x_n)$ holds. Our main result is a new randomized algorithm that tests whether two given polynomials are shift-equivalent. Our algorithm runs in time polynomial in the circuit size of the polynomials, to which it is given black-box access. This complements previous work of Grigoriev (Theoretical Computer Science, 1997), who gave a deterministic algorithm running in time $n^{O(d)}$ for degree-$d$ polynomials.
Our algorithm uses randomness only to solve instances of the Polynomial Identity Testing (PIT) problem. Hence, if one could de-randomize PIT (a long-standing open problem in complexity), a de-randomization of our algorithm would follow. This establishes an equivalence between de-randomizing shift-equivalence testing and de-randomizing PIT (both in the black-box and the white-box setting). For certain restricted models, such as Read-Once Branching Programs, we already obtain a deterministic algorithm using existing PIT results.
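As an illustration of the definition only (exponential brute force, not either algorithm discussed above), one can search for a shift vector over a small prime field; for degree below $p$, equality as functions on $F_p^n$ coincides with polynomial identity:

```python
from itertools import product

def shift_equivalent(f, g, n, p):
    """Return a shift vector a with f(x + a) == g(x) on F_p^n, or None."""
    points = list(product(range(p), repeat=n))
    for a in points:  # all candidate shift vectors
        if all(f(*[(x[i] + a[i]) % p for i in range(n)]) % p == g(*x) % p
               for x in points):
            return a
    return None

f = lambda x, y: x * x + y               # f(x, y) = x^2 + y
g = lambda x, y: x * x + 2 * x + y + 1   # g(x, y) = f(x + 1, y)
print(shift_equivalent(f, g, n=2, p=5))  # -> (1, 0)
```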
Abstract: In edge orientations, the goal is usually to orient (direct) the edges of an undirected network (modeled by a graph) such that all out-degrees are bounded. When the network is fully dynamic, i.e., admits edge insertions and deletions, we wish to maintain such an orientation while keeping a tab on the update time. Low out-degree orientations turned out to be a surprisingly useful tool for managing networks.
Brodal and Fagerberg (1999) initiated the study of the edge orientation problem in terms of the graph's arboricity, which is very natural in this context. Their solution achieves a constant out-degree and a logarithmic \emph{amortized} update time for all graphs with constant arboricity, which include all planar and excluded-minor graphs. It remained an open question (first proposed by Brodal and Fagerberg, later by Erickson and others) to obtain similar bounds with \emph{worst-case} update time.
We address this 15-year-old question by providing a simple algorithm with worst-case bounds that nearly match the previous amortized bounds. Our algorithm is based on a new approach of maintaining a combinatorial invariant, and achieves a logarithmic out-degree with logarithmic worst-case update times. This result has applications to various dynamic network problems such as maintaining a maximal matching, where we obtain a logarithmic worst-case update time compared to a similar amortized update time of Neiman and Solomon (2013).
Abstract: We investigate adjacency labeling schemes for graphs of bounded degree $\Delta = O(1)$. In particular, we present an optimal (up to an additive constant) $\log n + O(1)$ adjacency labeling scheme for bounded-degree trees. The latter scheme is derived from a labeling scheme for bounded-degree outerplanar graphs. Our results complement a similar bound recently obtained for bounded-depth trees [Fraigniaud and Korman, SODA '10], and may provide new insights for closing the long-standing gap for adjacency in trees [Alstrup and Rauhe, FOCS '02]. We also provide improved labeling schemes for bounded-degree planar graphs. Finally, we use combinatorial number systems and present an improved adjacency labeling scheme for graphs of bounded degree $\Delta$ with $(e+1)\sqrt{n} < \Delta \leq n/5$.
Richard Cleve and Rajat Mittal. Characterization of Binary Constraint System Games.
Abstract: We investigate a simple class of multi-prover interactive proof systems (classical non-local games), called binary constraint system (BCS) games, and characterize those that admit a perfect entangled strategy (i.e., a strategy with value 1 when the provers can use shared entanglement). Our characterization is in terms of a system of matrix equations. One application of this characterization is that, combined with a recent result of Arkhipov, it leads to a simple algorithm for determining whether certain restricted BCS games have a perfect entangled strategy, and, for the instances that do not, for bounding their value strictly below 1. An open question is whether, for the case of general BCS games, making this determination is computationally decidable. Our characterization might play a useful role in the resolution of this question.
Raghu Meka, Omer Reingold, Guy Rothblum and Ron Rothblum. Fast Pseudorandomness for Independence and Load Balancing
Abstract: We provide new constructions of several fundamental pseudorandom objects. Loosely speaking, these constructions obtain exponential improvements in efficiency compared to previous constructions with comparable randomness complexity. Our measure of efficiency is the number of word operations, as captured by the well-established unit-cost word RAM model. Our main results are the following:
(1) A family of $(1/n)$-almost $\log n$-wise independent Boolean hash functions with $O(\log n)$ description length (or seed length) and $O(\log\log n)$ operations per evaluation. Prior constructions with similar seed lengths required $\Theta(\log n)$ operations.
(2) $\eps$-biased sequences for $\eps = 1/\poly(n)$ with seed length $O(\log n\log\log n)$ and $O((\log\log n)^2)$ operations (to evaluate an output bit or a block of up to $\log n$ consecutive bits). Prior constructions achieve $O(\log n)$ seed length, but require $\Theta(\log n)$ operations. This construction implies pseudorandom generators with similar efficiency that fool classes such as low-degree polynomials and read-once CNFs.
(3) Hash functions for placing $n$ balls in $n$ bins such that with all but probability $1/n$ the maximal load is $O(\log n/\log\log n)$ (which is optimal), with seed length $O(\log n\log\log n)$ and $O((\log\log n)^2)$ operations per evaluation. The previously known construction with similar seed length required $\Theta(\log n \log\log n)$ operations. Indeed, our construction is an efficient instantiation of that construction, due to Celis, Reingold, Segev and Wieder (FOCS 2011).
These constructions are all simultaneously within $\log \log n$ factors of the optimal seed length, and within $(\log\log n)^2$ factors of the optimal computational efficiency.
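The load-balancing guarantee in item (3) is easy to probe empirically with an idealized fully random hash (our sketch; the paper's point, a short seed with $O((\log\log n)^2)$ evaluation time, is not simulated here):

```python
import random
from collections import Counter

def max_load(n, seed=0):
    """Throw n balls into n bins uniformly; return the maximum bin load."""
    rng = random.Random(seed)
    return max(Counter(rng.randrange(n) for _ in range(n)).values())

n = 1 << 16
print(max_load(n))  # Theta(log n / loglog n): typically around 8 for n = 65536
```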
Anna Gilbert, Yi Li, Ely Porat and Martin Strauss. For-all Sparse Recovery in Near-Optimal Time
Abstract: An \textit{approximate sparse recovery system} in the $\ell_1$ norm consists of parameters $k$, $\epsilon$, $N$, an $m \times N$ measurement matrix $\Phi$, and a recovery algorithm $\mathcal{R}$. Given a vector $\mb{x}$, the system approximates $\mb{x}$ by $\widehat{\mb{x}} = \mathcal{R}(\Phi\mb{x})$, which must satisfy $\|\widehat{\mb{x}}-\mb{x}\|_1 \leq (1+\epsilon)\|\mb{x}-\mb{x}_k\|_1$. We consider the ``for all'' model, in which a single matrix $\Phi$, possibly ``constructed'' non-explicitly using the probabilistic method, is used for all signals $\mb{x}$. The best existing sublinear algorithm, by Porat and Strauss (SODA'12), uses $O(\epsilon^{-3} k\log(N/k))$ measurements and runs in time $O(k^{1-\alpha}N^\alpha)$ for any constant $\alpha > 0$.
In this paper, we improve the number of measurements to $O(\epsilon^{-2} k \log(N/k))$, matching the best existing upper bound (attained by super-linear algorithms), and the runtime to $O(k^{1+\beta}\poly(\log N,1/\epsilon))$, with a modest restriction that $\epsilon \leq (\log k/\log N)^{\gamma}$, for any constants $\beta,\gamma > 0$. When $k\leq \log^c N$ for some $c>0$, the runtime is reduced to $O(k\poly(\log N,1/\epsilon))$. With no restrictions on $\epsilon$, we have an approximation recovery system with $m = O(k/\epsilon \log(N/k)((\log N/\log k)^\gamma + 1/\epsilon))$ measurements.
The algorithmic innovation is a novel encoding procedure that is reminiscent of network coding and that reflects the structure of the hashing stages.
Abstract: Most decidability results concerning well-structured transition systems apply to the finitely branching variant. Yet some models (inserting automata, ω-Petri nets, ...) are naturally infinitely branching. Here we develop tools to handle infinitely branching WSTS by exploiting the crucial property that in the (ideal) completion of a well-quasi-ordered set, downward-closed sets are finite unions of ideals. Using these tools, we derive decidability results and delineate the undecidability frontier for the termination, control-state maintainability, and coverability problems. A new forward algorithm for deciding coverability is obtained, and boundedness is also shown decidable.
Andreas Björklund, Rasmus Pagh, Virginia Vassilevska Williams and Uri Zwick. Listing Triangles
Abstract: We present new algorithms for listing triangles in dense and sparse graphs. The running time of our algorithm for dense graphs is $\BOt{n^{\omega} + n^{3(\omega-1)/(5-\omega)} t^{2(3-\omega)/(5-\omega)}}$, and the running time of the algorithm for sparse graphs is $\BOt{m^{{2\omega}/{(\omega+1)}} + m^{{3(\omega-1)}/{(\omega+1)}} t^{{(3-\omega)}/{(\omega+1)}}}$, where~$n$ is the number of vertices, $m$ is the number of edges, $t$ is the number of triangles to be listed, and $\omega<2.373$ is the exponent of fast matrix multiplication. With the current bound on~$\omega$, the running times of our algorithms are $\BOt{ n^{2.373} + n^{1.568}\, t^{0.478}}$ and $\BOt{m^{1.408} + m^{1.222}\, t^{0.186}}$, respectively. If $\omega=2$, the running times of the algorithms become $\BOt{n^2+nt^{2/3}}$ and $\BOt{m^{4/3}+mt^{1/3}}$, respectively. In particular, if $\omega=2$, our algorithm lists~$m$ triangles in $\BOt{m^{4/3}}$ time. P\v{a}tra\c{s}cu (STOC 2010) showed that $\BOm{m^{4/3-o(1)}}$ time is required for listing $m$ triangles, unless there exist subquadratic algorithms for 3SUM. We first obtain randomized algorithms with the desired running times and then derandomize them using \emph{sparse recovery} techniques.
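For contrast with the algebraic algorithms above, the standard combinatorial $O(m^{3/2})$-time listing baseline is short (our sketch): orient each edge from its lower-degree endpoint to its higher-degree one, then intersect out-neighbourhoods.

```python
from collections import defaultdict

def list_triangles(edges):
    """Yield each triangle of the undirected edge list exactly once."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1; deg[v] += 1
    rank = lambda v: (deg[v], v)       # break degree ties by vertex name
    out = defaultdict(set)
    for u, v in edges:
        a, b = (u, v) if rank(u) < rank(v) else (v, u)
        out[a].add(b)                  # lower-rank endpoint -> higher-rank one
    for u, nbrs in out.items():
        for v in nbrs:
            for w in nbrs & out.get(v, set()):
                yield (u, v, w)

print(list(list_triangles([(0, 1), (1, 2), (2, 0), (2, 3)])))  # [(0, 1, 2)]
```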
Abstract: We identify sufficient conditions for the construction of free language models for systems of Kleene algebra with additional equations. The construction applies to a broad class of extensions of KA and provides a uniform approach to deductive completeness and coalgebraic decision procedures.
Abstract: We consider multi-player graph games with partial-observation and parity objective. While the decision problem for three-player games with a coalition of the first and second players against the third player is undecidable, we present a decidability result for partial-observation games where the first and third player are in a coalition against the second player, thus where the second player is adversarial but weaker due to partial-observation. We establish tight complexity bounds in the case where player 1 is less informed than player 2, namely 2-EXPTIME-completeness for parity objectives. The symmetric case of player 1 more informed than player 2 is much more complicated, and we show that already in the case where player 1 has perfect observation, memory of size non-elementary is necessary in general for reachability objectives, and the problem is decidable for safety and reachability objectives. Our results have tight connections with partial-observation stochastic games for which we derive new complexity results.
Abstract: We consider decision problems for deterministic pushdown automata over a unary alphabet (udpda, for short). Udpda are a simple computation model that accepts exactly the unary regular languages, but can be exponentially more succinct than finite-state automata. We complete the complexity landscape for udpda by showing that emptiness (and thus universality) is P-hard, that the equivalence and compressed membership problems are P-complete, and that inclusion is coNP-complete. Our upper bounds are based on a translation theorem between udpda and straight-line programs over the binary alphabet (SLPs). We show that the characteristic sequence of any udpda can be represented as a pair of SLPs---one for the prefix, one for the lasso---that have size linear in the size of the udpda and can be computed in polynomial time. Hence, decision problems on udpda are reduced to decision problems on SLPs. Conversely, any SLP can be converted in logarithmic space into a udpda, and this forms the basis for our lower bound proofs. We show coNP-hardness of the ordered matching problem for SLPs, from which we derive coNP-hardness for inclusion. In addition, we complete the complexity landscape for unary nondeterministic pushdown automata by showing that the universality problem is Pi2P-hard, using a new class of integer expressions. Our techniques have applications beyond udpda. We show that our results imply Pi2P-completeness for a natural fragment of Presburger arithmetic and coNP lower bounds for compressed matching problems with one-character wildcards.
Andrew Childs, David Gosset and Zak Webb. The Bose-Hubbard model is QMA-complete
Abstract: The Bose-Hubbard model is a system of interacting bosons that live on the vertices of a graph. The particles can move between adjacent vertices and experience a repulsive on-site interaction. The Hamiltonian is determined by a choice of graph that specifies the geometry in which the particles move and interact. We prove that approximating the ground energy of the Bose-Hubbard model on a graph at fixed particle number is QMA-complete. In our QMA-hardness proof, we encode the history of an n-qubit computation in the subspace with at most one particle per site (i.e., hard-core bosons). This feature, along with the well-known mapping between hard-core bosons and spin systems, lets us prove a related result for a class of 2-local Hamiltonians defined by graphs that generalizes the XY model. By avoiding the use of perturbation theory in our analysis, we circumvent the need to multiply terms in the Hamiltonian by large coefficients.
Abstract: Scheduling jobs with precedence constraints on a set of identical machines so as to minimize the overall completion time (makespan) is a fundamental problem in combinatorial optimization. In practical settings such as cloud computing, jobs are often {\em malleable}, i.e., they can be processed on multiple machines simultaneously. The instantaneous processing rate of a job is a non-decreasing function of the number of machines assigned to it (we call it the processing function). While an optimal algorithm for convex processing functions is straightforward, the more practical model is that of concave processing functions, which obey the law of diminishing utility and generalize the classical (non-malleable) problem. Our main result is a $(2+\epsilon)$-approximation algorithm for concave processing functions (for any $\epsilon > 0$), which is the best possible under complexity-theoretic assumptions. The approximation ratio improves to $(1 + \epsilon)$ for the interesting and practically relevant special case of power functions, i.e., $p_j(x) = c_j \cdot x^{\gamma}$.
Yitong Yin. Spatial Mixing of Coloring Random Graphs
Abstract: We study the strong spatial mixing (decay of correlation) property of proper $q$-colorings of random graph $G(n, d/n)$ with a fixed~$d$. The strong spatial mixing of coloring and related models have been extensively studied on graphs with bounded maximum degree. However, for typical classes of graphs with bounded average degree, such as $G(n, d/n)$, an easy counterexample shows that colorings do not exhibit strong spatial mixing with high probability. Nevertheless, we show that for $q\ge\alpha d+\beta$ with $\alpha>2$ and sufficiently large $\beta=O(1)$, with high probability proper $q$-colorings of random graph $G(n, d/n)$ exhibit strong spatial mixing with respect to an arbitrarily fixed vertex. This is the first strong spatial mixing result for colorings of graphs with unbounded degree. Our analysis of strong spatial mixing establishes a block-wise correlation decay instead of the standard point-wise decay, which may be of interest by itself, especially for graphs with unbounded degree.
Timon Hertli. Breaking the PPSZ Barrier for Unique 3-SAT
Abstract: The PPSZ algorithm by Paturi, Pudlák, Saks, and Zane (FOCS 1998) is the fastest known algorithm for (Promise) Unique k-SAT. We give an improved algorithm with exponentially better running-time bounds for Unique 3-SAT.
For uniquely satisfiable 3-CNF formulas, we make the following case distinction: we call a clause critical if exactly one of its literals is satisfied by the unique satisfying assignment. If a formula has many critical clauses, we observe that PPSZ by itself is already faster. If the formula has only few clauses overall, we use an algorithm by Wahlström (ESA 2005) that is faster than PPSZ in this case. Otherwise we have a formula with few critical and many non-critical clauses. Non-critical clauses have at least two satisfied literals; we show how to exploit this to improve PPSZ.
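The key notion in the case distinction is easy to make concrete (our sketch; literals are nonzero integers, with -v the negation of variable v):

```python
def critical_clauses(cnf, assignment):
    """Return the clauses with exactly one literal satisfied by the assignment."""
    def satisfied(lit):
        return assignment[abs(lit)] == (lit > 0)
    return [cl for cl in cnf if sum(map(satisfied, cl)) == 1]

cnf = [[1, 2, 3], [-1, 2, -3], [1, -2, 3]]
alpha = {1: True, 2: False, 3: False}     # pretend this is the unique assignment
print(critical_clauses(cnf, alpha))       # -> [[1, 2, 3], [-1, 2, -3]]
```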
Yaoyu Wang and Yitong Yin. Certificates in Data Structures
Abstract: We study certificates in static data structures. In the cell-probe model, certificates are the cell probes that uniquely identify the answer to the query. As a natural notion of nondeterministic cell probes, lower bounds for certificates in data structures immediately imply deterministic cell-probe lower bounds. In spite of this extra power brought by nondeterminism, we prove that two widely used tools for cell-probe lower bounds, the richness lemma of Miltersen et al. and the direct-sum richness lemma of Patrascu and Thorup, both hold for certificates in data structures with even better parameters. Applying these lemmas and adopting existing reductions, we obtain certificate lower bounds for a variety of static data structure problems. These certificate lower bounds are at least as good as the highest known cell-probe lower bounds for the respective problems. In particular, for the approximate near neighbor (ANN) problem in Hamming distance, our lower bound improves the state of the art. When the space is strictly linear, our lower bound for ANN in $d$-dimensional Hamming space becomes $t=\Omega(d)$, which, along with the recent breakthrough for polynomial evaluation by Larsen, are the only two $t=\Omega(d)$ lower bounds ever proved for any problem in the cell-probe model.
John Iacono. Why some heaps support constant-amortized-time decrease-key operations, and others do not
Abstract: A lower bound is presented which shows that a class of heap algorithms in the pointer model with only heap pointers must spend $\Omega \left( \frac{\log \log n}{\log \log \log n} \right)$ amortized time on the Decrease-Key operation (given $O(\log n)$ amortized-time Extract-Min). Intuitively, this bound shows that the key to having $O(1)$-time Decrease-Key is the ability to sort $O(\log n)$ items in $O(\log n)$ time; Fibonacci heaps [M.~L.~Fredman and R.~E.~Tarjan. J. ACM 34(3):596-615 (1987)] do this through the use of bucket sort. Our lower bound also holds no matter how much data is augmented; this is in contrast to the lower bound of Fredman [J. ACM 46(4):473-501 (1999)], who showed a tradeoff between the number of augmented bits and the amortized cost of Decrease-Key. A new heap data structure, the sort heap, is presented. This heap is a simplification of the heap of Elmasry [SODA 2009: 471-476] and shares with it an $O(\log \log n)$ amortized-time Decrease-Key, but with a straightforward implementation such that our lower bound holds. Thus a natural model is presented for a pointer-based heap such that the amortized runtime of a self-adjusting structure and the amortized lower asymptotic bounds for Decrease-Key differ by only an $O(\log \log \log n)$ factor.
Namit Chaturvedi. Toward a Structure Theory of Regular Infinitary Trace Languages
Abstract: The family of regular languages of infinite words is structured into a hierarchy where each level is characterized by a class of deterministic omega-automata -- the class of deterministic Buechi automata being the most prominent among them. In this paper, we analyze the situation of regular languages of infinite Mazurkiewicz traces that model non-terminating, concurrent behaviors of distributed systems. Here, a corresponding classification is still missing. We introduce the model of ``synchronization-aware asynchronous automata'', which allows us to initiate a classification of regular infinitary trace languages in a form that is in nice correspondence to the case of omega-regular word languages.
Amir Abboud, Virginia Vassilevska Williams and Oren Weimann. Consequences of Faster Alignment of Sequences
Abstract: The Local Alignment problem is a classical problem with applications in biology. Given two input strings and a scoring function on pairs of letters, one is asked to find the substrings of the two input strings that are most similar under the scoring function. The best algorithms for Local Alignment run in time that is roughly quadratic in the string length. It is a big open problem whether substantially subquadratic algorithms exist. In this paper we show that for all $\eps>0$, an $O(n^{2-\eps})$ time algorithm for local alignment on strings of length $n$ would imply breakthroughs on three longstanding open problems: it would imply that for some $\delta>0$, $3$SUM on $n$ numbers is in $O(n^{2-\delta})$ time, CNF-SAT on $n$ variables is in $O((2-\delta)^n)$ time, and Maximum Weight $4$-Clique is in $O(n^{4-\delta})$ time. Our result for CNF-SAT also applies to the easier problem of finding the longest common substring of binary strings with don't cares. We also give strong conditional lower bounds for the more general Multiple Local Alignment problem on $k$ strings, under both $k$-wise and SP scoring, and for other string similarity problems such as Global Alignment with gap penalties and normalized Longest Common Subsequence.
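For concreteness, the roughly quadratic-time baseline referred to above is the classical Smith-Waterman dynamic program. The sketch below is a minimal illustration only; the linear gap penalty is our own assumption, added on top of the letter-pair scoring function mentioned in the abstract.

    def local_alignment_score(s, t, score, gap=-1):
        # Smith-Waterman-style DP: best score over all pairs of substrings.
        # Runs in O(len(s) * len(t)) time, i.e., the quadratic baseline.
        best = 0
        prev = [0] * (len(t) + 1)
        for i in range(1, len(s) + 1):
            cur = [0] * (len(t) + 1)
            for j in range(1, len(t) + 1):
                cur[j] = max(0,  # restart with the empty local alignment
                             prev[j - 1] + score(s[i - 1], t[j - 1]),
                             prev[j] + gap,   # hypothetical gap penalty
                             cur[j - 1] + gap)
                best = max(best, cur[j])
            prev = cur
        return best

The subquadratic-hardness results above say that no algorithm of this kind can be sped up to $O(n^{2-\eps})$ without the stated breakthroughs.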
Ramanujan M. S., Saket Saurabh, Pranabendu Misra, Manu Basavaraju, Fedor Fomin and Petr Golovach. Parameterized Algorithms to Preserve Connectivity
Abstract: We study the following family of connectivity problems. For a given $\lambda$-edge connected graph $G=(V,E)$, a set of links $L$ such that $G+L=(V, E\cup L)$ is $(\lambda+1)$-edge connected, and a positive integer $k$, the questions are
1. Augmentation Problem: whether $G$ can be augmented to a $(\lambda+1)$-edge connected graph by adding at most $k$ links from $L$; or 2. Deletion Problem: whether it is possible to preserve $(\lambda+1)$-edge connectivity of graph $G+ L$ after deleting at least $k$ links from $L$.
We obtain the following results.
1. A $9^k|V|^{O(1)}$ time algorithm for a weighted version of the augmentation problem. This improves over the previous best bound of $2^{O(k\log k)}|V|^{O(1)}$ given by Marx and Vegh~[{\em ICALP 2013}]. Let us remark that even for $\lambda=1$, the best known algorithm so far, due to Nagamochi [{\em DAM 2003}], runs in time $2^{O(k\log k)}|V|^{O(1)}$.
2. A $2^{O(k)}|V|^{O(1)}$ time algorithm for the deletion problem, thus establishing that the problem is fixed-parameter tractable ({\FPT}). Moreover, we show that the problem admits a kernel with $12k$ vertices and $3k$ links when the graph $G$ has odd connectivity, and a kernel with $O(k^2)$ vertices and $O(k^2)$ links when $G$ has even connectivity.
Our results are based on a novel connection between augmenting sets and the {\sc Steiner Tree} problem in an appropriately defined auxiliary graph.
Erez Kantor and Shay Kutten. Optimal competitiveness for Symmetric Rectilinear Steiner Arborescence and related problems
Abstract: We present optimal competitive algorithms for two interrelated known problems involving Steiner Arborescence. One is the continuous problem of the Symmetric Rectilinear Steiner Arborescence ($\SRSA$), whose online version was studied by Berman and Coulston as a symmetric version of the known Rectilinear Steiner Arborescence ($RSA$) problem. A closely related, but discrete, problem (studied separately in the past) is the online Multimedia Content Delivery ($\MCD$) problem on line networks, presented originally by Papadimitriou, Ramanathan, and Rangan. An efficient content delivery was modeled as a low-cost Steiner arborescence in a network$\times$time grid that they defined. We study here the version studied by Charikar, Halperin, and Motwani (who used the same problem definitions, but removed some constraints on the inputs). The bounds on the competitive ratios introduced separately in the above papers were similar for the two problems: $O(\log N)$ for the continuous problem and $O(\log n)$ for the network problem, where $N$ is the number of terminals to serve and $n$ is the size of the network. The lower bounds were $\Omega(\sqrt{\log N})$ and $\Omega(\sqrt{\log n})$, respectively.
Berman and Coulston conjectured that both the upper bound and the lower bound could be improved. We disprove this conjecture and close these quadratic gaps for both problems. We present deterministic algorithms that are optimally competitive: $O(\sqrt{\log N})$ for $\SRSA$ and $O(\min \{\sqrt{\log n}, \sqrt{\log N} \})$ for $\MCD$, matching the lower bounds for these two online problems. We also present an $\Omega(\sqrt[3]{\log n})$ lower bound on the competitiveness of any randomized algorithm.
Abstract: In this paper, we analyze and study a hybrid model for testing and learning probability distributions. Here, in addition to samples, the testing algorithm is provided with one of two different types of oracles to the unknown distribution D over [n]. More precisely, we define both the dual and cumulative dual access models, in which the algorithm A can both sample from D and, respectively, for any i in [n]:
- query the probability mass D(i) (query access); or
- get the total mass of {1,...,i}, i.e. \sum_{j=1}^i D(j) (cumulative access).
These two models, by generalizing the previously studied sampling and query oracle models, allow us to bypass the strong lower bounds established for a number of problems in these settings, while capturing several interesting aspects of these problems -- and providing new insight on the limitations of the models. Finally, we show that while the testing algorithms are in most cases strictly more efficient, some tasks remain hard even with this additional power.
Eric Blais, Johan Håstad, Rocco Servedio and Li-Yang Tan. On DNF approximators for monotone Boolean functions
Abstract: We study the complexity of approximating monotone Boolean functions with disjunctive normal form (DNF) formulas, exploring two main directions. First, we construct DNF approximators for arbitrary monotone functions achieving one-sided error: we show that every monotone $f$ can be $\eps$-approximated by a DNF $g$ of size $2^{n-\Omega_\eps(\sqrt{n})}$ satisfying $g(x) \le f(x) $ for all $x\in\{0,1\}^n$. This is the first non-trivial universal upper bound even for DNF approximators incurring two-sided error.
Next, we study the power of negations in DNF approximators for monotone functions. We exhibit monotone functions for which non-monotone DNFs perform better than monotone ones, giving separations with respect to both DNF size and width. Our results, when taken together with a classical theorem of Quine \cite{Qui54}, highlight an interesting contrast between approximation and exact computation in the DNF complexity of monotone functions, and they add to a line of work on the surprising role of negations in monotone complexity \cite{Raz85,Oko82,AG87}.
Yuval Emek and Adi Rosen. Semi-Streaming Set Cover
Abstract: This paper studies the set cover problem under the semi-streaming model. The underlying set system is formalized in terms of a hypergraph $G = (V, E)$ whose edges arrive one-by-one and the goal is to construct an edge cover $F \subseteq E$ with the objective of minimizing the cardinality (or cost in the weighted case) of $F$. We consider a parameterized relaxation of this problem, where given some $0 \leq \epsilon < 1$, the goal is to construct an edge $(1 - \epsilon)$-cover, namely, a subset of edges incident to all but an $\epsilon$-fraction of the vertices (or their benefit in the weighted case). The key limitation imposed on the algorithm is that its space is limited to (poly)logarithmically many bits per vertex.
Our main result is an asymptotically tight trade-off between $\epsilon$ and the approximation ratio: We design a semi-streaming algorithm that on input graph $G$, constructs a succinct data structure $\mathcal{D}$ such that for every $0 \leq \epsilon < 1$, an edge $(1 - \epsilon)$-cover that approximates the optimal edge \mbox{($1$-)cover} within a factor of $f(\epsilon, n)$ can be extracted from $\mathcal{D}$ (efficiently and with no additional space requirements), where \[ f(\epsilon, n) = \left\{ \begin{array}{ll} O (1 / \epsilon), & \text{if } \epsilon > 1 / \sqrt{n} \\ O (\sqrt{n}), & \text{otherwise} \end{array} \right. \, . \] In particular for the traditional set cover problem we obtain an $O(\sqrt{n})$-approximation. This algorithm is proved to be best possible by establishing a family (parameterized by $\epsilon$) of matching lower bounds.
Markus Sortland Dregi and Daniel Lokshtanov. Parameterized Complexity of Bandwidth on Trees
Abstract: The bandwidth of an $n$-vertex graph $G$ is the smallest integer $b$ such that there exists a bijective function $f : V(G) \rightarrow \{1,...,n\}$, called a layout of $G$, such that for every edge $uv \in E(G)$, $|f(u) - f(v)| \leq b$. In the {\sc Bandwidth} problem we are given as input a graph $G$ and an integer $b$, and asked whether the bandwidth of $G$ is at most $b$. We present two results concerning the parameterized complexity of the {\sc Bandwidth} problem on trees.
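As a sanity check on the definition, the following sketch evaluates the stretch of a given layout and computes the bandwidth of a tiny graph by exhaustively trying all bijections; it is factorial-time and purely illustrative.

    from itertools import permutations

    def layout_cost(edges, f):
        # Maximum stretch |f(u) - f(v)| of layout f over all edges.
        return max(abs(f[u] - f[v]) for u, v in edges)

    def bandwidth(vertices, edges):
        # Minimum over all bijections V -> {1, ..., n}; feasible only for tiny graphs.
        return min(layout_cost(edges, dict(zip(vertices, perm)))
                   for perm in permutations(range(1, len(vertices) + 1)))

For example, bandwidth(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')]) returns 1, attained by laying the path out in order.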
First we show that an algorithm for {\sc Bandwidth} with running time $f(b)n^{o(b)}$ would violate the Exponential Time Hypothesis, even if the input graphs are restricted to be trees of pathwidth at most two. Our lower bound shows that the classical $2^{O(b)}n^{b+1}$ time algorithm by Saxe [SIAM Journal on Algebraic and Discrete Methods, 1980] is essentially optimal.
Our second result is a polynomial time algorithm that given a tree $T$ and integer $b$, either correctly concludes that the bandwidth of $T$ is more than $b$ or finds a layout of $T$ of bandwidth at most $b^{O(b)}$. This is the first parameterized approximation algorithm for the bandwidth of trees.
Olga Ohrimenko, Michael Goodrich, Roberto Tamassia and Eli Upfal. The Melbourne Shuffle: Improving Oblivious Storage in the Cloud
Abstract: We present a simple, efficient, and secure data-oblivious randomized shuffle algorithm. This is the first secure data-oblivious shuffle that is not based on sorting. Our method can be used to improve previous oblivious storage solutions for network-based outsourcing of data.
Abstract: We consider n mobile agents of limited energy that are placed on a straight line and that need to collectively deliver a single piece of data from the given source point s to the given target point t on the line. Agents can move only as far as their batteries allow. They can hand over the data when they meet, i.e., when they are positioned at the same point. In this paper we show that deciding whether the agents can deliver the data is (weakly) NP-complete, and we present a quasi-, pseudo-polynomial time algorithm that runs in time O(∆^2 · n^{1 + 4 log ∆}), where ∆ is the distance between s and t. This answers an open problem stated by Anaya et al. (DISC 2012).
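To make the decision problem concrete, here is a brute-force feasibility check that tries every relay order of the agents, under the (plausible but here unproven) normalization that the data moves monotonically from s to t and that each carrier takes it as far right as its battery allows; the exponential running time is consistent with the NP-completeness above.

    from itertools import permutations

    def can_deliver(agents, s, t):
        # agents: list of (position, energy) pairs on the line; assumes s <= t.
        for order in permutations(range(len(agents))):
            d = s  # current position of the data
            for i in order:
                p, e = agents[i]
                walk = abs(p - d)   # cost for agent i to reach the data
                if e > walk:
                    d += e - walk   # carry the data as far right as possible
                if d >= t:
                    return True
        return False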
Mrinal Kumar and Shubhangi Saraf. Superpolynomial lower bounds for general homogeneous depth 4 arithmetic circuits
Abstract: In this paper, we prove superpolynomial lower bounds for the class of homogeneous depth 4 arithmetic circuits. We give an explicit polynomial in $\VNP$ of degree $n$ in $n^2$ variables such that any homogeneous depth 4 arithmetic circuit computing it must have size $n^{\Omega(\log \log n)}$.
Our results extend the works of Nisan-Wigderson~\cite{NW95} (which showed superpolynomial lower bounds for homogeneous depth 3 circuits), Gupta-Kamath-Kayal-Saptharishi and Kayal-Saha-Saptharishi~\cite{GKKS12,KSS13} (which showed superpolynomial lower bounds for homogeneous depth 4 circuits with bounded bottom fan-in), Kumar-Saraf~\cite{KS-formula} (which showed superpolynomial lower bounds for homogeneous depth 4 circuits with bounded top fan-in) and Raz-Yehudayoff and Fournier-Limaye-Malod-Srinivasan~\cite{RY08b,FLMS13} (which showed superpolynomial lower bounds for multilinear depth 4 circuits). Several of these results in fact showed exponential lower bounds.
The main ingredient in our proof is a new complexity measure of {\it bounded support} shifted partial derivatives. This measure allows us to prove exponential lower bounds for homogeneous depth 4 circuits where all the monomials computed at the bottom layer have {\it bounded support} (but possibly unbounded degree/fan-in), strengthening the results of Gupta et al.\ and Kayal et al.~\cite{GKKS12,KSS13}. This new lower bound combined with a careful ``random restriction'' procedure (that transforms general depth 4 homogeneous circuits to depth 4 circuits with bounded support) gives us our final result.
Abstract: In the classical model of tile self-assembly, unit square tiles translate in the plane and attach edgewise to form large crystalline structures. This model of self-assembly has been shown to be capable of asymptotically optimal assembly of arbitrary shapes and, via information-theoretic arguments, increasingly complex shapes necessarily require increasing numbers of distinct types of tiles. While permitted by the model, such systems are extremely difficult to engineer using known biochemical implementations of tile systems.
We explore the possibility of assembly using systems consisting of a single tile. Our main result shows that any system of square tiles can be simulated using a system with a single tile that is permitted to flip and rotate. We also show that systems of single tiles restricted to translation only can simulate cellular automata for a limited number of steps given an appropriate seed assembly, and that any longer-running simulation must induce infinite assembly.
Abstract: A longstanding open problem in the complexity of Boolean functions is the "Sensitivity Conjecture" (Nisan & Szegedy 1994) which asks whether the maximum sensitivity of a Boolean function is polynomially related to the other major complexity measures of Boolean functions such as block sensitivity, deterministic or randomized query complexity or the Fourier degree, which are all known to be polynomially related to each other. Despite major advances in the analysis of Boolean functions in the past decade (see for example the upcoming book of O'Donnell 2014), this problem remains wide open, with no improvements on the best upper bounds on (say) block sensitivity in terms of sensitivity since 2004 (Kenyon & Kutin 2004).
In this work, we introduce two new techniques for proving upper bounds on decision-tree complexity measures in terms of sensitivity and use them to improve the best known relations between sensitivity and other complexity measures. Namely, we show that deg(f)^{1-o(1)}=O(2^{s(f)}) and bs(f) \leq 2^{s(f)-1} s(f). The gap between sensitivity and the other complexity measures remains exponential, but these results are the first improvement on this difficult problem in a decade.
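For small functions, the quantities involved can be evaluated directly from their definitions; the sketch below computes the (maximum) sensitivity s(f) by brute force, purely to fix notation.

    from itertools import product

    def sensitivity(f, n):
        # s(f) = max over inputs x of the number of coordinates i
        # whose flip changes the value f(x).
        best = 0
        for x in product((0, 1), repeat=n):
            flips = sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:])
                        for i in range(n))
            best = max(best, flips)
        return best

    # Example: sensitivity(lambda x: int(any(x)), 3) == 3,
    # attained at the all-zeros input of OR.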
George Giakkoupis, Thomas Sauerwald and Alexandre Stauffer. Randomized Rumor Spreading in Dynamic Graphs
Abstract: We consider the well-studied rumor spreading model in which nodes contact a random neighbor in each round in order to push or pull the rumor. Unlike most previous works, which focus on static topologies, we look at a dynamic graph model where an adversary is allowed to rewire the connections between vertices before each round, giving rise to a sequence of graphs, G_1,G_2,... Our first result is a bound on the rumor spreading time in terms of the conductance of those graphs. We show that if the degree of each node does not change much during the protocol (that is, by at most a constant factor), then the spread completes within t rounds for some t such that the sum of conductances of the graphs G_1 up to G_t is O(log n). This result holds even against an adaptive adversary, whose decisions in a round may depend on the set of informed vertices before the round, and implies the known tight conductance-based bound for static graphs. Next we show that for the alternative expansion measure of vertex expansion, the situation is different. An adaptive adversary can delay the spread of the rumor significantly even if the graphs are regular and have high expansion, unlike in the static graph case where high expansion is known to guarantee fast rumor spreading. However, if the adversary is oblivious, i.e., the graph sequence is decided before the protocol begins, then we show that a bound close to the one for the static case holds for any sequence of regular graphs.
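A minimal simulation of the push-pull protocol against a dynamic graph sequence, under our assumption that the (oblivious or adaptive) adversary is modelled as any iterable of adjacency dicts G_1, G_2, ...:

    import random

    def push_pull_rounds(graphs, source):
        # graphs: iterable of adjacency dicts G_1, G_2, ... over the same vertex set.
        informed = {source}
        for t, adj in enumerate(graphs, start=1):
            contacts = {v: random.choice(adj[v]) for v in adj}  # one random neighbour each
            newly = {v for v, u in contacts.items() if u in informed}   # pull
            newly |= {u for v, u in contacts.items() if v in informed}  # push
            informed |= newly
            if len(informed) == len(adj):
                return t  # spread completed within t rounds
        return None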
Manoj Prabhakaran, Amit Sahai and Akshay Wadia. Secure Computation using Leaky Tokens
Abstract: Leakage-proof hardware tokens have been used to achieve a large number of cryptographic tasks recently. But in real life, due to various physical attacks, it is extremely difficult to construct hardware devices that are guaranteed to be leakage-proof. In this paper, we study the feasibility of general two-party computation using leaky hardware tokens.
Our main result is a completeness theorem that shows that ``every'' non-trivial leaky two-party functionality can be used for general secure computation. In fact, the protocol we construct is ``non-interactive'' and ``unconditionally secure''. There are no restrictions on the leakage functions associated with the token, except that they must not render the token trivial by revealing its entire secret state to the adversary.
André Chailloux and Giannicola Scarpa. Parallel Repetition of Entangled Games with Exponential Decay via the Superposed Information Cost
Abstract: In a two-player game, two cooperating but non-communicating players, Alice and Bob, receive inputs taken from a probability distribution. Each of them produces an output, and they win the game if they satisfy some predicate on their inputs/outputs. The entangled value $\omega^*(G)$ of a game $G$ is the maximum probability that Alice and Bob can win the game if they are allowed to share an entangled state prior to receiving their inputs.
The $n$-fold parallel repetition $G^n$ of $G$ consists of $n$ instances of $G$ where Alice and Bob receive all the inputs at the same time and must produce all the outputs at the same time. They win $G^n$ if they win each instance of $G$.
In this paper we show that for any game $G$ such that $\omega^*(G) = 1 - \eps < 1$, $\omega^*(G^n)$ decreases exponentially in $n$. First, for any game $G$ on the uniform distribution, we show that $\omega^*(G^n) = (1 - \eps^2)^{\Omega\left(\frac{n}{\log(|I||O|)} - |\log(\eps)|\right)}$, where $|I|$ and $|O|$ are the sizes of the input and output sets. From this result, we show that for any entangled game $G$, $\omega^*(G^n) = (1 - {\eps^2})^{\Omega(\frac{n}{Q^4 \log(Q \cdot|O|)} - |\log(\eps/Q)|)}$, where $p$ is the input distribution of $G$ and $Q = \max\left(\left\lceil \frac{1}{\min_{xy: p_{xy} \neq 0}\sqrt{p_{xy}}}\right\rceil, |I|\right)$.
To prove this parallel repetition theorem, we introduce the concept of \emph{Superposed Information Cost} for entangled games, which is inspired by the information cost used in communication complexity.
Abstract: Shared randomness can be a useful resource under many scenarios of distributed computing. The most common regime is broadcasting arbitrarily many uniformly random bits to all the parties. But what if the broadcasting channel is noisy, and as a result the parties can only have access to non-perfect shared randomness? How useful are various forms of shared randomness -- with various degrees of correlation -- to the parties as a resource? In this work, we formalise the above questions and study them in the setup of two-party communication complexity, in the model of simultaneous message passing (SMP).
We introduce a natural measure for the strength of the correlation provided by a bipartite distribution that we call collision complexity. We demonstrate that the collision complexity col_rho(n) of a bipartite distribution rho tightly characterises its ability to "emulate'' perfect shared randomness for the purpose of solving communication tasks of input length n:
1. The collision complexity of perfect shared randomness is 1.
2. For any bipartite distribution rho, any protocol that uses perfect shared randomness and has complexity CC_{perf}(n) can be simulated by a protocol that uses rho and has complexity tilde{O}(col_rho(n) CC_{perf}(n)).
3. We construct a partial function GIP_n whose communication complexity with shared randomness rho is tilde{Theta}(col_rho(n)) for any rho; in particular, with perfect randomness it is tilde{O}(1).
We also show that even the "noisiest'' shared randomness increases the power of SMP substantially: Somewhat surprisingly, the equality function can be solved efficiently with virtually any nontrivial shared randomness (without shared randomness the complexity is known to be Omega(sqrt n)).
Abstract: Given a formula in a temporal logic such as LTL or MTL, a fundamental problem is the complexity of evaluating the formula on a given finite word. For LTL, the complexity of this task was recently shown to be in NC. In this paper, we present an NC algorithm for MTL, a quantitative (or metric) extension of LTL, and give an NCC algorithm for UTL, the unary fragment of LTL. At the time of writing, MTL is the most expressive logic with an NC path-checking algorithm, and UTL is the most expressive fragment of LTL with a more efficient path-checking algorithm than for full LTL (subject to standard complexity-theoretic assumptions). We then establish a connection between LTL path checking and planar circuits, which we exploit to show that any further progress in determining the precise complexity of LTL path checking would immediately entail more efficient evaluation algorithms than are known for a certain class of planar circuits. The connection further implies that the complexity of LTL path checking depends on the Boolean connectives allowed: adding Boolean exclusive or yields a temporal logic with a P-complete path-checking problem.
Abstract: We carefully reexamine a construction from \cite{a} to show that the intersection non-emptiness problem for DFA's (deterministic finite automata) characterizes the complexity class $\NL$. In particular, if restricted to a binary work tape alphabet, then there exist constants $c_1$ and $c_2$ such that for every $k$, intersection non-emptiness for $k$ DFA's is solvable in $c_1 k \log(n)$ space, but is not solvable in $c_2 k \log(n)$ space. We optimize the construction to show that intersection non-emptiness is not solvable in $o(\frac{n}{\log(n)\log(\log(n))})$ space. Furthermore, if there exists a function $f(k) = o(k)$ such that for every $k$, intersection non-emptiness for $k$ DFA's is solvable in $n^{f(k)}$ time, then $\P$ $\neq$ $\NL$. If there does not exist a constant $c$ such that for every $k$, intersection non-emptiness for $k$ DFA's is solvable in $n^{c}$ time, then $\P$ does not contain any space complexity class larger than $\NL$.
Abstract: We precisely characterize the role of private randomness in the ability of Alice to send a message to Bob while minimizing the amount of information revealed to him. We give an example of a (randomized) message which can be transmitted while revealing only I bits of information using private randomness, but requires Alice to reveal I + log I - O(1) bits of information if only public coins are allowed. This gives the first example of a super-constant additive separation between these two models. Our example also shows that the one-round compression construction of Harsha et al. cannot be improved. Moreover, we show that our example is tight up to an additive O(1) term: we show that if, using private randomness, a message can be transmitted while revealing I bits of information, the transmission can be simulated without private coins using I + log I + O(1) bits of information. This improves over an earlier result by Brody et al.
Rafail Ostrovsky, Giuseppe Persiano and Ivan Visconti. On Input Indistinguishable Proof Systems
Abstract: In this paper, we study Input Indistinguishable Computation (IIC), an important security notion for two-party computation (2PC) proposed by Micali et al. in [FOCS 2006] and recently considered also by Garg et al. in [Eurocrypt 2012]. IIC aims at generalizing the notion of a Witness Indistinguishable (WI) Proof to any two-party functionality, and in its concurrent form, it aims at defeating MiM attacks. We will focus on the proof system functionality and compare IIC with two other notions of security found in the literature for proof systems: WI and Non-Malleability (NM). Specifically, we address the following two natural questions:
1) Since IIC is a generalization of WI from proof systems to general 2PC, are all WI proofs also IIC secure? 2) Are IIC proofs also NM proofs?
In this work we show, somewhat surprisingly, that both answers to the above questions are negative. Indeed, we show that there exists a WI proof system that is not IIC secure, therefore proving that the answer to the first question is negative. We then show that a large class of WI proof systems, including the one for Hamiltonicity of Blum, are concurrently secure in the IIC sense and this proves that the answer to the second question is negative too, since Blum's proofs are known to be malleable.
The consequence of our results is three-fold. 1) IIC is a generalization of a stronger notion of WI, therefore other relaxed notions of IIC could be obtained still satisfying the desired security flavor. Indeed, we will also show that the revisited notion of IIC (sIIC) proposed by Garg et al. guarantees that any WI proof of knowledge is sIIC secure. 2) For important functionalities, such as the proof system functionality, classical constructions like Blum's protocol are already concurrent IIC secure and therefore it is possible to obtain concurrent IIC security without resorting to general and inefficient (based on the use of expensive non-malleable subprotocols) constructions shown in previous work. 3) Concurrent IIC security should be carefully evaluated when used as a security guarantee to model real-world concurrent attacks to protocols, as our results show that concurrent IIC security (in contrast to standard simulation-based security [DDN91,BPS06], and to a different indistinguishability-based security notion [OPV08], the latter even in constant rounds) does not guarantee non-malleability of proof systems.
Abstract: The Thorp shuffle is a simple model for a random riffle shuffle that for many years has eluded good analysis. In the Thorp shuffle, one first cuts a deck of cards in half and then starts dropping the cards from the left or right hand as in an ordinary shuffle, so that at each step one chooses the left or right card with probability 1/2, drops it, and then drops the card from the opposite half. One continues inductively until all cards have been dropped. The question is how many times one has to repeat this process to randomly shuffle a deck of $n$ cards. Despite its rather simple description and wide interest in understanding its behavior, the Thorp shuffle has been very difficult to analyze, and only very recently Morris showed that the Thorp shuffle mixes in a polylogarithmic number of rounds.
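One round of the process just described, as a minimal sketch; the convention that positions 0..n/2-1 form the left half and n/2..n-1 the right half is our own indexing choice.

    import random

    def thorp_round(deck):
        # One Thorp shuffle round: pair card i of the left half with card i of
        # the right half and drop the pair in a uniformly random order.
        half = len(deck) // 2  # assumes an even-sized deck
        out = []
        for i in range(half):
            pair = [deck[i], deck[half + i]]
            random.shuffle(pair)  # left or right first, each with probability 1/2
            out.extend(pair)
        return out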
In our main result, we show that if the Thorp shuffle mixes sequences consisting of n-k distinct elements together with k identical elements (so-called k-partial n-permutations) with k = \Theta(n), then O(\log^2 n) rounds are sufficient to randomly mix the input elements. In other words, O(\log^2 n) Thorp shuffles with n input elements randomly permute any set of cn elements with any c<1, or, equivalently, the process is almost cn-wise independent. The key technical part of our proof is a novel analysis of the shuffling process that uses non-Markovian coupling. While non-Markovian coupling is known to be more powerful than Markovian coupling, our treatment is one of only a few examples where strictly non-Markovian coupling can be used to formally prove a strong mixing-time bound. Our non-Markovian coupling is used to reduce the problem to the analysis of a random process in networks (in particular, when n is a power of two this is a butterfly network), which we solve using combinatorial and probabilistic arguments.
Our result can be used to randomly permute any number of elements using Thorp shuffle: If the input deck has N cards, then add another set of 0.01 N "empty" cards and run O(log^2N) Thorp shuffles. Then, if we remove the empty cards, the obtained deck will have the original $N$ cards randomly permuted.
We also analyze a related shuffling process that we call the Perfect shuffle. We cut a deck of n cards into two halves, randomly permute each half, and then perform one step of the Thorp shuffle. Apart from being interesting on its own, our motivation to study this process is that a single Perfect shuffle is very similar to O(log n) Thorp shuffles, and thus understanding the Perfect shuffle can shed some light on the performance of the Thorp shuffle. We apply coupling to show that the Perfect shuffle mixes in O(log log n) steps, which we conjecture to be asymptotically tight.
Kunal Talwar and Udi Wieder. Balanced Allocations: A Simple Proof for the Heavily Loaded Case
Abstract: We give a new proof of the fact that the expected gap between the maximum load and the average load in the two-choice process is bounded by $(1+o(1))\log \log n$, irrespective of the number of balls thrown. The original proof of this surprising result, due to Berenbrink, Czumaj, Steger and Vocking (SICOMP 2006), uses tools from Markov chain theory and a sophisticated induction proof involving computer-aided calculations. We provide a significantly simpler and more elementary proof. The new technique allows us to generalize the result and derive new and often tight bounds for the case of weighted balls. The simplification comes at the cost of larger lower-order terms and a weaker tail bound for the probability of deviating from the expectation.
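The two-choice process is simple to state in code; the sketch below throws m (possibly weighted) balls into n bins and reports the gap studied above. The weight sampler argument is our own generalization hook, not the paper's notation.

    import random

    def two_choice_gap(n, m, weight=lambda: 1.0):
        # Each ball samples two bins uniformly and goes to the lesser-loaded one.
        loads = [0.0] * n
        total = 0.0
        for _ in range(m):
            i, j = random.randrange(n), random.randrange(n)
            w = weight()  # unit weights recover the classical two-choice process
            loads[min(i, j, key=lambda b: loads[b])] += w
            total += w
        return max(loads) - total / n  # gap between maximum and average load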
Anupam Gupta, Kunal Talwar and Udi Wieder. Changing Bases: Multistage Optimization for Matroids and Matchings
Abstract: This paper is motivated by the fact that many systems need to be maintained continually while the underlying costs change over time. The challenge then is to continually maintain near-optimal solutions to the underlying optimization problems, without creating too much churn in the solution itself. We model this as a multistage combinatorial optimization problem where the input is a sequence of cost functions (one for each time step); while we can change the solution from step to step, we incur an additional cost for every such change.
We first study the multistage matroid maintenance problem, where we need to maintain a base of a matroid in each time step under the changing cost functions and acquisition costs for adding new elements. The online version of this problem generalizes online paging, and is a well-structured case of metrical task systems. E.g., given a graph, we need to maintain a spanning tree $T_t$ at each step: we pay $c_t(T_t)$ for the cost of the tree at time $t$, and also $| T_t \setminus T_{t-1} |$ for the number of edges changed at this step. Our main result is a polynomial-time $O(\log m \log r)$-approximation to the online multistage matroid maintenance problem, where $m$ is the number of elements/edges and $r$ is the rank of the matroid. This improves on results of Buchbinder et al.~\cite{BuchbinderCNS12}, who addressed the \emph{fractional} version of this problem under uniform acquisition costs, and Buchbinder, Chen and Naor~\cite{BuchbinderCN14}, who studied the fractional version of a more general problem. We also give an $O(\log m)$ approximation for the offline version of the problem. These bounds hold when the acquisition costs are non-uniform, in which case both these results are the best possible unless P=NP.
We also study the perfect matching version of the problem, where we must maintain a perfect matching at each step under changing cost functions and costs for adding new elements. Surprisingly, the hardness drastically increases: for any constant $\eps>0$, there is no $O(n^{1-\eps})$-approximation to the multistage matching maintenance problem, even in the offline case.
Swastik Kopparty, Mrinal Kumar and Michael Saks. Efficient Indexing of Necklaces and Irreducible Polynomials over Finite Fields
Abstract: We study the problem of {\em indexing } necklaces, and give the first polynomial time algorithm for this problem. Specifically, we give a $\poly(n, \log |\Sigma|)$-time computable bijection between $\{1, \ldots, |\cal N|\}$ and the set $\cal N$ of all necklaces of length $n$ over a finite alphabet $\Sigma$.
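Computing the range size $|\cal N|$ of such a bijection does not touch the hard indexing part: the count is given by the classical necklace-counting formula, sketched below purely for context.

    from math import gcd

    def count_necklaces(n, q):
        # |N| = (1/n) * sum over divisors d of n of phi(d) * q^(n/d),
        # where phi is Euler's totient function (classical Burnside count).
        def phi(m):
            return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)
        return sum(phi(d) * q ** (n // d)
                   for d in range(1, n + 1) if n % d == 0) // n

    # Example: count_necklaces(2, 2) == 3, namely 00, 01 (= 10), and 11.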
Our main application is to give an explicit indexing of all irreducible polynomials of degree $n$ over the finite field $\F_q$ in time $\poly(n, \log q)$ (with $n \log q$ bits of advice). This has applications in pseudorandomness, and answers an open question of Alon, Goldreich, H{\aa}stad and Peralta~\cite{AGHP}.
Karl Wimmer, Yi Wu and Peng Zhang. Optimal query complexity for estimating the trace of a matrix
Abstract: Given an implicit matrix $A$ with oracle access $x^TA x$ for any $x\in \R^n$, we study the query complexity of randomized algorithms for estimating the \emph{trace} of the matrix. This problem has many applications in quantum physics, machine learning, and pattern matching. Two metrics are commonly used for evaluating the estimators: i) variance; ii) a high-probability multiplicative-approximation guarantee. All the known estimators are of the form $\frac{1}{k}\sum_{i=1}^k x_i^T A x_i$, where the $x_i\in \R^n$ are i.i.d. samples from some special distribution.
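A representative member of this class is Hutchinson's estimator with i.i.d. Rademacher query vectors; the sketch below is one instance of the form $\frac{1}{k}\sum_{i=1}^k x_i^T A x_i$, not the optimal estimator characterized in the paper.

    import random

    def hutchinson_trace(quadform, n, k):
        # quadform(x) returns the oracle answer x^T A x.
        # With Rademacher x (entries +-1), E[x^T A x] = tr(A), so the average is unbiased.
        total = 0.0
        for _ in range(k):
            x = [random.choice((-1.0, 1.0)) for _ in range(n)]
            total += quadform(x)
        return total / k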
Our main results are summarized as follows:
1. We give an \emph{exact} characterization of the minimum-variance unbiased estimator in the broad class of \emph{linear nonadaptive} estimators (which subsumes all the existing known estimators).
2. We also consider query complexity lower bounds for \emph{any} (possibly nonlinear and adaptive) estimator:
(a) We show that \emph{any} estimator requires $\Omega(1/\eps)$ queries to have a guarantee of variance at most $\eps$.
(b) We show that \emph{any} estimator requires $\Omega(\frac{1}{\eps^2}\log \frac{1}{\delta})$ queries to achieve a $(1\pm\eps)$-multiplicative approximation guarantee with probability at least $1 - \delta$.
Both lower bounds above are asymptotically tight.
As a corollary, we also resolve a conjecture in the seminal work of Avron and Toledo (Journal of the ACM 2011) regarding the sample complexity of the Gaussian Estimator.
Chinmay Hegde, Piotr Indyk and Ludwig Schmidt. Nearly Linear-Time Model-Based Compressive Sensing
Abstract: Compressive sensing is a method for recording a k-sparse signal x \in \R^n with (possibly noisy) linear measurements of the form y = Ax, where A \in \R^{m \times n} describes the measurement process. Seminal results in compressive sensing show that it is possible to recover the signal x from m = O(k log n / k) measurements and that this is tight. The model-based compressive sensing framework overcomes this lower bound and reduces the number of measurements further to m = O(k). This improvement is achieved by limiting the supports of x to a structured sparsity model, which is a subset of all \binom{n}{k} possible k-sparse supports. This approach has led to measurement-efficient recovery schemes for a variety of signal models, including tree-sparsity and block-sparsity.
While model-based compressive sensing succeeds in reducing the number of measurements, the framework entails a computationally expensive recovery process. In particular, two main barriers arise: (i) Existing recovery algorithms involve several projections into the structured sparsity model. For several sparsity models (such as tree-sparsity), the best known model-projection algorithms run in time \Omega(kn), which can be too slow for large k. (ii) Existing recovery algorithms involve several matrix-vector multiplications with the measurement matrix A. Unfortunately, the only known measurement matrices suitable for model-based compressive sensing require O(nk) time for a single multiplication, which can be (again) too slow for large k.
In this paper, we remove both aforementioned barriers for two popular sparsity models and reduce the complexity of recovery to nearly linear time. Our main algorithmic result concerns the tree-sparsity model, for which we solve the model-projection problem in O(n log n + k log^2 n) time. We also construct a measurement matrix for model-based compressive sensing with matrix-vector multiplication in O(n log n) time for k \leq n^{1/2 - \mu}, \mu > 0. As an added bonus, the same matrix construction can also be used to give a fast recovery scheme for the block-sparsity model.
Tomasz Krawczyk and Bartosz Walczak. Coloring Relatives of Interval Overlap Graphs Via On-Line Games
Abstract: The main goal of this paper is to formalize and explore a connection between chromatic properties of graphs defined by geometric representations and competitivity analysis of on-line algorithms. This connection became apparent after the recent construction of triangle-free geometric intersection graphs with arbitrarily large chromatic number due to Pawlik et al. We show that any on-line graph coloring problem gives rise to a class of game graphs, which in many cases have a natural representation by geometric objects. As a consequence, problems of estimating the chromatic number of graphs with geometric representations are reduced to finding on-line coloring algorithms that use few colors or proving that such algorithms do not exist.
We use this framework to derive upper and lower bounds on the maximum possible chromatic number in terms of the clique number in the following classes of graphs: rectangle overlap graphs, subtree overlap graphs and interval filament graphs. These graphs generalize interval overlap graphs (also known as circle graphs)---a well-studied class of graphs with chromatic number bounded by a function of the clique number. Our bounds are absolute for interval filament graphs and asymptotically of the form (log log n)^f(omega) for rectangle and subtree overlap graphs. In particular, we provide the first construction of geometric intersection graphs with bounded clique number and with chromatic number asymptotically greater than log log n. Moreover, with some additional assumptions on the geometric representation, the bounds obtained are tight.
Abstract: Strong bisimulation for labelled transition systems is one of the most fundamental equivalences in process algebra, and has been generalised to numerous classes of systems that exhibit richer transition behaviour. Nearly all of the ensuing notions are instances of the more general notion of coalgebraic bisimulation. Weak bisimulation, however, has so far been much less amenable to a coalgebraic treatment. Here we attempt to close this gap by giving a coalgebraic treatment of (parametrized) weak equivalences, including weak bisimulation. Our analysis requires that the functor defining the transition type of the system is based on a suitable order-enriched monad, which allows us to capture weak equivalences by least fixpoints of recursive equations. Our notion is in agreement with existing notions of weak bisimulations for labelled transition systems, probabilistic and weighted systems, and simple Segala systems.
Christian Wulff-Nilsen. Faster Separators for Shallow Minor-Free Graphs via Dynamic Approximate Distance Oracles
Abstract: Plotkin, Rao, and Smith (SODA'97) showed that any graph with m edges and n vertices that excludes K_h as a depth O(l log n)-minor has a separator of size O(n/l + l h^2 log n) and that such a separator can be found in O(mn/l) time. A time bound of O(m + n^{2+epsilon}/l) for any constant epsilon > 0 was later given (W., FOCS'11), which is an improvement for non-sparse graphs. We give three new algorithms. The first two have the same separator size (the second having a slightly larger dependency on h) and running times O(poly(h) l n^{1+epsilon}) and O(poly(h)(l^{1/2} n^{1+epsilon} + n^{2+epsilon}/l^{3/2})), respectively. The former is significantly faster than previous bounds for small h and l. Our third algorithm has running time O(poly(h) l^{1/2} n^{1+epsilon}). It finds a separator of size O(n/l) + O~(poly(h) l n^{1/2}), which is no worse than previous bounds when h is fixed and l = O~(n^{1/4}). A main tool in obtaining our results is a decremental approximate distance oracle of Roditty and Zwick.
Ittai Abraham and Shiri Chechik. Distance Labels with Optimal Local Stretch
Abstract: This paper studies distance oracles and distance labeling schemes with local stretch. Informally, we would like to provide stretch guarantees for the $r$ closest nodes of each node.
A distance oracle has local stretch $k$ for $r$ neighborhoods if for any $u,v$ such that $v=M(u,r')$ and $r' \le r$: $ \dist(u,v)/k \le \tilde{\dist}(u,v) \le \dist(u,v)$, where $M(u,r')$ is the $r'$-th closest node to $u$ and $\tilde{\dist}(u,v)$ is the estimated distance returned by the distance oracle.
For parameters $r>1$, $k>1$, we obtain labels of size $O(r^{1/k}\ln^{1-1/k}{r} + \ln{k})$, with local stretch of $2k-1$ for $r$ neighborhoods and $O(k)$ query time, significantly improving the query time and stretch constant of Abraham, Bartal and Neiman [SODA '09].
Moreover, our stretch guarantee of $2k-1$ matches the best known (and conjectured optimal) guarantees of standard distance oracles.
Abstract: We provide an algorithm for computing a planar morph between any two planar straight-line drawings of any n-vertex plane graph in O(n) morphing steps, thus improving upon the previously best known O(n^2) upper bound. Further, we prove that our algorithm is optimal, that is, we show that there exist two planar straight-line drawings \Gamma_s and \Gamma_t of an n-vertex plane graph G such that any planar morph between \Gamma_s and \Gamma_t requires \Omega(n) morphing steps.
Adrian Kosowski and Dominik Pajak. Does Adding More Agents Make a Difference? A Case Study of Cover Time for the Rotor-Router
Abstract: We consider the problem of graph exploration by a team of $k$ agents, which follow the so-called rotor router mechanism. Agents move in synchronous rounds, and each node successively propagates agents which visit it along its outgoing arcs in round-robin fashion. It has recently been established by Dereniowski et al.\ (STACS 2014) that the rotor-router cover time of a graph $G$, i.e., the number of steps required by the team of agents to visit all of the nodes of $G$, satisfies a lower bound of $\Omega(mD/k)$ and an upper bound of $O(mD/\log k)$ for any graph with $m$ edges and diameter $D$. In this paper, we consider the question of how the cover time of the rotor-router depends on $k$ for many important graph classes. We determine the precise asymptotic value of the rotor-router cover time for all values of $k$ for degree-restricted expanders, random graphs, and constant-dimensional tori. For hypercubes, we also resolve the question precisely, except for values of $k$ much larger than $n$. Our results can be compared to those obtained by Els\"asser and Sauerwald (ICALP 2009) in an analogous study of the cover time of $k$ independent parallel random walks in a graph; for the rotor-router, we obtain tight bounds in a slightly broader spectrum of cases. Our proofs take advantage of a relation which we develop, linking the cover time of the rotor-router to the mixing time of the random walk and the local divergence of a discrete diffusion process on the considered graph.
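The rotor-router mechanism itself is deterministic and easy to simulate; below is a minimal sketch, assuming a strongly connected digraph given as port-ordered adjacency lists (so the while loop terminates).

    def rotor_router_cover_time(adj, starts):
        # adj: dict v -> list of out-neighbours of v in a fixed port order.
        # Each node sends successive visiting agents along successive arcs.
        pointer = {v: 0 for v in adj}
        agents = list(starts)
        visited = set(agents)
        steps = 0
        while len(visited) < len(adj):
            steps += 1
            moved = []
            for v in agents:  # agents sharing a node advance its pointer in turn
                u = adj[v][pointer[v]]
                pointer[v] = (pointer[v] + 1) % len(adj[v])
                moved.append(u)
            agents = moved
            visited.update(agents)
        return steps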
Madur Tulsiani, John Wright and Yuan Zhou. Optimal strong parallel repetition for projection games on low threshold rank graphs
Abstract: Given a two-player one-round game $G$ with value $\val(G) = (1-\eta)$, how quickly does the value decay under parallel repetition? If $G$ is a projection game, then it is known that we can guarantee $\val(G^{\otimes n}) \leq (1-\eta^2)^{\Omega(n)}$, and that this is optimal. An important question is under what conditions we can guarantee that \emph{strong} parallel repetition holds, i.e. $\val(G^{\otimes n}) \leq (1-\eta)^{\Omega(n)}$?
In this work, we show a strong parallel repetition theorem for the case when $G$'s constraint graph has low threshold rank. In particular, for any $k \geq 2$, if $\sigma_k$ is the $k$-th largest singular value of $G$'s constraint graph, then we show that \begin{equation*} \val(G^{\otimes n}) \leq \left(1 - \frac{\sqrt{1-\sigma_k^2}}{k}\cdot \eta\right)^{\Omega(n)}. \end{equation*} When $k$ is a constant and $\sigma_k$ is bounded away from~$1$, this decays like $(1-\eta)^{\Omega(n)}$. In addition, we show that this upper bound exactly matches the value of the Odd-Cycle Game under parallel repetition. As a result, our parallel repetition theorem is tight.
This improves upon and generalizes the work of~\cite{RR12}, who showed a strong parallel repetition theorem for the case when $G$'s constraint graph is an expander.
Petr Jancar. Bisimulation Equivalence and Semantic Finiteness of First-Order Grammars
Abstract: One contribution is a decidability proof for bisimulation equivalence in labelled transition systems generated by first-order grammars, i.e., by finite sets of labelled rules that rewrite roots of first-order terms; this is a generalization of DPDA (deterministic pushdown automata) equivalence. The presented result is equivalent to the result achieved by S\'enizergues (1998, 2005) in the framework of equational graphs, or of PDA with restricted epsilon-steps, but the framework of classical first-order terms turns out to be particularly useful, enabling a short, generally understandable proof.
Another contribution uses the above decidability result to show the decidability of semantic finiteness, w.r.t. bisimilarity, of real-time first-order grammars, corresponding to pushdown automata with no epsilon-steps; this problem has been open so far.
Damiano Mazza. Non-Uniform Polytime Computation in the Infinitary Affine Lambda-Calculus
Abstract: We give an implicit, functional characterization of the class of non-uniform polynomial time languages, based on an infinitary affine lambda-calculus and on previously defined bounded-complexity subsystems of linear (or affine) logic. The fact that the characterization is implicit means that the complexity is guaranteed by structural properties of programs rather than explicit resource bounds. As a corollary, we obtain a proof of the (already known) P-completeness of the normalization problem for the affine lambda-calculus which mimics in an interesting way the proof of the Cook-Levin theorem. This suggests that the relationship between affine and usual lambda-calculus is deeply similar to that between Boolean circuits and Turing machines.
Colin Cooper, Robert Elsässer and Tomasz Radzik. The power of two choices in distributed voting
Abstract: Distributed voting is a fundamental topic in distributed computing. In the standard model of pull voting, in each step every vertex chooses a neighbour uniformly at random and adopts the opinion of that neighbour. The voting is said to be completed when all vertices hold the same opinion. On many graph classes, including regular graphs, and irrespective of the expansion properties, pull voting requires $\Omega(n)$ expected time steps to complete, even if initially there are only two distinct opinions and the minority opinion is sufficiently large.
In this paper we consider a related process which we call two-sample voting. In this process every vertex chooses two random neighbors in each step. If the opinions of these neighbors coincide, then the vertex revises its opinion according to the chosen sample. Otherwise, it keeps its own opinion. We consider the performance of this process in the case where two different opinions reside on the vertices of some (arbitrary) sets $A$ and $B$, where $|A|+|B|=n$ is the number of vertices of the graph.
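One synchronous step of two-sample voting, as a minimal sketch; sampling the two neighbours independently with replacement is one possible reading of the definition above.

    import random

    def two_sample_voting_step(adj, opinion):
        # Every vertex samples two random neighbours and adopts their opinion
        # only if the two samples agree; otherwise it keeps its own.
        new = {}
        for v in adj:
            a, b = random.choice(adj[v]), random.choice(adj[v])
            new[v] = opinion[a] if opinion[a] == opinion[b] else opinion[v]
        return new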
Assume that the initial imbalance between the sets is $|A| -|B| \geq \nu n$. We show that for some constant $K$, two-sample voting in a random $d$-regular graph with initial imbalance $\nu_0 = (|A| -|B|)/n \ge K\sqrt{(1/d) + (d/n)}$ completes, with high probability, in $O(\log n)$ steps, and the initial majority opinion wins. Similarly, for any regular graph with second largest eigenvalue of the transition matrix $\lambda$, if $\nu_0 \ge K\lambda$, then with high probability voting finishes in $O(\log n)$ steps and the initial majority opinion wins. In the graphs we consider, standard pull voting requires $\Omega(n)$ steps, and the minority can still win with probability $|B|/n$. Our results hold even if an adversary is able to rearrange the opinions in each step and has complete knowledge of the graph structure.