Theory of Computing Blog Aggregator

UCSD has several postdoctoral fellowships focused on data science, spanning algorithms, machine learning, and complexity aspects of data science. Data Science Fellows are expected to teach one data-science-related class per year and will be given a research budget and a faculty-type office. Exceptional, self-motivated candidates with outstanding research records are encouraged to apply.


by shacharlovett at November 16, 2018 09:21 PM UTC

[Guest post by Piotr Sankowski –Boaz]

Call for Nominations – 4th Highlights of Algorithms conference (HALG 2019)

Copenhagen, June 14-16, 2019

The HALG 2019 conference seeks high-quality nominations for invited talks that will highlight recent advances in algorithmic research. As in previous years, there are two categories of invited talks:

A. survey (60 minutes): a survey of an algorithmic topic that has seen exciting developments in the last couple of years.

B. paper (30 minutes): a significant algorithmic result appearing in a paper in 2018 or later.

To nominate, please email the following information:

1. Basic details: speaker name + topic (for a survey talk), or paper title, authors, conference/arXiv + preferred speaker (for a paper talk).

2. Brief justification: Focus on the benefits to the audience, e.g., quality of results, importance/relevance of topic, clarity of talk, speaker’s presentation skills.

All nominations will be reviewed by the Program Committee (PC) to select speakers that will be invited to the conference.

Nominations deadline: December 9, 2018 (for full consideration).

Please keep in mind that the conference does not provide financial support for the speakers.

Best regards,
Piotr Sankowski
HALG 2019 PC Chair

by Boaz Barak at November 16, 2018 02:39 PM UTC


Authors: Manos N. Kamarianakis
Download: PDF
Abstract: In this paper we study one of the fundamental predicates required for the construction of the 3D Apollonius diagram (also known as the 3D Additively Weighted Voronoi diagram), namely the EdgeConflict predicate: given five sites $S_i, S_j,S_k,S_l,S_m$ that define an edge $e_{ijklm}$ in the 3D Apollonius diagram, and a sixth query site $S_q$, the predicate determines the portion of $e_{ijklm}$ that will disappear in the Apollonius diagram of the six sites due to the insertion of $S_q$.

Our focus is on the algorithmic analysis of the predicate with the aim of minimizing its algebraic degree. We decompose the main predicate into three sub-predicates, which are then evaluated with the aid of four additional primitive operations. We show that the maximum algebraic degree required to answer any of the sub-predicates and primitives, and thus the main predicate itself, is 10.

Among the tools we use is the 3D inversion transformation. Within the scope of this paper and due to space limitations, only non-degenerate configurations are considered, i.e., different Voronoi vertices are distinct and the predicates never return a degenerate answer. Most of our analysis is carried out in the inverted space, where our geometric observations and analysis are captured in algebraic terms.
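The 3D inversion transformation referred to above is the standard map $x \mapsto c + (x-c)/\lVert x-c\rVert^2$ about a unit sphere centered at $c$; the paper's specific use of it is more involved, but a minimal sketch of the map itself is:

```python
def invert(p, center=(0.0, 0.0, 0.0)):
    """Invert point p about a unit sphere centered at `center`:
    x maps to center + (x - center) / |x - center|^2."""
    d = [pi - ci for pi, ci in zip(p, center)]
    norm2 = sum(x * x for x in d)  # squared distance from the center
    return tuple(ci + x / norm2 for ci, x in zip(center, d))
```

The map is an involution (applying it twice returns the original point) and fixes the unit sphere, which is what makes it useful for turning sphere predicates into plane predicates.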

at November 16, 2018 02:17 AM UTC

Authors: Tanmay Inamdar, Shreyas Pai, Sriram V. Pemmaraju
Download: PDF
Abstract: This paper presents fast, distributed, $O(1)$-approximation algorithms for metric facility location problems with outliers in the Congested Clique model, Massively Parallel Computation (MPC) model, and in the $k$-machine model. The paper considers Robust Facility Location and Facility Location with Penalties, two versions of the facility location problem with outliers proposed by Charikar et al. (SODA 2001). The paper also considers two alternatives for specifying the input: the input metric can be provided explicitly (as an $n \times n$ matrix distributed among the machines) or implicitly as the shortest path metric of a given edge-weighted graph. The results in the paper are:

- Implicit metric: For both problems, $O(1)$-approximation algorithms running in $O(\mbox{poly}(\log n))$ rounds in the Congested Clique and the MPC model and $O(1)$-approximation algorithms running in $\tilde{O}(n/k)$ rounds in the $k$-machine model.

- Explicit metric: For both problems, $O(1)$-approximation algorithms running in $O(\log\log\log n)$ rounds in the Congested Clique and the MPC model and $O(1)$-approximation algorithms running in $\tilde{O}(n/k)$ rounds in the $k$-machine model.

Our main contribution is to show the existence of Mettu-Plaxton-style $O(1)$-approximation algorithms for both facility-location-with-outliers problems. As shown in our previous work (Berns et al., ICALP 2012; Bandyapadhyay et al., ICDCN 2018), Mettu-Plaxton-style algorithms are more easily amenable to being implemented efficiently in distributed and large-scale models of computation.

at November 16, 2018 02:12 AM UTC

Authors: Manfred Scheucher, Hendrik Schrezenmaier, Raphael Steiner
Download: PDF
Abstract: We investigate which planar point sets allow simultaneous straight-line embeddings of all planar graphs on a fixed number of vertices. We first show that $(1.293-o(1))n$ points are required to find a straight-line drawing of each $n$-vertex planar graph (vertices are drawn as the given points); this improves the previous best constant $1.235$ by Kurowski (2004).

Our second main result is based on exhaustive computer search: We show that no set of 11 points exists on which all planar 11-vertex graphs can be simultaneously drawn plane straight-line. This strengthens the result by Cardinal, Hoffmann, and Kusters (2015) that all planar graphs on $n \le 10$ vertices can be simultaneously drawn on particular `universal' sets of $n$ points, while there are no universal sets for $n \ge 15$. Moreover, we provide a set of 23 planar 11-vertex graphs which cannot be simultaneously drawn on any set of 11 points. This, in fact, is another step towards a (negative) answer to the question of whether every two planar graphs can be drawn simultaneously -- a question raised by Brass, Cenek, Duncan, Efrat, Erten, Ismailescu, Kobourov, Lubiw, and Mitchell (2007).

at November 16, 2018 02:17 AM UTC

Authors: Sepehr Assadi, Eric Balkanski, Renato Paes Leme
Download: PDF
Abstract: We study a twist on the classic secretary problem, which we term the secretary ranking problem: elements from an ordered set arrive in random order and instead of picking the maximum element, the algorithm is asked to assign a rank, or position, to each of the elements. The rank assigned is irrevocable and is given knowing only the pairwise comparisons with elements previously arrived. The goal is to minimize the distance of the rank produced to the true rank of the elements measured by the Kendall-Tau distance, which corresponds to the number of pairs that are inverted with respect to the true order.

Our main result is a matching upper and lower bound for the secretary ranking problem. We present an algorithm that ranks $n$ elements with only $O(n^{3/2})$ inversions in expectation, and show that any algorithm necessarily suffers $\Omega(n^{3/2})$ inversions when there are $n$ available positions. In terms of techniques, the analysis of our algorithm draws connections to linear probing in the hashing literature, while our lower bound result relies on a general anti-concentration bound for a generic balls and bins sampling process. We also consider the case where the number of positions $m$ can be larger than the number of secretaries $n$ and provide an improved bound by showing a connection of this problem with random binary trees.
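The Kendall-Tau objective above simply counts pairs ordered differently by the produced ranking and the true one. A minimal quadratic-time sketch of the metric (not the authors' online algorithm, which must commit to ranks irrevocably):

```python
def kendall_tau(rank_a, rank_b):
    """Count pairs ordered differently by two rankings.

    rank_a[i] and rank_b[i] are the positions assigned to element i.
    """
    n = len(rank_a)
    inversions = 0
    for i in range(n):
        for j in range(i + 1, n):
            # A pair is inverted if the two rankings disagree on its order.
            if (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j]) < 0:
                inversions += 1
    return inversions
```

The paper's $O(n^{3/2})$ bound is on the expected value of exactly this count against the true order.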

at November 16, 2018 02:05 AM UTC

Authors: Sébastien Bubeck, Yin Tat Lee, Eric Price, Ilya Razenshteyn
Download: PDF
Abstract: In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we argued that adversarial examples in machine learning might be due to an inherent computational hardness of the problem. More precisely, we constructed a binary classification task for which (i) a robust classifier exists; yet no non-trivial accuracy can be obtained with an efficient algorithm in (ii) the statistical query model. In the present paper we significantly strengthen both (i) and (ii): we now construct a task which admits (i') a maximally robust classifier (that is, it can tolerate perturbations of size comparable to the size of the examples themselves); and moreover we prove computational hardness of learning this task under (ii') a standard cryptographic assumption.

at November 16, 2018 02:04 AM UTC

Authors: Tomoyuki Yamakami
Download: PDF
Abstract: The linear space hypothesis is a practical working hypothesis, which originally states the insolvability of a restricted 2CNF Boolean formula satisfiability problem parameterized by the number of Boolean variables. From this hypothesis, it follows that the degree-3 directed graph connectivity problem (3DSTCON) parameterized by the number of vertices in a given graph cannot belong to PsubLIN, composed of decision problems computable by polynomial-time, sub-linear-space deterministic Turing machines. This hypothesis immediately implies L$\neq$NL and it was used as a solid foundation to obtain new lower bounds on the computational complexity of various NL search and NL optimization problems. The state complexity of transformation refers to the cost of converting one type of finite automata to another type, where the cost is measured in terms of the increase of the number of inner states of the converted automata from that of the original automata. We relate the linear space hypothesis to the state complexity of transforming restricted 2-way nondeterministic finite automata to computationally equivalent 2-way alternating finite automata having narrow computation graphs. For this purpose, we present state complexity characterizations of 3DSTCON and PsubLIN. We further characterize a non-uniform version of the linear space hypothesis in terms of the state complexity of transformation.

at November 16, 2018 02:00 AM UTC

Authors: Bartłomiej Dudek, Paweł Gawrychowski
Download: PDF
Abstract: The quartet distance is a measure of similarity used to compare two unrooted phylogenetic trees on the same set of $n$ leaves, defined as the number of subsets of four leaves related by a different topology in both trees. After a series of previous results, Brodal et al. [SODA 2013] presented an algorithm that computes this number in $\mathcal{O}(nd\log n)$ time, where $d$ is the maximum degree of a node. Our main contribution is a two-way reduction establishing that the complexity of computing the quartet distance between two trees on $n$ leaves is the same, up to polylogarithmic factors, as the complexity of counting 4-cycles in an undirected simple graph with $m$ edges. The latter problem has been extensively studied, and the fastest known algorithm by Vassilevska Williams [SODA 2015] works in $\mathcal{O}(m^{1.48})$ time. In fact, even for the seemingly simpler problem of detecting a 4-cycle, the best known algorithm works in $\mathcal{O}(m^{4/3})$ time, and a conjecture of Yuster and Zwick implies that this might be optimal. In particular, an almost-linear time for computing the quartet distance would imply a surprisingly efficient algorithm for counting 4-cycles. In the other direction, by plugging in the state-of-the-art algorithms for counting 4-cycles, our reduction allows us to significantly decrease the complexity of computing the quartet distance. For trees with unbounded degrees we obtain an $\mathcal{O}(n^{1.48})$ time algorithm, which is a substantial improvement on the previous bound of $\mathcal{O}(n^{2}\log n)$. For trees with degrees bounded by $d$, by analysing the reduction more carefully, we are able to obtain an $\mathcal{\tilde O}(nd^{0.77})$ time algorithm, which is again a nontrivial improvement on the previous bound of $\mathcal{O}(nd\log n)$.
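The 4-cycle counting problem that the reduction targets has a simple baseline: every 4-cycle is determined by a diagonal pair of vertices plus two of their common neighbors. A naive sketch (nowhere near the $\mathcal{O}(m^{1.48})$ algorithm cited above, but it pins down the quantity being computed):

```python
from itertools import combinations

def count_4_cycles(adj):
    """Count 4-cycles in a simple undirected graph.

    adj maps each vertex to the set of its neighbors. Each 4-cycle
    u-v-w-x is seen once per diagonal pair ({u,w} and {v,x}), so the
    pairwise count is halved at the end.
    """
    total = 0
    for u, w in combinations(adj, 2):
        c = len(adj[u] & adj[w])   # common neighbors of the pair
        total += c * (c - 1) // 2  # choices for the other two cycle vertices
    return total // 2
```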

at November 16, 2018 02:06 AM UTC

Authors: Sang Won Bae, Arpita Baral, Priya Ranjan Sinha Mahapatra
Download: PDF
Abstract: An annulus is, informally, a ring-shaped region, often described by two concentric circles. The maximum-width empty annulus problem asks to find an annulus of a certain shape with the maximum possible width that avoids a given set of $n$ points in the plane. This problem can also be interpreted as the problem of finding an optimal location of a ring-shaped obnoxious facility among the input points. In this paper, we study square and rectangular variants of the maximum-width empty annulus problem and present the first nontrivial algorithms. Specifically, our algorithms run in $O(n^3)$ and $O(n^2 \log n)$ time for computing a maximum-width empty axis-parallel square and rectangular annulus, respectively. Both algorithms use only $O(n)$ space.

at November 16, 2018 02:17 AM UTC

Authors: Roman Snytsar
Download: PDF
Abstract: Many modern sequence alignment tools implement fast string matching using the space efficient data structure called FM-index. The succinct nature of this data structure presents unique challenges for the algorithm designers. In this paper, we explore the opportunities for parallelization of the exact and inexact matches and present an efficient SIMD solution for the Occ portion of the algorithm. Our implementation computes all eight Occ values required for the inexact match algorithm step in a single pass. We showcase the algorithm performance in a multi-core genome aligner and discuss effects of the memory prefetch.
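The Occ function at the heart of FM-index search counts occurrences of a character in a prefix of the Burrows-Wheeler transform. A plain-table sketch (real FM-indexes, and the paper's SIMD version, store this succinctly with sampled checkpoints rather than a full table):

```python
def build_occ(bwt, alphabet="ACGT"):
    """Prefix-count (Occ) table over the BWT.

    occ[c][i] is the number of occurrences of character c in bwt[:i];
    backward search uses these counts to narrow the suffix-array range.
    """
    occ = {c: [0] * (len(bwt) + 1) for c in alphabet}
    for i, ch in enumerate(bwt):
        for c in alphabet:
            occ[c][i + 1] = occ[c][i] + (ch == c)
    return occ
```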

at November 16, 2018 02:07 AM UTC

Authors: José R. Correa, Paul Dütting, Felix Fischer, Kevin Schewior
Download: PDF
Abstract: A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: Given a sequence of random variables $X_1,\dots,X_n$ drawn independently from a distribution $F$, the goal is to choose a stopping time $\tau$ so as to maximize $\alpha$ such that for all distributions $F$ we have $\mathbb{E}[X_\tau] \geq \alpha \cdot \mathbb{E}[\max_tX_t]$. What makes this problem challenging is that the decision whether $\tau=t$ may only depend on the values of the random variables $X_1,\dots,X_t$ and on the distribution $F$. For quite some time the best known bound for the problem was $\alpha\geq1-1/e\approx0.632$ [Hill and Kertz, 1982]. Only recently this bound was improved by Abolhassani et al. [2017], and a tight bound of $\alpha\approx0.745$ was obtained by Correa et al. [2017].

The case where $F$ is unknown, such that the decision whether $\tau=t$ may depend only on the values of the first $t$ random variables but not on $F$, is equally well motivated (e.g., [Azar et al., 2014]) but has received much less attention. A straightforward guarantee for this case of $\alpha\geq1/e\approx0.368$ can be derived from the solution to the secretary problem. Our main result is that this bound is tight. Motivated by this impossibility result we investigate the case where the stopping time may additionally depend on a limited number of samples from $F$. An extension of our main result shows that even with $o(n)$ samples $\alpha\leq 1/e$, so that the interesting case is the one with $\Omega(n)$ samples. Here we show that $n$ samples allow for a significant improvement over the secretary problem, while $O(n^2)$ samples are equivalent to knowledge of the distribution: specifically, with $n$ samples $\alpha\geq1-1/e\approx0.632$ and $\alpha\leq\ln(2)\approx0.693$, and with $O(n^2)$ samples $\alpha\geq0.745-\epsilon$ for any $\epsilon>0$.
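The $1/e$ baseline from the secretary problem mentioned above is easy to simulate. A sketch of the classic stopping rule, run on i.i.d. uniform values (illustrative only; the paper's contribution is the matching impossibility result and the sample-based regime, not this rule):

```python
import math
import random

def secretary_rule(values):
    """Classic 1/e rule: observe the first n/e values, then stop at
    the first value beating everything seen so far."""
    n = len(values)
    cutoff = max(1, int(n / math.e))
    threshold = max(values[:cutoff])
    for x in values[cutoff:]:
        if x > threshold:
            return x
    return values[-1]  # never beaten: forced to take the last value

# Monte Carlo estimate of E[X_tau] / E[max_t X_t] for i.i.d. uniforms.
random.seed(0)
n, trials = 50, 2000
picked = best = 0.0
for _ in range(trials):
    vals = [random.random() for _ in range(n)]
    picked += secretary_rule(vals)
    best += max(vals)
ratio = picked / best
```

For uniform values the empirical ratio comfortably exceeds the worst-case $1/e$ guarantee; the lower bound in the abstract says no distribution-unaware rule can beat $1/e$ on all distributions.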

at November 16, 2018 02:06 AM UTC

Authors: Chun Jiang Zhu, Tan Zhu, Kam-Yiu Lam, Song Han, Jinbo Bi
Download: PDF
Abstract: We consider the problem of clustering graph nodes over large-scale dynamic graphs, such as citation networks, images and web networks, when graph updates such as node/edge insertions/deletions are observed distributively. We propose communication-efficient algorithms for two well-established communication models, namely the message-passing and blackboard models. Given a graph with $n$ nodes that is observed at $s$ remote sites over time $[1,t]$, the two proposed algorithms have communication costs $\tilde{O}(ns)$ and $\tilde{O}(n+s)$ ($\tilde{O}$ hides a polylogarithmic factor), almost matching their lower bounds, $\Omega(ns)$ and $\Omega(n+s)$, respectively, in the message-passing and blackboard models. More importantly, we prove that at each time point in $[1,t]$ our algorithms generate clustering quality nearly as good as that of centralizing all updates up to that time and then applying a standard centralized clustering algorithm. We conducted extensive experiments on both synthetic and real-life datasets, which confirmed the communication efficiency of our approach over baseline algorithms while achieving comparable clustering results.

at November 16, 2018 12:00 AM UTC

Authors: Nicole Immorlica, Jieming Mao, Aleksandrs Slivkins, Zhiwei Steven Wu
Download: PDF
Abstract: In a social learning setting, there is a set of actions, each of which has a payoff that depends on a hidden state of the world. A sequence of agents each chooses an action with the goal of maximizing payoff given estimates of the state of the world. A disclosure policy tries to coordinate the choices of the agents by sending messages about the history of past actions. The goal of the algorithm is to minimize the regret of the action sequence.

In this paper, we study a particular class of disclosure policies that use messages, called unbiased subhistories, consisting of the actions and rewards from a subsequence of past agents, where the subsequence is chosen ahead of time. One trivial message of this form contains the full history; a disclosure policy that chooses to use such messages risks inducing herding behavior among the agents and thus has regret linear in the number of rounds. Our main result is a disclosure policy using unbiased subhistories that obtains regret $\tilde{O}(\sqrt{T})$. We also exhibit simpler policies with higher, but still sublinear, regret. These policies can be interpreted as dividing a sublinear number of agents into constant-sized focus groups, whose histories are then fed to future agents.

at November 16, 2018 02:10 AM UTC

Authors: Alexandru Paler, Austin Fowler, Robert Wille
Download: PDF
Abstract: Large scale quantum computing is highly anticipated, and quantum circuit design automation needs to keep up with the transition from small scale to large scale problems. Methods to support fast quantum circuit manipulations (e.g.~gate replacement, wire reordering, etc.) or specific circuit analysis operations have not been considered important and have often been implemented in a naive manner thus far. For example, quantum circuits are usually represented in terms of one-dimensional gate lists or as directed acyclic graphs. Although implementations for quantum circuit manipulations are often only of polynomial complexity, the sheer number of possibilities to consider with increasing scales of quantum computations makes these representations highly inefficient -- constituting a serious bottleneck. At the same time, quantum circuits have structural characteristics, which allow for more specific and faster approaches. This work utilises these characteristics by introducing a dedicated representation for large quantum circuits, namely wire label reference diagrams. We apply the representation to a set of very common circuit transformations, and develop corresponding solutions which achieve orders of magnitude performance improvements for circuits which include up to 80 000 qubits and 200 000 gates. The implementation of the proposed method is available online.

at November 16, 2018 02:13 AM UTC

Authors: Kai Puolamäki, Andreas Henelius, Antti Ukkonen
Download: PDF
Abstract: In many domains it is necessary to generate surrogate networks, e.g., for hypothesis testing of different properties of a network. Furthermore, generating surrogate networks typically requires that different properties of the network are preserved, e.g., edges may not be added or deleted and the edge weights may be restricted to certain intervals. In this paper we introduce a novel efficient property-preserving Markov Chain Monte Carlo method termed CycleSampler for generating surrogate networks in which (i) edge weights are constrained to an interval and node weights are preserved exactly, and (ii) edge and node weights are both constrained to intervals. These two types of constraints cover a wide variety of practical use-cases. The method is applicable to both undirected and directed graphs. We empirically demonstrate the efficiency of the CycleSampler method on real-world datasets. We provide an implementation of CycleSampler in R, with parts implemented in C.

at November 16, 2018 02:07 AM UTC

“Highlights Beyond EC” Session at EC 2019: Call for Nominations

Committee: Mohammad Akbarpour, Moshe Babaioff, Shengwu Li, and Ariel Procaccia

Following a new tradition started last year, the 2019 ACM Conference on Economics and Computation (EC’19) will host a special session highlighting some of the best work in economics and computation that appears in conferences and journals other than EC. The intention of this session is to expose EC attendees to related work just beyond the boundary of their current awareness.

We seek nominations for papers in Economics and Computation that have made breakthrough advances, opened up new questions or areas, made unexpected connections, or had significant impact on practice or other sciences. Examples of conferences and journals that publish papers relevant for the special session include STOC/FOCS/SODA/ITCS, AAAI/IJCAI/AAMAS, NIPS/ICML/COLT, WWW/KDD, AER/Econometrica/JPE/QJE/RESTUD/TE/AEJ Micro/JET/GEB, and Math of OR/Management Science/Operations Research. Anyone is welcome to contact us, but we especially invite members of PCs or editorial boards in various venues to send us suggestions.
Nominations should be emailed to, and should include:

  • Paper title and authors.
  • Publication venue or online working version. Preference will be given to papers that have appeared in a related conference or journal within the past two years, or have a working version circulated within the past two years.
  • Short (2-3 paragraph) explanation of the paper and its importance.
  • (Optional) Names of 1-3 knowledgeable experts on the area of the paper.

Note that at least one of the authors of a selected paper will be required to present their paper at EC 2019 and so should be available to travel to the conference, which will take place in Phoenix, AZ on June 25-27, 2019.

To ensure maximum consideration, please send all nominations by December 15, 2018.

by felixbrandt at November 15, 2018 07:53 PM UTC

Authors: Neil Olver, Frans Schalekamp, Suzanne van der Ster, Leen Stougie, Anke van Zuylen
Download: PDF
Abstract: We give a 2-approximation algorithm for the Maximum Agreement Forest problem on two rooted binary trees. This NP-hard problem has been studied extensively in the past two decades, since it can be used to compute the rooted Subtree Prune-and-Regraft (rSPR) distance between two phylogenetic trees. Our algorithm is combinatorial and its running time is quadratic in the input size. To prove the approximation guarantee, we construct a feasible dual solution for a novel linear programming formulation. In addition, we show this linear program is stronger than previously known formulations, and we give a compact formulation, showing that it can be solved in polynomial time.

at November 15, 2018 12:00 AM UTC

Authors: Xu Han, Laurent Albera, Amar Kachenoura, Huazhong Shu, Lotfi Senhadji
Download: PDF
Abstract: In order to compute the best low-rank tensor approximation using the Multilinear Tensor Decomposition (MTD) model, it is essential to estimate the rank of the underlying multilinear tensor from the noisy observation tensor. In this paper, we propose a Robust MTD (R-MTD) method, which jointly estimates the multilinear rank and the loading matrices. Based on the low-rank property and an over-estimation of the core tensor, this joint estimation problem is solved by promoting (group) sparsity of the over-estimated core tensor. Group sparsity is promoted using mixed-norms. Then we establish a link between the mixed-norms and the nuclear norm, showing that mixed-norms are better candidates for a convex envelope of the rank. After several iterations of the Alternating Direction Method of Multipliers (ADMM), the Minimum Description Length (MDL) criterion computed from the eigenvalues of the unfolding matrices of the estimated core tensor is minimized in order to estimate the multilinear rank. The latter is then used to estimate more accurately the loading matrices. We further develop another R-MTD method, called R-OMTD, by imposing an orthonormality constraint on each loading matrix in order to decrease the computation complexity. A series of simulated noisy tensor and real-world data are used to show the effectiveness of the proposed methods compared with state-of-the-art methods.

at November 15, 2018 12:00 AM UTC

Authors: Sandip Das, Swami Sarvottamananda
Download: PDF
Abstract: We propose a method to efficiently compute the Minkowski sum, denoted by the binary operator $\oplus$ in the paper, of convex polytopes in $\Re^d$ using their face lattice structures as input. In the plane, the Minkowski sum of convex polygons can be computed in time linear in the total number of vertices of the polygons. In $\Re^d$, we first show how to compute the Minkowski sum, $P \oplus Q$, of two convex polytopes $P$ and $Q$ of input sizes $n$ and $m$ respectively in time $O(nm)$. Then we generalize the method to compute the Minkowski sum of $n$ convex polytopes, $P_1 \oplus P_2 \oplus \cdots \oplus P_n$, in $\Re^d$ in time $O(\prod_{i}^{n}N_i)$, where $P_1$, $P_2$, $\dots$, $P_n$ are $n$ input convex polytopes and, for each $i$, $N_i$ is the size of the face lattice structure of $P_i$. Our algorithm for the Minkowski sum of two convex polytopes is optimal in the worst case, since the output face lattice structure of $P\oplus Q$ for convex polytopes in $\Re^d$ can be $O(nm)$ in the worst case.
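The linear-time planar case mentioned above works by merging the two polygons' edge vectors in order of polar angle. A sketch for convex polygons given as CCW vertex lists with no repeated or collinear vertices (an assumption for this sketch; the paper's contribution is the higher-dimensional face-lattice version):

```python
def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons (CCW vertex lists),
    computed by a linear-time merge of edges sorted by angle."""
    def from_bottom(poly):
        # rotate so the lowest (then leftmost) vertex comes first
        i = min(range(len(poly)), key=lambda k: (poly[k][1], poly[k][0]))
        return poly[i:] + poly[:i]

    P, Q = from_bottom(P), from_bottom(Q)
    n, m = len(P), len(Q)
    P, Q = P + P[:2], Q + Q[:2]  # sentinels to avoid wrap-around indexing
    result, i, j = [], 0, 0
    while i < n or j < m:
        result.append((P[i][0] + Q[j][0], P[i][1] + Q[j][1]))
        # cross product of the current edges picks the smaller polar angle
        cross = ((P[i + 1][0] - P[i][0]) * (Q[j + 1][1] - Q[j][1])
                 - (P[i + 1][1] - P[i][1]) * (Q[j + 1][0] - Q[j][0]))
        if cross >= 0 and i < n:
            i += 1
        if cross <= 0 and j < m:
            j += 1
    return result
```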

at November 15, 2018 12:00 AM UTC

Authors: Ludovic Stephan, Laurent Massoulié
Download: PDF
Abstract: The present work is concerned with community detection. Specifically, we consider a random graph drawn according to the stochastic block model: its vertex set is partitioned into blocks, or communities, and edges are placed randomly and independently of each other with probability depending only on the communities of their two endpoints. In this context, our aim is to recover the community labels better than random guessing, based only on the observation of the graph.

In the sparse case, where edge probabilities are in $O(1/n)$, we introduce a new spectral method based on the distance matrix $D^{(\ell)}$, where $D^{(\ell)}_{ij} = 1$ iff the graph distance between $i$ and $j$, denoted $d(i, j)$, is equal to $\ell$. We show that when $\ell \sim c\log(n)$ for carefully chosen $c$, the eigenvectors associated to the largest eigenvalues of $D^{(\ell)}$ provide enough information to perform non-trivial community recovery with high probability, provided we are above the so-called Kesten-Stigum threshold. This yields an efficient algorithm for community detection, since computation of the matrix $D^{(\ell)}$ can be done in $O(n^{1+\kappa})$ operations for a small constant $\kappa$.
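The matrix $D^{(\ell)}$ itself can be assembled naively with one BFS per source (the paper's $O(n^{1+\kappa})$ computation is the efficient counterpart of this). A sketch:

```python
from collections import deque

def distance_indicator(adj, n, ell):
    """D[i][j] = 1 iff the graph distance d(i, j) equals ell.

    adj maps each of the vertices 0..n-1 to a list of neighbors;
    one BFS per source vertex computes all distances.
    """
    D = [[0] * n for _ in range(n)]
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for v, d in dist.items():
            if d == ell:
                D[s][v] = 1
    return D
```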

We then study the sensitivity of the eigendecomposition of $D^{(\ell)}$ when we allow an adversarial perturbation of the edges of $G$. We show that when the considered perturbation does not affect more than $O(n^\varepsilon)$ vertices for some small $\varepsilon > 0$, the highest eigenvalues and their corresponding eigenvectors incur negligible perturbations, which allows us to still perform efficient recovery.

at November 15, 2018 12:00 AM UTC

Authors: Alexander Gribov
Download: PDF
Abstract: The task of finding the optimal compression of a polyline with straight-line segments and arcs is performed in many applications, such as polyline compression, noise filtering, and feature recognition. Optimal compression algorithms find the best solution using the dynamic programming approach, which requires a significant amount of arc fitting. This paper describes an improvement to the dynamic programming approach by reducing the amount of arc fitting necessary to find the optimal solution. Instead of processing from the second to the last vertices in the dynamic programming approach, the algorithm proceeds forward and skips as many steps as possible without affecting the inference in any way. Such a modification extends the practical application of the algorithm to polylines having arcs with a large number of vertices.

at November 15, 2018 12:00 AM UTC

Authors: Junzhou Zhao, Shuo Shang, Pinghui Wang, John C. S. Lui, Xiangliang Zhang
Download: PDF
Abstract: Cardinality-constrained submodular function maximization, which aims to select a subset of size at most $k$ to maximize a monotone submodular utility function, is the key in many data mining and machine learning applications such as data summarization and maximum coverage problems. When data is given as a stream, streaming submodular optimization (SSO) techniques are desired. Existing SSO techniques apply only to insertion-only streams, where each element has an infinite lifespan, and sliding-window streams, where each element has the same lifespan (i.e., the window size). However, elements in some data streams may have arbitrary different lifespans, and this requires addressing SSO over streams with inhomogeneous decays (SSO-ID). This work formulates the SSO-ID problem and presents three algorithms: BasicStreaming is a basic streaming algorithm that achieves a $(1/2-\epsilon)$ approximation factor; HistApprox improves the efficiency significantly and achieves a $(1/3-\epsilon)$ approximation factor; HistStreaming is a streaming version of HistApprox and uses heuristics to further improve the efficiency. Experiments conducted on real data demonstrate that HistStreaming can find high-quality solutions and is up to two orders of magnitude faster than the naive Greedy algorithm.
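For intuition about the streaming setting, here is a single-threshold rule in the style of sieve-streaming for insertion-only max coverage (a standard baseline technique, not the paper's SSO-ID algorithms; `opt_guess` is one guess of the optimum, and the full method runs many guesses in parallel):

```python
def stream_max_coverage(stream, k, opt_guess):
    """Single-threshold streaming sketch for cardinality-constrained
    max coverage. stream yields (element_id, covered_set) pairs."""
    chosen, covered = [], set()
    for elem, s in stream:
        if len(chosen) == k:
            break
        gain = len(s - covered)  # marginal coverage gain of this element
        # keep the element only if its gain clears the sieve threshold
        if gain >= (opt_guess / 2 - len(covered)) / (k - len(chosen)):
            chosen.append(elem)
            covered |= s
    return chosen, covered
```

Each element is examined once and either kept or discarded irrevocably, which is what makes the rule a streaming algorithm.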

at November 15, 2018 12:00 AM UTC

Authors: Austin Conner, Fulvio Gesmundo, Joseph M. Landsberg, Emanuele Ventura, Yao Wang
Download: PDF
Abstract: We establish basic information about the set of tight tensors, the tensors with continuous regular symmetry. Our motivation is Strassen's astounding asymptotic rank conjecture that the asymptotic rank of any tight tensor is minimal. In particular, we determine the dimension of the set of tight tensors. Surprisingly, we prove that this dimension equals the dimension of the set of oblique tensors, a less restrictive class of tensors that Strassen identified as useful for his laser method.

at November 15, 2018 02:04 AM UTC

Authors: Weiming Feng, Nisheeth K. Vishnoi, Yitong Yin
Download: PDF
Abstract: In this paper, we study the problem of sampling from a graphical model when the model itself is changing dynamically with time. This problem derives its interest from a variety of inference, learning, and sampling settings in machine learning, computer vision, statistical physics, and theoretical computer science. While the problem of sampling from a static graphical model has received considerable attention, theoretical works for its dynamic variants have been largely lacking. The main contribution of this paper is an algorithm that can sample dynamically from a broad class of graphical models over discrete random variables. Our algorithm is parallel and Las Vegas: it knows when to stop and it outputs samples from the exact distribution. We also provide sufficient conditions under which this algorithm runs in time proportional to the size of the update, on general graphical models as well as well-studied specific spin systems. In particular, we obtain, for the Ising model (ferromagnetic or anti-ferromagnetic) and for the hardcore model, the first dynamic sampling algorithms that can handle both edge and vertex updates (addition, deletion, change of functions), both efficient within regimes that are close to the respective uniqueness regimes, beyond which, even for static and approximate sampling, no local algorithms were known or the problem itself is intractable. Our dynamic sampling algorithm relies on a local resampling algorithm and a new "equilibrium" property that is shown to be satisfied by our algorithm at each step, and enables us to prove its correctness. This equilibrium property is robust enough to guarantee the correctness of our algorithm, helps us improve bounds on fast convergence on specific models, and should be of independent interest.
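For intuition on the static counterpart, here is a minimal single-site Glauber-dynamics sampler for the Ising model on a graph. This is the standard textbook chain, not the paper's dynamic resampling algorithm, and the function names are illustrative:

```python
import math
import random

def glauber_step(spins, adj, beta, rng):
    """One Glauber update: resample a uniformly random vertex from its
    conditional distribution given its neighbors' spins.
    P(sigma_v = +1 | rest) = e^{beta*S} / (e^{beta*S} + e^{-beta*S}),
    where S is the sum of neighboring spins."""
    v = rng.randrange(len(spins))
    s = sum(spins[u] for u in adj[v])                 # local field at v
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * s))  # same quantity, rearranged
    spins[v] = 1 if rng.random() < p_plus else -1

def ising_sample(adj, beta, steps, seed=0):
    """Run the chain for `steps` single-site updates from a uniform start;
    beta > 0 is the ferromagnetic case."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in adj]
    for _ in range(steps):
        glauber_step(spins, adj, beta, rng)
    return spins
```

The paper's contribution is, roughly, to maintain such samples under updates to the graph and its parameters without rerunning the whole chain.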

at November 15, 2018 12:00 AM UTC

Authors: Dimitris Achlioptas, Fotis Iliopoulos, Alistair Sinclair
Download: PDF
Abstract: We present a new perspective on the analysis of stochastic local search algorithms via linear algebra. Our key insight is that LLL-inspired convergence arguments can be seen as a method for bounding the spectral radius of a matrix specifying the algorithm to be analyzed. Armed with this viewpoint we give a unified analysis of all \emph{entropy compression} applications, connecting backtracking algorithms to the LLL in the same fashion that existing analyses connect resampling algorithms to the LLL. We then give a new convergence condition that seamlessly handles resampling algorithms that can detect, and back away from, unfavorable parts of the state space. We give several applications of this condition, notably a new vertex coloring algorithm for arbitrary graphs that uses a number of colors that matches the algorithmic barrier for random graphs. Finally, we introduce a generalization of Kolmogorov's notion of \emph{commutative} algorithms, cast as matrix commutativity, which affords much simpler proofs both of the original results and of recent extensions.
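The resampling template the abstract refers to is the Moser-Tardos algorithm; a minimal k-SAT instantiation (names and structure are illustrative only, not the paper's matrix-based analysis) looks like this:

```python
import random

def moser_tardos_ksat(clauses, n_vars, seed=0, max_rounds=10_000):
    """Moser-Tardos resampling for k-SAT: while some clause is violated,
    resample all variables of one violated clause uniformly at random.
    A literal v > 0 means x_v is True; v < 0 means x_v is False."""
    rng = random.Random(seed)
    x = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # x[0] unused

    def satisfied(clause):
        return any(x[l] if l > 0 else not x[-l] for l in clause)

    for _ in range(max_rounds):
        bad = [c for c in clauses if not satisfied(c)]
        if not bad:
            return x[1:]          # satisfying assignment found
        for l in bad[0]:          # resample the first violated clause
            x[abs(l)] = rng.random() < 0.5
    return None                   # give up; unlikely under LLL-type conditions
```

Under the Lovász Local Lemma condition (each clause sharing variables with few others), this terminates quickly in expectation; the abstract's point is that such convergence arguments can be recast as bounds on the spectral radius of a matrix describing the algorithm.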

at November 15, 2018 12:00 AM UTC

Denote by $w_n$ the number of words of length $n$ in a (possibly ambiguous) context-free language.

What is known about $w_n$?

I'm sure this has been studied a lot, but I couldn't find anything at all on it.
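One concrete computational handle on $w_n$: for a grammar in Chomsky normal form, the number of derivation trees yielding words of each length satisfies a simple convolution recurrence, and for an unambiguous grammar this count equals $w_n$. A hedged sketch (all names are mine):

```python
def count_derivations(unit_rules, bin_rules, start, n):
    """Count derivation trees yielding words of length 1..n for a CNF grammar.
    unit_rules: dict nonterminal -> number of terminal productions A -> a.
    bin_rules:  list of binary productions (A, B, C) meaning A -> B C.
    For an unambiguous grammar, the returned list satisfies result[m] = w_m."""
    nts = set(unit_rules) | {a for a, _, _ in bin_rules} \
        | {b for _, b, _ in bin_rules} | {c for _, _, c in bin_rules}
    N = {A: [0] * (n + 1) for A in nts}
    for A, cnt in unit_rules.items():
        if n >= 1:
            N[A][1] = cnt
    for length in range(2, n + 1):
        for A, B, C in bin_rules:
            # every split of the word between B and C uses shorter lengths only
            N[A][length] += sum(N[B][i] * N[C][length - i]
                                for i in range(1, length))
    return N[start]
```

For ambiguous grammars this overcounts (one tree per derivation, not per word), which is exactly where the question becomes subtle.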

by domotorp at November 14, 2018 10:29 AM UTC

The Copenhagen Center for Health Technology (CACHET) is looking for a post-doctoral researcher within pervasive healthcare technology for diabetes management. You will have a central role in designing novel interactive and ubiquitous diabetes management technology to be used in clinical trials. You will be part of a large European project with academic and commercial partners.


by shacharlovett at November 14, 2018 09:43 AM UTC

The Simons Institute for the Theory of Computing invites applications for Simons-Berkeley Research Fellowships for the Fall 2019 and/or Spring 2020 semesters. The Institute will host programs on “Proofs, Consensus, and Decentralizing Society” in Fall 2019, and on “The Quantum Wave in Computing” and “Lattices: Algorithms, Complexity and Cryptography” in Spring 2020.


by shacharlovett at November 14, 2018 03:08 AM UTC

The Simons Institute for the Theory of Computing invites applications for Simons-Berkeley Research Fellowships for the Summer 2019. The Institute will host programs on “Deep Learning: From Practical Challenges to Theoretical Foundations” in Summer 2019.


by shacharlovett at November 14, 2018 03:06 AM UTC

A 1-year postdoc position at Sorbonne university is open. The position is funded on the ANR project FOCAL. The project aims at designing approximation algorithms for clustering problems in different settings such as online, beyond worst-case, embedded graphs. Possible Collab. with Varun Kanade at Oxford or Arnaud de Mesmay in Grenoble.
Start date in 2019. Send CV and email of 3 references.


by shacharlovett at November 13, 2018 10:27 AM UTC

[Guest post by Eylon Yogev about a Chrome extension he wrote, of which I am a happy user –Boaz]

Hi fellow researchers,

I’m writing to share a little tool that I have developed with the ambitious goal of boosting research productivity. The tool is a Chrome extension named “Where’s that paper?”. Before I tell you more about what it does, let me touch upon the largest obstacle for any new tool: the learning curve. Rest assured that “Where’s that paper?” requires zero training – just install it and continue working as usual – it will guide you without further effort on your end.

After building much suspense, I will elaborate. If you’re like me, you find yourself frequently searching the web for papers that you have browsed or opened in the past on your computer – basically your paper browsing history. Whether in the process of writing a paper or searching for existing ideas, you may search for authors or titles of papers that you recall having once glimpsed at. At some point, the paper has either been downloaded or added to the favorites bar. In any case, this process is manual and takes up much time (often including frustration). As such, I thought we could all use some code to automate it.


The extension is very simple: it identifies when you are reading a scientific paper (according to the domain) and then automatically adds this paper, with the proper author list, year, etc., to your favorites bar under a designated folder. Then, when you type a search in Chrome’s search bar for an author or a title, the relevant results from the favorites pop up. One might also explicitly browse the favorites to see all papers read.

See an example of the results shown when searching for “short” in Chrome’s address bar. The first three links are from the favorites and the rest are general Google suggestions from the web.

The extension automatically works on a list of specified domains.

It is not too difficult to customize the list and add additional domains. Reach out to me and I’ll add them upon request!

A bonus feature (thanks Boaz for the suggestion): Click the extension’s icon and you can download a bib file containing (a nice formatting of) the DBLP bibtex records of all papers added to the favorites.

Download “Where’s that paper?” from the Chrome Web Store:


Most who work in our domain of expertise would be quite concerned about installing extensions, and with good reason. The extension I wrote does not leak any information – neither to me nor to any third party. I have no ulterior motive in developing the extension; it originally helped me, and I now see the benefit of sharing it with our community. I do not know how to reassure you that this is indeed the case other than giving you my word and publishing the source code online. It is available here:

I’d be glad to hear any feedback.



by windowsontheory at November 12, 2018 11:38 PM UTC

The best known circuit lower bounds against unrestricted circuits have remained around $3n$ for several decades. Moreover, the only known technique for proving lower bounds in this model, gate elimination, is inherently limited to proving lower bounds of less than $5n$. In this work, we suggest a first non-gate-elimination approach for obtaining circuit lower bounds. Namely, we prove that every (unbounded-depth) circuit of size $s$ can be expressed as an OR of $2^{s/3.9}$ $16$-CNFs. While this structural result does not immediately lead to new lower bounds, it suggests a new avenue of attack on them. Our result complements the classical depth reduction result of Valiant which asserts that logarithmic-depth circuits of linear size can be computed by an OR of $2^{\varepsilon n}$ $n^{\delta}$-CNFs. It is known that no such graph-theoretic reduction can work for circuits of super-logarithmic depth. We overcome this limitation (for small circuits) by taking into account both the graph-theoretic and functional properties of circuits. We show that qualitative improvements of the following pseudorandom constructions imply qualitative (from $3n$ to $\omega(n)$) improvement of size lower bounds for log-depth circuits via Valiant's reduction: dispersers for varieties, correlation with constant degree polynomials, matrix rigidity, and hardness for depth-$3$ circuits with constant bottom fan-in. On the other hand, now even modest quantitative improvements of the known constructions give elementary proofs of quantitatively stronger circuit lower bounds ($3.9n$ instead of $3n$).

at November 12, 2018 04:34 PM UTC

Authors: Alexandr Andoni, Tal Malkin, Negev Shekel Nosatzki
Download: PDF
Abstract: We study the problem of discrete distribution testing in the two-party setting. For example, in the standard closeness testing problem, Alice and Bob each have $t$ samples from, respectively, distributions $a$ and $b$ over $[n]$, and they need to test whether $a=b$ or $a,b$ are $\epsilon$-far for some fixed $\epsilon>0$. This is in contrast to the well-studied one-party case, where the tester has unrestricted access to samples of both distributions, for which optimal bounds are known for a number of variations. Despite being a natural constraint in applications, the two-party setting has evaded attention so far.

We address two fundamental aspects: 1) what is the communication complexity, and 2) can it be accomplished securely, without Alice and Bob learning extra information about each other's input. Besides closeness testing, we also study the independence testing problem, where Alice and Bob have $t$ samples from distributions $a$ and $b$ respectively, which may be correlated; the question is whether $a,b$ are independent or $\epsilon$-far from being independent.

Our contribution is three-fold:

1) Communication: we show how to gain communication efficiency with more samples, beyond the information-theoretic bound on $t$. The gain is polynomially better than what one obtains by adapting standard algorithms.

2) Lower bounds: we prove tightness of our protocols for closeness testing, and for independence testing when the number of samples is unbounded. These lower bounds are of independent interest, as they are the first two-party communication lower bounds for testing problems.

3) Security: we define secure distribution testing and argue that it must leak at least some minimal information. We then provide secure versions of the above protocols with an overhead that is only polynomial in the security parameter.
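As a point of reference for the one-party baseline, a naive plug-in closeness tester (estimate both empirical distributions and threshold their distance) can be sketched as follows. This is not the paper's communication protocol, the threshold is purely illustrative, and this plug-in approach needs far more samples than optimal testers:

```python
from collections import Counter

def empirical(samples, n):
    """Empirical distribution over the domain [n] = {0, ..., n-1}."""
    counts = Counter(samples)
    t = len(samples)
    return [counts[i] / t for i in range(n)]

def naive_closeness_test(samples_a, samples_b, n, eps):
    """Accept 'a = b' iff the empirical total-variation distance is
    below eps/2. Sound only when the sample size t is much larger
    than the domain size n."""
    pa, pb = empirical(samples_a, n), empirical(samples_b, n)
    tv = 0.5 * sum(abs(x - y) for x, y in zip(pa, pb))
    return tv < eps / 2
```

In the two-party setting, even this naive tester is nontrivial: exchanging the empirical distributions already costs communication roughly the domain size, which is what the paper's protocols improve upon.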

at November 12, 2018 12:00 AM UTC

Authors: MohammadHossein Bateni, Alireza Farhadi, MohammadTaghi Hajiaghayi
Download: PDF
Abstract: The $k$-cut problem asks, given a connected graph $G$ and a positive integer $k$, to find a minimum-weight set of edges whose removal splits $G$ into $k$ connected components. We give the first polynomial-time algorithm with approximation factor $2-\epsilon$ (with constant $\epsilon > 0$) for the $k$-cut problem in planar and minor-free graphs. Applying more complex techniques, we further improve our method and give a polynomial-time approximation scheme for the $k$-cut problem in both planar and minor-free graphs. Despite persistent effort, to the best of our knowledge, this is the first improvement for the $k$-cut problem over the standard approximation factor of $2$ in any major class of graphs.

at November 12, 2018 12:00 AM UTC

Authors: Alexander Prolubnikov
Download: PDF
Abstract: For the set cover problem, by modifying the approach that leads to the proof of the logarithmic approximation guarantee for the greedy algorithm, we obtain an estimate of the greedy algorithm's accuracy for a given input. We show that, for a wide class of problem instances, the accuracy of the greedy algorithm can be estimated much better than the common logarithmic approximation guarantee suggests.
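The idea of a per-instance accuracy certificate can be illustrated with the standard greedy cover plus a trivial lower bound on OPT; this is not the paper's estimator, and the names are made up:

```python
import math

def greedy_set_cover(universe, sets):
    """Standard greedy set cover: repeatedly take the set covering the
    most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe not coverable by the given sets")
        cover.append(best)
        uncovered -= sets[best]
    return cover

def instance_accuracy_bound(universe, sets, cover):
    """A trivial per-instance certificate: any cover needs at least
    ceil(|U| / max set size) sets, so on this input greedy is within
    factor len(cover) / that bound of optimal."""
    opt_lb = math.ceil(len(universe) / max(len(s) for s in sets))
    return len(cover) / opt_lb
```

The paper's point is that sharper instance-dependent estimates of this kind, derived from the greedy analysis itself, often beat the worst-case $\ln n$ guarantee.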

at November 12, 2018 12:00 AM UTC

Authors: Qiang Zou, Hsi-Yung Feng
Download: PDF
Abstract: Direct modeling is a very recent CAD modeling paradigm featuring direct, intuitive push-pull interactions with the geometry of the model to greatly increase model editing flexibility. The major issue for push-pull direct modeling is the possible inconsistency between the altered geometry of the model and its unchanged topology. The challenge of resolving the geometry-topology inconsistency lies in ensuring that the resulting model remains a valid solid model and that the model shape follows a continuous change pattern. Although push-pull direct modeling has been implemented in several CAD software packages, robustness in generating valid modeling results and continuous shape changes still remains an open issue. This paper proposes a systematic method to handle the resolution of any possible inconsistent situation. The method formulates the continuous shape change requirement as successive Boolean operations on the model volume, thereby guaranteeing valid solid models and continuous shape changes. Further, this formulation allows for an easy implementation of push-pull direct modeling using existing CAD research and engineering results. To show the effectiveness of the proposed method, a software prototype is developed and the modeling results are compared with those of five leading CAD software packages.

at November 12, 2018 12:00 AM UTC

Authors: Alexander Büchel, Ulrich Gilleßen, Kurt-Ulrich Witt
Download: PDF
Abstract: We present an algorithm for the well-known PARTITION problem: given positive integers $n$, $k$ and $t$ such that $t \geq n$ and $k \cdot t = \binom{n+1}{2}$, the algorithm partitions the elements of the set $I_n = \{1, \ldots, n\}$ into $k$ mutually disjoint subsets $T_j$ such that $\bigcup_{j=1}^k T_j = I_n$ and $\sum_{x \in T_j} x = t$ for each $j \in \{1, 2, \ldots, k\}$. The algorithm needs $O\!\left(n \cdot \left(\frac{n}{2k} + \log \frac{n(n+1)}{2k}\right)\right)$ steps to insert the $n$ elements of $I_n$ into the $k$ sets $T_j$.
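The task itself is easy to experiment with. A largest-fit greedy heuristic (not the authors' algorithm, and with no general success guarantee claimed here) often produces such a partition:

```python
def greedy_equal_sum_partition(n, k):
    """Try to split {1, ..., n} into k blocks of equal sum t = n(n+1)/(2k)
    by largest-fit: fill each block with the largest remaining element
    that still fits. Returns the blocks, or None if the heuristic fails."""
    total = n * (n + 1) // 2
    if total % k != 0:
        return None
    t = total // k
    if t < n:
        return None                     # necessary condition: t >= n
    remaining = list(range(n, 0, -1))   # elements in descending order
    blocks = []
    for _ in range(k):
        block, cap = [], t
        for x in list(remaining):       # iterate over a copy while removing
            if x <= cap:
                block.append(x)
                remaining.remove(x)
                cap -= x
        if cap != 0:
            return None                 # heuristic got stuck on this block
        blocks.append(block)
    return blocks
```

For example, `greedy_equal_sum_partition(8, 3)` splits $\{1,\ldots,8\}$ into three blocks of sum $12$ each; the paper's contribution is a guaranteed construction with the stated step bound.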

at November 12, 2018 12:00 AM UTC

Authors: D. Gavinsky
Download: PDF
Abstract: In communication complexity the Arthur-Merlin (AM) model is the most natural one that allows both randomness and non-determinism. Presently we do not have any super-logarithmic lower bound for the AM-complexity of an explicit function. Obtaining such a bound is a fundamental challenge to our understanding of communication phenomena. In this work we explore the gap between the known techniques and the complexity class AM.

In the first part we define a new natural class, Small-advantage Layered Arthur-Merlin (SLAM), that has the following properties:

- SLAM is (strictly) included in AM and includes all previously known sub-AM classes with non-trivial lower bounds.

- SLAM is qualitatively stronger than the union of those classes.

- SLAM is subject to the discrepancy bound: in particular, the inner product function does not have an efficient SLAM-protocol.

Structurally this can be summarised as

SBP $\cup$ UAM $\subset$ SLAM $\subseteq$ AM $\cap$ PP.

In the second part we ask why proving a lower bound of $\omega(\sqrt n)$ on the MA-complexity of an explicit function seems to be difficult.

Both of these results are related to the notion of layer complexity, which is, informally, the number of "layers of non-determinism" used by a protocol. We believe that further investigation of this concept may lead to better understanding of the communication model AM and to non-trivial lower bounds against it.

at November 12, 2018 02:01 AM UTC