Theory of Computing Blog Aggregator

Authors: Klaus Jansen, Kim-Manuel Klein, Marten Maack, Malin Rau
Download: PDF
Abstract: Integer linear programs of configurations, or configuration IPs, are a classical tool in the design of algorithms for scheduling and packing problems, where a set of items has to be placed in multiple target locations. Herein a configuration describes a possible placement on one of the target locations, and the IP is used to choose suitable configurations covering the items. We give an augmented IP formulation, which we call the module configuration IP. It can be described within the framework of $n$-fold integer programming and therefore be solved efficiently using the algorithm by Hemmecke, Onn, and Romanchuk. Furthermore, we investigate how structural properties can be applied to reduce the description of the IP and thereby speed up the resulting algorithms. As an application, we consider scheduling problems with setup times, in which a set of jobs has to be scheduled on a set of identical machines, with the objective of minimizing the makespan. For instance, we investigate the case that jobs can be split and scheduled on multiple machines. However, before a part of a job can be processed, an uninterrupted, job-dependent setup has to be paid. For both variants, in which job parts may or may not be executed in parallel, we obtain an efficient polynomial time approximation scheme (EPTAS) of running time $f(1/\varepsilon)\times \mathrm{poly}(|I|)$, with a single exponential term in $f$ for the first case and a double exponential one for the second. Previously, only constant factor approximations of $5/3$ and $4/3 + \varepsilon$, respectively, were known. Furthermore, we present an EPTAS for a problem where classes of (non-splittable) jobs are given, and a setup has to be paid for each class of jobs being executed on one machine.
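To make the notion of a configuration IP concrete, here is a minimal, purely illustrative sketch for makespan scheduling with a makespan guess T: a configuration is a multiset of job sizes fitting on one machine, and one asks for m configurations that together cover all jobs. The toy instance and the brute-force search are assumptions for illustration only; the paper instead encodes such constraints as an (augmented) IP and solves it with n-fold integer programming techniques.

```python
from itertools import combinations_with_replacement
from collections import Counter

def configurations(sizes, T):
    """All multisets of the distinct job sizes whose total is at most T."""
    sizes = sorted(set(sizes), reverse=True)
    def extend(i, remaining, current):
        if i == len(sizes):
            yield tuple(current)
            return
        s = sizes[i]
        for count in range(remaining // s + 1):
            yield from extend(i + 1, remaining - count * s, current + [s] * count)
    yield from extend(0, T, [])

def feasible(sizes, m, T):
    """Can m machines, each loaded with one configuration, cover the demand
    n_s for every job size s?  (A configuration IP decides this by choosing
    integer multiplicities of configurations instead of enumerating them.)"""
    demand = Counter(sizes)
    configs = list(configurations(sizes, T))
    for chosen in combinations_with_replacement(configs, m):
        supply = Counter()
        for c in chosen:
            supply.update(c)
        if all(supply[s] >= demand[s] for s in demand):
            return True
    return False

# toy instance: 6 jobs on 3 machines, makespan guess T = 7
print(feasible([5, 4, 3, 3, 2, 2], m=3, T=7))   # True: e.g. {5,2}, {4,3}, {3,2}
```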

at January 22, 2018 01:41 AM UTC

Authors: Anna Lubiw, Debajyoti Mondal
Download: PDF
Abstract: A geometric graph in the plane is angle-monotone of width $\gamma$ if every pair of vertices is connected by an angle-monotone path of width $\gamma$, a path such that the angles of any two edges in the path differ by at most $\gamma$. Angle-monotone graphs have good spanning properties.

We prove that every point set in the plane admits an angle-monotone graph of width $90^\circ$, hence with spanning ratio $\sqrt 2$, and a subquadratic number of edges. This answers an open question posed by Dehkordi, Frati and Gudmundsson.

We show how to construct, for any point set of size $n$ and any angle $\alpha$, $0 < \alpha < 45^\circ$, an angle-monotone graph of width $(90^\circ+\alpha)$ with $O(\frac{n}{\alpha})$ edges. Furthermore, we give a local routing algorithm to find angle-monotone paths of width $(90^\circ+\alpha)$ in these graphs. The routing ratio, which is the ratio of path length to Euclidean distance, is at most $1/\cos(45^\circ + \frac{\alpha}{2})$, i.e., ranging from $\sqrt 2 \approx 1.414$ to $2.613$. For the special case $\alpha = 30^\circ$, we obtain the $\Theta_6$-graph and our routing algorithm achieves the known routing ratio 2 while finding angle-monotone paths of width $120^\circ$.
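As a small aside, the defining width condition is easy to test directly on a concrete path. The sketch below is illustrative only (the function name, tolerance, and examples are assumptions, not taken from the paper); it checks that the directions of any two edges differ by at most $\gamma$.

```python
import math

def is_angle_monotone_path(points, gamma_deg):
    """Check that the directions of any two edges of the path differ by at
    most gamma_deg (as a circular angular distance).  For gamma below 120
    degrees this is equivalent to all edge directions fitting in a closed
    wedge of angle gamma."""
    dirs = [math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360.0
            for (x1, y1), (x2, y2) in zip(points, points[1:])]
    return all(min(abs(a - b), 360.0 - abs(a - b)) <= gamma_deg + 1e-9
               for i, a in enumerate(dirs) for b in dirs[i + 1:])

# a staircase path has width 90 degrees; a path turning back by 135 does not
print(is_angle_monotone_path([(0, 0), (1, 0), (1, 1), (2, 1)], 90))  # True
print(is_angle_monotone_path([(0, 0), (1, 0), (0, 1)], 90))          # False
```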

at January 22, 2018 01:46 AM UTC

Authors: Bernhard Haeupler, Jason Li, Goran Zuzic
Download: PDF
Abstract: Distributed network optimization algorithms, such as minimum spanning tree, minimum cut, and shortest path, are an active research area in distributed computing. This paper presents a fast distributed algorithm for such problems in the CONGEST model, on networks that exclude a fixed minor.

On general graphs, many optimization problems, including the ones mentioned above, require $\tilde\Omega(\sqrt n)$ rounds of communication in the CONGEST model, even if the network graph has a much smaller diameter. Naturally, the next step in algorithm design is to design efficient algorithms which bypass this lower bound on a restricted class of graphs. Currently, the only known method of doing so uses the low-congestion shortcut framework of Ghaffari and Haeupler [SODA'16]. Building on their work, this paper proves that excluded minor graphs admit high-quality shortcuts, leading to an $\tilde O(D^2)$ round algorithm for the aforementioned problems, where $D$ is the diameter of the network graph. To work with excluded minor graph families, we utilize the Graph Structure Theorem of Robertson and Seymour. To the best of our knowledge, this is the first time the Graph Structure Theorem has been used for an algorithmic result in the distributed setting.

Even though the proof is involved, merely showing the existence of good shortcuts is sufficient to obtain simple, efficient distributed algorithms. In particular, the shortcut framework can efficiently construct near-optimal shortcuts and then use them to solve the optimization problems. This, combined with the very general family of excluded minor graphs, which includes most other important graph classes, makes this result of significant interest.

at January 22, 2018 01:40 AM UTC

Authors: Chao Fang, Zheng Zhu, Helmut G. Katzgraber
Download: PDF
Abstract: Probabilistic membership filters are a type of data structure designed to quickly verify whether an element of a large data set belongs to a subset of the data. While false negatives are not possible, false positives are. Therefore, the main goal of any good probabilistic membership filter is to have a small false-positive rate while being memory efficient and fast to query. Although Bloom filters are fast to construct, their memory efficiency is bounded by a strict theoretical upper bound. Weaver et al. introduced random-satisfiability-based filters that significantly improve the efficiency of probabilistic filters, albeit at the cost of solving a complex random satisfiability (SAT) formula when constructing the filter. Here we present an improved SAT filter approach with a focus on reducing the filter building times, as well as query times. Our approach is based on using not-all-equal (NAE) SAT formulas to build the filters, solving these via a mapping to random SAT using traditionally fast random SAT solvers, as well as bit packing and a reduction of the number of hash functions. Paired with fast hardware, NAE-SAT filters could scale to enterprise-size applications.
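The abstract mentions mapping NAE-SAT to ordinary SAT; one standard per-clause encoding (not necessarily the exact one used in the paper) is sketched below, with clauses given as tuples of signed integers in the usual DIMACS convention.

```python
def nae_to_cnf(nae_clauses):
    """A NAE clause over literals L is satisfied exactly when at least one
    literal of L is true and at least one is false, so it becomes two ordinary
    CNF clauses: L itself and L with every literal negated."""
    cnf = []
    for clause in nae_clauses:
        cnf.append(list(clause))                 # at least one literal true
        cnf.append([-lit for lit in clause])     # at least one literal false
    return cnf

# NAE(x1, x2, not x3) becomes (x1 v x2 v ~x3) and (~x1 v ~x2 v x3)
print(nae_to_cnf([(1, 2, -3)]))   # [[1, 2, -3], [-1, -2, 3]]
```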

at January 22, 2018 01:45 AM UTC

Authors: Joergen Bang-Jensen, Stéphane Bessy
Download: PDF
Abstract: A $(\delta\geq k_1,\delta\geq k_2)$-partition of a graph $G$ is a vertex-partition $(V_1,V_2)$ of $G$ satisfying that $\delta(G[V_i])\geq k_i$ for $i=1,2$. We determine, for all positive integers $k_1,k_2$, the complexity of deciding whether a given graph has a $(\delta\geq k_1,\delta\geq k_2)$-partition.

We also address the problem of finding a function $g(k_1,k_2)$ such that the $(\delta\geq k_1,\delta\geq k_2)$-partition problem is ${\cal NP}$-complete for the class of graphs of minimum degree less than $g(k_1,k_2)$ and polynomial for all graphs with minimum degree at least $g(k_1,k_2)$. We prove that $g(1,k)=k$ for $k\ge 3$, that $g(2,2)=3$ and that $g(2,3)$, if it exists, has value 4 or 5.

at January 22, 2018 01:40 AM UTC

Let $\R(\cdot)$ stand for the bounded-error randomized query complexity. We show that for any relation $f \subseteq \{0,1\}^n \times \mathcal{S}$ and partial Boolean function $g \subseteq \{0,1\}^n \times \{0,1\}$, $\R_{1/3}(f \circ g^n) = \Omega(\R_{4/9}(f) \cdot \sqrt{\R_{1/3}(g)})$. Independently of us, Gavinsky, Lee and Santha \cite{newcomp} proved this result. By an example demonstrated in their work, this bound is optimal. We prove our result by introducing a novel complexity measure called the \emph{conflict complexity} of a partial Boolean function $g$, denoted by $\chi(g)$, which may be of independent interest. We show that $\chi(g) = \Omega(\sqrt{\R(g)})$ and $\R(f \circ g^n) = \Omega(\R(f) \cdot \chi(g))$.

at January 21, 2018 11:34 AM UTC

The measure hypothesis is a quantitative strengthening of the P $\neq$ NP conjecture which asserts that NP is a nonnegligible subset of EXP. Cai, Sivakumar, and Strauss (1997) showed that the analogue of this hypothesis in P is false. In particular, they showed that NTIME[$n^{1/11}$] has measure 0 in P. We improve on their result to show that the class of all languages decidable in nondeterministic sublinear time has measure 0 in P. Our result is based on DNF width and holds for all four major notions of measure on P.

at January 21, 2018 11:31 AM UTC

Lior Silberman asked about applications of the new Khot-Minzer-Safra 2-to-2 game theorem to hardness of approximation, and James Lee answered mentioning applications to vertex cover. Let me elaborate a little on vertex cover, and other matters. (Here is the previous post on the 2-to-2 game theorem.)

Vertex cover

A vertex cover of a graph G is a set of vertices such that every edge contains a vertex in the set. VERTEX COVER is the algorithmic problem of finding such a set of vertices of minimum size. Famously, this problem is NP-complete; in fact, it is one of the problems on Karp’s original list of 21 NP-complete problems. A matching in a graph is a set of edges such that every vertex is included in at most one edge. Given a graph G, there is an easy (greedy) efficient algorithm to find a matching that is maximal with respect to inclusion. Finding such a maximal matching with r edges gives us at the same time a vertex cover of size 2r and a guarantee that the minimum size of a vertex cover is at least r. A very natural question is to find an efficient algorithm with a better approximation guarantee. There is by now good evidence that this might not be possible: hardness of any better approximation is known to follow (Khot and Regev (2003)) from Khot’s unique game conjecture (Khot (2002)). I mention VERTEX COVER and MAX CUT (and the unique game conjecture and PCP) in section 3.5 of my ICM18 survey paper, and the problem described below about polytope integrality gap is taken from there.
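A minimal sketch of this matching-based 2-approximation (the function name and the toy 4-cycle are illustrative assumptions):

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching gives a vertex cover of size at most 2*OPT:
    the r matched edges certify OPT >= r, and their 2r endpoints cover every
    edge (otherwise that uncovered edge could still be added to the matching)."""
    matched, cover = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            cover.extend((u, v))
    return cover

# 4-cycle: the optimal cover has 2 vertices, the greedy cover here has 4
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4), (4, 1)]))   # [1, 2, 3, 4]
```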

NP-hardness results for approximating vertex cover

Johan Håstad (left) Irit Dinur (Right)

In the last two decades there were three improvements on the threshold of NP-hardness for approximating vertex cover. All three were part of major breakthrough papers.

7/6  Håstad  (1.166..).

The paper is: Some optimal inapproximability results, J. of ACM 48 (2001), 798–859

10√5 − 21 (1.3606..) – Dinur and Safra and the importance of being biased

The paper is: On the hardness of approximating minimum vertex cover. Ann. of Math., 162 (2005), 439–485.  The 2002 conference version was entitled  “The importance of being biased.”

√2 – Khot-Minzer-Safra (1.4142..)

This is given (along with various other applications) in Appendix B of the paper.

Vertex cover and polytope-integrality-gap

Given a graph G and nonnegative weights on its vertices, the weighted version of vertex cover is the algorithmic problem of finding a set of vertices of minimum weight that covers all edges.

Minimize w_1x_1 + w_2x_2 + \cdots + w_nx_n, where x = (x_1,x_2,\dots ,x_n) is a 0-1 vector,

Subject to: x_i + x_j \ge 1 for every edge \{i,j\}.

Of course, this more general problem is also NP-complete. The linear programming relaxation allows the x_i's to be real numbers in the interval [0,1]. The integrality gap for general vertex cover problems is 2, and given a solution to the linear programming problem you can just consider the set of vertices i such that x_i \ge 1/2. This is a cover, and the ratio between its size and that of the optimal cover is at most 2. (The integrality gap for the standard relaxation of MAX CUT is log n.) The integrality gap is important in many parts of optimization theory and the theory of computing. Here is a beautiful problem that I learned from Anna Karlin.
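Before turning to that problem, here is a minimal sketch of the relaxation-and-rounding argument just described. It uses scipy's LP solver purely for illustration (an assumption; any LP solver would do), and the triangle example shows the gap: the LP optimum is 3/2 while the rounded cover has weight 3.

```python
import numpy as np
from scipy.optimize import linprog

def lp_round_vertex_cover(n, edges, weights):
    """Solve the LP relaxation (x_i in [0,1], x_i + x_j >= 1 per edge) and
    round up every vertex with x_i >= 1/2; the rounded cover costs at most
    twice the LP optimum, hence at most twice the optimal cover."""
    A_ub = np.zeros((len(edges), n))
    for row, (i, j) in enumerate(edges):
        A_ub[row, i] = A_ub[row, j] = -1.0        # -(x_i + x_j) <= -1
    res = linprog(c=weights, A_ub=A_ub, b_ub=-np.ones(len(edges)),
                  bounds=[(0.0, 1.0)] * n)
    cover = [i for i in range(n) if res.x[i] >= 0.5 - 1e-9]
    return res.x, cover

# triangle with unit weights: LP optimum 3/2 at x = (1/2, 1/2, 1/2)
x, cover = lp_round_vertex_cover(3, [(0, 1), (1, 2), (0, 2)], [1.0, 1.0, 1.0])
print(np.round(x, 3), cover)   # [0.5 0.5 0.5] [0, 1, 2]
```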

Consider the integrality gap (called the polytope integrality gap) between the covering problem and its linear programming relaxation when the graph G is fixed. In greater generality, consider a general covering problem of minimizing c^tx subject to Ax \ge b, where A is a matrix of nonnegative integers. Next, consider the integrality gap between 0-1 solutions and real solutions in [0,1] when A and b are fixed (thus the feasible polyhedron is fixed, hence the name “polytope integrality gap”) and only c (the objective function) varies. The problem asks whether, for vertex cover, for every graph G and every vector of weights, there is an efficient algorithm achieving the polytope integrality gap. The same question can be asked for the polytope integrality gap of arbitrary covering problems.

Friedgut’s Junta theorem/Bourgain’s theorem/FKN/Kindler-Safra, for Grassmann graphs

For the Boolean cube (or the “Hamming scheme”) there are nice theorems by Friedgut, by Bourgain, by Kindler-Safra, by Friedgut, Kalai, and Naor, and by others asserting that if a function is approximately “low-degree” then it is approximately a “Junta” or a “dictator.” These theorems play a role in PCP theory. When you replace the “Hamming scheme” by more general association schemes matters become much more involved but still hopeful. (Even for the “Johnson scheme” this is a major challenge, with some recent results.) The Khot-Minzer-Safra paper and the earlier papers mentioned in my previous post prove such results and also show how such results are useful for PCP purposes.

Ahlswede-Khachatrian theorem answering an old conjecture by Erdős-Ko-Rado and “Frankl’s other conjecture”

The paper by Dinur and Safra used beautifully several exciting combinatorial results and ideas. It uses analysis and Boolean functions for biased product probability distributions on the discrete cube. It also uses Erdős-Ko-Rado theory and a result that I will briefly mention. (See also this post on extremal combinatorics, this post on Levon Khachatrian, and this polymath10 post on a more general problem.)

What is the largest size f(n,k,r) of a family of k-subsets from an n-element set such that every two sets in the family have at least r elements in common?

For r=1 this is the Erdős-Ko-Rado theorem that asserts that when n \ge 2k the maximum is {{n-1} \choose {k-1}}.

Here is an example: n=8, k=4, and r=2.

You may think that the best thing to do is to take all 4-subsets containing ‘1’ and ‘2’. There are 15 such sets. But if you take all subsets containing at least 3 elements from {1,2,3,4} you have 17 sets so that every two have at least two elements in common. Erdos, Ko and Rado posed a conjecture for the case n=4r, k=2r, and Peter Frankl posed a conjecture for every n, k, and r. Frankl’s conjecture was settled by Ahlswede and Khachatrian.
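A quick, self-contained check of the two counts in this example (itertools only; purely illustrative):

```python
from itertools import combinations

def pairwise_intersecting(family, r):
    """Every two sets in the family share at least r elements."""
    return all(len(set(A) & set(B)) >= r for A, B in combinations(family, 2))

ground, k, r = range(1, 9), 4, 2
# all 4-subsets of {1,...,8} containing both 1 and 2
star = [S for S in combinations(ground, k) if 1 in S and 2 in S]
# all 4-subsets with at least 3 elements from {1,2,3,4}
frankl = [S for S in combinations(ground, k) if len(set(S) & {1, 2, 3, 4}) >= 3]

print(len(star), pairwise_intersecting(star, r))      # 15 True
print(len(frankl), pairwise_intersecting(frankl, r))  # 17 True
```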

Ahlswede and Khachatrian’s theorem:  For every n,k,r, there is m such that f(n,k,r) is attained by the family F(n,k,r,m)

F(n,k,r,m)= \{S \subset [n]: |S|=k,\ |S \cap [r+2m]|\ge r+m\}.

For a different proof that gives a simple derivation from a theorem by Wilson, see the paper by Balogh and Mubayi A new short proof of a theorem of Ahlswede and Khachatrian. See also Filmus’ paper Ahlswede-Khachatrian Theorems: Weighted, Infinite, and Hamming, and a stability result in the paper by Ellis, Keller, and Lifshitz, Stability for the Complete Intersection Theorem, and the Forbidden Intersection Problem of Erdős and Sós (that we mentioned in this post).

It was a pleasant surprise to see this theorem playing a role in the paper by Irit and Muli.

 

by Gil Kalai at January 21, 2018 08:40 AM UTC

We show how the classical Nisan-Wigderson (NW) generator [Nisan & Wigderson, 1994] yields a nontrivial pseudorandom generator (PRG) for circuits with sublinearly many polynomial threshold function (PTF) gates. For the special case of a single PTF of degree $d$ on $n$ inputs, our PRG for error $\epsilon$ has the seed size $$\exp\left(O(\sqrt{d\cdot\log n\cdot \log\log (n/\epsilon)})\right);$$ this can give a super-polynomial stretch even for a sub-exponentially small error parameter $\epsilon=\exp(-n^{\gamma})$, for any $\gamma=o(1)$. In contrast, the best known PRGs for PTFs of [Meka & Zuckerman, 2013; Kane, 2012] cannot achieve such a small error, although they do have a much shorter seed size for any constant error $\epsilon$. For the case of circuits with degree-$d$ PTF gates on $n$ inputs, our PRG can fool circuits with at most $n^{\alpha/d}$ gates with error $\exp(-n^{\alpha/d})$ and seed length $n^{O(\sqrt{\alpha})}$, for any $\alpha>1$. While a similar NW PRG construction was observed by Lovett and Srinivasan [Lovett & Srinivasan, 2011] to work for the case of constant-depth (AC$^0$) circuits with few PTF gates, the application of the NW generator to the case of general (unbounded depth) circuits consisting of a sublinear number of PTF gates does not seem to have been explicitly stated before. We do so in this note.

at January 20, 2018 03:19 AM UTC

I heard the great news that the paper Multiparty asynchronous session types by the late Kohei Honda, Nobuko Yoshida and Marco Carbone has received the Most Influential POPL Paper Award 2018. The award web page describes the award as follows:


This award is given annually to the author(s) of a paper presented at the Symposium on Principles of Programming Languages (POPL) held 10 years prior to the award year. The papers are judged by their influence over the past decade.

The citation for the award is available from the above-mentioned web page, but I repeat it here for ease of reference:

Session types are a type-based framework for codifying communication structures and verifying protocols in concurrent, message-passing programs. Previously, session types could only model binary (two-party) protocols. This paper generalizes the theory to the multiparty case with asynchronous communications, preventing deadlock and communication errors in more sophisticated communication protocols involving any number (two or more) of participants. The central idea was to introduce global types, which describe multiparty conversations from a global perspective and provide a means to check protocol compliance. This work has inspired numerous authors to build on its pioneering foundations in the session types community and has initiated many applications of multiparty session types in programming languages and tools. It has also influenced other areas of research, such as software contracts, runtime verification and hardware specifications.

Congratulations to the authors of the paper and to the concurrency community at large, whose work over the last ten years contributed to this award and, most importantly, to significant scientific advances that are now embodied in programming languages and tools that are already having practical impact (see the work with Cognizant, Red Hat, and VMWare on Scribble, and with the OOI), and that I believe will find increasing application in years to come. 

by Luca Aceto (noreply@blogger.com) at January 19, 2018 01:36 PM UTC

Authors: Udo Hoffmann, Keno Merckx
Download: PDF
Abstract: Order types are a well-known abstraction of combinatorial properties of a point set. By Mn\"ev's universality theorem, for each semi-algebraic set $V$ there is an order type with a realization space that is \emph{stably equivalent} to $V$. We consider realization spaces of \emph{allowable sequences}, a refinement of order types. We show that the realization spaces of allowable sequences are \emph{universal} and that, consequently, deciding realizability is complete for the \emph{existential theory of the reals} (\ER). This result holds even if the realization space of the order type induced by the allowable sequence is non-empty. Furthermore, we argue that our result is a useful tool for further geometric reductions. We support this by giving \ER-hardness proofs for the realizability of abstract convex geometries and for the recognition problem of visibility graphs of polygons with holes, using the hardness result for allowable sequences. This solves two longstanding open problems.

at January 19, 2018 12:00 AM UTC

Authors: Manuel Bodirsky, Johannes Greiner
Download: PDF
Abstract: The CSP of a first-order theory $T$ is the problem of deciding for a given finite set $S$ of atomic formulas whether $T \cup S$ is satisfiable. Let $T_1$ and $T_2$ be two theories with countably infinite models and disjoint signatures. Nelson and Oppen presented conditions that imply decidability (or polynomial-time decidability) of $\mathrm{CSP}(T_1 \cup T_2)$ under the assumption that $\mathrm{CSP}(T_1)$ and $\mathrm{CSP}(T_2)$ are decidable (or polynomial-time decidable). We show that for a large class of $\omega$-categorical theories $T_1, T_2$ the Nelson-Oppen conditions are not only sufficient, but also necessary for polynomial-time tractability of $\mathrm{CSP}(T_1 \cup T_2)$ (unless P=NP).

at January 19, 2018 01:40 AM UTC

Authors: John M. Hitchcock, Adewale Sekoni
Download: PDF
Abstract: The measure hypothesis is a quantitative strengthening of the P != NP conjecture which asserts that NP is a nonnegligible subset of EXP. Cai, Sivakumar, and Strauss (1997) showed that the analogue of this hypothesis in P is false. In particular, they showed that NTIME[n^{1/11}] has measure 0 in P. We improve on their result to show that the class of all languages decidable in nondeterministic sublinear time has measure 0 in P. Our result is based on DNF width and holds for all four major notions of measure on P.

at January 19, 2018 01:41 AM UTC

Authors: John M. Hitchcock, Hadi Shafei
Download: PDF
Abstract: Nonuniformity is a central concept in computational complexity with powerful connections to circuit complexity and randomness. Nonuniform reductions have been used to study the isomorphism conjecture for NP and completeness for larger complexity classes. We study the power of nonuniform reductions for NP-completeness, obtaining both separations and upper bounds for nonuniform completeness vs uniform completeness in NP.

Under various hypotheses, we obtain the following separations:

1. There is a set complete for NP under nonuniform many-one reductions, but not under uniform many-one reductions. This is true even with a single bit of nonuniform advice.

2. There is a set complete for NP under nonuniform many-one reductions with polynomial-size advice, but not under uniform Turing reductions. That is, polynomial nonuniformity is stronger than a polynomial number of queries.

3. For any fixed polynomial p(n), there is a set complete for NP under uniform 2-truth-table reductions, but not under nonuniform many-one reductions that use p(n) advice. That is, giving a uniform reduction a second query makes it more powerful than a nonuniform reduction with fixed polynomial advice.

4. There is a set complete for NP under nonuniform many-one reductions with polynomial advice, but not under nonuniform many-one reductions with logarithmic advice. This hierarchy theorem also holds for other reducibilities, such as truth-table and Turing.

We also consider uniform upper bounds on nonuniform completeness. Hirahara (2015) showed that unconditionally every set that is complete for NP under nonuniform truth-table reductions that use logarithmic advice is also uniformly Turing-complete. We show that under a derandomization hypothesis, the same statement for truth-table reductions and truth-table completeness also holds.

at January 19, 2018 01:40 AM UTC

Authors: Nathan Cassee, Thomas Neele, Anton Wijs
Download: PDF
Abstract: The use of graphics processors (GPUs) is a promising approach to speed up model checking to such an extent that it becomes feasible to instantly verify software systems during development. GPUexplore is an explicit-state model checker that runs all its computations on the GPU. Over the years it has been extended with various techniques, and the possibilities to further improve its performance have been continuously investigated. In this paper, we discuss how the hash table of the tool works, which is at the heart of its functionality. We propose an alteration of the hash table that in isolated experiments seems promising, and analyse its effect when integrated in the tool. Furthermore, we investigate the current scalability of GPUexplore, by experimenting both with input models of varying sizes and running the tool on one of the latest GPUs of NVIDIA.

at January 19, 2018 12:00 AM UTC

Authors: Fei Jiang, Lifang He, Yi Zheng, Enqiang Zhu, Jin Xu, Philip S. Yu
Download: PDF
Abstract: Graph embedding has been proven to be efficient and effective in facilitating graph analysis. In this paper, we present a novel spectral framework called NOn-Backtracking Embedding (NOBE), which offers a new perspective that organizes graph data at a deep level by tracking the flow traversing on the edges with backtracking prohibited. Further, by analyzing the non-backtracking process, a technique called graph approximation is devised, which provides a channel to transform the spectral decomposition on an edge-to-edge matrix to that on a node-to-node matrix. Theoretical guarantees are provided by bounding the difference between the corresponding eigenvalues of the original graph and its graph approximation. Extensive experiments conducted on various real-world networks demonstrate the efficacy of our methods on both macroscopic and microscopic levels, including clustering and structural hole spanner detection.

at January 19, 2018 02:42 AM UTC

Authors: D. F. G. Coelho, R. J. Cintra, V. S. Dimitrov
Download: PDF
Abstract: This paper introduces a new fast algorithm for the 8-point discrete cosine transform (DCT) based on the summation-by-parts formula. The proposed method converts the DCT matrix into an alternative transformation matrix that can be decomposed into sparse matrices of low multiplicative complexity. The method is capable of scaled and exact DCT computation and its associated fast algorithm achieves the theoretical minimal multiplicative complexity for the 8-point DCT. Depending on the nature of the input signal simplifications can be introduced and the overall complexity of the proposed algorithm can be further reduced. Several types of input signal are analyzed: arbitrary, null mean, accumulated, and null mean/accumulated signal. The proposed tool has potential application in harmonic detection, image enhancement, and feature extraction, where input signal DC level is discarded and/or the signal is required to be integrated.

at January 19, 2018 12:00 AM UTC

Last month I posted about the sensitivity conjecture and today I would like to talk about an interesting game developed by Gilmer, Koucký and Saks and independently by Drucker that could yield a proof.

Consider the following game on n bits among three players, Alice, Bob and Carol. The game works as follows: Carol picks a bit position and Alice sets that bit to 0 or 1. Carol then picks another unused bit position and Alice sets that bit as well. This repeats until the last bit which Carol gets to set herself. The bits are then revealed to Bob who has to give a set of bit positions that includes the bit Carol set at the end. Alice and Bob can work out a strategy ahead of time with the goal of minimizing the size of the set.

If Carol can force Bob to give a set of n^ε bits for some ε>0, this would prove the sensitivity conjecture! Basically a family of functions that give a counterexample to the sensitivity conjecture would give a strategy for Alice and Bob that yields a set of size n^{o(1)}. You can find the full proof in the papers above.

How well can Alice and Bob do? They can use the following strategy to achieve n^{1/2}: Break up the input positions into n^{1/2} groups of n^{1/2} bits each. Whenever Carol picks a bit position, Alice sets that bit to zero unless it is the last unset bit in its group, in which case she sets it to one. Carol can set the last bit in some group any way she wants. Bob will either see every group having one 1 and will give the set of the positions of all the 1's, or Bob will see a group of all zeros and give the positions in that group.
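Here is a small simulation of this √n strategy (names and the toy n = 16 are illustrative, not from the papers); it simply confirms that Bob's answer always has √n positions and always contains the bit Carol set last.

```python
import math, random

def play_round(n, carol_order, carol_value):
    """One round of the game with the sqrt(n) strategy above (n a perfect
    square): Alice writes 0 unless the position is the last unset one in its
    group, Bob answers with an all-zero group if one exists and otherwise
    with the positions of all the 1s."""
    g = math.isqrt(n)
    bits = [None] * n
    unset = [set(range(i * g, (i + 1) * g)) for i in range(g)]
    for pos in carol_order[:-1]:              # Alice fills every bit but the last
        grp = pos // g
        unset[grp].discard(pos)
        bits[pos] = 1 if not unset[grp] else 0
    last = carol_order[-1]                    # Carol sets the final bit herself
    bits[last] = carol_value
    zero_groups = [i for i in range(g)
                   if all(bits[p] == 0 for p in range(i * g, (i + 1) * g))]
    if zero_groups:
        bob = set(range(zero_groups[0] * g, (zero_groups[0] + 1) * g))
    else:
        bob = {p for p in range(n) if bits[p] == 1}
    return last in bob and len(bob) == g

n = 16
print(all(play_round(n, random.sample(range(n), n), random.randint(0, 1))
          for _ in range(100)))   # True
```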

Mario Szegedy uses error-correcting codes to improve the upper bound to O(n^{0.4732}). A Georgia Tech undergraduate, DeVon Ingram, improves the upper bound to O(n^{0.4696}) by generalizing Szegedy's techniques. Ingram's proof comes down to finding a one-to-one function mapping subsets of {1,...,15} of size 4 to perfect codes of length 15 with the property that the bits of the code are 1 for the indices of the subset. Perhaps one could find a clever mapping that has this property, but Ingram wrote code using a matching algorithm. A true computer-assisted proof. Longer code lengths alas won't yield direct improvements.

The connection to sensitivity doesn't go both ways. An upper bound of n^{o(1)} wouldn't necessarily yield a counterexample to the sensitivity conjecture, though it would tell us the limitations of this game approach. Nevertheless I find it cool that a game that doesn't talk about functions at all could help settle an important question about them.

by Lance Fortnow (noreply@blogger.com) at January 18, 2018 05:47 PM UTC

Nonuniformity is a central concept in computational complexity with powerful connections to circuit complexity and randomness. Nonuniform reductions have been used to study the isomorphism conjecture for NP and completeness for larger complexity classes. We study the power of nonuniform reductions for NP-completeness, obtaining both separations and upper bounds for nonuniform completeness vs uniform completeness in NP. Under various hypotheses, we obtain the following separations:

1. There is a set complete for NP under nonuniform many-one reductions, but not under uniform many-one reductions. This is true even with a single bit of nonuniform advice.

2. There is a set complete for NP under nonuniform many-one reductions with polynomial-size advice, but not under uniform Turing reductions. That is, polynomial nonuniformity is stronger than a polynomial number of queries.

3. For any fixed polynomial p(n), there is a set complete for NP under uniform 2-truth-table reductions, but not under nonuniform many-one reductions that use p(n) advice. That is, giving a uniform reduction a second query makes it more powerful than a nonuniform reduction with fixed polynomial advice.

4. There is a set complete for NP under nonuniform many-one reductions with polynomial advice, but not under nonuniform many-one reductions with logarithmic advice. This hierarchy theorem also holds for other reducibilities, such as truth-table and Turing.

We also consider uniform upper bounds on nonuniform completeness. Hirahara (2015) showed that unconditionally every set that is complete for NP under nonuniform truth-table reductions that use logarithmic advice is also uniformly Turing-complete. We show that under a derandomization hypothesis, the same statement for truth-table reductions and truth-table completeness also holds.

at January 18, 2018 01:06 PM UTC

Authors: Remi Cura, Julien Perret, Nicolas Paparoditis
Download: PDF
Abstract: Our modern world produces an increasing quantity of data, and especially geospatial data, with the advance of sensing technologies and the growing complexity and organisation of vector data. Tools are needed to efficiently create and edit those vector geospatial data. Procedural generation has been a tool of choice to generate strongly organised data, yet it may be hard to control. Because those data may inform consequential real-life decisions, user interactions are required to check and edit the data. The classical process to do so would be to build an ad hoc Graphical User Interface (GUI) tool adapted to the model and method being used. This task is difficult, takes a large amount of resources, and is very specific to one model, making it hard to share and re-use.

Besides, many generic GUIs to edit vector data already exist, each having its specialities. We propose a change of paradigm; instead of building a specific tool for one task, we use common GIS software as GUIs and move the specific interactions from the software into the database. In this paradigm, GIS software simply modifies geometry and attributes of database layers, and those changes are used by the database to perform automated tasks.

This new paradigm has many advantages. The first one is genericity. With in-base interaction, any GIS software can be used to perform editing, whether it is desktop software or a web application. The second is concurrency and coherency. Because interaction is in-base, use of database features allows seamless multi-user work and can guarantee that the data is in a coherent state. Last, we propose tools to facilitate multi-user edits, both during the edit phase (each user knows what areas are edited by other users) and before and after edits (planning of edits, analysis of edited areas).

at January 18, 2018 01:48 AM UTC

Authors: Rémi Cura, Julien Perret, Nicolas Paparoditis
Download: PDF
Abstract: Streets are large, diverse, and used for several (and possibly conflicting) transport modalities as well as social and cultural activities. Proper planning is essential and requires data. Manually fabricating data that represent streets (street reconstruction) is error-prone and time-consuming. Automating street reconstruction is a challenge because of the diversity, size, and scale of the details (a few centimetres for a cornerstone) required. The state of the art focuses on roads (no context, no urban features) and is strongly determined by each application (simulation, visualisation, planning). We propose a unified framework that works on real Geographic Information System (GIS) data and uses a strong, yet simple modelling hypothesis when possible to robustly model streets at the city level or street level. Our method produces a coherent street-network model containing topological traffic information, road surface and street objects. We demonstrate the robustness and genericity of our method by reconstructing the streets of the entire city of Paris and by exploring a similar reconstruction (an airport driveway).

at January 18, 2018 12:00 AM UTC

Authors: David Gosset, John Smolin
Download: PDF
Abstract: We show how to approximately represent a quantum state using the square root of the usual amount of classical memory. The classical representation of an n-qubit state $\psi$ consists of its inner products with $O(\sqrt{2^n})$ stabilizer states. A quantum state initially specified by its $2^n$ entries in the computational basis can be compressed to this form in time $O(2^n \mathrm{poly}(n))$, and, subsequently, the compressed description can be used to additively approximate the expectation value of an arbitrary observable. Our compression scheme directly gives a new protocol for the vector in subspace problem with randomized one-way communication complexity that matches (up to polylogarithmic factors) the best known upper bound, due to Raz. We obtain an exponential improvement over Raz's protocol in terms of computational efficiency.

at January 18, 2018 01:43 AM UTC

Authors: Xiaoyu He, Xiaoming Sun, Guang Yang, Pei Yuan
Download: PDF
Abstract: The weight decision problem, which requires to determine the Hamming weight of a given binary string, is a natural and important problem, with applications in cryptanalysis, coding theory, fault-tolerant circuit design and so on. In particular, both Deutsch-Jozsa problem and Grover search problem can be interpreted as special cases of weight decision problems. In this work, we investigate the exact quantum query complexity of weight decision problems, where the quantum algorithm must always output the correct answer. More specifically we consider a partial Boolean function which distinguishes whether the Hamming weight of the length-$n$ input is $k$ or it is $l$. Our contribution includes both upper bounds and lower bounds for the precise number of queries. Furthermore, for most choices of $(\frac{k}{n},\frac{l}{n})$ and sufficiently large $n$, the gap between our upper and lower bounds is no more than one. To get the results, we first build the connection between Chebyshev polynomials and our problem, then determine all the boundary cases of $(\frac{k}{n},\frac{l}{n})$ with matching upper and lower bounds, and finally we generalize to other cases via a new \emph{quantum padding} technique. This quantum padding technique can be of independent interest in designing other quantum algorithms.

at January 18, 2018 01:40 AM UTC

Authors: Federico Della Croce, Rosario Scatamacchia
Download: PDF
Abstract: We consider the Pm || Cmax scheduling problem where the goal is to schedule n jobs on m identical parallel machines to minimize makespan. We revisit the famous Longest Processing Time (LPT) rule proposed by Graham in 1969. LPT sorts the jobs in non-ascending order of processing times and then assigns one job at a time to the machine whose load is smallest so far. We provide new insights on LPT and discuss the approximation ratio of a modification of LPT that improves Graham's bound from 4/3 - 1/(3m) to 4/3 - 1/(3(m-1)) for m >= 3 and from 7/6 to 9/8 for m = 2. We use Linear Programming (LP) to analyze the approximation ratio of our approach. This performance analysis can be seen as a valid alternative to formal proofs based on analytical derivation. Also, we derive from the proposed approach an O(n log n) heuristic. The heuristic splits the sorted jobset into tuples of m consecutive jobs (1,...,m; m+1,...,2m; etc.) and sorts the tuples in non-increasing order of the difference (slack) between the largest job and the smallest job in the tuple. Then, List Scheduling is applied to the set of sorted tuples. This approach strongly outperforms LPT on benchmark literature instances.
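A minimal sketch of LPT, together with one reading of the slack-based tuple heuristic described above (details such as the handling of the last, shorter tuple are assumptions and may differ from the paper's exact variant):

```python
import heapq

def lpt(jobs, m):
    """Graham's LPT rule: sort jobs by non-increasing processing time and
    assign each one to the currently least-loaded of the m machines."""
    loads = [(0, i) for i in range(m)]
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        load, i = heapq.heappop(loads)
        heapq.heappush(loads, (load + p, i))
    return max(load for load, _ in loads)

def slack_tuple_heuristic(jobs, m):
    """Split the sorted jobs into consecutive tuples of m jobs, order tuples
    by non-increasing slack (largest minus smallest job in the tuple), then
    run plain List Scheduling on the jobs in that tuple order."""
    s = sorted(jobs, reverse=True)
    tuples = [s[i:i + m] for i in range(0, len(s), m)]
    tuples.sort(key=lambda t: t[0] - t[-1], reverse=True)
    loads = [(0, i) for i in range(m)]
    heapq.heapify(loads)
    for p in (p for t in tuples for p in t):
        load, i = heapq.heappop(loads)
        heapq.heappush(loads, (load + p, i))
    return max(load for load, _ in loads)

jobs = [7, 7, 6, 6, 5, 4, 4, 3]      # the optimal makespan on 3 machines is 14
print(lpt(jobs, 3), slack_tuple_heuristic(jobs, 3))
```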

at January 18, 2018 01:48 AM UTC

May 7-11, 2018 Como, Italy http://otnm.lakecomoschool.org Registration deadline: March 10, 2018 OTNM 2018 school aims to give a broad and up-to-date overview on various challenging models and problems arising from applications of Optimal Transport, as control of crowd and congested motions, image processing, big data and machine learning, and economics. A particular emphasis will be … Continue reading Optimal transport: numerical methods and applications

by shacharlovett at January 17, 2018 05:49 PM UTC

Five Postdoctoral positions in Computer Science at
Gran Sasso Science Institute in L'Aquila (Italy)
http://www.gssi.it/
Deadline:  2 March 2018 at 6 p.m. (Italian time zone)


The Gran Sasso Science Institute (GSSI, http://www.gssi.it/), a recently established international PhD school and centre for advanced studies in computer science, mathematics, physics and social sciences, offers 18 postdoctoral research positions, five of which are dedicated to computer science and more specifically to themes that are strongly connected to the pillars of the PhD program in computer science:

- Algorithmic foundations of social and computer networks.
- Software systems and services.
- Specifications and analysis of concurrent reactive systems

The research grants are awarded for two years and their yearly amount is € 36.000,00 gross.

Candidates who are preparing their doctoral thesis are eligible to apply; however, they must have obtained their PhD degree before taking up their appointment with GSSI. Selected candidates are expected to start their appointments no later than 1 November 2018.

The application must be submitted through the online form available at www.gssi.it/postdoc/ by 2 March 2018 at 6 p.m. (Italian time zone).
Each application should include the following material:

- the CV of the applicant,
- a research statement,
- up to 3 publications and
- the name and email of two references.

For more information, please consult the Call for Applications at www.gssi.it/postdoc/ or write an email to info@gssi.it.

Prospective candidates are also welcome to contact Luca Aceto (luca.aceto AT gssi.it) or Michele Flammini (michele.flammini AT gssi.it). 

by Luca Aceto (noreply@blogger.com) at January 17, 2018 09:25 AM UTC

Authors: Xuelian Lin, Jiahao Jiang, Shuai Ma, Yimeng Zuo, Chunming Hu
Download: PDF
Abstract: Various mobile devices have been used to collect, store and transmit tremendous amounts of trajectory data, and it is known that raw trajectory data seriously wastes storage, network bandwidth and computing resources. To attack this issue, one-pass line simplification (LS) algorithms have been developed, which compress the data points of a trajectory to a set of continuous line segments. However, these algorithms adopt the perpendicular Euclidean distance, none of them uses the synchronous Euclidean distance (SED), and so they cannot support spatio-temporal queries. To address this, we develop two one-pass error-bounded trajectory simplification algorithms (CISED-S and CISED-W) using SED, based on a novel spatio-temporal cone intersection technique. Using four real-life trajectory datasets, we experimentally show that our approaches are both efficient and effective. In terms of running time, algorithms CISED-S and CISED-W are on average 3 times faster than SQUISH-E (the most efficient existing LS algorithm using SED). In terms of compression ratios, algorithms CISED-S and CISED-W are comparable with and 19.6% better than DPSED (the most effective existing LS algorithm using SED) on average, respectively, and are 21.1% and 42.4% better than SQUISH-E on average, respectively.
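For reference, the synchronous Euclidean distance compares each point with the time-synchronized position on the approximating segment (where a vehicle moving along that segment would be at the same timestamp), rather than with the nearest point of the segment as the perpendicular distance does. A minimal sketch, with numpy assumed and names chosen for illustration:

```python
import numpy as np

def sed_errors(points, times, s, e):
    """SED errors when the sub-trajectory points[s..e] is replaced by the
    single segment from points[s] to points[e]: each intermediate point is
    compared with its time-synchronized position on that segment."""
    p, t = np.asarray(points, float), np.asarray(times, float)
    alpha = (t[s:e + 1] - t[s]) / (t[e] - t[s])       # fraction of elapsed time
    synced = p[s] + alpha[:, None] * (p[e] - p[s])    # time-synchronized positions
    return np.linalg.norm(p[s:e + 1] - synced, axis=1)

pts = [(0, 0), (1, 2), (2, 1), (4, 0)]
print(sed_errors(pts, times=[0, 1, 2, 4], s=0, e=3))  # [0. 2. 1. 0.]
```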

at January 17, 2018 01:41 AM UTC

Authors: Debarati Das, Michal Koucký, Michael Saks
Download: PDF
Abstract: In this paper we propose models of combinatorial algorithms for the Boolean Matrix Multiplication (BMM), and prove lower bounds on computing BMM in these models. First, we give a relatively relaxed combinatorial model which is an extension of the model by Angluin (1976), and we prove that the time required by any algorithm for the BMM is at least $\Omega(n^3 / 2^{O( \sqrt{ \log n })})$. Subsequently, we propose a more general model capable of simulating the "Four Russians Algorithm". We prove a lower bound of $\Omega(n^{7/3} / 2^{O(\sqrt{ \log n })})$ for the BMM under this model. We use a special class of graphs, called $(r,t)$-graphs, originally discovered by Ruzsa and Szemerédi (1978), along with randomization, to construct matrices that are hard instances for our combinatorial models.

at January 17, 2018 01:40 AM UTC

Authors: Rade Vuckovac
Download: PDF
Abstract: A one-way function based on the Collatz problem is proposed. While the Collatz ($3x+1$) problem is well known and remains unresolved, the proposal is actually based on the problem's underlying conditional branching structure. The analysis shows why the $3x+1$ problem is so difficult and how the algorithm's conditional branching structure can be used to construct a one-way function.

at January 17, 2018 01:41 AM UTC

Authors: Edward Raff, Charles Nicholas
Download: PDF
Abstract: In this work we explore the use of metric index structures, which accelerate nearest neighbor queries, in the scenario where we need to interleave insertions and queries during deployment. This use-case is inspired by a real-life need in malware analysis triage, and is surprisingly understudied. Existing literature tends to either focus on only final query efficiency, often does not support incremental insertion, or does not support arbitrary distance metrics. We modify and improve three algorithms to support our scenario of incremental insertion and querying with arbitrary metrics, and evaluate them on multiple datasets and distance metrics while varying the value of $k$ for the desired number of nearest neighbors. In doing so we determine that our improved Vantage-Point tree of Minimum-Variance performs best for this scenario.

at January 17, 2018 01:42 AM UTC

Authors: Remi Cura, Julien Perret, Nicolas Paparoditis
Download: PDF
Abstract: Lidar datasets are becoming more and more common. They are appreciated for their precise 3D nature, and have a wide range of applications, such as surface reconstruction, object detection, visualisation, etc. For all these applications, having additional semantic information per point has the potential to increase the quality and the efficiency of the application. In the last decade the use of Machine Learning, and more specifically classification methods, has proved successful in creating this semantic information. In this paradigm, the goal is to classify points into a set of given classes (for instance tree, building, ground, other). Some of these methods use descriptors (also called features) of a point to learn and predict its class. Designing the descriptors is then the heart of these methods. Descriptors can be based on point geometry and attributes, use contextual information, etc. Furthermore, descriptors can be used by humans for easier visual understanding and sometimes for filtering. In this work we propose a new simple geometric descriptor that gives information about the implicit local dimensionality of the point cloud at various scales. For instance, a tree seen from afar is more volumetric in nature (3D), yet locally each leaf is rather planar (2D). To do so we build an octree centred on the point of interest and compare the variation of cell occupancy across the levels of the octree. We compare this descriptor with the state-of-the-art dimensionality descriptor and show its interest. We further test the descriptor for classification within the Point Cloud Server and demonstrate its efficiency and correctness.
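One plausible reading of such an occupancy-based descriptor, given purely as an illustrative sketch (the authors' exact construction may differ): voxelize a cubic neighbourhood of the point at successive octree levels and read the local dimensionality off the growth rate of the occupied-cell count (about 2^l for a line, 4^l for a plane, 8^l for a volume).

```python
import numpy as np

def occupancy_dimensionality(cloud, center, radius, max_level=5):
    """Count occupied octree cells of a cube of side 2*radius around `center`
    at levels 1..max_level; the slope of log2(count) versus level is a rough
    estimate of the local intrinsic dimension of the cloud."""
    pts = np.asarray(cloud, float)
    local = pts[np.all(np.abs(pts - center) <= radius, axis=1)]
    counts = []
    for level in range(1, max_level + 1):
        cells = np.floor((local - (center - radius)) / (2 * radius) * 2 ** level)
        counts.append(len({tuple(c) for c in cells.astype(int)}))
    slope = np.polyfit(range(1, max_level + 1), np.log2(counts), 1)[0]
    return counts, slope

# a planar patch: the occupied-cell count grows roughly like 4^level (slope near 2)
plane = np.column_stack([np.random.rand(4000), np.random.rand(4000), np.zeros(4000)])
print(occupancy_dimensionality(plane, center=np.array([0.5, 0.5, 0.0]), radius=0.5))
```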

at January 17, 2018 01:43 AM UTC

Authors: Stephan Held, Jochen Könemann, Jens Vygen
Download: PDF
Abstract: When delivering items to a set of destinations, one can save time and cost by passing a subset to a sub-contractor at any point en route. We consider a model where a set of items are initially loaded in one vehicle and should be distributed before a given deadline $\Delta$. In addition to travel time and time for deliveries, we assume that there is a fixed delay for handing over an item from one vehicle to another.

We will show that it is easy to decide whether an instance is feasible, i.e., whether it is possible to deliver all items before the deadline $\Delta$. We then consider computing a feasible tour of minimum cost, where we incur a cost per unit distance traveled by the vehicles, and a setup cost for every used vehicle. Our problem arises in practical applications and generalizes classical problems such as shallow-light trees and the bounded-latency problem.

Our main result is a polynomial-time algorithm that, for any given $\epsilon > 0$ and any feasible instance, computes a solution that delivers all items before time $(1+\epsilon)\Delta$ and has cost $O(1 + 1/\epsilon) \cdot \mathrm{OPT}$, where OPT is the minimum cost of any feasible solution.

We show that our result is best possible in the sense that any improvement would lead to progress on 25-year-old questions on shallow-light trees.

at January 17, 2018 01:42 AM UTC

Authors: Jose Bento, Surjyendu Ray
Download: PDF
Abstract: The solution path of the 1D fused lasso for an $n$-dimensional input is piecewise linear with $\mathcal{O}(n)$ segments (Hoefling et al. 2010 and Tibshirani et al 2011). However, existing proofs of this bound do not hold for the weighted fused lasso. At the same time, results for the generalized lasso, of which the weighted fused lasso is a special case, allow $\Omega(3^n)$ segments (Mairal et al. 2012). In this paper, we prove that the number of segments in the solution path of the weighted fused lasso is $\mathcal{O}(n^3)$, and for some instances $\Omega(n^2)$. We also give a new, very simple, proof of the $\mathcal{O}(n)$ bound for the fused lasso.

at January 17, 2018 01:41 AM UTC

The et al. citation style favors scholars whose last name comes early in the dictionary. For example, other things equal, a last name like Aaron would circulate a lot more than Zuck. This problem is compounded by the existence of highly-cited papers which deviate from alphabetical ordering of authors. They carry the message: order matters, and some of you can’t use this trick, vae victis!

My suggestion is to avoid et al. and instead spell out every name (as in Aaron and Zuck) or every initial (as in AZ). It isn’t perfect, but improvements like randomly permuting the order still aren’t easy to implement. The suggestion actually cannot be implemented in journals like computational complexity which punish the authors into using an idiosyncratic style which has et al. But it doesn’t matter too much; nobody reads papers in those formats anyway, as we discussed several times.

by Emanuele at January 16, 2018 09:48 PM UTC

The Mathematics of AI group at the IBM T.J. Watson Research Center is accepting applications for a summer intern position from PhD students interested in the theory of computation. Applicants should send materials electronically to konak@us.ibm.com, including a CV and short statement of interests. New applications are reviewed until the position is filled.

Website: https://www.research.ibm.com/
Email: konak@us.ibm.com

by shacharlovett at January 16, 2018 09:46 PM UTC

Celebrating Donald Knuth's 80th birthday, or 80 years + 7 days birthday, seems odd. Should we use powers of 2? Hmm- too few, just 32 and 64 really. And having a 32-year celebration for someone is unusual. How about numbers that end in 0 in base 8? 64 would be 100, 72 would be 110, 80 would be 120, so AH- we would be celebrating! So let's Celebrate!

LANCE: One of us should blog about Don Knuth turning 80.

BILL: How about both of us, sep posts?

Lance had his post here. Now its my turn.

Donald Knuth was the first person (roughly- history is not as definite as mathematics) to use mathematics to analyse algorithms. This may seem like an obvious thing to do but that's easy to say in retrospect. And he never lost focus: He may have to use some hard math to analyze algorithms but his goal always was to analyze algorithms. He didn't get fascinated with some obscure aspect of math like, say, surreal numbers, and go write a book on it. Oh wait, he did (see here). But, for the most part, everything he did was towards faster algorithms and faster typesetting.

In an early draft of a book review of Companion to the Papers of Donald Knuth I wrote at the end of it

 Donald Knuth has been called the father of Theoretical Computer Science and hence his thoughts are worth knowing

I usually email my reviews to the author so they can make sure I didn't say anything stupid or in some very rare cases have a rebuttal. In Knuth's case I postal mailed it (ask your grandparents what that is) since he does not use email. He mailed back the hardcopy with some corrections in pencil. One of them was to cross out

father of theoretical computer science

and replace it with

father of algorithmic analysis.

I made the correction. That is how he views himself and it would be odd to argue the point.

As a tribute to him I have gathered up all of the book reviews  in my column of books by him (between 10 and 12 depending on how you count) and books clearly inspired by him (1 book- A=B) into one file which I point to  here




by GASARCH (noreply@blogger.com) at January 16, 2018 04:41 PM UTC

The University of Mons announces the opening of a full-time tenure-track faculty position at the assistant professor level. This position has a start date of September 1, 2018. See the website for more information.

Website: http://informatique.umons.ac.be/index.php?p=jobs
Email: veronique.bruyere@umons.ac.be

by shacharlovett at January 16, 2018 04:14 PM UTC


Authors: Alexander Barvinok
Download: PDF
Abstract: We prove that for any $\lambda > 1$, fixed in advance, the permanent of an $n \times n$ complex matrix, where the absolute value of each diagonal entry is at least $\lambda$ times bigger than the sum of the absolute values of all other entries in the same row, can be approximated within any relative error $0 < \epsilon < 1$ in quasi-polynomial $n^{O(\ln n - \ln \epsilon)}$ time. We extend this result to multidimensional permanents of tensors and discuss its application to weighted counting of perfect matchings in hypergraphs.

at January 15, 2018 01:40 AM UTC

The approximate degree of a Boolean function $f(x_{1},x_{2},\ldots,x_{n})$ is the minimum degree of a real polynomial that approximates $f$ pointwise within $1/3$. Upper bounds on approximate degree have a variety of applications in learning theory, differential privacy, and algorithm design in general. Nearly all known upper bounds on approximate degree arise in an existential manner from bounds on quantum query complexity. We develop a first-principles, classical approach to the polynomial approximation of Boolean functions. We use it to give the first constructive upper bounds on the approximate degree of several fundamental problems:

- $O\bigl(n^{\frac{3}{4}-\frac{1}{4(2^{k}-1)}}\bigr)$ for the $k$-element distinctness problem;

- $O(n^{1-\frac{1}{k+1}})$ for the $k$-subset sum problem;

- $O(n^{1-\frac{1}{k+1}})$ for any $k$-DNF or $k$-CNF formula;

- $O(n^{3/4})$ for the surjectivity problem.

In all cases, we obtain explicit, closed-form approximating polynomials that are unrelated to the quantum arguments from previous work. Our first three results match the bounds from quantum query complexity. Our fourth result improves polynomially on the $\Theta(n)$ quantum query complexity of the problem and refutes the conjecture by several experts that surjectivity has approximate degree $\Omega(n)$. In particular, we exhibit the first natural problem with a polynomial gap between approximate degree and quantum query complexity.

at January 14, 2018 10:33 PM UTC