Theory of Computing Blog Aggregator

Authors: Stacey Jeffery, François Le Gall
Download: PDF
Abstract: Computing set joins of two inputs is a common task in database theory. Recently, Van Gucht, Williams, Woodruff and Zhang [PODS 2015] considered the complexity of such problems in the natural model of (classical) two-party communication complexity and obtained tight bounds for the complexity of several important distributed set joins.

In this paper we initiate the study of the *quantum* communication complexity of distributed set joins. We design a quantum protocol for distributed Boolean matrix multiplication, which corresponds to computing the composition join of two databases, showing that the product of two $n\times n$ Boolean matrices, each owned by one of two respective parties, can be computed with $\widetilde{O}(\sqrt{n}\ell^{3/4})$ qubits of communication, where $\ell$ denotes the number of non-zero entries of the product. Since Van Gucht et al. showed that the classical communication complexity of this problem is $\widetilde{\Theta}(n\sqrt{\ell})$, our quantum algorithm outperforms classical protocols whenever the output matrix is sparse. We also show a quantum lower bound and a matching classical upper bound on the communication complexity of distributed matrix multiplication over $\mathbb{F}_2$.

Besides their applications to database theory, the communication complexity of set joins is interesting due to its connections to direct product theorems in communication complexity. In this work we also introduce a notion of *all-pairs* product theorem, and relate this notion to standard direct product theorems in communication complexity.
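
As a concrete reference for the first paragraph (an illustrative sketch, not the paper's protocol): the composition join of two binary relations is exactly the Boolean product of their adjacency matrices, and $\ell$ counts the non-zero entries of that product. The function and variable names below are ours, purely for illustration.

    # Illustrative sketch (not the paper's protocol): the composition join of two
    # binary relations on [n] equals the Boolean product of their n x n adjacency
    # matrices; ell below is the output-sparsity parameter from the abstract.
    def boolean_product(A, B):
        """C[i][j] = OR over k of (A[i][k] AND B[k][j])."""
        n = len(A)
        C = [[0] * n for _ in range(n)]
        for i in range(n):
            for k in range(n):
                if A[i][k]:
                    for j in range(n):
                        if B[k][j]:
                            C[i][j] = 1
        return C

    A = [[1, 1], [0, 0]]   # relation {(0,0), (0,1)}
    B = [[0, 0], [1, 0]]   # relation {(1,0)}
    C = boolean_product(A, B)
    ell = sum(sum(row) for row in C)
    print(C, ell)          # [[1, 0], [0, 0]] 1 -- (0,1) composed with (1,0) gives (0,0)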

at August 24, 2016 01:01 AM UTC

Authors: Yakov Babichenko, Aviad Rubinstein
Download: PDF
Abstract: For a constant $\epsilon$, we prove a poly(N) lower bound on the communication complexity of $\epsilon$-Nash equilibrium in two-player NxN games. For n-player binary-action games we prove an exp(n) lower bound for the communication complexity of $(\epsilon,\epsilon)$-weak approximate Nash equilibrium, which is a profile of mixed actions such that at least $(1-\epsilon)$-fraction of the players are $\epsilon$-best replying.

at August 24, 2016 01:00 AM UTC

Authors: Sjoerd Dirksen, Alexander Stollenwerk
Download: PDF
Abstract: We consider the problem of encoding a finite set of vectors into a small number of bits while approximately retaining information on the angular distances between the vectors. By deriving improved variance bounds related to binary Gaussian circulant embeddings, we largely fix a gap in the proof of the best known fast binary embedding method. Our bounds also show that well-spreadness assumptions on the data vectors, which were needed in earlier work on variance bounds, are unnecessary. In addition, we propose a new binary embedding with a faster running time on sparse data.

at August 24, 2016 01:02 AM UTC

Authors: Loukas Georgiadis, Aikaterini Karanasiou, Giannis Konstantinos, Luigi Laura
Download: PDF
Abstract: A flow graph $G=(V,E,s)$ is a directed graph with a distinguished start vertex $s$. The dominator tree $D$ of $G$ is a tree rooted at $s$, such that a vertex $v$ is an ancestor of a vertex $w$ if and only if all paths from $s$ to $w$ include $v$. The dominator tree is a central tool in program optimization and code generation and has many applications in other diverse areas including constraint programming, circuit testing, biology, and in algorithms for graph connectivity problems. A low-high order of $G$ is a preorder $\delta$ of $D$ that certifies the correctness of $D$ and has further applications in connectivity and path-determination problems. In this paper, we first consider how to efficiently maintain a low-high order of a flow graph incrementally under edge insertions. We present algorithms that run in $O(mn)$ total time for a sequence of $m$ edge insertions in an initially empty flow graph with $n$ vertices. These immediately provide the first incremental certifying algorithms for maintaining the dominator tree in $O(mn)$ total time, and also imply incremental algorithms for other problems. Hence, we provide a substantial improvement over the $O(m^2)$ simple-minded algorithms, which recompute the solution from scratch after each edge insertion. We also show how to apply low-high orders to obtain a linear-time $2$-approximation algorithm for the smallest $2$-vertex-connected spanning subgraph problem (2VCSS). Finally, we present efficient implementations of our new algorithms for the incremental low-high and 2VCSS problems and conduct an extensive experimental study on real-world graphs taken from a variety of application areas. The experimental results show that our algorithms perform very well in practice.

at August 24, 2016 01:02 AM UTC

Authors: Vida Dujmović, Fabrizio Frati
Download: PDF
Abstract: It is known that every proper minor-closed class of graphs has bounded stack-number (a.k.a. book thickness and page number). While this includes notable graph families such as planar graphs and graphs of bounded genus, many other graph families are not closed under taking minors. For fixed $g$ and $k$, we show that every $n$-vertex graph that can be embedded on a surface of genus $g$ with at most $k$ crossings per edge has stack-number $\mathcal{O}(\log n)$; this includes $k$-planar graphs. The previously best known bound for the stack-number of these families was $\mathcal{O}(\sqrt{n})$, except in the case of $1$-planar graphs. Analogous results are proved for map graphs that can be embedded on a surface of fixed genus. None of these families is closed under taking minors. The main ingredient in the proof of these results is a construction proving that $n$-vertex graphs that admit constant layered separators have $\mathcal{O}(\log n)$ stack-number.

at August 24, 2016 12:00 AM UTC

Authors: János Balogh, József Békési, György Dósa, Leah Epstein, Asaf Levin
Download: PDF
Abstract: Cardinality constrained bin packing or bin packing with cardinality constraints is a basic bin packing problem. In the online version with the parameter $k \geq 2$, items having sizes in $(0,1]$ associated with them are presented one by one to be packed into unit capacity bins, such that the capacities of bins are not exceeded, and no bin receives more than $k$ items. We resolve the online problem in the sense that we prove a lower bound of 2 on the overall asymptotic competitive ratio. This closes this long-standing open problem, since an algorithm with absolute competitive ratio 2 is known. Additionally, we significantly improve the known lower bounds on the asymptotic competitive ratio for every specific value of $k$. The novelty of our constructions is based on full adaptivity that creates large gaps between item sizes. Thus, our lower bound inputs do not follow the common practice for online bin packing problems of having an input, known in advance, consisting of batches for which the algorithm needs to be competitive on every prefix of the input.

at August 24, 2016 01:03 AM UTC

Authors: T-H. Hubert Chan, Shuguang Hu, Shaofeng H.-C. Jiang
Download: PDF
Abstract: We achieve a (randomized) polynomial-time approximation scheme (PTAS) for the Steiner Forest Problem in doubling metrics. Before our work, a PTAS was given only for the Euclidean plane in [FOCS 2008: Borradaile, Klein and Mathieu]. Our PTAS also shares similarities with the dynamic programming for sparse instances used in [STOC 2012: Bartal, Gottlieb and Krauthgamer] and [SODA 2016: Chan and Jiang]. However, extending previous approaches requires overcoming several non-trivial hurdles, and we make the following technical contributions.

(1) We prove a technical lemma showing that Steiner points have to be "near" the terminals in an optimal Steiner tree. This enables us to define a heuristic to estimate the local behavior of the optimal solution, even though the Steiner points are unknown in advance. This lemma also generalizes previous results in the Euclidean plane, and may be of independent interest for related problems involving Steiner points.

(2) We develop a novel algorithmic technique known as "adaptive cells" to overcome the difficulty of keeping track of multiple components in a solution. Our idea is based on but significantly different from the previously proposed "uniform cells" in the FOCS 2008 paper, whose techniques cannot be readily applied to doubling metrics.

at August 24, 2016 01:02 AM UTC

Authors: Gil Cohen, Thomas Vidick
Download: PDF
Abstract: Privacy amplification is the task by which two cooperating parties transform a shared weak secret, about which an eavesdropper may have side information, into a uniformly random string uncorrelated from the eavesdropper. Privacy amplification against passive adversaries, where it is assumed that the communication is over a public but authenticated channel, can be achieved in the presence of classical as well as quantum side information by a single-message protocol based on strong extractors.

In 2009 Dodis and Wichs devised a two-message protocol to achieve privacy amplification against active adversaries, where the public communication channel is no longer assumed to be authenticated, through the use of a strengthening of strong extractors called non-malleable extractors which they introduced. Dodis and Wichs only analyzed the case of classical side information.

We consider the task of privacy amplification against active adversaries with quantum side information. Our main result is showing that the Dodis-Wichs protocol remains secure in this scenario provided its main building block, the non-malleable extractor, satisfies a notion of quantum-proof non-malleability which we introduce. We show that an adaptation of a recent construction of non-malleable extractors due to Chattopadhyay et al. is quantum proof, thereby providing the first protocol for privacy amplification that is secure against active quantum adversaries. Our protocol is quantitatively comparable to the near-optimal protocols known in the classical setting.

at August 24, 2016 01:01 AM UTC

Various combinatorial/algebraic parameters are used to quantify the complexity of a Boolean function. Among them, sensitivity is one of the simplest and block sensitivity is one of the most useful. Nisan (1989) and Nisan and Szegedy (1991) showed that block sensitivity and several other parameters, such as certificate complexity, decision tree depth, and degree over R, are all polynomially related to one another. The sensitivity conjecture states that there is also a polynomial relationship between sensitivity and block sensitivity, thus supplying the "missing link". Since its introduction in 1991, the sensitivity conjecture has remained a challenging open question in the study of Boolean functions. One natural approach is to prove it for special classes of functions. For instance, the conjecture is known to be true for monotone functions, symmetric functions, and functions describing graph properties. In this paper, we consider the conjecture for Boolean functions computable by read-k formulas. A read-k formula is a tree in which each variable appears at most k times among the leaves and has Boolean gates at its internal nodes. We show that the sensitivity conjecture holds for read-once formulas with gates computing symmetric functions. We next consider regular formulas with OR and AND gates. A formula is regular if it is a leveled tree with all gates at a given level having the same fan-in and computing the same function. We prove the sensitivity conjecture for constant depth regular read-k formulas for constant k.
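
For reference, the two central quantities can be stated operationally; the following brute-force sketch (ours, exponential-time, for tiny n only) computes the sensitivity and block sensitivity of a Boolean function given as a truth-table oracle.

    # Brute-force sketch (ours, for illustration; exponential in n) of sensitivity
    # and block sensitivity. Inputs x are bitmasks in {0, ..., 2^n - 1}.
    def sensitivity(f, n):
        """s(f): max over x of the number of single-bit flips that change f(x)."""
        return max(sum(f(x) != f(x ^ (1 << i)) for i in range(n))
                   for x in range(1 << n))

    def block_sensitivity(f, n):
        """bs(f): max over x of the max number of pairwise disjoint blocks B
        with f(x ^ B) != f(x)."""
        best = 0
        for x in range(1 << n):
            blocks = [B for B in range(1, 1 << n) if f(x ^ B) != f(x)]
            def pack(avail_mask, chosen):
                nonlocal best
                best = max(best, chosen)
                for B in blocks:
                    if B & ~avail_mask == 0:       # B uses only still-available bits
                        pack(avail_mask & ~B, chosen + 1)
            pack((1 << n) - 1, 0)
        return best

    # Example: the 3-bit OR function has s = bs = 3 (attained at the all-zero input).
    OR3 = lambda x: int(x != 0)
    print(sensitivity(OR3, 3), block_sensitivity(OR3, 3))   # 3 3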

at August 23, 2016 09:24 PM UTC

Authors: Manfred Cochefert, Jean-François Couturier, Petr A. Golovach, Daniël Paulusma, Anthony Stewart
Download: PDF
Abstract: A graph H is a square root of a graph G if G can be obtained from H by adding an edge between any two vertices in H that are at distance 2. The Square Root problem is that of deciding whether a given graph admits a square root. So far, this problem is known to be NP-complete for chordal graphs, and to be polynomial-time solvable for non-trivial minor-closed graph classes and for a very limited number of other graph classes. We prove that Square Root is O(n)-time solvable for graphs of maximum degree 5 and O(n^4)-time solvable for graphs of maximum degree at most 6.

at August 23, 2016 01:07 AM UTC

Authors: Petr A. Golovach, Dieter Kratsch, Daniël Paulusma, Anthony Stewart
Download: PDF
Abstract: A graph H is a square root of a graph G if G can be obtained from H by the addition of edges between any two vertices in H that are of distance 2 from each other. The Square Root problem is that of deciding whether a given graph admits a square root. We consider this problem for planar graphs in the context of the "distance from triviality" framework. For an integer k, a planar+kv graph (or k-apex graph) is a graph that can be made planar by the removal of at most k vertices. We prove that a generalization of Square Root, in which some edges are prescribed to be either in or out of any solution, has a kernel of size O(k) for planar+kv graphs, when parameterized by k. Our result is based on a new edge reduction rule which, as we shall also show, has a wider applicability for the Square Root problem.

at August 23, 2016 01:01 AM UTC

Authors: Jop Briët, Jeroen Zuiddam
Download: PDF
Abstract: After Bob sends Alice a bit, she responds with a lengthy reply. At the cost of a factor of two in the total communication, Alice could just as well have given the two possible replies without listening and have Bob select which applies to him. Motivated by a conjecture stating that this form of "round elimination" is impossible in exact quantum communication complexity, we study the orthogonal rank and a symmetric variant thereof for a certain family of Cayley graphs. The orthogonal rank of a graph is the smallest number $d$ for which one can label each vertex with a nonzero $d$-dimensional complex vector such that adjacent vertices receive orthogonal vectors.

We show an $\exp(n)$ lower bound on the orthogonal rank of the graph on $\{0,1\}^n$ in which two strings are adjacent if they have Hamming distance at least $n/2$. In combination with previous work, this implies an affirmative answer to the above conjecture.

at August 23, 2016 01:00 AM UTC

Authors: Veit Wiechert
Download: PDF
Abstract: A queue layout of a graph consists of a linear order on the vertices and an assignment of the edges to queues, such that no two edges in a single queue are nested. The minimum number of queues needed in a queue layout of a graph is called its queue-number.
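
To make the nesting condition concrete, here is a small checking sketch (ours, not from the paper); the function and variable names are illustrative.

    # Sketch (not from the paper): check whether an assignment of edges to queues
    # is a valid queue layout for a given linear order on the vertices.
    # Two edges nest if one lies strictly inside the other in the vertex order.
    def is_queue_layout(order, edge_queues):
        """order: list of vertices; edge_queues: dict mapping edge (u, v) -> queue index."""
        pos = {v: i for i, v in enumerate(order)}
        by_queue = {}
        for (u, v), q in edge_queues.items():
            a, b = sorted((pos[u], pos[v]))
            by_queue.setdefault(q, []).append((a, b))
        for edges in by_queue.values():
            for i, (a, b) in enumerate(edges):
                for (c, d) in edges[i + 1:]:
                    if (a < c and d < b) or (c < a and b < d):   # strict nesting
                        return False
        return True

    path = {(0, 1): 0, (1, 2): 0, (2, 3): 0}
    print(is_queue_layout([0, 1, 2, 3], path))                   # True: one queue suffices
    print(is_queue_layout([0, 1, 2, 3], {**path, (0, 3): 0}))    # False: (1,2) nests inside (0,3)
    print(is_queue_layout([0, 1, 2, 3], {**path, (0, 3): 1}))    # True: a second queue avoids nesting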

We show that for each $k\geq1$, graphs with tree-width at most $k$ have queue-number at most $2^k-1$. This improves upon double exponential upper bounds due to Dujmovi\'c et al. and Giacomo et al. As a consequence we obtain that these graphs have track-number at most $2^{O(k^2)}$.

We complement these results by a construction of $k$-trees that have queue-number at least $k+1$. Already in the case $k=2$ this improves existing results and solves a problem of Rengarajan and Veni Madhavan, showing that the maximal queue-number of $2$-trees is equal to $3$.

at August 23, 2016 01:13 AM UTC

Authors: Clément Maria, Steve Oudot
Download: PDF
Abstract: Zigzag persistent homology is a powerful generalisation of persistent homology that allows one not only to compute persistence diagrams with less noise and using less memory, but also to use persistence in new fields of application. However, due to the increase in complexity of the algebraic treatment of the theory, most algorithmic results in the field have remained of theoretical nature.

This article describes an efficient algorithm to compute zigzag persistence, emphasising its practical interest. The algorithm is a zigzag persistent cohomology algorithm, based on the dualisation of reflection and transposition transformations within the zigzag sequence.

We provide an extensive experimental study of the algorithm. We study the algorithm along two directions. First, we compare its performance with the zigzag persistent homology algorithm and show the interest of cohomology in zigzag persistence. Second, we illustrate the interest of zigzag persistence in topological data analysis by comparing it to state-of-the-art methods in the field, specifically optimised algorithms for standard persistent homology and sparse filtrations. We compare the memory and time complexities of the different algorithms, as well as the quality of the output persistence diagrams.

at August 23, 2016 01:13 AM UTC

Authors: Aleksander Madry
Download: PDF
Abstract: We present an $\tilde{O}\left(m^{\frac{10}{7}}U^{\frac{1}{7}}\right)$-time algorithm for the maximum $s$-$t$ flow problem and the minimum $s$-$t$ cut problem in directed graphs with $m$ arcs and largest integer capacity $U$. This matches the running time of the $\tilde{O}\left((mU)^{\frac{10}{7}}\right)$-time algorithm of M\k{a}dry (FOCS 2013) in the unit-capacity case, and improves over it, as well as over the $\tilde{O}\left(m \sqrt{n} \log U\right)$-time algorithm of Lee and Sidford (FOCS 2014), whenever $U$ is moderately large and the graph is sufficiently sparse. By well-known reductions, this also gives similar running time improvements for the maximum-cardinality bipartite $b$-matching problem.

One of the advantages of our algorithm is that it is significantly simpler than the ones presented in Madry (FOCS 2013) and Lee and Sidford (FOCS 2014). In particular, these algorithms employ a sophisticated interior-point method framework, while our algorithm is cast directly in the classic augmenting path setting that almost all the combinatorial maximum flow algorithms use. At a high level, the presented algorithm takes a primal-dual approach in which each iteration uses electrical flow computations both to find an augmenting $s$-$t$ flow in the current residual graph and to update the dual solution. We show that by maintaining a certain careful coupling of these primal and dual solutions we are always guaranteed to make significant progress.

at August 23, 2016 01:12 AM UTC

Authors: Kyle Headley, Matthew A. Hammer
Download: PDF
Abstract: We introduce the Random Access Zipper (RAZ), a simple, purely-functional data structure for editable sequences. A RAZ combines the structure of a zipper with that of a tree: like a zipper, edits at the cursor require constant time; by leveraging tree structure, relocating the edit cursor in the sequence requires logarithmic time. While existing data structures provide these time bounds, none do so with the same simplicity and brevity of code as the RAZ. The simplicity of the RAZ provides the opportunity for more programmers to extend the structure to their own needs, and we provide some suggestions for how to do so.
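
Since the abstract describes the RAZ only informally, here is a minimal sketch of the classical two-list zipper that the RAZ refines (ours, imperative for brevity; the actual RAZ is purely functional and replaces the two sides with trees so that relocating the cursor takes logarithmic rather than linear time).

    # Minimal sketch of a plain list zipper for an editable sequence (illustrative,
    # not the RAZ itself): edits at the cursor are O(1); moving the cursor costs
    # O(1) per step, so relocating it is linear. The RAZ backs the two sides with
    # balanced trees to make cursor relocation logarithmic.
    class Zipper:
        def __init__(self):
            self.left = []      # elements before the cursor (top = nearest)
            self.right = []     # elements after the cursor (top = nearest)

        def insert(self, x):    # O(1) edit at the cursor
            self.left.append(x)

        def delete(self):       # O(1) edit at the cursor
            if self.left:
                self.left.pop()

        def move_right(self):   # O(1) per step; k steps cost O(k)
            if self.right:
                self.left.append(self.right.pop())

        def move_left(self):
            if self.left:
                self.right.append(self.left.pop())

        def to_list(self):
            return self.left + self.right[::-1]

    z = Zipper()
    for ch in "abc":
        z.insert(ch)
    z.move_left(); z.insert("X")
    print(z.to_list())   # ['a', 'b', 'X', 'c']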

at August 23, 2016 01:11 AM UTC

Authors: Mark Cooke, Chris North, Megan Dewar, Brett Stevens
Download: PDF
Abstract: In this paper we discuss a natural mathematical structure that is derived from Samuel Beckett's play "Quad". This structure is called a binary Beckett-Gray code. Our goal is to formalize the definition of a binary Beckett-Gray code and to present the work done to date. In addition, we describe the methodology used to obtain enumeration results for binary Beckett-Gray codes of order $n = 6$ and existence results for binary Beckett-Gray codes of orders $n = 7,8$. We include an estimate, using Knuth's method, for the size of the exhaustive search tree for $n=7$. Beckett-Gray codes can be realized as successive states of a queue data structure. We show that the binary reflected Gray code can be realized as successive states of two stack data structures.
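
For reference, the binary reflected Gray code mentioned in the last sentence can be generated with the standard formula $g(i) = i \oplus \lfloor i/2 \rfloor$; the sketch below (ours) produces the codewords but does not reproduce the paper's two-stack realization.

    # Sketch: the binary reflected Gray code of order n via g(i) = i XOR (i >> 1);
    # successive codewords differ in exactly one bit. (The two-stack realization
    # described in the abstract is not reproduced here.)
    def reflected_gray_code(n):
        return [i ^ (i >> 1) for i in range(1 << n)]

    for g in reflected_gray_code(3):
        print(format(g, "03b"))
    # 000, 001, 011, 010, 110, 111, 101, 100 -- one bit changes at each step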

at August 23, 2016 01:12 AM UTC

Authors: Hector Zenil, Narsis Kiani
Download: PDF
Abstract: A common practice in the estimation of the complexity of objects, in particular of graphs, is to rely on graph- and information-theoretic measures. Here, using integer sequences with properties such as Borel normality, we explain how these measures are not independent of the way in which a single object, such as a graph, can be described. From descriptions that can reconstruct the same graph and are therefore essentially translations of the same description, we will see that not only is it necessary to pre-select a feature of interest where there is one when applying a computable measure such as Shannon Entropy, and to make an arbitrary selection where there is not, but also that more general properties, such as the causal likeliness of a graph as a measure (as opposed to randomness), can be largely misrepresented by computable measures such as Entropy and Entropy rate. We introduce recursive and non-recursive (uncomputable) graphs and graph constructions based on integer sequences, whose different lossless descriptions have disparate Entropy values, thereby enabling the study and exploration of a measure's range of applications and demonstrating the weaknesses of computable measures of complexity.

at August 23, 2016 01:01 AM UTC

Authors: Mingyu Xiao
Download: PDF
Abstract: Graph separation and partitioning are fundamental problems that have been extensively studied both in theory and practice. The \textsc{$p$-Size Separator} problem, closely related to the \textsc{Balanced Separator} problem, is to check whether we can delete at most $k$ vertices in a given graph $G$ such that each connected component of the remaining graph has at most $p$ vertices. This problem is NP-hard for each fixed integer $p\geq 1$ and it becomes the famous \textsc{Vertex Cover} problem when $p=1$. It is known that the problem with parameter $k$ is W[1]-hard when $p$ is not fixed. In this paper, we prove a kernel of $O(pk)$ vertices for this problem, i.e., a linear vertex kernel for each fixed $p \geq 1$. In fact, we first obtain an $O(p^2k)$ vertex kernel by using a nontrivial extension of the expansion lemma. Then we further reduce the kernel size to $O(pk)$ by using some `local adjustment' techniques. Our proofs are based on extremal combinatorial arguments and the main result can be regarded as a generalization of Nemhauser and Trotter's theorem for the \textsc{Vertex Cover} problem. These techniques can potentially be used to improve kernel sizes for more problems, especially problems with kernelization algorithms based on techniques similar to the expansion lemma or crown decompositions.

at August 23, 2016 01:01 AM UTC

In 2012 a Professor of Divinity at Harvard, Karen King, announced that she had a fragment that seemed to indicate that Jesus had a wife. It was later found to be fake. The article that really showed it was a fake was in the Atlantic Monthly here. A Christian publication called Breakpoint told the story here.

When I read a story about person X being proven wrong, the question uppermost in my mind is: how did X react? If they retract, then they still have my respect and can keep on doing whatever work they were doing. If they dig in their heels and insist they are still right, or that a minor fix will make the proof correct (more common in our area than in history), then they lose all my respect.

The tenth paragraph has the following:


Within days of the article’s publication, King admitted that the fragment is probably a forgery. Even more damaging, she told Sabar that “I haven’t engaged the provenance questions at all” and that she was “not particularly” interested in what he had discovered.


Dr. King should have been more careful and more curious initially (though hindsight is wonderful). However, her admitting it was probably a forgery (probably?) is... okay. I wish she were more definite in her admission, but... I've seen far worse.

A good scholar will admit when they are wrong. A good scholar will look at the evidence and be prepared to change their minds.

Does Breakpoint itself do this when discussing homosexuality or evolution or global warming? I leave that to the reader.

However, my major point is that the difference between a serious scientist and a crank is what one does when confronted with evidence that one is wrong.


by GASARCH (noreply@blogger.com) at August 22, 2016 08:25 PM UTC

Authors: Timo Bingmann, Michael Axtmann, Emanuel Jöbstl, Sebastian Lamm, Huyen Chau Nguyen, Alexander Noe, Sebastian Schlag, Matthias Stumpp, Tobias Sturm, Peter Sanders
Download: PDF
Abstract: We present the design and a first performance evaluation of Thrill -- a prototype of a general purpose big data processing framework with a convenient data-flow style programming interface. Thrill is somewhat similar to Apache Spark and Apache Flink with at least two main differences. First, Thrill is based on C++ which enables performance advantages due to direct native code compilation, a more cache-friendly memory layout, and explicit memory management. In particular, Thrill uses template meta-programming to compile chains of subsequent local operations into a single binary routine without intermediate buffering and with minimal indirections. Second, Thrill uses arrays rather than multisets as its primary data structure which enables additional operations like sorting, prefix sums, window scans, or combining corresponding fields of several arrays (zipping). We compare Thrill with Apache Spark and Apache Flink using five kernels from the HiBench suite. Thrill is consistently faster and often several times faster than the other frameworks. At the same time, the source codes have a similar level of simplicity and abstraction.

at August 22, 2016 01:00 AM UTC

Authors: Yuto Nakashima, Hiroe Inoue, Takuya Mieno, Shunsuke Inenaga, Hideo Bannai, Masayuki Takeda
Download: PDF
Abstract: A palindrome is a string that reads the same forward and backward. A palindromic substring $P$ of a string $S$ is called a shortest unique palindromic substring ($\mathit{SUPS}$) for an interval $[x, y]$ in $S$, if $P$ occurs exactly once in $S$, this occurrence of $P$ contains interval $[x, y]$, and every palindromic substring of $S$ which contains interval $[x, y]$ and is shorter than $P$ occurs at least twice in $S$. The $\mathit{SUPS}$ problem is, given a string $S$, to preprocess $S$ so that for any subsequent query interval $[x, y]$ all the $\mathit{SUPS}\mbox{s}$ for interval $[x, y]$ can be answered quickly. We present an optimal solution to this problem. Namely, we show how to preprocess a given string $S$ of length $n$ in $O(n)$ time and space so that all $\mathit{SUPS}\mbox{s}$ for any subsequent query interval can be answered in $O(k+1)$ time, where $k$ is the number of outputs.
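
To illustrate the definition only (nowhere near the paper's $O(n)$-space, $O(k+1)$-query solution), here is a naive enumeration sketch; the names are ours.

    # Naive sketch of the SUPS definition (illustration only). Positions are
    # 0-based and the query interval [x, y] is inclusive.
    def occurrences(S, P):
        """Number of (possibly overlapping) occurrences of P in S."""
        return sum(S.startswith(P, i) for i in range(len(S) - len(P) + 1))

    def all_sups(S, x, y):
        candidates = []
        for i in range(x + 1):
            for j in range(y, len(S)):              # S[i..j] contains [x, y]
                P = S[i:j + 1]
                if P == P[::-1] and occurrences(S, P) == 1:
                    candidates.append((i, j))       # unique palindromic substring
        if not candidates:
            return []
        best = min(j - i for i, j in candidates)
        return [(i, j) for (i, j) in candidates if j - i == best]

    print(all_sups("abacabba", 3, 4))   # [(2, 4)]: "aca" is the unique shortest one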

at August 22, 2016 01:04 AM UTC

We prove that for every $n$ and $1 < t < n$ any $t$-out-of-$n$ threshold secret sharing scheme for one-bit secrets requires share size $\log(t + 1)$. Our bound is tight when $t = n - 1$ and $n$ is a prime power. In 1990 Kilian and Nisan proved the incomparable bound $\log(n - t + 2)$. Taken together, the two bounds imply that the share size of Shamir's secret sharing scheme (Comm. ACM '79) is optimal up to an additive constant even for one-bit secrets for the whole range of parameters $1 < t < n$. More generally, we show that for all $1 < s < r < n$, any ramp secret sharing scheme with secrecy threshold $s$ and reconstruction threshold $r$ requires share size $\log((r + 1)/(r - s))$. As part of our analysis we formulate a simple game-theoretic relaxation of secret sharing for arbitrary access structures. We prove the optimality of our analysis for threshold secret sharing with respect to this method and point out a general limitation.
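
A worked instance of how the two bounds combine (illustrative numbers, not taken from the paper): every $t$-out-of-$n$ scheme for one-bit secrets needs share size at least

    \[ \max\bigl(\log(t+1),\ \log(n-t+2)\bigr). \]

For example, for $n = 7$ and $t = 4$ this is $\max(\log 5, \log 5) \approx 2.32$ bits, while Shamir's scheme over $\mathbb{F}_8$ uses $3$-bit shares, consistent with the "optimal up to an additive constant" statement above.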

at August 21, 2016 08:44 PM UTC

We prove tight network-topology-dependent bounds on the round complexity of computing well-studied $k$-party functions such as set disjointness and element distinctness. Unlike the usual case in the CONGEST model in distributed computing, we fix the function and then vary the underlying network topology. This complements the recent such results on total communication that have received some attention. We also present some applications to distributed graph computation problems. Our main contribution is a proof technique that allows us to reduce the problem on a general graph topology to a relevant two-party communication complexity problem. However, unlike many previous works that also used the same high-level strategy, we do *not* reason about a two-party communication problem that is induced by a cut in the graph. To `stitch' back the various lower bounds from the two-party communication problems, we use the notion of a timed graph, which has seen prior use in network coding. Our reductions use some tools from Steiner tree packing and multi-commodity flow problems that have a delay constraint.

at August 19, 2016 02:45 PM UTC

Suppose we play the following game. We place a bunch of hexagonal game board tiles on a table, edge-to-edge, to form our playing field. On the field, we place two game pieces, a cop (blue) and a robber (red). The cop and robber take turns, either moving to an adjacent hex or passing (staying put). The cop wins if he can end a turn on the same hex as the robber. The robber wins by evading the cop forever. Who has the advantage?



It turns out to depend on what game board shapes are allowed. If the hexes of the board can completely surround a hole (a shape where one or more hexes could have been placed, but weren't) then the robber can win by keeping the hole between himself and the cop. But if there are no holes, then the cop always wins.

Probably there is a direct strategy that shows this, but it's also possible to prove that the cop wins by using the theory of cop-win graphs, graphs in which the cop wins a generalized version of this game, with the players moving on the vertices of a graph rather than the hexes of a game board. Cop-win graphs are the same as dismantlable graphs, the graphs that can be reduced to a single vertex by repeatedly removing a vertex whose closed neighborhood is a subset of another vertex's closed neighborhood. And the adjacency graphs of systems of hexes with no holes are always dismantlable.
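
The dismantling characterization lends itself to a direct test; the following is a textbook-style sketch (not code from this post; names are ours) that decides cop-win by repeatedly deleting a dominated vertex.

    # Sketch of the standard dismantlability test (a textbook cops-and-robbers fact,
    # not code from this post): a graph is cop-win iff we can repeatedly delete a
    # vertex whose closed neighborhood is contained in another vertex's closed
    # neighborhood, until a single vertex remains.
    def is_cop_win(adj):
        """adj: dict mapping vertex -> set of neighbors (undirected graph)."""
        adj = {v: set(nbrs) for v, nbrs in adj.items()}
        removed = True
        while len(adj) > 1 and removed:
            removed = False
            for v in list(adj):
                Nv = adj[v] | {v}
                if any(Nv <= (adj[u] | {u}) for u in adj if u != v):  # v is dominated
                    for u in adj[v]:
                        adj[u].discard(v)
                    del adj[v]
                    removed = True
                    break
        return len(adj) == 1

    # A 4-cycle is robber-win; adding a chord makes it cop-win.
    C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(is_cop_win(C4))                                           # False
    C4_chord = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
    print(is_cop_win(C4_chord))                                     # True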

To see this, consider the tree of 2-vertex-connected components of the adjacency graph of the hexes. For instance, the example above has five 2-connected components: the hexagonal shape formed by the seven hexes in the lower left, a big mass of 18 hexes in the center and right, and three single-hex components in the upper left. (Two of these single hexes are connected to the rest of the board only by a single edge; the other single-hex component lies between the two big components.) Choose arbitrarily a single leaf component of this tree (the 18-vertex component, or one of the two single-hex components connected to the rest by a single edge). If this component is a single hex, then its closed neighborhood is a two-hex set, consisting of itself and its one neighbor. In this case, its neighborhood is always a subset of its neighbor's closed neighborhood.

Otherwise, if the leaf component that you picked is a nontrivial 2-connected component, such as the 18-vertex component of the example, walk counterclockwise around its boundary. The angles that you turn always have to add up to 2pi, so you must have passed at least three points where your walk turns counterclockwise by an angle of pi/3 (a boundary hex adjacent to two others, such as the starting point of the cop in the example) or 2pi/3 (a boundary hex adjacent to three others). In particular, one of these three hexes is not the one that connects your component to the rest of the game board. If it has two neighbors, its neighborhood is a subset of either of its neighbors' neighborhoods. And if it has three neighbors, its neighborhood is a subset of its middle neighbor's neighborhood. So either way, we can find a hex to remove. By repeating this process, we can show that the adjacency graph of the hexes is always dismantlable. More strongly, it's dismantlable with any distinguished vertex (such as the cop's starting location) as its final vertex.

Unfortunately, I don't know of a simple explicit strategy for the cop, even on a polyhex board rather than a general graph. The cop's strategy in the publications on this subject is: remove a removable vertex, and then follow the optimal strategy (recursively) for the remaining graph, pretending that the robber is on the parent of the removed vertex whenever it is actually on the removed vertex. Then, when you think you've won according to this strategy, you will either have actually won, or achieved a position where the robber is on the removed vertex and you're on its parent (where you're pretending that the robber is). But in this case, you can win in one more move.

One way to visualize this strategy is to draw a tree representing the parent of each removed vertex, together with numbers indicating the order in which the vertices were removed:



Then, at each turn, add back one more vertex (in the reverse of the order that they were removed) and move to the lowest-numbered ancestor of the robber's current position among the ones that have been added back so far. For instance, in the position shown, to figure out what to do on your first move, you would add back hex 28, realize that the lowest-numbered ancestor of the robber's position is hex 29 (the one you're already on), and pass. After the robber moves, you would add back hex 27, and (since all moves for the robber land in the subtree of hex 27) move there. Etc.

But this strategy involves keeping track of a spanning tree, a numbering, and a current set of added-back tiles. Maybe for the hex board there's a simpler strategy based only on your position and the position of the robber?

And finally, what about other kinds of game tiles? Square tiles don't work with edge-to-edge adjacency: the robber can evade the cop by staying on the opposite side of a 4-cycle. But for squares with corner adjacency (and no holes) the cop can always win; for instance, consider cops and robbers that move like kings on a chessboard. Being planar with only three game tiles meeting at a corner isn't good enough for the cop to win: the robber always wins on a dodecahedron by staying as far as possible from the cop. Maybe some other polyforms than polyhexes will also allow the cop to always win.

at August 19, 2016 01:33 AM UTC

Authors: Steve Alpern, Thomas Lidbetter
Download: PDF
Abstract: We study the classical problem introduced by R. Isaacs and S. Gal of minimizing the time to find a hidden point $H$ on a network $Q$ moving from a known starting point. Rather than adopting the traditional continuous unit speed path paradigm, we use the ``expanding search'' paradigm recently introduced by the authors. Here the regions $S\left( t\right) $ that have been searched by time $t$ are increasing from the starting point and have total length $t$. Roughly speaking, the search follows a sequence of arcs $a_{i}$ such that each one starts at some point of an earlier one. This type of search is often carried out by real life search teams in the hunt for missing persons, escaped convicts, terrorists or lost airplanes. The paper which introduced this type of search solved the adversarial problem (where $H$ is hidden so as to take a long time to find) for the cases where $Q$ is a tree or is 2-arc-connected. This paper solves the game on some additional families of networks. However, the main contribution is to give strategy classes which can be used on any network and have expected search times which are within a factor close to 1 of the value of the game (minimax search time). We identify cases where our strategies are in fact optimal.

at August 19, 2016 01:02 AM UTC

Authors: David A. Cohen, Martin C. Cooper, Peter G. Jeavons, Stanislav Zivny
Download: PDF
Abstract: The binary Constraint Satisfaction Problem (CSP) is to decide whether there exists an assignment to a set of variables which satisfies specified constraints between pairs of variables. A binary CSP instance can be presented as a labelled graph encoding both the forms of the constraints and where they are imposed. We consider subproblems defined by restricting the allowed form of this graph. One type of restriction that has previously been considered is to forbid certain specified substructures (patterns). This captures some tractable classes of the CSP, but does not capture classes defined by language restrictions, or the well-known structural property of acyclicity.

In this paper we extend the notion of pattern and introduce the notion of a topological minor of a binary CSP instance. By forbidding a finite set of patterns from occurring as topological minors we obtain a compact mechanism for expressing novel tractable subproblems of the binary CSP, including new generalisations of the class of acyclic instances. Forbidding a finite set of patterns as topological minors also captures all other tractable structural restrictions of the binary CSP. Moreover, we show that several patterns give rise to tractable subproblems if forbidden as topological minors but not if forbidden as sub-patterns. Finally, we introduce the idea of augmented patterns that allows for the identification of more tractable classes, including all language restrictions of the binary CSP.

at August 19, 2016 01:00 AM UTC

Authors: Wing-Kai Hon, Ton Kloks, Fu-Hong Liu, Hsiang-Hsuan Liu, Tao-Ming Wang
Download: PDF
Abstract: Without further ado, we present the P_3-game. The P_3-game is decidable for elementary classes of graphs such as paths and cycles. From an algorithmic point of view, the connected P_3-game is fascinating. We show that the connected P_3-game is polynomially decidable for classes such as trees, chordal graphs, ladders, cacti, outerplanar graphs and circular arc graphs.

at August 19, 2016 01:03 AM UTC

Authors: Brendan Juba
Download: PDF
Abstract: Machine learning and statistics typically focus on building models that capture the vast majority of the data, possibly ignoring a small subset of data as "noise" or "outliers." By contrast, here we consider the problem of jointly identifying a significant (but perhaps small) segment of a population in which there is a highly sparse linear regression fit, together with the coefficients for the linear fit. We contend that such tasks are of interest both because the models themselves may be able to achieve better predictions in such special cases, and because they may aid our understanding of the data. We give algorithms for such problems under the sup norm, when this unknown segment of the population is described by a k-DNF condition and the regression fit is s-sparse for constant k and s. For the variants of this problem when the regression fit is not so sparse or using expected error, we also give a preliminary algorithm and highlight the question as a challenge for future work.

at August 19, 2016 01:02 AM UTC

Authors: Nicola Prezza
Download: PDF
Abstract: We consider the problem of building a space-efficient data structure supporting fast longest common extension (LCE) queries over a text $T\in\Sigma^n$: to compute the length $\ell$ of the longest common prefix of any two of $T$'s suffixes. Our main result is a deterministic data structure taking the same size as the text---$n\lceil\log_2|\Sigma|\rceil$ bits---, supporting optimal-time text extraction and $\mathcal O(\log^2\ell)$-time LCE queries, and admitting a construction algorithm running in $\mathcal O(n\log n)$ expected time. LCE query times can be improved to $\mathcal O(\log\ell)$ by adding $\mathcal O(\log_2 n)$ memory words to the space usage. As intermediate results, we obtain in-place Monte Carlo algorithms to compute (i) a batch of $k$ LCE queries in $\mathcal O(n+k\log^2 n)$ expected time, and (ii) the lexicographic ordering of an arbitrary subset of $k$ text suffixes in $\mathcal O(n+k\log k\log^2 n)$ expected time.
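
For reference, the query being supported, in naive form (ours; linear time per query, unlike the polylogarithmic-time structure described above):

    # Naive reference implementation of an LCE query (illustration only; the paper's
    # structure answers these in O(log^2 ell) time within n*ceil(log2 |Sigma|) bits).
    def lce(T, i, j):
        """Length of the longest common prefix of the suffixes T[i:] and T[j:]."""
        ell = 0
        while i + ell < len(T) and j + ell < len(T) and T[i + ell] == T[j + ell]:
            ell += 1
        return ell

    T = "abracadabra"
    print(lce(T, 0, 7))   # 4: both suffixes start with "abra"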

at August 19, 2016 01:05 AM UTC

The New York Times yesterday ran a story connecting climate change to the Louisiana flooding.
The National Weather Service reports that parts of Louisiana have received as much as 31 inches of rain in the last week, a number Dr. Easterling called “pretty staggering,” and one that exceeds an amount of precipitation that his center predicts will occur once every thousand years in the area.
Dr. Easterling said that those sorts of estimates were predicated on the idea that the climate was stable, a principle that has become outdated.
The third National Climate Assessment, released in 2014 by the United States Global Change Research Program, showed that “the amount of rain falling in very heavy precipitation events” had been significantly above average since 1991.
However, the research did not identify the South as one of the areas of greatest concern; the increase was found to be greatest in the Northeast, Midwest and Upper Great Plains regions of the United States.
In short, climate change means our old prediction models of the weather no longer apply. On top of that, new models that tried to take climate change into account predicted heavier rains, but not in that area of the country.

The weather is hardly the only prediction gone bad this year. From Nate Cohn's What I Got Wrong About Donald Trump:
Did he have a 1 percent chance to win when he descended the escalator of Trump Tower last June? Twenty percent? Or should we have known all along?
Was Mr. Trump’s [republican nomination] victory a black swan, the electoral equivalent of World War I or the Depression: an unlikely event with complex causes, some understood at the time but others overlooked, that came together in unexpected ways to produce a result that no one could have reasonably anticipated?
Or did we simply underestimate Mr. Trump from the start? Did we discount him because we assumed that voters would never nominate a reality-TV star for president, let alone a provocateur with iconoclastic policy views like his? Did we put too much stock in “the party decides,” a theory about the role of party elites in influencing the outcome of the primary process?
The answer, as best I can tell, is all of the above.
I do think we — and specifically, I — underestimated Mr. Trump. There were bad assumptions, misinterpretations of the data, and missed connections all along the way.
We also had bad predictions on Brexit, and one factor in the 2008 financial crisis was a heavy reliance on historical patterns of housing prices.

We have at our fingertips incredible prediction tools, from machine learning models to prediction markets. Not all things change; our models trained to recognize cat pictures will continue to recognize cat pictures for a long time running. But as we continue to rely more and more on data-driven predictions and decisions, be prepared for more and more surprises as underlying changes in the environmental, political and financial climates can pull the rug out from under us.

by Lance Fortnow (noreply@blogger.com) at August 18, 2016 02:35 PM UTC

I am a graduate student in math, and theoretical computer science is a domain I have never really understood, because I couldn't find a good read about the topic. I want to know what this domain is actually about, what kinds of topics it is concerned with, what prerequisites are needed to embark on it, etc. For now, I just want to know:

What is a good introductory book to theoretical computer science?

Assuming there is such a thing. If not, where should a mathematician who has basic knowledge of computer science (i.e., they know the basics of one or two programming languages) start if they want to understand what theoretical computer science is about? What do you recommend?

thanks!

by madmatician at August 18, 2016 02:16 PM UTC

Authors: Sergey Fomin, Dima Grigoriev, Dorian Nogneng, Eric Schost
Download: PDF
Abstract: Semiring complexity is the version of arithmetic circuit complexity that allows only two operations: addition and multiplication. We show that when the number of variables is fixed, the semiring complexity of a Schur polynomial $s_\lambda$ is $O(\log(\lambda_1))$; here $\lambda_1$ is the largest part of the partition $\lambda$.

at August 18, 2016 01:01 AM UTC

Authors: Sebastian Wild
Download: PDF
Abstract: We prove the Sedgewick-Bentley conjecture on median-of-$k$ Quicksort on equal keys: The average number of comparisons for Quicksort with fat-pivot (a.k.a. three-way) partitioning is asymptotically only a constant times worse than the information-theoretic lower bound for sorting $n$ i.i.d. elements, and that constant converges to 1 as $k \to \infty$. Hence, Quicksort with pivot sampling is an optimal distribution-sensitive algorithm for the i.i.d. sorting problem.
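
An illustrative sketch (ours, not Wild's code or analysis) of the algorithm in question: quicksort with median-of-$k$ pivot sampling and fat-pivot (three-way) partitioning, which puts all keys equal to the pivot into the middle segment.

    import random

    # Illustrative sketch of the algorithm analysed in the abstract: quicksort with
    # fat-pivot (three-way) partitioning and median-of-k pivot sampling. Not taken
    # from the paper; k = 3 below is just a stand-in for the sample size.
    def quicksort_fat_pivot(a, k=3):
        if len(a) <= 1:
            return a
        sample = random.sample(a, min(k, len(a)))
        pivot = sorted(sample)[len(sample) // 2]        # median of the k-sample
        less    = [x for x in a if x < pivot]
        equal   = [x for x in a if x == pivot]          # "fat" middle part: all duplicates
        greater = [x for x in a if x > pivot]
        return quicksort_fat_pivot(less, k) + equal + quicksort_fat_pivot(greater, k)

    print(quicksort_fat_pivot([3, 1, 3, 3, 2, 1, 3]))   # [1, 1, 2, 3, 3, 3, 3]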

at August 18, 2016 01:02 AM UTC

Authors: Noriyuki Kurosawa
Download: PDF
Abstract: The linear pivot selection algorithm, known as median-of-medians, makes the worst-case complexity of quicksort $\mathrm{O}(n\ln n)$. Nevertheless, it has often been said that this algorithm is too expensive to use in quicksort. In this article, we show that quicksort with this kind of pivot selection approach can be made efficient.
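
For concreteness, here is a sketch of the textbook median-of-medians (BFPRT) pivot selection inside quicksort; this is the generic approach the abstract refers to, not the authors' engineered variant.

    # Textbook median-of-medians (BFPRT) pivot selection used as the quicksort
    # pivot -- a sketch of the general approach discussed above, not the paper's
    # tuned implementation.
    def median_of_medians(a):
        if len(a) <= 5:
            return sorted(a)[len(a) // 2]
        groups = [a[i:i + 5] for i in range(0, len(a), 5)]
        medians = [sorted(g)[len(g) // 2] for g in groups]
        return median_of_medians(medians)

    def quicksort_mom(a):
        if len(a) <= 1:
            return a
        pivot = median_of_medians(a)        # guarantees a constant-fraction split
        less    = [x for x in a if x < pivot]
        equal   = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return quicksort_mom(less) + equal + quicksort_mom(greater)

    print(quicksort_mom([5, 3, 8, 1, 9, 2, 7, 4, 6, 0]))   # [0, 1, ..., 9]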

at August 18, 2016 01:03 AM UTC

Authors: Michael W. Mahoney
Download: PDF
Abstract: These are lecture notes from a class on Spectral Graph Methods that I taught at UC Berkeley during the Spring 2015 semester.

at August 18, 2016 01:02 AM UTC

Authors: Tomoyuki Morimae, Keisuke Fujii, Harumichi Nishimura
Download: PDF
Abstract: What happens if in QMA the quantum channel between Merlin and Arthur is noisy? It is not difficult to show that such a modification does not change the computational power as long as the noise is not too strong, so that errors are correctable with high probability, since if Merlin encodes the witness state in a quantum error-correcting code and sends it to Arthur, Arthur can correct the error caused by the noisy channel. If we further assume that Arthur can do only single-qubit measurements, however, the problem becomes nontrivial, since in this case Arthur cannot perform universal quantum computation by himself. In this paper, we show that such a restricted complexity class is still equivalent to QMA. To show it, we use measurement-based quantum computing: honest Merlin sends the graph state to Arthur, and Arthur does fault-tolerant measurement-based quantum computing on the noisy graph state with only single-qubit measurements. By measuring stabilizer operators, Arthur also checks the correctness of the graph state. Although this idea itself was already used in several previous papers, those results cannot be directly applied to the present case, since the test that checks the graph state used in those papers is so strict that even honest Merlin is rejected with high probability if the channel is noisy. We therefore introduce a more relaxed test that can accept not only the ideal graph state but also noisy graph states that are error-correctable.

at August 18, 2016 01:00 AM UTC

Authors: Adrian Dumitrescu, Ritankar Mandal, Csaba D. Tóth
Download: PDF
Abstract: (I) We prove that the (maximum) number of monotone paths in a geometric triangulation of $n$ points in the plane is $O(1.8027^n)$. This improves an earlier upper bound of $O(1.8393^n)$; the current best lower bound is $\Omega(1.7003^n)$.

(II) Given a planar geometric graph $G$ with $n$ vertices, we show that the number of monotone paths in $G$ can be computed in $O(n^2)$ time.

at August 18, 2016 01:04 AM UTC

Authors: Austin Luchsinger, Robert Schweller, Tim Wylie
Download: PDF
Abstract: The algorithmic self-assembly of shapes has been considered in several models of self-assembly. For the problem of \emph{shape construction}, we consider an extended version of the Two-Handed Tile Assembly Model (2HAM), which contains positive (attractive) and negative (repulsive) interactions. As a result, portions of an assembly can become unstable and detach. In this model, we utilize fuel-efficient computation to perform Turing machine simulations for the construction of the shape. In this paper, we show how an arbitrary shape can be constructed using an asymptotically optimal number of distinct tile types (based on the shape's Kolmogorov complexity). We achieve this at $O(1)$ scale factor in this straightforward model, whereas all previous results with sublinear scale factors utilize powerful self-assembly models containing features such as staging, tile deletion, chemical reaction networks, and tile activation/deactivation. Furthermore, the computation and construction in our result only creates constant-size garbage assemblies as a byproduct of assembling the shape.

at August 18, 2016 01:08 AM UTC

Authors: Zeyuan Allen-Zhu, Yuanzhi Li
Download: PDF
Abstract: We solve principal component regression (PCR) by providing an efficient algorithm to project any vector onto the subspace formed by the top principal components of a matrix. Our algorithm does not require any explicit construction of the top principal components, and therefore is suitable for large-scale PCR instances.

Specifically, to project onto the subspace formed by principal components with eigenvalues above a threshold $\lambda$ and with a multiplicative accuracy $(1\pm \gamma) \lambda$, our algorithm requires $\tilde{O}(\gamma^{-1})$ black-box calls of ridge regression. In contrast, the previous result requires $\tilde{O}(\gamma^{-2})$ such calls. We obtain this result by designing a degree-optimal polynomial approximation of the sign function.

at August 18, 2016 01:03 AM UTC