Theory of Computing Blog Aggregator

Today—January 20, 2017—I have something cheerful, something that I’m celebrating.  It’s Lily’s fourth birthday. Happy birthday Lily!

As part of her birthday festivities, and despite her packed schedule, Lily has graciously agreed to field a few questions from readers of this blog.  You can ask about her parents, favorite toys, recent trip to Disney World, etc.  Just FYI: to the best of my knowledge, Lily doesn’t have any special insight about computational complexity, although she can write the letters ‘N’ and ‘P’ and find them on the keyboard.  Nor has she demonstrated much interest in politics, though she’s aware that many people are upset because a very bad man just became the president.  Anyway, if you ask questions that are appropriate for a real 4-year-old girl, rather than a blog humor construct, there’s a good chance I’ll let them through moderation and pass them on to her!

Meanwhile, here’s a photo I took of UT Austin students protesting Trump’s inauguration beneath the iconic UT tower.

by Scott at January 21, 2017 02:17 AM UTC

 


I am sure that every one of the readers of this blog has heard about Laci Babai’s quasi-polynomial algorithm for graph isomorphism and also the recent drama about it: a mistake pointed out by Harald Helfgott, a new sub-exponential but not quasi-polynomial version of the algorithm that Laci found in a couple of days, and then, a week later, a new variant of the algorithm, again found by Laci, which is quasi-polynomial. You can read the announcement on Babai’s homepage, three excellent Quanta magazine articles (I, II, III), and many blog posts all over the Internet.

Babai’s result is an off-scale scientific achievement, it is wonderful in many respects, and I truly admire and envy Laci for this amazing breakthrough. I also truly admire  Harald  for his superb job as a Bourbaki expositor.

Laci Babai is visiting and he is giving lectures on graph isomorphism and related topics all over Israel.

Tel Aviv University

Tel Aviv University: Sackler distinguished lectures in Pure Mathematics Wednesday, January 18 (Poster. Sorry, too late, I heard it was very inspiring, don’t miss the other talks!)

Tel Aviv University Combinatorics seminar:  Sunday, Jan. 22, 10:00-11:00
Title: Canonical partitioning and the emergence of the Johnson graphs: Combinatorial aspects of the Graph Isomorphism problem

(The talk does not depend on Wednesday’s talk)

Hebrew University of Jerusalem

Hebrew University Colloquium Sun. Jan 22, 16:00-17:00 Title: Graph isomorphism and coherent configurations: The Split-or-Johnson routine

Lecture room 2, Manchester building (Mathematics)

The Technion

Local versus global symmetry and the Graph Isomorphism problem I–III

Lecture I: Monday, January 23, 2017 at 15:30

Lecture II: Tuesday, January 24, 2017 at 15:30

Lecture III: Thursday, January 26, 2017 at 15:30

All lectures will take place at Auditorium 232, Amado Mathematics Building, Technion (Website)

Weizmann Institute

Pekeris lecture, Jan 29, 11:00-12:00 Hidden irregularity versus hidden symmetry 

EBNER AUDITORIUM (webpage)

 


by Gil Kalai at January 20, 2017 12:20 PM UTC

Authors: Aleksandar Nikolov
Download: PDF
Abstract: Combinatorial discrepancy is a complexity measure of a collection of sets which quantifies how well the sets in the collection can be simultaneously balanced. More precisely, we are given an n-point set $P$, and a collection $\mathcal{F} = \{F_1, ..., F_m\}$ of subsets of $P$, and our goal is to color $P$ with two colors, red and blue, so that the largest absolute difference between the number of red elements and the number of blue elements (i.e. the discrepancy) in any $F_i$ is minimized. Combinatorial discrepancy has many applications in mathematics and computer science, including constructions of uniformly distributed point sets, and lower bounds for data structures and private data analysis algorithms.

We investigate the combinatorial discrepancy of geometrically defined set systems, in which $P$ is an n-point set in $d$-dimensional space, and $\mathcal{F}$ is the collection of subsets of $P$ induced by dilations and translations of a fixed convex polytope $B$. Such set systems include sets induced by axis-aligned boxes, whose discrepancy is the subject of the well known Tusnady problem. We prove new discrepancy upper bounds for such set systems by extending the approach based on factorization norms previously used by the author and Matousek. We improve the best known upper bound for the Tusnady problem by a logarithmic factor, using a result of Banaszczyk on signed series of vectors. We extend this improvement to any arbitrary convex polytope B by using a decomposition due to Matousek.
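As a quick illustration of the definition above (not from the paper), the following sketch computes the combinatorial discrepancy of a tiny, made-up set system by trying every red/blue coloring; it is exponential in the number of points and is only meant to make the notion concrete.

    from itertools import product

    def discrepancy(points, family):
        """Exact combinatorial discrepancy by exhaustive 2-coloring (exponential in |points|)."""
        best = None
        for colors in product([-1, +1], repeat=len(points)):   # -1 = red, +1 = blue
            coloring = dict(zip(points, colors))
            worst = max(abs(sum(coloring[p] for p in F)) for F in family)
            best = worst if best is None else min(best, worst)
        return best

    # Hypothetical example: 4 points and 3 sets
    P = [1, 2, 3, 4]
    F = [{1, 2}, {2, 3, 4}, {1, 2, 3, 4}]
    print(discrepancy(P, F))   # prints 1: the odd-size set {2, 3, 4} forces an imbalance of at least 1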

at January 20, 2017 12:00 AM UTC

Authors: Jose Capco, Matteo Gallet, Georg Grasegger, Christoph Koutschan, Niels Lubbes, Josef Schicho
Download: PDF
Abstract: Laman graphs model planar frameworks that are rigid for a general choice of distances between the vertices. There are finitely many ways, up to isometries, to realize a Laman graph in the plane. Such realizations can be seen as solutions of systems of quadratic equations prescribing the distances between pairs of points. Using ideas from algebraic and tropical geometry, we provide a recursion formula for the number of complex solutions of such systems.

at January 20, 2017 12:00 AM UTC

Authors: Ademir Hujdurović, Edin Husić, Martin Milanič, Romeo Rizzi, Alexandru I. Tomescu
Download: PDF
Abstract: Motivated by applications in cancer genomics and following the work of Hajirasouliha and Raphael (WABI 2014), Hujdurovi\'{c} et al. (WABI 2015, full version to appear in IEEE TCBB) introduced the minimum conflict-free row split (MCRS) problem: split each row of a given binary matrix into a bitwise OR of a set of rows so that the resulting matrix corresponds to a perfect phylogeny and has the minimum number of rows among all matrices with this property. Hajirasouliha and Raphael also proposed the study of a similar problem, referred to as the minimum distinct conflict-free row split (MDCRS) problem, in which the task is to minimize the number of distinct rows of the resulting matrix. Hujdurovi\'{c} et al. proved that both problems are NP-hard, gave a related characterization of transitively orientable graphs, and proposed a polynomial time heuristic algorithm for the MCRS problem based on coloring cocomparability graphs.

We give new formulations of the two problems, showing that the problems are equivalent to two optimization problems on branchings in a derived directed acyclic graph. Building on these formulations, we obtain new results on the two problems, including: (i) a strengthening of the heuristic by Hujdurovi\'{c} et al. via a new min-max result in digraphs generalizing Dilworth's theorem, (ii) APX-hardness results for both problems, (iii) two approximation algorithms for the MCRS problem, and (iv) a 2-approximation algorithm for the MDCRS problem. The branching formulations also lead to exact exponential time algorithms for solving the two problems to optimality faster than the na\"ive brute-force approach.

at January 20, 2017 12:00 AM UTC

Authors: Mikkel Abrahamsen, Anna Adamaszek, Tillmann Miltzow
Download: PDF
Abstract: In this paper we study the art gallery problem, which is one of the fundamental problems in computational geometry. The objective is to place a minimum number of guards inside a simple polygon such that the guards together can see the whole polygon. We say that a guard at position $x$ sees a point $y$ if the line segment $xy$ is fully contained in the polygon.

Despite an extensive study of the art gallery problem, it remained an open question whether there are polygons given by integer coordinates that require guard positions with irrational coordinates in any optimal solution. We give a positive answer to this question by constructing a monotone polygon with integer coordinates that can be guarded by three guards only when we allow the guards to be placed at points with irrational coordinates. Otherwise, four guards are needed. By extending this example, we show that for every $n$, there is a polygon that can be guarded by $3n$ guards with irrational coordinates but needs $4n$ guards if the coordinates have to be rational. Subsequently, we show that there are rectilinear polygons given by integer coordinates that require guards with irrational coordinates in any optimal solution.
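The visibility predicate in the first paragraph is easy to experiment with. Here is a small sketch (not the paper's construction), assuming the shapely library is available: a guard at x sees y exactly when the segment xy stays inside the polygon. The L-shaped gallery below is a made-up example.

    from shapely.geometry import Polygon, LineString

    def sees(polygon_coords, x, y):
        """True if a guard at x sees point y, i.e. the segment xy lies inside the polygon."""
        poly = Polygon(polygon_coords)
        return poly.covers(LineString([x, y]))   # covers() lets the segment touch the boundary

    # Hypothetical L-shaped gallery with integer vertices
    gallery = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
    print(sees(gallery, (0.5, 0.5), (3.5, 1.5)))   # True: the segment stays in the bottom arm
    print(sees(gallery, (3.5, 1.5), (1.5, 3.5)))   # False: the segment cuts across the notch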

at January 20, 2017 12:00 AM UTC

Authors: Krzysztof Domino, Piotr Gawron, Łukasz Pawela
Download: PDF
Abstract: In this paper we introduce a novel algorithm for calculating arbitrary-order cumulants of multidimensional data. Since the n-th order cumulant can be presented in the form of an n-dimensional tensor, the algorithm is presented using tensor network notation. The presented algorithm exploits the super-symmetry of cumulant and moment tensors. We show that the proposed algorithm greatly decreases the computational complexity of cumulant calculation compared to the naive algorithm.

at January 20, 2017 02:04 AM UTC

Authors: Christian Frey, Andreas Züfle, Tobias Emrich, Matthias Renz
Download: PDF
Abstract: Reliable propagation of information through large networks, e.g., communication networks, social networks or sensor networks, is very important in many applications including social service applications, marketing applications, and scientific applications. However, social ties of friendship may be obsolete, and communication links may fail, inducing the notion of uncertainty in such networks. In this paper, we address the problem of optimizing information propagation in uncertain networks given a constrained budget of edges. We show that this problem requires solving two NP-hard subproblems: the computation of expected information flow, and the optimal choice of edges. To compute the expected information flow to a source vertex, we employ a novel data structure called the Component Tree, which identifies independent components of the graph for which the information flow can either be computed analytically and efficiently, or for which traditional Monte-Carlo sampling can be applied independently of the remaining parts of the network. For the problem of finding the optimal edges, we propose a series of heuristics that exploit properties of this Component Tree. Our evaluation shows that these heuristics lead to high quality solutions, thus yielding high information flow, while maintaining low run-time.
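The Component Tree is specific to the paper, but the Monte-Carlo baseline it falls back on is easy to sketch: sample subgraphs according to the edge probabilities and average how many vertices reach the source. The graph, the probabilities, and the function name below are all hypothetical.

    import random
    from collections import deque

    def expected_flow(nodes, edges, source, samples=10000):
        """Monte-Carlo estimate of the expected number of vertices connected to `source`
        when each edge (u, v, p) is present independently with probability p."""
        total = 0
        for _ in range(samples):
            adj = {v: [] for v in nodes}
            for u, v, p in edges:
                if random.random() < p:        # the edge survives in this sample
                    adj[u].append(v)
                    adj[v].append(u)
            seen, queue = {source}, deque([source])
            while queue:
                u = queue.popleft()
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        queue.append(w)
            total += len(seen) - 1             # vertices other than the source that reach it
        return total / samples

    # Hypothetical uncertain network
    nodes = ["s", "a", "b", "c"]
    edges = [("s", "a", 0.9), ("a", "b", 0.5), ("s", "c", 0.2), ("b", "c", 0.7)]
    print(expected_flow(nodes, edges, "s"))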

at January 20, 2017 02:04 AM UTC

Authors: Cynthia Kop, Jakob Grue Simonsen
Download: PDF
Abstract: We investigate the power of non-determinism in purely functional programming languages with higher-order types. Specifically, we consider cons-free programs of varying data orders, equipped with explicit non-deterministic choice. Cons-freeness roughly means that data constructors cannot occur in function bodies and all manipulation of storage space thus has to happen indirectly using the call stack.

While cons-free programs have previously been used by several authors to characterise complexity classes, the work on non-deterministic programs has almost exclusively considered programs of data order 0. Previous work has shown that adding explicit non-determinism to cons-free programs taking data of order 0 does not increase expressivity; we prove that this - dramatically - is not the case for higher data orders: adding non-determinism to programs with data order at least 1 allows for a characterisation of the entire class of elementary-time decidable sets.

at January 20, 2017 02:02 AM UTC

Authors: Burak C. Civek, Suleyman S. Kozat
Download: PDF
Abstract: We investigate the problem of sequential linear data prediction for real life big data applications. The second order algorithms, i.e., Newton-Raphson Methods, asymptotically achieve the performance of the "best" possible linear data predictor much faster compared to the first order algorithms, e.g., Online Gradient Descent. However, implementation of these methods is not usually feasible in big data applications because of the extremely high computational needs. Regular implementation of the Newton-Raphson Methods requires a computational complexity on the order of $O(M^2)$ for an $M$ dimensional feature vector, while the first order algorithms need only $O(M)$. To eliminate this gap, we introduce a highly efficient implementation reducing the computational complexity of the Newton-Raphson Methods from quadratic to linear scale. The presented algorithm provides the well-known merits of the second order methods while offering the computational complexity of $O(M)$. We utilize the shifted nature of the consecutive feature vectors and do not rely on any statistical assumptions. Therefore, both regular and fast implementations achieve the same performance in the sense of mean square error. We demonstrate the computational efficiency of our algorithm on real life sequential big datasets. We also illustrate that the presented algorithm is numerically stable.
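The paper's fast second-order update is not reproduced here; the sketch below shows only the first-order baseline it compares against (online gradient descent for the squared loss) on synthetic shifted-feature data, to make the O(M)-per-step setting concrete.

    import numpy as np

    def ogd_linear_prediction(X, y, eta=0.01):
        """First-order sequential linear prediction: O(M) work per step."""
        w = np.zeros(X.shape[1])
        errors = []
        for x_t, y_t in zip(X, y):
            y_hat = w @ x_t                  # predict before seeing the label
            err = y_hat - y_t
            errors.append(err ** 2)
            w -= eta * err * x_t             # gradient step on the squared loss
        return w, np.mean(errors)

    # Synthetic data with the shifted consecutive feature vectors of the time-series setting
    rng = np.random.default_rng(0)
    series = rng.standard_normal(1005)
    M = 5
    X = np.array([series[t:t + M] for t in range(1000)])
    y = series[M:M + 1000]                   # predict the next sample from the last M samples
    w, mse = ogd_linear_prediction(X, y)
    print(mse)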

at January 20, 2017 12:00 AM UTC

Authors: Yishuo Shi, Zhao Zhang, Ding-Zhu Du
Download: PDF
Abstract: This paper studies a randomized approximation algorithm for a variant of the set cover problem called {\em minimum submodular cost partial multi-cover} (SCPMC). In a \emph{partial set cover problem}, the goal is to find a minimum cost subcollection of sets covering at least a required fraction of elements. In a {\em multi-cover problem}, each element $e$ has a covering requirement $r_e$, and the goal is to find a minimum cost subcollection of sets $\mathcal S'$ which fully covers all elements, where an element $e$ is fully covered by $\mathcal S'$ if $e$ belongs to at least $r_e$ sets of $\mathcal S'$. In a \emph{minimum submodular cost set cover problem} (SCSC), the objective function is submodular and the constraint is a set cover.

The SCPMC problem studied in this paper is a combination of the above three problems. Previous work shows that such a combination enormously increases the difficulty of studies.

In this paper, assuming that the maximum covering requirement $r_{\max}=\max_e r_e$ is a constant and the objective function is nonnegative, monotone increasing, and submodular, we give the first randomized algorithm for SCPMC achieving performance ratio $O(b)$ with a high probability, where $b=\max_e\binom{f}{r_{e}}$ and $f$ is the maximum number of sets containing a common element. The algorithm is based on a novel non-linear program. Furthermore, in the case when the covering requirement $r\equiv 1$, an $O(f)$-approximation can be achieved even when the monotonicity requirement is dropped from the objective function.

at January 20, 2017 02:07 AM UTC

Authors: Michael A. Forbes, Amir Shpilka, Ben Lee Volk
Download: PDF
Abstract: We formalize a framework of algebraically natural lower bounds for algebraic circuits. Just as with the natural proofs notion of Razborov and Rudich for boolean circuit lower bounds, our notion of algebraically natural lower bounds captures nearly all lower bound techniques known. However, unlike the boolean setting, there has been no concrete evidence demonstrating that this is a barrier to obtaining super-polynomial lower bounds for general algebraic circuits, as there is little understanding whether algebraic circuits are expressive enough to support "cryptography" secure against algebraic circuits.

Following a similar result of Williams in the boolean setting, we show that the existence of an algebraic natural proofs barrier is equivalent to the existence of succinct derandomization of the polynomial identity testing problem. That is, whether the coefficient vectors of polylog(N)-degree polylog(N)-size circuits form a hitting set for the class of poly(N)-degree poly(N)-size circuits. Further, we give an explicit universal construction showing that if such a succinct hitting set exists, then our universal construction suffices.

Further, we assess the existing literature constructing hitting sets for restricted classes of algebraic circuits and observe that none of them are succinct as given. Yet, we show how to modify some of these constructions to obtain succinct hitting sets. This constitutes the first evidence supporting the existence of an algebraic natural proofs barrier.

Our framework is similar to the Geometric Complexity Theory (GCT) program of Mulmuley and Sohoni, except that here we emphasize constructiveness of the proofs while the GCT program emphasizes symmetry. Nevertheless, our succinct hitting sets have relevance to the GCT program as they imply lower bounds for the complexity of the defining equations of polynomials computed by small circuits.

at January 20, 2017 02:00 AM UTC

Authors: Joachim Gudmundsson, Rasmus Pagh
Download: PDF
Abstract: Locality-sensitive hashing (LSH) is a fundamental technique for similarity search and similarity estimation in high-dimensional spaces. The basic idea is that similar objects should produce hash collisions with probability significantly larger than objects with low similarity. We consider LSH for objects that can be represented as point sets in either one or two dimensions. To make the point sets of finite size, we consider the subset of points on a grid. Directly applying LSH (e.g. min-wise hashing) to these point sets would require time proportional to the number of points. We seek to achieve time that is much lower than that of direct approaches.

Technically, we introduce new primitives for range-efficient consistent sampling (of independent interest), and show how to turn such samples into LSH values. Another application of our technique is a data structure for quickly estimating the size of the intersection or union of a set of preprocessed polygons. Curiously, our consistent sampling method uses a transformation to a geometric problem.
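For orientation, here is the "direct approach" the abstract refers to: plain min-wise hashing of two grid point sets, whose cost is proportional to the number of points. The range-efficient sampling primitives the paper introduces are not reproduced, and the grids below are made-up examples.

    import random

    def minhash_signature(points, num_hashes=64, seed=0):
        """Classic min-wise hashing of a set of grid points (cost proportional to |points|)."""
        rng = random.Random(seed)
        salts = [rng.getrandbits(64) for _ in range(num_hashes)]   # one salted hash per coordinate
        return [min(hash((s, p)) for p in points) for s in salts]

    def estimated_jaccard(sig_a, sig_b):
        """The fraction of colliding coordinates estimates |A ∩ B| / |A ∪ B|."""
        return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

    A = {(x, y) for x in range(10) for y in range(10)}        # a 10x10 grid square
    B = {(x, y) for x in range(5, 15) for y in range(10)}     # a shifted copy, half overlapping
    print(estimated_jaccard(minhash_signature(A), minhash_signature(B)))   # roughly 1/3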

at January 20, 2017 12:00 AM UTC

Authors: Farhad Shahrokhi
Download: PDF
Abstract: A directed acyclic graph G = (V, E) is pseudo-transitive with respect to a given subset of edges E1, if for any edge ab in E1 and any edge bc in E, we have ac in E. We give algorithms for computing longest chains and demonstrate geometric applications that unify and improve some important past results. (For specific applications see the introduction.)

at January 20, 2017 12:00 AM UTC

Authors: Ferdinando Cicalese, Luisa Gargano, Ugo Vaccaro
Download: PDF
Abstract: Given two discrete random variables $X$ and $Y$, with probability distributions ${\bf p} =(p_1, \ldots , p_n)$ and ${\bf q}=(q_1, \ldots , q_m)$, respectively, denote by ${\cal C}({\bf p}, {\bf q})$ the set of all couplings of ${\bf p}$ and ${\bf q}$, that is, the set of all bivariate probability distributions that have ${\bf p}$ and ${\bf q}$ as marginals. In this paper, we study the problem of finding the joint probability distribution in ${\cal C}({\bf p}, {\bf q})$ of minimum entropy (equivalently, the joint probability distribution that maximizes the mutual information between $X$ and $Y$), and we discuss several situations where the need for this kind of optimization naturally arises. Since the optimization problem is known to be NP-hard, we give an efficient algorithm to find a joint probability distribution in ${\cal C}({\bf p}, {\bf q})$ with entropy exceeding the minimum possible by at most 1, thus providing an approximation algorithm with additive approximation factor of 1. We also discuss some related applications of our findings.
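The paper's additive-1 algorithm is not reproduced here. As a rough illustration of the object being optimized, the sketch below builds a coupling with a simple greedy heuristic, repeatedly pairing the largest remaining masses of p and q, and reports its entropy; the distributions are made-up examples and the heuristic carries no approximation guarantee of its own.

    import math

    def greedy_coupling(p, q):
        """Greedy heuristic: a joint distribution with marginals p and q and (hopefully) low entropy."""
        p, q = list(p), list(q)
        joint = {}
        while True:
            i = max(range(len(p)), key=lambda t: p[t])
            j = max(range(len(q)), key=lambda t: q[t])
            m = min(p[i], q[j])
            if m <= 1e-12:
                break
            joint[(i, j)] = joint.get((i, j), 0.0) + m   # give mass m to the pair (i, j)
            p[i] -= m
            q[j] -= m
        return joint

    def entropy(dist):
        return -sum(v * math.log2(v) for v in dist.values() if v > 0)

    p = (0.5, 0.25, 0.25)
    q = (0.6, 0.4)
    coupling = greedy_coupling(p, q)
    print(coupling, entropy(coupling))   # entropy roughly 1.74 bits for this example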

at January 20, 2017 02:16 AM UTC

There's a wonderful new series of math videos, PBS Infinite Series, hosted by Cornell math PhD student Kelsey Houston-Edwards. Check out this latest video on Markov Chains.


She gives an amazingly clear description of why, on a random walk on an undirected graph, the stationary distribution puts probability on each node proportional to its degree.

Houston-Edwards also relies without proof on the following fact: The expected time for a walk that starts at a vertex v to return to v is 1/p, where p is the probability given to v in the stationary distribution. Why should that be true?

Didn't seem so obvious to me, so I looked it up. Here is an informal proof:

Let V1 be the random variable giving the number of steps it takes to get from v back to v. Take that walk again and let V2 be the number of steps it takes to get from v back to v the second time, and so on. Let An=V1+V2+...+Vn, the random variable representing the number of transitions taken to return to v n times. In a Markov chain each Vi has the same distribution, so E(An) = n E(V1).

As n gets large the Markov chain approaches the stationary distribution and will be in state v about a p fraction of the time. After E(An) transitions we should return to v about p E(An) times, but we actually return n times. So p E(An) approaches n, that is, p n E(V1) approaches n, and the only way this can happen is if E(V1)=1/p.
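A quick simulation (mine, not from the video or the post) checks the fact on a small graph: for a random walk on a connected undirected graph the stationary probability of v is deg(v)/2|E|, so the expected return time should be 2|E|/deg(v).

    import random

    def average_return_time(adj, v, trials=20000, seed=1):
        """Empirical mean number of steps for a random walk started at v to come back to v."""
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            current, steps = v, 0
            while True:
                current = rng.choice(adj[current])
                steps += 1
                if current == v:
                    break
            total += steps
        return total / trials

    # A triangle 0-1-2 with a pendant vertex 3 attached to 0, so |E| = 4
    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
    num_edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    for v in adj:
        predicted = 2 * num_edges / len(adj[v])            # 1/p = 2|E|/deg(v)
        print(v, round(average_return_time(adj, v), 2), predicted)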

by Lance Fortnow (noreply@blogger.com) at January 19, 2017 01:25 PM UTC

We formalize a framework of algebraically natural lower bounds for algebraic circuits. Just as with the natural proofs notion of Razborov and Rudich for boolean circuit lower bounds, our notion of algebraically natural lower bounds captures nearly all lower bound techniques known. However, unlike the boolean setting, there has been no concrete evidence demonstrating that this is a barrier to obtaining super-polynomial lower bounds for general algebraic circuits, as there is little understanding whether algebraic circuits are expressive enough to support "cryptography" secure against algebraic circuits. Following a similar result of Williams in the boolean setting, we show that the existence of an algebraic natural proofs barrier is equivalent to the existence of succinct derandomization of the polynomial identity testing problem. That is, whether the coefficient vectors of polylog(N)-degree polylog(N)-size circuits is a hitting set for the class of poly(N)-degree poly(N)-size circuits. Further, we give an explicit universal construction showing that if such a succinct hitting set exists, then our universal construction suffices. Further, we assess the existing literature constructing hitting sets for restricted classes of algebraic circuits and observe that none of them are succinct as given. Yet, we show how to modify some of these constructions to obtain succinct hitting sets. This constitutes the first evidence supporting the existence of an algebraic natural proofs barrier. Our framework is similar to the Geometric Complexity Theory (GCT) program of Mulmuley and Sohoni, except that here we emphasize constructiveness of the proofs while the GCT program emphasizes symmetry. Nevertheless, our succinct hitting sets have relevance to the GCT program as they imply lower bounds for the complexity of the defining equations of polynomials computed by small circuits.

at January 19, 2017 08:36 AM UTC


Impetus to study a new reducibility relation

[Photo: Mike at the CSE awards reception. See Mike’s other projects too.]

Michael Wehar has just earned his PhD degree in near-record time in my department. He has posted the final version of his dissertation titled On the Complexity of Intersection Non-Emptiness Problems which he defended last month. The dissertation expands on his paper at ICALP 2014, joint paper at ICALP 2015 with Joseph Swernofsky, and joint paper at FoSSaCS 2016.

Today, Dick and I congratulate Mike on his accomplishment and wish him all the best in his upcoming plans, which center on his new job with CapSen Robotics near Pitt and CMU in Pittsburgh.

Mike’s thesis features a definition that arguably has been thought about by researchers in parameterized complexity but passed over as too particular. If I were classifying his definition like salsa or cheddar cheese,

  • Mild;

  • Medium;

  • Sharp;

then Mike’s definition would rate as

  • Extra Sharp.

Too sharp, that is, for {O}-notation, thus putting it beyond our usual complexity theory diet. But the problem it addresses has also seemed beyond the reach of complexity theory while we can’t even get a clear handle on {\mathsf{P}} versus {\mathsf{NP}}:

What are the relationships between the {n^k} levels of deterministic versus nondeterministic time and/or space, and versus {k\log n} or {n^k} levels of fairly-counted deterministic and nondeterministic space?

“Fairly counted” space means that we cast away the “linear space-compression” theorem by restricting Turing machines to be binary, that is to use work alphabet {\Gamma = \{0,1\}}. The blank character can be read but not written. This doesn’t mean swearing off {O}-notation for space, but does mean employing it more carefully.

Questions about particular polynomial-time exponents and space-usage factors animate fine-grained complexity, which has seen a surge of interest recently. Consider this problem which is central in Mike’s thesis:


  • Intersection (Non-)Emptiness {\mathsf{IE}}: Given {k} automata {M_1,\dots,M_k}, each with {n} states, is {L(M_1) \cap \cdots \cap L(M_k) \neq \emptyset}?


The “naive” Cartesian product algorithm works in time basically {n^k}. The fine-grained question is, can one do better than naive? As we covered before, Mike’s early work showed that getting time {n^{o(k)}} would have the sharp consequence {\mathsf{P \neq NL}}, which improved the {\mathsf{NP \neq NL}} conclusion in a 2003 paper by Dick with George Karakostas and Anastasios Viglas.
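For concreteness, here is a sketch of that naive algorithm: breadth-first search over the product of the k DFAs, whose state space, and hence running time, is about n^k. The DFA encoding below is a minimal, hypothetical one, not a standard library format.

    from collections import deque

    def intersection_nonempty(dfas, alphabet):
        """Naive n^k algorithm: BFS over the product automaton of the given DFAs.
        Each DFA is a triple (start_state, accepting_states, delta) with delta[(state, symbol)] -> state."""
        start = tuple(d[0] for d in dfas)
        seen, queue = {start}, deque([start])
        while queue:
            states = queue.popleft()
            if all(s in d[1] for s, d in zip(states, dfas)):
                return True                              # every DFA accepts at this product state
            for a in alphabet:
                nxt = tuple(d[2][(s, a)] for s, d in zip(states, dfas))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    # Two toy DFAs over {a, b}: "an even number of a's" and "at least one b"
    even_a = (0, {0}, {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1})
    has_b  = (0, {1}, {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 1})
    print(intersection_nonempty([even_a, has_b], ['a', 'b']))   # True, witnessed by the string "b"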

Mike’s thesis expands the mechanism that produces this and related results, with the aim of a two-pronged attack on complexity bounds. He has framed it in terms of parameterized complexity, which we’ll discuss before returning to the thesis.

Some Fixed-Parameter Basics

Consider three classic {\mathsf{NP}}-complete problems, each given an undirected graph {G = (V,E)} and a number {k}:

  • Vertex Cover: Is there {S \subseteq V} of size at most {k} such that every edge is incident to a node in {S}?

  • Clique: Is there {S \subseteq V} of size at least {k} such that all nodes in {S} are connected by edges?

  • Dominating Set: Is there {S \subseteq V} of size at most {k} such that every other node has an edge to a node in {S}?

Parameterized complexity started with the question:

What happens when some value of the parameter {k} is fixed?

For each problem {B} and each {k}, we can define the “slice” {B_k} to be the language of strings—here denoting graphs {G}—such that the answer for {(G,k)} is “yes.” Each individual slice belongs to {\mathsf{P}}: just try all {\binom{n}{k} = O(n^k)} choices of {S}, and when {k} is fixed this gives polynomial time. The fine-grained question is:

Can we solve each slice faster? In particular, can we solve {B_k} in time {t(k)n^c} where the exponent {c} is fixed independent of {k}?

The start of much combinatorial insight and fun is that the three problems appear to give three different answers:

  • Vertex Cover: Build {S} by streaming edges {(u,v)} in a fixed order. If neither {u} nor {v} already belongs to {S}, make a binary choice of which to add, until {|S| = k} when we accept if it’s a cover. If not, start over and try the next string in {\{0,1\}^k} to guide the choices. This gives time {O(2^k)n^c} for some fixed {c}.

  • Clique: We can get {n^{ak}} where {a < 1} (see this), but the exponent is still linear in {k}.

  • Dominating Set: No algorithm with time better than {n^{k - o(k)}} is known or expected.

The exact {c} for Vertex Cover may be model-dependent; for RAMs we get {c = 1} and indeed time {O(1.274^k + kn)} is known. But the fact of its being fixed holds in any reasonable model and classifies Vertex Cover as belonging to the class {\mathsf{FPT}} for fixed-parameter tractable. To address whether Clique and Dominating Set belong to {\mathsf{FPT}}, a novel reducibility and completeness theory was built.
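The bounded-search idea in the Vertex Cover bullet above can be written as a short recursion that branches on the two endpoints of an uncovered edge; this sketch only illustrates the O(2^k) n^c bound, and the O(1.274^k + kn) algorithm needs much cleverer branching rules.

    def has_vertex_cover(edges, k):
        """FPT branching for Vertex Cover: one endpoint of any uncovered edge must join
        the cover, so branch on both choices. Time roughly O(2^k * |edges|)."""
        if not edges:
            return True                        # nothing left to cover
        if k == 0:
            return False                       # edges remain but the budget is spent
        u, v = edges[0]                        # pick any uncovered edge and branch
        without_u = [(a, b) for (a, b) in edges if u not in (a, b)]
        without_v = [(a, b) for (a, b) in edges if v not in (a, b)]
        return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)

    # A 5-cycle has vertex cover number 3
    cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(has_vertex_cover(cycle5, 2), has_vertex_cover(cycle5, 3))   # False True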

Parameterized Reductions and Logspace

To exemplify an FPT-reduction, consider the problem

  • Short NTM Acceptance: Does a given nondeterministic Turing machine {N} accept the empty string within {k} steps?


When {k} is variable this is a generic {\mathsf{NP}}-complete problem, so of course we can reduce Clique to it, but consider the reduction done this way: Given {G} and {k}, make an alphabet {\Gamma} with a character {c_u} for each {u \in V}. Code {N_G} with a table for {E} to write down {k' = \binom{k}{2}} edges {(c_u,c_v)} nondeterministically on its tape, then deterministically check whether only {k} different nodes appear in these edges, accepting if so. The computation takes time {k''} that is {O(k')} and can be defined solely in terms of {k}. The latter fact makes {f(G,k) = (N,k'')} an FPT-reduction.

FPT-reductions work similarly to ordinary polynomial-time reductions. If our NTM problem belongs to {\mathsf{FPT}} then so does Clique. There is a trickier FPT-reduction the other way, so the problems are FPT-equivalent. Both are complete for the first level, called {\mathsf{W[1]}}, of the {\mathsf{W}}-hierarchy. Dominating Set is hard for the second level, {\mathsf{W[2]}}; Clique FPT-reduces to it, but not vice-versa unless {\mathsf{W[2] = W[1]}}.

Just as all major {\mathsf{NP}}-complete problems are related by logspace, not just poly-time, computable reductions, parameterized complexity has a logspace-founded version. The class {\mathsf{XL}} consists of parameterized problems {B} whose slices {B_k} individually belong to {\mathsf{L}}—that is, to deterministic logspace—and {\mathsf{XNL}} and {\mathsf{XP}} are defined similarly via the conditions {B_k \in \mathsf{NL}} and {B_k \in \mathsf{P}}. Interestingly, the separation {\mathsf{FPT \subset XP}} is known by standard diagonalization means, but {\mathsf{FPT}} is not known to be contained in {\mathsf{XNL}} for reasons similar to the {\mathsf{P}} vs. {\mathsf{NL}} problem. A recent Master's thesis by Jouke Witteveen has details on these classes.

All four of our problems belong to {\mathsf{XL}}. Furthermore, the slices {B_k} belong to “almost exactly {k\log n}” space using binary TMs. The space is needed mainly to write down each candidate {S} and can be quantified as {k\log n + o(\log n)}. We could jump from this to define finer variations on {\mathsf{XL}} and {\mathsf{XNL}} but Mike, joined by Swernofsky, chose to refine the reductions instead. Now we are primed to see how their ideas might impact separations.

LBL Reductions

Our reduction from Clique to Short NTM Acceptance bumped {k} up to {k'' = \Theta(k^2)}. Blowups in {k} are important to a central concept called kernelization which deserves more space—see this survey. They can be exponential—the {O(2^k n)} algorithm for Vertex Cover hints at how—or even worse. Hence people have refined parameterized reductions according to whether they blow up {k}

  • polynomially, or

  • linearly.

But before now, we haven’t noted in this line a strong motivation to limit the blowup even further, such as to {k'' = k + o(k)} or {k'' = k + d} for some fixed constant {d} (however, see this).

Our reduction also bumped up the alphabet size. This appears necessary because for binary NTMs the problem is in {\mathsf{FPT}}: we can traverse the {2^k} possible guesses made on the tape, and for each one, solve an instance of graph-reachability.

So what can we do if we insist on binary TMs? Mike and Joseph fastened on to the idea of making the {k} in {k\log n} space become the parameter. We can define “Log NTM Acceptance” to be like “Short NTM Acceptance” except that the NTM {N} is allowed {k\log n} space (where {n} is the size of {N} in states). We get a problem in {\mathsf{XNL}} that is in fact complete for {\mathsf{XNL}} under parameterized logspace reductions. Likewise “Log DTM Acceptance” is the case where {N} is deterministic, which is complete for {\mathsf{XL}}. Then we define “Medium DTM Acceptance” where the question asks about acceptance within {n^k} time steps regardless of space. Mike’s thesis also covers “Long DTM Acceptance” where the time is {2^{n^k}}.

The {k} in {k\log n} allows a binary TM to write down the labels of {k} nodes in a size-{n} structure at a time. In that way it subsumes the use of {k} in the above three graph problems but is more fluid—the {k\log n} space could be used for anything. Keying on {k} drives the motivation for Mike’s definition of LBL (for “level-by-level”) reductions, whose uniform logspace version is as follows:

Definition 1 A parameterized problem {[A_k]} LBL-reduces to a parameterized problem {[B_k]} if there is a function {f} computable in {c\log n} space such that for all {x} and {k}, {x \in A_k \iff f(x,k) \in B_k}, where {c} is fixed independent of {k}.

That is, in the general FPT-reduction form {f(x,k) = (x',h(k))}, we insist on {h(k) = k} exactly. This reduction notion turns out to be the neatest way to express how Mike’s thesis refines and extends previous work, even his own.

[Photo: Mike with our Chair, Chunming Qiao, and his UB CSE Graduate Leadership Award.]

Some Results

The first reason to care about the sharper reduction is the following theorem which really summarizes known diagonalization facts. The subscript {b} reminds us that the TMs have a binary work alphabet; it is not needed on {\mathsf{DTIME}[n^{ck}]} since the alphabet size does not affect the exponent.

Theorem 2

  • If a parameterized problem {[A_k]} is LBL-hard for Log DTM Acceptance, then there is a constant {c > 0} such that for all {k}, {A_k \notin \mathsf{DSPACE}_b[ck\log n]}.

  • If a parameterized problem {[A_k]} is LBL-hard for Log NTM Acceptance, then there is a constant {c} such that for all {k}, {A_k \notin \mathsf{NSPACE}_b[ck\log n]}.

  • If {[A_k]} is LBL-hard for Medium DTM Acceptance, then there is a constant {c} such that for all {k}, {A_k \notin \mathsf{DTIME}[n^{ck}]}.

Moreover, if there is a constant {d} such that each {A_k} belongs to {\mathsf{DSPACE}_b[dk\log n]}, {\mathsf{NSPACE}_b[dk\log n]}, or {\mathsf{DTIME}[n^{dk}]}, respectively, then these become LBL-equivalences.

The power, however, comes from populating these hardnesses and equivalences with natural problems. This is where the {\mathsf{IE}} problem—as a parameterized family of problems asking about the intersection of the languages of {k}-many {n}-state DFAs—comes in. We can augment them by adding one pushdown store to just one of the DFAs, calling them {P_1,M_2,\dots,M_k} where {P_1} is the PDA. Then call the problem of whether

\displaystyle  L(P_1) \cap L(M_2) \cap \cdots \cap L(M_k) \neq \emptyset

by the name {\mathsf{IE}_{D+1P}}. It curiously turns out not to matter whether {P_1} or the {M_i} are nondeterministic for the following results:

Theorem 3

  • {\mathsf{IE}} is LBL-equivalent to Log NTM Acceptance. Hence there is a {c > 0} such that for each {k}, {\mathsf{IE}_k \notin \mathsf{NSPACE}_b[ck\log n]}.

  • {\mathsf{IE}_{D+1P}} is LBL-equivalent to Medium DTM Acceptance. Hence there is a {c > 0} such that for each {k}, {(\mathsf{IE}_{D+1P})_k \notin \mathsf{DTIME}[n^{ck}]}.

Between the ICALP 2015 paper and writing the thesis, Mike found that the consequence of (a) had been obtained in a 1985 paper by Takumi Kasai and Shigeki Iwata. Mike’s thesis has further results establishing a whole spectrum of {\mathsf{IE}} variants that correspond to other major complexity classes and provide similar lower bounds on {k}-levels within them. The question becomes how to leverage the greater level of detail to attack lower bound problems.

Onto Bigger Questions

The way the attack on complexity questions is two-pronged is shown by considering the following pair of results.

Theorem 4 {\mathsf{P = NL}} if and only if {\mathsf{IE}} and {\mathsf{IE}_{D+1P}} are LBL-equivalent.

Theorem 5 If for every {a > 0} there is a {k} such that {\mathsf{IE}_k \in \mathsf{DTIME}[n^{ak}]}, then {\mathsf{P \neq NL}}.

It has been known going back to Steve Cook in 1971 that logspace machines with one auxiliary pushdown, whether deterministic or nondeterministic, capture exactly {\mathsf{P}}. Since pushdowns are restricted and concrete, this raised early hope of separating {\mathsf{P}} from {\mathsf{PSPACE}}—intuitively by “demoting” some aspect of a {\mathsf{P}} problem down to {\mathsf{L}} or {\mathsf{NL}} which are known to be different from {\mathsf{PSPACE}}. There was also hope of separating {\mathsf{P}} and {\mathsf{L}} or at least finding more-critical conditions that pertain to their being equal.

What Mike’s theorem does is shift the combinatorial issue to what happens when the pushdown is added to a collection of DFAs. Unlike the case with an auxiliary pushdown to a {k\log(n)} space-bounded TM, there is a more-concrete sense in which the relative influence of the pushdown might “attenuate” as {k} increases. Can this be leveraged for a finer analysis that unlocks some secrets of lower bounds?

At the very least, Mike’s LBL notion provides a succinct way to frame finer questions. For example, how does {\mathsf{P = PSPACE}} hang on the relation between individual {\mathsf{DTIME[n^k]}} levels of {\mathsf{P}} and {\mathsf{DSPACE[n^k]}} levels of {\mathsf{PSPACE}}? The simplest way to express this, using Mike’s notation {D^T_{n^k}} for Medium DTM Acceptance and {D^S_{n^k}} for the analogous parameterized problem for {n^k} space-bounded acceptance, seems to be:

Theorem 6 {\mathsf{P = PSPACE}} if and only if {D^T_{n^k}} and {D^S_{n^k}} are LBL-equivalent.

Mike’s thesis itself is succinct, at 73 pages, yet contains a wealth of other {\mathsf{IE}} variants for tree-shaped automata and other restricted models and connections to forms of the Exponential Time Hypothesis.

Open Problems

How can one appraise the benefit of expressing “polynomial” and “{O(\log n)}” complexity questions consciously in terms of individual {n^k} and {k\log n} levels? Does the LBL reduction notion make this more attractive?

It has been a pleasure supervising Mike and seeing him blaze his way in the larger community, and Dick and I wish him all the best in his upcoming endeavors.

[fixed first part of Theorem 2, added link to Cygan et al. 2011 paper, added award photo]


by KWRegan at January 19, 2017 04:31 AM UTC

Everyone who plays tic-tac-toe quickly learns that the nine squares have different strengths. The center square is best, because it belongs to four lines. The corners, on three lines each, are next. And weakest are the squares in the middle of a side of the tic-tac-toe grid, which are only on two lines each. What about higher dimensions? (Let's ignore the fact that tic-tac-toe on a 3×3×... grid is an easy win in more than two dimensions.) Which are the weak cells, and which are the strong ones?

For instance, the image below shows a four-dimensional tic-tac-toe grid, projected to the plane. Each cell of the grid becomes a colored dot; the five colors distinguish the cells by the dimension of the hypercube face that they're the midpoint of. We have vertices (blue), edges (yellow), squares (red), cubes (white), and the center of the hypercube itself (black). But I've only shown a subset of the lines, four through each point. If I included all the lines, which colors would be more powerful (on more lines), and which less?



It is not hard to work out the formula for this, if we give each of these points coordinates that are d-tuples of the numbers in the set {−1,0,1}. If k of these coordinates are equal to 0, then the given point is the middle point of (3^k − 1)/2 lines. These lines can be described by replacing each zero by one of three possibilities: 0, x, or −x, and then plugging in 1, 0, and −1 as the value of x. There are 3^k choices for how to replace the zeros, one of which doesn't give a line: if we leave all the zeros as they are, then all three choices of x give the same point. But for any other set of replacements, we get a line. Each line gets represented twice, because we can negate all the replaced coordinates and get the same line. That double-counting explains the division by two in the formula.

These aren't the only lines, though. If there are k zeros in the coordinates of a point, there are d − k nonzeros. These, also, can be replaced, either keeping the same value in place, replacing a −1 with −x, or replacing a 1 by x. When we plug in 1, 0, and −1 as the value of x, we get a line on which the given point is the first point. This time there is no double-counting, but there are only two choices per coordinate, so we get 2^(d − k) − 1 lines.

The center point (the one that in this system has all-zero coordinates) is always the winner, the most powerful point. It has the maximum number of lines possible, because they cover all of the other points in pairs.

To find the least powerful point, we need to minimize (3^k − 1)/2 + 2^(d − k) − 1. This can easily be done numerically, showing for instance that in our 4d picture the black point is on 40 lines, the white points are on 14, the red points are on 7, the yellow are on 8, and the blue points are on 15. The red points, the middle-dimensional ones, are the losers in this case.

However, in higher dimensions, the weakest cells are not the middle-dimensional ones. To minimize the total number of lines, we want the two types of lines (the ones where the point is in the middle, and the ones where the point is first) to be roughly balanced in number. Ignoring the subtraction of one and division by two in the formulas (as these do not significantly affect the result when the dimension is high) this balance is achieved for points whose number of zero coordinates is approximately (log_6 2)d, and whose number of nonzero coordinates is approximately (log_6 3)d. These, then, are the weakest points in high-dimensional tic-tac-toe boards.

This also shows that, in high dimensions, no point is extremely weak. They are all on a number of lines that grows as a polynomial of the size of the whole board. If there are n cells on the whole board, then the number of lines through even the weakest cell is proportional to n^(log_6 2) ≈ n^0.387.
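A few lines of code (mine, not from the post) reproduce these counts from the formula (3^k − 1)/2 + 2^(d − k) − 1 and locate the weakest cells in higher dimensions.

    import math

    def lines_through(d, k):
        """Number of winning lines through a cell with k zero coordinates on the 3^d board."""
        return (3 ** k - 1) // 2 + 2 ** (d - k) - 1

    # d = 4: [15, 8, 7, 14, 40] for k = 0..4, i.e. blue, yellow, red, white, black
    print([lines_through(4, k) for k in range(5)])

    # The weakest k tracks (log_6 2) * d as d grows
    for d in (4, 10, 20):
        weakest = min(range(d + 1), key=lambda k: lines_through(d, k))
        print(d, weakest, round(math.log(2, 6) * d, 2))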

at January 19, 2017 01:24 AM UTC

Authors: Wouter van Toll, Atlas F. Cook IV, Marc J. van Kreveld, Roland Geraerts
Download: PDF
Abstract: Path planning for walking characters in complicated virtual environments is a fundamental task in simulations and games. In this paper, we present an improved definition of the Explicit Corridor Map (ECM), a navigation mesh that allows efficient path planning and crowd simulation for disk-shaped characters of any radius. The ECM is a medial axis (MA) annotated with nearest-obstacle information. For a planar environment with $n$ obstacle vertices, the ECM has size $O(n)$ and can be computed in $O(n \log n)$ time.

We also introduce multi-layered environments (MLEs), in which multiple planar layers are connected by line segment connections. Typical real-world examples are multi-storey buildings, train stations, and sports stadiums. We define the MA and the ECM for multi-layered environments, based on projected distances on the ground plane. For an MLE with $n$ obstacle points and $k$ connections, the MA has size $O(n)$. We present an improved algorithm that constructs the MA and ECM in $O(n \log n \log k)$ time.

Our implementations show that the ECM can be computed efficiently for large 2D and multi-layered environments, and that it can be used to compute paths within milliseconds. This enables simulations of large virtual crowds of heterogeneous characters in real-time.

at January 19, 2017 12:00 AM UTC

Ignacio Fábregas has kindly sent me this report on the first two days of SOFSEM 2017, which is being held this week in Limerick, Ireland. I hope you'll enjoy reading it.  Many thanks to Ignacio for agreeing to write this guest post.
 
SOFSEM has always been an atypical conference, with a format closer to that of a Winter School. This year marks a new beginning for the SOFSEM conference, since it's the first time it has been held outside the Czech Republic or the Slovak Republic. As Jan van Leeuwen told us at the Welcome Reception on Monday, the goal of the Steering Committee of SOFSEM is to keep the spirit of SOFSEM alive while moving to a more typical format for a computer science conference. The main ingredients are still there: the keynote talks by renowned experts, tutorial sessions and the Student Research Forum; the main differences are the country (Ireland) and the schedule (instead of having all the keynote talks on the first and last days, they are now distributed across the conference days).

On the first day of the conference, the keynote speaker was Kim G. Larsen. He told us about Cyber-Physical Systems (CPS), that is, systems combining computing elements with dedicated hardware and software that must monitor and control a particular physical environment. In order to study them, Larsen and his team propose a model-based approach, powered by the use of the tool Uppaal. An example of a CPS is adaptive cruise control for cars, an application where Europe is trying to match the U.S. (where the Google Self-Driving car has already been approved by legislation in several U.S. states) and where the team of Kim Larsen is making advances.

After the keynote talk we had the first parallel sessions: two sessions of the Foundations of Computer Science (FOCS) track and the first tutorial. I went to the FOCS session about Semantics, Specification and Compositionality, mainly because my talk was there :) But, more importantly, there was a talk by Uli Fahrenberg and another by Linda Brodo. Uli talked about behavioural specification theories for most equivalences in the linear-time–branching-time spectrum. He humbly defined his work as "almost" a rewriting of some old and overlooked results. Going back, reading, and, most importantly, understanding old results is a non-trivial exercise that computer science researchers should do more often. On the other hand, Linda talked about the link-calculus, a model that extends the point-to-point communication discipline of Milner's CCS with multiparty interactions. She used links to build chains describing the flow of information among the agents participating in that interaction. The difficult part comes when deciding both the number of participants in an interaction and how they synchronize.

On the second day of the conference we had two keynote talks. Axel Legay talked about Software Product Lines (SPL), that is, families of similar software products built from a common set of features; for example, the families of mobile phones that a company such as Samsung or HTC produces. He showed us how he and his team formalize SPLs by means of what they call Featured Transition Systems and how they designed verification algorithms and tools to check temporal properties over them. The second keynote talk was by Marjan Mernik. He told us about Domain-Specific Languages that assist software developers in programming, and some open problems in the field, such as the lack of tool support for different Domain-Specific Languages and the difficulties in combining them.

As for the conference papers, in the FOCS session on Verification and Automated System Analysis we had talks by Mohammad Reza Mousavi, Nataliya Gribovskaya, Zhaowei Xu and Saket Saurabh. I want to highlight Mohammad's talk about the complexity of deriving invertible sequences from finite-state machines. The problem is the following: checking the existence of unique input/output sequences (UIOs) that identify states of the finite state machine specification (as is the case for ioco) is PSPACE-complete, so some UIO generation algorithms utilise what are called invertible sequences; these allow one to construct additional UIOs once a UIO has been found. Mohammad and his coauthors studied some optimization problems and their complexity.

And that's all for now. Tomorrow (18 January 2017) we'll have fewer talks since we have the excursion and conference dinner. Also I'm planning to go for one tutorial session by Andrew Butterfield. I'll tell you more on that.

by Luca Aceto (noreply@blogger.com) at January 18, 2017 06:48 PM UTC

When I cover directed acyclic graphs for my algorithms classes, I usually mention as an example (familiar to the students) a DAG with university courses as its vertices and with prerequisites as its edges.

Today I learned that this is a lie, or maybe I should say more of a lie than I already thought it was. It turns out that, in our course catalog, prerequisites are not just lists of courses that all must be taken first. Instead, even if we ignore the grade thresholds in some of them, they are complicated and-or formulas describing the combinations of courses that are allowed to be used as preparation. Some of them are written in conjunctive normal form (ors of ands), some are in disjunctive normal form (ands of ors), and at least one (our information retrieval course) requires multiple levels of parentheses in its formula.
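Those and-or prerequisites are just monotone Boolean formulas over courses, so a tiny evaluator makes the point; the course codes and the formula below are made up rather than taken from the actual catalog.

    # A prerequisite is either a course code (string) or a pair ("and"/"or", [sub-formulas]).
    def satisfied(prereq, completed):
        """Evaluate an and-or prerequisite formula against a set of completed courses."""
        if isinstance(prereq, str):
            return prereq in completed
        op, parts = prereq
        results = [satisfied(p, completed) for p in parts]
        return all(results) if op == "and" else any(results)

    # Hypothetical: (CS10 and CS20) and (MATH3A or MATH6G or MATH9)
    info_retrieval = ("and", ["CS10", "CS20", ("or", ["MATH3A", "MATH6G", "MATH9"])])
    print(satisfied(info_retrieval, {"CS10", "CS20", "MATH6G"}))   # True
    print(satisfied(info_retrieval, {"CS10", "MATH3A"}))           # False: CS20 is missing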

I had thought that this was mostly because some courses that have different numbers really have the same content, and should be treated as equivalent for the purposes of the prerequisite ordering. In order-theoretic terms, that would mean that we have a quasi-order rather than a partial order. For instance, we went through a refactoring of our curriculum a few years back and our prerequisite listings still include the numbers for both our old sophomore data structures course and our new one. But that's just to avoid blocking the leftover students who took it before we changed; you wouldn't want to take both classes. Or, we have three different ways you can learn linear algebra: by doing lots of hand calculations (the way the engineers want), by learning about vector spaces as abstract coordinate-free things (the way the mathematicians want), or by interacting with MATLAB or the equivalent (the way the computer scientists, and especially the machine learning people, want). So my feeling was that, at least when you restricted the set of classes to the ones taken by any individual student, you would get a partial order.

But even that is not true. Take a look early in the linked catalog page, at our course CS 113, computer game development. Its prerequisites are one of computer graphics, AI, software design, critical analysis of games, graph algorithms, or game design. Why that list? I don't know, it's not my course. But it's completely reasonable to take more than one of these courses, in various different orderings, and the fact that it's an "or" of these courses rather than an "and" means that we're not treating this list the way we would for topological ordering of DAGs.

So, if not a DAG, what is the prerequisite structure? An antimatroid! Or almost. Once you've fulfilled the prerequisites for a course, you can take the course in any later term that it's offered — they don't go stale. More abstractly, an element, once available to be included in an ordering, remains available until it is used. And this is the defining property of an antimatroid. The "almost" is because there are also some constraints that you can't take both of some pairs of classes for credit, but this only applies if you look at the whole course catalog at once. If you look at what any individual student takes, I think it really is an antimatroid. Of course, I may not have examined the catalog closely enough to find the exceptions...

at January 18, 2017 03:20 AM UTC

Authors: Charis Papadopoulos, Spyros Tzimas
Download: PDF
Abstract: Given a vertex-weighted graph $G=(V,E)$ and a set $S \subseteq V$, a subset feedback vertex set $X$ is a set of the vertices of $G$ such that the graph induced by $V \setminus X$ has no cycle containing a vertex of $S$. The \textsc{Subset Feedback Vertex Set} problem takes as input $G$ and $S$ and asks for the subset feedback vertex set of minimum total weight. In contrast to the classical \textsc{Feedback Vertex Set} problem which is obtained from the \textsc{Subset Feedback Vertex Set} problem for $S=V$, restricted to graph classes the \textsc{Subset Feedback Vertex Set} problem is known to be NP-complete on split graphs and, consequently, on chordal graphs. However as \textsc{Feedback Vertex Set} is polynomially solvable for AT-free graphs, no such result is known for the \textsc{Subset Feedback Vertex Set} problem on any subclass of AT-free graphs. Here we give the first polynomial-time algorithms for the problem on two unrelated subclasses of AT-free graphs: interval graphs and permutation graphs. As a byproduct we show that there exists a polynomial-time algorithm for circular-arc graphs by suitably applying our algorithm for interval graphs. Moreover towards the unknown complexity of the problem for AT-free graphs, we give a polynomial-time algorithm for co-bipartite graphs. Thus we contribute to the first positive results of the \textsc{Subset Feedback Vertex Set} problem when restricted to graph classes for which \textsc{Feedback Vertex Set} is solved in polynomial time.

at January 18, 2017 02:02 AM UTC

Authors: Iliano Cervesato, Maribel Fernández
Download: PDF
Abstract: This volume contains the papers presented at LINEARITY 2016, the Fourth International Workshop on Linearity, held on June 26, 2016 in Porto, Portugal. The workshop was a one-day satellite event of FSCD 2016, the first International Conference on Formal Structures for Computation and Deduction.

The aim of this workshop was to bring together researchers who are developing theory and applications of linear calculi, to foster their interaction and provide a forum for presenting new ideas and work in progress, and enable newcomers to learn about current activities in this area. Of interest were new results that made a central use of linearity, ranging from foundational work to applications in any field. This included: sub-linear logics, linear term calculi, linear type systems, linear proof-theory, linear programming languages, applications to concurrency, interaction-based systems, verification of linear systems, and biological and chemical models of computation.

at January 18, 2017 02:01 AM UTC

Authors: Pravesh K. Kothari, Ryuhei Mori, Ryan O'Donnell, David Witmer
Download: PDF
Abstract: Let $P:\{0,1\}^k \to \{0,1\}$ be a nontrivial $k$-ary predicate. Consider a random instance of the constraint satisfaction problem $\mathrm{CSP}(P)$ on $n$ variables with $\Delta n$ constraints, each being $P$ applied to $k$ randomly chosen literals. Provided the constraint density satisfies $\Delta \gg 1$, such an instance is unsatisfiable with high probability. The \emph{refutation} problem is to efficiently find a proof of unsatisfiability.

We show that whenever the predicate $P$ supports a $t$-\emph{wise uniform} probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree $d = \Theta(\frac{n}{\Delta^{2/(t-1)} \log \Delta})$ (which runs in time $n^{O(d)}$) \emph{cannot} refute a random instance of $\mathrm{CSP}(P)$. In particular, the polynomial-time SOS algorithm requires $\widetilde{\Omega}(n^{(t+1)/2})$ constraints to refute random instances of CSP$(P)$ when $P$ supports a $t$-wise uniform distribution on its satisfying assignments. Together with recent work of Lee et al. [LRS15], our result also implies that \emph{any} polynomial-size semidefinite programming relaxation for refutation requires at least $\widetilde{\Omega}(n^{(t+1)/2})$ constraints.

Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate~$P$, they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of Allen et al. [AOW15] and Raghavendra et al. [RRS16], this full three-way tradeoff is \emph{tight}, up to lower-order factors.

at January 18, 2017 02:00 AM UTC

Authors: D. M. Stull
Download: PDF
Abstract: We prove a downward separation for $\mathsf{\Sigma}_2$-time classes. Specifically, we prove that if $\Sigma_2$E does not have polynomial size non-deterministic circuits, then $\Sigma_2$SubEXP does not have \textit{fixed} polynomial size non-deterministic circuits. To achieve this result, we use Santhanam's technique on augmented Arthur-Merlin protocols defined by Aydinlio\u{g}lu and van Melkebeek. We show that augmented Arthur-Merlin protocols with one bit of advice do not have fixed polynomial size non-deterministic circuits. We also prove a weak unconditional derandomization of a certain type of promise Arthur-Merlin protocols. Using Williams' easy hitting set technique, we show that $\Sigma_2$-promise AM problems can be decided in $\Sigma_2$SubEXP with $n^c$ advice, for some fixed constant $c$.

at January 18, 2017 02:00 AM UTC

The Polymath10 project on the Erdos-Rado Delta-System conjecture took place over this blog from November 2015 to May 2016. I aimed for an easy-going project that people could participate calmly aside from their main research efforts and  the duration of the project was planned for one year. I also wanted to propose and develop my own homological approach to the problem.

The purpose of this post is to formally (and belatedly) announce that the project has ended, to give links to the individual posts, and to briefly mention some advances and some thoughts about it.

The posts were

The problem was not solved and we did not come near a solution. The posts contain a summary of the discussions, a few results, and some proposals by the participants. Philip Gibbs found a remarkable relation between the general case and the balanced case. Dömötör Palvolgyi shot down quite a few conjectures I made, and Ferdinand Ihringer presented results about some Erdos-Ko-Rado extensions we considered (in terms of upper bounds for sunflower-free families). Several participants made interesting proposals for attacking the problem.

I presented in the second post a detailed homological approach, and developed it further in the later threads  with the help of Eran Nevo and a few others. Then, after a major ingredient was shot down, I revised it drastically in the last post.

Participants ran several computer experiments: on sunflower-free sets, on random sunflower-free sets, and also on the homological/algebraic ideas.

The posts (and some comments) give useful links to the literature on the problem, and post 5 was devoted to a startling development that occurred separately: the solution of the Erdos-Szemeredi sunflower conjecture for sunflowers with three petals, following the cap set developments. (The Erdos-Szemeredi sunflower conjecture is weaker than the Erdos-Rado conjecture.)

 

The origin of my homological approach


A (too) strong version of the homological conjecture appeared in my 1983 Ph.D. thesis, which was written in Hebrew. The typesetting used the Hebrew version of Troff.


by Gil Kalai at January 17, 2017 07:06 PM UTC

Authors: Andreas Alpers, Peter Gritzmann
Download: PDF
Abstract: Super-resolution imaging aims at improving the resolution of an image by enhancing it with other images or data that might have been acquired using different imaging techniques or modalities. In this paper we consider the task of doubling the resolution of tomographic grayscale images of binary objects by fusion with double-resolution tomographic data that has been acquired from two viewing angles. We show that this task is polynomial-time solvable if the gray levels have been reliably determined. The task becomes $\mathbb{N}\mathbb{P}$-hard if the gray levels of some pixels come with an error of $\pm1$ or larger. The $\mathbb{N}\mathbb{P}$-hardness persists for any larger resolution enhancement factor. This means that noise affects not only the quality of a reconstructed image but also, less expectedly, the algorithmic tractability of the inverse problem itself.

at January 17, 2017 12:00 AM UTC

Authors: Sepehr Assadi, Sanjeev Khanna, Yang Li
Download: PDF
Abstract: We study the problem of estimating the maximum matching size in graphs whose edges are revealed in a streaming manner. We consider both insertion-only streams and dynamic streams and present new upper and lower bound results for both models.

On the upper bound front, we show that an $\alpha$-approximate estimate of the matching size can be computed in dynamic streams using $\widetilde{O}({n^2/\alpha^4})$ space, and in insertion-only streams using $\widetilde{O}(n/\alpha^2)$-space. On the lower bound front, we prove that any $\alpha$-approximation algorithm for estimating matching size in dynamic graph streams requires $\Omega(\sqrt{n}/\alpha^{2.5})$ bits of space, even if the underlying graph is both sparse and has arboricity bounded by $O(\alpha)$. We further improve our lower bound to $\Omega(n/\alpha^2)$ in the case of dense graphs.

Furthermore, we prove that a $(1+\epsilon)$-approximation to matching size in insertion-only streams requires RS$(n) \cdot n^{1-O(\epsilon)}$ space; here, RS$(n)$ denotes the maximum number of edge-disjoint induced matchings of size $\Theta(n)$ in an $n$-vertex graph. It is a major open problem to determine the value of RS$(n)$, and current results leave open the possibility that RS$(n)$ may be as large as $n/\log n$. We also show how to avoid the dependency on the parameter RS$(n)$ in proving lower bounds for dynamic streams and present a near-optimal lower bound of $n^{2-O(\epsilon)}$ for $(1+\epsilon)$-approximation in this model.

Using a well-known connection between matching size and matrix rank, all our lower bounds also hold for the problem of estimating matrix rank. In particular our results imply a near-optimal $n^{2-O(\epsilon)}$ bit lower bound for $(1+\epsilon)$-approximation of matrix ranks for dense matrices in dynamic streams, answering an open question of Li and Woodruff (STOC 2016).

at January 17, 2017 12:00 AM UTC

Authors: Amir Hashemi, Joos Heintz, Luis Miguel Pardo, Pablo Solernó
Download: PDF
Abstract: We introduce a "workable" notion of degree for non-homogeneous polynomial ideals and formulate and prove ideal theoretic B\'ezout Inequalities for the sum of two ideals in terms of this notion of degree and the degree of generators. We compute probabilistically the degree of an equidimensional ideal.

at January 17, 2017 02:01 AM UTC

Authors: Tong Yang, Lingtong Liu, Yibo Yan, Muhammad Shahzad, Yulong Shen, Xiaoming Li, Bin Cui, Gaogang Xie
Download: PDF
Abstract: A sketch is a probabilistic data structure that is used to record frequencies of items in a multi-set. Various types of sketches have been proposed in the literature and applied in a variety of fields, such as data stream processing, natural language processing, distributed data sets, etc. While several variants of sketches have been proposed in the past, existing sketches still have significant room for improvement in terms of accuracy. In this paper, we propose a new sketch, called the Slim-Fat (SF) sketch, which has a significantly higher accuracy compared to prior art, a much smaller memory footprint, and at the same time achieves the same speed as the best prior sketch. The key idea behind our proposed SF-sketch is to maintain two separate sketches: a small sketch called the Slim-subsketch and a large sketch called the Fat-subsketch. The Slim-subsketch, stored in the fast memory (SRAM), enables fast and accurate querying. The Fat-subsketch, stored in the relatively slow memory (DRAM), is used to assist the insertion and deletion from the Slim-subsketch. We implemented and extensively evaluated SF-sketch along with several prior sketches and compared them side by side. Our experimental results show that SF-sketch outperforms the most commonly used CM-sketch by up to 33.1 times in terms of accuracy. The concise version of our paper will appear in ICDE 2017.
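
As a very rough illustration of the two-sketch idea described in the abstract (and emphatically not the authors' SF-sketch, whose actual update and deletion rules differ), here is a toy Python pairing of a small count-min "slim" structure with a larger "fat" one, where the fat sketch mediates updates to the slim one; all class and parameter names are made up for this sketch.

import random

class CMSketch:
    """Plain count-min sketch: d rows of w counters, each item hashed to one counter per row."""
    def __init__(self, w, d, seed):
        rnd = random.Random(seed)
        self.w = w
        self.salts = [rnd.getrandbits(64) for _ in range(d)]
        self.table = [[0] * w for _ in range(d)]

    def cells(self, item):
        for row, salt in enumerate(self.salts):
            yield row, hash((salt, item)) % self.w

    def add(self, item, count=1):
        for row, col in self.cells(item):
            self.table[row][col] += count

    def query(self, item):
        return min(self.table[row][col] for row, col in self.cells(item))


class TwoLevelSketch:
    """Toy slim/fat pairing: every increment goes to the large 'fat' sketch, and a slim
    counter is raised only up to the fat sketch's current estimate, which limits the slim
    sketch's overestimation while queries read the slim sketch only."""
    def __init__(self, slim_w=256, fat_w=4096, d=4):
        self.slim = CMSketch(slim_w, d, seed=1)
        self.fat = CMSketch(fat_w, d, seed=2)

    def add(self, item, count=1):
        self.fat.add(item, count)
        cap = self.fat.query(item)          # the fat estimate upper-bounds the true count
        for row, col in self.slim.cells(item):
            cell = self.slim.table[row][col]
            self.slim.table[row][col] = max(cell, min(cap, cell + count))

    def query(self, item):
        return self.slim.query(item)        # fast path touches only the slim sketch

Because the fat estimate never falls below an item's true count, the capped slim counters still never underestimate, yet collisions inflate them less than in a stand-alone small sketch.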

at January 17, 2017 12:00 AM UTC

Authors: Neil Lutz, D. M. Stull
Download: PDF
Abstract: This paper investigates the algorithmic dimension spectra of lines in the Euclidean plane. Given any line L with slope a and vertical intercept b, the dimension spectrum sp(L) is the set of all effective Hausdorff dimensions of individual points on L. We draw on Kolmogorov complexity and geometrical arguments to show that if the effective Hausdorff dimension dim(a, b) is equal to the effective packing dimension Dim(a, b), then sp(L) contains a unit interval. We also show that, if the dimension dim(a, b) is at least one, then sp(L) is infinite. Together with previous work, this implies that the dimension spectrum of any line is infinite.

at January 17, 2017 02:01 AM UTC

Authors: Ran Ben Basat, Gil Einziger, Roy Friedman, Yaron Kassner
Download: PDF
Abstract: Monitoring the traffic volumes of elephant flows, including the total byte count per flow, is a fundamental capability for online network measurements. We present an asymptotically optimal algorithm for solving this problem in terms of both space and time complexity. This improves on previous approaches, which can only count the number of packets in constant time. We evaluate our work on real packet traces, demonstrating a speedup of up to 2.5x compared to the best alternative.

at January 17, 2017 12:00 AM UTC

Authors: Jianxin Chen, Andrew M. Childs, Shih-Han Hung
Download: PDF
Abstract: How many quantum queries are required to determine the coefficients of a degree-$d$ polynomial in $n$ variables? We present and analyze quantum algorithms for this multivariate polynomial interpolation problem over the fields $\mathbb{F}_q$, $\mathbb{R}$, and $\mathbb{C}$. We show that $k_{\mathbb{C}}$ and $2k_{\mathbb{C}}$ queries suffice to achieve probability $1$ for $\mathbb{C}$ and $\mathbb{R}$, respectively, where $k_{\mathbb{C}}=\smash{\lceil\frac{1}{n+1}{n+d\choose d}\rceil}$ except for $d=2$ and four other special cases. For $\mathbb{F}_q$, we show that $\smash{\lceil\frac{d}{n+d}{n+d\choose d}\rceil}$ queries suffice to achieve probability approaching $1$ for large field order $q$. The classical query complexity of this problem is $\smash{n+d\choose d}$, so our result provides a speedup by a factor of $n+1$, $\frac{n+1}{2}$, and $\frac{n+d}{d}$ for $\mathbb{C}$, $\mathbb{R}$, and $\mathbb{F}_q$, respectively. Thus we find a much larger gap between classical and quantum algorithms than the univariate case, where the speedup is by a factor of $2$. For the case of $\mathbb{F}_q$, we conjecture that $2k_{\mathbb{C}}$ queries also suffice to achieve probability approaching $1$ for large field order $q$, although we leave this as an open problem.

at January 17, 2017 12:00 AM UTC

Authors: Colin Cooper, Martin Dyer, Catherine Greenhill, Andrew Handley
Download: PDF
Abstract: Mahlmann and Schindelhauer (2005) defined a Markov chain which they called $k$-Flipper, and showed that it is irreducible on the set of all connected regular graphs of a given degree (at least 3). We study the 1-Flipper chain, which we call the flip chain, and prove that the flip chain converges rapidly to the uniform distribution over connected $2r$-regular graphs with $n$ vertices, where $n\geq 8$ and $r = r(n)\geq 2$. Formally, we prove that the distribution of the flip chain will be within $\varepsilon$ of uniform in total variation distance after $\text{poly}(n,r,\log(\varepsilon^{-1}))$ steps. This polynomial upper bound on the mixing time is given explicitly, and improves markedly on a previous bound given by Feder et al.(2006). We achieve this improvement by using a direct two-stage canonical path construction, which we define in a general setting.

This work has applications to decentralised networks based on random regular connected graphs of even degree, as a self-stabilising protocol in which nodes spontaneously perform random flips in order to repair the network.

at January 17, 2017 12:00 AM UTC

Authors: Yu Zhang, Kanat Tangwongsan, Srikanta Tirthapura
Download: PDF
Abstract: We present methods for k-means clustering on a stream with a focus on providing fast responses to clustering queries. When compared with the current state-of-the-art, our methods provide a substantial improvement in the time to answer a query for cluster centers, while retaining the desirable properties of provably small approximation error, and low space usage. Our algorithms are based on a novel idea of "coreset caching" that reuses coresets (summaries of data) computed for recent queries in answering the current clustering query. We present both provable theoretical results and detailed experiments demonstrating their correctness and efficiency.
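
To convey only the caching flavour described above (this is not the authors' algorithm, and the "summary" below is crude weighted sampling rather than a genuine coreset with guarantees), here is a toy Python sketch that keeps a cached summary of the stream prefix and, at query time, merges it with the points that arrived since the previous query before handing the result to any weighted k-means solver; the names and parameters are invented for this illustration.

import random

def summarize(points, weights, size, rng):
    """Toy weighted summary: a uniform sample with rescaled weights.
    (A stand-in for a real coreset construction, which would come with error guarantees.)"""
    if len(points) <= size:
        return list(points), list(weights)
    idx = rng.sample(range(len(points)), size)
    scale = sum(weights) / sum(weights[i] for i in idx)
    return [points[i] for i in idx], [weights[i] * scale for i in idx]


class CachingSummarizer:
    """Keeps a summary of everything seen so far; a clustering query only folds in the
    points that arrived since the last query, instead of re-summarizing the whole stream."""
    def __init__(self, summary_size=200, seed=0):
        self.size = summary_size
        self.rng = random.Random(seed)
        self.cached_pts, self.cached_wts = [], []   # summary of the prefix
        self.buffer = []                            # points since the last query

    def insert(self, point):
        self.buffer.append(point)

    def query_summary(self):
        pts = self.cached_pts + self.buffer
        wts = self.cached_wts + [1.0] * len(self.buffer)
        self.cached_pts, self.cached_wts = summarize(pts, wts, self.size, self.rng)
        self.buffer = []
        # Feed this weighted summary to any weighted k-means routine to answer the query.
        return self.cached_pts, self.cached_wts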

at January 17, 2017 07:58 AM UTC

[Photo: Peter Bro Miltersen and Uri Zwick]

Five years ago I wrote a post entitled Is Backgammon in P? It was based on conversations with Peter Bro Miltersen and Uri Zwick (shown together in the above picture) about the computational complexity of computing the values (and equilibrium points) of various stochastic games, and also on some things I learned from my game theory friends over the years about proving that values exist for some related games. A few weeks ago  two former students of Peter, Rasmus Ibsen-Jensen and  Kristoffer Arnsfelt Hansen visited Israel and I had a chance to chat with them and learn about some recent exciting advances.

In what way is Backgammon harder than chess?

Is there a polynomial time algorithm for chess? Well, if we consider the complexity of chess in terms of the board size, then it is fair to think that the answer is “no”. But if we consider the complexity in terms of the number of all possible positions, then it is easy to go backward over all positions and determine the outcome of the game starting from each given position.
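
To make the "go backward over all positions" idea concrete, here is a minimal Python sketch of backward induction for a finite two-player win/lose game, under the simplifying assumption that the position graph is acyclic (real chess has repetitions and draw rules that need extra care); the moves and terminal_value interfaces are hypothetical placeholders.

def solve(start, moves, terminal_value):
    """Value of `start` for the player to move: +1 win, 0 draw, -1 loss.
    Memoised backward induction; with an acyclic position graph each position
    is processed once, so the time is linear in the number of positions
    (times the number of moves)."""
    memo = {}

    def value(p):
        if p in memo:
            return memo[p]
        t = terminal_value(p)          # +1 / 0 / -1 if the game is over at p, else None
        if t is not None:
            v = t
        else:
            # Negamax: my value is the best of minus the opponent's values.
            v = max(-value(q) for q in moves(p))
        memo[p] = v
        return v

    return value(start)

# Toy usage: a subtraction game where a move removes 1 or 2 from n and
# the player who faces n = 0 has lost.
assert solve(5,
             lambda n: [n - 1, n - 2] if n >= 2 else [n - 1],
             lambda n: -1 if n == 0 else None) == +1

With dice one would also take expectations over the chance moves; the catch, as the next paragraphs discuss, is that play can return to earlier positions, so this simple acyclic induction no longer applies directly.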

Now, what about backgammon? Like chess, backgammon is a game of complete information. The difference between backgammon and chess is the element of luck: at each position your possible moves are determined by a roll of two dice. This element of luck increases the computational skill needed for playing backgammon compared to chess. It can easily be seen that the optimal strategies of the players in backgammon need not involve any randomness.

Problem 1: Is there a polynomial time algorithm to find the optimal strategy (and thus the value) of a stochastic zero sum game with perfect information? (Like backgammon)

This question (raised by Anne Condon in 1998) represents one of the most fundamental open problems in algorithmic game theory.

In what way is heads-up poker harder than backgammon?

Heads-up poker is just a poker game with two players. To make it concrete you may think of heads-up Texas hold’em poker. This is not a game with complete information, but according to the minmax theorem it still has a value. The optimal strategies are mixed and involve randomness.

Problem 2: Is there a polynomial time algorithm to find the optimal strategy (and thus the value) of a stochastic zero-sum game with incomplete information? (like heads-up Texas hold’em poker).

It would be very nice to find even a subexponential algorithm for a stochastic zero-sum game with incomplete information, like poker.

Problem 2′: Is there a subexponential-time algorithm to find the optimal strategy (and thus the value) of a stochastic zero-sum game with incomplete information?

For games with complete information like backgammon, a subexponential algorithm was found by Walter Ludwig and, in greater generality, by Sergei Vorobyov, Henrik Björklund, and Sven Sandberg. It is related to a subexponential simplex-type algorithm for linear programming called RANDOM-FACET, found in the early 1990s by Matousek, Sharir and Welzl, and by myself.

Poker-like games with a bounded number of states are in P!

Kristoffer Arnsfelt Hansen (see abstract below) presented a polynomial-time algorithm for two-person zero-sum stochastic games when the games have a bounded number of states. (Earlier algorithms were exponential.) The paper is Exact Algorithms for Solving Stochastic Games by Kristoffer Arnsfelt Hansen, Michal Koucky, Niels Lauritzen, Peter Bro Miltersen, and Elias P. Tsigaridas. Slides of the talk are linked here.

Practical algorithms for heads-up poker – The Alberta group

As for backgammon, there are very good computer programs. (We talked about chess-playing computers in this guest post by Amir Ban, and since that time Go-playing computers have also become available.) The site Cepheus Poker Project and the Science paper Heads-up limit hold’em poker is solved are good sources on major achievements by a group of researchers from Alberta regarding two-player poker.


What about poker with more players?

Problem 3: Is there a polynomial time algorithm to find a Nash equilibrium point (or another form of optimal strategy) of a stochastic n-player game with incomplete information? (Like Texas hold’em poker.) Here n is fixed and small.

I think that people are optimistic that the answer even to Problem 3 is yes. (There are hardness results for finding equilibrium points in matrix games, but their relevance to our case is not clear.) If we want an algorithm that plays poker optimally, it is not clear that finding a Nash equilibrium is the way to go.

Problem 4: Find an algorithm for playing Texas hold’em poker when there are more than two players.

When the objective is to maximize revenue against human players, I expect that it will be possible to develop computer programs that play poker better than humans.

MEDIOCRITY

Problem 5: How to play the game MEDIAN of the previous post?

Matching pennies

Matching pennies is the name for a simple game used in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match (both heads or both tails), then Even keeps both pennies, so he wins one from Odd (+1 for Even, −1 for Odd). If the pennies do not match (one heads and one tails), Odd keeps both pennies, so he receives one from Even (−1 for Even, +1 for Odd). (Source: Wikipedia.)
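
Written as a payoff matrix for Even (Even picks the row, Odd picks the column), the game and its unique equilibrium are:

$$\begin{array}{c|cc} & H & T \\ \hline H & +1 & -1 \\ T & -1 & +1 \end{array} \qquad \text{value } 0,\ \text{attained when each player plays } (\tfrac{1}{2},\tfrac{1}{2}).$$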

Variants of this game have been played since ancient times. In Hebrew matching pennies is called ZUG O PERET (even or odd; זוג או פרט). It is played like this: There are two players. Each player in his turn makes an announcement “even” or “odd”. Then each of the two players shows (simultaneously) some number of fingers and the announcing player wins if the sum of fingers has the announced parity.

The Big Match

The Big Match is a drastic repeated version of matching pennies. The game is played between players Even and Odd. Each player has a penny and in each stage must secretly turn the penny to heads or tails, and the payoffs are the same as in matching pennies. If Even plays “heads”, the game continues to the next stage. However, if Even plays “tails” (or goes for the “big match”, as it is called), then the payoff in that round is repeated for all future rounds: namely, if the pennies match, Even will get 1 for all future rounds, and if the pennies do not match, Even will pay 1 for all future rounds.

By playing heads with probability 1/2 and tails with probability 1/2 independently in each round, Odd can guarantee an expected payoff of 0: whatever Even does, each stage payoff is then +1 or −1 with probability 1/2 each, and the same is true of the absorbing payoff if Even ever plays tails. But what about Even? Can he also guarantee an expected payoff of 0? This was an open question for quite some time. The Big Match was introduced in 1957 by Dean Gillette, who asked whether the game has a value, namely whether Even has a strategy that guarantees a payoff of 0.

Problem 7: Does the Big Match have a value?

Here is a blog post on the Big Match by Presh Talwalkar on his blog “mind your decisions.” You can also read about the Big Match in this post of Kevin Bryan’s economics blog “a fine theorem.”

In 1968, David Blackwell and Thomas S. Ferguson settled Gillette’s question and proved that Even can guarantee a zero payoff, and thus the Big Match does in fact have a value. This was the first step toward showing that all zero-sum stochastic games have a value under the limiting average payoff, which was proven in 1981 by Mertens and Neyman.

The Big Match in small space

Rasmus Ibsen-Jensen presented both positive and negative results on attaining the value for the big match with limited types of strategies and also on complexity issues regarding other stochastic games.  Here are the slides for Rasmus’ talk (see full abstract below). Part of the talk is based on the paper The Big Match in Small Space by Kristoffer Arnsfelt Hansen, Rasmus Ibsen-Jensen, and Michal Koucky.

Values and equilibrium points for stochastic games.

This is a remarkable story with very important results and open questions.  Here is the Wikipedia article on stochastic games and this short paper by Eilon Solan. I see now that the post is becoming too long  and I will have to talk about it in a different post.

Problem 8 (informal): Does every stochastic game have a value and an equilibrium?

Following a major step by Truman Bewley and Elon Kohlberg (1976),  Jean-François Mertens and Abraham Neyman (1981) proved that every two-person zero-sum stochastic game with finitely many states and actions has a uniform value. Nicolas Vieille (2000) has shown that all two-person stochastic games with finite state and action spaces have a limiting-average equilibrium payoff. The big question is to extend Vieille’s result to games with many players.


Kristoffer, Rasmus and Abraham (Merale) Neyman.

The new advances:  Kristoffer’s lecture

Exact algorithms for solving stochastic games

Speaker: Kristoffer Arnsfelt Hansen, Aarhus University
==================================================

 

In this talk we consider two-player zero-sum stochastic games with finite state and action spaces from an algorithmic perspective. Prior to our work, algorithms for solving stochastic games relied either on generic reductions to decision procedures for the first-order theory of the reals or on value or strategy iteration. For all these algorithms, the complexity is at least exponential even when the number of positions is a constant, and even when only a crude approximation is required.

We will present an exact algorithm for solving these games, based on a simple recursive bisection pattern. The algorithm runs in polynomial time when the number of positions is constant, and our algorithms are the first with this property. While the algorithm is not based directly on real algebraic geometry, it depends heavily on results from that field.
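
As a very loose illustration of a bisection pattern (not the algorithm of the talk, whose recursion and exact algebraic certificates are far more delicate), here is a generic Python skeleton that narrows down an unknown quantity using a comparison oracle; value_at_least is a hypothetical stub supplied by the caller.

def bisect_value(value_at_least, lo=0.0, hi=1.0, iters=60):
    """Locate an unknown number in [lo, hi] to within (hi - lo) / 2**iters,
    given an oracle value_at_least(t) answering 'is the value >= t?'."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if value_at_least(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Toy usage with a hidden number standing in for a game value:
hidden = 0.3219
assert abs(bisect_value(lambda t: hidden >= t) - hidden) < 1e-9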

Based on joint work with Michal Koucký, Niels Lauritzen, Peter Bro Miltersen, and Elias P. Tsigaridas, published at STOC’11.

Rasmus’s lecture: Complexity of Good Strategies in Stochastic Games

Abstract: The talk will attempt to characterize good strategies for some special cases of stochastic games. For instance, the talk will argue that there might always be a good strategy with a certain property for all games in a special case of stochastic games or that no good strategy exists that has some property for some game. Concretely,

1) for the stochastic game the Big Match, no good strategy (for lim inf) exists that depends only on how long the game has been played and on a finite amount of extra memory (when the extra memory is updated deterministically).

2) for the Big Match there is a good strategy that uses only a single coin flip per round and exponentially less space than previously known good strategies.

3) let x be the greatest reward in a stochastic game. The talk will next give a simple characterization of the states of value equal to x for which there exists either (a) an optimal strategy; (b) for each epsilon>0, a stationary epsilon-optimal strategy; or (c) for each epsilon>0, a finite-memory epsilon-optimal strategy (when the memory is updated deterministically). The characterization also gives the corresponding strategy.

4) the talk will then consider stochastic games where there exist epsilon-optimal stationary strategies for all epsilon>0. It will argue that the smallest positive probability in a stationary epsilon-optimal strategy must be at least doubly exponentially small for some sub-classes of stochastic games, while for other classes exponentially small probabilities suffice.

1) and 2) are based on “The Big Match in Small Space”, 3) is based on “The Value 1 Problem Under Finite-memory Strategies for Concurrent Mean-payoff Games”, and 4) is based on “Strategy Complexity of Concurrent Stochastic Games with Safety and Reachability Objectives” and “The complexity of ergodic mean-payoff games”. All papers can be found at http://Rasmus.Ibsen-Jensen.com

 



by Gil Kalai at January 16, 2017 06:30 AM UTC

I run an REU program (Research Experience for Undergraduates) and I would normally urge you to urge undergrads who would benefit to apply to it and present both this link: here and this flyer: here.

I just did that. But I want to talk about REU programs, not just mine. Here are a few points which can be construed as advice - though it's more questions and what I do.
which can be construed as advise- though its more questions and what I do.

1) Do flyers still have any effect? If there is a bulletin board in a dept that is known as the place to look for announcements of summer opps, then YES. Otherwise--- not sure. When I asked this question of a professor I had emailed the flyer to, she said that at her school they stick flyers in the bathroom stalls, where they are sure to get a captive audience.

2) Should you visit schools directly? I have done this; however, most of the applicants find out about it from the NSF REU website.

3) Should students pick projects ahead of time or in the first week? When students apply they list a  set of projects they want to work on (unranked but the Statement of Purpose can say which one(s)  they REALLY want)  so we can start on Day 1. Some of the mentors are in contact with students ahead of time. Other programs have them decide the first week. There are PROS and CONS to both.


4) How to do admissions? I do them myself since there are few enough of them (about 180 for 10 slots- though see next item) and since there are so many criteria to balance I'd rather not get into arguments with other committee members. I will sometimes (for example) say

``John, here are two people who want to work on your project. What do you think of them''

5) Of the 180 applicants about 50 do not have a statement of purpose. For me this is a deal-breaker. Either they were in the middle of applying and got another offer and took it- which is fine, but no reason for me to even look at the application, OR they have a hard time completing things and again, not worth my time. A prof once asked me “But what if they are really good?”-- there are plenty of really good applicants who do fill out the statement of purpose.

6) The main activity is research but we have some social activities as well.

7) How big should teams be? We try to avoid teams of 1. We usually have teams of 2 or 3.

8) What about students from your own school? My REU grant tends to urge them to go elsewhere since the NSF likes to have people from NON-research schools, and because I personally think it's better for broadening to go elsewhere. Other programs are set up differently.

9) Why do some transcripts not have the name of the school on them? Drives me nuts.

In the Summer of 2017 I will be running this program for the 5th time. Feel free to leave questions about how to run one, OR your own experiences, in the comments.


by GASARCH (noreply@blogger.com) at January 16, 2017 03:24 AM UTC

Authors: Mohsen Ghaffari
Download: PDF
Abstract: In a recent breakthrough, Paz and Schwartzman [SODA'17] presented a single-pass ($2+\epsilon$)-approximation algorithm for the maximum weight matching problem in the semi-streaming model. Their algorithm uses $O(n\log^2 n)$ bits of space, for any constant $\epsilon>0$.

In this note, we present a different analysis, for essentially the same algorithm, that improves the space complexity to the optimal bound of $O(n\log n)$ bits, while also providing a more intuitive explanation of the process.

at January 16, 2017 02:01 AM UTC