Theory of Computing Blog Aggregator

Authors: Gregory Gutin, M. S. Ramanujan, Felix Reidl, Magnus Wahlström
Download: PDF
Abstract: We study several problems related to graph modification problems under connectivity constraints from the perspective of parameterized complexity: (Weighted) Biconnectivity Deletion, where we are tasked with deleting $k$ edges while preserving biconnectivity in an undirected graph, Vertex-deletion Preserving Strong Connectivity, where we want to maintain strong connectivity of a digraph while deleting exactly $k$ vertices, and Path-contraction Preserving Strong Connectivity, in which the operation of path contraction on arcs is used instead. The parameterized tractability of this last problem was posed by Bang-Jensen and Yeo [DAM 2008] as an open question and we answer it here in the negative: both variants of preserving strong connectivity are $\sf W[1]$-hard. Preserving biconnectivity, on the other hand, turns out to be fixed-parameter tractable and we provide a $2^{O(k\log k)} n^{O(1)}$-time algorithm that solves Weighted Biconnectivity Deletion. Further, we show that the unweighted case even admits a randomized polynomial kernel. All our results provide further interesting data points for the systematic study of connectivity-preservation constraints in the parameterized setting.

at April 24, 2017 12:00 AM UTC

Authors: David G. Harris, Thomas Pensyl, Aravind Srinivasan, Khoa Trinh
Download: PDF
Abstract: We consider an issue of much current concern: could fairness, an issue that is already difficult to guarantee, worsen when algorithms run much of our lives? We consider this in the context of resource-allocation problems; we show that algorithms can guarantee certain types of fairness in a verifiable way. Our conceptual contribution is a simple approach to fairness in this context, which only requires that all users trust some public lottery. Our technical contributions are in ways to address the $k$-center and knapsack-center problems that arise in this context: we develop a novel dependent-rounding technique that, via the new ingredients of "slowing down" and additional randomization, guarantees stronger correlation properties than known before.

at April 24, 2017 12:00 AM UTC

Authors: Andrii Riazanov, Mikhail Vyalyi
Download: PDF
Abstract: The nonnegative and positive semidefinite (PSD-) ranks are closely connected to the nonnegative and positive semidefinite extension complexities of a polytope, which are the minimal dimensions of linear and SDP programs which represent this polytope. Though some exponential lower bounds on the nonnegative and PSD-ranks have recently been proved for the slack matrices of some particular polytopes, there are still no tight bounds for these quantities. We explore some existing bounds on the PSD-rank and prove that they cannot give exponential lower bounds on the extension complexity. Our approach consists in proving that the existing bounds are upper bounded by polynomials in the regular rank of the matrix, which is equal to the dimension of the polytope (up to an additive constant). As one of the implications, we also derive an upper bound on the mutual information of an arbitrary matrix of a joint distribution, based on its regular rank.

at April 24, 2017 12:00 AM UTC

Authors: Jingcheng Liu, Alistair Sinclair, Piyush Srivastava
Download: PDF
Abstract: We study the problem of approximating the partition function of the ferromagnetic Ising model in graphs and hypergraphs. Our first result is a deterministic approximation scheme (an FPTAS) for the partition function in bounded degree graphs that is valid over the entire range of parameters $\beta$ (the interaction) and $\lambda$ (the external field), except for the case $\vert{\lambda}\vert=1$ (the "zero-field" case). A randomized algorithm (FPRAS) for all graphs, and all $\beta,\lambda$, has long been known. Unlike most other deterministic approximation algorithms for problems in statistical physics and counting, our algorithm does not rely on the "decay of correlations" property. Rather, we exploit and extend machinery developed recently by Barvinok, and Patel and Regts, based on the location of the complex zeros of the partition function, which can be seen as an algorithmic realization of the classical Lee-Yang approach to phase transitions. Our approach extends to the more general setting of the Ising model on hypergraphs of bounded degree and edge size, where no previous algorithms (even randomized) were known for a wide range of parameters. In order to achieve this extension, we establish a tight version of the Lee-Yang theorem for the Ising model on hypergraphs, improving a classical result of Suzuki and Fisher.
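
To make the quantity being approximated concrete, here is a brute-force computation of an Ising partition function in Python. The parameterization (interaction $\beta$ weighting monochromatic edges, external field $\lambda$ weighting plus spins) is one common convention and an assumption on our part, not necessarily the paper's exact normalization; the point is that the naive sum ranges over $2^n$ spin assignments, which is precisely what an FPTAS must avoid.

    from itertools import product

    def ising_partition(n, edges, beta, lam):
        # Brute-force Z = sum over sigma in {-1,+1}^n of
        #   beta^(# monochromatic edges) * lam^(# plus spins).
        # Exponential time in n: illustrative only.
        Z = 0.0
        for sigma in product((-1, 1), repeat=n):
            mono = sum(1 for (u, v) in edges if sigma[u] == sigma[v])
            plus = sigma.count(1)
            Z += beta ** mono * lam ** plus
        return Z

    # Example: a triangle with beta = 2 (ferromagnetic) and lam = 0.5.
    print(ising_partition(3, [(0, 1), (1, 2), (0, 2)], 2.0, 0.5))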

at April 24, 2017 12:00 AM UTC

Authors: Dariusz Dereniowski, Wieslaw Kubiak
Download: PDF
Abstract: We study the shared processor scheduling problem with a single shared processor, where the unit-time saving (weight) obtained by processing a job on the shared processor depends on the job. A polynomial-time optimization algorithm has been given for the problem with equal weights in the literature. This paper extends that result by giving an $O(n \log n)$ optimization algorithm for a class of instances in which a non-decreasing order of jobs with respect to processing times provides a non-increasing order with respect to weights --- this class of instances generalizes the unweighted case of the problem. This algorithm also leads to a $\frac{1}{2}$-approximation algorithm for the general weighted problem. The complexity of the weighted problem remains open.

at April 24, 2017 12:00 AM UTC

Authors: Xi Chen, Rocco A. Servedio, Li-Yang Tan, Erik Waingarten, Jinyu Xie
Download: PDF
Abstract: We prove that any non-adaptive algorithm that tests whether an unknown Boolean function $f: \{0, 1\}^n\to \{0, 1\}$ is a $k$-junta or $\epsilon$-far from every $k$-junta must make $\widetilde{\Omega}(k^{3/2} / \epsilon)$ many queries for a wide range of parameters $k$ and $\epsilon$. Our result dramatically improves previous lower bounds from [BGSMdW13, STW15], and is essentially optimal given Blais's non-adaptive junta tester from [Blais08], which makes $\widetilde{O}(k^{3/2})/\epsilon$ queries. Combined with the adaptive tester of [Blais09] which makes $O(k\log k + k /\epsilon)$ queries, our result shows that adaptivity enables polynomial savings in query complexity for junta testing.

at April 24, 2017 12:00 AM UTC

Authors: Yi-Jun Chang, Seth Pettie
Download: PDF
Abstract: The celebrated Time Hierarchy Theorem for Turing machines states, informally, that more problems can be solved given more time. The extent to which a time hierarchy-type theorem holds in the distributed LOCAL model has been open for many years. It is consistent with previous results that all natural problems in the LOCAL model can be classified according to a small constant number of complexities, such as $O(1),O(\log^* n), O(\log n), 2^{O(\sqrt{\log n})}$, etc.

In this paper we establish the first time hierarchy theorem for the LOCAL model and prove that several gaps exist in the LOCAL time hierarchy.

1. We define an infinite set of simple coloring problems called Hierarchical $2\frac{1}{2}$-Coloring. A correctly colored graph can be confirmed by simply checking the neighborhood of each vertex, so this problem fits into the class of locally checkable labeling (LCL) problems. However, the complexity of the $k$-level Hierarchical $2\frac{1}{2}$-Coloring problem is $\Theta(n^{1/k})$, for $k\in\mathbb{Z}^+$. The upper and lower bounds hold for both general graphs and trees, and for both randomized and deterministic algorithms.

2. Consider any LCL problem on bounded degree trees. We prove an automatic-speedup theorem that states that any randomized $n^{o(1)}$-time algorithm solving the LCL can be transformed into a deterministic $O(\log n)$-time algorithm. Together with a previous result, this establishes that on trees, there are no natural deterministic complexities in the ranges $\omega(\log^* n)$---$o(\log n)$ or $\omega(\log n)$---$n^{o(1)}$.

3. We expose a gap in the randomized time hierarchy on general graphs. Any randomized algorithm that solves an LCL problem in sublogarithmic time can be sped up to run in $O(T_{LLL})$ time, which is the complexity of the distributed Lovasz local lemma problem, currently known to be $\Omega(\log\log n)$ and $O(\log n)$.

at April 24, 2017 12:00 AM UTC

Authors: Marcin Jurdziński, Ranko Lazić
Download: PDF
Abstract: The recent breakthrough paper by Calude et al. has given the first algorithm for solving parity games in quasi-polynomial time, where previously the best algorithms were mildly subexponential. We devise an alternative quasi-polynomial time algorithm based on progress measures, which allows us to reduce the space required from quasi-polynomial to nearly linear. Our key technical tools are a novel concept of ordered tree coding, and a succinct tree coding result that we prove using bounded adaptive multi-counters, both of which are interesting in their own right.

at April 24, 2017 12:00 AM UTC

In the recent paper [BR16], the authors show that, for any constant $0 < \varepsilon < 10^{-15}$, the communication complexity of $\varepsilon$-approximate Nash equilibria in $2$-player $n \times n$ games is $n^{\Omega(\varepsilon)}$, resolving the long-open problem of whether or not there exists a polylogarithmic communication protocol. In this paper we address an open question they pose regarding the communication complexity of $2$-player $\varepsilon$-approximate correlated equilibria. For our upper bounds, we provide a communication protocol that outputs an $\varepsilon$-approximate correlated equilibrium after exchanging $\tilde{O}(n \varepsilon^{-2})$ bits, saving over the naive protocol, which requires $O(n^2)$ bits. This is in sharp contrast to Nash equilibria, where for sufficiently small constant $\varepsilon$ no $o(n^2)$-communication protocol is known. In the $m$-player, $n$-action setting, our protocol can be extended to a $\tilde{O}(nm)$-bit protocol. For our lower bounds, we exhibit a simple two-player game that has a logarithmic information lower bound: for any constant $\varepsilon < \frac{1}{8}$, the two players need to communicate $\Omega(\log n)$ bits of information to compute any $\varepsilon$-correlated equilibrium in the game. For the $m$-player, $2$-action setting we show a lower bound of $\Omega(m)$ bits, which matches the upper bound we provide up to polylogarithmic terms and shows that the dependence on the number of players is unavoidable.

at April 23, 2017 04:09 PM UTC

A new probabilistic technique for establishing the existence of certain regular combinatorial structures has been introduced by Kuperberg, Lovett, and Peled (STOC 2012). Using this technique, it can be shown that under certain conditions, a randomly chosen structure has the required properties of a $t$-$(n,k,\lambda)$ combinatorial design with tiny, yet positive, probability. Herein, we strengthen both the method and the result of Kuperberg, Lovett, and Peled as follows. We modify the random choice and the analysis to show that, under the same conditions, not only does a $t$-$(n,k,\lambda)$ design exist but, in fact, with positive probability there exists a large set of such designs — that is, a partition of the set of $k$-subsets of $[n]$ into $t$-$(n,k,\lambda)$ designs. Specifically, using the probabilistic approach derived herein, we prove that for all sufficiently large $n$, large sets of $t$-$(n,k,\lambda)$ designs exist whenever $k > 9t$ and the necessary divisibility conditions are satisfied. This resolves the existence conjecture for large sets of designs for all $k > 9t$.

at April 23, 2017 07:01 AM UTC

We consider a robust analog of the planted clique problem. In this analog, a set $S$ of vertices is chosen and all edges in $S$ are included; then, edges between $S$ and the rest of the graph are included with probability $\frac{1}{2}$, while edges not touching $S$ are allowed to vary arbitrarily. For this semi-random model, we show that the information-theoretic threshold for recovery is $\tilde{\Theta}(\sqrt{n})$, in sharp contrast to the classical information-theoretic threshold of $\Theta(\log(n))$. This matches the conjectured computational threshold for the classical planted clique problem, and thus raises the intriguing possibility that, once we require robustness, there is no computational-statistical gap for planted clique.
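
The generative model is simple enough to write down directly. A minimal Python sketch of the sampling process described above (the function name and the representation of the adversary's choices are ours, purely for illustration):

    import random

    def semi_random_planted_clique(n, k, adversary_edges):
        # S: the planted set (chosen here at random for concreteness).
        # Edges inside S: all present. Edges between S and the rest:
        # present independently with probability 1/2. Edges with both
        # endpoints outside S: arbitrary, supplied by the adversary as
        # a set of pairs (u, v) with u < v.
        S = set(random.sample(range(n), k))
        edges = set()
        for u in range(n):
            for v in range(u + 1, n):
                if u in S and v in S:
                    edges.add((u, v))
                elif (u in S) != (v in S):
                    if random.random() < 0.5:
                        edges.add((u, v))
                elif (u, v) in adversary_edges:
                    edges.add((u, v))
        return S, edges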

at April 23, 2017 06:56 AM UTC


A week or so ago at our Theory Lunch we had the pleasure to listen to Harishchandra Ramadas (student of Thomas Rothvoss) who told us about their latest discrepancy algorithm. I think the algorithm is quite interesting as it combines ideas from gradient descent and multiplicative weights in a non-trivial (yet very simple) way. Below I reprove Spencer’s 6 deviations theorem with their machinery (in the actual paper Levy, Ramadas and Rothvoss do more than this).

First let me remind you of the setting (see also this previous blog post for some motivation on discrepancy and a bit more context; by the way it is funny to read the comments in that post after this): given v_1, \hdots, v_n \in \mathbb{S}^{n-1} one wants to find x \in \{-1,1\}^n (think of it as a “coloring” of the coordinates) such that \max_{i \in [n]} |x \cdot v_i| \leq C for some numerical constant C>0 (when v_i is a normalized vector of 1‘s and 0‘s the quantity |x \cdot v_i| represents the unbalancedness of the coloring in the set corresponding to v_i). Clearly it suffices to give a method to find x \in [-1,1]^n with at least half of its coordinates equal to -1 or 1 and such that \max_{i \in [n]} |x \cdot v_i| \leq C' for some numerical constant C'>0 (indeed one can then simply recurse on the coordinates not yet set to -1 or 1; this is the so-called “partial coloring” argument). Note also that one can drop the absolute value by taking v_i and -v_i (the number of constraints then becomes 2n but this is easy to deal with and we ignore it here for sake of simplicity).

The algorithm

Let x_0 = 0, w_0 = 1 \in \mathbb{R}^n. We run an iterative algorithm which keeps at every time step t \in \mathbb{N} a subspace U_t of valid update directions and then proceeds as follows. First find (using for instance a basis for U_t) z_t \in \mathbb{S}^{n-1} \bigcap U_t such that

(1)   \[\sum_{i=1}^n \frac{w_t(i)}{\|w_t\|_1} (v_i \cdot z_t)^2 \leq \frac{1}{\mathrm{dim}(U_t)}.\]

Then update x_{t+1}= x_t + \lambda_t z_t where \lambda_t \in [0,1] is maximal so that x_{t+1} remains in [-1,1]^n. Finally update the exponential weights by w_{t+1}(i) = w_t(i) \exp( v_i \cdot (x_{t+1} - x_t) ).

 

It remains to describe the subspace U_t. For this we introduce the set I_t \subset [n] containing the n/16 largest coordinates of w_t (the “inactive” coordinates) and the set F_t \subset [n] containing the coordinates of x_t equal to -1 or 1 (the “frozen” coordinates). The subspace U_t is now described as the set of points orthogonal to (i) x_t, (ii) e_j, j \in F_t, (iii) v_i, i \in I_t, (iv) \sum_{i=1}^n w_t(i) v_i. The intuition for (i) and (ii) is rather clear: for (i) one simply wants to ensure that the method keeps making progress towards the boundary of the cube (i.e., |x_{t+1}| > |x_t|) while for (ii) one wants to make sure that coordinates which are already “colored” (i.e., set to -1 or 1) are not updated. In particular (i) and (ii) together ensure that at each step either the norm squared of x_t increases by 1 (in particular \lambda_t=1) or one of the coordinates is fixed forever to -1 or 1. In particular this means that after at most 3n/2 iterations one will have a partial coloring (i.e., half of the coordinates set to -1 or 1, which was our objective). Property (iii) is about ensuring that we stop walking in the directions where we are not making good progress (there are many ways to ensure this and this precise form will make sense towards the end of the analysis). Property (iv) is closely related, and while it might be only a technical condition it can also be understood as ensuring that locally one is not increasing the softmax of the constraints: indeed (iv) exactly says that one should move orthogonally to the gradient of \log(\sum_{i=1}^n \exp(x \cdot v_i)).
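
Before moving to the analysis, here is a minimal numpy sketch of the walk (our illustration, not the authors' code; it picks z_t as the smallest eigenvector of the weighted quadratic form restricted to U_t, which gives (1) since the smallest eigenvalue is at most the trace divided by the dimension):

    import numpy as np

    def partial_coloring(V):
        # V: an (n, n) array whose rows are the unit vectors v_i; assumes n >= 16.
        # Returns x in [-1,1]^n; by the argument above, within 3n/2 steps at
        # least half of the coordinates are frozen at -1 or +1.
        n = V.shape[0]
        x, w = np.zeros(n), np.ones(n)
        for _ in range(3 * n // 2):
            frozen = np.where(np.abs(x) >= 1 - 1e-9)[0]   # (ii) colored coordinates
            if len(frozen) >= n // 2:
                break
            inactive = np.argsort(w)[-(n // 16):]         # (iii) top n/16 weights
            # constraints (i)-(iv) as rows of a matrix; U_t is its null space
            C = np.vstack([x[None, :], np.eye(n)[frozen],
                           V[inactive], (w @ V)[None, :]])
            s, Vt = np.linalg.svd(C)[1:]
            B = Vt[np.sum(s > 1e-9):]                     # orthonormal basis of U_t
            if B.shape[0] == 0:
                break
            # z_t achieving (1): smallest eigenvector of the weighted quadratic
            # form sum_i (w_t(i)/||w_t||_1) (v_i . z)^2 restricted to U_t
            M = (B @ V.T) @ np.diag(w / w.sum()) @ (V @ B.T)
            z = np.linalg.eigh(M)[1][:, 0] @ B
            # maximal lambda_t in [0,1] keeping x_t + lambda_t z_t inside the cube
            with np.errstate(divide='ignore', invalid='ignore'):
                caps = np.where(z > 1e-12, (1 - x) / z,
                                np.where(z < -1e-12, (-1 - x) / z, np.inf))
            lam = min(1.0, float(np.min(caps)))
            x = x + lam * z
            w = w * np.exp(V @ (lam * z))                 # exponential weight update
        return x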

The analysis

Let Z_t = \sum_{i=1}^n w_t(i). Note that since z_t is on the sphere and \lambda_t \in [0,1] one has that |v_i \cdot (x_{t+1} - x_t)| \leq 1. Thus using \exp(x) \leq 1 + x + x^2 for x \in [-1,1], as well as property (iv) (i.e., \sum_{i=1}^n w_t(i) v_i \cdot z_t = 0) and \lambda_t \in [0,1] one obtains:

    \[Z_{t+1} = \sum_{i=1}^n w_t(i) \exp(v_i \cdot (x_{t+1} - x_t)) \leq \sum_{i=1}^n w_t(i) (1 + (v_i \cdot z_t)^2) .\]

Observe now that the subspace U_t has dimension at least n/4 (say for n \geq 16) and thus by (1) and the above inequalities one gets:

    \[Z_{t+1} \leq (1+ 4/n) Z_t .\]

In particular for any t \leq 2n, Z_{t} \leq C n for some numerical constant C >0. It only remains to observe that this ensures w_{2n}(i) = O(1) for any i \in [n] (this concludes the proof since we already observed that at time 2 n at least half of the coordinates are colored). For this last implication we simply use property (iii). Indeed assume that some coordinate i satisfies at some time t \leq 2n, w_t(i) > c e for some c>0. Since each update increases the weights (multiplicatively) by at most e it means that there is a previous time (say s) where this weight was larger than c and yet it got updated, meaning that it was not in the top n/16 weights, and in particular one had Z_s \geq c n / 16 which contradicts Z_{s} \leq C n for c large enough (namely c > 16 C).
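
For the record, here is the constant chase in the last two steps (our arithmetic, with Z_0 = n):

    \[Z_t \leq (1+4/n)^t Z_0 \leq e^{4t/n} n \leq e^8 n \quad \text{for } t \leq 2n,\]

so one may take C = e^8, and the argument above caps every weight at w_{2n}(i) \leq 16 C e = 16 e^9. Since the updates telescope to w_t(i) = \exp(v_i \cdot x_t), this gives v_i \cdot x_{2n} \leq 9 + \log 16 = O(1), which is exactly the partial coloring guarantee we were after.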

by Sebastien Bubeck at April 22, 2017 08:59 PM UTC

Yao's garbled circuit construction is a central cryptographic tool with numerous applications. In this tutorial, we study garbled circuits from a foundational point of view under the framework of \emph{randomized encoding} (RE) of functions. We review old and new constructions of REs, present some lower bounds, and describe some applications. We also discuss new directions and open problems in the foundations of REs. This is a survey that appeared in a book of surveys in honor of Oded Goldreich's 60th birthday.

at April 22, 2017 01:08 AM UTC

A conference that attempts to draw lessons from the FCC incentive spectrum auction will be held on Friday May 12th, 2017 in Washington DC.

More information here.


by algorithmicgametheory at April 21, 2017 05:33 PM UTC

A one-year research position at the postdoctoral level (with a possible one-year extension) is available in the Algorithms research group (http://www.uib.no/rg/algo/), University of Bergen, Norway.

The successful candidate will work on “Multivariate Algorithms: New domains and paradigms” project funded by the Norwegian Research Council and led by Fedor Fomin. The objective of the project is t

Webpage: http://dmatheorynet.blogspot.no/2017/04/dmanet-postdoc-in-algorithms-bergen.html

Email: fomin@ii.uib.no

by theorycsjobs at April 21, 2017 01:36 PM UTC

I have been in Israel for the last couple of days attending an event in honor of Oded Goldreich’s 60th birthday.

Oded has touched countless lives, with his boundless dedication to mentoring, executed with a unique mix of tough love and good humor. He embodies a purity of vision in the pursuit of the “right” definitions, the “right” conceptual point of view and the “right” proofs in the areas of theoretical computer science that he has transformed with his work and his influence.

A turning point in my own work in theoretical computer science came when I found this paper online in the Spring of 1995. I was a second-year graduate student in Rome, and I was interested in working on PCP-based hardness of approximation, but this seemed like an impossible goal for me. Following the publication of ALMSS, there had been an avalanche of work between 1992 and 1995, mostly in the form of extended abstracts that were impossible to understand without an awareness of a context that was, at that point, purely an oral tradition. The aforementioned paper, instead, was a 100+ page monster that explained everything. Studying that paper gave me an entrance into the area.

Three years later, while I was a postdoc at MIT and Oded was there on sabbatical, he played a key role in the series of events that led me to prove that one can get extractors from pseudorandom generators, and it was he who explained to me that this was, in fact, what I had proved. (Initially, I thought my argument was just proving a much less consequential result.) For the most part, it was this result that got me a good job and that is paying my mortgage.

Like me, there are countless people who started to work in a certain area of theoretical computer science because of a course that Oded taught or a set of lecture notes that he wrote, and countless people whose work was made possible by Oded nudging, or usually shoving, them along the right path.

The last two days have felt a bit like going to a wedding, and not just because I saw friends that I do not get to see too often and because there was a lot to eat and to drink. A wedding is a celebration of the couple getting married, but it is also a public event in which friends and family, by affirming their bonds to the newlyweds, also affirm their bonds to each other.

I was deeply moved by the speeches given by Silvio and Shafi, and really everybody did a great job at telling Oded stories and bringing to life various aspects of his work and personality. But perhaps the most fittingly weird tribute was Benny Chor presenting the Chor-Goldreich paper (the one that introduced min-entropy as a measure of randomness for weak random sources, and the problem of 2-source extraction) using the original 1985 slides.


Speaking of public celebrations, there is less than a month left to register for STOC 2017, the “Theory Fest” that will take place in Montreal in June.


by luca at April 21, 2017 06:21 AM UTC


Theory Fest—Should You Go?

Boaz Barak and Michael Mitzenmacher are well known for many great results. They are currently working not on a theory paper, but on a joint “experiment” called Theory Fest.

Today Ken and I want to discuss their upcoming experiment and spur you to consider attending it.

There are many pros and some cons in attending the new Theory Fest this June 19-23. One pro is where it is being held—Montreal—and another is the great collection of papers that will appear at the STOC 2017 part of the Fest. But the main ‘pro’ is that Boaz and Mike plan on doing some special events to make the Fest more than just a usual conference on theory.

The main ‘con’ is that you need to register soon here, so do not forget to do that.

Possible New Activities

We humbly offer some suggestions to spice up the week:

{\bullet} A Bug-a-thon: Many conferences have hack-a-thons these days. A theory version could be a P=NP debugging contest. Prior to the Fest anyone claiming to have solved P vs NP must submit a paper along with a $100 fee—Canadian. At the Fest teams of “debuggers” would get the papers and have a fixed time—say three hours—to find a bug in as many papers as they can. The team that debugs the most claims wins the entrance fees.

Note that submissions can be “stealth”—you know your paper is wrong, but the bugs are very hard to find.

{\bullet} Present a Paper: People submit a deck for a ten-minute talk. Then each person is randomly assigned a deck and must give a talk based only on that deck. There will be an audience vote and the best presenter will win a trophy.

Note there are two theory issues. The random assignment must be random but fixed-point free—no one can get their own deck. Also since going last seems to give an unfair advantage, we suggest that each person gets the deck only ten minutes before their talk. Thus all presenters would have the same time to prepare for their talk.

{\bullet} Silent Auction For Co-authorship: We will set up a series of tables. On each table is a one page abstract of a paper. You get to bid as in a standard silent auction. The winner at each table becomes a co-author and pays their bid to STOC. The money could go to a student travel fund.

{\bullet} The A vs B Debate: Theory is divided into A and B, at least in many conferences. We will put together a blue ribbon panel and have them discuss: Is A more important than B? We will ask that the panel be as snippy as possible—a great evening idea while all drink some free beer.

{\bullet} Betting: We will have a variety of topics from P=NP to quantum computation where various bets can be made.

{\bullet} Cantal Complexity: The Fest will mark the 40th anniversary of Donald Knuth’s famous paper, “The Complexity of Songs.” Evening sessions at a pub will provide unprecedented opportunity for applied research in this core area. Ken’s research, which he began with Dexter Kozen and others at the ICALP 1982 musicfest, eventually led to this.

{\bullet} Lemmas For Sale: In an Ebay-like manner a lemma can be sold. We all have small insights that we will never publish, but they might be useful for others.

{\bullet} Zoo Excursion: This is not to the Montreal zoo—which is rather far—but to the Complexity Zoo which is housed elsewhere in Canada. Participants will take a virtual tour of all 535 classes. The prize for “collapsing” any two of them will be an instant STOC 2017 publication. In case of collapsing more than two, or actually finding a new separation of any pair of them, see under “Bug-a-thon” above.

{\bullet} Write It Up: This is a service-oriented activity. Many results have never been written up formally and submitted to journals. Often the reason is that the author(s) are busy with new research. This would be a list of such papers and an attempt to get students or others to write up the paper. This has actually happened many times already in an informal manner. So organizing it might be fun. We could use money to get people to sign up—or give a free registration to next year's conference—for example.

Open Problems

GLL plans on gavel-to-gavel coverage of the Fest: we hope to have helpers that will allow us to make at least one post per day about the Fest. Anyone interested in being a helper should contact us here.

This will be especially appreciated because Ken will be traveling to a different conference in a voivodeship that abuts an oblast and two voblasts.


by RJLipton+KWRegan at April 21, 2017 04:35 AM UTC

Authors: Jan Krajicek, Igor C. Oliveira
Download: PDF
Abstract: We investigate monotone circuits with local oracles (K., 2016), i.e., circuits containing additional inputs $y_i = y_i(\vec{x})$ that can perform unstructured computations on the input string $\vec{x}$. Let $\mu \in [0,1]$ be the locality of the circuit, a parameter that bounds the combined strength of the oracle functions $y_i(\vec{x})$, and $U_{n,k}, V_{n,k} \subseteq \{0,1\}^m$ be the classical sets of positive and negative inputs considered for $k$-clique in the approximation method (Razborov, 1985). Our results can be informally stated as follows.

1. For an appropriate extension of depth-$2$ monotone circuits with local oracles, we show that the size of the smallest circuits separating $U_{n,3}$ (triangles) and $V_{n,3}$ (complete bipartite graphs) undergoes two phase transitions according to $\mu$.

2. For $5 \leq k(n) \leq n^{1/4}$, arbitrary depth, and $\mu \leq 1/50$, we prove that the monotone circuit size complexity of separating the sets $U_{n,k}$ and $V_{n,k}$ is $n^{\Theta(\sqrt{k})}$, under a certain technical assumption on the local oracle gates.

The second result, which concerns monotone circuits with restricted oracles, extends and provides a matching upper bound for the exponential lower bounds on the monotone circuit size complexity of $k$-clique obtained by Alon and Boppana (1987).

at April 21, 2017 12:47 AM UTC

Authors: Clement Carbonnel, David A. Cohen, Martin C. Cooper, Stanislav Zivny
Download: PDF
Abstract: Singleton arc consistency is an important type of local consistency which has been recently shown to solve all constraint satisfaction problems (CSPs) over constraint languages of bounded width. We aim to characterise all classes of CSPs defined by a forbidden pattern that are solved by singleton arc consistency and closed under removing constraints. We identify five new patterns whose absence ensures solvability by singleton arc consistency, four of which are provably maximal and three of which generalise 2-SAT. Combined with simple counter-examples for other patterns, we make significant progress towards a complete classification.

at April 21, 2017 12:47 AM UTC

Authors: Josh Alman, Joshua R. Wang, Huacheng Yu
Download: PDF
Abstract: In this work, we introduce an online model for communication complexity. Analogous to how online algorithms receive their input piece-by-piece, our model presents one of the players, Bob, his input piece-by-piece, and has the players Alice and Bob cooperate to compute a result before it presents Bob with the next piece. This model has a closer and more natural correspondence to dynamic data structures than the classic communication models do, and hence presents a new perspective on data structures.

We first present a lower bound for the \emph{online set intersection} problem in the online communication model, demonstrating a general approach for proving online communication lower bounds. The online communication model prevents a batching trick that classic communication complexity allows, and yields a stronger lower bound. Then we apply the online communication model to data structure lower bounds by studying the Group Range Problem, a dynamic data structure problem. This problem admits an $O(\log n)$-time worst-case data structure. Using online communication complexity, we prove a tight cell-probe lower bound: spending $o(\log n)$ (even amortized) time per operation results in at best an $\exp(-\delta^2 n)$ probability of correctly answering a $(1/2+\delta)$-fraction of the $n$ queries.
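
For context on the upper bound side: the $O(\log n)$ worst-case structure for the Group Range Problem is the folklore segment-tree construction, in which every node stores the product, in left-to-right order, of the group elements below it. A minimal Python sketch (ours, for illustration; op is the group operation, e its identity, and the group need not be commutative):

    class GroupRange:
        # Maintain a length-n sequence of group elements under point updates
        # and left-to-right range products, O(log n) group operations each.
        def __init__(self, values, op, e):
            self.n, self.op, self.e = len(values), op, e
            self.tree = [e] * self.n + list(values)    # leaves at n .. 2n-1
            for i in range(self.n - 1, 0, -1):
                self.tree[i] = op(self.tree[2 * i], self.tree[2 * i + 1])

        def update(self, i, value):                    # set position i to value
            i += self.n
            self.tree[i] = value
            while i > 1:
                i //= 2
                self.tree[i] = self.op(self.tree[2 * i], self.tree[2 * i + 1])

        def query(self, lo, hi):                       # product over [lo, hi)
            left, right = self.e, self.e               # two accumulators, since
            lo, hi = lo + self.n, hi + self.n          # the group may not commute
            while lo < hi:
                if lo & 1:
                    left = self.op(left, self.tree[lo]); lo += 1
                if hi & 1:
                    hi -= 1; right = self.op(self.tree[hi], right)
                lo //= 2; hi //= 2
            return self.op(left, right)

For instance, with permutations under composition, GroupRange(perms, compose, identity) supports a point update or a left-to-right range composition in $O(\log n)$ group operations; the lower bound above shows this cannot be beaten, even on average and with amortization.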

at April 21, 2017 12:00 AM UTC

Authors: Aleksandr Maksimenko
Download: PDF
Abstract: Let $BQP(n)$ be a boolean quadric polytope, $LOP(m)$ be a linear ordering polytope. It is shown that $BQP(n)$ is linearly isomorphic to a face of $LOP(2n)$.

at April 21, 2017 12:40 AM UTC

Authors: Pawel Gawrychowski, Patrick K. Nicholson
Download: PDF
Abstract: We revisit the range $\tau$-majority problem, which asks us to preprocess an array $A[1..n]$ for a fixed value of $\tau \in (0,1/2]$, such that for any query range $[i,j]$ we can return a position in $A$ of each distinct $\tau$-majority element. A $\tau$-majority element is one that has relative frequency at least $\tau$ in the range $[i,j]$: i.e., frequency at least $\tau (j-i+1)$. Belazzougui et al. [WADS 2013] presented a data structure that can answer such queries in $O(1/\tau)$ time, which is optimal, but the space can be as much as $\Theta(n \lg n)$ bits. Recently, Navarro and Thankachan [Algorithmica 2016] showed that this problem could be solved using an $O(n \lg (1/\tau))$ bit encoding, which is optimal in terms of space, but has suboptimal query time. In this paper, we close this gap and present a data structure that occupies $O(n \lg (1/\tau))$ bits of space, and has $O(1/\tau)$ query time. We also show that this space bound is optimal, even for the much weaker query in which we must decide whether the query range contains at least one $\tau$-majority element.
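
To pin down the query semantics, here is a brute-force reference implementation in Python (ours, and emphatically not the paper's data structure: it spends $O(j-i)$ time per query, whereas the whole point above is $O(1/\tau)$ query time from an $O(n \lg(1/\tau))$-bit encoding):

    def tau_majorities(A, i, j, tau):
        # Positions are 1-indexed and the range [i, j] is inclusive,
        # matching the problem statement above.
        window = A[i - 1:j]
        need = tau * (j - i + 1)
        counts = {}
        for x in window:
            counts[x] = counts.get(x, 0) + 1
        out = {}
        for offset, x in enumerate(window):
            if counts[x] >= need and x not in out:
                out[x] = i + offset        # report one position per majority
        return sorted(out.values())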

at April 21, 2017 12:50 AM UTC

Authors: Daniel Saunders
Download: PDF
Abstract: This paper serves as a review and discussion of the recent works on memcomputing. In particular, the $\textit{universal memcomputing machine}$ (UMM) and the $\textit{digital memcomputing machine}$ (DMM) are discussed. We review the memcomputing concept in the dynamical systems framework and assess the algorithms offered for computing $NP$ problems in the UMM and DMM paradigms. We argue that the UMM is a physically implausible machine, and that the DMM model, as described by numerical simulations, is no more powerful than Turing-complete computation. We claim that the evidence for the resolution of $P$ vs. $NP$ is therefore inconclusive, and conclude that the memcomputing machine paradigm constitutes an energy efficient, special-purpose class of models of dynamical systems computation.

at April 21, 2017 12:40 AM UTC

Authors: Venkatesan Guruswami, Chaoping Xing, Chen Yuan
Download: PDF
Abstract: Subspace designs are a (large) collection of high-dimensional subspaces $\{H_i\}$ of $\mathbb{F}_q^m$ such that for any low-dimensional subspace $W$, only a small number of subspaces from the collection have non-trivial intersection with $W$; more precisely, the sum of dimensions of $W \cap H_i$ is at most some parameter $L$. The notion was put forth by Guruswami and Xing (STOC'13) with applications to list decoding variants of Reed-Solomon and algebraic-geometric codes, and later also used for explicit rank-metric codes with optimal list decoding radius.

Guruswami and Kopparty (FOCS'13, Combinatorica'16) gave an explicit construction of subspace designs with near-optimal parameters. This construction was based on polynomials and has close connections to folded Reed-Solomon codes, and required large field size (specifically $q \ge m$). Forbes and Guruswami (RANDOM'15) used this construction to give explicit constant degree "dimension expanders" over large fields, and noted that subspace designs are a powerful tool in linear-algebraic pseudorandomness.

Here, we construct subspace designs over any field, at the expense of a modest worsening of the bound $L$ on total intersection dimension. Our approach is based on a (non-trivial) extension of the polynomial-based construction to algebraic function fields, and instantiating the approach with cyclotomic function fields. Plugging our new subspace designs into the construction of Forbes and Guruswami yields dimension expanders over $\mathbb{F}^n$ for any field $\mathbb{F}$, with logarithmic degree and expansion guarantee for subspaces of dimension $\Omega(n/(\log \log n))$.

at April 21, 2017 12:40 AM UTC

Authors: Tamal K. Dey, Alfred Rossi, Anastasios Sidiropoulos
Download: PDF
Abstract: We study the problem of clustering sequences of unlabeled point sets taken from a common metric space. Such scenarios arise naturally in applications where a system or process is observed in distinct time intervals, such as biological surveys and contagious disease surveillance. In this more general setting, existing algorithms for classical (i.e., static) clustering problems are not applicable anymore.

We propose a set of optimization problems which we collectively refer to as 'temporal clustering'. The quality of a solution to a temporal clustering instance can be quantified using three parameters: the number of clusters $k$, the spatial clustering cost $r$, and the maximum cluster displacement $\delta$ between consecutive time steps. We consider spatial clustering costs which generalize the well-studied $k$-center, discrete $k$-median, and discrete $k$-means objectives of classical clustering problems. We develop new algorithms that achieve trade-offs between the three objectives $k$, $r$, and $\delta$. Our upper bounds are complemented by inapproximability results.

at April 21, 2017 12:48 AM UTC

Authors: Alexandr Kazda, Matt Valeriote
Download: PDF
Abstract: In this paper we investigate the computational complexity of deciding if a given finite algebraic structure satisfies a fixed (strong) Maltsev condition $\Sigma$. Our goal in this paper is to show that $\Sigma$-testing can be accomplished in polynomial time when the algebras tested are idempotent and the Maltsev condition $\Sigma$ can be described using paths. Examples of such path conditions are having a Maltsev term, having a majority operation, and having a chain of J\'onsson (or Gumm) terms of fixed length.

at April 21, 2017 12:47 AM UTC

Authors: J. F. Peters
Download: PDF
Abstract: This article introduces a theory of proximal nerve complexes and nerve spokes, restricted to the triangulation of finite regions in the Euclidean plane. A nerve complex is a collection of filled triangles with a common vertex, covering a finite region of the plane. Structures called $k$-spokes, $k\geq 1$, are a natural extension of nerve complexes. A $k$-spoke is the union of a collection of filled triangles that pairwise either have a common edge or a common vertex. A consideration of the closeness of nerve complexes leads to a proximal view of simplicial complexes. A practical application of proximal nerve complexes is given, briefly, in terms of object shape geometry in digital images.

at April 21, 2017 12:51 AM UTC

Authors: Piotr Wygocki
Download: PDF
Abstract: In this paper, we examine the hash functions expressed as scalar products, i.e., $f(x)=<v,x>$, for some bounded random vector $v$. Such hash functions have numerous applications, but often there is a need to optimize the choice of the distribution of $v$. In the present work, we focus on so-called anti-concentration bounds, i.e. the upper bounds of $\mathbb{P}\left[|<v,x>| < \alpha \right]$. In many applications, $v$ is a vector of independent random variables with standard normal distribution. In such case, the distribution of $<v,x>$ is also normal and it is easy to approximate $\mathbb{P}\left[|<v,x>| < \alpha \right]$. Here, we consider two bounded distributions in the context of the anti-concentration bounds. Particularly, we analyze $v$ being a random vector from the unit ball in $l_{\infty}$ and $v$ being a random vector from the unit sphere in $l_{2}$. We show optimal up to a constant anti-concentration measures for functions $f(x)=<v,x>$.

As a consequence of our research, we obtain new best results for \newline \textit{$c$-approximate nearest neighbors without false negatives} for $l_p$ in high dimensional space for all $p\in[1,\infty]$, for $c=\Omega(\max\{\sqrt{d},d^{1/p}\})$. These results improve over those presented in [16]. Finally, our paper reports progress on answering the open problem by Pagh~[17], who considered the nearest neighbor search without false negatives for the Hamming distance.

at April 21, 2017 12:51 AM UTC

Authors: Zhao An, Qilong Feng, Iyad Kanj, Ge Xia
Download: PDF
Abstract: Given a tree $T$ on $n$ vertices, and $k, b, s_1, \ldots, s_b \in N$, the Tree Partitioning problem asks if at most $k$ edges can be removed from $T$ so that the resulting components can be grouped into $b$ groups such that the number of vertices in group $i$ is $s_i$, for $i =1, \ldots, b$. The case when $s_1=\cdots =s_b =n/b$, referred to as the Balanced Tree Partitioning problem, was shown to be NP-complete for trees of maximum degree at most 5, and the complexity of the problem for trees of maximum degree 4 and 3 was posed as an open question. The parameterized complexity of Balanced Tree Partitioning was also posed as an open question in another work.

In this paper, we answer both open questions negatively. We show that Balanced Tree Partitioning (and hence, Tree Partitioning) is NP-complete for trees of maximum degree 3, thus closing the door on the complexity of Balanced Tree Partitioning, as the simple case when $T$ is a path is in P. In terms of the parameterized complexity of the problems, we show that both Balanced Tree Partitioning and Tree Partitioning are $W[1]$-complete. Finally, using a compact representation of the solution space for an instance of the problem, we present a dynamic programming algorithm for Tree Partitioning (and hence, for Balanced Tree Partitioning) that runs in subexponential time $2^{O(\sqrt{n})}$, adding a natural problem to the list of problems that can be solved in subexponential time.

at April 21, 2017 12:46 AM UTC

On Wed April 19 I was at the Harry Lewis 70th birthday celebration! I will blog on that later.

Harry Lewis was my thesis adviser. Odd to use the past tense- I DID finish my thesis with him and so he IS my adviser? Anyway, I will do a blog about the celebration next week.

This week I ponder- what was different then and now (I got my PhD in 1985).

False predictions that I made in 1985:

1) CS depts all have different views of what a CS major should know. By the year 2017 they will have figured out EVERY CS MAJOR SHOULD KNOW XXX and I will still write questions for the CS GRE. DID NOT HAPPEN. And a MINOR source of income for me has been cut off.

2) CS will be about 45% or more female. After all, the old guard is dying, it's a new field without a tradition of sexism (this may have been false even then). Actually, the fraction of women in CS has DECLINED since 1985. I'm still surprised since people in computing tend to be progressive. One could do several blog posts on this, but lacking the expertise I won't. (Gee bill- since when has lacking expertise stopped you before :-)

3) There will be some progress on P vs NP. Maybe an n^2 lower bound on SAT. Saying we've made NO progress is perhaps pessimistic, but we haven't made much.

4) In 2017, when Jet Blue emails me `CLICK HERE TO PRINT YOUR BOARDING PASS' the previous night, it will always work, and if it doesn't then I can call them and after 9 minutes on hold (not too bad) be able to fix the problem. They were not able to, though at the airport they fixed it and got me onto the plane fast as compensation.

OTHER CHANGES

1) Theory was more centralized. STOC and FOCS were the only prestige conferences, and everyone went to them.

2) A grad student could get a PhD and only have 2 papers published and get a Tenure Track Job.

3) One could learn all that was known in complexity theory in about two years.

4) You didn't have to do ugrad research to get into grad school (I don't think you HAVE TO now either, but many more do it so I PREDICT in the future you'll have to. Though my other predictions were not correct so .... there's that)

5) Complexity was more based in Logic than Combinatorics.

6) Complexity theory was easier! Gee, when did it get so hard and use so much hard math!

7) It seemed feasible that P vs NP would be solved within twenty years. I've heard it said that the Graph Minor Theorem was when P lost its innocence- there were now problems in P that used VERY HARD math--- techniques that were hard to pin down and hence hard to show would not work.

8) The number of complexity classes was reasonable. (I don't count Sigma_i as an infinite number of classes)

9) Grad students were just beginning to NOT learn the Blum Speed Up Theorem. It would take a while before they began to NOT learn finite injury priority arguments in recursion theory. OH- speaking of which...

10) Computability theory was called recursion theory.

11) Some schools had this odd idea that in FRESHMAN programming one should teach proofs of program correctness.

12) Some schools (SUNY Stony Brook and Harvard were among them) did not have a discrete math course. Hence the course in automata theory spent some of its time teaching how to prove things. (Both schools now have such a course. For Maryland I don't recall- either it didn't have one and I invented it OR it did have one and I revamped it.)

13) No Web. You had to go to a library to copy papers on a copier (Ask your grandparents what a copier is)

14) Copying cost far less than printing.

15) Someone who looked good on paper for MATH but had no real CS background could get into Harvard's Applied Science department for grad school and get a degree in ... speaking of which

16) In 1980 Harvard did not have a CS dept. So my Masters degree is formally in Applied Math, though I don't recall solving partial diff equations or other things that one associates with applied math. Sometime when I was there CS became officially something so I got my PhD in CS. (My students are surprised to hear this-- they think I got my PhD in Math.)

17) Harry Lewis had a moustache and smoked a pipe.  He has shaved off one and gave up the other.

SO, what to make of this list? ONE THING- I DO NOT `yearn for the good old days.' That was then, this is now. I am GLAD about everything on the list EXCEPT two areas where NOT ENOUGH change has happened- (a) I wish there was more diversity in CS, and (b) I wish Jet Blue had better software for boarding passes.



by GASARCH (noreply@blogger.com) at April 20, 2017 10:28 PM UTC


We survey the computational foundations for public-key cryptography. We discuss the computational assumptions that have been used as bases for public-key encryption schemes, and the types of evidence we have for the veracity of these assumptions. This is a survey that appeared in a book of surveys in honor of Oded Goldreich's 60th birthday.

at April 20, 2017 03:33 PM UTC


The next TCS+ talk will take place next Wednesday, April 26th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Santosh Vempala (Georgia Tech) will present joint work with Yin Tat Lee to appear in STOC’17, “Sampling Polytopes: From Euclid to Riemann”  (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

We introduce the Geodesic Walk for sampling Riemannian manifolds and apply it to the problem of generating uniform random points from the interior of a convex polytope in n-dimensional Euclidean space. The walk is a simulation of a stochastic differential equation defined using a convex barrier function; each step is the solution of an ordinary differential equation. It improves the time complexity of sampling polytopes and is the first walk to achieve sub-quadratic complexity. Most of the talk will be spent introducing relevant aspects of manifolds, stochastic processes, efficient solution of differential equations, and how this walk overcomes the bottlenecks of random walks in Euclidean space. No background in string theory (or Riemannian geometry) will be assumed.

Joint work with Yin Tat Lee.


by plustcs at April 20, 2017 04:02 AM UTC

Authors: Melika Abolhasani, Soheil Ehsani, Hosein Esfandiari, MohammadTaghi Hajiaghayi, Robert Kleinberg, Brendan Lucier
Download: PDF
Abstract: Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of $1-\frac{1}{e}$ on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is $\frac{1}{1+1/e} \approx 0.731$. This conjecture had remained open for over 30 years prior to this paper. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of $\frac{1}{1+1/e}$, this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-iid distributions and discuss their applications in mechanism design.
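
To see what a threshold rule looks like in this setting, here is a toy Monte-Carlo experiment in Python. Everything in it (the uniform distribution, the single threshold, the constants) is our illustrative choice and not the paper's algorithm, whose 0.738 guarantee rests on a more careful threshold analysis:

    import random

    def threshold_rule(values, tau):
        # Accept the first value meeting the threshold; 0.0 if none does.
        for v in values:
            if v >= tau:
                return v
        return 0.0

    def simulate(n=50, trials=200_000):
        # iid Uniform[0,1] draws; threshold set so each draw is accepted
        # with probability 1/n. Compares the rule to the prophet (the max).
        tau = 1.0 - 1.0 / n
        alg = prophet = 0.0
        for _ in range(trials):
            xs = [random.random() for _ in range(n)]
            alg += threshold_rule(xs, tau)
            prophet += max(xs)
        print("E[ALG]/E[prophet] ~ %.3f" % (alg / prophet))

    simulate()   # prints roughly 0.64, in the 1 - 1/e ballpark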

at April 20, 2017 12:45 AM UTC

Authors: Sina Dehghani, Soheil Ehsani, MohammadTaghi Hajiaghayi, Vahid Liaghat, Harald Racke, Saeed Seddighin
Download: PDF
Abstract: We design the first online algorithm with poly-logarithmic competitive ratio for the edge-weighted degree-bounded Steiner forest (EW-DB-SF) problem and its generalized variant. We obtain our result by demonstrating a new generic approach for solving mixed packing/covering integer programs in the online paradigm. In EW-DB-SF we are given an edge-weighted graph with a degree bound for every vertex. Given a root vertex in advance, we receive a sequence of terminal vertices in an online manner. Upon the arrival of a terminal we need to augment our solution subgraph to connect the new terminal to the root. The goal is to minimize the total weight of the solution while respecting the degree bounds on the vertices. In the offline setting, edge-weighted degree-bounded Steiner tree (EW-DB-ST) and its many variations have been extensively studied since the early eighties. Unfortunately, the recent advancements in the online network design problems are inherently difficult to adapt for degree-bounded problems. In contrast, in this paper we obtain our result by using structural properties of the optimal solution, and reducing the EW-DB-SF problem to an exponential-size mixed packing/covering integer program in which every variable appears only once in covering constraints. We then design a generic integral algorithm for solving this restricted family of IPs. We demonstrate a new technique for solving mixed packing/covering integer programs. Define the covering frequency k of a program as the maximum number of covering constraints in which a variable can participate. Let m denote the number of packing constraints. We design an online deterministic integral algorithm with competitive ratio of O(k log m) for the mixed packing/covering integer programs. We believe this technique can be used as an interesting alternative to the standard primal-dual techniques in solving online problems.

at April 20, 2017 12:41 AM UTC

Authors: Torsten Gross, Nils Blüthgen
Download: PDF
Abstract: A sum where each of the $N$ summands can be independently chosen from two choices yields $2^N$ possible summation outcomes. There is an $\mathcal{O}(K^2)$-algorithm that finds the $K$ smallest/largest of these sums by evading the enumeration of all sums.
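
As a point of comparison (not necessarily the paper's $\mathcal{O}(K^2)$ procedure), the classical way to evade full enumeration reduces the task to generating the $K$ smallest subset sums of the non-negative differences with a heap, in $O(K \log K)$ time. A Python sketch:

    import heapq

    def k_smallest_two_choice_sums(choices, K):
        # choices: list of (a_i, b_i) pairs. Start from the sum of the
        # smaller element of every pair; each of the 2^N outcomes is that
        # base plus a subset of the sorted differences d_1 <= ... <= d_N.
        base = sum(min(a, b) for a, b in choices)
        deltas = sorted(abs(a - b) for a, b in choices)
        out = [base]
        if deltas and K > 1:
            heap = [(base + deltas[0], 0)]     # (sum, index of last delta used)
            while heap and len(out) < K:
                s, i = heapq.heappop(heap)     # sums pop in increasing order
                out.append(s)
                if i + 1 < len(deltas):
                    # extend the subset by delta i+1, or swap delta i for i+1;
                    # every nonempty subset is generated exactly once this way
                    heapq.heappush(heap, (s + deltas[i + 1], i + 1))
                    heapq.heappush(heap, (s - deltas[i] + deltas[i + 1], i + 1))
        return out[:K]

The $K$ largest sums follow by symmetry: start from the larger choice in every pair and subtract differences.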

at April 20, 2017 12:53 AM UTC

Authors: Andreas Poyias, Simon J. Puglisi, Rajeev Raman
Download: PDF
Abstract: We consider the problem of implementing a space-efficient dynamic trie, with an emphasis on good practical performance. For a trie with $n$ nodes with an alphabet of size $\sigma$, the information-theoretic lower bound is $n \log \sigma + O(n)$ bits. The Bonsai data structure is a compact trie proposed by Darragh et al. (Softw., Pract. Exper. 23(3), 1993, p. 277-291). Its disadvantages include the user having to specify an upper bound $M$ on the trie size in advance (which cannot be changed easily after initialization), a space usage of $M \log \sigma + O(M \log \log M)$ (which is asymptotically non-optimal for smaller $\sigma$ or if $n \ll M$) and a lack of support for deletions. It supports traversal and update operations in $O(1/\epsilon)$ expected time (based on assumptions about the behaviour of hash functions), where $\epsilon = (M-n)/M$, and has excellent speed performance in practice. We propose an alternative, m-Bonsai, that addresses the above problems, obtaining a trie that uses $(1+\beta) n (\log \sigma + O(1))$ bits in expectation, and supports traversal and update operations in $O(1/\beta)$ expected time and $O(1/\beta^2)$ amortized expected time, for any user-specified parameter $\beta > 0$ (again based on assumptions about the behaviour of hash functions). We give an implementation of m-Bonsai which uses considerably less memory and is slightly faster than the original Bonsai.

at April 20, 2017 12:51 AM UTC