# Theory of Computing Blog Aggregator

### Path-contractions, edge deletions and connectivity preservation

Authors: Gregory Gutin, M. S. Ramanujan, Felix Reidl, Magnus Wahlström
Abstract: We study several graph modification problems under connectivity constraints from the perspective of parameterized complexity: {\sc (Weighted) Biconnectivity Deletion}, where we are tasked with deleting~$k$ edges while preserving biconnectivity in an undirected graph, {\sc Vertex-deletion Preserving Strong Connectivity}, where we want to maintain strong connectivity of a digraph while deleting exactly~$k$ vertices, and {\sc Path-contraction Preserving Strong Connectivity}, in which the operation of path contraction on arcs is used instead. The parameterized tractability of this last problem was posed by Bang-Jensen and Yeo [DAM 2008] as an open question and we answer it here in the negative: both variants of preserving strong connectivity are $\sf W[1]$-hard. Preserving biconnectivity, on the other hand, turns out to be fixed-parameter tractable and we provide a $2^{O(k\log k)} n^{O(1)}$-time algorithm that solves {\sc Weighted Biconnectivity Deletion}. Further, we show that the unweighted case even admits a randomized polynomial kernel. All our results provide further interesting data points for the systematic study of connectivity-preservation constraints in the parameterized setting.

### Fairness in Resource Allocation and Slowed-down Dependent Rounding

Authors: David G. Harris, Thomas Pensyl, Aravind Srinivasan, Khoa Trinh
Abstract: We consider an issue of much current concern: could fairness, an issue that is already difficult to guarantee, worsen when algorithms run much of our lives? We consider this in the context of resource-allocation problems; we show that algorithms can guarantee certain types of fairness in a verifiable way. Our conceptual contribution is a simple approach to fairness in this context, which only requires that all users trust some public lottery. Our technical contributions are in ways to address the $k$-center and knapsack-center problems that arise in this context: we develop a novel dependent-rounding technique that, via the new ingredients of "slowing down" and additional randomization, guarantees stronger correlation properties than known before.

### Exploring the bounds on the positive semidefinite rank

Authors: Andrii Riazanov, Mikhail Vyalyi
Abstract: The nonnegative and positive semidefinite (PSD) ranks are closely connected to the nonnegative and positive semidefinite extension complexities of a polytope, which are the minimal dimensions of linear and SDP programs which represent this polytope. Though some exponential lower bounds on the nonnegative and PSD ranks have recently been proved for the slack matrices of some particular polytopes, there are still no tight bounds for these quantities. We explore some existing bounds on the PSD-rank and prove that they cannot give exponential lower bounds on the extension complexity. Our approach consists in proving that the existing bounds are upper bounded by polynomials in the regular rank of the matrix, which is equal to the dimension of the polytope (up to an additive constant). As one of the implications, we also retrieve an upper bound on the mutual information of an arbitrary matrix of a joint distribution, based on its regular rank.
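For reference, the two ranks in question can be stated as follows (my restatement of the standard definitions; the paper's conventions may differ in inessential ways). For an entrywise nonnegative matrix $M \in \mathbb{R}_{\geq 0}^{p \times q}$:

$$\operatorname{rank}_{+}(M) \;=\; \min\Big\{ r \;:\; M = \textstyle\sum_{\ell=1}^{r} u_\ell v_\ell^{\mathsf{T}} \text{ with } u_\ell \in \mathbb{R}^{p}_{\geq 0},\ v_\ell \in \mathbb{R}^{q}_{\geq 0} \Big\},$$

$$\operatorname{rank}_{\mathrm{psd}}(M) \;=\; \min\big\{ r \;:\; M_{ij} = \operatorname{Tr}(A_i B_j) \text{ for some } r \times r \text{ PSD matrices } A_1,\dots,A_p,\ B_1,\dots,B_q \big\}.$$

By the factorization theorem of Yannakakis and its PSD analogue, the linear and semidefinite extension complexities of a polytope are exactly these ranks of its slack matrix, which is the connection the abstract alludes to.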

### The Ising Partition Function: Zeros and Deterministic Approximation

Authors: Jingcheng Liu, Alistair Sinclair, Piyush Srivastava
Abstract: We study the problem of approximating the partition function of the ferromagnetic Ising model in graphs and hypergraphs. Our first result is a deterministic approximation scheme (an FPTAS) for the partition function in bounded degree graphs that is valid over the entire range of parameters $\beta$ (the interaction) and $\lambda$ (the external field), except for the case $\vert{\lambda}\vert=1$ (the "zero-field" case). A randomized algorithm (FPRAS) for all graphs, and all $\beta,\lambda$, has long been known. Unlike most other deterministic approximation algorithms for problems in statistical physics and counting, our algorithm does not rely on the "decay of correlations" property. Rather, we exploit and extend machinery developed recently by Barvinok, and Patel and Regts, based on the location of the complex zeros of the partition function, which can be seen as an algorithmic realization of the classical Lee-Yang approach to phase transitions. Our approach extends to the more general setting of the Ising model on hypergraphs of bounded degree and edge size, where no previous algorithms (even randomized) were known for a wide range of parameters. In order to achieve this extension, we establish a tight version of the Lee-Yang theorem for the Ising model on hypergraphs, improving a classical result of Suzuki and Fisher.
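For concreteness, one common parameterization of this partition function is the following (the exact conventions here are my assumption; the paper may normalize differently):

$$Z_G(\beta,\lambda) \;=\; \sum_{\sigma : V \to \{+,-\}} \beta^{\,|\{\{u,v\} \in E \,:\, \sigma_u = \sigma_v\}|}\; \lambda^{\,|\{v \in V \,:\, \sigma_v = +\}|},$$

with $\beta > 1$ in the ferromagnetic regime and $\lambda = 1$ corresponding to zero external field, matching the excluded case $\vert\lambda\vert = 1$ mentioned above.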

### Shared processor scheduling

Authors: Dariusz Dereniowski, Wieslaw Kubiak
Abstract: We study the shared processor scheduling problem with a single shared processor, where the unit-time saving (weight) obtained by processing a job on the shared processor depends on the job. A polynomial-time optimization algorithm has been given in the literature for the case of equal weights. This paper extends that result by showing an $O(n \log n)$ optimization algorithm for a class of instances in which sorting the jobs in non-decreasing order of processing times yields a non-increasing order of weights; this class of instances generalizes the unweighted case of the problem. The algorithm also leads to a $\frac{1}{2}$-approximation algorithm for the general weighted problem. The complexity of the weighted problem remains open.

### Settling the query complexity of non-adaptive junta testing

Authors: Xi Chen, Rocco A. Servedio, Li-Yang Tan, Erik Waingarten, Jinyu Xie
Abstract: We prove that any non-adaptive algorithm that tests whether an unknown Boolean function $f: \{0, 1\}^n\to \{0, 1\}$ is a $k$-junta or $\epsilon$-far from every $k$-junta must make $\widetilde{\Omega}(k^{3/2} / \epsilon)$ many queries for a wide range of parameters $k$ and $\epsilon$. Our result dramatically improves previous lower bounds from [BGSMdW13, STW15], and is essentially optimal given Blais's non-adaptive junta tester from [Blais08], which makes $\widetilde{O}(k^{3/2})/\epsilon$ queries. Combined with the adaptive tester of [Blais09] which makes $O(k\log k + k /\epsilon)$ queries, our result shows that adaptivity enables polynomial savings in query complexity for junta testing.

### A Time Hierarchy Theorem for the LOCAL Model

Authors: Yi-Jun Chang, Seth Pettie
Abstract: The celebrated Time Hierarchy Theorem for Turing machines states, informally, that more problems can be solved given more time. The extent to which a time hierarchy-type theorem holds in the distributed LOCAL model has been open for many years. It is consistent with previous results that all natural problems in the LOCAL model can be classified according to a small constant number of complexities, such as $O(1),O(\log^* n), O(\log n), 2^{O(\sqrt{\log n})}$, etc.

In this paper we establish the first time hierarchy theorem for the LOCAL model and prove that several gaps exist in the LOCAL time hierarchy.

1. We define an infinite set of simple coloring problems called Hierarchical $2\frac{1}{2}$-Coloring. A correctly colored graph can be confirmed by simply checking the neighborhood of each vertex, so this problem fits into the class of locally checkable labeling (LCL) problems. However, the complexity of the $k$-level Hierarchical $2\frac{1}{2}$-Coloring problem is $\Theta(n^{1/k})$, for $k\in\mathbb{Z}^+$. The upper and lower bounds hold for both general graphs and trees, and for both randomized and deterministic algorithms.

2. Consider any LCL problem on bounded degree trees. We prove an automatic-speedup theorem that states that any randomized $n^{o(1)}$-time algorithm solving the LCL can be transformed into a deterministic $O(\log n)$-time algorithm. Together with a previous result, this establishes that on trees, there are no natural deterministic complexities in the range between $\omega(\log^* n)$ and $o(\log n)$, or between $\omega(\log n)$ and $n^{o(1)}$.

3. We expose a gap in the randomized time hierarchy on general graphs. Any randomized algorithm that solves an LCL problem in sublogarithmic time can be sped up to run in $O(T_{LLL})$ time, which is the complexity of the distributed Lovász local lemma problem, currently known to be $\Omega(\log\log n)$ and $O(\log n)$.

### Succinct progress measures for solving parity games

Authors: Marcin Jurdziński, Ranko Lazić
Abstract: The recent breakthrough paper by Calude et al. has given the first algorithm for solving parity games in quasi-polynomial time, where previously the best algorithms were mildly subexponential. We devise an alternative quasi-polynomial time algorithm based on progress measures, which allows us to reduce the space required from quasi-polynomial to nearly linear. Our key technical tools are a novel concept of ordered tree coding, and a succinct tree coding result that we prove using bounded adaptive multi-counters, both of which are interesting in their own right.

### TR17-071 | Bounds for the Communication Complexity of Two-Player Approximate Correlated Equilibria | Young Kun Ko, Ariel Schvartzman

from ECCC papers

In the recent paper of [BR16], the authors show that, for any constant $10^{-15} > \varepsilon > 0$, the communication complexity of $\varepsilon$-approximate Nash equilibria in $2$-player $n \times n$ games is $n^{\Omega(\varepsilon)}$, resolving the long-standing open problem of whether or not there exists a polylogarithmic communication protocol. In this paper we address an open question they pose regarding the communication complexity of $2$-player $\varepsilon$-approximate correlated equilibria. For our upper bounds, we provide a communication protocol that outputs an $\varepsilon$-approximate correlated equilibrium after exchanging $\tilde{O}(n \varepsilon^{-2})$ bits, saving over the naive protocol, which requires $O(n^2)$ bits. This is in sharp contrast to Nash equilibria, where for sufficiently small constant $\varepsilon$ no $o(n^2)$-communication protocol is known. In the $m$-player, $n$-action setting, our protocol can be extended to an $\tilde{O}(nm)$-bit protocol. For our lower bounds, we exhibit a simple two-player game that has a logarithmic information lower bound: for any constant $\varepsilon < \frac{1}{8}$, the two players need to communicate $\Omega(\log n)$ bits of information to compute any $\varepsilon$-approximate correlated equilibrium in the game. For the $m$-player, $2$-action setting we show a lower bound of $\Omega(m)$ bits, which matches the upper bound we provide up to polylogarithmic terms and shows that the dependence on the number of players is unavoidable.

### TR17-070 | Probabilistic Existence of Large Sets of Designs | Shachar Lovett, Sankeerth Rao, Alex Vardy

from ECCC papers

A new probabilistic technique for establishing the existence of certain regular combinatorial structures has been introduced by Kuperberg, Lovett, and Peled (STOC 2012). Using this technique, it can be shown that under certain conditions, a randomly chosen structure has the required properties of a $t$-$(n,k,\lambda)$ combinatorial design with tiny, yet positive, probability. Herein, we strengthen both the method and the result of Kuperberg, Lovett, and Peled as follows. We modify the random choice and the analysis to show that, under the same conditions, not only does a $t$-$(n,k,\lambda)$ design exist but, in fact, with positive probability there exists a large set of such designs, that is, a partition of the set of $k$-subsets of $[n]$ into $t$-$(n,k,\lambda)$ designs. Specifically, using the probabilistic approach derived herein, we prove that for all sufficiently large $n$, large sets of $t$-$(n,k,\lambda)$ designs exist whenever $k > 9t$ and the necessary divisibility conditions are satisfied. This resolves the existence conjecture for large sets of designs for all $k > 9t$.

### TR17-069 | Does robustness imply tractability? A lower bound for planted clique in the semi-random model | Jacob Steinhardt

from ECCC papers

We consider a robust analog of the planted clique problem. In this analog, a set $S$ of vertices is chosen and all edges within $S$ are included; then, edges between $S$ and the rest of the graph are included with probability $\frac{1}{2}$, while edges not touching $S$ are allowed to vary arbitrarily. For this semi-random model, we show that the information-theoretic threshold for recovery is $\tilde{\Theta}(\sqrt{n})$, in sharp contrast to the classical information-theoretic threshold of $\Theta(\log(n))$. This matches the conjectured computational threshold for the classical planted clique problem, and thus raises the intriguing possibility that, once we require robustness, there is no computational-statistical gap for planted clique.
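To make the model concrete, here is a small numpy sampler for it. The function and argument names are mine, purely for illustration; the adversarial part outside $S$ is left to the caller, since the model allows it to be arbitrary:

```python
import numpy as np

def semi_random_planted_clique(n, k, rest_adversary=None, seed=0):
    """Sample the semi-random planted clique model described above.

    A set S of k vertices is chosen; all edges within S are included;
    edges between S and the rest appear independently with probability 1/2;
    edges not touching S are set by `rest_adversary`, an arbitrary
    (n-k) x (n-k) symmetric 0/1 matrix supplied by the caller (default: none).
    Returns the adjacency matrix A and the planted set S.
    """
    rng = np.random.default_rng(seed)
    S = rng.choice(n, size=k, replace=False)
    rest = np.setdiff1d(np.arange(n), S)
    A = np.zeros((n, n), dtype=int)
    A[np.ix_(S, S)] = 1                           # all edges within S
    cross = rng.integers(0, 2, size=(k, n - k))   # S-to-rest, prob 1/2 each
    A[np.ix_(S, rest)] = cross
    A[np.ix_(rest, S)] = cross.T
    if rest_adversary is not None:                # arbitrary outside S
        A[np.ix_(rest, rest)] = rest_adversary
    np.fill_diagonal(A, 0)
    return A, S
```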

### TR17-068 | Settling the query complexity of non-adaptive junta testing | Xi Chen, Rocco Servedio, Li-Yang Tan, Erik Waingarten, Jinyu Xie

from ECCC papers

We prove that any non-adaptive algorithm that tests whether an unknown Boolean function $f\colon \{0, 1\}^n\to\{0, 1\}$ is a $k$-junta or $\epsilon$-far from every $k$-junta must make $\widetilde{\Omega}(k^{3/2} / \epsilon)$ many queries for a wide range of parameters $k$ and $\epsilon$. Our result dramatically improves previous lower bounds from [BGSMdW13, STW15], and is essentially optimal given Blais's non-adaptive junta tester from [Blais08], which makes $\widetilde{O}(k^{3/2})/\epsilon$ queries. Combined with the adaptive tester of [Blais09] which makes $O(k\log k + k /\epsilon)$ queries, our result shows that adaptivity enables polynomial savings in query complexity for junta testing.

### Discrepancy algorithm inspired by gradient descent and multiplicative weights; after Levy, Ramadas and Rothvoss

from Sébastien Bubeck

A week or so ago at our Theory Lunch we had the pleasure to listen to Harishchandra Ramadas (student of Thomas Rothvoss) who told us about their latest discrepancy algorithm. I think the algorithm is quite interesting as it combines ideas from gradient descent and multiplicative weights in a non-trivial (yet very simple) way. Below I reprove Spencer's six standard deviations theorem with their machinery (in the actual paper Levy, Ramadas and Rothvoss do more than this).

First let me remind you the setting (see also this previous blog post for some motivation on discrepancy and a bit more context; by the way it is funny to read the comments in that post after this): given $v_1, \ldots, v_m \in \mathbb{R}^n$ with $\|v_i\|_2 \leq 1$ one wants to find $x \in \{-1,1\}^n$ (think of it as a "coloring" of the coordinates) such that $\max_{i \in [m]} |\langle v_i, x \rangle| \leq C$ for some numerical constant $C$ (when $v_i$ is a normalized vector of $0$'s and $1$'s the quantity $\langle v_i, x \rangle$ represents the unbalancedness of the coloring in the set corresponding to $v_i$). Clearly it suffices to give a method to find $x \in [-1,1]^n$ with at least half of its coordinates equal to $-1$ or $1$ and such that $\max_{i \in [m]} |\langle v_i, x \rangle| \leq C$ for some numerical constant $C$ (indeed one can then simply recurse on the coordinates not yet set to $-1$ or $1$; this is the so-called "partial coloring" argument). Note also that one can drop the absolute value by taking $v_{m+i} = -v_i$ (the number of constraints then becomes $2m$ but this is easy to deal with and we ignore it here for sake of simplicity).

The algorithm

Let $x_0 = 0$ and $w_0 = (1, \ldots, 1) \in \mathbb{R}^m$. We run an iterative algorithm which keeps at every time step a subspace $U_t$ of valid update directions and then proceeds as follows. First find a unit vector $y_t \in U_t$ (using for instance an orthonormal basis $u_1, \ldots, u_d$ of $U_t$: since $\sum_{j=1}^{d} \sum_i w_t(i) \langle v_i, u_j \rangle^2 = \sum_i w_t(i) \|P_{U_t} v_i\|^2 \leq \|w_t\|_1$, one of the basis elements must do) such that

(1) $\qquad \sum_{i=1}^{m} w_t(i) \langle v_i, y_t \rangle^2 \leq \frac{\|w_t\|_1}{\dim(U_t)} .$

Then update $x_{t+1} = x_t + \delta_t y_t$, where $\delta_t \leq \delta$ is maximal so that $x_{t+1}$ remains in $[-1,1]^n$ (here $\delta \leq 1$ is a fixed step-size parameter). Finally update the exponential weights by $w_{t+1}(i) = w_t(i) \exp(\delta_t \langle v_i, y_t \rangle)$, so that $w_t(i) = \exp(\langle v_i, x_t \rangle)$ at all times.

It remains to describe the subspace $U_t$. For this we introduce the set $A_t \subset [m]$ containing the (say) $n/8$ largest coordinates of $w_t$ (the "inactive" constraints) and the set $F_t \subset [n]$ containing the coordinates of $x_t$ equal to $-1$ or $1$ (the "frozen" coordinates). The subspace $U_t$ is now described as the set of points orthogonal to (i) $x_t$, (ii) $e_j$ for $j \in F_t$, (iii) $v_i$ for $i \in A_t$, (iv) $\sum_{i=1}^m w_t(i) v_i$. The intuition for (i) and (ii) is rather clear: for (i) one simply wants to ensure that the method keeps making progress towards the boundary of the cube (i.e., that $\|x_t\|^2$ keeps increasing) while for (ii) one wants to make sure that coordinates which are already "colored" (i.e., set to $-1$ or $1$) are not updated. In particular (i) and (ii) together ensure that at each step either the norm squared of $x_t$ augments by $\delta^2$ (in particular there are at most $n/\delta^2$ such steps, since $\|x_t\|^2 \leq n$) or one fixes forever one of the coordinates to $-1$ or $1$. In particular this means that after at most $n/\delta^2 + n/2$ iterations one will have a partial coloring (i.e., half of the coordinates set to $-1$ or $1$, which was our objective). Property (iii) is about ensuring that we stop walking in the directions where we are not making good progress (there are many ways to ensure this and this precise form will make sense towards the end of the analysis). Property (iv) is closely related, and while it might be only a technical condition it can also be understood as ensuring that locally one is not increasing the softmax of the constraints, indeed (iv) exactly says that one should move orthogonally to the gradient of $x \mapsto \log \sum_{i=1}^m \exp(\langle v_i, x \rangle)$.
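To make the description concrete, here is a minimal numpy sketch of one partial-coloring phase along the lines above. The function name, the numerical tolerances, and the bookkeeping are mine (an illustrative reading of the post, not the authors' code); the constants match the reconstruction above ($n/8$ inactive constraints, step size capped at $\delta$):

```python
import numpy as np

def partial_coloring(V, delta=0.1, tol=1e-9):
    """One partial-coloring phase of the walk sketched above.

    V : (m, n) array whose rows v_i satisfy ||v_i||_2 <= 1, with absolute
        values already dropped (i.e., V contains both v_i and -v_i).
    Returns x in [-1, 1]^n with at least half its coordinates at +/-1.
    """
    m, n = V.shape
    x = np.zeros(n)
    logw = np.zeros(m)                     # log w_t(i) = <v_i, x_t>
    eye = np.eye(n)
    max_iter = int(n / delta**2 + n) + 1   # cf. the iteration count above

    for _ in range(max_iter):
        frozen = np.abs(x) >= 1 - tol      # coordinates fixed to +/-1
        if frozen.sum() >= n // 2:
            break
        w = np.exp(logw - logw.max())      # weights, up to harmless scaling

        # (i)-(iv): orthogonal to x_t, to e_j for frozen j, to the n/8
        # heaviest v_i, and to the softmax gradient sum_i w_t(i) v_i.
        heavy = np.argsort(w)[-max(1, n // 8):]
        constraints = np.vstack([x[None, :], (w @ V)[None, :],
                                 V[heavy], eye[frozen]])

        # Orthonormal basis of U_t = null space of the constraint matrix.
        _, s, Vt = np.linalg.svd(constraints)
        rank = int((s > 1e-10).sum())
        U = Vt[rank:]                      # rows form a basis of U_t
        if U.shape[0] == 0:
            break

        # Pick the basis direction minimizing sum_i w_t(i) <v_i, y>^2;
        # by the averaging argument it satisfies inequality (1).
        y = U[np.argmin(w @ (V @ U.T) ** 2)]

        # Maximal step <= delta keeping x inside the cube [-1, 1]^n.
        step = delta
        moving = np.abs(y) > 1e-12
        room = (np.sign(y[moving]) - x[moving]) / y[moving]
        if room.size:
            step = min(step, room.min())
        x = np.clip(x + step * y, -1.0, 1.0)
        logw += step * (V @ y)             # exponential-weight update
    return x
```

Computing $U_t$ via an SVD null space is just one concrete choice; any orthonormal basis of $U_t$ suffices for inequality (1).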

The analysis

Let $\Phi_t = \|w_t\|_1 = \sum_{i=1}^m w_t(i)$. Note that since $y_t$ is on the sphere and $\|v_i\|_2 \leq 1$ one has that $|\langle v_i, y_t \rangle| \leq 1$. Thus using $e^s \leq 1 + s + s^2$ for $s \leq 1$, as well as property (iv) (i.e., $\sum_i w_t(i) \langle v_i, y_t \rangle = 0$) and $\delta_t \leq \delta \leq 1$, one obtains:

$\Phi_{t+1} = \sum_{i=1}^m w_t(i) \, e^{\delta_t \langle v_i, y_t \rangle} \leq \Phi_t + \delta^2 \sum_{i=1}^m w_t(i) \langle v_i, y_t \rangle^2 .$

Observe now that the subspace $U_t$ has dimension at least $n/4$ (say for $n \geq 16$: conditions (i) and (iv) remove two directions, the frozen coordinates at most $n/2$, and the inactive constraints $n/8$) and thus by (1) and the above inequalities one gets:

$\Phi_{t+1} \leq \Phi_t \Big( 1 + \frac{4 \delta^2}{n} \Big) \leq \Phi_t \, e^{4 \delta^2 / n} .$

In particular $\Phi_t \leq \Phi_0 \, e^{4 \delta^2 t / n} \leq 2n \, e^{4 + 2\delta^2} \leq C' n$ for any $t \leq T := n/\delta^2 + n/2$, for some numerical constant $C'$. It only remains to observe that this ensures $\langle v_i, x_t \rangle = \log w_t(i) \leq c$ for any $i$ and some numerical constant $c$ (this concludes the proof since we already observed that at time $T$ at least half of the coordinates are colored). For this last implication we simply use property (iii). Indeed assume that some weight satisfies $w_t(i) \geq e^{c}$ at some time $t \leq T$, for some constant $c$ to be fixed below. Since each update increases the weights (multiplicatively) by at most $e^{\delta}$ it means that there is a previous time (say $s$) where this weight was larger than $e^{c - \delta}$ and yet it got updated, meaning that it was not in the top $n/8$ weights, and in particular one had $\Phi_s \geq \frac{n}{8} e^{c - \delta}$ which contradicts $\Phi_s \leq C' n$ for $c$ large enough (namely $c \geq \delta + \log(8 C')$).
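As a sanity check, the sketch above can be exercised on a random set system; the driver below is purely illustrative (my own test harness, not from the paper) and checks both conclusions of the analysis:

```python
rng = np.random.default_rng(0)
n = 64
S = rng.integers(0, 2, size=(n, n)).astype(float)  # random set system
V = np.vstack([S, -S]) / np.sqrt(n)                # normalize, drop |.|
x = partial_coloring(V, delta=0.1)
print("colored coordinates:", int((np.abs(x) >= 1 - 1e-9).sum()))  # >= n/2
print("max unbalancedness :", float((V @ x).max()))                # O(1)
```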

by Sebastien Bubeck at April 22, 2017 08:59 PM UTC

### Me at the Science March today, in front of the Texas Capitol in Austin

from Scott Aaronson

### TR17-067 | Garbled Circuits as Randomized Encodings of Functions: a Primer | Benny Applebaum

from ECCC papers

Yao's garbled circuit construction is a central cryptographic tool with numerous applications. In this tutorial, we study garbled circuits from a foundational point of view under the framework of \emph{randomized encoding} (RE) of functions. We review old and new constructions of REs, present some lower bounds, and describe some applications. We also discuss new directions and open problems in the foundations of REs. This is a survey that appeared in a book of surveys in honor of Oded Goldreich's 60th birthday.

### Lessons from the FCC Spectrum Incentive Auction

A conference that attempts to draw lessons from the FCC spectrum incentive auction will be held on Friday, May 12th, 2017, in Washington DC.

by algorithmicgametheory at April 21, 2017 05:33 PM UTC

### postdoc in algorithms Bergen at University of Bergen, Norway (apply by June 1, 2017)

from CCI: jobs

A one-year research position at the postdoctoral level (with a possible one-year extension) is available in the Algorithms research group (http://www.uib.no/rg/algo/) at the University of Bergen, Norway.

The successful candidate will work on the “Multivariate Algorithms: New domains and paradigms” project funded by the Norwegian Research Council and led by Fedor Fomin. The objective of the project is t

Email: fomin@ii.uib.no

by theorycsjobs at April 21, 2017 01:36 PM UTC

### Fests

from Luca Trevisan

I have been in Israel for the last couple of days attending an event in honor of Oded Goldreich's 60th birthday.

Oded has touched countless lives, with his boundless dedication to mentoring, executed with a unique mix of tough love and good humor. He embodies a purity of vision in the pursuit of the “right” definitions, the “right” conceptual point of view and the “right” proofs in the areas of theoretical computer science that he has transformed with his work and his influence.

A turning point in my own work in theoretical computer science came when I found this paper online in the Spring of 1995. I was a second-year graduate student in Rome, and I was interested in working on PCP-based hardness of approximation, but this seemed like an impossible goal for me. Following the publication of ALMSS, there had been an avalanche of work between 1992 and 1995, mostly in the form of extended abstracts that were impossible to understand without an awareness of a context that was, at that point, purely an oral tradition. The aforementioned paper, instead, was a 100+ page monster that explained everything. Studying that paper gave me an entrance into the area.

Three years later, while I was a postdoc at MIT and Oded was there on sabbatical, he played a key role in the series of events that led me to prove that one can get extractors from pseudorandom generators, and it was him who explained to me that this was, in fact, what I had proved. (Initially, I thought my argument was just proving a much less consequential result.) For the most part, it was this result that got me a good job and that is paying my mortgage.

Like me, there are countless people who started to work in a certain area of theoretical computer science because of a course that Oded taught or a set of lecture notes that he wrote, and countless people whose work was made possible by Oded nudging, or usually shoving, them along the right path.

The last two days have felt a bit like going to a wedding, and not just because I saw friends that I do not get to see too often and because there was a lot to eat and to drink. A wedding is a celebration of the couple getting married, but it is also a public event in which friends and family, by affirming their bonds to the newlyweds, also affirm their bonds to each other.

I was deeply moved by the speeches given by Silvio and Shafi, and really everybody did a great job at telling Oded stories and bringing to life various aspects of his work and personality. But perhaps the most fittingly weird tribute was Benny Chor presenting the Chor-Goldreich paper (the one that introduced min-entropy as a measure of randomness for weak random sources, and the problem of 2-source extraction) using the original 1985 slides.

Speaking of public celebrations, there is less than a month left to register for STOC 2017, the “Theory Fest” that will take place in Montreal in June.

### Theory Fest—Should You Go?

from Richard Lipton

Boaz Barak and Michael Mitzenmacher are well known for many great results. They are currently working not on a theory paper, but on a joint “experiment” called Theory Fest.

Today Ken and I want to discuss their upcoming experiment and spur you to consider attending it.

There are many pros and some cons to attending the new Theory Fest this June 19-23. One pro is where it is being held—Montreal—and another is the great collection of papers that will appear at the STOC 2017 part of the Fest. But the main ‘pro’ is that Boaz and Mike plan on doing some special events to make the Fest more than just a usual conference on theory.

The main ‘con’ is that you need to register soon here, so do not forget to do that.

Possible New Activities

We humbly offer some suggestions to spice up the week: