**Authors:** Gregory Gutin, M. S. Ramanujan, Felix Reidl, Magnus Wahlström

**Abstract:** We study several graph modification problems under
connectivity constraints from the perspective of parameterized complexity: {\sc
(Weighted) Biconnectivity Deletion}, where we are tasked with deleting~$k$
edges while preserving biconnectivity in an undirected graph, {\sc
Vertex-deletion Preserving Strong Connectivity}, where we want to maintain
strong connectivity of a digraph while deleting exactly~$k$ vertices, and {\sc
Path-contraction Preserving Strong Connectivity}, in which the operation of
path contraction on arcs is used instead. The parameterized tractability of
this last problem was posed by Bang-Jensen and Yeo [DAM 2008] as an open
question and we answer it here in the negative: both variants of preserving
strong connectivity are $\sf W[1]$-hard. Preserving biconnectivity, on the
other hand, turns out to be fixed-parameter tractable and we provide a
$2^{O(k\log k)} n^{O(1)}$-algorithm that solves {\sc Weighted Biconnectivity
Deletion}. Further, we show that the unweighted case even admits a randomized
polynomial kernel. All our results provide further interesting data points for
the systematic study of connectivity-preservation constraints in the
parameterized setting.

**Authors:** David G. Harris, Thomas Pensyl, Aravind Srinivasan, Khoa Trinh

**Abstract:** We consider an issue of much current concern: could fairness, an issue that
is already difficult to guarantee, worsen when algorithms run much of our
lives? We consider this in the context of resource-allocation problems; we show
that algorithms can guarantee certain types of fairness in a verifiable way.
Our conceptual contribution is a simple approach to fairness in this context,
which only requires that all users trust some public lottery. Our technical
contributions are in ways to address the $k$-center and knapsack-center
problems that arise in this context: we develop a novel dependent-rounding
technique that, via the new ingredients of "slowing down" and additional
randomization, guarantees stronger correlation properties than known before.

**Authors:** Andrii Riazanov, Mikhail Vyalyi

**Abstract:** The nonnegative and positive semidefinite (PSD-) ranks are closely connected
to the nonnegative and positive semidefinite extension complexities of a
polytope, which are the minimal dimensions of linear and SDP programs which
represent this polytope. Though some exponential lower bounds on the
nonnegative and PSD-ranks have recently been proved for the slack matrices of
some particular polytopes, there are still no tight bounds for these
quantities. We explore some existing bounds on the PSD-rank and prove that they
cannot give exponential lower bounds on the extension complexity. Our approach
consists in proving that the existing bounds are upper bounded by polynomials
in the regular rank of the matrix, which is equal to the dimension
of the polytope (up to an additive constant). As one of the implications, we
also retrieve an upper bound on the mutual information of an arbitrary matrix
of a joint distribution, based on its regular rank.

**Authors:** Jingcheng Liu, Alistair Sinclair, Piyush Srivastava

**Abstract:** We study the problem of approximating the partition function of the
ferromagnetic Ising model in graphs and hypergraphs. Our first result is a
deterministic approximation scheme (an FPTAS) for the partition function in
bounded degree graphs that is valid over the entire range of parameters $\beta$
(the interaction) and $\lambda$ (the external field), except for the case
$\vert{\lambda}\vert=1$ (the "zero-field" case). A randomized algorithm (FPRAS)
for all graphs, and all $\beta,\lambda$, has long been known. Unlike most other
deterministic approximation algorithms for problems in statistical physics and
counting, our algorithm does not rely on the "decay of correlations" property.
Rather, we exploit and extend machinery developed recently by Barvinok, and
Patel and Regts, based on the location of the complex zeros of the partition
function, which can be seen as an algorithmic realization of the classical
Lee-Yang approach to phase transitions. Our approach extends to the more
general setting of the Ising model on hypergraphs of bounded degree and edge
size, where no previous algorithms (even randomized) were known for a wide
range of parameters. In order to achieve this extension, we establish a tight
version of the Lee-Yang theorem for the Ising model on hypergraphs, improving a
classical result of Suzuki and Fisher.

**Authors:** Dariusz Dereniowski, Wieslaw Kubiak

**Abstract:** We study the shared processor scheduling problem with a single shared
processor where a unit time saving (weight) obtained by processing a job on the
shared processor depends on the job. A polynomial-time optimization algorithm
has been given for the problem with equal weights in the literature. This paper
extends that result by showing an $O(n \log n)$ optimization algorithm for a
class of instances in which sorting the jobs in non-decreasing order of
processing times also sorts them in non-increasing order of weights --- this
class of instances generalizes the unweighted case of the problem. This algorithm
also leads to a $\frac{1}{2}$-approximation algorithm for the general weighted
problem. The complexity of the weighted problem remains open.

**Authors:** Xi Chen, Rocco A. Servedio, Li-Yang Tan, Erik Waingarten, Jinyu Xie

**Abstract:** We prove that any non-adaptive algorithm that tests whether an unknown
Boolean function $f: \{0, 1\}^n\to \{0, 1\}$ is a $k$-junta or $\epsilon$-far
from every $k$-junta must make $\widetilde{\Omega}(k^{3/2} / \epsilon)$ many
queries for a wide range of parameters $k$ and $\epsilon$. Our result
dramatically improves previous lower bounds from [BGSMdW13, STW15], and is
essentially optimal given Blais's non-adaptive junta tester from [Blais08],
which makes $\widetilde{O}(k^{3/2})/\epsilon$ queries. Combined with the
adaptive tester of [Blais09] which makes $O(k\log k + k /\epsilon)$ queries,
our result shows that adaptivity enables polynomial savings in query complexity
for junta testing.

**Authors:** Yi-Jun Chang, Seth Pettie

**Abstract:** The celebrated Time Hierarchy Theorem for Turing machines states, informally,
that more problems can be solved given more time. The extent to which a time
hierarchy-type theorem holds in the distributed LOCAL model has been open for
many years. It is consistent with previous results that all natural problems in
the LOCAL model can be classified according to a small constant number of
complexities, such as $O(1),O(\log^* n), O(\log n), 2^{O(\sqrt{\log n})}$, etc.

In this paper we establish the first time hierarchy theorem for the LOCAL model and prove that several gaps exist in the LOCAL time hierarchy.

1. We define an infinite set of simple coloring problems called Hierarchical $2\frac{1}{2}$-Coloring. A correctly colored graph can be confirmed by simply checking the neighborhood of each vertex, so this problem fits into the class of locally checkable labeling (LCL) problems. However, the complexity of the $k$-level Hierarchical $2\frac{1}{2}$-Coloring problem is $\Theta(n^{1/k})$, for $k\in\mathbb{Z}^+$. The upper and lower bounds hold for both general graphs and trees, and for both randomized and deterministic algorithms.

2. Consider any LCL problem on bounded degree trees. We prove an automatic-speedup theorem that states that any randomized $n^{o(1)}$-time algorithm solving the LCL can be transformed into a deterministic $O(\log n)$-time algorithm. Together with a previous result, this establishes that on trees, there are no natural deterministic complexities in the ranges $\omega(\log^* n)$---$o(\log n)$ or $\omega(\log n)$---$n^{o(1)}$.

3. We expose a gap in the randomized time hierarchy on general graphs. Any randomized algorithm that solves an LCL problem in sublogarithmic time can be sped up to run in $O(T_{LLL})$ time, which is the complexity of the distributed Lovasz local lemma problem, currently known to be $\Omega(\log\log n)$ and $O(\log n)$.

**Authors:** Marcin Jurdziński, Ranko Lazić

**Abstract:** The recent breakthrough paper by Calude et al. has given the first algorithm
for solving parity games in quasi-polynomial time, where previously the best
algorithms were mildly subexponential. We devise an alternative
quasi-polynomial time algorithm based on progress measures, which allows us to
reduce the space required from quasi-polynomial to nearly linear. Our key
technical tools are a novel concept of ordered tree coding, and a succinct tree
coding result that we prove using bounded adaptive multi-counters, both of
which are interesting in their own right.

A week or so ago at our Theory Lunch we had the pleasure to listen to Harishchandra Ramadas (student of Thomas Rothvoss) who told us about their latest discrepancy algorithm. I think the algorithm is quite interesting as it combines ideas from gradient descent and multiplicative weights in a non-trivial (yet very simple) way. Below I reprove Spencer's six standard deviations theorem with their machinery (in the actual paper Levy, Ramadas and Rothvoss do more than this).

First let me remind you the setting (see also this previous blog post for some motivation on discrepancy and a bit more context; by the way it is funny to read the comments in that post after this): given $v_1, \ldots, v_n \in [-1,1]^n$ one wants to find $x \in \{-1,1\}^n$ (think of it as a "coloring" of the coordinates) such that $\max_{i \in [n]} |\langle v_i, x \rangle| \leq C \sqrt{n}$ for some numerical constant $C$ (when $v_i$ is a vector of $0$'s and $1$'s the quantity $\langle v_i, x \rangle$ represents the unbalancedness of the coloring in the set corresponding to $v_i$). Clearly it suffices to give a method to find $x \in [-1,1]^n$ with at least half of its coordinates equal to $-1$ or $+1$ and such that $\max_{i \in [n]} |\langle v_i, x \rangle| \leq C \sqrt{n}$ for some numerical constant $C$ (indeed one can then simply recurse on the coordinates not yet set to $-1$ or $+1$; this is the so-called "partial coloring" argument). Note also that one can drop the absolute value by taking $v_{n+i} = -v_i$ for $i \in [n]$ (the number of constraints then becomes $2n$ but this is easy to deal with and we ignore it here for the sake of simplicity).
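The partial coloring recursion just described can be written as a small driver. Here is a minimal sketch of mine, where `partial` is a stand-in for any (hypothetical) routine that, given the constraint vectors restricted to the still-uncolored coordinates, returns a point of $[-1,1]^m$ with at least half of its coordinates at $\pm 1$:

```python
import numpy as np

def full_coloring(V, partial):
    """Partial-coloring driver: repeatedly color at least half of the
    remaining coordinates, then recurse on the rest. `partial` is a
    hypothetical interface, not the algorithm from the post itself."""
    n = V.shape[1]
    x = np.zeros(n)
    active = np.arange(n)           # coordinates not yet set to +-1
    while active.size > 0:
        y = np.asarray(partial(V[:, active]), dtype=float)
        done = np.abs(y) > 1 - 1e-9
        if not done.any():          # defensive guard against stalling
            done = np.ones_like(done)
            y = np.where(y >= 0, 1.0, -1.0)
        x[active[done]] = np.sign(y[done])
        active = active[~done]
    return x

# toy stand-in for a real partial-coloring routine: colors the first half
def toy_partial(M):
    m = M.shape[1]
    y = np.zeros(m)
    y[: (m + 1) // 2] = 1.0
    return y
```

With a genuine partial-coloring subroutine the final discrepancy is the sum of the per-round bounds, which stays $O(\sqrt{n})$ because the number of active coordinates halves in every round.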

**The algorithm**

Let $x_0 = 0$ and $w_0 = (1, \ldots, 1) \in \mathbb{R}^{2n}$. We run an iterative algorithm which keeps at every time step $t$ a subspace $U_t$ of valid update directions and then proceeds as follows. First find a unit vector $u_t \in U_t$ (using for instance an orthonormal basis for $U_t$) such that

(1) $\displaystyle \sum_{i=1}^{2n} w_t(i) \langle v_i, u_t \rangle^2 \leq \frac{1}{\dim(U_t)} \sum_{i=1}^{2n} w_t(i) \Vert v_i \Vert_2^2 .$

Then update $x_{t+1} = x_t + \delta_t u_t$ where $\delta_t \in (0,1]$ is maximal so that $x_{t+1}$ remains in $[-1,1]^n$. Finally update the exponential weights by $w_{t+1}(i) = w_t(i) \exp(\lambda \delta_t \langle v_i, u_t \rangle)$, with $\lambda = 1/\sqrt{n}$.

It remains to describe the subspace $U_t$. For this we introduce the set $I_t$ containing the $n/8$ largest coordinates of $w_t$ (the "inactive" coordinates) and the set $F_t$ containing the coordinates of $x_t$ equal to $-1$ or $+1$ (the "frozen" coordinates). The subspace $U_t$ is now described as the set of points orthogonal to (i) $x_t$, (ii) $e_j, j \in F_t$, (iii) $v_i, i \in I_t$, (iv) $\sum_{i=1}^{2n} w_t(i) v_i$. The intuition for (i) and (ii) is rather clear: for (i) one simply wants to ensure that the method keeps making progress towards the boundary of the cube (i.e., $\{-1,1\}^n$) while for (ii) one wants to make sure that coordinates which are already "colored" (i.e., set to $-1$ or $+1$) are not updated. In particular (i) and (ii) together ensure that at each step either the norm squared of $x_t$ augments by $\delta_t^2 = 1$ (in particular $\sum_t \delta_t^2 \leq n$, so there are at most $n$ such steps) or that one fixes forever one of the coordinates to $-1$ or $+1$. In particular this means that after at most $3n/2$ iterations one will have a partial coloring (i.e., half of the coordinates set to $-1$ or $+1$, which was our objective). Property (iii) is about ensuring that we stop walking in the directions where we are not making good progress (there are many ways to ensure this and this precise form will make sense towards the end of the analysis). Property (iv) is closely related, and while it might be only a technical condition it can also be understood as ensuring that locally one is not increasing the softmax of the constraints, indeed (iv) exactly says that one should move orthogonally to the gradient of $x \mapsto \frac{1}{\lambda} \log \left( \sum_{i=1}^{2n} e^{\lambda \langle v_i, x \rangle} \right)$.
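In code, one phase of this walk might look as follows. This is my own minimal numpy sketch: the SVD-based computation of the subspace, the step-size cap, the size of the inactive set and the tolerances are illustrative implementation choices, not the authors' exact procedure.

```python
import numpy as np

def partial_coloring(V):
    """Multiplicative-weights walk sketch: given rows V[i] with entries in
    [-1, 1], return x in [-1, 1]^n with at least half of its coordinates
    at +-1, together with the frozen-coordinate mask."""
    m, n = V.shape
    lam = 1.0 / np.sqrt(n)
    A = np.vstack([V, -V])            # drop absolute values: 2m constraints
    w = np.ones(2 * m)                # exponential weights
    x = np.zeros(n)
    frozen = np.zeros(n, dtype=bool)
    k_inactive = max(1, A.shape[0] // 16)   # roughly the top n/8 weights
    for _ in range(4 * n):            # theory: O(n) iterations suffice
        if frozen.sum() >= n // 2:
            break
        inactive = np.argsort(w)[-k_inactive:]
        # rows whose orthogonal complement is the subspace U_t
        rows = [x, w @ A] + [np.eye(n)[j] for j in np.flatnonzero(frozen)]
        rows += [A[i] for i in inactive]
        M = np.vstack(rows)
        _, s, Vt = np.linalg.svd(M)
        rank = int((s > 1e-9).sum())
        basis = Vt[rank:]             # orthonormal basis of U_t
        if basis.shape[0] == 0:
            break
        # pick the basis direction minimizing the weighted quadratic form (1)
        scores = ((A @ basis.T) ** 2 * w[:, None]).sum(axis=0)
        u = basis[int(np.argmin(scores))]
        # maximal step size (capped at 1) keeping x inside the cube
        with np.errstate(divide="ignore", invalid="ignore"):
            room = np.where(u > 1e-12, (1 - x) / u,
                            np.where(u < -1e-12, (-1 - x) / u, np.inf))
        room[frozen] = np.inf
        delta = min(1.0, float(room.min()))
        x = np.clip(x + delta * u, -1.0, 1.0)
        frozen |= np.abs(x) > 1 - 1e-9
        w *= np.exp(lam * delta * (A @ u))
    return x, frozen
```

Each pass either takes a full step (the squared norm of $x$ grows by one) or runs into a face of the cube and freezes a new coordinate, so the loop terminates after $O(n)$ iterations.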

**The analysis**

Let $W_t = \sum_{i=1}^{2n} w_t(i)$. Note that since $u_t$ is on the sphere and $\Vert v_i \Vert_2 \leq \sqrt{n}$ one has that $\lambda \delta_t |\langle v_i, u_t \rangle| \leq 1$. Thus using $e^s \leq 1 + s + s^2$ for $s \leq 1$, as well as property (iv) (i.e., $\sum_{i=1}^{2n} w_t(i) \langle v_i, u_t \rangle = 0$) one obtains:

$\displaystyle W_{t+1} = \sum_{i=1}^{2n} w_t(i) \, e^{\lambda \delta_t \langle v_i, u_t \rangle} \leq W_t + \lambda^2 \delta_t^2 \sum_{i=1}^{2n} w_t(i) \langle v_i, u_t \rangle^2 .$

Observe now that the subspace $U_t$ has dimension at least $n/4$ (say for $n \geq 16$) and thus by (1) and the above inequalities (recall $\Vert v_i \Vert_2^2 \leq n$) one gets:

$\displaystyle W_{t+1} \leq W_t \left( 1 + 4 \lambda^2 \delta_t^2 \right) \leq W_t \, e^{4 \lambda^2 \delta_t^2} \leq W_0 \, e^{4 \lambda^2 \sum_{s \leq t} \delta_s^2} \leq 2 n \, e^{4 \lambda^2 n} .$

In particular $W_t \leq C_0 n$ for any $t$, for some numerical constant $C_0$ (recall $\lambda = 1/\sqrt{n}$ and $\sum_t \delta_t^2 \leq n$). It only remains to observe that this ensures $\langle v_i, x_t \rangle \leq C \sqrt{n}$ for any $i$ (this concludes the proof since we already observed that at the final time at least half of the coordinates are colored). For this last implication we simply use property (iii). Indeed assume that some coordinate satisfies $w_t(i) \geq K$ at some time $t$, for some $K$. Since each update increases the weights (multiplicatively) by at most $e$ it means that there is a previous time (say $s$) where this weight was larger than $K/e$ and yet it got updated, meaning that it was not in the top $n/8$ weights, and in particular one had $W_s \geq \frac{n}{8} \cdot \frac{K}{e}$ which contradicts $W_s \leq C_0 n$ for $K$ large enough (namely $K > 8 e C_0$). Since $w_t(i) = e^{\lambda \langle v_i, x_t \rangle}$, the bound $w_t(i) \leq 8 e C_0$ indeed gives $\langle v_i, x_t \rangle \leq \log(8 e C_0) \sqrt{n}$.
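For concreteness, the counting behind this last contradiction can be spelled out as follows. The specific constants here (a total-weight bound $W_s \leq C_0 n$, a top-$n/8$ inactive set, and a per-update factor at most $e$) are illustrative choices for this sketch, not necessarily the paper's exact ones.

```latex
% Assumptions of the sketch: W_s <= C_0 n, I_s = top n/8 weights,
% each update multiplies a weight by at most e.
\begin{align*}
  w_t(i) \ge K
    &\;\Longrightarrow\; \exists\, s < t:\ w_s(i) \ge K/e
       \ \text{and $i$ is updated at time $s$, hence } i \notin I_s \\
    &\;\Longrightarrow\; \text{at least $n/8$ weights at time $s$ are} \ge K/e
       \;\Longrightarrow\; W_s \ge \frac{n}{8}\cdot\frac{K}{e} .
\end{align*}
% Taking K > 8 e C_0 contradicts W_s <= C_0 n, so every weight stays below
% 8 e C_0, i.e. <v_i, x_t> <= sqrt(n) * log(8 e C_0) when lambda = 1/sqrt(n).
```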