# Theory of Computing Blog Aggregator

### A Glitch in the Maptrix

from bit-player

We must always be on the lookout for glitches in the Matrix—anomalies that give us a fleeting glimpse into the algorithms and data structures of the computer simulation that we call Reality. But there’s also the Maptrix, the alternative reality supplied by online mapping services. I spend a fair amount of time exploring that digital terrain, and lately I’ve noticed a few glitches. Exhibit 1 is a tank farm in Bayonne, New Jersey, near where Kill Van Kull enters New York Bay:

I’ve seen a lot of oil tanks over the years, but never before have I encountered such ragged, faceted approximations to cylindrical form. These lumpy, polyhedral tanks suggest that in this little corner of industrial New Jersey, π has a value somewhat smaller than 3.14.

But the π peculiarity is ephemeral. The image above was captured from a laptop screen at the instant the landscape was first rendered in the Apple Maps program. Moments later most of the defects were magically healed, and the illusion of solid, circular reality reasserted itself:

Here’s another example, from a tank farm in Carteret, New Jersey, just across the Arthur Kill from Staten Island. This time we’re looking down on the tanks from directly overhead. The image at left was captured as soon as the scene appeared on screen; at right is the rounded-out version that emerged a few seconds later. The software, again, is Apple Maps.

It’s not just Apple’s version of reality that has such anomalies. Here’s a sample from another source:

The mangled petroleum tanks in this image are in Kearny, New Jersey, a few miles north of Bayonne along the Passaic River. In this case the picture was taken by Google’s eye in the sky, not by Apple’s. The distortion is different, but no less disturbing. Now it’s not just the geometry of the cylinders that has gone goofy but also the topology. Some of those tanks won’t hold oil (or any other fluid); they have holes in them. And notice how the spill-containment walls surrounding the tanks also look moth-eaten.

Finally, returning to Apple Maps, and scrolling just half a mile northwest from the Carteret tanks, we cross the Rahway River into Linden, New Jersey, where we come upon this alarming scene:

Toward the right side of the image we see more cylindrical tanks, some with faint, remnant traces of polyhedral approximation. But when your glance wanders to the upper left, you find that the world suddenly loses all depth. The tank farm over there, and the water treatment plant at the top of the frame, are merely painted on the landscape—trompe l’oeil structures that don’t trompe anyone.

This image offers another view of the same Linden landscape, looking obliquely to the west or northwest. The road that runs through the scene from foreground to background, crossing the New Jersey Turnpike at the very top of the frame, is Tremley Point Road. Suppose you were driving west along that road. Just beyond the row of lumpy trees that extends from the left edge of the image toward the road, you would cross a mysterious boundary, leaving behind the pop-up 3D world and entering flatland. What would happen to you there? Would you be pancaked like those tanks, reduced to a two-dimensional object painted on the pavement, with a painted shadow to accompany you?

Leaving behind tank farms but still poking around in the same general neighborhood of northern New Jersey, I was able to record four stages in the “construction” of the Bayonne Bridge, which crosses Kill Van Kull between Bayonne and Staten Island. These are images from Google Maps, the first three captured at intervals of about a second, the last after a delay of a few seconds more:

In calling attention to these oddities in online map imagery, my aim is not to mock or belittle. To me, these maps are one of the marvels of the age. A century ago, it was a huge novelty and liberation to soar above the earth’s surface for the first time, and see the landscape spread out below as if it were a map. It’s no less remarkable that we have now transformed the experience of looking at a map into something like flying an airplane.

The first “satellite view” maps were just that: montages of images made from hundreds of miles above the earth’s surface, looking straight down. They portrayed the territory as an array of pixels, assigning an RGB value to every (lat, lon) pair on the surface of a sphere.

The next step was to add a digital elevation model, giving each point on the surface a z value as well as a color. This scheme allows us to gaze obliquely across the landscape and see a realistic rendering of mountains, river valleys, and other natural landforms. It works well as long as you don’t try to get too close: the model is well-suited to forests, but not to trees. And it doesn’t work well at all for manmade artifacts.

In representing engineered structures, one weakness of elevation maps is that any reasonable scheme for interpolating between sample points will tend to round over corners and sharp edges, so that buildings become lumpish mounds. Symmetries are also lost: the sampling fails to preserve the rectilinearity or circularity of the various objects we strew around the inhabited patches of the world. And the biggest problem with an elevation map is that it’s a mapping in the mathematical as well as the cartographic sense. The surface of the planet is defined by a smooth, single-valued function, assigning a unique elevation z to every (lat, lon) point on the sphere. Any line radiating from the center of the earth must cross that surface in exactly one point. As a result, there can be no vertical cliffs and no overhangs. Also no bridges. The surface defined by the elevation model can go under a bridge or over it, but not both.
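The "single-valued function" point above can be made concrete with a tiny sketch: a heightmap stores one z per sample point, so any interpolation between samples turns a vertical cliff into a ramp. All names here are illustrative, not taken from any real mapping API.

```python
import math

def interpolate(profile, x):
    """Linear interpolation of a 1-D elevation profile sampled at
    integer positions."""
    x0 = math.floor(x)
    x1 = min(x0 + 1, len(profile) - 1)
    t = x - x0
    return (1 - t) * profile[x0] + t * profile[x1]

# A "cliff": elevation jumps from 0 m to 30 m between samples 2 and 3.
profile = [0.0, 0.0, 0.0, 30.0, 30.0]

# Halfway between the samples the model reports 15 m: the cliff has
# become a 45-degree ramp. No interpolation scheme can keep the wall
# vertical, because verticality needs two z values at one location --
# and a bridge needs two as well, one for the deck and one for the
# water below, which is why the heightmap cannot hold both.
print(interpolate(profile, 2.5))   # 15.0
```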

The latest mapping programs are apparently addressing these issues by building explicit three-dimensional models of selected landscape features. I see evidence of several different techniques. The Bayonne Bridge model that assembles itself in the four frames above is clearly based on a triangulated mesh: all the surfaces making up the envelope of the structure are decomposed into elementary triangles. The cylindrical tanks in the Apple Maps images seem to grow their circular form through an Archimedean process, in which a circle is defined as the limit of an n-gon as n goes to infinity. Elsewhere, I think we may be seeing some kind of spline curves or patches.
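The Archimedean idea is easy to quantify, and it even explains the "π smaller than 3.14" look of the half-rendered tanks: a circle drawn as an inscribed regular n-gon encloses less area than the true circle, and the deficit shrinks as n grows. A small sketch (my own illustration, not anything from the mapping software):

```python
import math

def inscribed_ngon_area(n, r=1.0):
    """Area of a regular n-gon inscribed in a circle of radius r:
    n isoceles triangles with apex angle 2*pi/n."""
    return 0.5 * n * r * r * math.sin(2 * math.pi / n)

for n in (6, 12, 96, 10000):
    print(n, inscribed_ngon_area(n))
# The n = 96 case is essentially Archimedes' own polygon; the
# "effective pi" climbs toward math.pi from below as n increases.
```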

Having constructed the model and embedded it in the landscape, the next trick is to project the pixel pattern from a photograph onto the model surface, as a texture. This process too has a certain comical potential:

What we’re seeing here is apparently a tugboat nudging a barge equipped with a crane. The matching of surface pattern to three-dimensional form has gone badly awry, which gives the whole scene a toylike quality. I find it charming. In future years, when the Maptrix has become a hyperrealistic, real-time virtual world with day and night, weather and seasons—maybe with inhabitants who wave back at you—we’ll wax nostalgic over such quaint foibles.

by Brian Hayes at April 17, 2014 12:45 PM UTC

### Should you reveal a P = NP algorithm?

What would you do if you could prove that P = NP with a time complexity of n^2 or better... Moreover, would you publish it?
There are all these statements of the good that could come of it. But how would the government react in its present state? Would it ever see the light of day? How would a person be treated if they just gave it away on the internet? Could a person be labeled a threat to national security for giving it away?
I consider this a completely hypothetical and unlikely scenario. If you think this applies to you, make sure you truly have a working algorithm. Code it up and mint yourself some bitcoins, but not enough to notice. If you can't use your algorithm to mint a bitcoin, you don't have a working algorithm.
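The "mint a bitcoin" test works because proof-of-work has exactly the NP-style asymmetry a fast SAT solver would break: finding a nonce whose hash clears a difficulty target is brute-force search, while checking a claimed nonce is a single hash. Here is a toy stand-in, not the real Bitcoin protocol (which double-SHA-256-hashes an 80-byte block header):

```python
import hashlib

def check(header: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verify in one hash that the nonce clears the target: the top
    difficulty_bits bits of SHA-256(header || nonce) must be zero."""
    h = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") >> (256 - difficulty_bits) == 0

def mine(header: bytes, difficulty_bits: int) -> int:
    """Brute-force search for the smallest valid nonce -- the step a
    polynomial-time NP algorithm would shortcut."""
    nonce = 0
    while not check(header, nonce, difficulty_bits):
        nonce += 1
    return nonce

nonce = mine(b"toy-block", 12)   # about 2^12 hashes on average
assert check(b"toy-block", nonce, 12)
```

If your claimed algorithm cannot beat `mine` at real Bitcoin difficulty, it fails the post's test.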

The next step is up to you. I believe that the positives of a fast algorithm for NP-complete problems, such as its use in curing diseases, greatly outweigh the negatives. I would first warn the Internet companies (as was done for Heartbleed) so they can modify their systems. Then I would just publish the algorithm. Once the genie is out of the bottle everyone can use it and the government wouldn't be able to hide it.

If you can find an algorithm, so can others, so you should just take the credit before someone else discovers it. I don't see how one can get into trouble for revealing an algorithm you created. But you shouldn't take legal advice from this blog.

Once again, though, no one will take you seriously unless you really have a working algorithm. If you just tell Google you have an algorithm for an NP-complete problem, they will ignore you. If you hand them their private keys, they will listen.

by Lance Fortnow (noreply@blogger.com) at April 17, 2014 12:13 PM UTC

### The More Variables, the Better?

from Richard Lipton

Ideas from algebraic geometry and arithmetic complexity

Hyman Bass is a professor of both mathematics and mathematics education at the University of Michigan, after a long and storied career at Columbia University. He was one of the first generation of mathematicians to investigate K-theory, and gave what is now the recognized definition of the first generation of K-groups, that is ${K_1(R)}$ for a ring ${R}$. He co-founded with Jean-Pierre Serre a theory of graphs of groups, that is mappings associating a group to each vertex and edge of a graph. He is a past President of the American Mathematical Society—and exemplifies the idea that the AMS promotes mathematics at all levels: since 1996 he has worked on elementary school math education with his Michigan colleague Deborah Ball.

Today Ken and I wish to discuss an idea used in a paper on the Jacobian Conjecture by Bass with Edwin Connell and David Wright.

One of the most powerful methods we use in theory is often the reduction of the dimension of a problem. The famous JL theorem and its relatives show us that often problems in high-dimensional Euclidean spaces can be reduced to much lower dimensional problems, with little error. This method can and has been used in many areas of theory to solve a variety of problems.

The idea used by Bass and many others is the opposite. Now we are interested in lifting a problem from a space to a much higher space. The critical intuition is that by moving your problem to a higher space there is more “room” to navigate, and this extra “space” allows operations to be performed that would not have been possible in the original lower-dimensional space. An easily quoted example is that the Poincaré conjecture was proved relatively easily for spaces of dimension ${d = 5}$ and higher, then for ${d=4}$ in the 1980s, and finally for ${d=3}$ by Grigory Perelman, drawing on work of Richard Hamilton.

Let ${F(x)}$ be a polynomial map from ${\mathbb{Z}^{n}}$ to ${\mathbb{Z}^{n}}$ where

$\displaystyle x = x_{1},\dots,x_{n}.$

The Jacobian Conjecture (JC), which we have covered several times before, indeed recently, studies when such maps are injective. The trouble with the JC is quite simply that such maps can be very complex in their structure, which explains why the JC remains open after more than 70 years.

A very simple, almost trivial, idea is to replace ${F(x)}$ by ${G(x,y)}$ which is defined by

$\displaystyle G(x,y) = (F(x),y),$

for new variables ${y=y_{1},\dots,y_{m}}$. For example if ${F(x) = (x_{1},x_{1}+x_{2}^{7})}$ then ${G}$ could be

$\displaystyle (x_{1},x_{1}+x_{2}^{7},y_{1},y_{2},y_{3}).$

The new variables ${y}$ are used in a “trivial” way. One reason the method is useful for the JC is that ${F}$ is injective if and only if ${G}$ is injective. This is trivial: the new variables do not interact in any manner with the original variables, and so the map ${G}$ is injective precisely when ${F}$ is.
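The injectivity claim can be checked numerically on the post's own example ${F(x) = (x_{1},x_{1}+x_{2}^{7})}$, here with a single tacked-on variable (a toy verification, of course, not a proof):

```python
from itertools import product

def F(x1, x2):
    # F is injective over the integers: x1 is read off directly, and
    # x2**7 determines x2 because the seventh power is odd.
    return (x1, x1 + x2**7)

def G(x1, x2, y1):
    # The lifted map: F on the original variables, identity on y1.
    return F(x1, x2) + (y1,)

# On a 7 x 7 x 7 integer grid, G sends distinct points to distinct
# images -- the y coordinate rides along untouched.
pts = list(product(range(-3, 4), repeat=3))
assert len({G(*p) for p in pts}) == len(pts)
```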

Why is this of any use? Clearly ${G}$ is really just ${F}$ with an identity function on the variables ${y}$ “tacked on” to it. How can this help?

## Using The Extra Variables

The answer is that we can use the extra variables to modify the polynomial ${G}$ so that it looks very different from ${F}$. Suppose that we replace ${G}$ by ${H = A \circ G}$ where ${A}$ is a nice polynomial map. We may be able to vastly change the structure of ${G}$ and get an ${H}$ that is much simpler by some measure. The hope is that this restructuring will allow us to prove something about ${H}$ that then implies something about ${G}$. This is exactly what happens in the study of the JC.

Here is a famous result from the Bass-Connell-Wright paper:

Theorem 1 Given ${F(x)}$ there is an automorphism ${A}$ so that ${H=A \circ G}$ has degree at most three. Moreover ${F}$ is injective if and only if ${H}$ is.

This allows the JC question to be reduced to the study of cubic maps. Of course such low degree maps are still complex in their behavior, but the hope is that, while they involve many more variables, the restriction on their degree may make their structure easier to understand. This has yet to be fruitful—the JC remains open. The following quotation from the paper explains the idea:

## General Use In Theory?

One idea is to try and use this stabilization philosophy to attack problems about polynomials that arise in complexity theory. For example can we use the addition of extra variables to attack the power of polynomials modulo composite numbers? Suppose that ${F(x)}$ is a polynomial in many variables that modulo ${pq}$ computes something interesting. What if we add extra variables as above, then rearrange the resulting polynomial to have a “better” structure? We must preserve not its injectivity, but, for us, its ability to compute something. If we can do this, then perhaps we can use the extra variables to obtain a lower bound.

Two simple examples from arithmetic complexity are aliasing variables in a formula to make it read-once, and the Derivative Lemma proved by Walter Baur and Volker Strassen. In the latter, the idea is to take a certain gate ${g}$ of a circuit ${C}$ computing a function ${f}$, and regard it as a new variable ${x_{n+1}}$ in a circuit ${C'}$. The circuit ${C'}$ computes a function ${f'}$ of ${n+1}$ variables such that

$\displaystyle f(x_1,\dots,x_n) = f'(x_1,\dots,x_n,g(x)).$

In the Derivative Lemma the gate ${g}$ is chosen to involve one-or-two input variables ${x_i}$, but the idea can be extended to other cases. When the construction is applied recursively we generate circuits ${C'',C^{(3)},C^{(4)},\dots}$ in which the number of variables becomes higher and higher, but the circuits themselves become flatter and flatter, until only a “stardust” of many single-variable functions is left.
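A small concrete instance of the gate-as-variable step, with a made-up two-variable circuit (the gate and the circuit here are my own illustration, not an example from Baur–Strassen):

```python
def g(x1, x2):
    return x1 * x2            # the chosen gate: a product of two inputs

def f(x1, x2):
    return (x1 + g(x1, x2))**2 + x2   # the original circuit for f

def f_prime(x1, x2, x3):
    return (x1 + x3)**2 + x2  # the gate promoted to a new variable x3

# f(x) == f'(x, g(x)) on every input, matching the displayed identity:
# substituting the gate's value back for x3 recovers f exactly.
for x1 in range(-4, 5):
    for x2 in range(-4, 5):
        assert f(x1, x2) == f_prime(x1, x2, g(x1, x2))
```

Note that `f_prime` is "flatter" than `f`: the multiplication inside the square is gone, which is the flattening the recursive construction exploits.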

## Open Problems

Are there general rules of thumb on when you should increase versus decrease the dimension? Or when to induct on the output gate(s) of a circuit, on the number of variables going down, or on the number of variables going up?

### Be Fair and Be Selfish! Characterizing Deterministic Diffusive Load-Balancing Schemes with Small Discrepancy

Authors: Petra Berenbrink, Ralf Klasing, Adrian Kosowski, Frederik Mallmann-Trenn, Przemyslaw Uznanski
Abstract: We consider the problem of deterministic distributed load balancing of indivisible tasks in the discrete setting. A set of $n$ processors is connected into a $d$-regular symmetric network. In every time step, each processor exchanges some of the tasks allocated to it with each of its neighbours in the network. The goal is to minimize the discrepancy between the number of tasks on the most-loaded and the least-loaded processor as quickly as possible. In this model, the performance of load-balancing schemes obtained by rounding the continuous diffusion process up or down to the nearest integer was considered by Rabani et al. (1998), who showed that after $T = O(\log (Kn)/\mu)$ steps any such scheme achieves a discrepancy of $O(d\log n/\mu)$, where $\mu$ is the spectral gap of the transition matrix of the network graph, and $K$ is the initial load discrepancy in the system. In this work, we identify natural additional conditions on the form of the discrete deterministic balancing scheme, which result in smaller values of discrepancy between maximum and minimum load. Specifically, we introduce the notion of a \emph{cumulatively fair} load-balancing scheme, in which every node sends in total almost the same number of tasks along each of its outgoing edges during every interval of consecutive time steps, and not only at every single step. As our first main result, we prove that any scheme which is cumulatively fair and which has the selfish property that every node retains a sufficient part of its load for itself at each step, achieves a discrepancy of $O(\min\{d\sqrt{\log n/\mu} ,d\sqrt{n}\})$ in time $O(T)$, and that in general neither of these assumptions may be omitted without increasing discrepancy. We then show that any cumulatively fair scheme satisfying additional assumptions on fairness and selfishness achieves a discrepancy of $O(d)$ almost as quickly as the continuous diffusion process.
In particular, we provide an example of a scheme belonging to this class, which is both deterministic and stateless, i.e.\ such that the number of tasks sent by a node to its neighbours is a function of only its current load. The $O(d)$ discrepancy achieved by this scheme is asymptotically the best possible in the class of stateless schemes and is reached after time $O(T + \log^2 n/\mu)$. We also provide time bounds for other schemes belonging to the characterized class, such as variants of the deterministic rotor-router process, and remark on possible generalizations of our results.
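A minimal sketch of the kind of process the abstract describes, on a cycle of processors ($d = 2$): in each synchronous round a node sends `floor((own - neighbour)/(d+1))` tasks along each edge where it is the more loaded endpoint. This is my own toy rounding rule, not one of the paper's analysed schemes; its point is that floor rounding stalls at a nonzero residual discrepancy, the phenomenon the paper works to reduce.

```python
def diffusion_round(load):
    """One synchronous round of rounded diffusion on a cycle."""
    n = len(load)
    delta = [0] * n
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):
            sent = max(0, load[i] - load[j]) // 3   # d + 1 = 3
            delta[i] -= sent
            delta[j] += sent
    return [load[i] + delta[i] for i in range(n)]

load = [9, 0, 0, 0, 0, 0]
for _ in range(4):
    load = diffusion_round(load)
# The process reaches [3, 2, 1, 0, 1, 2] and then stops moving:
# total load is conserved, but the rounding leaves a discrepancy of 3
# that no further round removes.
print(load)
```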

### On Level persistence (Relevant level persistence numbers)

Authors: Dong Du
Abstract: The purpose of this note is to describe a new set of numerical invariants, the relevant level persistence numbers, and to make explicit their relationship with the four types of bar codes, a more familiar set of complete invariants for level persistence. The paper provides the opportunity to compare level persistence with the persistence introduced by Edelsbrunner-Letscher-Zomorodian, referred to in this paper as sub-level persistence.

### List decoding group homomorphisms between supersolvable groups

Abstract: We show that the set of homomorphisms between two supersolvable groups can be locally list decoded up to the minimum distance of the code, extending the results of Dinur et al., who studied the case where the groups are abelian. Moreover, when specialized to the abelian case, our proof is more streamlined and gives a better constant in the exponent of the list size. The constant is improved from about 3.5 million to 105.

### Extremal examples of collapsible complexes and random discrete Morse theory

Authors: Karim A. Adiprasito, Bruno Benedetti, Frank H. Lutz
Abstract: We present extremal constructions connected with the property of simplicial collapsibility.

(1) For each $d \ge 2$, there are collapsible simplicial $d$-complexes with only one free face. Also, there are non-evasive $d$-complexes with only two free faces. (Both results are optimal in all dimensions.)

(2) Optimal discrete Morse vectors need not be unique. We explicitly construct a contractible, but non-collapsible $3$-dimensional simplicial complex with face vector $f=(106,596,1064,573)$ that admits two distinct optimal discrete Morse vectors, $(1,1,1,0)$ and $(1,0,1,1)$. Indeed, we show that in every dimension $d\geq 3$ there are contractible, non-collapsible simplicial $d$-complexes that have $(1,0,\dots,0,1,1,0)$ and $(1,0,\dots,0,0,1,1)$ as distinct optimal discrete Morse vectors.

(3) We give a first explicit example of a (non-PL) $5$-manifold, with face vector $f=(5013,72300,290944,$ $495912,383136,110880)$, that is collapsible but not homeomorphic to a ball.

Furthermore, we discuss possible improvements and drawbacks of random approaches to collapsibility and discrete Morse theory. We will see that in many practical instances, the \textbf{random-revlex} strategy works better than Benedetti--Lutz's (uniform) random strategy.

On the theoretical side, we prove that after repeated barycentric subdivisions, the discrete Morse vectors found by randomized algorithms have, on average, an exponential (in the number of barycentric subdivisions) number of critical cells asymptotically almost surely.

### Covering problems in edge- and node-weighted graphs

Authors: Takuro Fukunaga
Abstract: This paper discusses the graph covering problem in which a set of edges in an edge- and node-weighted graph is chosen to satisfy some covering constraints while minimizing the sum of the weights. In this problem, because of the large integrality gap of a natural linear programming (LP) relaxation, LP rounding algorithms based on the relaxation yield poor performance. Here we propose a stronger LP relaxation for the graph covering problem. The proposed relaxation is applied to designing primal-dual algorithms for two fundamental graph covering problems: the prize-collecting edge dominating set problem and the multicut problem in trees. Our algorithms are, respectively, an exact polynomial-time algorithm for the former problem and a 2-approximation algorithm for the latter. These results match the currently known best results for purely edge-weighted graphs.

### TR14-054 | Parallel Repetition of Fortified Games | Dana Moshkovitz

from ECCC papers

The Parallel Repetition Theorem upper-bounds the value of a repeated (tensored) two prover game G^k in terms of the value of the game G and the number of repetitions k. Contrary to what one might have guessed, the value of G^k is not val(G)^k, but rather a more complicated expression depending on properties of G. Indeed, there are numerous works aiming to understand the value of repeated games, both for general games G and for special cases. A case of particular interest is that of projection games, where the answer of the first prover determines at most one acceptable answer for the second prover. In this work we give a simple transformation, which we call "fortification", that can be applied to any projection game. We show that for fortified games G, the value of the k-repeated game is approximately val(G)^k. This results in nearly a quadratic improvement in the size of projection PCP with the lowest error known today. Unlike previous proofs of the parallel repetition theorem that relied on information theory or linear algebra, our proof is purely combinatorial and quite short. We then discuss the problem of derandomizing parallel repetition, and the limitations of the fortification idea in this setting. We point out a connection between the problem of derandomizing parallel repetition and the problem of composition. This connection could shed light on the so-called Projection Games Conjecture, which asks for projection PCP with minimal error.

### Accepted papers for ICALP 2014

from Luca Aceto

The list of accepted papers for ICALP 2014 is now available here. The PC chairs told me that the quality of the submissions was very good and the scientific programme for the conference looks mouth-watering. Apart from the contributed papers and the excellent invited speakers, there will also be three award presentations. The conference dinner also promises to be a cultural highlight and the Copenhagen Jazz Festival will be in full swing.

There is going to be a lot going on in Copenhagen for TCS buffs this summer, which makes it one of the two places to be. (The other is Vienna.)

I hope to see many of you at what will be a memorable ICALP. Thanks to Thore Husfeldt and his team for all the sterling work they are doing!

by Luca Aceto (noreply@blogger.com) at April 16, 2014 03:39 PM UTC

### The Complexity of Counting Edge Colorings and a Dichotomy for Some Higher Domain Holant Problems

Authors: Jin-Yi Cai, Heng Guo, Tyson Williams
Abstract: We show that an effective version of Siegel's Theorem on finiteness of integer solutions and an application of elementary Galois theory are key ingredients in a complexity classification of some Holant problems. These Holant problems, denoted by Holant(f), are defined by a symmetric ternary function f that is invariant under any permutation of the k >= 3 domain elements. We prove that Holant(f) exhibits a complexity dichotomy. This dichotomy holds even when restricted to planar graphs. A special case of this result is that counting edge k-colorings is #P-hard over planar 3-regular graphs for k >= 3. In fact, we prove that counting edge k-colorings is #P-hard over planar r-regular graphs for all k >= r >= 3. The problem is polynomial-time computable in all other parameter settings. The proof of the dichotomy theorem for Holant(f) depends on the fact that a specific polynomial p(x,y) has an explicitly listed finite set of integer solutions, and the determination of the Galois groups of some specific polynomials. In the process, we also encounter the Tutte polynomial, medial graphs, Eulerian partitions, Puiseux series, and a certain lattice condition on the (logarithms of the) roots of polynomials.

### Colorful bin packing

Authors: Gyorgy Dosa, Leah Epstein
Abstract: We study a variant of online bin packing, called colorful bin packing. In this problem, items that are presented one by one are to be packed into bins of size 1. Each item i has a size s_i \in [0,1] and a color c_i \in C, where C is a set of colors (that is not necessarily known in advance). The total size of items packed into a bin cannot exceed its size, thus an item i can always be packed into a new bin, but an item cannot be packed into a non-empty bin if the previous item packed into that bin has the same color, or if the occupied space in it is larger than 1-s_i. This problem generalizes standard online bin packing and online black and white bin packing (where |C|=2). We prove that colorful bin packing is harder than black and white bin packing in the sense that an online algorithm for zero size items that packs the input into the smallest possible number of bins cannot exist for |C| \geq 3, while it is known that such an algorithm exists for |C|=2. We show that natural generalizations of classic algorithms for bin packing fail to work for the case |C| \geq 3 and moreover, algorithms that perform well for black and white bin packing do not perform well either, already for the case |C|=3. Our main results are a new algorithm for colorful bin packing that we design and analyze, whose absolute competitive ratio is 4, and a new lower bound of 2 on the asymptotic competitive ratio of any algorithm, that is valid even for black and white bin packing.
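The feasibility rule in the abstract is easy to state in code. Below is First Fit adapted to the colorful constraint, as a sketch of the model only: it is not one of the paper's analysed algorithms, just an illustration of when an item may join a bin.

```python
def colorful_first_fit(items):
    """items: list of (size, color) pairs, presented online.
    An item may join a bin only if it fits in the remaining space
    AND the last item packed into that bin has a different color."""
    bins = []
    for size, color in items:
        for b in bins:
            used = sum(s for s, _ in b)
            if used + size <= 1 and b[-1][1] != color:
                b.append((size, color))
                break
        else:
            bins.append([(size, color)])   # open a new bin
    return bins

# Two reds cannot share a bin back-to-back even though they fit by size,
# so this instance needs two bins.
bins = colorful_first_fit([(0.4, "red"), (0.4, "red"),
                           (0.3, "blue"), (0.5, "blue")])
print(len(bins))   # 2
```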

### On the Error Resilience of Ordered Binary Decision Diagrams

Authors: Anna Bernasconi, Valentina Ciriani, Lorenzo Lago
Abstract: Ordered Binary Decision Diagrams (OBDDs) are a data structure that is used in an increasing number of fields of Computer Science (e.g., logic synthesis, program verification, data mining, bioinformatics, and data protection) for representing and manipulating discrete structures and Boolean functions. The purpose of this paper is to study the error resilience of OBDDs and to design a resilient version of this data structure, i.e., a self-repairing OBDD. In particular, we describe some strategies that make reduced ordered OBDDs resilient to errors in the indexes, that are associated to the input variables, or in the pointers (i.e., OBDD edges) of the nodes. These strategies exploit the inherent redundancy of the data structure, as well as the redundancy introduced by its efficient implementations. The solutions we propose allow the exact restoring of the original OBDD and are suitable to be applied to classical software packages for the manipulation of OBDDs currently in use. Another result of the paper is the definition of a new canonical OBDD model, called {\em Index-resilient Reduced OBDD}, which guarantees that a node with a faulty index has a reconstruction cost $O(k)$, where $k$ is the number of nodes with corrupted index.

### A simple SVD algorithm for finding hidden partitions

Authors: Van Vu
Abstract: Finding a hidden partition in a random environment is a general and important problem, which contains as subproblems many famous questions, such as finding a hidden clique, finding a hidden coloring, finding a hidden bipartition etc.

In this paper, we provide a simple SVD algorithm for this purpose, answering a question of McSherry. This algorithm is very easy to implement and works for sparse graphs with optimal density.

### Algorithms parameterized by vertex cover and modular width, through potential maximal cliques

Authors: Fedor V. Fomin, Mathieu Liedloff, Pedro Montealegre, Ioan Todinca
Abstract: In this paper we give upper bounds on the number of minimal separators and potential maximal cliques of graphs w.r.t. two graph parameters, namely vertex cover ($\operatorname{vc}$) and modular width ($\operatorname{mw}$). We prove that for any graph, the number of minimal separators is $\mathcal{O}^*(3^{\operatorname{vc}})$ and $\mathcal{O}^*(1.6181^{\operatorname{mw}})$, and the number of potential maximal cliques is $\mathcal{O}^*(4^{\operatorname{vc}})$ and $\mathcal{O}^*(1.7347^{\operatorname{mw}})$, and these objects can be listed within the same running times. (The $\mathcal{O}^*$ notation suppresses polynomial factors in the size of the input.) Combined with known results, we deduce that a large family of problems, e.g., Treewidth, Minimum Fill-in, Longest Induced Path, Feedback vertex set and many others, can be solved in time $\mathcal{O}^*(4^{\operatorname{vc}})$ or $\mathcal{O}^*(1.7347^{\operatorname{mw}})$.

### Boltzmann samplers for random generation of lambda terms

Authors: Pierre Lescanne
Abstract: Randomly generating structured objects is important in testing and optimizing functional programs, whereas generating random $\lambda$-terms is more specifically needed for testing and optimizing compilers. For that purpose a tool called QuickCheck has been proposed, but in this tool the control of the random generation is left to the programmer. Ten years ago, a method called Boltzmann sampling was proposed to generate combinatorial structures. In this paper, we show how Boltzmann samplers can be developed to generate lambda-terms, but also other data structures like trees. These samplers rely on a critical value which parameterizes the main random selector and which is exhibited here with explanations on how it is computed. Haskell programs are proposed to show how samplers are actually implemented.
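In the spirit of the abstract (the paper uses Haskell and targets lambda-terms), here is a toy Boltzmann sampler of my own for the simpler class of plane binary trees, `B = Leaf | Node B B`, counted by leaves. The generating function satisfies B(z) = z + B(z)^2, so B(z) = (1 - sqrt(1 - 4z))/2 with critical value z = 1/4; the sampler emits a leaf with probability z / B(z), else branches into two recursive calls.

```python
import math
import random

def boltzmann_tree(z, rng):
    """Boltzmann sampler for binary trees at parameter 0 < z < 1/4."""
    B = (1 - math.sqrt(1 - 4 * z)) / 2
    if rng.random() < z / B:
        return "leaf"
    return (boltzmann_tree(z, rng), boltzmann_tree(z, rng))

def leaves(t):
    return 1 if t == "leaf" else leaves(t[0]) + leaves(t[1])

rng = random.Random(0)
sizes = [leaves(boltzmann_tree(0.2, rng)) for _ in range(1000)]
# At z = 0.2 the branching is subcritical and sizes stay small; pushing
# z toward the critical value 1/4 makes the expected size blow up --
# this is the critical-value tuning the abstract alludes to.
```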

### Circuit complexity, proof complexity, and polynomial identity testing

Authors: Joshua A. Grochow, Toniann Pitassi
Abstract: We introduce a new algebraic proof system, which has tight connections to (algebraic) circuit complexity. In particular, we show that any super-polynomial lower bound on any Boolean tautology in our proof system implies that the permanent does not have polynomial-size algebraic circuits (VNP is not equal to VP). As a corollary to the proof, we also show that super-polynomial lower bounds on the number of lines in Polynomial Calculus proofs (as opposed to the usual measure of number of monomials) imply the Permanent versus Determinant Conjecture. Note that, prior to our work, there was no proof system for which lower bounds on an arbitrary tautology implied any computational lower bound.

Our proof system helps clarify the relationships between previous algebraic proof systems, and begins to shed light on why proof complexity lower bounds for various proof systems have been so much harder than lower bounds on the corresponding circuit classes. In doing so, we highlight the importance of polynomial identity testing (PIT) for understanding proof complexity.

More specifically, we introduce certain propositional axioms satisfied by any Boolean circuit computing PIT. We use these PIT axioms to shed light on AC^0[p]-Frege lower bounds, which have been open for nearly 30 years, with no satisfactory explanation as to their apparent difficulty. We show that either: a) Proving super-polynomial lower bounds on AC^0[p]-Frege implies VNP does not have polynomial-size circuits of depth d - a notoriously open question for d at least 4 - thus explaining the difficulty of lower bounds on AC^0[p]-Frege, or b) AC^0[p]-Frege cannot efficiently prove the depth d PIT axioms, and hence we have a lower bound on AC^0[p]-Frege.

Using the algebraic structure of our proof system, we propose a novel way to extend techniques from algebraic circuit complexity to prove lower bounds in proof complexity.

### Shortest reconfiguration paths in the solution space of Boolean formulas

Authors: Amer E. Mouawad, Naomi Nishimura, Vinayak Pathak, Venkatesh Raman
Abstract: Given a Boolean formula $\phi$ and two satisfying assignments $s$ and $t$, define a \emph{flip} as an operation that changes the value of a variable in an assignment so that the resulting assignment is also satisfying. We study the computational problem of finding the shortest sequence of flips (if one exists) that transforms $s$ into $t$. We use Schaefer's framework for classification of Boolean formulas and prove that for any class of formulas, computing the shortest flip sequence is either in P, NP-complete, or PSPACE-complete. In the process, we obtain a class where the shortest path can be found in polynomial time even though its length is not equal to the symmetric difference. This is in contrast to all reconfiguration problems studied so far, where polynomial time algorithms for computing the shortest reconfiguration path were known only for cases where the shortest path only modified the symmetric difference.
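As a toy illustration of the problem being classified (not the paper's algorithm — for small formulas one can simply search the solution space exhaustively; the function name and example formula here are illustrative only), a breadth-first search over satisfying assignments, with one edge per flip, finds a shortest flip sequence:

```python
from itertools import product
from collections import deque

def shortest_flip_sequence(phi, n, s, t):
    """BFS over the satisfying assignments of phi, where two assignments
    are adjacent iff they differ in exactly one variable (a 'flip').
    Returns a shortest flip path from s to t, or None if none exists."""
    sat = {a for a in product((0, 1), repeat=n) if phi(a)}
    if s not in sat or t not in sat:
        return None
    prev = {s: None}
    queue = deque([s])
    while queue:
        a = queue.popleft()
        if a == t:                      # reconstruct the path back to s
            path = []
            while a is not None:
                path.append(a)
                a = prev[a]
            return path[::-1]
        for i in range(n):              # try flipping each variable
            b = a[:i] + (1 - a[i],) + a[i + 1:]
            if b in sat and b not in prev:
                prev[b] = a
                queue.append(b)
    return None

# Example: phi = (x1 or x2) and (x2 or x3); s and t differ in all 3 bits
phi = lambda a: (a[0] or a[1]) and (a[1] or a[2])
path = shortest_flip_sequence(phi, 3, (1, 0, 1), (0, 1, 0))
```

Here the shortest path uses 3 flips, matching the symmetric difference of the two assignments; the interesting classes in the paper are precisely those where this equality can fail yet shortest paths remain computable.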

### Approximation Schemes for Partitioning: Convex Decomposition and Surface Approximation

Abstract: We revisit two NP-hard geometric partitioning problems - convex decomposition and surface approximation. Building on recent developments in geometric separators, we present quasi-polynomial time algorithms for these problems with improved approximation guarantees.

### Changing Bases: Multistage Optimization for Matroids and Matchings

Authors: Anupam Gupta, Kunal Talwar, Udi Wieder
Abstract: This paper is motivated by the fact that many systems need to be maintained continually while the underlying costs change over time. The challenge is to continually maintain near-optimal solutions to the underlying optimization problems, without creating too much churn in the solution itself. We model this as a multistage combinatorial optimization problem where the input is a sequence of cost functions (one for each time step); while we can change the solution from step to step, we incur an additional cost for every such change. We study the multistage matroid maintenance problem, where we need to maintain a base of a matroid in each time step under the changing cost functions and acquisition costs for adding new elements. The online version of this problem generalizes online paging. E.g., given a graph, we need to maintain a spanning tree $T_t$ at each step: we pay $c_t(T_t)$ for the cost of the tree at time $t$, and also $| T_t\setminus T_{t-1} |$ for the number of edges changed at this step. Our main result is an $O(\log m \log r)$-approximation, where $m$ is the number of elements/edges and $r$ is the rank of the matroid. We also give an $O(\log m)$ approximation for the offline version of the problem. These bounds hold when the acquisition costs are non-uniform, in which case both results are the best possible unless P=NP.

We also study the perfect matching version of the problem, where we must maintain a perfect matching at each step under changing cost functions and costs for adding new elements. Surprisingly, the hardness drastically increases: for any constant $\epsilon>0$, there is no $O(n^{1-\epsilon})$-approximation to the multistage matching maintenance problem, even in the offline case.

### Quantum Information Complexity and Amortized Communication

Authors: Dave Touchette
Abstract: We define a new notion of information cost for quantum protocols, and a corresponding notion of quantum information complexity for bipartite quantum channels, and then investigate the properties of such quantities. These are the fully quantum generalizations of the analogous quantities for bipartite classical functions that have found many applications recently, in particular for proving communication complexity lower bounds. Our definition is strongly tied to the quantum state redistribution task.

Previous attempts have been made to define such a quantity for quantum protocols, with particular applications in mind; our notion differs from these in many respects. First, it directly provides a lower bound on the quantum communication cost, independent of the number of rounds of the underlying protocol. Secondly, we provide an operational interpretation for quantum information complexity: we show that it is exactly equal to the amortized quantum communication complexity of a bipartite channel on a given state. This generalizes a result of Braverman and Rao to quantum protocols, and even strengthens the classical result in a bounded round scenario. Also, this provides an analogue of the Schumacher source compression theorem for interactive quantum protocols, and answers a question raised by Braverman.

We also discuss some potential applications to quantum communication complexity lower bounds by specializing our definition for classical functions and inputs. Building on work of Jain, Radhakrishnan and Sen, we provide new evidence suggesting that the bounded round quantum communication complexity of the disjointness function is $\Omega (n/M + M)$ for $M$-message protocols. This would match the best known upper bound.

### TR14-053 | Computing with a full memory: Catalytic space | Harry Buhrman, Richard Cleve, Michal Koucky, Bruno Loff, Florian Speelman

from ECCC papers

We define the notion of a catalytic-space computation. This is a computation that has a small amount of clean space available and is equipped with additional auxiliary space, with the caveat that the additional space is initially in an arbitrary, possibly incompressible, state and must be returned to this state when the computation is finished. We show that the extra space can be used in a nontrivial way, to compute uniform $TC^1$-circuits with just a logarithmic amount of clean space. The extra space thus works analogously to a catalyst in a chemical reaction. $TC^1$-circuits can compute, for example, the determinant of a matrix, which is not known to be computable in logspace. In order to obtain our results we study an algebraic model of computation, a variant of straight-line programs. We employ register machines with input registers $x_1, \ldots, x_n$ and work registers $r_1, \ldots, r_m$. The instructions available are of the form $r_i \leftarrow r_i \pm u \times v$, with $u, v$ registers (distinct from $r_i$) or constants. We wish to compute a function $f(x_1, \ldots, x_n)$ through a sequence of such instructions. The working registers have some arbitrary initial value $r_i = \tau_i$, and they may be altered throughout the computation, but by the end all registers must be returned to their initial value $\tau_i$, except for, say, $r_1$, which must hold $\tau_1 + f(x_1, \ldots, x_n)$. We show that all of Valiant's class VP, and more, can be computed in this model. This significantly extends the framework and techniques of Ben-Or and Cleve [Ben-Or Cleve 1992]. Upper bounding the power of catalytic computation, we show that catalytic logspace is contained in ZPP. We further construct an oracle world where catalytic logspace is equal to PSPACE, and show that under the exponential time hypothesis (ETH), SAT cannot be computed in catalytic sub-linear space. [Ben-Or Cleve 1992]: M. Ben-Or and R. Cleve. Computing algebraic formulas using a constant number of registers. SIAM Journal on Computing, 21(1):54–58, 1992.
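To get a feel for the register model (a toy illustration only, not the paper's $TC^1$ simulation), the following sketch uses four instructions of the allowed form $r_i \leftarrow r_i \pm u \times v$ to add $x_1 x_2 x_3$ to $r_1$ while returning the auxiliary register $r_2$ to its arbitrary initial value:

```python
import random

# Toy catalytic computation: add x1*x2*x3 to r1, restoring r2 exactly.
# Each line below is a legal instruction r_i <- r_i ± u*v, where u, v
# are registers distinct from r_i (inputs are read-only).
random.seed(0)
x1, x2, x3 = 3, 5, 7                               # input registers
t1 = random.randint(-10**6, 10**6)                 # arbitrary initial values
t2 = random.randint(-10**6, 10**6)                 # (possibly incompressible)
r1, r2 = t1, t2

r2 = r2 + x2 * x3    # r2 = t2 + x2*x3
r1 = r1 + x1 * r2    # r1 = t1 + x1*t2 + x1*x2*x3
r2 = r2 - x2 * x3    # r2 = t2           (auxiliary register restored)
r1 = r1 - x1 * r2    # r1 = t1 + x1*x2*x3

assert r2 == t2
assert r1 == t1 + x1 * x2 * x3
```

The garbage term $x_1\tau_2$ picked up in the second instruction is cancelled by the fourth once $r_2$ has been restored — the same add-then-uncompute pattern that drives the Ben-Or–Cleve simulation.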

### Restricted Invertibility by Interlacing Polynomials

In this post, I am going to give you a simple, self-contained, and fruitful demonstration of a recently introduced proof technique called the method of interlacing families of polynomials, which was also mentioned in an earlier post. This method, which may be seen as an incarnation of the probabilistic method, is relevant in situations when you want to show that at least one matrix from some collection of matrices must have eigenvalues in a certain range. The basic methodology is to write down the characteristic polynomial of each matrix you care about, compute the average of all these polynomials (literally by averaging the coefficients), and then reason about the eigenvalues of the individual matrices by studying the zeros of the average polynomial. The relationship between the average polynomial and the individuals hinges on a phenomenon known as interlacing.

We will use the method to prove the following sharp variant of Bourgain and Tzafriri’s restricted invertibility theorem, which may be seen as a robust, quantitative version of the fact that every rank ${k}$ matrix contains a set of ${k}$ linearly independent columns.

Theorem 1 Suppose ${v_1,\ldots,v_m\in{\mathbb R}^n}$ are vectors with ${\sum_{i=1}^mv_iv_i^T=I_n}$. Then for every ${k<n}$ there is a subset ${\sigma\subset [m]}$ of size ${k}$ with

$\displaystyle \lambda_k\left(\sum_{i\in\sigma}v_iv_i^T\right)\geq \left(1-\sqrt{\frac{k}{n}}\right)^2\frac{n}{m}.$

That is, any set of vectors ${v_1,\ldots,v_m}$ with variance ${\sum_{i=1}^m \langle x,v_i\rangle^2}$ equal to one in every direction ${\|x\|=1}$ must contain a large subset which is far from being linearly degenerate, in the sense of having large eigenvalues (compared to ${n/m}$, which is the average squared length of the vectors). Such variance-one sets go by many other names in different contexts: they are also called isotropic sets, decompositions of the identity, and tight frames. This type of theorem was first proved by Bourgain and Tzafriri in 1987, and later generalized and sharpened to include the form stated here.

The original applications of Theorem 1 and its variants were mainly in Banach space theory and harmonic analysis. More recently, it was used in theoretical CS by Nikolov, Talwar, and Zhang in the contexts of differential privacy and discrepancy minimization. Another connection with TCS was discovered by Joel Tropp, who showed that the set ${\sigma}$ can be found algorithmically using a semidefinite program whose dual is related to the Goemans-Williamson SDP for Max-Cut.

In more concrete notation, the theorem says that every rectangular matrix ${B_{n\times m}}$ with ${BB^T=I_n}$ contains an ${n\times k}$ column submatrix ${B_S}$, ${S\subset [m]}$, with ${\sigma_k^2(B_S)\ge (1-\sqrt{k/n})^2(n/m)}$, where ${\sigma_k}$ is the ${k}$th largest singular value. Written this way, we see some similarity with the column subset selection problem in data mining, which seeks to extract a maximally nondegenerate set of ‘representative’ columns from a data matrix. There are also useful generalizations of Theorem 1 for arbitrary rectangular ${B}$.

As I said earlier, the technique is based on studying the roots of averages of polynomials. In general, averaging polynomials coefficient-wise can do unpredictable things to the roots. For instance, the average of ${(x-1)(x-2)}$ and ${(x-3)(x-4)}$, which are both real-rooted quadratics, is ${x^2-5x+7}$, which has complex roots ${2.5\pm(\sqrt{3}/2)i}$. Even when the roots of the average are real, there is in general no simple relationship between the roots of two polynomials and the roots of their average.
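This example is easy to check numerically (a quick sketch using numpy):

```python
import numpy as np

# Coefficient-wise average of (x-1)(x-2) and (x-3)(x-4):
p = np.array([1, -3, 2])     # x^2 - 3x + 2, roots 1 and 2
q = np.array([1, -7, 12])    # x^2 - 7x + 12, roots 3 and 4
avg = (p + q) / 2            # x^2 - 5x + 7

r = np.roots(avg)
# discriminant 25 - 28 = -3 < 0, so the roots are 2.5 ± (sqrt(3)/2) i
```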

The main insight is that there are nonetheless many situations where averaging the coefficients of polynomials also has the effect of averaging each of the roots individually, and that it is possible to identify and exploit these situations. To speak about this concretely, we will need to give the roots names. There is no canonical way to do this for arbitrary polynomials, whose roots are just sets of points in the complex plane. However, when all the roots are real there is a natural ordering given by the real line; we will use this ordering to label the roots of a real-rooted polynomial ${p(x)=\prod_{i=1}^n (x-\alpha_i)}$ in descending order ${\alpha_1\ge\ldots\ge\alpha_n}$.

Interlacing

We will use the following classical notion to characterize precisely the good situations mentioned above.

Definition 2 (Interlacing) Let ${f}$ be a degree ${n}$ polynomial with all real roots ${\{\alpha_i\}}$, and let ${g}$ be degree ${n}$ or ${n-1}$ with all real roots ${\{\beta_i\}}$ (ignoring ${\beta_n}$ in the degree ${n-1}$ case). We say that ${g}$ interlaces ${f}$ if their roots alternate, i.e.,

$\displaystyle \beta_n\le\alpha_n\le \beta_{n-1}\le \ldots \le \beta_1\le \alpha_1.$

Following Fisk, we denote this as ${g\longrightarrow f}$, to indicate that the largest root belongs to ${f}$.

If there is a single ${g}$ which interlaces a family of polynomials ${f_1,\ldots,f_m}$, we say that they have a common interlacing.

It is an easy exercise that ${f_1,\ldots,f_m}$ of degree ${n}$ have a common interlacing iff there are closed intervals ${I_n\le I_{n-1}\le\ldots\le I_1}$ (where ${\le}$ means to the left of) such that the ${i}$th roots of all the ${f_j}$ are contained in ${I_i}$. It is also easy to see that a set of polynomials has a common interlacing iff every pair of them has a common interlacing (this may be viewed as Helly’s theorem on the real line).

We now state the main theorem about averaging polynomials with common interlacings.

Theorem 3 Suppose ${f_1,\ldots,f_m}$ are real-rooted of degree ${n}$ with positive leading coefficients. Let ${\lambda_k(f_j)}$ denote the ${k^{th}}$ largest root of ${f_j}$ and let ${\mu}$ be any distribution on ${[m]}$. If ${f_1,\ldots,f_m}$ have a common interlacing, then for all ${k=1,\ldots,n}$

$\displaystyle \min_j \lambda_k(f_j)\le \lambda_k(\mathop{\mathbb E}_{j\sim \mu} f_j)\le \max_j \lambda_k(f_j).$

The proof of this theorem is a three-line exercise. Since it is the crucial fact upon which the entire technique relies, I encourage you to find this proof for yourself (Hint: Apply the intermediate value theorem inside each interval ${I_i}$.) You can also look at the picture below, which shows what happens for two cubic polynomials with a quadratic common interlacing.
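A small numerical illustration of Theorem 3 (a sketch: the two cubics are arbitrary choices whose ${j}$th roots sit in disjoint ordered intervals, which by the exercise above guarantees a common interlacing):

```python
import numpy as np

# Two real-rooted cubics whose j-th largest roots lie in the disjoint
# intervals [4,5] >= [2,3] >= [0,1], so they have a common interlacing.
f1 = np.poly([4, 2, 0])            # x^3 - 6x^2 + 8x
f2 = np.poly([5, 3, 1])            # x^3 - 9x^2 + 23x - 15
avg = (f1 + f2) / 2                # coefficient-wise average

r = np.sort(np.roots(avg).real)[::-1]   # roots of the average, descending
# Theorem 3: the k-th root of the average lies between the min and max
# of the k-th roots of f1 and f2, i.e. inside each interval.
for k, (lo, hi) in enumerate([(4, 5), (2, 3), (0, 1)]):
    assert lo <= r[k] <= hi
```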

One of the nicest features of common interlacings is that their existence is equivalent to certain real-rootedness statements. Often, this characterization gives us a systematic way to argue that common interlacings exist, rather than having to rely on cleverness and pull them out of thin air. The following seems to have been discovered a number of times (for instance, Fell or Chudnovsky & Seymour); the proof of it included below assumes that the roots of a polynomial are continuous functions of its coefficients (which may be shown using elementary complex analysis).

Theorem 4 If ${f_1,\ldots,f_m}$ are degree ${n}$ polynomials and all of their convex combinations ${\sum_{i=1}^m \mu_if_i}$ have real roots, then they have a common interlacing.

Proof: Since common interlacing is a pairwise condition, it suffices to handle the case of two polynomials ${f_0}$ and ${f_1}$. Let

$\displaystyle f_t := (1-t)f_0+tf_1$

with ${t\in [0,1]}$. Assume without loss of generality that ${f_0}$ and ${f_1}$ have no common roots (if they do, divide them out and put them back in at the end). As ${t}$ varies from ${0}$ to ${1}$, the roots of ${f_t}$ define ${n}$ continuous curves in the complex plane ${C_1,\ldots,C_n}$, each beginning at a root of ${f_0}$ and ending at a root of ${f_1}$. By our assumption the curves must all lie in the real line. Observe that no curve can cross a root of either ${f_0}$ or ${f_1}$ in the middle: if ${f_t(r)=0}$ for some ${t \in (0,1)}$ and ${f_0(r)=0}$, then immediately we also have ${f_t(r)=tf_1(r)=0}$, contradicting the no common roots assumption. Thus, each curve defines a closed interval containing exactly one root of ${f_0}$ and one root of ${f_1}$, and these intervals do not overlap except possibly at their endpoints, establishing the existence of a common interlacing. $\Box$

Characteristic Polynomials and Rank One Updates

A very natural and relevant example of interlacing polynomials comes from matrices. Recall that the characteristic polynomial of a matrix ${A}$ is given by

$\displaystyle \chi(A)(x):=\det(xI-A)$

and that its roots are the eigenvalues of ${A}$. The following classical fact tells us that rank one updates create interlacing.

Lemma 5 (Cauchy’s Interlacing Theorem) If ${A}$ is a symmetric matrix and ${v}$ is a vector then

$\displaystyle \chi(A)\longrightarrow \chi(A+vv^T).$

Proof: There are many ways to prove this, and it is a nice exercise. One way which I particularly like, and which will be relevant for the rest of this post, is to observe that

$\displaystyle \begin{array}{rcl} \det(xI-A-vv^T) &= \det(xI-A)\det(I-vv^T(xI-A)^{-1})\qquad (*) \\&=\det(xI-A)(1-v^T(xI-A)^{-1}v) \\&=\chi(A)(x)\left(1-\sum_{i=1}^n\frac{\langle v,u_i\rangle^2}{x-\lambda_i}\right),\end{array}$

where ${u_i}$ and ${\lambda_i}$ are the eigenvectors and eigenvalues of ${A}$. Interlacing then follows by inspecting the poles and zeros of the rational function on the right hand side. $\Box$
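The identity ${(*)}$ is also easy to sanity-check numerically; this sketch evaluates both sides of the final form at a point to the right of the spectrum of ${A+vv^T}$ on a random instance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                     # random symmetric matrix
v = rng.standard_normal(n)

lam, U = np.linalg.eigh(A)            # eigenvalues lam_i, eigenvectors u_i
# Evaluate safely to the right of all eigenvalues of A + vv^T:
x = np.linalg.eigvalsh(A + np.outer(v, v)).max() + 2.0

lhs = np.linalg.det(x * np.eye(n) - A - np.outer(v, v))
chi = np.prod(x - lam)                                  # chi(A)(x)
rhs = chi * (1 - np.sum((U.T @ v) ** 2 / (x - lam)))    # identity (*)
assert np.isclose(lhs, rhs)
```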

We are now in a position to do something nontrivial. Suppose ${A}$ is a symmetric ${n\times n}$ real matrix and ${v_1,\ldots,v_m}$ are some vectors in ${{\mathbb R}^n}$. Cauchy’s theorem tells us that the polynomials

$\displaystyle \chi(A+v_1v_1^T),\chi(A+v_2v_2^T),\ldots,\chi(A+v_mv_m^T)$

have a common interlacing, namely ${\chi(A)}$. Thus, Theorem 3 implies that for every ${k}$, there exists a ${j}$ so that the ${k}$th largest eigenvalue of ${A+v_jv_j^T}$ is at least the ${k}$th largest root of the average polynomial

$\displaystyle \mathop{\mathbb E}_j \chi(A+v_jv_j^T).$

We can compute this polynomial using the calculation ${(*)}$ as follows:

$\displaystyle \mathop{\mathbb E} \chi(A+v_jv_j^T) = \chi(A)\left(1-\sum_{i=1}^n\frac{ \mathop{\mathbb E} \langle v_j,u_i\rangle^2}{x-\lambda_i}\right).$

In general, this polynomial depends on the squared inner products ${\langle v_j,u_i\rangle^2}$. When ${\sum_{i=1}^m v_iv_i^T=I}$, however, we have ${\mathop{\mathbb E}_j \langle v_j,u_i\rangle^2=1/m}$ for all ${u_i}$, and:

$\displaystyle \mathop{\mathbb E}_j \chi(A+v_jv_j^T) = \chi(A)(x)\left(1-\sum_{i=1}^n\frac{ 1/m}{x-\lambda_i}\right)=\chi(A)(x)-(1/m)\frac{\partial}{\partial x}\chi(A)(x).\qquad (**)$

That is, adding a random rank one matrix in the isotropic case corresponds to subtracting off a multiple of the derivative from the characteristic polynomial. Note that there is no dependence on the vectors ${v_j}$ in this expression, and it has ‘forgotten’ all of the eigenvectors ${u_i}$. This is where the gain is: we have reduced a high-dimensional linear algebra problem (of finding a ${v_j}$ for which ${A+v_jv_j^T}$ has certain eigenvalues, which may be difficult when the matrices involved do not commute) to a univariate calculus / analysis problem (given a polynomial, figure out what subtracting the derivative does to its roots). Moreover, the latter problem is amenable to a completely different set of tools than the original eigenvalue problem.

As a sanity check, if we apply the above deduction to ${A=0}$, we find that any isotropic set ${\sum_{i=1}^m v_iv_i^T=I}$ must contain a ${j}$ such that ${\lambda_1(v_jv_j^T)}$ is at least the largest root of

$\displaystyle \chi(0)(x)-(1/m)\chi(0)'(x)=x^n-(n/m)x^{n-1},$

which is just ${n/m}$. This makes sense since ${\lambda_1(v_jv_j^T)=\|v_j\|^2}$, and the average squared length of the vectors is indeed ${n/m}$ since ${\mathrm{trace}(\sum_i v_iv_i^T)=n}$.
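The identity ${(**)}$ behind this sanity check can itself be verified numerically; the sketch below builds an isotropic set from the rows of a matrix with orthonormal columns and compares the coefficient-wise average of the characteristic polynomials against ${\chi(A)(x)-(1/m)\chi(A)'(x)}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 7
# Rows of a matrix Q with orthonormal columns form an isotropic set:
# sum_j v_j v_j^T = Q^T Q = I_n.
Q, _ = np.linalg.qr(rng.standard_normal((m, n)))
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                     # arbitrary symmetric matrix

# Left side of (**): average the char polys chi(A + v_j v_j^T) coefficient-wise
avg = np.mean([np.poly(A + np.outer(v, v)) for v in Q], axis=0)

# Right side of (**): chi(A) - (1/m) * d/dx chi(A)
chiA = np.poly(A)
rhs = chiA - np.concatenate(([0], np.polyder(chiA))) / m   # pad derivative
assert np.allclose(avg, rhs)
```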

Differential Operators and Induction

The real power of the method comes from being able to apply it inductively to a sum of many independent random ${v_j}$‘s at once, rather than just once. In this case, establishing the necessary common interlacings requires a combination of Theorem 4 and Cauchy’s theorem. A central role is played by the differential operators ${(I-(1/m)\frac{\partial}{\partial x})}$ seen above, which I will henceforth denote as ${(I-(1/m)D)}$. The proof relies on the following key properties of these operators:

Lemma 6 (Properties of Differential Operators)

1. If ${ \mathbf{X} }$ is a random vector with ${\mathop{\mathbb E} \mathbf{X} \mathbf{X} ^T = cI}$ then

$\displaystyle \mathop{\mathbb E} \chi(A+ \mathbf{X} \mathbf{X} ^T) = (I-cD)\chi(A).$

2. If ${f}$ has real roots then so does ${(I-cD)f}$.
3. If ${f_1,\ldots,f_m}$ have a common interlacing, then so do ${(1-cD)f_1,\ldots, (1-cD)f_m}$.

Proof: Part (1) was essentially shown in ${(**)}$. Part (2) follows by applying ${(*)}$ to the matrix ${A}$ with diagonal entries equal to the roots of ${f}$, and plugging in ${v=\sqrt{c}\cdot(1,1,\ldots,1)^T}$, so that ${f=\chi[A]}$ and ${(1-cD)f=\chi[A+vv^T]}$.

For part (3), Theorem 3 tells us that all convex combinations ${\sum_{i=1}^m\mu_if_i}$ have real roots. By part (2) it follows that all

$\displaystyle (1-cD)\sum_{i=1}^m\mu_if_i = \sum_{i=1}^m \mu_i (1-cD)f_i$

also have real roots. By Theorem 4, this means that the ${(1-cD)f_i}$ must have a common interlacing.$\Box$

We are now ready to perform the main induction which will give us the proof of Theorem 1.

Lemma 7 Let ${ \mathbf{X} }$ be uniformly chosen from ${\{v_i\}_{i\le m}}$ so that ${\mathop{\mathbb E} \mathbf{X} \mathbf{X} ^T=(1/m)I}$, and let ${ \mathbf{X} _1,\ldots, \mathbf{X} _k}$ be i.i.d. copies of ${ \mathbf{X} }$. Then there exists a choice of indices ${j_1,\ldots,j_k}$ satisfying

$\displaystyle \lambda_k \left(\chi(\sum_{i=1}^k v_{j_i}v_{j_i}^T)\right) \ge \lambda_k \left(\mathop{\mathbb E} \chi(\sum_{i=1}^k \mathbf{X} _i \mathbf{X} _i^T)\right).$

Proof: For any partial assignment ${j_1,\ldots,j_\ell}$ of the indices, consider the ‘conditional expectation’ polynomial:

$\displaystyle q_{j_1,\ldots,j_\ell}(x) := \mathop{\mathbb E}_{ \mathbf{X} _{\ell+1},\ldots, \mathbf{X} _k} \chi\left(\sum_{i=1}^\ell v_{j_i}v_{j_i}^T + \sum_{i=\ell+1}^k \mathbf{X} _i \mathbf{X} _i^T\right).$

We will show that there exists a ${j_{\ell+1}\in [m]}$ such that

$\displaystyle \lambda_k(q_{j_1,\ldots,j_{\ell+1}})\ge \lambda_k(q_{j_1,\ldots,j_\ell}),\ \ \ \ \ (1)$

which by induction will complete the proof. Consider the matrix

$\displaystyle A = \sum_{i=1}^\ell v_{j_i}v_{j_i}^T.$

By Cauchy’s interlacing theorem ${\chi(A)}$ interlaces ${\chi(A+v_{j_{\ell+1}}v_{j_{\ell+1}}^T)}$ for every ${j_{\ell+1}\in [m]}$. Lemma 6 tells us ${(1-(1/m)D)}$ operators preserve common interlacing, so the polynomials

$\displaystyle (1-(1/m)D)^{k-(\ell+1)}\chi(A+v_{j_{\ell+1}}v_{j_{\ell+1}}^T) = q_{j_1,\ldots,j_\ell,j_{\ell+1}}(x)$

(by applying Lemma 6 ${k-(\ell+1)}$ times) must also have a common interlacing. Thus, some ${j_{\ell+1}\in [m]}$ must satisfy (1), as desired. $\Box$

Bounding the Roots: Laguerre Polynomials

To finish the proof of Theorem 1, it suffices by Lemma 7 to prove a lower bound on the ${k}$th largest root of the expected polynomial ${\mathop{\mathbb E} \chi\left(\sum_{i=1}^k \mathbf{X} _i \mathbf{X} _i^T\right)}$. By applying Lemma 6 ${k}$ times to ${\chi(0)=x^n}$, we find that

$\displaystyle \mathop{\mathbb E} \chi\left(m\cdot \sum_{i=1}^k \mathbf{X} _i \mathbf{X} _i^T\right) = (1-D)^kx^n =: p_k(x).\ \ \ \ \ (2)$

This looks like a nice polynomial, and we are free to use any method we like to bound its roots.

The easiest way is to observe that

$\displaystyle p_k(x)=x^{n-k}\mathcal{L}_k^{(n-k)}(x),$

where ${\mathcal{L}_k^{(n-k)}(x)}$ is a degree ${k}$ associated Laguerre polynomial. These are a classical family of orthogonal polynomials and a lot is known about the locations of their roots; in particular, there is the following estimate due to Krasikov.

Lemma 8 (Roots of Laguerre Polynomials) The roots of the associated Laguerre polynomial

$\displaystyle \mathcal{L}_k^{(n-k)}(x):= (1-D)^{n}x^k\ \ \ \ \ (3)$

are contained in the interval ${[n(1-\sqrt{k/n})^2,n(1+\sqrt{k/n})^2].}$

It follows by Lemma 8 that ${\lambda_k(p_k)\ge \lambda_k(\mathcal{L}_k^{(n-k)})\ge n(1-\sqrt{k/n})^2}$, which immediately yields Theorem 1 by Lemma 7 and (2).
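Equation (2) and the root bound of Lemma 8 can be checked numerically as well (a sketch with arbitrarily chosen ${n=9}$, ${k=4}$):

```python
import numpy as np

n, k = 9, 4
# Build p_k = (1 - D)^k x^n by repeatedly subtracting the derivative.
p = np.zeros(n + 1)
p[0] = 1.0                               # coefficients of x^n, highest-first
for _ in range(k):
    p = p - np.concatenate(([0], np.polyder(p)))

# p_k = x^{n-k} * L_k^{(n-k)}(x): n-k roots at zero plus k Laguerre roots.
roots = np.sort(np.roots(p).real)[::-1]  # descending; top k are Laguerre roots
lo = n * (1 - np.sqrt(k / n)) ** 2       # Krasikov's interval (Lemma 8)
hi = n * (1 + np.sqrt(k / n)) ** 2
assert all(lo <= r <= hi for r in roots[:k])
```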

If you think that appealing to Laguerre polynomials was magical, it is also possible to prove the bound (3) from scratch in less than a page using the ‘barrier function’ argument from this paper, which is also intimately related to the formulas ${(*)}$ and ${(**)}$.

Conclusion

The argument given here is a special case of a more general principle: that expected characteristic polynomials of certain random matrices can be expressed in terms of differential operators, which can then be used to establish the existence of the necessary common interlacings as well as to analyze the roots of the expected polynomials themselves. In the isotropic case of Bourgain-Tzafriri presented here, all of these objects can be chosen to be univariate polynomials. Morally, this is because the covariance matrices of all of the random vectors involved are multiples of the identity (which trivially commute with each other), and all of the characteristic polynomials involved are simple univariate linear transformations of each other (such as ${(1-cD)}$). The above argument can also be made to work in the non-isotropic case, yielding improvements over previously known bounds. This is the subject of a paper in preparation, Interlacing Families III, with Adam Marcus and Dan Spielman.

On the other hand, the proofs of Kadison-Singer and existence of Ramanujan graphs involve analyzing sums of independent rank one matrices which come from non-identically distributed distributions whose covariance matrices do not commute. At a high level, this is what creates the need to consider multivariate characteristic polynomials and differential operators, which are then analyzed using techniques from the theory of real stable polynomials.

Acknowledgments

Everything in this post is joint work with Adam Marcus and Dan Spielman. Thanks to Raghu Meka for helpful comments in the preparation of this post.

by Nikhil Srivastava at April 15, 2014 02:23 PM UTC

### Diameter Constrained Reliability: Computational Complexity in terms of the diameter and number of terminals

Authors: Eduardo Canale, Pablo Romero
Abstract: Let $G=(V,E)$ be a simple graph with $|V|=n$ nodes and $|E|=m$ links, a subset $K \subseteq V$ of \emph{terminals}, a vector $p=(p_1,\ldots,p_m) \in [0,1]^m$ and a positive integer $d$, called \emph{diameter}. We assume nodes are perfect but links fail stochastically and independently, with probabilities $q_i=1-p_i$. The \emph{diameter-constrained reliability} (DCR for short), is the probability that the terminals of the resulting subgraph remain connected by paths composed by $d$ links, or less. This number is denoted by $R_{K,G}^{d}(p)$.

The general DCR computation is inside the class of $\mathcal{N}\mathcal{P}$-Hard problems, since it subsumes the problem of computing the probability that a random graph is connected. In this paper, the computational complexity of DCR subproblems is discussed in terms of the number of terminal nodes $k=|K|$ and the diameter $d$. Either when $d=1$ or when $d=2$ and $k$ is fixed, the DCR is inside the class $\mathcal{P}$ of polynomial-time problems. The DCR becomes $\mathcal{N}\mathcal{P}$-Hard when $k \geq 2$ is a fixed input parameter and $d\geq 3$.

The case where $k=n$ and $d \geq 2$ is fixed has not been studied in prior literature. Here, the $\mathcal{N}\mathcal{P}$-Hardness of this case is established.
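As an illustration of the quantity being studied (a Monte Carlo sketch, not one of the paper's exact algorithms; the function name is hypothetical), $R_{K,G}^{d}(p)$ can be estimated by sampling link failures and running hop-limited breadth-first searches:

```python
import random
from collections import deque

def dcr_monte_carlo(n, links, p, K, d, trials=20000, seed=0):
    """Monte Carlo estimate of the diameter-constrained reliability
    R_{K,G}^d(p): the probability that, after each link i fails
    independently with probability 1 - p[i], every pair of terminals
    in K is joined by a path of at most d surviving links."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        adj = [[] for _ in range(n)]
        for (u, v), pi in zip(links, p):
            if rng.random() < pi:          # link i survives with prob p_i
                adj[u].append(v)
                adj[v].append(u)
        ok = True
        for s in K:                        # BFS limited to d hops
            dist = {s: 0}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                if dist[u] == d:
                    continue
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        queue.append(w)
            if any(t not in dist for t in K):
                ok = False
                break
        hits += ok
    return hits / trials

# 4-cycle, terminals {0, 2}, each link up with probability 0.9, d = 2
links = [(0, 1), (1, 2), (2, 3), (3, 0)]
est = dcr_monte_carlo(4, links, [0.9] * 4, K=[0, 2], d=2)
```

For this instance the exact value is $2p^4$-free to compute by hand: the terminals are joined within 2 hops iff one of the two 2-link paths survives, giving $2p^2 - p^4 = 0.9639$ at $p = 0.9$, which the estimate approaches.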

### On Backdoors To Tractable Constraint Languages

Authors: Clement Carbonnel, Martin C. Cooper, Emmanuel Hebrard
Abstract: In the context of CSPs, a strong backdoor is a subset of variables such that every complete assignment yields a residual instance guaranteed to have a specified property. If the property allows efficient solving, then a small strong backdoor provides a reasonable decomposition of the original instance into easy instances. An important challenge is the design of algorithms that can find quickly a small strong backdoor if one exists. We present a systematic study of the parameterized complexity of backdoor detection when the target property is a restricted type of constraint language defined by means of a family of polymorphisms. In particular, we show that under the weak assumption that the polymorphisms are idempotent, the problem is unlikely to be FPT when the parameter is either r (the constraint arity) or k (the size of the backdoor) unless P = NP or FPT = W[2]. When the parameter is k+r, however, we are able to identify large classes of languages for which the problem of finding a small backdoor is FPT.

### Constant-factor approximations for Capacitated Arc Routing without triangle inequality

Authors: René van Bevern, Sepp Hartung, André Nichterlein, Manuel Sorge
Abstract: Given an undirected graph with edge costs and edge demands, the Capacitated Arc Routing problem (CARP) asks for minimum-cost routes for equal-capacity vehicles so as to satisfy all demands. Constant-factor polynomial-time approximation algorithms were proposed for CARP with triangle inequality, while CARP was claimed to be NP-hard to approximate within any constant factor in general. Correcting this claim, we show that any factor $\alpha$ approximation for CARP with triangle inequality yields a factor $\alpha$ approximation for the general CARP.

### Cache-Oblivious VAT-Algorithms

Authors: Tomasz Jurkiewicz, Kurt Mehlhorn, Patrick Nicholson
Abstract: The VAT-model (virtual address translation model) extends the EM-model (external memory model) and takes the cost of address translation in virtual memories into account. In this model, the cost of a single memory access may be logarithmic in the largest address used. We show that the VAT-cost of cache-oblivious algorithms is only a constant factor larger than their EM-cost; this requires a somewhat more stringent tall-cache assumption than for the EM-model.

### Polynomial Delay Algorithm for Listing Minimal Edge Dominating sets in Graphs

Authors: Mamadou Moustapha Kanté, Vincent Limouzy, Arnaud Mary, Lhouari Nourine, Takeaki Uno
Abstract: The \textsc{Transversal} problem, i.e., the enumeration of all the minimal transversals of a hypergraph in output-polynomial time (time polynomial in its size and the cumulated size of all its minimal transversals), is a fifty-year-old open problem, and up to now there are few examples of hypergraph classes for which the problem is solved. A minimal dominating set in a graph is a subset of its vertex set that has a nonempty intersection with the closed neighborhood of every vertex. It is proved in [M. M. Kant\'e, V. Limouzy, A. Mary, L. Nourine, On the Enumeration of Minimal Dominating Sets and Related Notions, Submitted 2013] that the enumeration of minimal dominating sets in graphs and the enumeration of minimal transversals in hypergraphs are two equivalent problems. Hoping this equivalence can bring new insights into the \textsc{Transversal} problem, it is natural to look inside graph classes. It was proved independently, and with different techniques, in [Golovach et al. - ICALP 2013] and [Kant\'e et al. - ISAAC 2012] that minimal edge dominating sets in graphs (i.e., minimal dominating sets in line graphs) can be enumerated in incremental output-polynomial time. We provide the first polynomial-delay and polynomial-space algorithm that lists all the minimal edge dominating sets in graphs, answering an open problem of [Golovach et al. - ICALP 2013]. Beyond this result, we hope the techniques used, which mix a modification of the well-known Berge algorithm with a strong use of the structure of line graphs, are of independent interest and can lead to new output-polynomial time algorithms.

### On the hardness of the decoding and the minimum distance problems for rank codes

Authors: Gaborit Philippe, Zemor Gilles
Abstract: In this paper we give a randomized reduction for the Rank Syndrome Decoding problem and the Rank Minimum Distance problem for rank codes. Our results are based on an embedding from linear codes equipped with the Hamming distance into linear codes over an extension field equipped with the rank metric. We prove that if both of these problems for the rank metric are in ZPP = RP$\cap$coRP, then NP = ZPP. We also give complexity results for the respective approximation problems in the rank metric.

### From Proximity to Utility: A Voronoi Partition of Pareto Optima

Authors: Hsien-Chih Chang, Sariel Har-Peled, Benjamin Raichel
Abstract: We present an extension of Voronoi diagrams where not only the distance to the site is taken into account when considering which site the client is going to use, but additional attributes (i.e., prices) are also considered. A cell in this diagram is then the locus of all points (i.e., clients) that consider the same set of sites to be relevant. In particular, the precise site a client might use from this candidate set depends on parameters that might change between usages, and the candidate set lists all of the relevant sites. The resulting diagram is significantly more expressive than Voronoi diagrams, but naturally has the drawback that its complexity, even in the plane, might be quite high.

Nevertheless, we show that if the attributes of the sites are drawn from the same distribution (note that the locations are fixed), then the expected complexity of the candidate diagram is near linear. To establish this result, we derive several new technical results, which are of independent interest.
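The candidate set of a client is exactly the set of Pareto-optimal sites with respect to the pair (distance, price): a site is relevant unless some other site is both at least as close and at least as cheap, with one of the two strictly better. A minimal sketch of this selection rule (site names, positions, and prices are made up for illustration):

```python
from math import dist

def candidate_set(client, sites):
    """Return the names of the Pareto-optimal sites for a client:
    site s survives unless some site t satisfies dist(t) <= dist(s)
    and price(t) <= price(s), with at least one inequality strict."""
    scored = [(dist(client, pos), price, name) for name, pos, price in sites]
    candidates = []
    for d, p, name in scored:
        dominated = any(
            (d2 <= d and p2 <= p) and (d2 < d or p2 < p)
            for d2, p2, _ in scored
        )
        if not dominated:
            candidates.append(name)
    return candidates

# Three sites on a line: "a" is close but expensive, "b" is far but cheap,
# "c" is both farther and costlier than "b" from the origin's viewpoint.
sites = [("a", (0, 0), 5.0), ("b", (3, 0), 1.0), ("c", (4, 0), 4.0)]
print(candidate_set((0, 0), sites))  # ['a', 'b']: "c" is dominated by "b"
```

Moving the client changes the distances and hence the candidate set; the diagram studied in the paper partitions the plane into regions where this set stays constant.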

### Bounds on the sum of L1 influences

Authors: Yuval Filmus, Hamed Hatami
Abstract: Let f: {-1,1}^n -> [-1,1] have degree d as a multilinear polynomial. It is well-known that the total influence of f is at most d. Aaronson and Ambainis asked whether the total L1 influence of f can also be bounded as a function of d. Backurs and Bavarian answered this question in the affirmative, providing a bound of O(d^3) for general functions and O(d^2) for homogeneous functions. We improve on their results by providing a bound of d^2 for general functions and O(d log d) for homogeneous functions.

### Adjacency labeling schemes and induced-universal graphs

Authors: Stephen Alstrup, Haim Kaplan, Mikkel Thorup, Uri Zwick
Abstract: We describe a way of assigning labels to the vertices of any undirected graph on up to $n$ vertices, each composed of $n/2+O(1)$ bits, such that given the labels of two vertices, and no other information regarding the graph, it is possible to decide whether or not the vertices are adjacent in the graph. This is optimal, up to an additive constant, and constitutes the first improvement in almost 50 years of an $n/2+O(\log n)$ bound of Moon. As a consequence, we obtain an induced-universal graph for $n$-vertex graphs containing only $O(2^{n/2})$ vertices, which is optimal up to a multiplicative constant, solving an open problem of Vizing from 1968. We obtain similar tight results for directed graphs, tournaments and bipartite graphs.
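The notion of an adjacency labeling scheme can be illustrated by the trivial scheme that stores, for each vertex, its id together with its full adjacency row, using $n + O(\log n)$ bits per label; the point of the paper is that $n/2 + O(1)$ bits suffice, via a much subtler construction. A minimal sketch of the trivial scheme only:

```python
def label(v, adj):
    """Trivial adjacency label: (vertex id, full adjacency row).
    This costs n + O(log n) bits per vertex, far from the paper's
    n/2 + O(1) bound, but shows the decoding interface."""
    return (v, tuple(adj[v]))

def adjacent(label_u, label_v):
    """Decide adjacency from the two labels alone, with no other
    access to the graph."""
    u, row_u = label_u
    v, _ = label_v
    return bool(row_u[v])

# A 4-cycle 0-1-2-3-0 given by its adjacency matrix.
adj = [
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
]
labels = [label(v, adj) for v in range(4)]
print(adjacent(labels[0], labels[1]), adjacent(labels[0], labels[2]))  # True False
```

An adjacency labeling scheme with $b$-bit labels is equivalent to an induced-universal graph on $2^b$ vertices (one vertex per possible label), which is how the paper's $n/2+O(1)$ bound yields the $O(2^{n/2})$-vertex universal graph.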

### Complexity theoretic limitations on learning DNF's

Authors: Amit Daniely, Shai Shalev-Shwartz
Abstract: Using the recently developed framework of [Daniely et al.], we show that under a natural assumption on the complexity of refuting random K-SAT formulas, learning DNF formulas is hard. Furthermore, the same assumption entails the hardness of virtually all learning problems that were previously shown hard under cryptographic assumptions.

### Near-optimal sample compression for nearest neighbors

Abstract: We present the first sample compression algorithm for nearest neighbors with non-trivial performance guarantees. We complement these guarantees by demonstrating almost matching hardness lower bounds, which show that our bound is nearly optimal. Our result yields new insight into margin-based nearest neighbor classification in metric spaces and allows us to significantly sharpen and simplify existing bounds. Some encouraging empirical results are also presented.

### Supremum-Norm Convergence for Step-Asynchronous Successive Overrelaxation on M-matrices

Authors: Sebastiano Vigna
Abstract: Step-asynchronous successive overrelaxation updates the values contained in a single vector using the usual Gauß-Seidel-like weighted rule, but arbitrarily mixing old and new values, the only constraint being temporal coherence: you cannot use a value before it has been computed. We show that given a nonnegative real matrix $A$, a $\sigma\geq\rho(A)$ and a vector $\boldsymbol w>0$ such that $A\boldsymbol w\leq\sigma\boldsymbol w$, every iteration of step-asynchronous successive overrelaxation for the problem $(sI-A)\boldsymbol x=\boldsymbol b$, with $s>\sigma$, geometrically reduces the $\boldsymbol w$-norm of the current error by a factor that we can compute explicitly. Then, we show that given a $\sigma>\rho(A)$ it is in principle always possible to compute such a $\boldsymbol w$. This property makes it possible to estimate the supremum norm of the absolute error at each iteration without any additional hypothesis on $A$, even when $A$ is so large that computing the product $A\boldsymbol x$ is feasible, but estimating the supremum norm of $(sI-A)^{-1}$ is not.
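The fully synchronous special case of this iteration is an ordinary Gauß-Seidel sweep for $(sI-A)\boldsymbol x=\boldsymbol b$, and the geometric decrease of the weighted supremum-norm error is easy to observe numerically. A minimal sketch, with a toy matrix, vector $\boldsymbol w$, and right-hand side chosen for illustration (they are not from the paper):

```python
def gauss_seidel_step(A, b, s, x):
    """One sweep for (sI - A)x = b: each component update uses the
    newest available values, i.e. the synchronous special case of
    step-asynchronous SOR."""
    n = len(b)
    x = x[:]
    for i in range(n):
        x[i] = (b[i] + sum(A[i][j] * x[j] for j in range(n) if j != i)) / (s - A[i][i])
    return x

def w_norm(e, w):
    """Weighted supremum norm max_i |e_i| / w_i."""
    return max(abs(ei) / wi for ei, wi in zip(e, w))

# A nonnegative, w = (1, 1), A w <= sigma * w with sigma = 0.5, and s = 1 > sigma.
A = [[0.0, 0.5], [0.5, 0.0]]
b = [1.0, 1.0]
w = [1.0, 1.0]
exact = [2.0, 2.0]  # solution of (I - A)x = b for this A and b
x = [0.0, 0.0]
norms = []
for _ in range(5):
    x = gauss_seidel_step(A, b, 1.0, x)
    norms.append(w_norm([xi - ei for xi, ei in zip(x, exact)], w))
print(norms)  # strictly decreasing, by a constant factor per sweep
```

The paper's contribution is that such a contraction factor can be computed explicitly, for arbitrary admissible mixing of old and new values, from $A$, $s$, $\sigma$, and $\boldsymbol w$ alone.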

### On Simplex Pivoting Rules and Complexity Theory

Abstract: We show that there are simplex pivoting rules for which it is PSPACE-complete to tell if a particular basis will appear on the algorithm's path. Such rules cannot be the basis of a strongly polynomial algorithm, unless P = PSPACE. We conjecture that the same can be shown for most known variants of the simplex method. However, we also point out that Dantzig's shadow vertex algorithm has a polynomial path problem. Finally, we discuss randomized pivoting rules in the same context.

### Network-Oblivious Algorithms

Authors: Gianfranco Bilardi, Andrea Pietracaprina, Geppino Pucci, Michele Scquizzato, Francesco Silvestri
Abstract: A framework is proposed for the design and analysis of \emph{network-oblivious algorithms}, namely, algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem's input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that, for a wide class of network-oblivious algorithms, optimality in the latter model implies optimality in the Decomposable BSP model, which is known to effectively describe a wide and significant class of parallel platforms. The proposed framework can be regarded as an attempt to port the notion of obliviousness, well established in the context of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.

### Optimization Problems with Diseconomies of Scale via Decoupling

Authors: Konstantin Makarychev, Maxim Sviridenko
Abstract: We present a new framework for solving optimization problems with a diseconomy of scale. In such problems, our goal is to minimize the cost of resources used to perform a certain task. The cost of resources grows superlinearly, as $x^q$, $q\ge 1$, with the amount $x$ of resources used. We define a novel linear programming relaxation for such problems, and then show that the integrality gap of the relaxation is $A_q$, where $A_q$ is the $q$-th moment of the Poisson random variable with parameter 1. Using our framework, we obtain approximation algorithms for the Minimum Energy Efficient Routing, Minimum Degree Balanced Spanning Tree, Load Balancing on Unrelated Parallel Machines, and Unrelated Parallel Machine Scheduling with Nonlinear Functions of Completion Times problems.

Our analysis relies on the decoupling inequality for nonnegative random variables. The inequality states that $$\big \|\sum_{i=1}^n X_i\big\|_{q} \leq C_q \,\big \|\sum_{i=1}^n Y_i\big\|_{q},$$ where the $X_i$ are independent nonnegative random variables, the $Y_i$ are possibly dependent nonnegative random variables, and each $Y_i$ has the same distribution as $X_i$. The inequality was proved by de la Pe\~na in 1990. However, the optimal constant $C_q$ was not known. We show that the optimal constant is $C_q=A_q^{\frac{1}{q}}$.
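For integer $q$, the constant $A_q$, the $q$-th moment of a Poisson random variable with parameter 1, equals the $q$-th Bell number, by Dobinski's formula $B_q = e^{-1}\sum_{k\geq 0} k^q/k!$. A minimal sketch computing $A_q$ both ways (the truncation depth is an arbitrary choice for illustration):

```python
from math import exp, factorial

def bell(q):
    """q-th Bell number via the Bell triangle recurrence:
    each row starts with the last entry of the previous row,
    and each entry adds the entry above to the entry to the left."""
    row = [1]
    for _ in range(q):
        prev = row
        row = [prev[-1]]
        for v in prev:
            row.append(row[-1] + v)
    return row[0]

def poisson_moment(q, terms=60):
    """A_q = E[X^q] for X ~ Poisson(1), by truncating Dobinski's
    series e^{-1} * sum_{k>=0} k^q / k!."""
    return exp(-1) * sum(k**q / factorial(k) for k in range(terms))

for q in range(1, 7):
    print(q, bell(q), round(poisson_moment(q), 6))  # the two columns agree
```

So, for instance, $A_2 = 2$ and $A_3 = 5$, giving decoupling constants $C_2 = \sqrt{2}$ and $C_3 = 5^{1/3}$.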

### TR14-052 | Circuit complexity, proof complexity, and polynomial identity testing | Joshua Grochow, Toniann Pitassi

from ECCC papers

We introduce a new and very natural algebraic proof system, which has tight connections to (algebraic) circuit complexity. In particular, we show that any super-polynomial lower bound on any Boolean tautology in our proof system implies that the permanent does not have polynomial-size algebraic circuits ($VNP \neq VP$). As a corollary to the proof, we also show that super-polynomial lower bounds on the number of lines in Polynomial Calculus proofs (as opposed to the usual measure of number of monomials) imply the Permanent versus Determinant Conjecture. Note that, prior to our work, there was no proof system for which lower bounds on an arbitrary tautology implied *any* computational lower bound. Our proof system helps clarify the relationships between previous algebraic proof systems, and begins to shed light on why proof complexity lower bounds for various proof systems have been so much harder than lower bounds on the corresponding circuit classes. In doing so, we highlight the importance of polynomial identity testing (PIT) for understanding proof complexity. More specifically, we introduce certain propositional axioms satisfied by any Boolean circuit computing PIT. (The existence of efficient proofs for our PIT axioms appears to be somewhere in between the major conjecture that PIT is in P and the known result that PIT is in P/poly.) We use these PIT axioms to shed light on $AC^0[p]$-Frege lower bounds, which have been open for nearly 30 years, with no satisfactory explanation as to their apparent difficulty. We show that either:

a. Proving super-polynomial lower bounds on $AC^0[p]$-Frege implies $VNP_{\mathbb{F}_p}$ does not have polynomial-size circuits of depth $d$ - a notoriously open question for any $d \geq 4$ - thus explaining the difficulty of lower bounds on $AC^0[p]$-Frege, or

b. $AC^0[p]$-Frege cannot efficiently prove the depth-$d$ PIT axioms, and hence we have a lower bound on $AC^0[p]$-Frege.
We also prove many variants on this statement for other proof systems and other computational lower bounds. Finally, using the algebraic structure of our proof system, we propose a novel way to extend techniques from algebraic circuit complexity to prove lower bounds in proof complexity. Although we have not yet succeeded in proving such lower bounds, this proposal should be contrasted with the difficulty of extending $AC^0[p]$ circuit lower bounds to $AC^0[p]$-Frege lower bounds.

### First EATCS Young Researcher School on Automata, Logic and Games

from Luca Aceto

In the year 2013, the European Association for Theoretical Computer Science (EATCS) established a series of Young Researcher Schools on TCS topics. The first such school is devoted to Automata, Logic and Games, and it will be held in Telc, Czech Republic, from July 27 to August 1, 2014.
The programme consists of five Basic Tutorials (four hours each) devoted to fundamental subjects, and eight Advanced Lectures (two hours each) focused on recent results and specific topics.

Basic Tutorials
---------------

- Probabilistic Model-Checking
Christel Baier (Dresden)

- Logic and Databases
Phokion G. Kolaitis (Santa Cruz)

- Games and Synthesis
Nir Piterman (Leicester)

- Logic and Automata
Wolfgang Thomas (Aachen)

- Timed Automata
Wang Yi (Uppsala)

Advanced Lectures
-----------------

- The Unbounding Quantifier
Mikolaj Bojanczyk (Warsaw)

- Robustness in Timed Systems
Patricia Bouyer-Decitre (Cachan)

- A Logic-Based Approach to Cloud Computing
Jan Van den Bussche (Hasselt)

- Regular Automata and Monadic Theory
Didier Caucal (Paris)

- Stochastic model checking - A continuous time perspective
Holger Hermanns (Saarbruecken)

- Synthesis of Recursive Programs
Martin Lange (Kassel)

- Infinite-State Probabilistic Systems
Richard Mayr (Edinburgh)

- Prophetic Automata
Thomas Wilke (Kiel)

See http://eatcs-school.fi.muni.cz/ for further details.

by Luca Aceto (noreply@blogger.com) at April 14, 2014 02:12 PM UTC