Theory of Computing Blog Aggregator

Authors: Christian Engels, B. V. Raghavendra Rao
Download: PDF
Abstract: Non-commutative arithmetic computations differ significantly from their commutative counterparts; in fact, the determinant is known to have the same complexity as the permanent [Chien et al.\ STOC 2011, Bl\"aser ICALP 2013], and several lower bounds are known [Nisan STOC 1991].

Looking to obtain tight characterizations for hard special cases of the permanent, we observe the following: 1) We exhibit a parameter $t$ for graphs of bounded component size so that there is an $n^{O(t)}$ algorithm for computing the Cayley permanent on such graphs. Also, we prove a $2^{\Omega (n)}$ lower bound against ABPs for computing the Cayley permanent on graphs with component size bounded by two. 2) We show that the non-commutative permanent over matrices of rank one is at least as hard as the commutative permanent.

Additionally, by exploiting the structural weaknesses of non-commutative arithmetic circuits, we obtain efficient algorithms for problems such as DegSLP and CoeffSLP. This is in sharp contrast to the commutative case, where the best known upper bound for DegSLP is ${\sf co-RP}^{\sf PP}$ and CoeffSLP is known to be $\#{\sf P}$-complete.

at September 03, 2014 12:40 AM UTC

Authors: Shiyan Zhong
Download: PDF
Abstract: We propose an efficient method for active particle selection, working with the Hermite Individual Time Steps (HITS) scheme in direct N-body simulation codes. For a simulation with $N$ particles, this method reduces the computational complexity of active particle selection from $O(N\cdot N_{step})$ to $O(\bar{N}_{act}\cdot N_{step})$, where $\bar{N}_{act}$ is the average number of active particles per time step, which is much smaller than $N$, and $N_{step}$ is the total number of time steps integrated during the simulation. This can save much of the time spent on the active particle selection part, especially in the case of low $\bar{N}_{act}$.

at September 03, 2014 12:41 AM UTC

Authors: Chidambaram Annamalai, Christos Kalaitzis, Ola Svensson
Download: PDF
Abstract: We study the basic allocation problem of assigning resources to players so as to maximize fairness. This is one of the few natural problems that enjoys the intriguing status of having a better estimation algorithm than approximation algorithm. Indeed, a certain configuration-LP can be used to estimate the value of the optimal allocation to within a factor of $4 + {\epsilon}$. In contrast, however, the best known approximation algorithm for the problem has an unspecified large constant guarantee.

In this paper we significantly narrow this gap by giving a $13$-approximation algorithm for the problem. Our approach develops a local search technique introduced by Haxell [Hax95] for hypergraph matchings, and later used in this context by Asadpour, Feige, and Saberi [AFS12]. For our local search procedure to terminate in polynomial time, we introduce several new ideas such as lazy updates and greedy players. Besides the improved approximation guarantee, the highlight of our approach is that it is purely combinatorial and uses the configuration-LP only in the analysis.

at September 03, 2014 12:40 AM UTC

Authors: Michael T. Goodrich, Joseph A. Simons
Download: PDF
Abstract: Motivated by privacy preservation for outsourced data, data-oblivious external memory is a computational framework where a client performs computations on data stored at a semi-trusted server in a way that does not reveal her data to the server. This approach facilitates collaboration and reliability over traditional frameworks, and it provides privacy protection, even though the server has full access to the data and he can monitor how it is accessed by the client. The challenge is that even if data is encrypted, the server can learn information based on the client data access pattern; hence, access patterns must also be obfuscated. We investigate privacy-preserving algorithms for outsourced external memory that are based on the use of data-oblivious algorithms, that is, algorithms where each possible sequence of data accesses is independent of the data values. We give new efficient data-oblivious algorithms in the outsourced external memory model for a number of fundamental graph problems. Our results include new data-oblivious external-memory methods for constructing minimum spanning trees, performing various traversals on rooted trees, answering least common ancestor queries on trees, computing biconnected components, and forming open ear decompositions. None of our algorithms make use of constant-time random oracles.

at September 03, 2014 12:40 AM UTC

In the last season of The Big Bang Theory, Sheldon wants to change fields from string theory to something else (I don't recall if he settled on what it would be, though Standard Model physics, loop quantum gravity, and calculation of nuclear matrix elements were mentioned negatively, and geology is, according to Sheldon, not a real science).

Sheldon faced opposition from his department. And since Physics is... hard... changing fields seems hard.
How hard is it to change fields, both intellectually and in terms of your dept?

  1. If you are hired as a string theorist and you are one of the only ones in your dept, your dept may quite reasonably ask you to still teach string theory. I think this is fine.
  2. Math and Physics are far older than CS so to change fields requires learning more background knowledge. In CS it was easier about 20 years ago, but CS has grown so much that I suspect it would be harder now.
  3. There may be a time when you have fewer papers and grants as you are making the transition. Hence it may be unwise to do this before you get tenure.
  4. If your dept hired you to do string theory and you change to calculation of nuclear matrix elements, should they mind that? I would think that as long as it's still good work they wouldn't. And they should give you enough time to get grants and papers in it. If you change to a field they don't care about, or to a field outside the department's areas, they might not like that. If Sheldon went into computational complexity, would his dept (physics) be justified in not liking that? If he solved P vs NP then all would be forgiven.
  5. Perhaps the further away you change from your field the better your work has to be before your dept doesn't mind. This could be modelled by a formula. Maybe Sheldon will change to computational dept dynamics and derive it for us.
  6. The obvious thing to say is Depts should allow their professors to wander free as a bird and not erect arbitrary walls since the best work comes from people not being constrained. I would like to think this is true but I wonder--- how many people have changed fields and ended up doing great work? good work? totally failed?

by GASARCH ( at September 02, 2014 05:40 PM UTC

See a number, make a set


Henning Bruhn and Oliver Schaudt are mathematicians or computer scientists, or both. They are currently working in Germany, but wrote their survey on the Frankl Conjecture (FC) while working together in Paris. Schaudt is also known as an inventor of successful mathematical games.

Today Ken and I wish to talk about the Frankl conjecture and a principle of mathematics.

Before we start let’s be up-front: The FC is a mathematical disease. Ken and I are trying to avoid catching a full blown case—well Ken also has a lot on his plate in the chess world. We have some small insights to share—perhaps doing this will help someone solve the problem, or at least advance the understanding of it. In any case talking about it may keep us from being completely infected.

We covered the conjecture last spring. As Bruhn and Schaudt’s survey relates, FC probably pre-dates Péter Frankl’s 1979 formulation of it while he was in-transit from Paris to Montreal, but all agree that Frankl was the last to discover it. It reads:

If a family {\cal F} of subsets of a finite set {U} is closed under union, then there is an element of {U} that belongs to at least half of the sets in the family.

One may suppose that {\emptyset} always belongs to the union-closed family. The “half” is tight, as one sees by considering the power set of {U} as the family.

The Survey

The survey covers several equivalent forms of the FC, but not the simple re-statement involving the following “Frankl Property” (FP) for Boolean functions {f} on {\{0,1\}^n}:

\displaystyle  f(x \vee y) \ge f(x)f(y).

The FC property then reads: there is an {i} such that at least half of the satisfying assignments {x} have {x_i = 1}. The conjecture itself then states:

If {f} has FP, then it satisfies FC.
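For small {n}, both FP and FC can be checked by brute force over truth tables. Here is a minimal Python sketch (the helper names has_FP and satisfies_FC are ours, not from the survey); it confirms the conjecture exhaustively on 3 variables:

```python
from itertools import product

def has_FP(f, n):
    """Frankl Property: f(x OR y) >= f(x) * f(y) for all pairs x, y."""
    pts = list(product((0, 1), repeat=n))
    return all(f(tuple(a | b for a, b in zip(x, y))) >= f(x) * f(y)
               for x in pts for y in pts)

def satisfies_FC(f, n):
    """FC: some coordinate is 1 in at least half the satisfying assignments
    (families whose only member is the empty set are excluded, as usual)."""
    sat = [x for x in product((0, 1), repeat=n) if f(x)]
    if all(x == (0,) * n for x in sat):
        return True  # trivial family: FC does not apply
    return any(2 * sum(x[i] for x in sat) >= len(sat) for i in range(n))

# Exhaustive check of the conjecture over all 256 functions on 3 variables.
n = 3
pts = list(product((0, 1), repeat=n))
for values in product((0, 1), repeat=2 ** n):
    f = dict(zip(pts, values)).__getitem__
    if has_FP(f, n):
        assert satisfies_FC(f, n)
```

Of course nothing like this scales: the loop visits {2^{2^n}} functions, which is exactly why one wants structural properties {X} as discussed below.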

The survey actually emphasizes the dual form, which in our terms involves the function {f'(x) = f(x')}, where {x'} is the bit-flip of {x}. FP becomes {f'(u \wedge v) \geq f'(u)f'(v)}, which means that the satisfying assignments of the flipped function correspond to an intersection-closed family of sets. The FC then states that there is an {i} such that at most half the satisfying assignments {u} of {f'} have {u_i = 1}. Just to be different, we stay with the union-closed form, though in fact Frankl stated the other. Then the survey’s equivalent statements include:

  • Every finite lattice {S} has an element that is not a greatest lower bound of two other elements, but sits above at most half the elements (including itself).

  • Every non-trivial graph has a “popular edge,” meaning an edge both of whose vertices belong to at least half of the minimal vertex covers.

We say more about lattices below: the nifty fact is that the Frankl Property makes the satisfying assignments (plus {0^n}) into a lattice. The graph statement actually comes from a paper by Bruhn and Schaudt with Pierre Charbit and Jan Arne Telle, the last of whom hosted Ken for lectures in Bergen, Norway, two years ago. Since every minimal vertex cover must include at least one endpoint of each edge, for each edge at least one of its two endpoints must belong to half or more of the minimal covers. Extending this reasoning shows that every odd cycle has a popular edge, so the statement is problematic only for bipartite graphs.

The survey authors say the lattice statement strips FC “down to its bare essential parts: the elements have vanished and all that counts is the inclusion relationship between the sets.” Indeed the sets seem to vanish too. We have the opposite idea: how can we make the structure surrounding the conjecture richer?

Boolean Function Approach

We are complexity theorists, so our hammer is Boolean functions: forget lattices and graphs. We love Boolean functions. We try to see everything through properties of Boolean functions; this is both the strength of our area and its weakness. Okay, ours may be just a trivial restatement of FC: the function {f} is just the indicator function of the family of sets. Yet we believe that this simple insight may help us make progress on FC. There are many equivalent statements of FC as questions about lattices, about graphs, and about other mathematical objects. What we find exciting about this statement is that it suggests two things:

  • A potential powerful set of tools from Boolean complexity theory that may shed light on FC.

  • A large class of possible weaker conjectures that we may be able to solve.

The latter echoes a motivation for the lattice and graph forms expressed in the survey—they yield interesting special cases. We can try to prove theorems like this:

Theorem: For any {f} that has property {X} in addition to FP, the FC holds.

We can prove this when {X} is the property “symmetric” or “monotone.” We wonder about other properties such as having low degree as polynomials, having read-once decision trees, and so on.

Influence Of Boolean Functions

Let {f: \{0,1\}^{n} \rightarrow \{0,1\}} be a Boolean function. The notion of the influence of the {i}th input of {f} is defined by

\displaystyle  I_{i}(f)=Pr_{x}[ f(x) \neq f(x^{i}) ],

where {x^{i}} is equal to

\displaystyle  x_{1 },\dots, x_{i-1}, \neg x_{i}, x_{i+1}, \dots, x_{n}.

Also, the probability is over {x} selected uniformly. There are many applications of this basic notion, and it is used in complexity theory in many places.
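A direct way to see the definition in action is to compute {I_{i}(f)} by enumeration. The sketch below (influence is our own helper, not a standard library routine) checks the textbook fact that every coordinate of 3-bit majority has influence 1/2:

```python
from itertools import product

def influence(f, n, i):
    """I_i(f) = Pr_x[ f(x) != f(x^i) ] with x uniform over {0,1}^n,
    where x^i is x with its i-th bit flipped."""
    flips = 0
    for x in product((0, 1), repeat=n):
        xi = x[:i] + (1 - x[i],) + x[i + 1:]
        if f(x) != f(xi):
            flips += 1
    return flips / 2 ** n

# Majority of 3 bits: bit i is pivotal exactly when the other two disagree,
# which happens with probability 1/2, so every coordinate has influence 1/2.
maj3 = lambda x: int(sum(x) >= 2)
assert [influence(maj3, 3, i) for i in range(3)] == [0.5, 0.5, 0.5]
```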

The Principle

If you see a number or a count of something, it often helps to replace it by the set of “somethings.” This yields additional information, and perhaps additional insight. Among many examples of this phenomenon in mathematics, we mention the notion of Betti numbers in topology. Named for Enrico Betti, they measure the complexity of spaces: the higher the number the more complex the connectivity of the space, roughly speaking. For example, Andy Yao used them to bound the depth of decision trees.

While Betti numbers remain useful, a major advance in topology was made when they were replaced by homology groups. These groups yield the Betti numbers but contain additional information. This ability to recover the old idea, yet yield a richer context, is what makes homology groups so powerful. See this for an interesting discussion of how Betti numbers evolved into homology groups from the 1920’s on, with Emmy Noether in a leading role.

Influence As A Set

Following our principle, we plan to replace the number {I_{i}(f)} by a set. Let us use {J_{i}(f)} to denote the set of those {x} such that

\displaystyle  f(x) \neq f(x^{i}).

Obviously the following is true:

\displaystyle  \frac{|J_{i}(f)|}{2^{n}} = I_{i}(f).

There must be a better notation than {J_{i}}—we are open to suggestions. Any?

The key is the following lemma:

Lemma: Let {f(x_{1},\dots,x_{n})} be a Boolean function and let an index {i} be fixed. Then the Boolean inputs can be partitioned into six sets:

\displaystyle  A_{0},A_{1},B_{0},B_{1},C_{0},C_{1}.

These sets have the following properties:

  1. The variable {x_{i}} is equal to {0} on {A_{0} \cup B_{0} \cup C_{0}};

  2. The variable {x_{i}} is equal to {1} on {A_{1} \cup B_{1} \cup C_{1}};

  3. The union {A_{0} \cup A_{1}} is equal to {J_{i}};

  4. The function is always {0} on {B_{0} \cup B_{1}};

  5. The function is always {1} on {C_{0} \cup C_{1}};

  6. Finally {|A_{0}|=|A_{1}|} and {|B_{0}|=|B_{1}|} and {|C_{0}|=|C_{1}|}.


The key observation from this lemma is that the number of {x} such that {f(x)=1} and {x_{i}=1} is exactly

\displaystyle  |A_{1} \cap S| + |C_{1}|,

where {S} is the set of {x} with {f(x)=1}. This implies that {f} satisfies FC if and only if, for some choice of {i}, at least half of the elements of {A_{1}} satisfy {f(x) = 1}. When {f} is monotone the next theorem makes this clear, but the hope is that FP alone can be made to imply that at least half of these {x} have {f(x)=1}.
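The partition in the lemma is easy to construct explicitly: pair each input having {x_i = 0} with its flip at coordinate {i}. A small sketch under our own naming (lemma_partition is a hypothetical helper) verifies properties 4 through 6 on an example:

```python
from itertools import product

def lemma_partition(f, n, i):
    """Split {0,1}^n into the six sets A0, A1, B0, B1, C0, C1 of the lemma,
    by pairing each x having x_i = 0 with the flip y having y_i = 1."""
    A0, A1, B0, B1, C0, C1 = (set() for _ in range(6))
    for x in product((0, 1), repeat=n):
        if x[i] == 1:
            continue
        y = x[:i] + (1,) + x[i + 1:]
        if f(x) != f(y):        # the pair lies in J_i(f)
            A0.add(x); A1.add(y)
        elif f(x) == 0:         # f is 0 on both halves of the pair
            B0.add(x); B1.add(y)
        else:                   # f is 1 on both halves of the pair
            C0.add(x); C1.add(y)
    return A0, A1, B0, B1, C0, C1

# Check properties 4-6 on a small monotone example (example our choice).
f = lambda x: int(x[0] or (x[1] and x[2]))
A0, A1, B0, B1, C0, C1 = lemma_partition(f, 3, 0)
assert len(A0) == len(A1) and len(B0) == len(B1) and len(C0) == len(C1)
assert all(f(x) == 0 for x in B0 | B1) and all(f(x) == 1 for x in C0 | C1)
```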

Results for Monotone Boolean Functions

Every monotone function {f} immediately has FP, since {f(x) = 1} and {f(y) = 1} implies {f(x \vee y) = 1.} This insight says to us that the FP is a kind of weaker notion than monotone. But perhaps not too much weaker.

Theorem 3 Every monotone Boolean function satisfies FC.

Proof: Since {f} is monotone, every {x} in {A_{1}} must have {f(x)=1}. So coordinate {i} witnesses FC for {f}: at least half of the places where {f} is {1} are places where {x_{i}=1}. \Box

This also yields the following result.

Theorem 4 Every symmetric Frankl function satisfies FC.

Proof: Let {f(x)} be a non-trivial symmetric Boolean function. Let {L_{k}} be the set of all {x} that have {k} ones and {n-k} zeros: these are the level sets. Since {f} is symmetric, it is easy to see that on each level set {f} is either all {1} or all {0}. Let {k} be the smallest index such that {f} is {1} on the level {L_{k}}. Then we claim that {f} is also {1} on all level sets {L_{m}} with {m \ge k}. This claim shows that {f} is monotone and completes the proof.

By induction it suffices to assume that {m=k+1}. Let {z} be in {L_{m}}. It is clear that there are {x} and {y}, both in {L_{k}}, so that {x \vee y = z}. But then by the Frankl property, {f(z)=1}. Thus the claim is proved. \Box

Lattice Connections

It is interesting to try to get these results from the lattice version of FC. Given a Frankl function {f}, let

\displaystyle  S = \{0^n\} \cup \{x: f(x) = 1\}.

Given any {x,y \in S}, the least upper bound of {x} and {y} is {x \vee y}, and by FP this belongs to {S}, so {S} obeys the unique-join property of lattices. If the set {L(x,y)} of lower bounds of {x} and {y} had no maximum, then there would be {u,v \in L(x,y)} such that {u \vee v \notin L(x,y)}. However, {u \vee v \in S} by FP again, and since {u \vee v} is still below both {x} and {y}, it belongs to {L(x,y)}, a contradiction. Thus {S} has unique meets, though {x \wedge_S y} need not equal the bitwise-AND {x \wedge y}; indeed the meet can be {0^n}. So FP always makes {S} into a lattice.

For an example, consider {f = x_1 \vee (x_2 \wedge x_3)}. This {f} is monotone, and {S} excludes only {010} and {001}.


Now consider {x = 100}, {a = 011}, and {b = 110}. Then {a \wedge_S b = 000} since {010} is not in {S}, so {x \vee (a \wedge_S b) = x}. On the other hand, {x \vee a = 111} so {(x \vee a) \wedge_S b = b}. Thus {S} violates the equation

\displaystyle  (\forall x,b): x \leq b \implies (\forall a)[x \vee (a \wedge_S b) = (x \vee a) \wedge_S b],

which defines a modular lattice. Thus we have a monotone Boolean function whose lattice is not modular, hence not distributive either. However, for monotone {f}, {S} “almost” satisfies the following condition, which is dual to a condition called “lower semimodular” in the survey:

For all incomparable {a,b \in S}, if there is no element of {S} properly between {c = a \wedge_S b} and {b}, then there is no element properly between {a} and {a \vee b}.

Given {f} monotone, it follows that unless {a \wedge_S b = 0^n}, we have {f(a \wedge_S b) = 1} so {f(a \wedge b) = 1}. The if-clause then implies there must be exactly one {i} such that {b_i = 1} and {a_i = 0}. This implies that {a \vee b} flips only place {i} compared to {a}, so the then-clause also holds. Thus the only exception is when {0^n} acts as the meet, and indeed the above lattice is a counterexample with {a = 100} and {b = 011}.
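The meet {a \wedge_S b} can be computed as the join of all common lower bounds in {S}, which stays in {S} by FP. A short sketch (helper names leq and meet_S are ours) reproduces the failure of the modular law in the worked example above:

```python
from itertools import product

def leq(u, v):
    """Bitwise (coordinatewise) order on {0,1}^n."""
    return all(a <= b for a, b in zip(u, v))

def meet_S(S, a, b):
    """Meet of a and b inside the lattice S: the join of all common lower
    bounds, which lies in S by the Frankl Property (assuming 0^n in S)."""
    z = (0,) * len(a)
    for u in S:
        if leq(u, a) and leq(u, b):
            z = tuple(p | q for p, q in zip(z, u))
    return z

join = lambda u, v: tuple(p | q for p, q in zip(u, v))

# The example from the text: f = x1 OR (x2 AND x3).
f = lambda x: int(x[0] or (x[1] and x[2]))
S = {x for x in product((0, 1), repeat=3) if f(x)} | {(0, 0, 0)}

x, a, b = (1, 0, 0), (0, 1, 1), (1, 1, 0)
lhs = join(x, meet_S(S, a, b))      # x OR (a meet_S b) = x, since a meet_S b = 000
rhs = meet_S(S, join(x, a), b)      # (x OR a) meet_S b = b, since x OR a = 111
assert leq(x, b) and lhs != rhs     # the modular law fails, as in the text
```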

We suspect that the proof method of a paper by Tetsuya Abe and Bumpei Nagano cited by the survey might overcome this exception, and hence prove the lattice version of FC in the monotone case directly in such manner, but we are not sure. All this should say something interesting about lattices, since monotonicity is such a natural Boolean property.

Open Problems

We noted that monotone Boolean functions have FP. Turned around, this says that Frankl functions generalize monotone functions. Can we hence prove complexity lower bounds on these functions?

One advantage of thinking of this as a Boolean problem is that new tools arise that might be available. Can we use the kind of Fourier methods that have worked for results on Boolean functions already? A prime example of the Boolean view being so powerful is the proof of the famous theorem of Kenneth Arrow on voting schemes.

by Pip at September 02, 2014 04:08 PM UTC

Let {G=(V,E)} be a {d}-regular graph, and let

\displaystyle  d= \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n

be the eigenvalues of the adjacency matrix {A} of {G}, counted with multiplicities and sorted in descending order.

How good can the spectral expansion of {G} be?

1. Simple Bounds

The simplest bound comes from a trace method. We have

\displaystyle  trace(A^2) = \sum_i \lambda_i^2

by using one definition of the trace and

\displaystyle  trace(A^2) = \sum_{v} (A^2)_{v,v} \geq dn

using the other definition and observing that {(A^2)_{v,v}} counts the paths that go from {v} to {v} in two steps, of which there are at least {d}: follow an edge to a neighbor of {v}, then follow the same edge back. (There could be more if {G} has multiple edges or self-loops.)

So we have

\displaystyle  dn \leq d^2 + \sum_{i=2,\ldots,n} \lambda_i ^2

and so

\displaystyle  \max_{i=2,\ldots,n} |\lambda_i | \geq \sqrt d \cdot \sqrt{\frac {n-d}{n-1}}

The condition {d \leq n(1-\epsilon)} is necessary to get lower bounds of {\Omega(\sqrt d)}; in the clique, for example, we have {d=n-1} and {\lambda_2 = \lambda_n = -1}.
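One can sanity-check this bound numerically. The sketch below (spectral_gap_check is our own helper, and the hypercube is our choice of example) does so for the 3-dimensional hypercube:

```python
import numpy as np

def spectral_gap_check(A):
    """For a d-regular adjacency matrix A, return (max_{i>=2} |lambda_i|,
    the trace lower bound sqrt(d) * sqrt((n-d)/(n-1)))."""
    n = A.shape[0]
    d = int(round(A[0].sum()))
    lam = np.sort(np.linalg.eigvalsh(A))[::-1]     # descending order
    return max(abs(lam[1]), abs(lam[-1])), np.sqrt(d * (n - d) / (n - 1))

# The 3-cube Q_3: 3-regular on 8 vertices, eigenvalues 3, 1, 1, 1, -1, -1, -1, -3.
n = 8
A = np.zeros((n, n))
for u in range(n):
    for bit in range(3):
        A[u, u ^ (1 << bit)] = 1                   # flip one coordinate
sigma, bound = spectral_gap_check(A)
assert sigma >= bound                              # 3 >= sqrt(15/7), about 1.46
```

Note that for {Q_3} the maximum is achieved by {|\lambda_n| = 3}, since the hypercube is bipartite.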

A trace argument does not give us a lower bound on {\lambda_2}, and in fact it is possible to have {\lambda_2=0} and {d= n/2}, for example in the complete bipartite graph.

If the diameter of {G} is at least 4, it is easy to see that {\lambda_2 \geq \sqrt d}. Let {a,b} be two vertices at distance 4. Define a vector {x} as follows: {x_a = 1}, {x_v = 1/\sqrt d} if {v} is a neighbor of {a}, {x_b=-1} and {x_v = - 1/\sqrt d} if {v} is a neighbor of {b}. Note that there cannot be any edge between a neighbor of {a} and a neighbor of {b}. Then we see that {||x||^2 = 4}, that {x^T A x \geq 4\sqrt d} (because there are {2d} edges, each counted twice, that give a contribution of {1/\sqrt d} to {\sum_{u,v} A_{uv} x_u x_v}) and that {x} is orthogonal to {(1,\ldots,1)}.
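This construction is easy to verify numerically. The sketch below (the cycle is our own choice of example) uses {C_9}, which is 2-regular with diameter 4, and checks that the test vector achieves Rayleigh quotient exactly {\sqrt d = \sqrt 2}:

```python
import numpy as np

# Cycle C_9: 2-regular, diameter 4; vertices 0 and 4 are at distance 4,
# so the construction from the text applies with d = 2, a = 0, b = 4.
n, d = 9, 2
A = np.zeros((n, n))
for u in range(n):
    A[u, (u + 1) % n] = A[(u + 1) % n, u] = 1

x = np.zeros(n)
x[0], x[4] = 1.0, -1.0              # x_a = 1, x_b = -1
x[1] = x[8] = 1 / np.sqrt(d)        # neighbors of a = 0
x[3] = x[5] = -1 / np.sqrt(d)       # neighbors of b = 4

assert abs(x.sum()) < 1e-12         # x is orthogonal to (1, ..., 1)
rayleigh = (x @ A @ x) / (x @ x)
lam2 = np.sort(np.linalg.eigvalsh(A))[-2]
assert np.sqrt(d) - 1e-9 <= rayleigh <= lam2 + 1e-9
```

Here {\lambda_2 = 2\cos(2\pi/9) \approx 1.53}, comfortably above {\sqrt 2 \approx 1.41}.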

2. Nilli’s Proof of the Alon-Boppana Theorem

Nilli’s proof of the Alon-Boppana theorem gives

\displaystyle  \lambda_2 \geq 2 \sqrt{ d-1 } - O \left( \frac {\sqrt{d-1}}{diam(G)-4} \right)

where {diam(G) \geq \frac {\log |V|}{\log (d-1)}} is the diameter of {G}. This means that if one has a family of (constant) degree-{d} graphs, and every graph in the family satisfies {\lambda_2 \leq \lambda}, then one must have {\lambda \geq 2 \sqrt{d -1}}. This is why families of Ramanujan graphs, in which {\lambda_2 \leq 2 \sqrt{d-1}}, are special, and so hard to construct, or even to prove existence of.

Friedman proves a stronger bound, in which the error term goes down with the square of the diameter. Friedman’s proof is the one presented in the Hoory-Linial-Wigderson survey. I like Nilli’s proof, even if it is a bit messier than Friedman’s, because it starts off with something that clearly is going to work, but the first two or three ways you try to establish the bound don’t work (believe me, I tried, because I didn’t see why some steps in the proof had to be that way), but eventually you find the right way to break up the estimate and it works.

So here is Nilli’s proof.

We are going to use essentially the same vector that we used to analyze the spectrum of the infinite tree, although the analysis will be a bit different.

Let {a,b} be two vertices in {G} at distance {diam(G)}, and call {k = \lfloor (diam(G) -4)/2 \rfloor}. Let {a'} be a neighbor of {a}. We say that the distance of a vertex {v} from {\{ a,a'\}} is the smaller of the shortest-path distances from {v} to {a} and from {v} to {a'}.

We construct a vector {x\in {\mathbb R}^V} as follows:

  • {x_v = \displaystyle \frac 1 { ( \sqrt{d-1})^{t }}} if {v \mbox{ is at distance } t \mbox{ from } \{ a,a'\}, \ t \leq k}
  • {x_v = 0} if {v} is at distance {> k} from {\{ a,a'\}}

Note that this is more or less the same vector we constructed in the case of the tree. The reason for talking about the distance from two vertices instead of one, is that we want to say that every vertex is adjacent to at most {d-1} vertices whose value of {x_v} is strictly smaller; in the case of a tree, the root is exceptional because it has {d} neighbors whose value of {x_v} is strictly smaller. There will be a step in the proof in which this choice really makes a difference.

We claim

\displaystyle   x^TA x \geq 2 \sqrt{d-1} \cdot ||x ||^2 \cdot \left( 1 - \frac 1 {k-1} \right) \ \ \ \ \ (1)

It turns out that, in order to prove (1) it is easier to reason about the Laplacian matrix than about the adjacency matrix. Define {L:= dI - A} to be the (non-normalized) Laplacian of {G}. We have the following nice expression for the quadratic form of {L}

\displaystyle  x^T L x = dx^Tx - x^TAx = \sum_{ \{ u,v \} \in E} (x_u - x_v)^2

For every vertex {u} let us call {S_u} (for smaller) the set of neighbors {v} of {u} such that {x_v < x_u}. We always have {|S_u| \leq d-1}. Let us call {L_t} the set of vertices at distance exactly {t} from {\{a,a'\}}.

Now we do the calculations:

\displaystyle  \sum_{ \{ u,v \} \in E} (x_u - x_v)^2 = \sum_{t\leq k} \sum_{u\in L_t} \sum _{ v\in S_u} (x_u - x_v)^2

\displaystyle  = \sum_{t\leq k-1} \sum_{u\in L_t} \sum_{ v\in S_u} (x_u - x_v)^2 + \sum_{u\in L_k} \sum_{v\in S_u} x_u^2

\displaystyle  = \sum_{t\leq k-1} \sum_{u\in L_t} |S_u| \cdot \left( x_u - \frac {x_u}{\sqrt {d-1}} \right)^2 + \sum_{u\in L_k} |S_u| x_u^2

\displaystyle  \leq \sum_{t\leq k-1} \sum_{u\in L_t} (d-1) \cdot \left( x_u - \frac {x_u}{\sqrt {d-1}} \right)^2 + \sum_{u\in L_k} (d-1) x_u^2

\displaystyle  = \sum_{t\leq k-1} \sum_{u\in L_t} x_u^2 \cdot \left( \sqrt{d-1} -1 \right)^2 + \sum_{u\in L_k} (d-1) x_u^2

\displaystyle  = \sum_{t\leq k-1} \sum_{u\in L_t} x_u^2 \cdot (d - 2 \sqrt {d-1} ) + \sum_{u\in L_k} (d-1) x_u^2

\displaystyle  = \sum_u d x_u^2 - \sum_u 2\sqrt{d-1} x_u^2 + \sum_{u\in L_k} (2 \sqrt {d-1} - 1) x_u^2


Next we bound the contribution of the outermost level {L_k}:

\displaystyle  \sum_{u\in L_k} x_u^2 = |L_k | \frac 1 {(d-1)^k} \leq \frac 1{k+1} \sum_{t=0}^k |L_t| \frac{1} {(d-1)^t} = \frac 1 {k+1} \sum_v x_v^2

because {|L_t| (d-1)^{-t}} decreases (or at least does not increase) for increasing {t}.

Putting everything together we have

\displaystyle  x^T L x \leq d ||x||^2 - 2 \sqrt{d-1} ||x||^2 + \frac {2 \sqrt{d-1} -1}{k+1}||x||^2

and so

\displaystyle  x^T Ax \geq \left( 2 \sqrt{d-1} - \frac {2 \sqrt{d-1} -1}{k+1} \right) ||x||^2

Now we are finally almost done: define a vector {y} with the same construction we did for {x}, but using the set {\{b,b'\}} as the reference for the distance, where {b'} is a neighbor of {b}. We then have

\displaystyle  y^T Ay \geq \left( 2 \sqrt{d-1} - \frac {2 \sqrt{d-1} -1}{k+1} \right) ||y||^2

It is clearly not possible for a vertex at distance {\leq k} from {\{a,a'\}} to also be at distance {\leq k} from {\{b,b'\}}, otherwise we would have a path of length {\leq 2k+2} from {a} to {b}, so the vectors {x} and {y} are non-zero on disjoint sets of coordinates, and hence are orthogonal.

But we can say more: we also have {x^T A y = 0}, because there cannot be an edge {\{ u,v\}\in E} such that both {x_u >0} and {y_v >0}, because otherwise we would have a path of length {\leq 2k + 3} from {a} to {b}.

This means that if we take any linear combination {z = \alpha x + \beta y} of {x} and {y} we have

\displaystyle  z^T A z = \alpha^2 x^T Ax + \beta^2 y^TAy

\displaystyle  \geq \left( 2 \sqrt{d-1} - \frac {2 \sqrt{d-1} -1}{k+1} \right) \cdot ( \alpha^2 ||x||^2 + \beta^2 ||y||^2 )

\displaystyle  = \left( 2 \sqrt{d-1} - \frac {2 \sqrt{d-1} -1}{k+1} \right) ||z||^2

so we have found a two-dimensional set of vectors whose Rayleigh quotient is at most the above expression, and so

\displaystyle  \lambda_2 \geq 2 \sqrt{d-1} - \frac {2 \sqrt{d-1} -1}{k+1}

What just happened? The basic intuition is that, as in the infinite tree, we set the weights to go down by a factor of {1/\sqrt{d-1}} each time we move away from the “root,” and we would like to argue that, for every node {u}, we have

\displaystyle  \sum_{v} A_{u,v} x_v \geq 2\sqrt{d-1} x_u

by reasoning that one of the neighbors must be closer to the root, and hence have value {\sqrt{d-1} x_u}, while the other {d-1} neighbors are all at least {x_u/\sqrt{d-1}}. This bound fails at the “leaves” of the construction, which is fine because they account for a small portion of {||x||^2}, but it also fails at the root, which is not adjacent to any larger vertex. In the case of the infinite tree this is still ok, because the root also accounts for only a small portion of {||x||^2}; in general graphs, however, the “root” vertex might account for a very large fraction of {||x||^2}.

Indeed, the root contributes 1 to {||x||^2}, and each set {L_t} contributes {|L_t| \cdot (d-1)^{-t}}. If the size of {L_t} grows much more slowly than {(d-1)^t}, then the contribution of the root to {||x||^2} is too large and we have a problem. In this case, however, for many levels {t} there have to be many vertices in {L_t} that have fewer than {d-1} edges going forward to {L_{t+1}}, and in that case, for many vertices {u \in L_t}, we will have that {\sum_v A_{u,v} x_v} is much more than {2 \sqrt{d-1} x_u}.

Although it seems hopeless to balance this argument and charge the weight {x_u x_v} of the edge {(u,v)} in just the right way to {x_u^2} and to {x_v^2}, the calculation with the Laplacian manages to do that automatically.

3. A More Sophisticated “Trace” Method

The Hoory-Linial-Wigderson survey also gives a very clean and conceptual proof that

\displaystyle  \sigma_2 = \max \{ |\lambda_2| , | \lambda_n| \} \geq 2 \sqrt {d-1} \cdot \left( 1 - O \left( \frac {\log diam(G)}{diam(G)} \right) \right)

via a trace argument. I am not sure who was the first to come up with this idea.

Let us pick two vertices {a,b} at distance {diam(G)}, and call {x := {\bf 1}_{\{ a\}}}, {y:= {\bf 1 }_{\{ b\}}} and {z:= x-y}. Fix {k = (diam(G)-2)/2}. Then {A^k x} and {A^ky} are orthogonal vectors, because the coordinates on which {A^kx} is nonzero correspond to vertices at distance {\leq k} from {a} and the coordinates on which {A^ky} is nonzero correspond to vertices at distance {\leq k} from {b}, and these conditions cannot be simultaneously satisfied. This means that {x^T A^{2k} y = 0}.

Since {z} is orthogonal to {(1,\ldots,1)}, we have

\displaystyle  z^T A^{2k} z \leq \sigma_2^{2k} ||z||^2 = 2 \sigma_2^{2k}

but we also have

\displaystyle  z^T A^{2k} z = x^T A^{2k} x + y^T A^{2k} y - 2 x^T A^{2k} y = x^T A^{2k} x + y^T A^{2k} y

Now, {x^T A^{2k} x} is the number of walks of length {2k} in {G} that start at {a} and get back to {a}. In every {d}-regular graph, and for every start vertex, the number of such walks is at least the corresponding number in the infinite {d}-ary tree. This is clear if {G} is a Cayley graph; it takes a few minutes (at least I had to think about it for a bit) to see that it holds in every graph.

Ok, then, what is the number of closed walks of length {2k} in the infinite {d}-ary tree? This is a long story, but it is a very well-studied question, and there is a bound (not the tightest known, but sufficient for our purposes) giving

\displaystyle \sigma_2^{2k} \geq \min \{ x^T A^{2k} x , y^T A^{2k} y \}

\displaystyle  \geq t_{2k} \geq {2k \choose k} \frac 1{k+1} (d-1)^k

\displaystyle  = \Omega( 2^{2k} \cdot k^{-1.5} \cdot (d-1)^k )


and therefore

\displaystyle  \sigma_2 \geq 2 \sqrt{d-1} \cdot (\Omega(k^{-1.5}))^{1/2k} \geq 2 \sqrt{d-1} \cdot \left( 1 - \frac { O(\log k) }{k} \right)
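The count {t_{2k}} can be checked against the Catalan-type lower bound by a direct dynamic program over the distance from the starting vertex, in the infinite tree whose root has {d} children and whose other vertices have {d-1} children each (tree_closed_walks is our own helper):

```python
from math import comb

def tree_closed_walks(d, length):
    """Closed walks of the given length from a fixed vertex of the infinite
    d-regular tree, via dynamic programming over distance from that vertex."""
    cur = {0: 1}                                   # distance -> number of walks
    for _ in range(length):
        nxt = {}
        for depth, ways in cur.items():
            down = d if depth == 0 else d - 1      # edges leading away from the root
            nxt[depth + 1] = nxt.get(depth + 1, 0) + ways * down
            if depth > 0:                          # exactly one edge leads back
                nxt[depth - 1] = nxt.get(depth - 1, 0) + ways
        cur = nxt
    return cur.get(0, 0)

# Check t_{2k} >= binom(2k, k) (d-1)^k / (k+1) for d = 3 and small k.
d = 3
for k in range(1, 8):
    assert tree_closed_walks(d, 2 * k) >= comb(2 * k, k) * (d - 1) ** k // (k + 1)
```

For example, with {d = 3}: {t_2 = 3} (go to a neighbor and come back) and {t_4 = 15}, against lower bounds of 2 and 8.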

I put “trace” in quotes in the title of this section, but one can turn the argument into an honest-to-God application of the trace method, although there is no gain in doing so. Pick an even number {k}. We can say that

\displaystyle  trace(A^k) = \sum_i \lambda_i^k \leq d^k + (n-1) \sigma_2^k


and

\displaystyle  trace(A^k) \geq n t_k \geq n \cdot \Omega( 2^k \cdot k^{-1.5} \cdot (d-1)^{k/2} )

which gives

\displaystyle  \sigma_2 \geq 2 \sqrt{d-1} \cdot \left( 1 - \frac { O(\log k) }{k} \right)

for {k << \frac{ \log n}{\log d}}, which is implied by the bound based on diameter that we proved above.

by luca at September 02, 2014 03:13 AM UTC

Authors: Maximilian Aulbach, Martin Fink, Julian Schuhmann, Alexander Wolff
Download: PDF
Abstract: We study the problem of selecting a maximum-weight subgraph of a given graph such that the subgraph can be drawn within a prescribed drawing area subject to given non-uniform vertex sizes. We develop and analyze heuristics both for the general (undirected) case and for the use case of (directed) calculation graphs which are used to analyze the typical mistakes that high school students make when transforming mathematical expressions in the process of calculating, for example, sums of fractions.

at September 02, 2014 12:00 AM UTC

Authors: Jason Teutsch, Marius Zimand
Download: PDF
Abstract: An index $e$ in a numbering of partial-recursive functions is called minimal if every lesser index computes a different function from $e$. Since the 1960's it has been known that, in any reasonable programming language, no effective procedure determines whether or not a given index is minimal. We investigate whether the task of determining minimal indices can be solved in an approximate sense. Our first question, regarding the set of minimal indices, is whether there exists an algorithm which can correctly label 1 out of $k$ indices as either minimal or non-minimal. Our second question, regarding the function which computes minimal indices, is whether one can compute a short list of candidate indices which includes a minimal index for a given program. We give some negative results and leave the possibility of positive results as open questions.

at September 02, 2014 12:40 AM UTC

Authors: Michael A. Bekos, Sabine Cornelsen, Luca Grilli, Seok-Hee Hong, Michael Kaufmann
Download: PDF
Abstract: Fan-planar graphs were recently introduced as a generalization of 1-planar graphs. A graph is fan-planar if it can be embedded in the plane, such that each edge that is crossed more than once is crossed by a bundle of two or more edges incident to a common vertex. A graph is outer-fan-planar if it has a fan-planar embedding in which every vertex is on the outer face. If, in addition, the insertion of an edge destroys its outer-fan-planarity, then it is maximal outer-fan-planar. In this paper, we present a polynomial-time algorithm to test whether a given graph is maximal outer-fan-planar. The algorithm can also be employed to produce an outer-fan-planar embedding, if one exists. On the negative side, we show that testing fan-planarity of a graph is NP-hard, for the case where the rotation system (i.e., the cyclic order of the edges around each vertex) is given.

at September 02, 2014 12:41 AM UTC

Authors: Amaury Pouly, Daniel S. Graça
Download: PDF
Abstract: In this paper we investigate the computational complexity of solving ordinary differential equations (ODEs) $y^{\prime}=f(y)$ over \emph{unbounded} domains. Contrary to the bounded case, this problem has not been well-studied, apparently due to the "conventional wisdom" that it can always be reduced to the bounded case by using rescaling techniques. However, as we show in this paper, rescaling techniques do not seem to provide meaningful insights on the complexity of this problem, since the use of such techniques introduces a dependence on parameters which are hard to compute.

Instead, we take another approach and study ODEs of the form $y^{\prime}=f(t,y)$, where each component of $f$ is an elementary function in the sense of Analysis, i.e.~each component can be built from the usual functions of Analysis: polynomials, trigonometric functions, the exponential, etc. through composition and basic operations (addition, product, etc.). We present algorithms which numerically solve these ODEs over unbounded domains. These algorithms have guaranteed precision, i.e.~given some arbitrarily large time $T$ and error bound $\varepsilon$ as input, they will output a value $Y$ which satisfies $\|y(T)-Y\|\leq\varepsilon$. We analyze the complexity of these algorithms and show that they compute $Y$ in time polynomial in several quantities like the time $T$ or the precision of the output $\varepsilon$. We consider both algebraic complexity and bit complexity.

at September 02, 2014 12:40 AM UTC

Authors: Anatoly Panyukov
Download: PDF
Abstract: A polynomial algorithm for solving the "Hamiltonian circuit" problem is presented in the paper. The computational complexity of the algorithm is $O(n^{20.5}\log_2n\log_2\log_2n)$, where $n$ is the cardinality of the vertex set of the observed graph. Thereby polynomial solvability of NP-complete problems is proved.

at September 02, 2014 12:40 AM UTC

Authors: Martin Nöllenburg, Roman Prutkin, Ignaz Rutter
Download: PDF
Abstract: An $st$-path in a drawing of a graph is self-approaching if during a traversal of the corresponding curve from $s$ to any point $t'$ on the curve the distance to $t'$ is non-increasing. A path has increasing chords if it is self-approaching in both directions. A drawing is self-approaching (increasing-chord) if any pair of vertices is connected by a self-approaching (increasing-chord) path.

We study self-approaching and increasing-chord drawings of triangulations and 3-connected planar graphs. We show that in the Euclidean plane, triangulations admit increasing-chord drawings, and for planar 3-trees we can ensure planarity. Moreover, we give a binary cactus that does not admit a self-approaching drawing. Finally, we show that 3-connected planar graphs admit increasing-chord drawings in the hyperbolic plane and characterize the trees that admit such drawings.

at September 02, 2014 12:42 AM UTC

Authors: Sayan Bandyapadhyay
Download: PDF
Abstract: We study a natural extension of the Maximum Weight Independent Set Problem (MWIS), one of the most studied optimization problems in graph algorithms. The problem is as follows. We are given a graph $G=(V,E)$, a weight function $w: V \rightarrow \mathbb{R^+}$, a budget function $b: V \rightarrow \mathbb{Z^+}$, and a positive integer $B$. A $k$-budgeted independent set in $G$ is a subset of vertices such that no pair of vertices in that subset is adjacent and the sum of the budgets of the vertices in that subset is at most $k$. The goal is to find a $B$-budgeted independent set in $G$ whose weight is maximum among all the $B$-budgeted independent sets in $G$. We refer to this problem as MWBIS. Being a generalization of MWIS, MWBIS also has several applications in scheduling, mobile networks and so on. Due to the hardness results implied from MWIS, we study the MWBIS problem in several special classes of graphs. We design exact algorithms for trees, forests, cycle graphs, and interval graphs. In the unweighted case we design an approximation algorithm for $(d+1)$-claw-free graphs whose approximation ratio is competitive with that of MWIS (unweighted). We also design approximation algorithms for MWBIS in bounded-degree graphs.
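The abstract does not spell out the tree algorithm; a plausible reconstruction (mine, not necessarily the authors' method, with made-up names) is a knapsack-style dynamic program over the tree, running in $O(n \cdot B^2)$ time:

```python
from collections import defaultdict

NEG = float("-inf")

def _merge(a, c, B):
    """Knapsack-combine two budget-indexed weight profiles."""
    out = [NEG] * (B + 1)
    for k in range(B + 1):
        if a[k] == NEG:
            continue
        for j in range(B + 1 - k):
            if c[j] != NEG and a[k] + c[j] > out[k + j]:
                out[k + j] = a[k] + c[j]
    return out

def max_weight_budgeted_is(edges, w, b, B):
    """Exact DP for MWBIS on a tree rooted at vertex 0.
    dp_in[v][k] / dp_out[v][k] = best weight achievable in v's subtree
    using budget exactly k, with v taken / not taken."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    order, parent, stack = [], {0: None}, [0]   # iterative DFS preorder
    while stack:
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)

    dp_in, dp_out = {}, {}
    for v in reversed(order):                   # children before parents
        din = [NEG] * (B + 1)
        if b[v] <= B:
            din[b[v]] = w[v]
        dout = [NEG] * (B + 1)
        dout[0] = 0
        for c in adj[v]:
            if c != parent[v]:
                cbest = [max(x, y) for x, y in zip(dp_in[c], dp_out[c])]
                din = _merge(din, dp_out[c], B)  # v taken => child not taken
                dout = _merge(dout, cbest, B)
        dp_in[v], dp_out[v] = din, dout

    return max(max(dp_in[0]), max(dp_out[0]))
```

On the path 0–1–2 with weights (10, 100, 10) and unit budgets, budget B = 2 correctly picks the middle vertex alone (weight 100) over the two endpoints (weight 20).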

at September 02, 2014 12:00 AM UTC

Authors: Hung-Lung Wang
Download: PDF
Abstract: In this paper, we are concerned with the weighted backup 2-center problem on a tree. The backup 2-center problem is a kind of center facility location problem, in which one is asked to deploy two facilities, with a given probability to fail, in a network. Given that the two facilities do not fail simultaneously, the goal is to find two locations, possibly on edges, that minimize the expected value of the maximum distance over all vertices to their closest functioning facility. In the weighted setting, each vertex in the network is associated with a nonnegative weight, and the distance from vertex $u$ to $v$ is weighted by the weight of $u$. With the strategy of prune-and-search, we propose a linear time algorithm, which is asymptotically optimal, to solve the weighted backup 2-center problem on a tree.

at September 02, 2014 12:41 AM UTC

Authors: Akira SaiToh
Download: PDF
Abstract: We consider the problem of mapping digital data encoded on a quantum register to analog amplitudes in parallel. It is shown to be unlikely that a fully unitary polynomial-time quantum algorithm exists for this problem; NP becomes a subset of BQP if it exists. From a practical point of view, we propose a non-unitary linear-time algorithm using quantum decoherence. It tacitly uses an exponentially large physical resource, which is typically a huge number of identical molecules. Quantumness of correlation appearing in the process of the algorithm is also discussed.

at September 02, 2014 12:40 AM UTC

Authors: Oswin Aichholzer, Ruy Fabila-Monroy, Hernán González-Aguilar, Thomas Hackl, Marco A. Heredia, Clemens Huemer, Jorge Urrutia, Pavel Valtr, Birgit Vogtenhuber
Download: PDF
Abstract: We consider a variation of the classical Erd\H{o}s-Szekeres problems on the existence and number of convex $k$-gons and $k$-holes (empty $k$-gons) in a set of $n$ points in the plane. Allowing the $k$-gons to be non-convex, we show bounds and structural results on maximizing and minimizing their numbers. Most noteworthy, for any $k$ and sufficiently large $n$, we give a quadratic lower bound for the number of $k$-holes, and show that this number is maximized by sets in convex position.

at September 02, 2014 12:41 AM UTC

Authors: Edith Cohen, Daniel Delling, Thomas Pajor, Renato F. Werneck
Download: PDF
Abstract: Closeness centrality, first considered by Bavelas (1948), is an importance measure of a node in a network which is based on the distances from the node to all other nodes. The classic definition, proposed by Bavelas (1950), Beauchamp (1965), and Sabidussi (1966), is (the inverse of) the average distance to all other nodes.

We propose the first highly scalable (near linear-time processing and linear space overhead) algorithm for estimating, within a small relative error, the classic closeness centralities of all nodes in the graph. Our algorithm applies to undirected graphs, as well as to centrality computed with respect to round-trip distances in directed graphs.

For directed graphs, we also propose an efficient algorithm that approximates generalizations of classic closeness centrality to outbound and inbound centralities. Although it does not provide worst-case theoretical approximation guarantees, it is designed to perform well on real networks.

We perform extensive experiments on large networks, demonstrating high scalability and accuracy.
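As a point of reference for what "estimating closeness within a small relative error" means, here is a standard pivot-sampling baseline in the style of Eppstein–Wang (not the authors' algorithm, which achieves near-linear time with stronger guarantees; names are illustrative, and a connected undirected graph is assumed):

```python
import random
from collections import deque

def bfs_dist(adj, s):
    """Unweighted single-source distances via BFS."""
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def estimate_closeness(adj, k, rng=random):
    """Pivot-sampling estimate of classic closeness 1 / avg-distance for
    every node, using k BFS traversals from random pivots.  With k = n
    (all nodes as pivots) the estimate is exact."""
    nodes = list(adj)
    n = len(nodes)
    pivots = rng.sample(nodes, k)
    total = {v: 0 for v in nodes}
    for p in pivots:
        d = bfs_dist(adj, p)
        for v in nodes:
            total[v] += d[v]
    est = {}
    for v in nodes:
        avg = (n / (k * (n - 1))) * total[v]   # estimated average distance
        est[v] = 1.0 / avg if avg > 0 else 0.0
    return est
```

On a 5-node star, the center's closeness is 4/4 = 1 and each leaf's is 4/7; the point of the paper's sketch-based approach is to get comparable accuracy with far fewer traversals than one BFS per node.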

at September 02, 2014 12:41 AM UTC

[Boaz's note: videos of all ICM 2014 talks, including Mark's talk discussed below, as well as the talks of  Candes and Bhargava I mentioned before are available online here. In particular, if you still don't know how one constructs a fully homomorphic encryption scheme then you should (a) be ashamed of yourself and (b) watch Craig Gentry's talk to see a description of a simple scheme due to him, Sahai and Waters that will close this gap in your education. ]

Guest post by Mark Braverman

My survey covers recent developments in the area of interactive coding theory. This area has been quite active recently, with at least 4 papers on the topic appearing in the next FOCS. This level of activity means that parts of the survey will probably become obsolete within a few years (in fact, I had to rewrite parts of it when the separation result by Ganor, Kol, and Raz was announced in April). [See also this newer result that was posted after Mark sent me his text --Boaz]

The basic premise of interactive coding theory is extending the reach of classical coding and information theory to interactive scenarios. Broadly speaking “coding” encompasses compression (aka noiseless coding), error correction (over both adversarial and randomized channels), and cryptography. The latter does not really fit with the rest of the agenda, since cryptographic protocols have always been interactive.

The interactive version of noiseless coding is communication complexity – and taking the information-theoretic view to it yields information complexity, which behaves as the interactive analogue of Shannon’s entropy. The analogue of Shannon’s Noiseless Coding Theorem holds in the interactive case. To what extent interactive compression is possible (i.e. to what extent the interactive analogue of Huffman Coding exists) is a wide-open problem.

On the noisy side, much progress has been made in the adversarial model, starting with the seminal work of Schulman in the 1990s. Many problems surrounding the interactive analogue of Shannon’s channel capacity, even for simple channels such as the Binary Symmetric Channel, remain open.

For the current state of affairs (surveyed for a Math audience) see my ICM survey which is available here.


by Boaz Barak at September 01, 2014 08:37 PM UTC

Within the last two months I had the privilege of teaching a week-long Analysis of Boolean Functions course at two different summer schools.

In July I was at the (First Annual?) Swedish Summer School in Computer Science. This was wonderfully organized by KTH faculty Per Austrin, Johan Håstad, and Jakob Nordström, and took place on the scenic island of Djurhamn in the Stockholm archipelago. Boaz Barak and I alternated lectures (he on Sums of Squares) to 50 or so highly motivated graduate students. I was very impressed that many of them worked quite hard on the homework outside of lecture times! Some photos by Talya Abram can be found here, including one of me holding an exquisite table linen gift, with the De–Mossel–Neeman differential equation in the background.

The second summer school (just finished) was held at the Nesin Mathematical Village, an idyllic site in the hills near the Aegean coast of Turkey. The village operates year-round, but its main business is in the summer, when a hundred or so high school, college, and graduate math students come each week for courses. The spirit of the place is charming, and the math village itself is quite beautiful, as you can see from these photos. I lectured here, though I wish I got to lecture here.

By the way, if you ever teach a class on Analysis of Boolean Functions yourself, please drop me a line; I like to keep track of links to them. (So far I know of 31 such courses since 2002.)

by Ryan O'Donnell at September 01, 2014 03:08 PM UTC

More Google+ links from the last couple of weeks:

at September 01, 2014 05:59 AM UTC

Authors: Iris Hupkens, Michael Emmerich, André Deutz
Download: PDF
Abstract: The expected improvement algorithm (or efficient global optimization) aims for global continuous optimization with a limited budget of black-box function evaluations. It is based on a statistical model of the function learned from previous evaluations and an infill criterion - the expected improvement - used to find a promising point for a new evaluation. The `expected improvement' infill criterion takes into account the mean and variance of a predictive multivariate Gaussian distribution.

The expected improvement algorithm has recently been generalized to multiobjective optimization. In order to measure the improvement of a Pareto front quantitatively the gain in dominated (hyper-)volume is used. The computation of the expected hypervolume improvement (EHVI) is a multidimensional integration of a step-wise defined non-linear function related to the Gaussian probability density function over an intersection of boxes. This paper provides a new algorithm for the exact computation of the expected improvement to more than two objective functions. For the bicriteria case it has a time complexity in $O(n^2)$ with $n$ denoting the number of points in the current best Pareto front approximation. It improves previously known algorithms with time complexity $O(n^3 \log n)$. For tricriteria optimization we devise an algorithm with time complexity of $O(n^3)$. Besides discussing the new time complexity bounds the speed of the new algorithm is also tested empirically on test data. It is shown that further improvements in speed can be achieved by reusing data structures built up in previous iterations. The resulting numerical algorithms can be readily used in existing implementations of hypervolume-based expected improvement algorithms.
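For intuition, the dominated hypervolume on which EHVI is built can be computed for a bicriteria (minimization) front with a simple sweep. This sketch (illustrative only; it is the plain hypervolume, not the paper's $O(n^2)$ EHVI integration) runs in $O(n \log n)$:

```python
def hypervolume_2d(points, ref):
    """Dominated hypervolume of a 2-D front (minimization) w.r.t. a
    reference point ref that is dominated by every point.  Sorts by the
    first objective, keeps the non-dominated points, then sums the
    rectangular strips between consecutive front points and ref."""
    pts = sorted(points)              # ascending in objective 1
    front, best_y = [], float("inf")
    for x, y in pts:
        if y < best_y:                # non-dominated so far
            front.append((x, y))
            best_y = y
    hv, prev_y = 0.0, ref[1]
    for x, y in front:
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv
```

For the front {(1,3), (2,2), (3,1)} with reference point (4,4) the dominated area is 3 + 2 + 1 = 6, and adding a dominated point leaves the value unchanged.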

at September 01, 2014 12:40 AM UTC

Authors: Joan Boyar, Lene M. Favrholdt, Christian Kudahl, Jesper W. Mikkelsen
Download: PDF
Abstract: The advice complexity of an online problem is a measure of how much knowledge of the future an online algorithm needs in order to achieve a certain competitive ratio. We determine the advice complexity of hard online problems such as Online Independent Set, Online Vertex Cover and Online Disjoint Path Allocation. These problems are hard since a single wrong answer by the online algorithm can have devastating consequences. We show that $\log\left(1+\frac{(c-1)^{c-1}}{c^{c}}\right)n=\Theta (n/c)$ bits of advice are necessary and sufficient (up to a lower-order term) to achieve a strict competitive ratio of $c$ for each of these problems. This is done by introducing a new string guessing problem related to that of B\"ockenhauer et al. (Theoretical Computer Science 2014). It turns out that this gives a powerful but easy-to-use method for providing both upper and lower bounds on the advice complexity of an entire class of online problems.

Previous results of Halld\'orsson et al. (Theoretical Computer Science 2002) on Online Independent Set, in a related model, implies that the advice complexity of the problem is $\Theta (n/c)$. Our results improve on this by providing an exact formula for the higher-order term. B\"ockenhauer et al. (ISAAC 2009) gave a lower bound of $\Omega (n/c)$ and an upper bound of $O((n\log c)/c)$ on the advice complexity of Online Disjoint Path Allocation. We improve on the upper bound by a $\log c$ factor. For the remaining problems, no bounds on their advice complexity were previously known.
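The $\Theta(n/c)$ claim can be sanity-checked numerically: evaluated in log-space to avoid overflow, $c \cdot \log_2(1+\frac{(c-1)^{c-1}}{c^c})$ tends to the constant $1/(e\ln 2) \approx 0.531$ as $c$ grows (a quick check, not part of the paper; the function name is made up):

```python
import math

def advice_bits_per_input_bit(c):
    """log2(1 + (c-1)^(c-1) / c^c): the advice bits per input bit needed
    for strict competitive ratio c, per the formula in the abstract.
    The ratio is computed via exp/log to avoid huge integers."""
    ratio = math.exp((c - 1) * math.log(c - 1) - c * math.log(c))
    return math.log2(1.0 + ratio)
```

For c = 2 this gives log2(1 + 1/4) ≈ 0.32 bits per input bit, and c times the expression stays bounded between positive constants, matching the stated $\Theta(n/c)$.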

at September 01, 2014 12:40 AM UTC

Authors: Pui Tung Choi, Lok Ming Lui
Download: PDF
Abstract: Surface parameterizations have been widely used in computer graphics and geometry processing. In particular, as simply-connected open surfaces are conformally equivalent to the unit disk, it is desirable to compute the disk conformal parameterizations of the surfaces. In this paper, we propose a novel algorithm for the conformal parameterization of a simply-connected open surface onto the unit disk, which significantly speeds up the computation, enhances the conformality and stability, and guarantees the bijectivity. The conformality distortions at the inner region and on the boundary are corrected by two steps, with the aid of an iterative scheme using quasi-conformal theories. Experimental results demonstrate the effectiveness of our proposed method.

at September 01, 2014 12:40 AM UTC

Authors: Md. Jawaherul Alam, Stephen G. Kobourov, Sergey Pupyrev, Jackson Toeniskoetter
Download: PDF
Abstract: We study a variant of intersection representations with unit balls, that is, unit disks in the plane and unit intervals on the line. Given a planar graph and a bipartition of the edges of the graph into near and far sets, the goal is to represent the vertices of the graph by unit balls so that the balls representing two adjacent vertices intersect if and only if the corresponding edge is near. We consider the problem in the plane and prove that it is NP-hard to decide whether such a representation exists for a given edge-partition. On the other hand, every series-parallel graph admits such a representation with unit disks for any near/far labeling of the edges. We also show that the representation problem on the line is equivalent to a variant of a graph coloring. We give examples of girth-4 planar and girth-3 outerplanar graphs that have no such representation with unit intervals. On the other hand, all triangle-free outerplanar graphs and all graphs with maximum average degree less than 26/11 can always be represented. In particular, this gives a simple proof of representability of all planar graphs with large girth.

at September 01, 2014 12:41 AM UTC

The autumn semester 2014 started on Monday, 18 August, for us at the School of Computer Science at Reykjavik University (SCS). As usual at this time of the year, teaching-related matters took centre stage and occupied most faculty members, especially since we had just welcomed our third intake of well over 300 new computer science students. However, many of us at the Icelandic Centre of Excellence in Theoretical Computer Science (ICE-TCS) will remember the first week of the new academic year for two events showcasing the artistic and scientific work of Erik Demaine, who visited ICE-TCS  and the School of Computer Science at Reykjavik University in the period 20-27 August.

The first event, Crossroads of Art and Science, took place on Thursday, 21 August 2014, from 5pm till 7pm GMT, and was organized  by  ICE-TCS, SCS  and Vísindafélag Íslendinga (the Icelandic Academy of Sciences). It was attended by about 220 people, including a good number of our undergraduate students, and featured a keynote address by Erik entitled Folding Paper: Visual Art Meets Mathematics.

Art and science are often viewed as very different, and somewhat antithetic, human endeavours, so much so that some artists happily profess ignorance of the basic methods and results of science and some scientists sneer at art. In 1959, this prompted British scientist and novelist C. P. Snow to express his view that "the intellectual life of the whole of western society" was split into two separate cultures---namely the sciences and the humanities---and that this was a major hindrance to solving the world's problems.

However, we can all mention key figures from the Renaissance and earlier times who were both artists and scientists. Is the breed of the Renaissance man a thing from a long gone past? The aim of Crossroads of Art and Science was to provide resounding evidence that the distinction between art and science in modern society is fictitious. We did so by showcasing three figures of polymaths whose work, be it artistic or scientific, benefits from the interplay of art and science.

Erik Demaine needs no introduction to the readers of this piece. He  helped start the field of Computational Origami, but he is also an artist, some of whose work has been exhibited at the MoMA, amongst other venues. He uses art to explore the feasibility of some of his mathematical ideas and mathematics to suggest artistic ideas, leading, for instance, to paper or glass sculptures. To quote Erik himself:
"One of our growing realizations over the years is that mathematics itself is an art form, and I think that's what attracted both of us to this area in the first place. [I]t has the same kind of creativity, the same kinds of aesthetics, that you get in art: You want a clean problem to solve, and you want an elegant solution to that problem. Both art and mathematics are about having the right ideas [and] executing those ideas in some convincing way, so in that sense they're indistinguishable." (See here for the video of that presentation.)
In case you have not done so already, do look at the inspirational video Erik produced when he received the Presburger Award 2013.

Crossroads of Art and Science also featured two Icelandic speakers. 
  • Anna Hrund Másdóttir is an artist and a mathematics teacher who teaches mathematics to support her art practice and whose teaching benefits from her artistic work. 
  • Kjartan Emilsson, a mathematical physicist by training, uses artistic methods of working and thinking in his work as Principal Game Designer and Studio Managing Director at CCP in Shanghai, where he directs the innovation and implementation of core virtual world game designs for CCP Games. 
The thesis underlying Crossroads of Art and Science was that art and science are not antithetic, and that their cooperation can lead to powerful ways of solving scientific problems and of creating new art forms. As Nicholas of Cusa put it in his philosophy, and as Anna Hrund wrote in the title of her presentation, we are in the presence of a "coincidence of opposites". We like to think that the people attending the event went home agreeing with that message.

If you understand Icelandic, you might enjoy listening to an excellent radio interview with Magnús M. Halldórsson where he discusses the event and its theme with the host of the show. (The interview starts at around minute 15:40 and lasts for about 14 minutes.)

The second keynote address delivered by Erik Demaine was part of the 10th ICE-TCS Theory Day and was entitled Replicators, Transformers, and Robot Swarms: Science Fiction through Geometric Algorithms. Erik's talk was very well attended and the audience included a good number of undergraduate students. During the talk, Erik presented some of his work that draws inspiration from science fiction and presented some answers to the following questions:
  1. How can we build reconfigurable robots like Transformers or Terminator 2? 
  2. How can we orchestrate the motion of a large swarm of robots? 
  3. How can we build Star Trek-style replicators that duplicate or mass-produce a given shape at the nano scale?
While addressing the first question, Erik gave us a whirlwind tour of his work on the equivalence of models for modular robots, hinged dissections (see this Wikipedia page for a brief introduction) and robotic paper. He also mentioned some of the work done in the context of the Millibiology project at MIT. Of course, the recently unveiled origami robot featured in this part of Erik's presentation, as well as some of its early incarnations.

In his discussion of problems related to robot swarms, Erik mentioned the distributed boundary detection algorithm he developed with James McLurkin, which is suitable for use on multi-robot systems with dynamic network topologies. The game of Tilt also featured in this part of Erik's talk, together with a sketch of the proof of its NP-hardness.

Finally, we were given a glimpse of some of Erik's fascinating work related to replicators. Two highlights were staged self-assembly of Wang tiles and shape replication in the Wang tile self-assembly model. Amongst other things, Erik showed the audience how to make infinitely many copies of any shape without holes in a single stage!

Last, but by no means least, the Theory Day 2014 included two presentations by members of ICE-TCS. Marjan Sirjani discussed some work done by her research group in Reykjavik and Tehran on analysis of networks-on-chip using probabilistic timed actors. This work uses the language Rebeca and its tool set. Christian Konrad closed the programme by presenting work on Robust Set Reconciliation, which was selected for presentation at SIGMOD 2014.

At the end of the scientific programme, the attendees mingled while being treated to chocolate and sparkling wine.

As you can see, the fun continued after dinner with an impromptu origami masterclass where people attempted to fold a hyperbolic paraboloid (Albers 1927). The instructions are here.

We held the first Theory Day in 2005, the day after the opening of ICE-TCS, and we have been having one such event every year since then. The list of foreign guests we have had so far for the Theory Day reads as follows:

  • 2005: Ryan Hayward (Alberta), Kim G. Larsen (Aalborg), Mogens Nielsen (Aarhus)
  • 2006: Wan Fokkink (VU Amsterdam), Jan Kratochvil (Prague), Moshe Vardi (Rice)
  • 2007: Thomas Erlebach (Leicester), Tommi Sottinen (Helsinki)
  • 2008: None, but we had all the ICALP invited speakers in July. 
  • 2009: Zoltan Esik (Szeged), Paul van Tilburg (Eindhoven)
  • 2010: David de Frutos Escrig and Carlos Gregorio-Rodriguez (both at the Universidad Complutense de Madrid).
  • 2011: None
  • 2012: None
  • 2013: Pierluigi Crescenzi (Firenze), Pierre Fraigniaud (Paris Diderot) and Stephan Holzer (ETH)
  • 2014: Erik Demaine (MIT)
We will see what the future will bring and look forward to celebrating the tenth birthday of ICE-TCS next year. Whatever we do, it will be hard to top the impact and national visibility of last week's events.

by Luca Aceto ( at August 29, 2014 04:01 PM UTC

The last of my Graph Drawing preprints is also the first of the papers to see daylight from the workshop last spring in Barbados: "Flat Foldings of Plane Graphs with Prescribed Angles and Edge Lengths" (arXiv:1408.6771, with Zachary Abel, Erik Demaine, Martin Demaine, Anna Lubiw, and Ryuhei Uehara).

It's part of what has become a long line of research on trying to understand which shapes can be squashed flat, starting with Maekawa's theorem (every vertex of a flat-folded sheet of paper has to have numbers of mountain folds and valley folds that differ by two) and Kawasaki's theorem (alternating angles at each vertex must sum to π). If you have a piece of paper with a desired set of fold lines marked on it, and these fold lines all converge on a single vertex, then it's possible to exactly characterize whether you can fold it flat along those lines. A greedy algorithm can find a way to fold it efficiently, and a version of the Carpenter's rule theorem implies that, when a flat-folded state exists, you can always get to it by a continuous motion. But as Bern and Hayes showed at SODA 1996, the restriction to a single vertex is crucial: the folds at different vertices can interact with each other in complicated ways, making flat foldability of multi-vertex folding patterns NP-complete.
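Kawasaki's single-vertex condition is easy to state in code. A small illustrative check (angles are the consecutive sector angles around the vertex, in radians, assumed to sum to 2π):

```python
import math

def kawasaki_flat_foldable(angles, tol=1e-9):
    """A single-vertex crease pattern folds flat iff it has an even number
    of creases and the alternating sum of consecutive sector angles is
    zero -- equivalently, the odd- and even-indexed angles each sum to pi.
    (Maekawa's theorem then constrains the mountain/valley assignment.)"""
    if len(angles) % 2 != 0:
        return False
    alt = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles))
    return abs(alt) < tol
```

Four right angles pass the test, while a 100°/80°/100°/80° pattern fails: its alternating angles sum to 200° and 160° rather than 180° each.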

The new paper follows an earlier one at ISAAC 2011 and IJCGA 2013, "Folding equilateral plane graphs", by an overlapping set of authors, in studying the flat-foldability of objects that are not just flat sheets of paper. But in order to avoid Bern and Hayes' NP-hardness proof, we again restrict our attention only to objects with a single vertex. That is, we have some sort of complex of flat polygons connected by piano hinges at their edges, but all the hinges meet at a point. As in the following illustration:

The blue sphere highlights the vertex, but it also indicates a way to think of the problem in only two dimensions instead of three: cut the complex through the surface of the sphere so that each polygon turns into a line segment (or really a segment of a great circle), each hinge turns into a point, and what you have is just a planar graph drawing. The line segments are rigid but the hinges at their endpoints make the whole drawing floppy (within the plane it's defined in). The question is whether it is sufficiently floppy that it can be made to lie flat along a line.

Our key insight is in some sense the opposite of Bern and Hayes': the folds within different faces of this planar graph cannot interact with each other at all. This is not obvious, or at least it wasn't to us. But with this in hand, the rest is easy: we already know how to flatten things with one face using the results about folding patterns on flat sheets of paper, so to test whether a complex can be flattened we just need to test each of its faces separately. By using dynamic programming within each face and then multiplying the results, we can also count the number of different flat-folded states.

at August 29, 2014 05:46 AM UTC

A shout-out today to my friend and colleague Pardis Sabeti (and her lab) for their Science article on the Ebola virus that appeared earlier today.  Pardis and her group have been studying the genetics of viral diseases, and in particular the Lassa virus.  So they were there and ready when the recent Ebola outbreak began and went to work.  They sequenced 99 Ebola virus genomes from 78 patients, and have analyzed the resulting information to gain insight into how the disease is spreading and mutating. They have released the genetic sequences to the public so that the information is available to groups who could use the sequencing to help find and test cures.  This is both very important work, and (sadly) very serious.  As reported, 5 co-authors of the study (health care workers, a lab technician, and the director of the national Lassa fever program in Sierra Leone) have died of Ebola before today's publication.  

Numerous news articles appeared today discussing this work, too many for me to link to.  But the SabetiLab page here contains a great deal of information about her research and various projects.  Her work in biology, which makes powerful use of statistics and computation, should serve as an inspiration for future scientists, and for people who want to use algorithmic and mathematical methods in ways to benefit the world. 

by Michael Mitzenmacher ( at August 29, 2014 03:42 AM UTC

Authors: Ke Li, Graeme Smith
Download: PDF
Abstract: We prove a version of the quantum de Finetti theorem: permutation-invariant quantum states are well approximated as a probabilistic mixture of independent and identically distributed states. The approximation is measured by distinguishability under one-way LOCC (local operations and classical communication) measurements. Indeed, we consider approximation with respect to the fully one-way LOCC norm or relative entropy, compared to the parallel one-way LOCC norm of earlier work, while the error bound is kept essentially the same. We apply our new de Finetti theorem to the problems considered in [Brandao and Harrow, arXiv:1210.6367], and obtain several new or improved results. Of particular interest are a quasipolynomial-time algorithm which distinguishes multipartite density matrices with entanglement larger than a small constant amount (measured with a variant of the relative entropy of entanglement) from separable, and a proof that in quantum Merlin-Arthur proof systems, polynomially many provers are not more powerful than a single prover when the verifier is restricted to one-way LOCC operations.

at August 29, 2014 12:40 AM UTC

Authors: Alan Frieze, Wesley Pegden
Download: PDF
Abstract: Given an instance of the preferential attachment graph $G_n=([n],E_n)$, we would like to find vertex 1, using only `local' information about the graph. Borgs et al. gave an $O(\log^4 n)$-time algorithm, which is local in the sense that it grows a connected set of vertices which will contain vertex 1 while still of size $O(\log^4 n)$. We give an $O(\omega \log^{7/2} n)$ algorithm to find vertex 1 (where $\omega\to \infty$ is arbitrary), which is local in the strong sense that it traverses the graph vertex-by-vertex, and operates only on the neighborhood of its current vertex.

at August 29, 2014 12:41 AM UTC

Authors: Prosenjit Bose, Jean-Lou De Carufel
Download: PDF
Abstract: Consider the following search problem: given a target point p in R, starting at the origin, find p with minimum cost, where cost is defined as the distance travelled. When no lower bound on |p| is given, no competitive search strategy exists. Demaine, Fekete and Gal (Online searching with turn cost, Theor. Comput. Sci., 361(2-3):342-355, 2006) considered the situation where no lower bound on |p| is given but a fixed turn cost t>0 is charged every time the searcher changes direction. When the total cost is expressed as gamma|p|+phi, where gamma and phi are positive constants, they showed that if gamma is set to 9, then the optimal search strategy has a cost of 9|p|+2t. Although their strategy is optimal for gamma=9, we prove that the minimum cost in their framework is 5|p|+t+2sqrt{2|p|(2|p|+t)}<9|p|+2t. Note that achieving the minimum cost requires knowledge of |p|; however, given |p|, the optimal strategy has a smaller cost of 3|p|+t. Therefore, this problem cannot be solved optimally when no lower bound on |p| is given.
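The strict inequality claimed in the abstract, 5|p| + t + 2*sqrt(2|p|(2|p|+t)) < 9|p| + 2t for t > 0, is easy to sanity-check numerically. The sketch below (function names are my own, not from the paper) spot-checks it on random positive inputs:

```python
import math
import random

def minimum_cost(p, t):
    # the paper's claimed minimum cost within the gamma|p| + phi framework
    return 5 * p + t + 2 * math.sqrt(2 * p * (2 * p + t))

def nine_strategy_cost(p, t):
    # cost of the original gamma = 9 strategy of Demaine, Fekete and Gal
    return 9 * p + 2 * t

# spot-check the strict inequality on random positive (|p|, t) pairs;
# algebraically, (4p + t)^2 - 8p(2p + t) = t^2 > 0, so it holds whenever t > 0
rng = random.Random(0)
for _ in range(10000):
    p, t = rng.uniform(0.01, 100.0), rng.uniform(0.01, 100.0)
    assert minimum_cost(p, t) < nine_strategy_cost(p, t)
```

At t = 0 the two expressions coincide (both equal 9|p|), which matches the role of the turn cost in separating them.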

To resolve this issue, we introduce a general framework where the cost of moving distance x away from the origin is alpha_1x+beta_1 and the cost of moving distance y towards the origin is alpha_2y+beta_2 for constants alpha_1,alpha_2,beta_1,beta_2. Given a lower bound lambda on |p|, we provide a provably optimal competitive search strategy when alpha_1,alpha_2>=0, alpha_1+alpha_2>0, beta_1,beta_2>=0. We show how our framework encompasses many of the results in the literature, and also point out its relation to other existing frameworks.

We also address the problem of searching for a target lying on one of m rays extending from the origin where the cost is measured as the total distance travelled plus t>=0 times the number of turns. We provide a search strategy and compute its cost. We prove our strategy is optimal for small values of t and conjecture it is always optimal.

at August 29, 2014 12:41 AM UTC

Authors: Zachary Abel, Erik D. Demaine, Martin L. Demaine, David Eppstein, Anna Lubiw, Ryuhei Uehara
Download: PDF
Abstract: When can a plane graph with prescribed edge lengths and prescribed angles (from among $\{0,180^\circ, 360^\circ\}$) be folded flat to lie in an infinitesimally thick line, without crossings? This problem generalizes the classic theory of single-vertex flat origami with prescribed mountain-valley assignment, which corresponds to the case of a cycle graph. We characterize such flat-foldable plane graphs by two obviously necessary but also sufficient conditions, proving a conjecture made in 2001: the angles at each vertex should sum to $360^\circ$, and every face of the graph must itself be flat foldable. This characterization leads to a linear-time algorithm for testing flat foldability of plane graphs with prescribed edge lengths and angles, and a polynomial-time algorithm for counting the number of distinct folded states.

at August 29, 2014 12:00 AM UTC

Authors: Ville Salo, Ilkka Törmä
Download: PDF
Abstract: In this article, we study countable sofic shifts of Cantor-Bendixson rank at most 2. We prove that their conjugacy problem is complete for GI, the complexity class of graph isomorphism, and that the existence problems of block maps, factor maps and embeddings are NP-complete.

at August 29, 2014 12:40 AM UTC

Authors: Steven Chaplick, Jiří Fiala, Pim van 't Hof, Daniël Paulusma, Marek Tesař
Download: PDF
Abstract: A homomorphism from a graph G to a graph H is locally bijective, surjective, or injective if its restriction to the neighborhood of every vertex of G is bijective, surjective, or injective, respectively. We prove that the problems of testing whether a given graph G allows a homomorphism to a given graph H that is locally bijective, surjective, or injective, respectively, are NP-complete, even when G has pathwidth at most 5, 4, or 2, respectively, or when both G and H have maximum degree 3. We complement these hardness results by showing that the three problems are polynomial-time solvable if G has bounded treewidth and in addition G or H has bounded maximum degree.

at August 29, 2014 12:40 AM UTC

Authors: Nico Pietroni, Davide Tonelli, Enrico Puppo, Maurizio Froli, Roberto Scopigno, Paolo Cignoni
Download: PDF
Abstract: We introduce a framework for the generation of grid-shell structures that is based on Voronoi diagrams and allows us to design tessellations that achieve excellent static performances. We start from an analysis of stress on the input surface and we use the resulting tensor field to induce an anisotropic non-Euclidean metric over it. Then we compute a Centroidal Voronoi Tessellation under the same metric. The resulting mesh is hex-dominant and made of cells with a variable density, which depends on the amount of stress, and anisotropic shape, which depends on the direction of maximum stress. This mesh is further optimized taking into account symmetry and regularity of cells to improve aesthetics. We demonstrate that our grid-shells achieve better static performances with respect to quad-based grid shells, while offering an innovative and aesthetically pleasing look.

at August 29, 2014 12:41 AM UTC

Every paper has a story but Sunny Daniels's arXiv paper from yesterday deserves a blog post.

We begin in 1982 when Ravi Kannan proved that Σ2 (the class of problems computable in NP with an NP oracle) cannot have n^2-size circuits. Kannan's result holds for n^k-size circuits for any fixed k, but for this story we'll keep it simple.

Kannan had an ingeniously simple proof. By diagonalization you can create a language L in Σ4 that does not have n^2-size circuits. Now there are two cases:
  1. SAT doesn't have n^2-size circuits. Since SAT is in Σ2, we are done.
  2. SAT has n^2-size circuits. Then by Karp-Lipton, Σ4 = Σ2, so L is in Σ2 and we are done.
Kannan's proof is non-constructive and doesn't give an explicit Σ2 language that provably lacks n^2-size circuits: the language is either SAT or L, but one can't be sure which.

In 1998, Sunny Daniels, a PhD student at Rutgers, took Eric Allender's complexity course. Eric offered a constructive version of Kannan's theorem as an open problem. Sunny came up with a solution. He wrote up a draft in LaTeX but, for personal reasons, dropped out of academia and never published the paper.

In 2003, Jin-Yi Cai and Osamu Watanabe, not aware of Daniels' paper, came up with their own independent construction and presented their paper at the COCOON conference in Montana. Word got back to Sunny but he thought he had lost the LaTeX file and didn't want to retypeset the whole proof.

Sunny had Iomega Zip Drive cartridges from his Rutgers days. Recently he found someone who had a Zip Drive reader and managed to recover the files. In there he discovered the original LaTeX, cleaned the paper up, and, sixteen years after his proof, put the paper on arXiv. Even if you don't care about the math, read the introduction for the complete version of this story.

Kannan's proof actually shows that Σ2∩Π2 does not have n^2-size circuits, and this was later improved to S2P. Whether we have a constructive language in Σ2∩Π2 or S2P without n^2-size circuits still remains open. 

by Lance Fortnow at August 28, 2014 11:23 PM UTC

A roboticist and Shtetl-Optimized fan named Jon Groff recently emailed me the following suggestion for a blog entry:

I think a great idea for an entry would be the way that in fields like particle physics the theoreticians and experimentalists get along quite well but in computer science and robotics in particular there seems to be a great disdain for the people that actually do things from the people that like to think about them. Just thought I’d toss that out there in case you are looking for some subject matter.

After I replied (among other things, raising my virtual eyebrows over his rosy view of the current state of theoretician/experimentalist interaction in particle physics), Jon elaborated on his concerns in a subsequent email:

[T]here seems to be this attitude in CS that getting your hands dirty is unacceptable. You haven’t seen it because you sit a lofty heights and I tend to think you always have. I have been pounding out code since ferrite cores. Yes, Honeywell 1648A, so I have been looking up the posterior of this issue rather than from the forehead as it were. I guess my challenge would be to find a noteworthy computer theoretician somewhere and ask him:
1) What complete, working, currently functioning systems have you designed?
2) How much of the working code did you contribute?
3) Which of these systems is still operational and in what capacity?
Or say, if the person was a famous robotics professor or something you may ask:
1) Have you ever actually ‘built’ a ‘robot’?
2) Could you, if called upon, design and build an easily tasked robot safe for home use using currently available materials and code?

So I wrote a second reply, which Jon encouraged me to turn into a blog post (kindly giving me permission to quote him).  In case it’s of interest to anyone else, my reply is below.

Dear Jon,

For whatever it’s worth, when I was an undergrad, I spent two years working as a coder for Cornell’s RoboCup robot soccer team, handling things like the goalie.  (That was an extremely valuable experience, one reason being that it taught me how badly I sucked at meeting deadlines, documenting my code, and getting my code to work with other people’s code.)   Even before that, I wrote shareware games with my friend Alex Halderman (now a famous computer security expert at U. of Michigan); we made almost $30 selling them.  And I spent several summers working on applied projects at Bell Labs, back when that was still a thing.  And by my count, I’ve written four papers that involved code I personally wrote and experiments I did (one on hypertext, one on stylometric clustering, one on Boolean function query properties, one on improved simulation of stabilizer circuits—for the last of these, the code is actually still used by others).  While this is all from the period 1994-2004 (these days, if I need any coding done, I use the extremely high-level programming language called “undergrad”), I don’t think it’s entirely true to say that I “never got my hands dirty.”

But even if I hadn’t had any of those experiences, or other theoretical computer scientists hadn’t had analogous ones, your questions still strike me as unfair.  They’re no more fair than cornering a star coder or other practical person with questions like, “Have you ever proved a theorem?  A nontrivial theorem?  Why is BPP contained in P/poly?  What’s the cardinality of the set of Turing-degrees?”  If the coder can’t easily answer these questions, would you say it means that she has “disdain for theorists”?  (I was expecting some discussion of this converse question in your email, and was amused when I didn’t find any.)

Personally, I’d say “of course not”: maybe the coder is great at coding, doesn’t need theory very much on a day-to-day basis and doesn’t have much free time to learn it, but (all else equal) would be happy to know more.  Maybe the coder likes theory as an outsider, even has friends from her student days who are theorists, and who she’d go to if she ever did need their knowledge for her work.  Or maybe not.  Maybe she’s an asshole who looks down on anyone who doesn’t have the exact same skill-set that she does.  But I certainly couldn’t conclude that from her inability to answer basic theory questions.

I’d say just the same about theorists.  If they don’t have as much experience building robots as they should have, don’t know as much about large software projects as they should know, etc., then those are all defects to add to the long list of their other, unrelated defects.  But it would be a mistake to assume that they failed to acquire this knowledge because of disdain for practical people rather than for mundane reasons like busyness or laziness.

Indeed, it’s also possible that they respect practical people all the more, because they tried to do the things the practical people are good at, and discovered for themselves how hard they were.  Maybe they became theorists partly because of that self-discovery—that was certainly true in my case.  Maybe they’d be happy to talk to or learn from a practical roboticist like yourself, but are too shy or too nerdy to initiate the conversation.

Speaking of which: yes, let’s let bloom a thousand collaborations between theorists and practitioners!  Those are the lifeblood of science.  On the other hand, based on personal experience, I’m also sensitive to the effect where, because of pressures from funding agencies, theorists have to try to pretend their work is “practically relevant” when they’re really just trying to discover something cool, while meantime, practitioners have to pretend their work is theoretically novel or deep, when really, they’re just trying to write software that people will want to use.  I’d love to see both groups freed from this distorting influence, so that they can collaborate for real reasons rather than fake ones.

(I’ve also often remarked that, if I hadn’t gravitated to the extreme theoretical end of computer science, I think I might have gone instead to the extreme practical end, rather than to any of the points in between.  That’s because I hate the above-mentioned distorting influence: if I’m going to try to understand the ultimate limits of computation, then I should pursue that wherever it leads, even if it means studying computational models that won’t be practical for a million years.  And conversely, if I’m going to write useful software, I should throw myself 100% into that, even if it means picking an approach that’s well-understood, clunky, and reliable over an approach that’s new, interesting, elegant, and likely to fail.)


by Scott at August 28, 2014 05:12 PM UTC

If you're used to writing mathematics, but haven't paid much attention to model theory, you probably think a fully-quantified mathematical sentence is generally either true or false. Fermat's last theorem, for instance, can be written as the following sentence: For all positive integers a, b, c, and n, if n > 2, then a^n + b^n ≠ c^n. This sentence is fully quantified: the four variables a, b, c, and n are all covered by the quantifier "for all positive integers". It's one of the true ones, if difficult to prove.

But when we're working with the logic of graphs, a (fully-quantified) sentence is itself just another mathematical object, and its truth is relative: it might be true for some graphs and false for others. Consider, for instance, the following sentence about an undirected graph: "There exist vertices v and w such that v and w are adjacent, and for all vertices u, if u and v are adjacent, then u equals w." It can be satisfied only when v is a vertex whose degree is exactly one, and w is its unique neighbor. We can write this more concisely using a notation in which adjacency is written using a tilde:

∃v ∃w ((v ∼ w) ∧ ∀u ((u ∼ v) → (u = w)))

Let's give this sentence a name, D1. Then D1 is true for a graph that has a degree-one vertex, such as the complete bipartite graph K1,4. But it's false for a graph that doesn't have such a vertex, such as the complete graph K4. If a sentence is true for a graph, we say that the graph "models" the sentence, and we can also write that in mathematical notation:

K1,4 ⊨ D1

This kind of logic, in which the only things we can talk about are vertices and their adjacencies, is called the first order logic of graphs, and it's kind of weak. Each of its sentences is equivalent to an algorithm that can contain nested loops over vertices, if-then-else logic involving adjacency tests and equality, and the ability to return Boolean values, but nothing else. For instance:

def d1(G):
    # does some vertex v have w as its one and only neighbor?
    for v in G:
        for w in G:
            if G.adjacent(v, w):
                for u in G:
                    if G.adjacent(u, v) and u != w:
                        break      # v has a neighbor other than w
                else:
                    return True    # w is the unique neighbor of v
    return False

This is good enough to recognize some families of graphs (such as the ones with a finite set of forbidden induced subgraphs) but not many others. For instance, I don't know how to describe the distance-hereditary graphs in this way. They can be described by forbidden induced subgraphs, but infinitely many of them, and we're not allowed to write infinitely-long sentences.
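As an illustration of the loop translation (my own sketch, not from the post), here is what a finite-forbidden-subgraph test looks like in this nested-loop style: detecting an induced four-vertex path. The small Graph class is an assumed stand-in for whatever interface the G.adjacent calls in d1 come from.

```python
import itertools

class Graph:
    # minimal adjacency wrapper matching the interface used by d1 above
    def __init__(self, n, edges):
        self.vertices = range(n)
        self.edges = {frozenset(e) for e in edges}
    def __iter__(self):
        return iter(self.vertices)
    def adjacent(self, v, w):
        return frozenset((v, w)) in self.edges

def has_induced_p4(G):
    # first-order-style check: do distinct vertices a-b-c-d form an
    # induced path (three edges present, the other three pairs non-adjacent)?
    for a, b, c, d in itertools.permutations(list(G), 4):
        if (G.adjacent(a, b) and G.adjacent(b, c) and G.adjacent(c, d)
                and not G.adjacent(a, c) and not G.adjacent(a, d)
                and not G.adjacent(b, d)):
            return True
    return False
```

As with any first-order sentence, the nesting depth (and hence the polynomial degree of the running time) matches the number of quantified variables.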

On the other hand, the weakness of first-order logic means that we can prove interesting facts about it. For instance, every first-order sentence defines a family of graphs that can be recognized in polynomial time. Also, we have the 0-1 law: if S is any sentence in first-order logic then the probability that a graph chosen uniformly at random among all n-vertex graphs models S is either zero or one in the limit as n goes to infinity. Using the 0-1 law, even though we can't describe the distance-hereditary graphs precisely in first-order logic, we can get an approximate description that's good enough to prove that almost all random graphs are not distance-hereditary. A distance-hereditary graph either has a degree-one vertex (it models D1) or it has two twin vertices, vertices whose sets of neighbors (not counting the two twins themselves) are identical. The existence of twins can also be described by a first-order sentence T:

∃v ∃w ((v ≠ w) ∧ ∀u (((u ≠ v) ∧ (u ≠ w)) → ((u ∼ v) ↔ (u ∼ w))))

But for a uniformly random graph, both the expected number of degree-one vertices and the expected number of twin pairs can be calculated directly from these formulas, and both are exponentially small in the number n of vertices. So almost all graphs do not model D1, do not model T, and therefore cannot be distance-hereditary.
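To make "exponentially small" concrete for D1: in a uniformly random n-vertex graph each vertex has degree exactly one with probability (n-1)/2^(n-1), so the expected number of degree-one vertices is n(n-1)/2^(n-1). A small sketch (my own, not from the post) computes this and checks it against a Monte Carlo estimate:

```python
import itertools
import random

def expected_degree_one(n):
    # E[# degree-one vertices] in a uniform random n-vertex graph:
    # each vertex has exactly one neighbor with probability (n-1)/2^(n-1)
    return n * (n - 1) / 2 ** (n - 1)

def sample_degree_one(n, trials=2000, seed=1):
    # Monte Carlo estimate of the same expectation
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        adj = {v: set() for v in range(n)}
        for u, v in itertools.combinations(range(n), 2):
            if rng.random() < 0.5:   # each edge present independently
                adj[u].add(v)
                adj[v].add(u)
        total += sum(1 for v in adj if len(adj[v]) == 1)
    return total / trials
```

Already for n = 30 the exact expectation is 870/2^29, under 2e-6, which is the quantitative content behind "almost all graphs do not model D1."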

The name "first order" should be a hint that there's another kind of logic, "second order logic", and there is. In second order logic, the variables can represent complicated structures built out of k-ary relations (for instance, entire graphs), the quantifiers quantify over these structures, and we need more primitives to be able to look at what's inside these structures. The idea of using second order logic seems to be somewhat controversial in mathematics, in part because there's not a uniquely-defined way of assigning meanings to statements in this logic, but there's a restricted subset of the second order logic of graphs, called monadic second order logic, where these problems do not arise. Or actually there are two such subsets: MSO1 and MSO2.

MSO1 is what you get from the first order logic described above when you add another type of variable for sets of vertices (usually written with capital letters) and you allow quantification over sets of vertices. The only other feature beyond first order logic that's necessary to define this logic is the ability to test whether a vertex belongs to a set. It's convenient to write formulas using more complicated tests such as whether one set is a subset of another, but those can be broken down into membership tests. We can also get the effect of using these sets as variable types that can be quantified over, by instead quantifying over all vertices but then testing whether the results of the quantification belong to the given set. For instance we can write sentences D1[X] and T[X] that have the same form as D1 and T, but restrict all their variables to be in X. The effect of this restriction would be to test whether the subgraph induced by X has a degree-one vertex or has twins. A distance-hereditary graph is a graph in which every induced subgraph of two or more vertices has a degree-zero vertex, a degree-one vertex or twins, and this logic allows us to express this definition in a sentence DH, writing D0[X] for the analogous degree-zero test restricted to X:

∀X ((∃v ∃w ((v ∈ X) ∧ (w ∈ X) ∧ (v ≠ w))) → (D0[X] ∨ D1[X] ∨ T[X]))

A graph G models DH if and only if G is distance-hereditary. MSO2 is similar, but allows four types of variables: vertices, edges, and sets of vertices and edges. The ability to represent sets of edges allows it to express some properties (such as the property of having a Hamiltonian cycle) that cannot be expressed in MSO1.

Unlike first-order logic, we don't necessarily get efficient algorithms out of MSO expressions. Simulating the formula directly would involve an exponential-time search over all possible subsets of vertices or edges in a given graph. But that's not the only way to turn one of these formulas into an algorithm. In particular, we can apply Courcelle's theorem, which says that every MSO2 formula can be translated into a fixed-parameter tractable algorithm on graphs parameterized by their treewidth, and that every MSO1 formula can be translated into an FPT algorithm on graphs parameterized by their clique-width. In the example of the distance-hereditary graphs, we also know that all such graphs have bounded clique-width. So applying Courcelle and plugging in the fixed bound on the clique-width of these graphs immediately tells us that there's a polynomial time algorithm for recognizing distance-hereditary graphs.

All of this is, I think, completely constructive: it's not just that an algorithm exists, but in principle we could automatically translate the formula into the algorithm. It's also completely useless in practice because the dependence on the parameter is ridiculously high (some kind of tower of powers). When an algorithm is found in this way, additional research is needed to find a more direct algorithm that reduces this dependence to something more reasonable like single-exponential with a small base, or even removes it to get a polynomial time algorithm. In the case of the distance-hereditary graphs, there's an easy polynomial algorithm: look for degree one vertices or twins, and whenever one of these patterns is found use it to reduce the size of the graph by one vertex. With a little more care one can even achieve linear time for distance-hereditary graph recognition.
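The easy polynomial algorithm just described can be written out in a few lines. The adjacency-dictionary format and the greedy elimination order below are my own choices, so treat this as an illustrative sketch rather than the linear-time algorithm mentioned above:

```python
def is_distance_hereditary(adj):
    # adj: dict mapping each vertex to its set of neighbors (my own format)
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy

    def remove(v):
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]

    while len(adj) > 1:
        # rule 1: remove a vertex of degree zero or one
        low = next((v for v in adj if len(adj[v]) <= 1), None)
        if low is not None:
            remove(low)
            continue
        # rule 2: remove one of a pair of twins (equal neighborhoods,
        # not counting the two twins themselves)
        twin = None
        vs = list(adj)
        for i, v in enumerate(vs):
            for w in vs[i + 1:]:
                if adj[v] - {w} == adj[w] - {v}:
                    twin = v
                    break
            if twin is not None:
                break
        if twin is None:
            return False   # no reduction applies: not distance-hereditary
        remove(twin)
    return True
```

Each iteration deletes one vertex, and removing a degree-one vertex or twin preserves distance-heredity (induced subgraphs of distance-hereditary graphs are distance-hereditary), so the greedy loop always finds a reduction exactly when the input is distance-hereditary.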

My latest preprint, "Crossing minimization for 1-page and 2-page drawings of graphs with bounded treewidth" (arXiv:1408.6321, with Michael Bannister, to appear at Graph Drawing), uses this same logical approach to attack some problems related to book embedding. We had a paper with Joe Simons in GD 2013 that showed that, for graphs formed from trees by adding a very small number of edges, we can find 1-page and 2-page book drawings with a minimum number of crossings in FPT time. In the new paper, we characterize these drawings using second order logic and apply Courcelle's theorem, allowing us to generalize these algorithms to the graphs of low treewidth, a much larger class of inputs. But because we use Courcelle's theorem, our algorithms are completely impractical. More research is needed to find a way to minimize crossings in practice.

at August 28, 2014 12:43 AM UTC

Authors: Elizaveta Frenkel, Andrey Nikolaev, Alexander Ushakov
Download: PDF
Abstract: The classic knapsack and related problems have natural generalizations to arbitrary (non-commutative) groups, collectively called knapsack-type problems in groups. We study the effect of free and direct products on their time complexity. We show that free products in certain sense preserve time complexity of knapsack-type problems, while direct products may amplify it.

at August 28, 2014 12:40 AM UTC

Authors: Ciaran McCreesh, Patrick Prosser
Download: PDF
Abstract: A clique in a graph is a set of vertices, each of which is adjacent to every other vertex in this set. A k-clique relaxes this requirement, requiring vertices to be within a distance k of each other, rather than directly adjacent. In theory, a maximum clique algorithm can easily be adapted to solve the maximum k-clique problem. We use a state-of-the-art maximum clique algorithm to show that this is feasible in practice, and introduce a lazy global domination rule which sometimes vastly reduces the search space. We include experimental results for a range of real-world and benchmark graphs, and a detailed look at random graphs.

at August 28, 2014 12:41 AM UTC