Theory of Computing Blog Aggregator

This is the second of two posts by Samira Daruki on the streaming sessions at SODA 2015. For the first post, see here. 

In the third paper from the streaming graph family in SODA15, "Parameterized Streaming: Maximal Matching and Vertex Cover", Chitnis, Cormode, Hajiaghayi and Monemizadeh introduce a new approach to handling graph streams called parameterized streaming algorithms. In addition to the insertion-only model, they consider the dynamic model of streaming graphs, in which the input is a sequence of insertions and deletions of edges.

This dynamic model of streaming graph processing is popular when the graph size is changing, and has recently received much attention due to breakthroughs by Ahn, Guha and McGregor (one and two). Over these two papers, they showed the first results for a number of graph problems over dynamic streams. This has provoked much interest in what can be computed over dynamic graph streams, although there is still little work on solving graph optimization problems in this model. The challenge is that when an edge is deleted, repairing the solution can require substantial work, so we need to make sure that the algorithm has enough information to do so, while keeping only a bounded amount of working space. (ed: not surprisingly, some of these ideas are useful for non-streaming dynamic graph algorithms: witness the paper by Kapron, King and Mountjoy on dynamic connectivity in (randomized) worst-case polylog time from SODA a few years ago)

Returning to parameterized streaming: in this paper, instead of solving the optimization problem on the graph stream exactly, the goal is to solve the "parameterized" version of the problem, where a parameter $k$ is given and we want to solve the following decision problem:
Is there a solution with size bounded by $k$? 
The motivation for parameterizing the problem comes from real-world applications in which the solution to the graph problem is small compared to the size of the input (i.e. sublinear in the input size). In these cases, the interesting challenge is to solve graph optimization problems in a streaming fashion using space bounded by some function of the "solution size" rather than the "input size".

One technique used to solve parameterized problems is kernelization, which uses polynomial-time preprocessing to map the input to an equivalent input of smaller size $f(k)$ (called a kernel) with a new parameter value $k' \le g(k)$, for computable functions $f$ and $g$.

In this paper, by combining kernelization techniques with randomized sketch structures, the first streaming algorithms for the parameterized version of the Vertex Cover problem are obtained. The main idea is to maintain a maximal matching of the underlying graph in a streaming fashion, and then run the well-known kernelization algorithm for Vertex Cover on the maintained maximal matching. The data structure that maintains the maximal matching uses the $k$-sample recovery sketching algorithm, a generalization of linear sketching for $\ell_0$-sampling, as its main tool, applying it to the neighborhood (incident edges) of each vertex in the resulting matching. As edges are inserted or deleted, these sketches can be updated without needing knowledge of the full neighborhood of the nodes. Deletions, however, pose a challenge: as edges are deleted, we need an intelligent mechanism to ensure the matching remains maximal using only the limited stored information.
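The matching-based kernel is easy to sketch in the insertion-only case (a minimal toy version in Python; the paper's actual data structure uses sketches to support deletions, which this sketch does not attempt):

```python
def vc_kernel(edge_stream, k):
    """Greedy maximal matching over an edge stream, truncated at k edges.

    If the matching exceeds k edges, no vertex cover of size <= k exists,
    since a cover needs at least one endpoint per (vertex-disjoint)
    matching edge.  Otherwise, by maximality every edge touches a matched
    vertex, so the <= 2k matched endpoints form a vertex cover of size
    <= 2k -- an O(k)-size kernel to search within.
    """
    matched = set()
    size = 0
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            size += 1
            if size > k:
                return None   # NO: every vertex cover has size > k
    return matched            # kernel: a cover of size <= 2k
```

Note that the space used is $O(k)$ rather than $O(n)$: the stream is never stored, only the truncated matching.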

Another nice result here is a tight lower bound of $\Omega(k^2)$ (by reduction from the INDEX problem in communication complexity) on the space complexity of any (randomized) streaming algorithm for parameterized Vertex Cover, which holds even in the insertion-only model.

Besides the general insertion-only and dynamic models, a restricted dynamic model is also discussed, in which we know for sure that at each time $i$, the vertex cover of the graph induced by the edges seen so far has size at most $k$. With this promise, they develop a dynamic parameterized streaming algorithm whose space usage matches the proved lower bound.

It is interesting to think about other NP-hard problems in the framework of parameterized streaming, to explore how kernelization can help in this direction, and to see whether there are other powerful hammers for overcoming the challenges that arise in designing streaming algorithms for hard problems.

Along with the three papers discussed above, there was another paper on streaming presented at SODA (by Naumovitz and Saks) which provides a deterministic polylogarithmic-space streaming algorithm for approximating distance to monotonicity for a sequence of $n$ numbers, compared to the corresponding randomized result presented at SODA two years ago.

While I won't discuss this work here, so as to keep these posts just about streaming graph algorithms, I encourage interested readers to take a look at this paper as well, as the last one in the streaming family of SODA15.

by Suresh Venkatasubramanian at January 28, 2015 06:27 AM UTC

Authors: Maarten Löffler, Martin Nöllenburg, Frank Staals
Download: PDF
Abstract: Point feature map labeling is a geometric problem, in which a set of input points must be labeled with a set of disjoint rectangles (the bounding boxes of the label texts). Typically, labeling models either use internal labels, which must touch their feature point, or external (boundary) labels, which are placed on one of the four sides of the input points' bounding box and which are connected to their feature points by crossing-free leader lines. In this paper we study polynomial-time algorithms for maximizing the number of internal labels in a mixed labeling model that combines internal and external labels. The model requires that all leaders are parallel to a given orientation $\theta \in [0,2\pi)$, whose value influences the geometric properties and hence the running times of our algorithms.

at January 28, 2015 01:41 AM UTC

Authors: Bernhard Schölkopf, Krikamol Muandet, Kenji Fukumizu, Jonas Peters
Download: PDF
Abstract: We describe a method to perform functional operations on probability distributions of random variables. The method uses reproducing kernel Hilbert space representations of probability distributions, and it is applicable to all operations which can be applied to points drawn from the respective distributions. We refer to our approach as {\em kernel probabilistic programming}. We illustrate it on synthetic data, and show how it can be used for nonparametric structural equation models, with an application to causal inference.

at January 28, 2015 01:40 AM UTC

Authors: Clément L. Canonne
Download: PDF
Abstract: The field of property testing of probability distributions, or distribution testing, aims to provide fast and (most likely) correct answers to questions pertaining to specific aspects of very large datasets. In this work, we consider a property of particular interest, monotonicity of distributions. We focus on the complexity of monotonicity testing across different models of access to the distributions; and obtain results in these new settings that differ significantly from the known bounds in the standard sampling model.

at January 28, 2015 01:41 AM UTC

Authors: Christoph Haase, Stefan Kiefer
Download: PDF
Abstract: We show that the Kth largest subset problem and the Kth largest m-tuple problem are in PP and hard for PP under polynomial-time Turing reductions. Several problems from the literature were previously shown NP-hard via reductions from those two problems, and by our main result they become PP-hard as well. We also provide complementary PP-upper bounds for some of them.

at January 28, 2015 01:40 AM UTC

Authors: Haris Aziz, Serge Gaspers, Simon Mackenzie, Nicholas Mattei, Nina Narodytska, Toby Walsh
Download: PDF
Abstract: The probabilistic serial (PS) rule is one of the most prominent randomized rules for the assignment problem. It is well-known for its superior fairness and welfare properties. However, PS is not immune to manipulative behaviour by the agents. We initiate the study of the computational complexity of an agent manipulating the PS rule. We show that computing an expected utility better response is NP- hard. On the other hand, we present a polynomial-time algorithm to compute a lexicographic best response. For the case of two agents, we show that even an expected utility best response can be computed in polynomial time. Our result for the case of two agents relies on an interesting connection with sequential allocation of discrete objects.

at January 28, 2015 01:41 AM UTC

Authors: Yuto Nakashima, Tomohiro I., Shunsuke Inenaga, Hideo Bannai, Masayuki Takeda
Download: PDF
Abstract: We present the first worst-case linear-time algorithm to compute the Lempel-Ziv 78 factorization of a given string over an integer alphabet. Our algorithm is based on nearest marked ancestor queries on the suffix tree of the given string. We also show that the same technique can be used to construct the position heap of a set of strings in worst-case linear time, when the set of strings is given as a trie.

at January 28, 2015 01:41 AM UTC

I am pleased to announce that the EATCS Awards Committee, consisting of Fedor Fomin, Kim G. Larsen and Vladimiro Sassone (chair), has selected Christos Papadimitriou (UC Berkeley, USA) as the recipient of the EATCS Award 2015. Congratulations to Christos!

The award, which is given to acknowledge extensive and widely recognized contributions to theoretical computer science over a lifelong scientific career, will be presented to Christos at ICALP 2015, which will be held in Kyoto, Japan, during 6-10 July 2015. The list of previous recipients of the EATCS Award is here. An official laudatio for the award is forthcoming. What follows is a short preliminary laudatio penned for this blog post.

Christos Papadimitriou’s body of work is of amazing breadth and depth, and has had a profound and lasting influence on many areas of Computer Science.

In an era of great specialization, Christos Papadimitriou stands out as a present-day Renaissance man. He is an intellectual who, citing the title of one of his essays, is not afraid of asking "big queries" and  applies the “computational lens” to shed light on important problems in several areas of scientific enquiry, ranging from economics to the theory of evolution. While doing what he might himself call “extroverted Computer Science”, he has contributed truly seminal work to a large number of fields within our subject, including algorithmics, complexity theory, computational game theory, database theory, internet and sensor nets, optimization and robotics.

Christos Papadimitriou is also one of the very best expositors and teachers within our field. He has written classic textbooks on the theory of computation, combinatorial optimization, database concurrency control, computational complexity and algorithms. In so doing, he has helped to inspire several generations of computer scientists.

If that wasn't enough, Christos Papadimitriou is a tireless expositor who is able to explain the beauty of our discipline to a general educated public. He is not afraid to cross boundaries, and to use literary forms such as novels (see "Turing: A Novel About Computation") and comics (see the graphic novel "Logicomix") to offer accessible expositions of the science of computing and its origins.

To sum up, Christos Papadimitriou is one of those rare scientists who combines a large, influential and varied body of scientific results with the gifts of an inspiring teacher and of a great communicator.

by Luca Aceto at January 27, 2015 10:27 PM UTC

As a child, I recall learning the Latin tag “Ira furor brevis est” (Anger is a brief madness). It makes the point that anger is irrational; really, as game theorists we ought never to get angry. On the other hand, emotional modelling is important for us as computer scientists: the article Computationally Modeling Human Emotion was highlighted on the front cover of the December issue of the CACM. A recent article, The Commitment Function of Angry Facial Expressions, by Lawrence Ian Reed, Peter DeScioli, and Steven A. Pinker (RDP in what follows), provides a satisfying game-theoretic explanation. The clue is in the Latin tag: you may benefit from being irrational, if you can convince your opponent that you are irrational. This helps you get a better payoff in the ultimatum game.

Recall that in the ultimatum game, player 1 gets (provisionally) a sum of money that he is to share with player 2. He does this by offering player 2 some percentage of the money, and if player 2 accepts, the money is divided accordingly, but if player 2 rejects, neither player gets anything. This means that a rational player 2 should accept even a derisory offer from player 1, but in experiments the money tends to be split more evenly. If player 2 can convince player 1 that he is not rational (will reject a derisory offer) he stands to do better.
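As a toy illustration (the payoff numbers are hypothetical, not from the RDP paper), the ultimatum game can be written down in a few lines, with player 2's strategy reduced to a minimum acceptable offer:

```python
def ultimatum(pot, offer, min_accept):
    """Ultimatum game payoffs as (player 1, player 2).

    Player 2 accepts iff the offer meets their threshold; rejection
    leaves both players with nothing.
    """
    if offer >= min_accept:
        return (pot - offer, offer)
    return (0, 0)

# A fully "rational" responder accepts even a derisory offer...
assert ultimatum(100, 1, 0) == (99, 1)
# ...while a credibly angry responder makes low offers unprofitable,
# pushing a rational proposer toward a more even split.
assert ultimatum(100, 1, 40) == (0, 0)
assert ultimatum(100, 40, 40) == (60, 40)
```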

The RDP paper tested the ultimatum game on several hundred players (over Amazon’s Mechanical Turk), in which player 1 got to see a video clip, purporting to come from player 2, announcing what player 2 would accept, either with or without an angry expression. A one-sentence summary of results is that the angry expression helps player 2 get a better offer from player 1. Player 2 wins by convincing player 1 he is irrational.

As a usage of game theory, does this still fall foul of Ariel Rubinstein’s critique of game theory, in which he says that some of the arguments for game theory do nothing more than attach labels to real-life situations? I feel it does help to justify the study of game theory, e.g. the study of the ultimatum game for its own sake, since it would have been hard to devise it on an ad-hoc basis in the context of the RDP paper.

Finally, while the RDP paper concentrates on the appearance of anger, as opposed to its reality, it seems like a basis for explaining the existence of anger in the first place. That is, in a world where we worry that undervaluing someone else’s welfare may cause them to succumb to a “furor brevis” and do something we’ll both regret, we all cooperate more. There are articles like this one that seek to explain anger and advise people on how to deal with it, but miss this game-theoretic point. So next time you see someone lose their temper, explain to them that their behaviour is not a bug but a feature: one that works against the interest of the individual but in the interest of the tribe. Then duck for cover.

by Paul Goldberg at January 27, 2015 11:08 AM UTC

This two-part series is written by my student Samira Daruki.

Modern graph data sets are too large to fit in memory, so the streaming model is one of the more popular and attractive ones for analyzing massive graphs: in this model, for an input graph $G = (V, E)$ with $n$ vertices and $m$ edges, the edges arrive in an arbitrary order in a stream and the algorithm can only use $\tilde{O}(n)$ space. The algorithm is allowed several passes over the stream, but ideally it makes just one pass.

For many graph problems, such as matching, spanning trees, graph sparsification, approximate distances and counting subgraphs, there now exist streaming algorithms with space complexity $O(n \,\text{poly}(\log n))$. In these algorithms we usually assume that the nodes of the graph can be stored in memory but the edges cannot. Notice that for constructing matchings, spanners and sparsifiers, the output size is often $\Omega(n)$, which forces the streaming algorithm to use $\Omega(n)$ space.
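For intuition, here is the textbook one-pass greedy algorithm for maximal matching, which uses the $O(n)$ space just mentioned (a generic sketch, not the algorithm of any particular paper discussed here):

```python
def streaming_maximal_matching(edge_stream):
    """One pass over the edge stream: keep an edge iff both endpoints are
    currently unmatched.  The result is a maximal matching.  Space is
    O(n): one bit of 'matched' state per vertex, plus the matching
    itself, which can contain Omega(n) edges."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```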

But if you don't need to report the output, then this can be avoided. For an example, see the work of Kapralov, Khanna and Sudan from SODA 2014, which approximates the matching size to a $\text{poly}(\log n)$ factor using $\text{poly}(\log n)$ space in a random stream (where edges appear in a random order rather than arbitrarily).

Thus, the question now is:
can we obtain a $\text{poly}\log$ space streaming algorithm for approximating the solution cost for other graph problems? 
Consider for instance MAX-CUT. There are several results on approximating the maximum cut in graphs in a non-streaming model; the trivial approach is to take a random cut. This selects half of the edges in expectation, resulting in a factor $1/2$-approximation.

This implies that in the streaming model we can just count the number of edges $m$ and output $\frac{m}{2}$, which yields an $O(\log n)$-space algorithm. By keeping a sample of the edge set we can get a different tradeoff: a $(1+\epsilon)$-approximation algorithm which uses $O(\frac{n}{\epsilon^2})$ space.
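The trivial bound is easy to check empirically (a throwaway sketch; the graph and trial count are arbitrary choices):

```python
import random

def random_cut_size(n, edges, rng):
    """Assign each vertex to a side uniformly at random; count cut edges."""
    side = [rng.random() < 0.5 for _ in range(n)]
    return sum(side[u] != side[v] for u, v in edges)

# A small arbitrary graph: a 20-cycle plus chords, so m = 40.
n = 20
edges = [(i, (i + 1) % n) for i in range(n)] + \
        [(i, (i + 7) % n) for i in range(n)]
rng = random.Random(0)
avg = sum(random_cut_size(n, edges, rng) for _ in range(5000)) / 5000
# Each edge is cut with probability 1/2, so the average is close to m/2 = 20.
assert abs(avg - len(edges) / 2) < 2
```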

Can we get a streaming algorithm with better than a factor-$2$ approximation using just $\text{poly}(\log n)$ space?

A paper by Kapralov, Khanna and Sudan in the streaming session of SODA15 this year answers this question. It is the latest in a line of results on streaming graph problems by Kapralov and others from SODA 2012, 2013 and 2014 (mentioned above).

Here is their main result:
For any constant $\epsilon > 0$ a single pass streaming algorithm for approximating the value of MAX-CUT  to a factor $2 − \epsilon$ requires $\Omega(\sqrt{n})$ space, even in the random order model.
This result rules out the possibility of $\text{poly}(\log n)$ space, but suggests that $\tilde{O}(\sqrt{n})$ space cost might be possible in some specific settings.

Another major result of this paper is as follows:

For any constant $\epsilon > 0$ a single pass streaming algorithm for approximating MAX-CUT value to factor $1 + \epsilon$ requires $n^{1−O(\epsilon)}$ space in adversarial streams.

The main fact they use here is the connection between the MAX-CUT value and (distance from) bipartiteness:
if a graph $G$ with $m$ edges is $\beta$-far from being bipartite, then the MAX-CUT value of $G$ is at most $(1-\beta)m$. 
The other simple observation is that any algorithm that computes a $\gamma$-approximation to MAX-CUT distinguishes between bipartite graphs and graphs that are $(1-\frac{1}{\gamma})$-far from being bipartite. Thus to show that no streaming algorithm using space $c$ can achieve a $\gamma$-approximation with failure probability at most $\delta$, it is enough to show that no streaming algorithm using space $c$ can distinguish between bipartite graphs and graphs which are $(1-\frac{1}{\gamma})$-far from being bipartite with probability at least $1-\delta$.

Given these facts, the core idea for proving the main result is to exhibit a distribution over inputs on which any $(2-\epsilon)$-approximation to MAX-CUT requires $\Omega(\sqrt{n})$ space.

More precisely, the input graph instances are based on random Erdős–Rényi graphs, which are bipartite in the YES case and non-bipartite in the NO case. In order to achieve a $(2-\epsilon)$-factor gap for the MAX-CUT in this structure, the expected degree of the vertices is chosen to be $\Theta(\frac{1}{\epsilon^2})$.

This way, the input streaming graph can be partitioned and given to the algorithm in $\Omega(\frac{1}{\epsilon^2})$ phases, which can be simulated as a $\frac{1}{\epsilon^2}$-party one-way communication game.

Then the main streaming lower bound for approximate MAX-CUT follows by reducing a variant of Boolean Hidden Matching (BHM), called Distributional Boolean Hidden Partition (D-BHP), to MAX-CUT on these input instances, and showing that $\Omega(\sqrt{n})$ space is necessary to distinguish the two cases.

There are many technical details in performing this reduction, but roughly speaking they show that any algorithm that solves MAX-CUT on the constructed instances must solve the two-party communication problem in at least one of the phases.

The paper leaves some interesting open problems:

  • One is whether breaking the $2$-approximation barrier in $\sqrt{n}$ space is possible if we are allowed $\text{poly}(\log n)$ passes over the input stream.
  • It is also interesting to think about designing streaming algorithms for other graph problems using $o(n)$ space.

This brings us to another paper presented in this session: Streaming Algorithms for Estimating the Matching Size in Planar Graphs and Beyond (by Esfandiari, Hajiaghayi, Liaghat, Monemizadeh and Onak), which answers this latter question for maximum matching in planar graphs using $o(n)$ space.

Here is the main result:
If the underlying graph is planar, then there is a streaming algorithm which provides an $O(1)$-approximation to the maximum matching with high probability using $O(n^{\frac{2}{3}})$ space.
The main idea for proving the result here is to use a known structural graph property:
Classify the nodes of the input graph by degree into light (deg $< 9$) and heavy (deg $\ge 9$) vertices, and define the shallow edges as those with two light endpoints. Then we have the following nice property (writing $m$ for the maximum matching size, $s$ for the number of shallow edges and $h$ for the number of heavy vertices):
$$ \frac{\max (s, h)}{12} \leq m \leq h + s$$

Using this structural fact as the main tool, the algorithm constructs a small matching (of size bounded by $c n^{\frac{2}{3}}$) greedily as it reads the edges, and estimates the number of shallow edges and heavy vertices in the subgraph induced by a sample of $c n^{\frac{2}{3}}$ vertices; combining these, it approximates the size of the maximum matching to within a constant factor.
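A quick sanity check of the structural bound on a toy planar graph (a 5-vertex path: every vertex is light and every edge is shallow; the brute-force matching routine is only for illustration):

```python
from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]      # a 5-vertex path
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

heavy = {v for v, d in deg.items() if d >= 9}  # empty: all degrees <= 2
shallow = [(u, v) for u, v in edges
           if u not in heavy and v not in heavy]

def max_matching_size(edges):
    """Brute force: the largest set of pairwise vertex-disjoint edges."""
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            if len({v for e in sub for v in e}) == 2 * k:
                return k
    return 0

m = max_matching_size(edges)                   # 2 for this path
s, h = len(shallow), len(heavy)
assert max(s, h) / 12 <= m <= h + s            # the structural bound
```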

In addition to the planar case, they show that similar results are achievable for approximating maximum matching in other graph classes, such as $d$-degenerate graphs and forests.

Coming up: parameterized streaming and VERTEX COVER.

by Suresh Venkatasubramanian at January 27, 2015 07:28 AM UTC

(All of the math for this problem is here)
My Discrete Math Honors TA Ioana, who is Romanian, showed me a Romanian
math problem book and told me the following problem:

(All $a_i$ in this post are assumed to be natural numbers.)

Show that for all $n \ge 6$ there exists $(a_1,\dots,a_n)$ such that $1/a_1^2 + \dots + 1/a_n^2 = 1$.

(sum of reciprocals squared)

Normally my curiosity exceeds my ego and I would look up the answer.
But it was in Romanian! Normally I would ask her to read the answer to me.
But I was going out of town! Normally I would look up the answer on the
web. But this is not the kind of thing the web is good at!

So I did the obvious thing: worked on it while watching the first four
episodes of Homeland Season 2. And I solved it! Either try to solve it
yourself OR go to the link.

Some possibly open questions come out of this

1) I also prove that for all $k$ there is an $n_0 = n_0(k)$ such that for all $n \ge n_0$ there exists $(a_1,\dots,a_n)$ such that $1/a_1^k + \dots + 1/a_n^k = 1$.

(sum of reciprocal $k$th powers)

We showed above that $n_0(2) \le 6$; it's easy to show $n_0(2) \ge 6$, so $n_0(2) = 6$.

Obtain upper and lower bounds on $n_0(k)$.
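The claim $n_0(2) = 6$ can be sanity-checked by brute force (spoiler: this reveals a witness; the search bound of 12 is an arbitrary choice):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# Search nondecreasing 6-tuples (a_1,...,a_6) with 2 <= a_i <= 12 whose
# squared reciprocals sum to exactly 1, using exact rational arithmetic.
# (a_i = 1 is impossible: the remaining terms would have to sum to 0.)
witnesses = [t for t in combinations_with_replacement(range(2, 13), 6)
             if sum(Fraction(1, a * a) for a in t) == 1]
assert (2, 2, 2, 3, 3, 6) in witnesses   # 3/4 + 2/9 + 1/36 = 1
```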

2) What is the complexity of the following problem:

Given $k$ and $n$, find out if there exists $(a_1,\dots,a_n)$ such that $1/a_1^k + \dots + 1/a_n^k = 1$.

If so, then find the values $(a_1,\dots,a_n)$.

(We know of an example where the Greedy method does not work.)

3) What is the complexity of the following problem: Just as above
but now we want to know HOW MANY solutions.

4) Meta question: How hard are these questions? The original one was
on the level of a high school or college math competition. The rest
might be easy or hard. I suspect that getting an exact formula for
$n_0(k)$ is hard. I also suspect that proving that this is hard
will be hard.

by GASARCH at January 26, 2015 08:43 PM UTC

Associate Professor / Professor of Computer Science. Start date 1 September 2015, or as soon as possible thereafter.



by theorycsjobs at January 26, 2015 08:18 PM UTC

The Department of Computer Science at the University of Oxford is planning to make an appointment at associate/full professor level with effect from 1 September 2015 or as soon as possible thereafter. Applicants should hold a PhD in computer science or a related subject and have experience in any area related to algorithms or complexity.  

The details are here

Please help spread the word.

by Luca Aceto at January 26, 2015 02:08 PM UTC

Associate Professorship (or Professorship) of Computer Science (Algorithms) at the University of Oxford with Tutorial Fellowship at Magdalen College. To start around Sept 2015

Webpage: http://


by theorycsjobs at January 26, 2015 12:49 PM UTC

Gears and rings…


Denys Fisher was a British engineer and maker of board games and other toys. In 1965 he invented the Spirograph toy. Some speculate that he was influenced by the designs of the artist, composer, and film animator John Whitney, whose opening sequence for Alfred Hitchcock’s 1958 film “Vertigo” is considered the first use of computer graphics in cinema. The Spirograph toy involves drawing with a pen guided by a gear with m teeth going around inside a ring or around a track or other gear with x teeth. The kind of design you get depends on how x relates to m.

Today Ken and I want to talk about a basic notion of mathematics and theory that is simple to define, very useful, and yet seems to be tough for some students to get.

The notion is:

\displaystyle  x \equiv y \bmod (m).

Often mathematical ideas are not easy to trace: who defined multiplication first, or square roots, or any other basic notion? Often the notion is credited not to the person who invented it, but the one who first used it in some important way.

That is not the case for the notion of congruence. We know who defined it, when, and what they used it for. The “they” was the 21-year-old Carl Gauss, who in his famous Disquisitiones Arithmeticae (completed in 1798, published in 1801) introduced both the concept and the notation.

It is interesting that while Gauss selected a perfect notation for his brilliant notion, others were less excited. Adrien-Marie Legendre wanted to write

\displaystyle  x = y ,

with the m implicit. Others suggested variations of

\displaystyle  x \approx y.

Gauss had nailed the right notion and it stuck—thankfully.

Gauss objected to Legendre’s double use; Charles Babbage also said it violated the doctrine of one notation for one thing. What all seem to have had in common was wanting to view it as a relation between two things, but it really involves three: m as well as x and y.

A common expression that something is complex to understand is that it has “gears within gears.” As embodied in Spirograph, congruence is exactly that. Can we make it as easy as a child’s toy?


Why So Hard To Learn?

The definition is, of course, that

\displaystyle  x \equiv y \bmod (m)

if and only if {m} divides {x-y}.

I know most of you know this cold, but stay with us. We are not trying to advance your knowledge of this famous definition, but discuss why it is hard to learn. I think there are topics in math that are complex to state but can be learned without much difficulty. Yet this simple definition of Gauss is in my teaching experience not easy for many students to really get. The fundamental question is:

Why is the notion of congruence so difficult to understand?

At Tech I used to teach the beginning discrete math class for our majors. It covered all the usual stuff: basic number theory, graph theory, set theory, basic rules of logic, Galois cohomology, and recurrences. Actually we did not cover Galois anything, but I added that to see if you were reading. I often started with number theory, since the students—I thought—had seen numbers since grade school. But the concept of congruence always seemed hard to make register. Was it my poor teaching? Or was it something else?

A Theory

We have a modest theory on why congruence is tricky to learn. The main thesis is that it really is two important ideas that are thrown together. The problem is that the two ideas are not on equal footing. One is naturally high and abstract, the other so low one can easily “forget” it.

The first idea is that given an integer m there is the arithmetic modulo m. In many places on the web this is introduced as “clock arithmetic.” In the US this means that m = 12, while in Europe and elsewhere m = 24. But then it is explained how to add, subtract, and multiply numbers modulo m. Of course this is quite interesting, since this is essentially the study—in disguise—of finite commutative rings of a special type. The addition part is generated by the unit of the ring.

Gauss’s brilliant insight is that such simple objects had a deep and rich theory. The multiplicative structure of such a simple object is quite nontrivial. For example, there is the theorem that when the order—that is to say, the number of elements—is prime, then the nonzero elements form a cyclic group under multiplication. This is a deep theorem. Why should the 16 nonzero elements modulo 17 be cyclic? So are the 18 elements modulo 19, but Gauss found further properties that allow a regular 17-gon to be constructed by classical means and a regular 19-gon not. This verges into the theory of the Legendre symbol and quadratic reciprocity.
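The mod-17 claim is easy to verify directly (a quick check; here "cyclic" means some element generates all 16 nonzero residues by repeated multiplication):

```python
def is_generator(g, p):
    """True iff the powers of g hit every nonzero residue mod the prime p."""
    return {pow(g, k, p) for k in range(1, p)} == set(range(1, p))

generators = [g for g in range(1, 17) if is_generator(g, 17)]
# 3 is one generator, and there are phi(16) = 8 of them in total.
assert 3 in generators and len(generators) == 8
```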

These and many other important theorems are available for the elements modulo m. This is a rich area, with many problems remaining open today. So the first hitch is that students must learn some slice of the basics of what is really a deep theory. Even just saying “commutative ring” might sound like saying “Galois cohomology.” What compounds the trouble, we believe, is that while the basic results are not terribly difficult, the motivation is unclear. Why are these objects so important? Today the best answer is these objects play a critical role in cryptography and coding theory and other aspects of theory.

If we stopped there, perhaps that would be enough. Perhaps it would be less confusing to our students. Maybe I will try that next time I have to teach this material. I could say:

Pick a natural number {m>1}. Then define addition and multiplication on

\displaystyle  0,1,\dots,m-1

as follows…

We would then prove that it had lots of nice properties and we could give some nice applications. RSA anyone?
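Since modular arithmetic is what makes RSA tick, here is a toy round-trip with tiny, wholly insecure parameters (all the numbers are illustrative choices; `pow(e, -1, phi)` needs Python 3.8+):

```python
p, q = 5, 11                       # toy primes; real RSA uses huge ones
n, phi = p * q, (p - 1) * (q - 1)  # modulus 55 and Euler phi 40
e = 3                              # public exponent, coprime to phi
d = pow(e, -1, phi)                # private exponent: 3 * d = 1 (mod 40)

msg = 42                           # a "message" in 0..n-1
cipher = pow(msg, e, n)            # encrypt: msg^e mod n
assert pow(cipher, d, n) == msg    # decrypt recovers the message
```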

The Second Difficulty

The key to the power of congruence is: it throws information away. If two integers x and y are equal, then for any m,

\displaystyle  x \equiv y \bmod (m).

This means that congruence is in some sense a “forgetful functor.” This aspect underlies applications such as noted here:

Many problems in number theory reduce to the question of the solvability or unsolvability of some type of congruence. … In this connection, research into the question of the number of solutions of a congruence equation is of fundamental importance to number theory.

The sources, however, seem not to explain why this is true. The idea is simple but critical:

\displaystyle  x=y \implies x \equiv y \bmod (m),

for any m that you choose. This means that we can reduce showing {x \neq y} to showing that

\displaystyle  x \equiv y \bmod (m)

is false for some m. This is one of the big ideas in mathematics. Often it is hard to show that x and y are not equal, yet it is much easier to show they are not congruent. This is a brilliant insight, and a surprising one.

Note this is quite counter-intuitive. You take x and y and throw away information about them, and then it becomes easier to handle them. It becomes easier to prove that they are different. Amazing. When you have some freedom to choose m or know a bound on x and y you can use the same idea to prove them equal. Or at least you can gain confidence in equality by sampling m randomly.

Here is a simple example. How can we show that {x^2} can never equal {2y^2} (unless both x and y are zero)? First if they have a common factor we could divide it out without affecting the equation. That needs a pause for thought, but in fact we only need to visualize it when the factor is 3. Now we use the ‘forgetful’ trick and observe that if {x^2 = 2y^2} then

\displaystyle  x^{2} \equiv 2y^{2} \bmod (3).

We’ve agreed that 3 cannot divide both x and y. If it divides neither we get {1 \equiv 2 \bmod (3)} which is impossible because {(\pm 1)^{2} \equiv 1 \bmod (3)}. If 3 divides only x, then we get {0 \equiv 2y^{2} \bmod (3)}. But this implies that 3 divides y, which is again a contradiction. The same works if we start with 3 dividing only y. There is no way out. So {x^2 \neq 2y^2}.
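The case analysis can also be checked mechanically, since everything happens among the residues mod 3; a tiny Python search confirms that the only way to satisfy x^2 ≡ 2y^2 (mod 3) is with both residues zero:

```python
# Exhaustive check mod 3: which residue pairs (x, y) satisfy
# x^2 ≡ 2 y^2 (mod 3)?  Only x ≡ y ≡ 0 survives, which is
# exactly the heart of the argument above.
solutions = [(x, y) for x in range(3) for y in range(3)
             if (x * x - 2 * y * y) % 3 == 0]
assert solutions == [(0, 0)]
```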

How to Motivate?

Gauss seems to have thought the “forgetting” idea to be obvious beyond need of mention. At least I do not find any mention of it in my English edition of his book. Yet it strikes me both as a vital point and a connection that needs to be reinforced when teaching. It would be an interesting test of our hypothesis if emphasizing this simple “transfer principle” were shown to improve students’ results—or not.

The other difficulty remains problematic. David Mumford, who won a Fields Medal in 1974 for work in algebraic geometry and has recently started a blog on his website, co-wrote an obituary for Alexander Grothendieck with John Tate in the journal Nature. In his post on the editorial difficulties of explaining Grothendieck’s central notion of scheme to scientists in other fields, he wrote:

“To be honest, the central stumbling block for explaining schemes was the word ‘ring.’ … As for finite fields, in spite of John’s discomfort, I thought the numbers on a clock made a decent first exposure. OK, {\mathbb{Z}/12\mathbb{Z}} is not a field but what faster way to introduce finite rings than saying ‘a type of number that are added like the hours on a clock — 7 hours after 9 o’clock is not 16 o’clock, but 4 o’clock’?”

If the term ‘ring’ is a block for scientists then it needs special effort to convey the concept to students however it arises. Mumford says in his next paragraph:

“If math was introduced as connected to the rest of the world instead of being an isolated exercise, if it was shown to connect to money, to measuring the real world, to physics, chemistry and biology, to optimizing decisions and to writing computer code, fewer students would be turned off.”

Ken wonders whether reference to Spirograph and other cases of “gears within gears” can help here. In his novel Cryptonomicon, Neal Stephenson explained the role of congruence in breaking the Enigma code via an analogy to Alan Turing’s bicycle. Its rear wheel has one bent spoke and its chain has one vulnerable link. If they ever come together the bike will jam. The wheel’s sprocket has m teeth and the chain has x links. How frequently will this happen? Maybe never? This post emphasizes the idea of forgetting the total distance C that the chain has traveled and keeping only the congruence.
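The bicycle puzzle has a tidy answer in terms of gcd. Here is a toy Python model; the offset parameter, the simplifying assumption that each wheel revolution advances the chain by exactly m links, and the function name are all ours, not Stephenson's or Turing's:

```python
from math import gcd

def jams(m, x, offset):
    """Simplified model of Turing's bicycle: each wheel revolution
    advances the chain by m links around a loop of x links, so the
    bent spoke meets the weak link iff r*m ≡ offset (mod x) has a
    solution in r, i.e. iff gcd(m, x) divides the initial offset."""
    return offset % gcd(m, x) == 0

# When m and x are coprime, every starting offset eventually jams:
assert jams(5, 12, 7)
# When gcd(m, x) = 4, only offsets divisible by 4 ever jam:
assert not jams(4, 12, 7)
assert jams(4, 12, 8)
```

The point, as in the post, is that only the congruence class of the chain's travel mod x matters, never the total distance.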

Open Problems

We use congruences all the time in computer theory. Are there other types of “throw information away” methods that we can also use?

by Pip at January 26, 2015 04:08 AM UTC

Authors: Diptarka Chakraborty, Raghunath Tewari
Download: PDF
Abstract: Given a graph $G$ and two vertices $s$ and $t$ in it, {\em graph reachability} is the problem of checking whether there exists a path from $s$ to $t$ in $G$. We show that reachability in directed layered planar graphs can be decided in polynomial time and $O(n^\epsilon)$ space, for any $\epsilon > 0$. The previous best known space bound for this problem with polynomial time was approximately $O(\sqrt{n})$ space \cite{INPVW13}.

Deciding graph reachability in SC (simultaneously polynomial time and polylogarithmic space) is an important open question in complexity theory, and in this paper we make progress towards resolving this question.

at January 26, 2015 12:00 AM UTC

Authors: Pierre Guillon, Emmanuel Jeandel
Download: PDF
Abstract: Suppose that Alice and Bob are each given an infinite string, and they want to decide whether their two strings are in a given relation. How much communication do they need? How can communication be even defined and measured for infinite strings? In this article, we propose a formalism for a notion of infinite communication complexity, prove that it satisfies some natural properties and coincides, for relevant applications, with the classical notion of amortized communication complexity. Moreover, an application is given for tackling a conjecture about tilings and multidimensional sofic shifts.

at January 26, 2015 01:40 AM UTC

Authors: Niv Buchbinder, Moran Feldman, Roy Schwartz
Download: PDF
Abstract: Submodular function maximization has been studied extensively in recent years under various constraints and models. The problem plays a major role in various disciplines. We study a natural online variant of this problem in which elements arrive one-by-one and the algorithm has to maintain a solution obeying certain constraints at all times. Upon arrival of an element, the algorithm has to decide whether to accept the element into its solution and may preempt previously chosen elements. The goal is to maximize a submodular function over the set of elements in the solution.

We study two special cases of this general problem and derive upper and lower bounds on the competitive ratio. Specifically, we design a $1/e$-competitive algorithm for the unconstrained case in which the algorithm may hold any subset of the elements, and constant competitive ratio algorithms for the case where the algorithm may hold at most $k$ elements in its solution.

at January 26, 2015 01:41 AM UTC

Authors: Carl Feghali, Matthew Johnson, Daniël Paulusma
Download: PDF
Abstract: Let $G$ be a simple undirected graph on $n$ vertices with maximum degree $\Delta$. Brooks' Theorem states that $G$ has a $\Delta$-colouring unless $G$ is a complete graph, or a cycle with an odd number of vertices. To recolour $G$ is to obtain a new proper colouring by changing the colour of one vertex. We show an analogue of Brooks' Theorem by proving that from any $k$-colouring, $k>\Delta$, a $\Delta$-colouring of $G$ can be obtained by a sequence of $O(n^2)$ recolourings using only the original $k$ colours unless $G$ is a complete graph or a cycle with an odd number of vertices, or $k=\Delta+1$, $G$ is $\Delta$-regular and, for each vertex $v$ in $G$, no two neighbours of $v$ are coloured alike.

We use this result to study the reconfiguration graph $R_k(G)$ of the $k$-colourings of $G$. The vertex set of $R_k(G)$ is the set of all possible $k$-colourings of $G$ and two colourings are adjacent if they differ on exactly one vertex. We prove that for $\Delta\geq 3$, $R_{\Delta+1}(G)$ consists of isolated vertices and at most one further component which has diameter $O(n^2)$. This result enables us to complete both a structural classification and an algorithmic classification for reconfigurations of colourings of graphs of bounded maximum degree.

at January 26, 2015 12:00 AM UTC

Rob Calderbank is a mathematician of great caliber in coding and information theory, including its algebraic aspects. He is a winner of the Shannon Award and the Hamming Medal. Rob's 60th birthday celebration will be in La Jolla this weekend. He was a terrific mentor to me at AT&T in years long behind us, and I am looking forward to being at the celebration!

by metoo ( at January 25, 2015 01:07 PM UTC

We prove an $\omega(\log n)$ lower bound on the conondeterministic communication complexity of the Clique vs. Independent Set problem introduced by Yannakakis (STOC 1988, JCSS 1991). As a corollary, this implies superpolynomial lower bounds for the Alon--Saks--Seymour conjecture in graph theory. Our approach is to first exhibit a query complexity separation for the decision tree analogue of the UP vs. coNP question---namely, unambiguous DNF width vs. CNF width---and then "lift" this separation over to communication complexity using a result from prior work.

at January 25, 2015 11:53 AM UTC

Authors: Volker Diekert, Alexei G. Myasnikov, Armin Weiß
Download: PDF
Abstract: In various occasions the conjugacy problem in finitely generated amalgamated products and HNN extensions can be decided efficiently for elements which cannot be conjugated into the base groups. This observation asks for a bound on how many such elements there are. Such bounds can be derived using the theory of amenable graphs:

In this work we examine Schreier graphs of amalgamated products and HNN extensions. For an amalgamated product $G = H *_A K $ with $[H:A] \geq [K:A] \geq 2$, the Schreier graph with respect to $H$ or $K$ turns out to be non-amenable if and only if $[H:A] \geq 3$. Moreover, for an HNN extension of the form $G = <H,b | bab^{-1}=\phi(a), a \in A >$, we show that the Schreier graph of $G$ with respect to the subgroup $H$ is non-amenable if and only if $A \neq H \neq \phi(A)$.

As application of these characterizations we show that under certain conditions the conjugacy problem in fundamental groups of finite graphs of groups with free abelian vertex groups can be solved in polynomial time on a strongly generic set. Furthermore, the conjugacy problem in groups with more than one end can be solved with a strongly generic algorithm which has essentially the same time complexity as the word problem. These are rather striking results as the word problem might be easy, but the conjugacy problem might be even undecidable. Finally, our results yield another proof that the set where the conjugacy problem of the Baumslag group is decidable in polynomial time is also strongly generic.

at January 23, 2015 01:41 AM UTC

Authors: Meo Mespotine
Download: PDF
Abstract: Run Length Encoding (RLE) is one of the oldest data-compression algorithms available, a method for compressing large data into smaller and therefore more compact data. It compresses by scanning the data for repetitions of the same character in a row and storing the count (called the run) and the respective character (called the run_value) as the target data. Unfortunately it only compresses within strict and special cases. Outside of these cases it increases the data size, even doubling it in the worst cases compared to the original, unprocessed data. In this paper, we discuss modifications to RLE with which we store the run only for characters that are actually compressible, getting rid of a lot of useless data such as the runs of characters that are uncompressible in the first place. This is achieved by storing the character first and the run second. Additionally we create a bit-list of 256 positions (one for every possible ASCII character), in which we record whether a specific (ASCII) character is compressible (1) or not (0). Using this list, we can tell whether a character is compressible (store [the character]+[its run]) or not (store [the character] only; the next stored item is NOT a run but the following character). Using this list, we can also successfully decode the data (if the character is compressible, the next item is a run; if not, the next item is a normal character). With that, we store runs only for characters that are compressible in the first place. In fact, in the worst case, the encoded data always incurs an overhead of only the size of the bit-list itself. With an alphabet of 256 different characters (i.e. ASCII) this is a maximum of 32 bytes, no matter how big the original data was. [...]
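Reading between the lines of the abstract, the scheme might be sketched in Python roughly as follows. The choice of which characters count as compressible (here: any byte that ever appears in a run of length at least two) and the exact token layout are our own guesses, not details taken from the paper:

```python
def mrle_encode(data: bytes):
    """Sketch of the modified RLE: a 256-entry bit-list marks which
    byte values are followed by a run count in the output stream."""
    # Pass 1: mark a byte 'compressible' if it ever occurs in a run >= 2.
    compressible = [False] * 256
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        if j - i >= 2:
            compressible[data[i]] = True
        i = j
    # Pass 2: emit [value, run] for compressible bytes, [value] otherwise.
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        if compressible[data[i]]:
            out += [data[i], j - i]
        else:
            out += [data[i]] * (j - i)
        i = j
    return compressible, out

def mrle_decode(compressible, out):
    """The bit-list tells us whether the token after a value is a run."""
    decoded = bytearray()
    k = 0
    while k < len(out):
        value = out[k]
        if compressible[value]:
            decoded.extend([value] * out[k + 1])
            k += 2
        else:
            decoded.append(value)
            k += 1
    return bytes(decoded)
```

Note that decoding needs no escape characters: after a value marked compressible, the next token is always a run count, exactly as the abstract describes.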

at January 23, 2015 01:41 AM UTC

Authors: Peter Bürgisser, Christian Ikenmeyer, Jesko Hüttenhain
Download: PDF
Abstract: Let Det_n denote the closure of the GL_{n^2}(C)-orbit of the determinant polynomial det_n with respect to linear substitution. The highest weights (partitions) of irreducible GL_{n^2}(C)-representations occurring in the coordinate ring of Det_n form a finitely generated monoid S(Det_n). We prove that the saturation of S(Det_n) contains all partitions lambda with length at most n and size divisible by n. This implies that representation theoretic obstructions for the permanent versus determinant problem must be holes of the monoid S(Det_n).

at January 23, 2015 01:40 AM UTC

Authors: Tobias Brunsch, Kamiel Cornelissen, Bodo Manthey, Heiko Röglin, Clemens Rösner
Download: PDF
Abstract: The minimum-cost flow problem is a classic problem in combinatorial optimization with various applications. Several pseudo-polynomial, polynomial, and strongly polynomial algorithms have been developed in the past decades, and it seems that both the problem and the algorithms are well understood. However, some of the algorithms' running times observed in empirical studies contrast the running times obtained by worst-case analysis not only in the order of magnitude but also in the ranking when compared to each other. For example, the Successive Shortest Path (SSP) algorithm, which has an exponential worst-case running time, seems to outperform the strongly polynomial Minimum-Mean Cycle Canceling algorithm.

To explain this discrepancy, we study the SSP algorithm in the framework of smoothed analysis and establish a bound of $O(mn\phi)$ for the number of iterations, which implies a smoothed running time of $O(mn\phi (m + n\log n))$, where $n$ and $m$ denote the number of nodes and edges, respectively, and $\phi$ is a measure for the amount of random noise. This shows that worst-case instances for the SSP algorithm are not robust and unlikely to be encountered in practice. Furthermore, we prove a smoothed lower bound of $\Omega(m \phi \min\{n, \phi\})$ for the number of iterations of the SSP algorithm, showing that the upper bound cannot be improved for $\phi = \Omega(n)$.

at January 23, 2015 01:41 AM UTC

Authors: Daniel Gonçalves, Kolja Knauer, Benjamin Lévêque
Download: PDF
Abstract: We propose a simple generalization of Schnyder woods from the plane to maps on orientable surfaces of higher genus. This is done in the language of angle labelings. Generalizing results of De Fraysseix and Ossona de Mendez, and Felsner, we establish a correspondence between these labelings and orientations and characterize the set of orientations of a map that correspond to such a Schnyder wood. Furthermore, we study the set of these orientations of a given map and provide a natural partition into distributive lattices depending on the surface homology. This generalizes earlier results of Felsner and Ossona de Mendez. In the toroidal case, a new proof for the existence of Schnyder woods is derived from this approach.

at January 23, 2015 01:42 AM UTC

Authors: Weidong Li, Xi Liu, Xuejie Zhang, Xiaobo Cai
Download: PDF
Abstract: In this paper, we design an efficient algorithm for the energy-aware profit-maximizing scheduling problem, where the administrator of a high-performance computing system aims to maximize the profit per unit time. The running time of the proposed algorithm depends on the number of task types, while the running time of the previous algorithm depends on the number of tasks. Moreover, we prove that the worst-case performance ratio is close to 2, which may be the best possible. Simulation experiments show that the proposed algorithm is more accurate than the previous method.

at January 23, 2015 01:41 AM UTC

Authors: Manuela Bujorianu, Rafael Wisniewski
Download: PDF
Abstract: The interest in autonomous systems is increasing both in industry and academia. Such systems must operate with limited human intervention in a changing environment and must be able to compensate for significant system failures without external intervention. The most appropriate models of autonomous systems can be found in the class of hybrid systems that interact with their environment. This workshop brings together researchers interested in all aspects of autonomy and resilience of hybrid systems.

at January 23, 2015 01:41 AM UTC

Authors: Raphael Kramer, Nelson Maculan, Anand Subramanian, Thibaut Vidal
Download: PDF
Abstract: We propose a new speed and departure time optimization algorithm for the Pollution-Routing Problem (PRP) which runs in quadratic time. This algorithm is embedded into an iterated local search-based metaheuristic to achieve a combined speed, scheduling and routing optimization. Extensive computational experiments are conducted on classic PRP benchmark instances. Allowing delayed departure times from the depot significantly increases the search space, and helps reduce CO$_2$ emissions by 8.31% on the considered instances.

at January 23, 2015 01:41 AM UTC

This quarter, in my advanced algorithms class, I've been going through Vazirani's Approximation Algorithms book chapter-by-chapter, and learning lots of interesting material that I didn't already know myself in the process.

One of the things I recently learned (in covering chapter 6 on feedback vertex set approximation)* is that, although all the students have taken some form of linear algebra, many of them have never seen a vector space in which the base field is not the real numbers or in which the elements of the vector space are not tuples of real coordinates. So instead of discussing the details of that algorithm I ended up spending much of the lecture reviewing the theory of binary vector spaces. These are very important in algebraic graph theory, so I thought it might be helpful to write a very gentle introduction to this material here.

First of all we need the concept of a field. This is just a system of elements in which we can perform the usual arithmetic operations (addition, subtraction, multiplication, and division) and expect them to behave like the familiar real number arithmetic: addition and multiplication are associative and commutative, there are special values 0 and 1 that are the identities for addition and multiplication respectively, subtraction is inverse to addition, division is inverse to multiplication by anything other than zero, and multiplication distributes over addition. The field that's important for this material is a particularly simple one, GF(2), in which the required special values 0 and 1 are the only elements. The arithmetic of these two values can be described as ordinary integer arithmetic mod 2, or equivalently it can be described by saying that addition is Boolean xor and multiplication is Boolean and. Subtraction turns out to be the same as addition, and division by 1 (the only value that it's possible to divide by) is just the identity operation. It's not hard to verify that these operations have all the desired properties of a field, and doing so makes a useful exercise (Exercise 1).
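For the computationally inclined, Exercise 1 can even be done by brute force; here is a small Python sketch treating xor and and as the two field operations and checking a few of the axioms over the two elements:

```python
# GF(2): addition is Boolean xor, multiplication is Boolean and.
add = lambda a, b: a ^ b
mul = lambda a, b: a & b
F = (0, 1)

for a in F:
    assert add(a, 0) == a            # 0 is the additive identity
    assert mul(a, 1) == a            # 1 is the multiplicative identity
    assert add(a, a) == 0            # subtraction is the same as addition
    for b in F:
        assert add(a, b) == add(b, a)            # commutativity
        for c in F:
            # multiplication distributes over addition
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
```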

Next, a vector space is a collection of elements that can be added to each other and multiplied by scalars from a field. (One can generalize the same concept to other kinds of arithmetic than fields but then one gets modules instead of vector spaces.) The vector addition operation must be commutative and invertible; this implies that it has an identity element, and this element (whatever it happens to be) is called the zero vector. Additionally, scalar-scalar-vector multiplications must be associative, scalar multiplication by the special element 1 of the field must be the identity operation, and scalar multiplication must be distributive over both vector and field addition.

One easy way to construct vector spaces over a field F is to make its elements be k-tuples of elements of F with the addition and scalar multiplication operations acting independently on each coordinate, but it's not the only way. For the vector spaces used in this chapter, a different construction is more natural: we let the elements of the vector space be sets in some family of sets, and the vector addition operation be the symmetric difference of sets. The symmetric difference S Δ T of two sets S and T is the set of elements that occur in one but not both of S and T. This operation is associative, commutative, and invertible, where the inverse of a set is the same set itself: S Δ T Δ T = S regardless of which order you use to perform the symmetric difference operations. If a nonempty family of sets has the property that the symmetric difference of every two sets in the family stays in the family, then these sets can be interpreted as the elements of a vector space over GF(2) in which the vector addition operation is symmetric difference, the zero vector is the empty set (necessarily in the family because it's the symmetric difference of any other set with itself), scalar multiplication by 0 takes every set to the empty set, and scalar multiplication by 1 takes every set to itself. One has to verify that these addition and multiplication operations are distributive, but again this is a not-very-difficult exercise (Exercise 2).
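These operations are easy to experiment with in Python, whose set type provides symmetric difference directly as the ^ operator:

```python
# Symmetric difference as GF(2) vector addition on sets.
S = {1, 2, 3}
T = {2, 3, 4}

assert S ^ T == {1, 4}      # elements in exactly one of the two sets
assert S ^ T ^ T == S       # every set is its own additive inverse
assert S ^ S == set()       # the zero vector is the empty set
```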

As with other kinds of vector spaces, these vector spaces of sets have bases, collections of vectors such that everything in the vector space has a unique representation as a sum of scalar products of basis vectors. Every two bases have the same number of vectors as each other (Exercise 3: prove this), and this number is called the dimension of the vector space. If the dimension is d, the total number of vectors in the vector space is always exactly 2^d, because that is the number of different ways that you can choose a scalar multiple (0 or 1) for each basis vector.

The families of sets that are needed for this chapter are subsets of edges of a given undirected graph. These can also be interpreted as subgraphs of the graph, but they're not quite the same because the usual definition of a subgraph also allows you to specify a subset of the vertices (as long as all edges in the edge subset have endpoints in the vertex subset), and we won't be doing that. Every graph has three important vector spaces of this type associated with it, the edge space, the cycle space, and the cut space. The edge space is the family of all subsets of edges (including the set of all edges of the given graph and the empty set). That is, it is the power set of the set of all edges; it has a natural basis in which the basis vectors are the one-edge sets, and its dimension is the number of edges in the graph.

The cycle space is the family of all subsets of edges that have even degree at all of the vertices of the graph (Exercise 4: prove that this family is closed under symmetric difference operations). So it includes the simple cycles of the graph, but it also includes other subgraphs; for instance in the graph of an octahedron (a six-vertex graph with four edges at each vertex) the set of all edges is in the cycle space, as are the sets of edges formed by pairs of triangles that touch each other at a single vertex and the sets complementary to triangles or 4-cycles. It's always possible to find a basis for the cycle space in which the basis elements are themselves simple cycles; such a basis is called a cycle basis. For instance you can form a "fundamental cycle basis" by choosing a spanning forest of the given graph and then finding all cycles that have one edge e outside this forest and that also include the edges of the unique path in the forest that connects the endpoints of e. Or, for a planar graph, you can form a cycle basis by choosing one cycle for each bounded face of a planar embedding of the graph. There are lots of interesting algorithmic problems associated with the cycle space and its cycle bases, but for this chapter the main thing that's needed is to compute its dimension, which has the nice formula |E| − |V| + c, where E is the edge set of the given graph, V is the vertex set, and c is the number of connected components. One name for this dimension is the cyclomatic number of the graph, and the book chapter denotes it as cyc(G). (It's also possible to interpret it topologically as the first Betti number of the graph but for students who don't already know about binary vector spaces that would probably be more confusing than helpful.)
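The formula is also easy to compute; here is a short Python sketch (the bare-bones union-find and the function names are ours) that counts connected components and applies |E| − |V| + c:

```python
def cyclomatic_number(vertices, edges):
    """Dimension |E| - |V| + c of the cycle space, computed by counting
    connected components with a simple union-find."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v in edges:
        parent[find(u)] = find(v)
    c = len({find(v) for v in vertices})
    return len(edges) - len(vertices) + c

# A triangle has one independent cycle; a path (a tree) has none.
assert cyclomatic_number([1, 2, 3], [(1, 2), (2, 3), (3, 1)]) == 1
assert cyclomatic_number([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]) == 0
```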

The cut space of the graph doesn't take part in this chapter, but can be defined similarly as the set of all cut-sets of the graph. A cut of a graph is a partition of its vertices into two disjoint subsets; in some contexts we require the subsets to both be nonempty but we don't do that here, so the partition into an empty set and the set of all vertices is one of the allowed cuts. The corresponding cut-set is the set of edges that have one endpoint in each of the two subsets. The family of cut-sets is closed under symmetric difference (Exercise 5) so it forms a vector space, the cut space. If the edges are all given positive weights and the graph is connected, then the minimum weight basis of the cut space can be represented by a tree on the vertices of the graph, in which each tree edge determines a cut (the partition of the tree into two subtrees formed by deleting that edge) and has an associated number (the weight of its cut). This tree is called the Gomory–Hu tree of the graph and it came up (stripped of its linear-algebra origin) earlier, in an approximation for k-cuts in chapter 4. I also have a recent preprint on computing this basis and this tree for graphs that can be embedded onto low-genus surfaces: see arXiv:1411.7055.
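Exercise 5 can also be verified mechanically. The useful fact (not stated above, but easy to check by tracking endpoints mod 2) is that the symmetric difference of the cut-sets for vertex sides A and B is exactly the cut-set for A Δ B; this Python sketch confirms it exhaustively on one small graph:

```python
from itertools import combinations

def cut_set(graph_edges, side):
    """Edges with exactly one endpoint in the vertex set `side`."""
    return frozenset(e for e in graph_edges if len(side & set(e)) == 1)

V = {0, 1, 2, 3}
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# For every pair of sides A, B (empty sides allowed, as in the text),
# the symmetric difference of their cut-sets is the cut-set of A Δ B.
for r in range(len(V) + 1):
    for A in map(set, combinations(V, r)):
        for s in range(len(V) + 1):
            for B in map(set, combinations(V, s)):
                assert cut_set(E, A) ^ cut_set(E, B) == cut_set(E, A ^ B)
```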

*Unrelatedly, in preparing to cover this topic, I was confused for a long time by a typo in this chapter. On page 56 it states that, for a minimal feedback set, "clearly" the sum over feedback vertices of the number of components formed by deleting that one vertex equals the number of feedback vertices plus the number of components that are formed by deleting the whole feedback set but that touch only one vertex in the set. This is not true. What is true, and what is needed for the later argument, is that the left hand side is greater than or equal to the right hand side.

at January 22, 2015 11:29 PM UTC

It’s hiring season and many departments are in the midst of deciding who to bring in for interviews.   When discussing the possibility of hiring theorists, here are a couple of reasons why this might be a particularly good year to do so:

1) This year there is an exceptionally strong and deep pool of theorists on the job market, due in part to the closing of the Microsoft Research Silicon Valley lab.  Many departments should be able to hire strong candidates (including ones not from MSR SV) who would be hard to get in a typical year.

2) Theory has a large role to play in many of the current priority areas for hiring, such as big data, machine learning, privacy, and security.  Many theorists can also serve as bridges to other departments, such as math, physics, economics, OR, EE, and more.

by salilvadhan at January 22, 2015 06:44 PM UTC

Ely asked me to remind everyone that the deadline for the 26th Annual Symposium on Combinatorial Pattern Matching is fast approaching. You have until 2nd February to match your combinatorial pattern.

by Andrew at January 22, 2015 03:01 PM UTC

We show a directed and robust analogue of a boolean isoperimetric type theorem of Talagrand. As an application, we give a monotonicity testing algorithm that makes $\tilde{O}(\sqrt{n}/\epsilon^2)$ non-adaptive queries to a function $f:\{0,1\}^n \mapsto \{0,1\}$, always accepts a monotone function and rejects a function that is $\epsilon$-far from being monotone with constant probability.

at January 22, 2015 02:21 PM UTC

My high school daughter Molly was reading her Kindle and said "You know how you can choose a word and the Kindle will give you a definition. There should be an algorithm that chooses the right definition to display depending on the context". She was reading a book that took place in the 60's that referred to a meter man. This was not, as the first definition of "meter" would indicate, a 39-inch-tall male. A meter man is the person who records the electric or gas meter at your house. Today we would use a gender-neutral term like "meter reader" if technology hadn't made them obsolete.

Molly hit upon a very difficult natural language processing challenge known as word-sense disambiguation with the most successful approaches using supervised machine learning techniques. If anyone from Amazon is reading this post, the Kindle dictionary would make an excellent application of word-sense disambiguation. You don't need perfection, anything better than choosing the first definition would be welcome. Small tweaks to the user interface where the reader can indicate the appropriate definition would give more labelled data to produce better algorithms.

And to Molly: Keep saying "There should be an algorithm". Someday there might be. Someday you might be the one to discover it.

by Lance Fortnow ( at January 22, 2015 01:03 PM UTC

Authors: Ryan R. Curtin, Dongryeol Lee, William B. March, Parikshit Ram
Download: PDF
Abstract: Numerous machine learning algorithms contain pairwise statistical problems at their core---that is, tasks that require computations over all pairs of input points if implemented naively. Often, tree structures are used to solve these problems efficiently. Dual-tree algorithms can efficiently solve or approximate many of these problems. Using cover trees, rigorous worst-case runtime guarantees have been proven for some of these algorithms. In this paper, we present a problem-independent runtime guarantee for any dual-tree algorithm using the cover tree, separating out the problem-dependent and the problem-independent elements. This allows us to just plug in bounds for the problem-dependent elements to get runtime guarantees for dual-tree algorithms for any pairwise statistical problem without re-deriving the entire proof. We demonstrate this plug-and-play procedure for nearest-neighbor search and approximate kernel density estimation to get improved runtime guarantees. Under mild assumptions, we also present the first linear runtime guarantee for dual-tree based range search.

at January 22, 2015 01:41 AM UTC

Authors: Ron Adar, Leah Epstein
Download: PDF
Abstract: Let T=(V,E) be a tree graph with non-negative weights defined on the vertices. A vertex z is called a separating vertex for u and v if the distances of z to u and v are not equal. A set of vertices L\subseteq V is a feasible solution for the non-landmarks model (NL), if for every pair of distinct vertices, u,v \in V\setminus L, there are at least two vertices of L separating them. Such a feasible solution is called a "landmark set". We analyze the structure of landmark sets for trees and design a linear time algorithm for finding a minimum cost landmark set for a given tree graph.

at January 22, 2015 01:41 AM UTC

Authors: Clément Aubert, Marc Bagnol, Thomas Seiller
Download: PDF
Abstract: We give a characterization of deterministic polynomial time computation based on an algebraic structure called the resolution semiring, whose elements can be understood as logic programs or sets of rewriting rules over first-order terms. More precisely, we study the restriction of this framework to terms (and logic programs, rewriting rules) using only unary symbols. We prove it is complete for polynomial time computation, using an encoding of pushdown automata. We then introduce an algebraic counterpart of the memoization technique in order to show its PTIME soundness. We finally relate our approach and complexity results to complexity of logic programming. As an application of our techniques, we show a PTIME-completeness result for a class of logic programming queries which use only unary function symbols.

at January 22, 2015 01:41 AM UTC

Authors: Peter Chin, Anup Rao, Van Vu
Download: PDF
Abstract: In this paper, we present and analyze a simple and robust spectral algorithm for the stochastic block model with $k$ blocks, for any $k$ fixed. Our algorithm works with graphs having constant edge density, under an optimal condition on the gap between the density inside a block and the density between the blocks. As a co-product, we settle an open question posed by Abbe et. al. concerning censor block models.

at January 22, 2015 01:41 AM UTC

Authors: Jurek Czyzowicz, Konstantinos Georgiou, Evangelos Kranakis, Lata Narayanan, Jarda Opatrny, Birgit Vogtenhuber
Download: PDF
Abstract: Assume that two robots are located at the centre of a unit disk. Their goal is to evacuate from the disk through an exit at an unknown location on the boundary of the disk. At any time the robots can move anywhere they choose on the disk, independently of each other, with maximum speed $1$. The robots can cooperate by exchanging information whenever they meet. We study algorithms for the two robots to minimize the evacuation time: the time when both robots reach the exit.

In [CGGKMP14] the authors gave an algorithm defining trajectories for the two robots yielding evacuation time at most $5.740$, and also proved that any algorithm has evacuation time at least $3 + \frac{\pi}{4} + \sqrt{2} \approx 5.199$. We improve both the upper and lower bounds on the evacuation time of a unit disk. Namely, we present a new non-trivial algorithm whose evacuation time is at most $5.628$, and show that any algorithm has evacuation time at least $3 + \frac{\pi}{6} + \sqrt{3} \approx 5.255$. To achieve the upper bound, we design an algorithm which, counter-intuitively, forces a meeting between the two robots even if neither of them has yet found the exit.
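The two lower-bound constants quoted above are easy to sanity-check numerically; a minimal standalone snippet (not from the paper) evaluating both expressions:

```python
import math

# Previous lower bound from [CGGKMP14]: 3 + pi/4 + sqrt(2)
old_lower = 3 + math.pi / 4 + math.sqrt(2)

# Improved lower bound from this paper: 3 + pi/6 + sqrt(3)
new_lower = 3 + math.pi / 6 + math.sqrt(3)

print(f"old lower bound: {old_lower:.4f}")  # about 5.1996
print(f"new lower bound: {new_lower:.4f}")  # about 5.2556

# The new bound is a strict improvement over the old one,
# and both sit below the upper bound of 5.628.
assert old_lower < new_lower < 5.628
```

Note that the abstract's figures $5.199$ and $5.255$ are truncations of $5.1996\ldots$ and $5.2556\ldots$ respectively.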

at January 22, 2015 01:41 AM UTC


Two years ago, I blogged when Lily was born.  Today I can blog that she runs, climbs, swims (sort of), constructs 3-word sentences, demands chocolate cake, counts to 10 in both English and Hebrew, and knows colors, letters, shapes, animals, friends, relatives, the sun, and the moon.  To all external appearances she's now as conscious as you and I are (and considerably more so than the cat in the photo).

But the most impressive thing Lily does—the thing that puts her far beyond where her parents were at the same age, in a few areas—is her use of the iPad.  There she does phonics exercises, plays puzzle games that aren’t always trivial for me to win, and watches educational videos on YouTube (skipping past the ads, and complaining if the Internet connection goes down).  She chooses the apps and videos herself, easily switching between them when she gets bored.  It’s a sight to behold, and definitely something to try with your own toddler if you have one.  (There’s a movement these days that encourages parents to ban kids from using touch-screen devices, fearful that too much screen time will distract them from the real world.  To which I reply: for better or worse, this is the real world that our kids will grow up into.)

People often ask whether Dana and I will steer Lily into becoming a theoretical computer scientist like us.  My answer is “hell no”: I’ll support Lily in whatever she wants to do, whether that means logic, combinatorics, algebraic geometry, or even something further afield like theoretical neuroscience or physics.

As recent events illustrated, the world is not always the kindest place for nerds (male or female), with our normal ways of thinking, talking, and interacting sometimes misunderstood by others in the cruelest ways imaginable.  Yet despite everything, nerds do sometimes manage to meet, get married, and even produce offspring with nerd potential of their own.  We’re here, we’re sometimes inappropriately clear, and we’re not going anywhere.

So to life!  And happy birthday Lily!

by Scott at January 21, 2015 10:05 PM UTC