Theory of Computing Blog Aggregator

Our conference submission was just desk rejected because the PC is unwilling to stop reading at Page 12.

We were asked to format submissions according to the LIPIcs style file, which we did, and to limit the submission length to 12 pages, excluding references; omitted proofs could be placed in the appendix. Our submission was 14.5 pages, excluding references, with no appendix.

Over the years I have submitted and also reviewed many papers that went slightly beyond the page limit; and nobody paid attention. In a few cases I have also seen PC chairs asking for the resubmission of papers which egregiously violated the formatting requirements, such as crammed, 10-point, 2-column submissions. In the present case, despite our good intentions witnessed by the usage of the LIPIcs style file, our misplacement of the \bibliography command was just too severe to be offered a second chance.

In general, there have been many discussions about formatting requirements, for example here. The summary of my position is in the title, and details follow.

Of all the useless, time-consuming rules, the one of imposing formatting requirements strikes me as particularly outrageous because it is one that can actually be fixed. I think it is clear to everyone that a proper formatting of the paper has zero relevance compared to the myriad of actual problems that can affect submissions, such as a poorly written introduction, no intuitive exposition of the proof techniques, or missing citations. The only definite outcome of formatting requirements is that authors waste time.

I like the following paragraph from this article:

Science fiction novels of a half-century ago dramatized conflicts between humans and robots, asking if people were controlling their technologies, or if the machines were actually in charge. A few decades later, with the digital revolution in juggernaut mode, the verdict is in. The robots have won. Although the automatons were supposedly going to free people by taking on life’s menial, repetitive tasks, frequently, technological innovation actually offloads such jobs onto human beings.

I ponder upon it every time I need to waste 30 minutes with LaTeX, instead of being given the obvious option of submitting a .pdf file and possibly a .tex file from which a computer would automatically extract author names, title, and all other relevant information, including categories. They too can be quite accurately deduced from the text of the paper, can’t they? The waste of time is magnified if your paper has pictures (ours had two, carefully prepared with LaTeX extensions). No wonder we don’t see many pictures in papers.

I am not aware of any benefit of forcing papers into different LaTeX styles, except one: publishers can better monetize our donations if they are properly formatted. So this is one more reason to kill the current editorial system. The role of a conference/journal should be to place quality stamps on online papers, not to drag people into a battle with LaTeX.

Survival tip: One day I felt particularly vexed by being forced to convert all my LaTeX single-line equations, which fit nicely on a single-column format, into align environments that could fit in the 2-column proceedings format. So I offered $100 for a package that would make the process easier. I wanted a package that could recognize if I was using the “&” or the “\\” commands, and if so switch to align, or to multline. And it should also recognize if I am not using labels (in which case I need align*, not align), and so on.

Of course, a LaTeX saint quickly provided a package, and turned down my $100. I have used this package ever since. I only write \[ \] and then, magically, the computer knows what I need. At least in this skirmish I have won, and have offloaded a mindless task onto a computer…

…though good luck making this work with the next formatting requirements.
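The package itself is not named in the post, but the trick is easy to approximate. Below is a hedged sketch, in Python rather than TeX, of a preprocessing pass that uses exactly the cues described above ("&", "\\", and \label) to pick a display environment. All function names and heuristics here are mine, not the actual package's:

```python
import re

def convert_displays(tex: str) -> str:
    """Rewrite \\[ ... \\] displays into align/multline variants, roughly
    mimicking the environment-choosing trick described in the post."""
    def choose(match: re.Match) -> str:
        body = match.group(1)
        # no \label means we want the unnumbered (starred) variant
        starred = "" if r"\label" in body else "*"
        if "&" in body:
            env = "align" + starred      # aligned multi-line display
        elif r"\\" in body:
            env = "multline" + starred   # one long equation, broken across lines
        else:
            return match.group(0)        # plain one-line display: leave as-is
        return f"\\begin{{{env}}}{body}\\end{{{env}}}"
    return re.sub(r"\\\[(.*?)\\\]", choose, tex, flags=re.DOTALL)
```

For example, `convert_displays(r"\[ x &= y \\ z &= w \]")` yields an `align*` environment, while a display with `\\` but no `&` becomes `multline`. A real package does this at TeX-macro level, so it also works inside class files; this sketch only illustrates the decision logic.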

by Manu at September 30, 2014 02:02 PM UTC

We consider arithmetic complexity classes that are in some sense dual to the classes VP(Fp) that were introduced by Valiant. This provides new characterizations of the complexity classes ACC^1 and TC^1, and also provides a compelling example of a class of high-degree polynomials that can be simulated via arithmetic circuits of much lower degree.

at September 30, 2014 09:04 AM UTC

Authors: Rustem Valeyev
Download: PDF
Abstract: Using problems in the class NP as examples, we consider questions concerning the accuracy of existing algorithms, and of algorithms that may be devised in the future, for solving problems on discrete structures.

at September 30, 2014 12:40 AM UTC

Authors: Christoph Haase, Stefan Kiefer
Download: PDF
Abstract: Given Markov chains and Markov decision processes (MDPs) whose transitions are labelled with non-negative integer costs, we study the probability of paths whose accumulated cost satisfies a Boolean combination of inequalities. We investigate the computational complexity of deciding whether this probability exceeds a given threshold. For acyclic Markov chains, we show that this problem is PP-complete, whereas for general Markov chains it is hard for the PosSLP problem and in PSpace. Moreover, for acyclic and general MDPs, we prove PSpace- and EXPTime-completeness, respectively. Our results significantly improve the state of the art of the complexity of computing reward quantiles in succinctly represented stochastic systems.
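As a concrete illustration of the quantity being decided (not of the paper's hardness constructions), for an acyclic chain the probability of reaching a target with accumulated cost within a budget can be computed by recursion over (state, remaining budget) pairs. The data format below is my own choice, purely for the sketch:

```python
from functools import lru_cache

def prob_cost_at_most(transitions, start, target, budget):
    """P[a run from `start` reaches `target` with total cost <= budget]
    in a finite acyclic Markov chain.  transitions[s] is a list of
    (successor, probability, nonnegative_int_cost) triples."""
    @lru_cache(maxsize=None)
    def f(state, remaining):
        if state == target:
            return 1.0
        # sum over outgoing transitions whose cost still fits in the budget
        return sum(p * f(nxt, remaining - c)
                   for nxt, p, c in transitions.get(state, ())
                   if c <= remaining)
    return f(start, budget)
```

Acyclicity guarantees the recursion terminates; the memoization keeps the work polynomial in the number of (state, budget) pairs, which is the source of the succinctness issues the abstract alludes to when the budget is given in binary.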

at September 30, 2014 12:41 AM UTC

Authors: Igor Stassiy
Download: PDF
Abstract: We present a provably more efficient implementation of the Minimum Norm Point Algorithm conceived by Fujishige than the one presented in \cite{FUJI06}. The algorithm solves the minimization problem for the class of functions known as submodular. Many important functions, such as the cut function of a graph, have the submodular property \cite{FUJI82}. It is known that the problem can also be solved in strongly polynomial time \cite{IWAT01}; however, the known theoretical bounds are far from being practical. We present an improved implementation of the algorithm, for which unfortunately no worst-case bounds are known, but which performs very well in practice. With the modifications presented, the algorithm performs an order of magnitude faster for certain submodular functions.

at September 30, 2014 12:41 AM UTC

Authors: Daniel Dadush, Oded Regev, Noah Stephens-Davidowitz
Download: PDF
Abstract: We present a substantially more efficient variant, both in terms of running time and size of preprocessing advice, of the algorithm by Liu, Lyubashevsky, and Micciancio for solving CVPP (the preprocessing version of the Closest Vector Problem, CVP) with a distance guarantee. For instance, for any $\alpha < 1/2$, our algorithm finds the (unique) closest lattice point for any target point whose distance from the lattice is at most $\alpha$ times the length of the shortest nonzero lattice vector, requires as preprocessing advice only $N \approx \widetilde{O}(n \exp(\alpha^2 n /(1-2\alpha)^2))$ vectors, and runs in time $\widetilde{O}(nN)$.

As our second main contribution, we present reductions showing that it suffices to solve CVP, both in its plain and preprocessing versions, when the input target point is within some bounded distance of the lattice. The reductions are based on ideas due to Kannan and a recent sparsification technique due to Dadush and Kun. Combining our reductions with the LLM algorithm gives an approximation factor of $O(n/\sqrt{\log n})$ for search CVPP, improving on the previous best of $O(n^{1.5})$ due to Lagarias, Lenstra, and Schnorr. When combined with our improved algorithm we obtain, somewhat surprisingly, that only O(n) vectors of preprocessing advice are sufficient to solve CVPP with (the only slightly worse) approximation factor of O(n).

at September 30, 2014 12:00 AM UTC

Authors: Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrak, Andreas Krause
Download: PDF
Abstract: Is it possible to maximize a monotone submodular function faster than the widely used lazy greedy algorithm (also known as accelerated greedy), both in theory and practice? In this paper, we develop the first linear-time algorithm for maximizing a general monotone submodular function subject to a cardinality constraint. We show that our randomized algorithm, Rand-Greedy, can achieve a (1-1/e) approximation guarantee to the optimum solution in time linear in the size of the data and independent of the cardinality constraint. We empirically demonstrate the effectiveness of our algorithm on submodular functions arising in data summarization, including training large-scale kernel methods and exemplar-based clustering. We observe that Rand-Greedy practically achieves the same utility value as lazy greedy but runs much faster. More surprisingly, we observe that in many practical scenarios Rand-Greedy does not evaluate the whole set of data points even once and still achieves indistinguishable results compared to lazy greedy.
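For intuition, here is a rough sketch of the stochastic-greedy idea the abstract describes: in each of the k rounds, scan only a random sample of elements rather than the full ground set, and add the sampled element with the largest marginal gain. The function names and the exact sample-size formula below are illustrative choices of mine, not taken verbatim from the paper:

```python
import math
import random

def stochastic_greedy(f, ground_set, k, eps=0.1, seed=0):
    """Maximize a monotone submodular set function f under |S| <= k by
    scanning only ~ (n/k) * ln(1/eps) random elements per round,
    instead of the whole ground set as plain greedy does."""
    rng = random.Random(seed)
    n = len(ground_set)
    sample_size = min(n, max(1, math.ceil((n / k) * math.log(1 / eps))))
    S, remaining = set(), set(ground_set)
    for _ in range(k):
        # sorted() just gives rng.sample a stable sequence to draw from
        candidates = rng.sample(sorted(remaining),
                                min(sample_size, len(remaining)))
        base = f(S)
        # pick the sampled element with the largest marginal gain
        best = max(candidates, key=lambda e: f(S | {e}) - base)
        S.add(best)
        remaining.remove(best)
    return S
```

With a small eps the sample covers everything and this degenerates to plain greedy; the point of the paper is that a modest sample already preserves the (1-1/e) guarantee in expectation while making the per-round cost independent of k.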

at September 30, 2014 12:41 AM UTC

Authors: Wojciech Chacholski, Martina Scolamiero, Francesco Vaccarino
Download: PDF
Abstract: A multifiltration is a functor indexed by $\mathbb{N}^r$ that maps any morphism to a monomorphism. The goal of this paper is to describe in an explicit and combinatorial way the natural $\mathbb{N}^r$-graded $R[x_1,\ldots, x_r]$-module structure on the homology of a multifiltration of simplicial complexes. To do that we study multifiltrations of sets and vector spaces. We prove in particular that the $\mathbb{N}^r$-graded $R[x_1,\ldots, x_r]$-modules that can occur as $R$-spans of multifiltrations of sets are the direct sums of monomial ideals.

at September 30, 2014 12:41 AM UTC

Authors: Sivaram Ambikasaran
Download: PDF
Abstract: This article discusses a more general and numerically stable Rybicki Press algorithm, which enables inverting and computing determinants of covariance matrices whose elements are sums of exponentials. The algorithm is exact in exact arithmetic and relies on introducing new variables and corresponding equations, thereby converting the matrix into a banded matrix of larger size. Linear-complexity banded algorithms for solving linear systems and computing determinants on the larger matrix enable linear-complexity algorithms for the initial semi-separable matrix as well. Benchmarks provided illustrate the linear scaling of the algorithm.

at September 30, 2014 12:41 AM UTC

Authors: Ankit Chauhan, B. V. Raghavendra Rao
Download: PDF
Abstract: We study structural aspects of randomized parameterized computation. We introduce a new class ${\sf W[P]}$-${\sf PFPT}$ as a natural parameterized analogue of ${\sf PP}$. Our definition uses the machine based characterization of the parameterized complexity class ${\sf W[P]}$ obtained by Chen [TCS 2005]. We translate most of the structural properties and characterizations of the class ${\sf PP}$ to the new class ${\sf W[P]}$-${\sf PFPT}$.

We study a parameterization of the polynomial identity testing problem based on the degree of the polynomial computed by the arithmetic circuit. We obtain a parameterized analogue of the well known Schwartz-Zippel lemma [Schwartz, JACM 80 and Zippel, EUROSAM 79].

Additionally, we introduce a parameterized variant of the permanent, and prove its $\#W[1]$-completeness.

at September 30, 2014 12:40 AM UTC

Authors: Chinmoy Dutta, Gopal Pandurangan, Rajmohan Rajaraman, Zhifeng Sun, Emanuele Viola
Download: PDF
Abstract: We study how to spread $k$ tokens of information to every node on an $n$-node dynamic network, the edges of which are changing at each round. This basic {\em gossip problem} can be completed in $O(n + k)$ rounds in any static network, and determining its complexity in dynamic networks is central to understanding the algorithmic limits and capabilities of various dynamic network models. Our focus is on token-forwarding algorithms, which do not manipulate tokens in any way other than storing, copying and forwarding them.

We first consider the {\em strongly adaptive} adversary model where in each round, each node first chooses a token to broadcast to all its neighbors (without knowing who they are), and then an adversary chooses an arbitrary connected communication network for that round with the knowledge of the tokens chosen by each node. We show that $\Omega(nk/\log n + n)$ rounds are needed for any randomized (centralized or distributed) token-forwarding algorithm to disseminate the $k$ tokens, thus resolving an open problem raised in~\cite{kuhn+lo:dynamic}. The bound applies to a wide class of initial token distributions, including those in which each token is held by exactly one node and {\em well-mixed} ones in which each node has each token independently with a constant probability.

We also show several upper bounds in varying models.

at September 30, 2014 12:41 AM UTC

Authors: Saad Quader
Download: PDF
Abstract: The problem of computing the Betweenness Centrality (BC) is important in analyzing graphs in many practical applications like social networks, biological networks, transportation networks, electrical circuits, etc. Since this problem is computation intensive, researchers have been developing algorithms using high performance computing resources like supercomputers, clusters, and Graphics Processing Units (GPUs). Current GPU algorithms for computing BC employ Brandes' sequential algorithm with different trade-offs for thread scheduling, data structures, and atomic operations. In this paper, we study three GPU algorithms for computing BC of unweighted, directed, scale-free networks. We discuss and measure the trade-offs of their design choices about balanced thread scheduling, atomic operations, synchronizations and latency hiding. Our program is written in NVIDIA CUDA C and was tested on an NVIDIA Tesla M2050 GPU.
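Since all three GPU variants build on Brandes' sequential algorithm, it may help to recall its two phases, which are exactly what the GPU implementations parallelize: a forward BFS that counts shortest paths, and a backward pass that accumulates dependencies. A plain-Python rendering for unweighted directed graphs:

```python
from collections import deque

def betweenness_centrality(adj):
    """Brandes' sequential algorithm for unweighted directed graphs.
    adj[v] is the list of successors of v; returns a dict of BC scores."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # forward phase: BFS from s, counting shortest paths (sigma)
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # backward phase: accumulate dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

The design choices the paper measures (thread scheduling, atomics, latency hiding) all concern how the two loops above are mapped onto GPU threads; the algorithmic skeleton stays the same.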

at September 30, 2014 12:41 AM UTC

Authors: Eduardo Canale, Pablo Romero, Gerardo Rubino
Download: PDF
Abstract: Let $G=(V,E)$ be a simple graph with $|V|=n$ nodes and $|E|=m$ links, a subset $K \subseteq V$ of \emph{terminals}, a vector $p=(p_1,\ldots,p_m) \in [0,1]^m$ and a positive integer $d$, called the \emph{diameter}. We assume nodes are perfect but links fail stochastically and independently, with probabilities $q_i=1-p_i$. The \emph{diameter-constrained reliability} (DCR for short) is the probability that the terminals of the resulting subgraph remain connected by paths composed of $d$ links or fewer. This number is denoted by $R_{K,G}^{d}(p)$. The general computation of the parameter $R_{K,G}^{d}(p)$ belongs to the class of $\mathcal{N}\mathcal{P}$-Hard problems, since it subsumes the computation of the probability that a random graph is connected.

A discussion of the computational complexity of DCR subproblems is provided in terms of the number of terminal nodes $k=|K|$ and the diameter $d$. Either when $d=1$, or when $d=2$ and $k$ is fixed, the DCR is in the class $\mathcal{P}$ of polynomial-time problems. The DCR becomes $\mathcal{N}\mathcal{P}$-Hard even if $k \geq 2$ and $d\geq 3$ are fixed, or in an all-terminal scenario when $d=2$. The traditional approach is to design either exponential exact algorithms or efficient solutions for particular graph classes.

The contributions of this paper are two-fold. First, a new recursive class of graphs is shown to admit efficient DCR computation. Second, we define a factorization method in order to develop an exact DCR computation in general. The approach is inspired by prior work on the determination of irrelevant links and the deletion-contraction formula.

at September 30, 2014 12:00 AM UTC



The 18th yearly school in mathematics is devoted this year to combinatorics. It will feature lecture series by Irit Dinur, Joel Hass, Peter Keevash, Alexandru Nica, Alexander Postnikov, Wojciech Samotij, and David Steurer, and additional activities. As usual, grants for local and travel expenses are possible.


by Gil Kalai at September 29, 2014 07:00 PM UTC

Luca Aceto has already advertised on his blog the calls for nominations for the Presburger award and the EATCS Distinguished Dissertation award; see the details at those links or on the EATCS web site. Here’s another one:

Call for Nominations for EATCS fellows 2015 : December 31, 2014


Please note: all nominees and nominators must be EATCS Members

Submit by December 31 of the current year for Fellow consideration by email to the EATCS Secretary ( The subject line of the email should read "EATCS Fellow Nomination - <surname of candidate>".


The EATCS Fellows Program is established by the Association to recognize outstanding EATCS Members for their scientific achievements in the field of Theoretical Computer Science. The Fellow status is conferred by the EATCS Fellows Selection Committee upon a person having a track record of intellectual and organizational leadership within the EATCS community. Fellows are expected to be “model citizens” of the TCS community, helping to develop the standing of TCS beyond the frontiers of the community.

In order to be considered by the EATCS Fellows Selection Committee, candidates must be nominated by at least four EATCS Members. Please verify your membership at

The EATCS Fellows Selection Committee consists of

  • Rocco De Nicola (IMT Lucca, Italy)
  • Paul Goldberg (Oxford, UK)
  • Anca Muscholl (Bordeaux, France)
  • Dorothea Wagner (Karlsruhe, Germany, chair)
  • Roger Wattenhofer (ETH Zurich, CH)


A nomination should consist of answers to the questions below. It can be co-signed by several EATCS members. At least two nomination letters per candidate are recommended. If you are supporting the nomination from within the candidate’s field of expertise, it is expected that you will be specific about the individual's technical contributions.

To be considered, nominations for 2015 must be received by December 31, 2014.

1. Name of candidate

Candidate's current affiliation and position

Candidate's email address, postal address and phone number

Nominator(s) relationship to the candidate

2. Short summary of candidate's accomplishments (citation -- 25 words or less)

3. Candidate's accomplishments: Identify the most important contributions that qualify the candidate for the rank of EATCS Fellow according to the following two categories:

A) Technical achievements

B) Outstanding service to the TCS community

Please limit your comments to at most three pages.

4. Nominator(s):


Affiliation(s), email and postal address(es), phone number(s)

by Paul Goldberg ( at September 29, 2014 02:24 PM UTC

Authors: Jiantao Jiao, Kartik Venkat, Yanjun Han, Tsachy Weissman
Download: PDF
Abstract: Maximum likelihood is the most widely used statistical estimation technique. Recent work by the authors introduced a general methodology for the construction of estimators for functionals in parametric models, and demonstrated improvements - both in theory and in practice - over the maximum likelihood estimator (MLE), particularly in high dimensional scenarios involving parameter dimension comparable to or larger than the number of samples. This approach to estimation, building on results from approximation theory, is shown to yield minimax rate-optimal estimators for a wide class of functionals, implementable with modest computational requirements. In a nutshell, a message of this recent work is that, for a wide class of functionals, the performance of these essentially optimal estimators with $n$ samples is comparable to that of the MLE with $n \ln n$ samples.

In the present paper, we highlight the applicability of the aforementioned methodology to statistical problems beyond functional estimation, and show that it can yield substantial gains. For example, we demonstrate that for learning tree-structured graphical models, our approach achieves, at the same accuracy, a significant reduction of the required data size compared with the classical Chow--Liu algorithm, which is an implementation of the MLE. The key step in improving the Chow--Liu algorithm is to replace the empirical mutual information with the estimator for mutual information proposed by the authors. Further, applying the same replacement approach to classical Bayesian network classification, the resulting classifiers uniformly outperform the previous classifiers on 26 widely used datasets.
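To make the "replace the empirical mutual information" point concrete, here is a minimal sketch of the classical Chow--Liu procedure with a plug-in MI estimator; swapping in a better estimator, as the paper proposes, means changing only one function. Names and data layout are my own:

```python
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """Empirical (plug-in) mutual information of two discrete samples.
    This is the piece the paper proposes to replace with a better estimator."""
    n = len(xs)
    pxy = Counter(zip(xs, ys)); px = Counter(xs); py = Counter(ys)
    return sum(c / n * log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def chow_liu_tree(data):
    """Chow-Liu structure learning: maximum spanning tree over pairwise
    mutual information.  data[i] is the sample column of variable i;
    returns the tree as a list of (i, j) edges."""
    m = len(data)
    edges = sorted(((mutual_information(data[i], data[j]), i, j)
                    for i in range(m) for j in range(i + 1, m)),
                   reverse=True)
    # Kruskal's algorithm with union-find, taking heaviest MI edges first
    parent = list(range(m))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]; x = parent[x]
        return x
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

With few samples the plug-in MI is biased upward, which is exactly why a better MI estimator changes which spanning tree is selected; the structure of the algorithm is untouched.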

at September 29, 2014 12:40 AM UTC

Authors: Sariel Har-Peled, Benjamin Raichel
Download: PDF
Abstract: We provide a general framework for getting expected linear time constant factor approximations (and in many cases FPTAS's) to several well known problems in Computational Geometry, such as $k$-center clustering and farthest nearest neighbor. The new approach is robust to variations in the input problem, and yet it is simple, elegant and practical. In particular, many of these well studied problems which fit easily into our framework, either previously had no linear time approximation algorithm, or required rather involved algorithms and analysis. A short list of the problems we consider include farthest nearest neighbor, $k$-center clustering, smallest disk enclosing $k$ points, $k$th largest distance, $k$th smallest $m$-nearest neighbor distance, $k$th heaviest edge in the MST and other spanning forest type problems, problems involving upward closed set systems, and more. Finally, we show how to extend our framework such that the linear running time bound holds with high probability.

at September 29, 2014 12:40 AM UTC

When Hong Kong was “handed over” to China on July 1st, 1997, the plan was that the city, now a Special Administrative Region, would retain its independent laws and institutions for 50 years, and it would have elections with universal suffrage (one person one vote). In 2007, it was decided that universal suffrage would start in 2017.

Discussion on how to regulate the 2017 elections has been going on for the last several months. A coalition of pro-democracy groups ran an informal referendum on the preferred system of election, gathering about 800,000 votes, or a fifth of the registered electorate. All the options in the referendum assumed no vetting process for the candidate, contrary to Beijing’s stance that any system for the 2017 election would only allow candidates pre-approved by the mainland government.

Afterwards (this happened during the summer), the pro-democracy groups organized an enormous rally, which had hundreds of thousands of participants, and announced plans to “occupy Central with love and peace” (Central contains the financial district) on October 1st if the Hong Kong legislature passed an election law in which candidates could run only with Beijing’s approval.

This was followed by an anti-democracy rally, partly attended by people bused in from across the border, which is a rather surreal notion; it’s like people are saying “we want our voices heard about the fact that we do not want our voices heard.”

A few days in advance of October 1st, a group of university students, some of them associated with the group Scholarism, started a sit-in at a government building. Scholarism made news three years ago, when it (successfully) fought the proposal to introduce a “patriotic education” curriculum in grade school.


People have been facing the police with umbrellas and goggles to protect themselves from pepper spray.


The plaza in front of the government building, where the sit-in started, has been cleared, but for the whole weekend both Central and the neighboring district of Admiralty have been filled by thousands of protesters, day and night.


There is a petition at that has already exceeded the threshold required to receive a response, but people might still want to sign it.

Considering how the Chinese government feels about students rallying for democracy, there is reason to be worried.


[photos taken from Facebook, credits unknown]

by luca at September 28, 2014 10:11 PM UTC

Professors are available to graduate students in different ways: in dept coffee rooms, in their office behind open doors, at weekly dept talks, via email, who knows, maybe some via their own apps! I do a video hangout with CS graduate students of Rutgers every week. It lasts 1 hour and is open to all: not just my students -- they don't make it -- or theory or database students, but all students. I did this in Spring and am doing this again in Fall. In Spring, we explored questions such as:

T1: How to apply for jobs.
T2: how to apply for internships.
T3: How to attend a conf
T4: How to give a talk.
T5: How to polish a research paper 1 hour before the conf deadline.
T6: How to start a company, if you want to...

There may be like 10 students in many sessions, most students speak freely during the hangout (because I do), and even though I schedule it for 1 hour, the informal nature of the conversation continues for 1.5--2 hrs. New topics emerged and we abandoned a few of the topics above. Some sessions got technical, e.g., into machine learning; others remained more social. Two topics worth mentioning:
  • On collaboration: I summarized that students collaborate with their advisor or summer internship mentors: in both cases, for the most part, the other party takes the responsibility to think ahead and steer the course of the research as needed. Most students don't know what I called "peer collaboration", which is to approach another prof or student at a conf or elsewhere and successfully collaborate. This involves starting off by describing some problem, maybe even feeding an initial observation; when the other party is not hooked, you up the ante and maybe even send an initial latex doc. The collaboration gets to a new place when the other party contributes some observations and starts editing the joint latex doc, and pronto, you are collaborating. Latex is the matchmaker! Is this all worth it? Yes: collaborations, once begun, don't typically die; they beget more research and even academic friendship over time.
  • On attending conferences: I communicated that even the social moments -- coffee breaks, banquet cruises -- are in an academic context, and one should not break away from professional or academic behavior.
For Fall, the topics are:
T1: What did you do in the summer, re.  CS Research
T2: How to set research goals for the semester and follow up
T3: How to find an advisor and research topic.
T4: How to make use of NY area research resources.
or whatever comes up.

by metoo ( at September 28, 2014 08:43 PM UTC

Alex Nikolov defended his thesis related to discrepancy, and is now on his way to U. Toronto, after a brief stop at MSR, Redmond. Congratulations, Sasho!

by metoo ( at September 28, 2014 06:56 PM UTC

We investigate the possible structural changes one can perform on a game graph without destroying the winning regions of the players playing a parity game on it. More precisely, given a game graph $(G,p)$ for which we can efficiently compute winning regions, can we remove or add a vertex or edge or change a single priority and maintain at least part of the winning region? Also, what about restricted classes of graphs, such as planar, $k$-connected and the like?

at September 28, 2014 05:51 AM UTC

Some additional thoughts



Omer Reingold is a brilliant researcher, and now all can see that he is also a very classy and thoughtful person. As you all know by now, he and many others have suddenly lost their jobs with the closing of Microsoft’s Silicon Valley Campus (SVC) Research Lab. The lab closed this week: it has been removed from Microsoft’s “Research labs worldwide” page. The closing affects about 50 jobs directly, and is part of an overall reduction of about 18,000 staff worldwide.

Today, Ken and I wish to express some of our feelings about this. At first we thought we could not add to what others have already said about the closing, but we realized we could still express support and add to the wider conversation.

Omer is among fourteen listed writers of the “Windows on Theory” blog, all affiliated with Microsoft Research. He wrote a short and classy post on the blog. My former student Parikshit Gopalan, who was at the lab and is now affiliated with a group in Redmond, added a similar comment, among many in the post. Omer and Parikshit wrote a nice paper in FOCS 2012 with Raghu Meka, Luca Trevisan, and Salil Vadhan, on new ways to construct pseudorandom generators. We covered it briefly two years ago.

Parikshit also writes for the blog—here is a nice post on crisply-stated open problems involving alphabet size in coding theory. In his comment last weekend he reflected:

History tells us that research labs are mortal. Like mortals, they are finally judged by their accomplishments rather than their longevity.

Judging the value of accomplishments, however, is often a longer-term process than “longevity,” and that plays into some of our thoughts.

Some Thoughts

• We are not shocked. Luca Trevisan wrote a very thoughtful piece on his own blog, “in theory.” His piece started with:

I am still in shock at the news that Microsoft decided to shut down MSR SVC and fire, literally from one day to the next, almost all the scientific staff.

I, Dick, am not so shocked; I will explain in detail why in a moment.

• We are very impressed by comments from some of the researchers. Not just Omer but many others, affected and not. I think this is something that we can find heartening.

• We are happy to hear about the community response. Luca wrote:

Here at Berkeley and Stanford we will do our best to help, and we will make sure that everybody has some space to work. There will also be some kind of community-wide response, but it will take some time to figure out what we can do. Meanwhile, I urge everybody to reach out to their friends formerly at MSR SVC, make them feel the love of their community, and connect them to opportunities for both short term and long term positions, as they weigh their options.

We are very happy to hear this, and we add that other institutions will also try and help any that need it. Of course being far away makes it less likely that some of us can help immediately. But I personally will try to get help from Tech for any that need it.

• Probably no personal judgment. In academia, with tenure cases etc., all judgment is personal; when cuts are on a larger scale, there is invariably protest, which is often effective, as some commenters have noted. In a post three years ago on how research is like the stock market and “failure must be an option,” Ken inserted a note on his father’s recollection of the “Terrible Twenty.” This name for recipients of annual AT&T and/or IBM lab fellowships spoke to the ethic that the company needed only a few to succeed to be golden. SVC was far from a case of going 0-for-20; highly profitable work has been documented coming from there. So it was probably a larger personnel matter, not a personal one, and this is far from unusual in big business.


Why Not Shocked

I have been around a while and have seen labs come and go over the years. I can still remember when AT&T Bell Labs was the greatest lab in the world; it is still strong, but once it was the lab. One of the reasons for this was simple: its parent company was a monopoly. It had an immense amount of cash and could afford to have researchers do whatever they wanted. For example, for years Shen Lin, a Bell Labs researcher, worked on the Busy Beaver Problem.

A busy beaver function quantifies these upper limits on a given measure, and is a noncomputable function. In fact, a busy beaver function can be shown to grow faster asymptotically than does any computable function. The concept was first introduced by Tibor Radó as the “busy beaver game” in his 1962 paper, “On Non-Computable Functions.”

Ron Graham was the director of the mathematics division which hired Lin, who was Radó’s student. Ron told me that each year he had to argue with upper management to keep Lin. Finally, in 1970, Lin and Brian Kernighan found a brilliant algorithm for cutting graphs. This algorithm and its descendants have been used for decades to approximate everything from VLSI layouts to the Traveling Salesman Problem. If Bell Labs had not kept Lin around, perhaps this algorithm would still exist, but perhaps not. Or maybe its discovery would have been delayed for decades. Who knows.

So, finally, here are two reasons I am not shocked, beyond factoring in that this closing is just the tip, visible in our field, of a global round of layoffs affecting some 14% of the company. First, in my opinion, the ability of companies to support very far-out research is sensitively linked to the exigencies of their cash flow. Microsoft is being pressed by the shift from PCs as the main platform—where they had a near-monopoly on the OS—to a place where there are many players in the mobile world. This means that they are less able to support research of an open kind. Second, even in times of good cash flow, whether from monopoly or booms, people had to fight to keep good people. One of my friends opined that in the levels below the recent personnel changes at the top, no senior swingman at Microsoft was able “to set priorities for investments that will not mature in ten years.”

This does not let Microsoft off the hook. They could have handled the closure better. Well, perhaps better.

Let’s not let Ken off the hook either—he writes the rest of this post, besides his having written some of what’s above.

Mining Value and Beavering Away

Some commenters have alluded to Microsoft’s recently spending 2.5 billion dollars in cash to acquire the Minecraft computer game. My kids—this is Ken—and their cousins have all been avid players. Unlike most video games—most games, period—its prime value is open-ended creativity. It can be played as a survival game, and servers can be set up to provide battle competitions, but this is secondary. The ethic is creative enjoyment and sharing. A question that was percolating even before the sale is how far players can monetize servers and other content they create.

In essence, all of our community, in academia and industry both, has been engaged in a game of “Mindcraft” in which monetization and even practicality are not the prime movers. Perhaps the best-known testament upholding this is Godfrey Hardy’s 1940 essay, “A Mathematician’s Apology.” In this he famously stated that whole branches of mathematics including number theory were “useless,” especially for any “warlike purpose”—and he was instantly proved wrong by the use of number theory to break cryptosystems, to say nothing of creating them. The point we offer, for conversations such as those in Scott Aaronson’s item and comments, is not to say what is or proves to be valuable, but rather the sheer problem of judging—of projecting an appraisal:

What distinguishes “academic” research—in industry also—is undertaking creation that is not predicated on knowing, or even the possibility of judging, its extrinsic value.

Of course academia has institutions for judging intrinsic value by standards of research. By those there is no doubt about the salience of the “Busy Beaver” function {B}. Knowing a bound on {B(k)} yields a decision procedure for any mathematical conjecture that can be coded via a {k}-state machine checking iteratively for a counterexample; the Collatz problem gives an especially small {k}. As the bibliography of Wikipedia’s BB page shows, academic interest has remained high. But let us notch up the moral of Dick’s story above by asking, what about its possible extrinsic value? BB programs are extreme cases for the generally important task of predicting and verifying program behavior. A recent essay by Liesbeth de Mol makes a connection to computer-assisted proving.

Even with something as globally profitable and understood as the open-source ethic, projecting value is hard—including its strategic competitive value to large sponsors. This partly owes to the difficulty of proving a negative, such as the cost of avoided bugs.

Here’s a silly but simpler example from my chess research. Despite near-universal belief in the chess community that the ratings of top players today are “inflated” by over 100 points compared to the 1970s and 1980s, my work shows that the implementation of the Elo rating system by the World Chess Federation (FIDE) has been remarkably stable from the beginning. A 30-point effect that can be more readily ascribed to progressively faster time controls since the early 1990s seems recently to have reversed. FIDE lists over 400,000 rated players, who pay fees of some tens of dollars per year to national federations under FIDE’s auspices for this and other services. Is knowing this integrity of ratings worth 50 cents per player? On the flip side of “judging a negative,” how much is saved by providing resistance to any impulse to fix something that isn’t broken? I may earn compensation for other aspects of my work, but how could one judge to monetize this aspect?
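For readers unfamiliar with the mechanics being measured here, a minimal sketch of the standard Elo formulas follows (the function names and the K-factor choice are ours for illustration; FIDE in fact applies several K-factors depending on a player's rating and history):

```python
# Standard Elo expected-score and update formulas (names and K choice ours).

def expected_score(r_a, r_b):
    """Expected score of player A (rating r_a) against player B (rating r_b)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, score_a, k=10):
    """New rating for A after scoring score_a (1 = win, 0.5 = draw, 0 = loss)."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

# A 100-point rating edge translates to an expected score of about 0.64.
print(round(expected_score(2800, 2700), 2))   # 0.64
print(round(update(2800, 2700, 1), 1))        # 2803.6 after a win
```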

Open Problems

In summary, our thoughts go to all those affected by the closing and we wish all the best in the near and far future.

[name and word fixes]

by Pip at September 28, 2014 03:00 AM UTC

I'm currently in the process of returning* from Würzburg, Germany, where I attended the 22nd International Symposium on Graph Drawing (GD 2014) and was one of the invited speakers at the associated EuroGIGA/CCC Ph.D. school on graph drawing.

The format for the Ph.D. school was three one-hour lectures in the morning and three hours of working on exercises in the afternoon, for two days. My contribution was a high-level overview of graph drawing methods that involve curves (an updated version of my Dagstuhl survey). You'll have to ask one of the students how well the school really went, but my impression is that it was a success. Among other things, it brought in 40 students, and perhaps helped GD itself achieve an unusually high number of participants.

The best presentation award from GD went to Vincent Kusters, for a talk on "Column planarity and partial simultaneous geometric embedding" with Evans, Saumell and Speckmann. Their main result is that if one is given two trees on the same vertex set, then it is possible to find planar drawings of them both, with straight line segments as edges, in which a large fraction of the vertices are in the same location in both drawings. (It is not always possible to place all vertices the same in both drawings.) There was also a best poster award, given to Thomas Bläsius, Fabian Klute, Benjamin Niedermann, and Martin Nöllenburg, for their work on "PIGRA", a tool for pixelated graphs. Instead of the usual poster format (a few boxes of text+figures, not very different from the set of slides from a short talk) they made an effort at presenting their work in a graphic-novel format, with a stick-figure narrator that looked like it was inspired by xkcd. There were two good invited talks, by Oswin Aichholzer on some problems related to the crossing numbers of complete graphs and by Jean-Daniel Fekete on visualizations of graphs based on adjacency matrices rather than node-link diagrams, as well as, of course, many interesting contributed talks.

A few of the talks that were memorable to me:

Wednesday morning, Jan Christoph Athenstädt spoke about making overlaid drawings of two partitions of a set of elements; the same problem can be interpreted as one of drawing a 2-regular hypergraph. It's easy to construct such a drawing by using membership in one partition as an x-coordinate and in the other partition as a y-coordinate, and drawing the sets as ovals around the rows and columns of the resulting grid layout, but this is problematic in some respects; for instance, it has empty regions where pairs of sets look like they intersect but actually don't. On the other hand, if the partitions are to be drawn as pseudocircles (at most two crossing points per pair, intersecting only as needed), the problem turns into one of graph planarity, but such drawings are not always realizable. An intermediate level of constraint on these drawings is, unfortunately, NP-complete to recognize.
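The easy grid construction can be sketched in a few lines; everything here (names, the representation of partitions as lists of blocks) is our own illustration, not code from the talk:

```python
# Grid layout for two partitions of the same element set (sketch, names ours):
# element -> (index of its block in partition 1, index of its block in partition 2).
# Elements sharing both blocks land in the same cell; a real drawing would
# spread them out within that cell.

def grid_layout(partition1, partition2):
    """Each partition is a list of blocks (sets). Returns element -> (x, y)."""
    cell = {}
    for x, block1 in enumerate(partition1):
        for y, block2 in enumerate(partition2):
            for e in block1 & block2:
                cell[e] = (x, y)
    return cell

p1 = [{"a", "b"}, {"c", "d"}]
p2 = [{"a", "c"}, {"b", "d"}]
print(grid_layout(p1, p2))
# a -> (0, 0), b -> (0, 1), c -> (1, 0), d -> (1, 1)
```

Each block of either partition then occupies a full row or column of the grid, which is what makes the enclosing ovals easy to draw (and also what creates the misleading empty regions).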

On Wednesday afternoon, one of Philipp Kindermann and Fabian Lipp (I no longer remember which) spoke about their work with Sasha Wolff on boundary labeling. If you've ever used the todonotes package in LaTeX, you know what this is: a bunch of rectangular labels (notes to you or your co-authors reminding you of things that need to be changed) are to be placed in the margin of your LaTeX output, connected by "leaders" (polygonal paths) to the point where the change needs to be made. todonotes is a little brainless about how it places its notes and leaders, and their paper discussed an improved system for the same problem. Unfortunately it is implemented in LuaLaTeX, which I am still leery of using. (I don't think arXiv allows it and given the security issues of running a real programming language from user-contributed source code I'm not sure I'd want to advocate that they start; the same is true of the publishers I've worked with.)

Thursday morning had a whole session on 1-planarity, a topic of some interest to me. Di Giacomo, Liotta, and Montecchiani showed that outer-1-planar graphs with maximum degree d can be drawn with the given embedding using line segments of O(d) slopes, or planarly (they are always planar graphs) using O(d^2) slopes, better than the higher polynomial bound known for planar 3-trees and the exponential bound known for planar graphs in general. Two additional papers concerned "fan-planar graphs", graphs drawn so that, when an edge is crossed, its crossing edges form a fan (adjacent on the same side to a single vertex). Apparently there is some connection between these graphs and confluent drawings, although I'm a little fuzzy on the details.

Thursday afternoon, in a session including one of my papers on (impractical) algorithms for crossing minimization, we also had a user study presented by Sergei Pupyrev (with Kobourov and Saket) showing that for larger graphs the number of crossings does not seem to be strongly correlated with usability of the drawing. This is in contrast with many past works showing that for smaller graphs crossings are quite important. I'm always glad to see research in this style at GD – I think it helps keep us grounded, when otherwise we have a strong tendency to do theory that has no practical applications – but I can see why it is a bit rare: most of the questions afterwards were actually poorly disguised "you're doing it wrong" complaints. There's always something you could have done differently that might have shown a different result, and always someone who has strong opinions on that particular detail. So it's difficult to get such research accepted and the reactions can be a bit discouraging when it is.

Another talk Thursday afternoon (one of the strong contenders for best talk, according to some discussions I had with other attendees) was by Fidel Barrera-Cruz on his work with Penny Haxell and Anna Lubiw on morphing one triangulation to another by piecewise linear motions while preserving planarity at the intermediate stages. It was one of the few talks to include a demo of things happening dynamically rather than just static images, which worked very well for this topic. I can't find the paper itself online but it seems to also be part of Barrera-Cruz's Ph.D. thesis.

Friday morning Therese Biedl gave another entertaining talk, on transforming one style of drawing into another while preserving its height. But the moral of the story is that height is probably the wrong thing to optimize, at least for straight-line drawings, because of one of her examples: a planar 3-tree which could be drawn in linear area with height five, but required exponential area for its optimal height-four drawing.

And finally one of the Friday afternoon talks also particularly caught my attention. Fabrizio Frati spoke on his work with Dehkordi and Gudmundsson on "increasing-chord graphs", a strengthening of greedy drawings. A path from s to t is greedy if the distance to t monotonically decreases along the path. It is self-approaching if every subpath is greedy in the same direction. And it is increasing-chord if it is self-approaching in both directions. It is unknown whether every point set in the plane forms the vertex set of a planar increasing-chord graph, but Frati and his co-authors showed that every point set can be augmented by a linear number of points so that the augmented set supports such a graph. The proof has two parts: showing that a triangulation with all acute angles is always increasing-chord, and applying an old result of mine that every point set has an all-acute Steiner triangulation.
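These definitions can be checked mechanically on a polygonal path. The sketch below (function names ours) tests only the path's vertices, which for the chord condition is a necessary check rather than a full certificate over all interior points of the segments:

```python
# Checks on a polygonal path given as a list of (x, y) vertices (names ours).
from math import dist

def is_greedy(path, t):
    """Distance to t strictly decreases at successive vertices."""
    d = [dist(p, t) for p in path]
    return all(d[i] > d[i + 1] for i in range(len(d) - 1))

def chord_condition(path):
    """Increasing-chord check on vertices: |p_b p_c| <= |p_a p_d|
    for all vertex indices a <= b <= c <= d (necessary condition only)."""
    n = len(path)
    return all(dist(path[b], path[c]) <= dist(path[a], path[d])
               for a in range(n) for b in range(a, n)
               for c in range(b, n) for d in range(c, n))

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
detour = [(0, 0), (2, 0), (1, 0.1)]
print(is_greedy(straight, (3, 0)), chord_condition(straight))  # True True
print(chord_condition(detour))  # False: the first edge is longer than the chord
```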

In organizational news: We are soon to have elections for new GD steering committee members, as four members' terms are expiring (Ulrik Brandes, Seok-Hee Hong, Michael Kaufmann, and me). Next year, GD (renamed as the "International Symposium on Graph Drawing and Network Visualization" but with the same numbering and acronym) will be in Northridge, California, with Csaba Tóth organizing. (Csaba looked into instead having it at a nearby beach town but they were too expensive.) After dropping short submissions as a category this year (because of problems in past years where they were judged head-to-head against longer submissions and, unsurprisingly, lost) we will reinstate them next year as "GD notes and demos" with separate reviewing and shorter talks. Along with this year's best poster and best presentation awards, we are reinstating the best paper and test of time awards previously given in 2012; the test of time one will be for papers published in GD between 1994 and 1998 (inclusive). GD 2016 will be in Athens, Greece.

Much German food and wine was served (this is in Franconia, one of Germany's wine-making areas and the home of drier wines than the other parts of Germany), old friends seen again, etc. I enjoyed my trip and am looking forward to a more conveniently-located GD next year.

*Missed a connection and have to spend the night in Newark.

at September 28, 2014 01:29 AM UTC

I rarely highlight individual events on the blog, but one's advisor only turns sixty once. We will honor Michael Sipser at MIT on Sunday, October 26 with a one-day symposium full of great speakers in complexity. As one of Sipser's PhD students, I'm helping to organize the event and serving as emcee for the dinner speeches. Please send me any funny or embarrassing stories or pictures of Sipser through the decades.

Many of you know Sipser from his popular textbook Introduction to the Theory of Computation. Sipser was one of the driving forces in complexity in the 70's and 80's. He initiated the program of using circuit complexity to separate complexity classes and, with Merrick Furst and James Saxe, showed parity could not be computed by poly-size constant-depth circuits. He showed how to simulate randomness by alternation, was among the first to explore hardness versus randomness, made great advances in the complexity of interactive proofs, and much more. Sipser has led the MIT Mathematics department for the last ten years and was recently appointed Dean of Science.

Join us in Cambridge for this great day to celebrate an extraordinary man.


P.S. STOC call posted, Deadline November 4

by Lance Fortnow at September 27, 2014 01:38 PM UTC

Via Ronitt Rubinfeld comes word that the STOC 2015 CFP is out.

Submission deadline: Tuesday, November 4, 2014, 3:59pm EST

Conference: June 14-17 2015, Portland, Oregon (part of FCRC)

by Suresh Venkatasubramanian at September 26, 2014 11:46 PM UTC

STOC 2015 will be in Portland (yay!), and part of FCRC (boo!). The call for papers is out.

by luca at September 26, 2014 09:34 PM UTC

Authors: Zhengjun Cao, Zhenfu Cao
Download: PDF
Abstract: Shor's factoring algorithm uses two quantum registers. By introducing more registers we show that the measured numbers in these registers, which are of the same pre-measurement state, should be equal if the original Shor's complexity argument is sound. This contradicts the argument that the second register has $r$ possible measured values. There is an anonymous comment which argues that the states in these registers are entangled. If so, the entanglement involving many quantum registers cannot be interpreted by the mechanism of EPR pairs and the like. Since this peculiar entanglement has not yet been mentioned and investigated, we think the claim that Shor's algorithm runs in polynomial time needs more physical verification. We also discuss the problem of certifying quantum computers.

at September 26, 2014 12:41 AM UTC

Authors: Ognjen Arandjelovic, Ducson Pham, Svetha Venkatesh
Download: PDF
Abstract: We address the problem of estimating the running quantile of a data stream when the memory for storing observations is limited. We (i) highlight the limitations of approaches previously described in the literature which make them unsuitable for non-stationary streams, (ii) describe a novel principle for the utilization of the available storage space, and (iii) introduce two novel algorithms which exploit the proposed principle. Experiments on three large real-world data sets demonstrate that the proposed methods vastly outperform the existing alternatives.

at September 26, 2014 12:41 AM UTC

Authors: Gregory Gutin, Stefan Kratsch, Magnus Wahlström
Download: PDF
Abstract: The Workflow Satisfiability Problem (WSP) is a problem of practical interest that arises whenever tasks need to be performed by authorized users, subject to constraints defined by business rules. We are required to decide whether there exists a plan -- an assignment of tasks to authorized users -- such that all constraints are satisfied.

The WSP is, in fact, the conservative Constraint Satisfaction Problem (i.e., for each variable, here called task, we have a unary authorization constraint) and is, thus, NP-complete. It was observed by Wang and Li (2010) that the number k of tasks is often quite small and so can be used as a parameter, and several subsequent works have studied the parameterized complexity of WSP regarding parameter k.

We take a more detailed look at the kernelization complexity of WSP(\Gamma) when \Gamma\ denotes a finite or infinite set of allowed constraints. Our main result is a dichotomy for the case that all constraints in \Gamma\ are regular: (1) We are able to reduce the number n of users to n' <= k. This entails a kernelization to size poly(k) for finite \Gamma, and, under mild technical conditions, to size poly(k+m) for infinite \Gamma, where m denotes the number of constraints. (2) Already WSP(R) for some R \in \Gamma\ allows no polynomial kernelization in k+m unless the polynomial hierarchy collapses.

at September 26, 2014 12:00 AM UTC

Authors: Jakub Łącki, Piotr Sankowski
Download: PDF
Abstract: We show an algorithm for dynamic maintenance of connectivity information in an undirected planar graph subject to edge deletions. Our algorithm may answer connectivity queries of the form `Are vertices $u$ and $v$ connected with a path?' in constant time. The queries can be intermixed with any sequence of edge deletions, and the algorithm handles all updates in $O(n)$ time. This result improves over the previously known $O(n \log n)$ time algorithm.

at September 26, 2014 12:41 AM UTC

Authors: Szymon Grabowski
Download: PDF
Abstract: The recently introduced longest common substring with $k$-mismatches ($k$-LCF) problem is to find, given two sequences $S_1$ and $S_2$ of length $n$ each, a longest substring $A_1$ of $S_1$ and $A_2$ of $S_2$ such that the Hamming distance between $A_1$ and $A_2$ is at most $k$. So far, the only subquadratic time result for this problem was known for $k = 1$~\cite{FGKU2014}. We present two output-dependent algorithms solving the $k$-LCF problem and show that for $k = O(\log^{1-\varepsilon} n)$, where $\varepsilon > 0$, at least one of them works in subquadratic time, using $O(n)$ words of space. The choice of one of these two algorithms to be applied for a given input can be done after linear time and space preprocessing.

at September 26, 2014 12:41 AM UTC

The Italian prime minister is in the United States for the UN general assembly meeting, and he was in the San Francisco bay area on Sunday and Monday. (No, this is not the one who paid money to an underage prostitute and had her released from custody by falsely claiming she was the niece of Mubarak; it's the new one.)

Those two days were busy: he met Italian start-up founders in the area, went to a dinner at Stanford hosted by John Hennessy, presided over the inauguration of a new Italian-language school, visited Twitter, Google, Facebook and Yahoo, and met the local Italian research community.

For the last event, a few colleagues and I were asked to give a short presentation. Not being sure what to say to a prime minister, I asked a colleague who chairs an Italian computer science department for some data on state funding of university research in computer science, and whether there was a way to turn this data into a recommendation; his response could be summarized as "we cannot be saved, there is no hope." This might have made a good theme for a presentation, but instead I talked about the importance of fundamental research, and of working on ideas for which the technology is not ready yet, so that when the technology is ready the ideas are mature. Politicians are good at feigning attention when their mind is elsewhere, and he feigned it well.

Yesterday I was "interviewed" as part of the process to naturalize as an American citizen. Part of the interview is a test of the knowledge that an American is supposed to have. I liked to think that the officer would bring up a map of the world and ask me to point to France, then I would point to Brazil, and he would say, "great, now you are ready to be an American!" (Instead he asked how many US senators there are, when the constitution was written, and things like that.) The vibe was very different from any other interaction I have had with the American immigration system before; it's no longer "who are you, why are you stealing our jobs, and how do we know you are not a terrorist," but rather "yay, you are going to be one of us."

by luca at September 25, 2014 06:41 PM UTC

Given the symmetric group $S_n$, two subgroups $G, H\leq S_n$, and $\pi\in S_n$, does $G\pi\cap H=\emptyset$ hold?

As far as I know, this is known as the coset intersection problem. I am wondering what its complexity is. In particular, is this problem known to be in coAM?

Moreover, if $H$ is restricted to be abelian, what does the complexity become?
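For concreteness, here is a brute-force illustration of the problem statement (exponential in general, of course; the question above is about doing better). Permutations are tuples, `compose(p, q)` applies `q` first, and all names are ours:

```python
# Brute-force coset intersection in S_n (illustration only, names ours).

def compose(p, q):
    """Composition p . q as functions: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def coset_intersects(G, pi, H):
    """Does the right coset G.pi meet H? Enumerates |G| elements."""
    return any(compose(g, pi) in H for g in G)

# In S_3: G = A_3 (the rotations), H = {id, (0 2 1)}, pi a transposition.
G = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}
H = {(0, 1, 2), (0, 2, 1)}
pi = (1, 0, 2)
print(coset_intersects(G, pi, H))  # True: g = (2, 0, 1) gives (0, 2, 1) in H
```

Here G.pi consists of the three transpositions of S_3, so it meets H in (0, 2, 1); replacing pi by the identity leaves G.pi = A_3, which misses {(0, 2, 1)}.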

by maomao at September 25, 2014 09:43 AM UTC

Authors: Zoltán Király
Download: PDF
Abstract: We introduce the following notion: a digraph $D=(V,A)$ with arc weights $c: A\rightarrow \R$ is called nearly conservative if every negative cycle consists of two arcs. Computing shortest paths in nearly conservative digraphs is NP-hard, and even deciding whether a digraph is nearly conservative is coNP-complete.

We show that the "All Pairs Shortest Path" problem is fixed parameter tractable with various parameters for nearly conservative digraphs. The results also apply for the special case of conservative mixed graphs.

at September 25, 2014 12:41 AM UTC

Authors: Amit Dhurandhar, Karthik Gurumoorthy
Download: PDF
Abstract: Clustering with submodular functions has been of interest over the last few years. Symmetric submodular functions are of particular interest as minimizing them is significantly more efficient and they include many commonly used functions in practice viz. graph cuts, mutual information. In this paper, we propose a novel constraint to make clustering actionable, which is motivated by applications across multiple domains, and pose the problem of performing symmetric submodular clustering subject to this constraint. We see that obtaining a $k$ partition with approximation guarantees is a non-trivial task requiring further work.

at September 25, 2014 12:41 AM UTC

Authors: Edouard Bonnet, Bruno Escoffier, Vangelis Paschos, Georgios Stamoulis
Download: PDF
Abstract: Given a graph $G=(V,E)$ with $n$ vertices and an integer $k \leqslant n$, the Maximum $k$-Cover problem consists of determining a subset $K \subseteq V$, of cardinality at most $k$, that covers the most edges in $E$. We study the polynomial approximation of Max-$k$-Cover in bipartite graphs by a purely combinatorial algorithm. With a computer-assisted analysis, we show that bipartite Max-$k$-Cover is approximable within ratio 0.794 and present an experimental analysis of it that finds the worst case approximation guarantee that is bounded below by 0.794. This improves on the currently best known 3/4-approximation ratio for general graphs (A. A. Ageev and M. Sviridenko, \textit{Approximation algorithms for maximum coverage and max cut with given sizes of parts}, Proc. IPCO'99).

at September 25, 2014 12:41 AM UTC

Authors: Chandan Dubey, Thomas Holenstein
Download: PDF
Abstract: Given a non-empty genus in $n$ dimensions with determinant $d$, we give a randomized algorithm that outputs a quadratic form from this genus. The time complexity of the algorithm is poly$(n,\log d)$; assuming Generalized Riemann Hypothesis (GRH).

at September 25, 2014 12:41 AM UTC

Authors: Yasuhiro Takahashi, Seiichiro Tani, Takeshi Yamazaki, Kazuyuki Tanaka
Download: PDF
Abstract: We study the classical simulatability of commuting quantum circuits with n input qubits and O(log n) output qubits, where a quantum circuit is classically simulatable if its output probability distribution can be sampled in classical polynomial time. First, we show that there exists a commuting quantum circuit that is not classically simulatable unless BQP $\subseteq$ PostBPP. This is the first formal evidence that a commuting quantum circuit is not classically simulatable even when the number of output qubits is exponentially small. Then, we consider a generalized version of the circuit and clarify the condition under which it is classically simulatable. Lastly, we apply the argument for the above evidence to Clifford circuits in a similar setting and provide evidence that such a circuit augmented by a depth-1 non-Clifford layer is not classically simulatable. These results reveal subtle differences between quantum and classical computation.

at September 25, 2014 12:40 AM UTC

Authors: Travis Gagie, Aleksi Hartikainen, Juha Kärkkäinen, Gonzalo Navarro, Simon J. Puglisi, Jouni Sirén
Download: PDF
Abstract: We address the problem of counting the number of strings in a collection where a given pattern appears, which has applications in information retrieval and data mining. Existing solutions are in a theoretical stage. We implement these solutions and develop some new variants, comparing them experimentally on various datasets. Our results not only show which are the best options for each situation and help discard practically unappealing solutions, but also uncover some unexpected compressibility properties of the best data structures. By taking advantage of these properties, we can reduce the size of the structures by a factor of 5--400, depending on the dataset.

at September 25, 2014 12:40 AM UTC

Authors: Shi Li
Download: PDF
Abstract: In this paper, we study the uniform capacitated $k$-median problem. Obtaining a constant approximation algorithm for this problem is a notorious open problem; most previous works gave constant approximations by either violating the capacity constraints or the cardinality constraint. Notably, all these algorithms are based on the natural LP-relaxation for the problem. The LP-relaxation has unbounded integrality gap, even when we are allowed to violate the capacity constraints or the cardinality constraint by a factor of $2-\epsilon$.

Our result is an $\exp(O(1/\epsilon^2))$-approximation algorithm for the problem that violates the cardinality constraint by a factor of $1+\epsilon$. This is already beyond the capability of the natural LP relaxation, as it has unbounded integrality gap even if we are allowed to open $(2-\epsilon)k$ facilities. Indeed, our result is based on a novel LP for this problem.

The version as we described is the hard-capacitated version of the problem, as we can only open one facility at each location. This is as opposed to the soft-capacitated version, in which we are allowed to open more than one facilities at each location. We give a simple proof that in the uniform capacitated case, the soft-capacitated version and the hard-capacitated version are actually equivalent, up to a small constant loss in the approximation ratio.

at September 25, 2014 12:40 AM UTC