Theory of Computing Blog Aggregator

The 2nd Workshop on Mechanism Design for Social Good will be taking place at this year’s ACM Conference on Economics and Computation at Cornell University on June 22, 2018.

The goal of the workshop is to highlight work where insights from algorithms, optimization, and mechanism design have the potential to impact social good. In particular, we will focus on the theme of improving access to opportunity. The workshop will feature keynote presentations focusing on economic inequality, online labor markets, bias and discrimination. We encourage submissions addressing these and other domains, such as housing, healthcare, education, civic participation, privacy, and the developing world. The workshop aims to showcase ongoing exemplary work on these topics and to highlight exciting opportunities for future research. Submissions of all types are encouraged, including theoretical or applied mechanism design work, research that solves algorithmic or optimization problems, and empirical research.
Topics of interest for this workshop include but are not limited to:
  • redistributive mechanisms to improve access to opportunity
  • economic inequality and intergenerational mobility
  • mitigating unequal economic outcomes in online labor markets
  • detecting existence or causes of exploitative market behavior in online labor markets
  • the design of algorithms that mitigate bias and improve diversity
  • allocating low-income housing assistance
  • allocating health insurance funds, managing access to healthcare, and pricing medical treatments
  • design of health insurance markets
  • evaluating students, teachers, or schools
  • design of transportation systems
  • market regulations for data and privacy
  • algorithmic solutions to encourage civic participation
  • evaluating fairness in electoral representation 
Submissions will be evaluated on the following criteria:
  • Quality of submission as measured by accuracy and clarity of exposition.
  • Relevance to this workshop and its theme of improving access to opportunity.
  • Novelty of domain: we particularly encourage work on applications that have been less explored within the EC community.
  • Potential for follow-up work in the EC community: those from other communities who feel they fit this criterion are especially encouraged to submit.
Submission Instructions:
Authors should upload a PDF of their paper to EasyChair. There are no specific formatting instructions. Submissions may either be working papers or papers that have been published at an established conference or journal. In the latter case, please include a citation on EasyChair.  In addition to the PDF, authors are asked to upload a 200-250 word description onto EasyChair summarizing the results and their relevance to the workshop. The committee reserves the right not to review all the technical details of submissions. 
Authors may submit papers that are already under review or accepted in conferences or journals. However, papers accepted to this year’s EC will not be considered for presentation at the workshop. There will be no published proceedings.
Important Information:  
  • Submission Deadline: April 21, 2018, 11:59pm AoE
  • Submission page: EasyChair
  • Notification: May 16, 2018
  • Workshop Date: June 22, 2018
Organizing Committee:
Program Chairs:

by robertkleinberg at March 23, 2018 02:17 AM UTC

Authors: Konstantin Kobylkin
Download: PDF
Abstract: Fast constant factor approximation algorithms are devised for the problem of intersecting a set of straight line segments with the smallest cardinality set of disks of fixed radii $r>0$, where the set of segments forms a straight line drawing $G=(V,E)$ of a planar graph without edge crossings. Exploiting its close connection with the geometric Hitting Set problem, we give a $\left(50+52\sqrt{\frac{12}{13}}+\varepsilon\right)$-approximate $O(|E|^4\log|E|)$-time and $O(|E|^2\log|E|)$-space algorithm based on the modified Agarwal-Pan algorithm. More accurate $(34+24\sqrt{2}+\varepsilon)$-, $\left(12+6\sqrt{3}+\varepsilon\right)$- and $\left(34+38\sqrt{\frac{15}{19}}+\varepsilon\right)$-approximate algorithms are proposed for the case where $G$ is any subgraph of either an outerplane graph, a Gabriel graph, or a Delaunay triangulation respectively, which work within the same time and space complexity bounds; here $\varepsilon>0$ is an arbitrarily small constant.
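
For a sense of scale, here is a short numerical evaluation of the quoted approximation factors (my own addition, ignoring the additive epsilon term):

    from math import sqrt

    # Approximate values of the approximation factors quoted above.
    print(50 + 52 * sqrt(12 / 13))   # ~99.96  (general planar drawings)
    print(34 + 24 * sqrt(2))         # ~67.94  (subgraphs of outerplane graphs)
    print(12 + 6 * sqrt(3))          # ~22.39  (subgraphs of Gabriel graphs)
    print(34 + 38 * sqrt(15 / 19))   # ~67.76  (subgraphs of Delaunay triangulations)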

at March 23, 2018 12:40 AM UTC

Authors: Chunhao Wang, Leonard Wossnig
Download: PDF
Abstract: We present a quantum algorithm for simulating the dynamics of Hamiltonians that are not necessarily sparse. Our algorithm is based on the assumption that the entries of the Hamiltonian are stored in a data structure that allows for the efficient preparation of states that encode the rows of the Hamiltonian. We use a linear combination of quantum walks to achieve a poly-logarithmic dependence on the precision. The time complexity measured in terms of circuit depth of our algorithm is $O(t\sqrt{N}\lVert H \rVert \text{polylog}(N, t\lVert H \rVert, 1/\epsilon))$, where $t$ is the evolution time, $N$ is the dimension of the system, and $\epsilon$ is the error in the final state, which we call precision. Our algorithm can directly be applied as a subroutine for unitary Hamiltonians and solving linear systems, achieving a $\widetilde{O}(\sqrt{N})$ dependence for both applications.

at March 23, 2018 12:40 AM UTC

It’s not every day that I check my office mailbox and, amid the junk brochures, find 500 pages on the biggest questions facing civilization—all of them, basically—by possibly the single person on earth most qualified to tackle those questions.  That’s what happened when, on a trip back to Austin from my sabbatical, I found a review copy of Steven Pinker’s Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.

I met with Steve while he was writing this book, and fielded his probing questions about the relationships between the concepts of information, entropy, randomness, Kolmogorov complexity, and coarse graining, in a way that might have affected a few paragraphs in Chapter 2.  I’m proud to be thanked in the preface—well, as “Scott Aronson.”  I have a lot of praise for the book, but let’s start with this: the omission of the second “a” from my surname was the worst factual error that I found.

If you’ve read anything else by Pinker, then you more-or-less know what to expect: an intellectual buffet that’s pure joy to devour, even if many of the dishes are ones you’ve tasted before.  For me, the writing alone is worth the admission price: Pinker is, among many other distinctions, the English language’s master of the comma-separated list.  I can see why Bill Gates recently called Enlightenment Now his “new favorite book of all time”—displacing his previous favorite, Pinker’s earlier book The Better Angels of Our Nature.  If you’ve read Better Angels, to which Enlightenment Now functions as a sort of sequel, then you know even more specifically what to expect: a saturation bombing of line graphs showing you how, despite the headlines, the world has been getting better in almost every imaginable way—graphs so thorough that they’ll eventually drag the most dedicated pessimist, kicking and screaming, into sharing Pinker’s sunny disposition, at least temporarily (but more about that later).

The other book to which Enlightenment Now bears comparison is David Deutsch’s The Beginning of Infinity.  The book opens with one of Deutsch’s maxims—“Everything that is not forbidden by laws of nature is achievable, given the right knowledge”—and Deutsch’s influence can be seen throughout Pinker’s new work, as when Pinker repeats the Deutschian mantra that “problems are solvable.”  Certainly Deutsch and Pinker have a huge amount in common: classical liberalism, admiration for the Enlightenment as perhaps the best thing that ever happened to the human species, and barely-perturbable optimism.

Pinker’s stated aim is to make an updated case for the Enlightenment—and specifically, for the historically unprecedented “ratchet of progress” that humankind has been on for the last few hundred years—using the language and concepts of the 21st century.  Some of his chapter titles give a sense of the scope of the undertaking:

  • Life
  • Health
  • Wealth
  • Inequality
  • The Environment
  • Peace
  • Safety
  • Terrorism
  • Equal Rights
  • Knowledge
  • Happiness
  • Reason
  • Science

When I read these chapter titles aloud to my wife, she laughed, as if to say: how could anyone have the audacity to write a book on just one of these enormities, let alone all of them?  But you can almost hear the gears turning in Pinker’s head as he decided to do it: well, someone ought to take stock in a single volume of where the human race is and where it’s going.  And if, with the rise of thuggish autocrats all over the world, the principles of modernity laid down by Locke, Spinoza, Kant, Jefferson, Hamilton, and Mill are under attack, then someone ought to rise to those principles’ unironic defense.  And if no one else will do it, it might as well be me!  If that’s how Pinker thought, then I agree: it might as well have been him.

I also think Pinker is correct that Enlightenment values are not so anodyne that they don’t need a defense.  Indeed, nothing demonstrates the case for Pinker’s book, the non-obviousness of his thesis, more clearly than the vitriolic reviews the book has been getting in literary venues.  Take this, for example, from John Gray in The New Statesman: “Steven Pinker’s embarrassing new book is a feeble sermon for rattled liberals.”

Pinker is an ardent enthusiast for free-market capitalism, which he believes produced most of the advance in living standards over the past few centuries. Unlike [Herbert Spencer, the founder of Social Darwinism], he seems ready to accept that some provision should be made for those who have been left behind. Why he makes this concession is unclear. Nothing is said about human kindness, or fairness, in his formula. Indeed, the logic of his dictum points the other way.

Many early-20th-century Enlightenment thinkers supported eugenic policies because they believed “improving the quality of the population” – weeding out human beings they deemed unproductive or undesirable – would accelerate the course of human evolution…

Exponents of scientism in the past have used it to promote Fabian socialism, Marxism-Leninism, Nazism and more interventionist varieties of liberalism. In doing so, they were invoking the authority of science to legitimise the values of their time and place. Deploying his cod-scientific formula to bolster market liberalism, Pinker does the same.

You see, when Pinker says he supports Enlightenment norms of reason and humanism, he really means to say that he supports unbridled capitalism and possibly even eugenics.  As I read this sort of critique, the hair stands on my neck, because the basic technique of hostile mistranslation is so familiar to me.  It’s the technique that once took a comment in which I pled for shy nerdy males and feminist women to try to understand each other’s suffering, as both navigate a mating market unlike anything in previous human experience—and somehow managed to come away with the take-home message, “so this entitled techbro wants to return to a past when society would just grant him a female sex slave.”

I’ve noticed that everything Pinker writes bears the scars of the hostile mistranslation tactic.  Scarcely does he say anything before he turns around and says, “and here’s what I’m not saying”—and then proceeds to ward off five different misreadings so wild they wouldn’t have occurred to me, but then if you read Leon Wieseltier or John Gray or his other critics, there the misreadings are, trotted out triumphantly; it doesn’t even matter how much time Pinker spent trying to ward them off.

OK, but what of the truth or falsehood of Pinker’s central claims?

I share Pinker’s sense that the Enlightenment may be the best thing that ever happened in our species’ sorry history.  I agree with his facts, and with his interpretations of the facts.  We rarely pause to consider just how astounding it is—how astounding it would be to anyone who lived before modernity—that child mortality, hunger, and disease have plunged as far as they have, and we show colossal ingratitude toward the scientists and inventors and reformers who made it possible.  (Pinker lists the following medical researchers and public health crusaders as having saved more than 100 million lives each: Karl Landsteiner, Abel Wolman, Linn Enslow, William Foege, Maurice Hilleman, John Enders.  How many of them had you heard of?  I’d heard of none.)  This is, just as Pinker says, “the greatest story seldom told.”

Beyond the facts, I almost always share Pinker’s moral intuitions and policy preferences.  He’s right that, whether we’re discussing nuclear power, terrorism, or GMOs, going on gut feelings like disgust and anger, or on vivid and memorable incidents, is a terrible way to run a civilization.  Instead we constantly need to count: how many would be helped by this course of action, how many would be harmed?  As Pinker points out, that doesn’t mean we need to become thoroughgoing utilitarians, and start fretting about whether the microscopic proto-suffering of a bacterium, multiplied by the 10^31 bacteria that there are, outweighs every human concern.  It just means that we should heed the utilitarian impulse to quantify way more than is normally done—at the least, in every case where we’ve already implicitly accepted the underlying values, but might be off by orders of magnitude in guessing what they imply about our choices.

The one aspect of Pinker’s worldview that I don’t share—and it’s a central one—is his optimism.  My philosophical temperament, you might say, is closer to that of Rebecca Newberger Goldstein, the brilliant novelist and philosopher (and Pinker’s wife), who titled a lecture given shortly after Trump’s election “Plato’s Despair.”

Somehow, I look at the world from more-or-less the same vantage point as Pinker, yet am terrified rather than hopeful.  I’m depressed that Enlightenment values have made it so far, and yet there’s an excellent chance (it seems to me) that it will be for naught, as civilization slides back into authoritarianism, and climate change and deforestation and ocean acidification make the one known planet fit for human habitation increasingly unlivable.

I’m even depressed that Pinker’s book has gotten such hostile reviews.  I’m depressed, more generally, that for centuries, the Enlightenment has been met by its beneficiaries with such colossal incomprehension and ingratitude.  Save 300 million people from smallpox, and you can expect in return a lecture about your naïve and arrogant scientistic reductionism.  Or, electronically connect billions of people to each other and to the world’s knowledge, in a way beyond the imaginings of science fiction half a century ago, and people will use the new medium to rail against the gross, basement-dwelling nerdbros who made it possible, then upvote and Like each other for their moral courage in doing so.

I’m depressed by the questions: how can a human race that reacts in that way to the gifts of modernity possibly be trusted to use those gifts responsibly?  Does it even “deserve” the gifts?

As I read Pinker, I sometimes imagined a book published in 1923 about the astonishing improvements in the condition of Europe’s Jews following their emancipation.  Such a book might argue: look, obviously past results don’t guarantee future returns; all this progress could be wiped out by some freak future event.  But for that to happen, an insane number of things would need to go wrong simultaneously: not just one European country but pretty much all of them would need to be taken over by antisemitic lunatics who were somehow also hyper-competent, and who wouldn’t just harass a few Jews here and there until the lunatics lost power, but would systematically hunt down and exterminate all of them with an efficiency the world had never before seen.  Also, for some reason the Jews would need to be unable to escape to Palestine or the US or anywhere else.  So the sane, sober prediction is that things will just continue to improve, of course with occasional hiccups (but problems are solvable).

Or I thought back to just a few years ago, to the wise people who explained that, sure, for the United States to fall under the control of a racist megalomaniac like Trump would be a catastrophe beyond imagining.  Were such a comic-book absurdity realized, there’d be no point even discussing “how to get democracy back on track”; it would already have suffered its extinction-level event.  But the good news is that it will never happen, because the voters won’t allow it: a white nationalist authoritarian could never even get nominated, and if he did, he’d lose in a landslide.  What did Pat Buchanan get, less than 1% of the vote?

I don’t believe in a traditional God, but if I did, the God who I’d believe in is one who’s constantly tipping the scales of fate toward horribleness—a God who regularly causes catastrophes to happen, even when all the rational signs point toward their not happening—basically, the God who I blogged about here.  The one positive thing to be said about my God is that, unlike the just and merciful kind, I find that mine rarely lets me down.

Pinker is not blind.  Again and again, he acknowledges the depths of human evil and idiocy, the forces that even now look to many of us like they’re leaping up at Pinker’s exponential improvement curves with bared fangs.  It’s just that each time, he recommends putting an optimistic spin on the situation, because what’s the alternative?  Just to get all, like, depressed?  That would be unproductive!  As Deutsch says, problems will always arise, but problems are solvable, so let’s focus on what it would take to solve them, and on the hopeful signs that they’re already being solved.

With climate change, Pinker gives an eloquent account of the enormity of the crisis, echoing the mainstream scientific consensus in almost every particular.  But he cautions that, if we tell people this is plausibly the end of civilization, they’ll just get fatalistic and paralyzed, so it’s better to talk about solutions.  He recommends an aggressive program of carbon pricing, carbon capture and storage, nuclear power, research into new technologies, and possibly geoengineering, guided by strong international cooperation—all things I’d recommend as well.  OK, but what are the indications that anything even close to what’s needed will get done?  The right time to get started, it seems to me, was over 40 years ago.  Since then, the political forces that now control the world’s largest economy have spiralled into ever more vitriolic denial, the more urgent the crisis has gotten and the more irrefutable the evidence.  Pinker writes:

“We cannot be complacently optimistic about climate change, but we can be conditionally optimistic.  We have some practicable ways to prevent the harms and we have the means to learn more.  Problems are solvable.  That does not mean that they will solve themselves, but it does mean that we can solve them if we sustain the benevolent forces of modernity that have allowed us to solve problems so far…” (p. 154-155)

I have no doubt that conditional optimism is a useful stance to adopt, in this case as in many others.  The trouble, for me, is the gap between the usefulness of a view and its probable truth—a gap that Pinker would be quick to remind me about in other contexts.  Even if a placebo works for those who believe in it, how do you make yourself believe in what you understand to be a placebo?  Even if all it would take, for the inmates to escape a prison, is simultaneously optimism that they’ll succeed if they work together—still, how can an individual inmate be optimistic, if he sees that the others aren’t, and rationally concludes that dying in prison is his probable fate?  For me, the very thought of the earth gone desolate—its remaining land barely habitable, its oceans a sewer, its radio beacons to other worlds fallen silent—all for want of ability to coordinate a game-theoretic equilibrium, just depresses me even more.

Likewise with thermonuclear war: Pinker knows, of course, that even if there were “only” a 0.5% chance of one per year, multiplied across the decades of the nuclear era, that’s enormously, catastrophically too high, and there have already been too many close calls.  But look on the bright side: the US and Russia already reduced their arsenals dramatically from their Cold War high.  There’d be every reason for optimism about continued progress, if we weren’t in this freak branch of the wavefunction where the US and Russia (not to mention North Korea and other nuclear states) were controlled by authoritarian strongmen.

With Trump—how could anyone avoid him in a book like this?—Pinker spends several pages reviewing the damage he’s inflicted on democratic norms, the international order, the environment, and the ideal of truth itself:

“Trump’s barefaced assertion of canards that can instantly be debunked … shows that he sees public discourse not as a means of finding common ground based on objective reality but as a weapon with which to project dominance and humiliate rivals.”

Pinker then writes a sentence that made me smile ruefully: “Not even a congenital optimist can see a pony in this Christmas stocking” (p. 337).  Again, though, Pinker looks at poll data suggesting that Trump and the world’s other resurgent quasi-fascists are not the wave of the future, but the desperate rearguard actions of a dwindling and aging minority that feels itself increasingly marginalized by the modern world (and accurately so).  The trouble is, Nazism could also be seen as “just” a desperate, failed attempt to turn back the ratchet of cosmopolitanism and moral progress, by people who viscerally understood that time and history were against them.  Yet even though Nazism ultimately lost (which was far from inevitable, I think), the damage it inflicted on the way out was enough, you might say, to vindicate the shrillest pessimist of the 1930s.

Then there’s the matter of takeover by superintelligent AI.  I’ve now spent years hanging around communities where it’s widely accepted that “AI value alignment” is the most pressing problem facing humanity.  I strongly disagree with this view—but on reflection, not because I don’t think AI could be a threat; only because I think other, more prosaic things are much more imminent threats!  I feel the urge to invent a new, 21st-century Yiddish-style proverb: “oy, that we should only survive so long to see the AI-bots become our worst problem!”

Pinker’s view is different: he’s dismissive of the fear (even putting it in the context of the Y2K bug, and people marching around sidewalks with sandwich boards that say “REPENT”), and thinks the AI-risk folks are simply making elementary mistakes about the nature of intelligence.  Pinker’s arguments are as follows: first, intelligence is not some magic, all-purpose pixie dust, which humans have more of than animals, and which a hypothetical future AI would have more of than humans.  Instead, the brain is a bundle of special-purpose modules that evolved for particular reasons, so “the concept [of artificial general intelligence] is barely coherent” (p. 298).  Second, it’s only humans’ specific history that causes them to think immediately about conquering and taking over, as goals to which superintelligence would be applied.  An AI could have different motivations entirely—and it will, if its programmers have any sense.  Third, any AI would be constrained by the resource limits of the physical world.  For example, just because an AI hatched a brilliant plan to recursively improve itself, doesn’t mean it could execute that plan without (say) building a new microchip fab, acquiring the necessary raw materials, and procuring the cooperation of humans.  Fourth, it’s absurd to imagine a superintelligence converting the universe into paperclips because of some simple programming flaw or overliteral interpretation of human commands, since understanding nuances is what intelligence is all about:

“The ability to choose an action that best satisfies conflicting goals is not an add-on to intelligence that engineers might slap themselves in the forehead for forgetting to install; it is intelligence.  So is the ability to interpret the intentions of a language user in context” (p. 300).

I’ll leave it to those who’ve spent more time thinking about these issues to examine these arguments in detail (in the comments of this post, if they like).  But let me indicate briefly why I don’t think they fare too well under scrutiny.

For one thing, notice that the fourth argument is in fundamental tension with the first and second.  If intelligence is not an all-purpose elixir but a bundle of special-purpose tools, and if those tools can be wholly uncoupled from motivation, then why couldn’t we easily get vast intelligence expended toward goals that looked insane from our perspective?  Have humans never been known to put their intelligence in the service of ends that strike many of us as base, evil, simpleminded, or bizarre?  Consider the phrase often applied to men: “thinking with their dicks.”  Is there any sub-Einsteinian upper bound on the intelligence of the men who’ve been guilty of that?

Second, while it seems clear that there are many special-purpose mental modules—the hunting instincts of a cat, the mating calls of a bird, the pincer-grasping or language-acquisition skills of a human—it seems equally clear that there is some such thing as “general problem-solving ability,” which Newton had more of than Roofus McDoofus, and which even Roofus has more of than a chicken.  But whatever we take that ability to consist of, and whether we measure it by a scalar or a vector, it’s hard to imagine that Newton was anywhere near whatever limits on it are imposed by physics.  His brain was subject to all sorts of archaic evolutionary constraints, from the width of the birth canal to the amount of food available in the ancestral environment, and possibly also to diminishing returns on intelligence in humans’ social environment (Newton did, after all, die a virgin).  But if so, then given the impact that Newton, and others near the ceiling of known human problem-solving ability, managed to have even with their biology-constrained brains, how could we possibly see the prospect of removing those constraints as a narrow technological matter, like building a faster calculator or a more precise clock?

Third, the argument about intelligence being constrained by physical limits would seem to work equally well for a mammoth or cheetah scoping out the early hominids.  The mammoth might say: yes, these funny new hairless apes are smarter than me, but intelligence is just one factor among many, and often not the decisive factor.  I’m much bigger and stronger, and the cheetah is faster.  Of course we know what happened: from wild animals’ perspective, the arrival of humans really was a catastrophic singularity, comparable to the Chicxulub asteroid (and far from over), albeit one that took between 10^4 and 10^6 years depending on when we start the clock.  Over the short term, the optimistic mammoths would be right: pure, disembodied intelligence can’t just magically transform itself into spears and poisoned arrows that render you extinct.  Over the long term, the most paranoid mammoth on the tundra couldn’t imagine the half of what the new “superintelligence” would do.

Finally, any argument that relies on human programmers choosing not to build an AI with destructive potential, has to contend with the fact that humans did invent, among other things, nuclear weapons—and moreover, for what seemed like morally impeccable reasons at the time.  And a dangerous AI would be a lot harder to keep from proliferating, since it would consist of copyable code.  And it would only take one.  You could, of course, imagine building a good AI to neutralize the bad AIs, but by that point there’s not much daylight left between you and the AI-risk people.

As you’ve probably gathered, I’m a worrywart by temperament (and, I like to think, experience), and I’ve now spent a good deal of space on my disagreements with Pinker that flow from that.  But the funny part is, even though I consistently see clouds where he sees sunshine, we’re otherwise looking at much the same scene, and our shared view also makes us want the same things for the world.  I find myself in overwhelming, nontrivial agreement with Pinker about the value of science, reason, humanism, and Enlightenment; about who and what deserves credit for the stunning progress humans have made; about which tendencies of civilization to nurture and which to recoil in horror from; about how to think and write about any of those questions; and about a huge number of more specific issues.

So my advice is this: buy Pinker’s book and read it.  Then work for a future where the book’s optimism is justified.

by Scott at March 22, 2018 10:40 PM UTC

The first known classical “computer” was the Antikythera mechanism, an analog machine used to simulate the classical mechanics governing dynamics of celestial bodies on an astronomical scale. Similarly, a major ambition of quantum computers is to simulate the quantum mechanics governing dynamics of particles on the atomic scale. These simulations are often classically intractable due to the complex quantum mechanics at play. Of particular interest is the simulation of electrons forming chemical bonds, which give rise to the properties of essentially all molecules, materials and chemical reactions.
Left: The first known computing device, the Antikythera mechanism: a classical machine used to simulate classical mechanics. Right: Google’s 22 Xmon qubit “foxtail” chip arranged in a bilinear array on a wafer, the predecessor to Google’s new Bristlecone quantum processor with 72 qubits, a quantum machine we intend to use to simulate quantum mechanics, among other applications.
Since the launch of the Quantum AI team in 2013, we have been developing practical algorithms for quantum processors. In 2015, we conducted the first quantum chemistry experiment on a superconducting quantum computing device, published in Physical Review X. More recently, our quantum simulation effort experimentally simulated exotic phases of matter and released the first software package for quantum computing chemistry, OpenFermion. Earlier this month, our hardware team announced the new Bristlecone quantum processor with 72 qubits.

Today, we highlight two recent publications with theoretical advances that significantly reduce the cost of these quantum computations. Our results were presented at the Quantum Information Processing and IBM ThinkQ conferences.

The first of these works, “Low-Depth Quantum Simulation of Materials,” published this week in Physical Review X, was a collaboration between researchers at Google, the group of Professor Garnet Chan at Caltech, and the QuArC group at Microsoft. Our fundamental advance was to realize that by changing how molecules are represented on quantum computers, we can greatly simplify the quantum circuits required to solve the problem. Specifically, we design basis sets so that the equations describing the system energies (i.e. the Hamiltonian) become more straightforward to express for quantum computation.

To do this, we focused on using basis sets related to functions (plane waves) used in classical electronic structure calculations to provide a periodic representation of the physical system. This enables one to go beyond the quantum simulation of single molecules and instead use quantum computers to model realistic materials. For instance, instead of simulating a single lithium hydride molecule floating in free space, with our approach one can quantum simulate a crystal of lithium hydride, which is how the material appears in nature. With larger quantum computers one could study other important materials problems such as the degradation of battery cathodes, chemical reactions involving heterogeneous catalysts, or the unusual electrical properties of graphene and superconductors.
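
To make the basis-set idea concrete, here is a minimal numpy sketch (my own toy illustration, not code from the papers or from OpenFermion): in a plane-wave basis the kinetic-energy operator is diagonal in momentum space, and the dual real-space grid is reached by a simple Fourier transform. It is this kind of diagonal, easily expressed structure that keeps the corresponding quantum circuits shallow.

    import numpy as np

    # Toy 1D example: a diagonal kinetic-energy operator in the plane-wave
    # (momentum) basis, and the same operator on the dual real-space grid,
    # obtained via a discrete Fourier transform.
    N = 8                                        # number of plane-wave modes (toy size)
    L = 1.0                                      # box length
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

    T_momentum = np.diag(k ** 2 / 2.0)           # kinetic term: diagonal in this basis
    F = np.fft.fft(np.eye(N), norm="ortho")      # unitary DFT matrix
    T_position = F.conj().T @ T_momentum @ F     # same operator on the dual grid (dense)

    print(np.count_nonzero(np.abs(T_momentum) > 1e-12))   # diagonal entries only
    print(np.count_nonzero(np.abs(T_position) > 1e-12))   # generically ~N^2 entries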

In “Quantum Simulation of Electronic Structure with Linear Depth and Connectivity,” published last week in Physical Review Letters with the same collaborators and a Google intern from the Aspuru-Guzik group at Harvard, we leverage the structure introduced in the work above to design algorithms for near-term quantum computers with qubits laid out in a linear array. Whereas past methods required such quantum computers to run for time scaling as the fifth power of the number of simulated electrons for each dynamic step, our improved algorithm runs for time scaling linearly with respect to the number of electrons. This reduction in computational cost makes it viable to perform quantum chemistry simulations on near-term devices with fewer gates in each quantum circuit, possibly avoiding the need for full error-correction.

Even with these improvements, it is no small task to deploy such new technology to outperform classical quantum chemistry algorithms and methods which have been refined in parallel with the development of classical computers for more than eighty years. However, at the current rate of advances in quantum algorithms and hardware, quantum technologies may provide chemists with an invaluable new tool. We look forward to sharing our research results as they develop.

by Research Blog at March 22, 2018 05:00 PM UTC

I was asked to post the following notice for the upcoming Swedish Summer School for (theoretical) computer scientists.  I gave some lectures for it a couple of summers back, and really enjoyed it.  Maybe the students did also. 

A warning that Djuronaset is not in the city of Stockholm, but a bit over an hour away from downtown Stockholm by bus.  While not in the city, it's a beautiful area for walks, and the facilities are very nice.  (In particular, they have wonderful saunas in the hotel that should be used daily.)


The 5th Swedish Summer School in Computer Science will be held August 5-11, 2018, in the beautiful Stockholm archipelago at Djuronaset. The school runs for a full week Monday-Friday in early August, when Sweden is at its loveliest, with arrival on Sunday evening and departure Saturday morning.

We will celebrate our 5th anniversary by going significantly out of our comfort zone and learning about quantum computation. Ronald de Wolf will give a series of lectures accessible to people with no prior knowledge of quantum computing. The idea is that this will help all those of us who routinely skip intimidating quantum talks at workshops and conferences to overcome our deepest fears and learn enough so that during the next conference we can confidently go to the quantum sessions and actually understand some of what is going on (and maybe even ask a smart question or two).

Another reason for being scared of quantum is that the crypto systems we know and love might no longer be safe. But fear not: Oded Regev will give lectures on lattices and cryptography, explaining, among other things, how to survive in a post-quantum world. Other exciting topics that Oded will touch upon are the Learning With Errors (LWE) problem and fully homomorphic encryption.

The summer school is primarily intended for PhD students, but postdocs and bright MSc students are also warmly welcome (and also faculty, subject to availability of slots).

The application deadline is April 20, 2018. Please see the school website for more information, including instructions on how to apply. Any questions can be directed to the organizers.

by Michael Mitzenmacher at March 22, 2018 01:54 PM UTC

Authors: Frédéric Magniez, Ashwin Nayak, Jérémie Roland, Miklos Santha
Download: PDF
Abstract: We propose a new method for designing quantum search algorithms for finding a "marked" element in the state space of a classical Markov chain. The algorithm is based on a quantum walk à la Szegedy (2004) that is defined in terms of the Markov chain. The main new idea is to apply quantum phase estimation to the quantum walk in order to implement an approximate reflection operator. This operator is then used in an amplitude amplification scheme. As a result we considerably expand the scope of the previous approaches of Ambainis (2004) and Szegedy (2004). Our algorithm combines the benefits of these approaches in terms of being able to find marked elements, incurring the smaller cost of the two, and being applicable to a larger class of Markov chains. In addition, it is conceptually simple and avoids some technical difficulties in the previous analyses of several algorithms based on quantum walk.

at March 22, 2018 12:00 AM UTC

Authors: Hartmut Klauck, Ashwin Nayak, Amnon Ta-Shma, David Zuckerman
Download: PDF
Abstract: In some scenarios there are ways of conveying information with many fewer, even exponentially fewer, qubits than possible classically. Moreover, some of these methods have a very simple structure--they involve only a few message exchanges between the communicating parties. It is therefore natural to ask whether every classical protocol may be transformed to a "simpler" quantum protocol--one that has similar efficiency, but uses fewer message exchanges.

We show that for any constant k, there is a problem such that its k+1 message classical communication complexity is exponentially smaller than its k message quantum communication complexity. This, in particular, proves a round hierarchy theorem for quantum communication complexity, and implies, via a simple reduction, an Omega(N^{1/k}) lower bound for k message quantum protocols for Set Disjointness for constant k.

En route, we prove information-theoretic lemmas, and define a related measure of correlation, the informational distance, that we believe may be of significance in other contexts as well.

at March 22, 2018 12:41 AM UTC

Authors: Frederic Magniez, Ashwin Nayak
Download: PDF
Abstract: We consider the problem of testing the commutativity of a black-box group specified by its k generators. The complexity (in terms of k) of this problem was first considered by Pak, who gave a randomized algorithm involving O(k) group operations. We construct a quite optimal quantum algorithm for this problem whose complexity is in O (k^{2/3}). The algorithm uses and highlights the power of the quantization method of Szegedy. For the lower bound of Omega(k^{2/3}), we give a reduction from a special case of Element Distinctness to our problem. Along the way, we prove the optimality of the algorithm of Pak for the randomized model.

at March 22, 2018 12:45 AM UTC

Authors: Ashwin Nayak, Julia Salzman
Download: PDF
Abstract: Shared entanglement is a resource available to parties communicating over a quantum channel, much akin to public coins in classical communication protocols. Whereas shared randomness does not help in the transmission of information, or significantly reduce the classical complexity of computing functions (as compared to private-coin protocols), shared entanglement leads to startling phenomena such as "quantum teleportation" and "superdense coding."

The problem of characterising the power of prior entanglement has puzzled many researchers. In this paper, we revisit the problem of transmitting classical bits over an entanglement-assisted quantum channel. We derive a new, optimal bound on the number of quantum bits required for this task, for any given probability of error. All known lower bounds in the setting of bounded error entanglement-assisted communication are based on sophisticated information theoretic arguments. In contrast, our result is derived from first principles, using a simple linear algebraic technique.

at March 22, 2018 12:40 AM UTC

Authors: Pedro F. Felzenszwalb
Download: PDF
Abstract: We consider a problem that involves finding similar elements in a collection of sets. The problem is motivated by applications in machine learning and pattern recognition. We formulate the similar elements problem as an optimization problem and give an efficient approximation algorithm that finds a solution within a factor of 2 of the optimal. The similar elements problem is a special case of the metric labeling problem and we also give an efficient 2-approximation algorithm for the metric labeling problem on complete graphs.

at March 22, 2018 12:46 AM UTC

Authors: Krzysztof Domino, Adam Glos
Download: PDF
Abstract: In this paper we present an algorithm that changes a subset of the marginals of multivariate normally distributed data into marginals modelled by an Archimedean copula. The proposed algorithm leaves the correlation matrix almost unchanged, but introduces higher order cross-correlations measured by higher order multivariate cumulant tensors. Given the algorithm, we analyse the ability of cumulant-based feature selection methods to detect the subset of changed data. We show numerically that the performance of the method based on the second cumulant (the covariance matrix) is weak compared to a method that uses the third-order multivariate cumulant tensor. Our data generation algorithm can be used for hiding information in randomly distributed data or for comparing feature discrimination algorithms.

at March 22, 2018 12:45 AM UTC

Authors: Ali Gholami Rudi
Download: PDF
Abstract: A stay point of a moving entity is a region in which it spends a significant amount of time. In this paper, we identify all stay points of an entity in a certain time interval, where the entity is allowed to leave the region but must return within a given time limit. This definition of stay points seems more natural in many applications of trajectory analysis than those that do not limit the duration of the entity's absence from the region. We present an $O(n \log n)$ algorithm for trajectories in $R^1$ with $n$ vertices and a $(1 + \epsilon)$-approximation algorithm for trajectories in $R^2$ for identifying all such stay points. We also show that the existence of an incremental algorithm with time complexity $o(n^3)$ for two-dimensional trajectories is unlikely.

at March 22, 2018 12:48 AM UTC

Authors: Amir Ali Ahmadi, Jeffrey Zhang
Download: PDF
Abstract: We prove that unless P=NP, there exists no polynomial time (or even pseudo-polynomial time) algorithm that can test whether the optimal value of a nonlinear optimization problem where the objective and constraints are given by low-degree polynomials is attained. If the degrees of these polynomials are fixed, our results along with previously-known "Frank-Wolfe type" theorems imply that exactly one of two cases can occur: either the optimal value is attained on every instance, or it is strongly NP-hard to distinguish attainment from non-attainment. We also show that testing for some well-known sufficient conditions for attainment of the optimal value, such as coercivity of the objective function and closedness and boundedness of the feasible set, is strongly NP-hard. As a byproduct, our proofs imply that testing the Archimedean property of a quadratic module is strongly NP-hard, a property that is of independent interest to the convergence of the Lasserre hierarchy. Finally, we give semidefinite programming (SDP)-based sufficient conditions for attainment of the optimal value, in particular a new characterization of coercive polynomials that lends itself to an SDP hierarchy.

at March 22, 2018 12:43 AM UTC

Authors: Jean-Daniel Boissonnat, Ramsay Dyer, Arijit Ghosh, Mathijs Wintraecken
Download: PDF
Abstract: We present criteria for establishing a triangulation of a manifold. Given a manifold M, a simplicial complex A, and a map H from the underlying space of A to M, our criteria are presented in local coordinate charts for M, and ensure that H is a homeomorphism. These criteria do not require a differentiable structure, or even an explicit metric on M. No Delaunay property of A is assumed. The result provides a triangulation guarantee for algorithms that construct a simplicial complex by working in local coordinate patches. Because the criteria are easily verified in such a setting, they are expected to be of general use.

at March 22, 2018 12:48 AM UTC

Authors: Elizabeth Munch, Anastasios Stefanou
Download: PDF
Abstract: There are many metrics available to compare phylogenetic trees since this is a fundamental task in computational biology. In this paper, we focus on one such metric, the $\ell^\infty$-cophenetic metric introduced by Cardona et al. This metric works by representing a phylogenetic tree with $n$ labeled leaves as a point in $\mathbb{R}^{n(n+1)/2}$ known as the cophenetic vector, then comparing the two resulting Euclidean points using the $\ell^\infty$ distance. Meanwhile, the interleaving distance is a formal categorical construction generalized from the definition of Chazal et al., originally introduced to compare persistence modules arising from the field of topological data analysis. We show that the $\ell^\infty$-cophenetic metric is an example of an interleaving distance. To do this, we define phylogenetic trees as a category of merge trees with some additional structure; namely labelings on the leaves plus a requirement that morphisms respect these labels. Then we can use the definition of a flow on this category to give an interleaving distance. Finally, we show that, because of the additional structure given by the categories defined, the map sending a labeled merge tree to the cophenetic vector is, in fact, an isometric embedding, thus proving that the $\ell^\infty$-cophenetic metric is, in fact, an interleaving distance.

at March 22, 2018 12:48 AM UTC

Authors: Shima Bab Hadiashar, Ashwin Nayak, Renato Renner
Download: PDF
Abstract: Quantum teleportation uses prior shared entanglement and classical communication to send an unknown quantum state from one party to another. Remote state preparation (RSP) is a similar distributed task in which the sender knows the entire classical description of the state to be sent. (This may also be viewed as the task of non-oblivious compression of a single sample from an ensemble of quantum states.) We study the communication complexity of approximate remote state preparation, in which the goal is to prepare an approximation of the desired quantum state. Jain [Quant. Inf. & Comp., 2006] showed that the worst-case communication complexity of approximate RSP can be bounded from above in terms of the maximum possible information in an encoding. He also showed that this quantity is a lower bound for communication complexity of (exact) remote state preparation. In this work, we tightly characterize the worst-case and average-case communication complexity of remote state preparation in terms of non-asymptotic information-theoretic quantities. We also show that the average-case communication complexity of RSP can be much smaller than the worst-case one. In the process, we show that n bits cannot be communicated with less than n transmitted bits in LOCC protocols. This strengthens a result due to Nayak and Salzman [J. ACM, 2006] and may be of independent interest.

at March 22, 2018 12:40 AM UTC

Authors: Ashwin Nayak, Dave Touchette
Download: PDF
Abstract: We show how two recently developed quantum information theoretic tools can be applied to obtain lower bounds on quantum information complexity. We also develop new tools with potential for broader applicability, and use them to establish a lower bound on the quantum information complexity for the Augmented Index function on an easy distribution. This approach allows us to handle superpositions rather than distributions over inputs, the main technical challenge faced previously. By providing a quantum generalization of the argument of Jain and Nayak [IEEE TIT'14], we leverage this to obtain a lower bound on the space complexity of multi-pass, unidirectional quantum streaming algorithms for the DYCK(2) language.

at March 22, 2018 12:41 AM UTC

Authors: Frederic Magniez, Ashwin Nayak, Peter C. Richter, Miklos Santha
Download: PDF
Abstract: In this paper we define new Monte Carlo type classical and quantum hitting times, and we prove several relationships among these and the already existing Las Vegas type definitions. In particular, we show that for some marked state the two types of hitting time are of the same order in both the classical and the quantum case.

Further, we prove that for any reversible ergodic Markov chain $P$, the quantum hitting time of the quantum analogue of $P$ has the same order as the square root of the classical hitting time of $P$. We also investigate the (im)possibility of achieving a gap greater than quadratic using an alternative quantum walk.

Finally, we present new quantum algorithms for the detection and finding problems. The complexities of both algorithms are related to the new, potentially smaller, quantum hitting times. The detection algorithm is based on phase estimation and is particularly simple. The finding algorithm combines a similar phase estimation based procedure with ideas of Tulsi from his recent theorem for the 2D grid. Extending his result, we show that for any state-transitive Markov chain with unique marked state, the quantum hitting time is of the same order for both the detection and finding problems.

at March 22, 2018 12:45 AM UTC

I was recently talking with an historian who mentioned to me that when Stephen Hawking lost the use of his hands (so could no longer write on a whiteboard), he had to switch from thinking algebraically to thinking geometrically. The historian asked me about this distinction, and I attempted to explain these alternative modes of mathematical thinking. We discovered an analogy that is so cute that I have to write it up: the historian, who specialises in shipping in the Mediterranean, often visualises the scenery, and life on board the ships, but does not end up illustrating his books with his imagined scenes. Visualising the past environment presumably helps to understand the narrative of what was going on at the time, and of course, that corresponds to the geometrical thinking.

by Paul Goldberg at March 21, 2018 11:00 PM UTC

The next TCS+ talk will take place this coming Wednesday, March 28th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Artur Czumaj from the University of Warwick will speak about “Round Compression for Parallel Matching Algorithms” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context, though, is: can we leverage this additional power to obtain even faster parallel algorithms?

A prominent example here is the fundamental graph problem of finding a maximum matching. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(log n) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in follow-up work, seem, though, to get stuck in a fundamental way at roughly O(log n) rounds once we enter the near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power.

In this talk, we finally show how to refute that perplexing possibility. That is, we break the above O(log n) round complexity bound even in the case of slightly sublinear memory per machine. In fact, our improvement here is almost exponential: we are able to deliver a (2+ϵ) approximation to maximum matching, for any fixed constant ϵ>0, in O((loglog n)^2) rounds.

This is a joint work with Jakub Łącki, Aleksander Mądry, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski.

by plustcs at March 21, 2018 02:09 PM UTC

Authors: Petra Mutzel, Lutz Oettershagen
Download: PDF
Abstract: The Harary-Hill conjecture states that for every $n>0$, the minimum number of crossings over all possible drawings of the complete graph $K_n$ on $n$ vertices equals \begin{align*} H(n) := \frac{1}{4}\Big\lfloor\frac{n}{2}\Big\rfloor\Big\lfloor\frac{n-1}{2}\Big\rfloor\Big\lfloor\frac{n-2}{2}\Big\rfloor\Big\lfloor\frac{n-3}{2}\Big\rfloor\text{.} \end{align*} So far, the lower bound of the conjecture has only been verified for arbitrary drawings of $K_n$ with $n\leq 12$. In recent years, progress has been made in verifying the conjecture for certain classes of drawings, for example $2$-page-book, $x$-monotone, $x$-bounded, shellable and bishellable drawings. Up to now, the class of bishellable drawings was the broadest class for which the Harary-Hill conjecture had been verified, as it contains all of the aforementioned classes. In this work, we introduce the class of seq-shellable drawings and verify the Harary-Hill conjecture for this new class. We show that bishellability implies seq-shellability and exhibit a non-bishellable but seq-shellable drawing of $K_{11}$; therefore the class of seq-shellable drawings strictly contains the class of bishellable drawings.
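
For concreteness, a small Python helper (my own, not from the paper) evaluating the conjectured value $H(n)$ over the range $n\leq 12$ in which the lower bound has been verified:

    def harary_hill(n: int) -> int:
        # Conjectured crossing number of K_n:
        # H(n) = (1/4) * floor(n/2) * floor((n-1)/2) * floor((n-2)/2) * floor((n-3)/2)
        return (n // 2) * ((n - 1) // 2) * ((n - 2) // 2) * ((n - 3) // 2) // 4

    for n in range(5, 13):
        print(n, harary_hill(n))
    # n = 5..12 gives 1, 3, 9, 18, 36, 60, 100, 150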

at March 21, 2018 12:00 AM UTC

Authors: Dmitriy Zhuk
Download: PDF
Abstract: Constraint Satisfaction Problem on finite sets is known to be NP-complete in general but certain restrictions on the constraint language can ensure tractability. It was proved that if a constraint language has a weak near unanimity polymorphism then the corresponding constraint satisfaction problem is tractable, otherwise it is NP-complete. In the paper we present a modification of the algorithm that works in polynomial time even for infinite constraint languages.

at March 21, 2018 12:40 AM UTC

Authors: Bernd Schuh
Download: PDF
Abstract: We derive an upper bound on the number of models for exact satisfiability (XSAT) of arbitrary CNF formulas F. The bound can be calculated solely from the distribution of positive and negated literals in the formula. For certain subsets of CNF instances the new bound can be computed in sub-exponential time, namely in at most O(exp(sqrt(n))) , where n is the number of variables of F. A wider class of SAT problems beyond XSAT is defined to which the method can be extended.

at March 21, 2018 12:40 AM UTC

Authors: Filip Hanzely, Peter Richtárik
Download: PDF
Abstract: Relative smoothness - a notion introduced by Birnbaum et al. (2011) and rediscovered by Bauschke et al. (2016) and Lu et al. (2016) - generalizes the standard notion of smoothness typically used in the analysis of gradient-type methods. In this work we take ideas from the well-studied field of stochastic convex optimization and use them to obtain faster algorithms for minimizing relatively smooth functions. We propose and analyze two new algorithms: Relative Randomized Coordinate Descent (relRCD) and Relative Stochastic Gradient Descent (relSGD), both generalizing famous algorithms in the standard smooth setting. The methods we propose can in fact be seen as particular instances of stochastic mirror descent algorithms. One of them, relRCD, corresponds to the first stochastic variant of the mirror descent algorithm with a linear convergence rate.

at March 21, 2018 12:40 AM UTC

Authors: Ivan S. Zapreev, Cees Verdier, Manuel Mazo Jr
Download: PDF
Abstract: Controller synthesis techniques based on symbolic abstractions appeal by producing correct-by-design controllers, under intricate behavioural constraints. Yet, being relations between abstract states and inputs, such controllers are immense in size, which makes them futile for embedded platforms. Control-synthesis tools such as PESSOA, SCOTS, and CoSyMA tackle the problem by storing controllers as binary decision diagrams (BDDs). However, due to redundantly keeping multiple inputs per state, the resulting controllers are still too large. In this work, we first show that choosing an optimal controller determinization is an NP-complete problem. Further, we consider the previously known controller determinization technique and discuss its weaknesses. We suggest several new approaches to the problem, based on greedy algorithms, symbolic regression, and (multi-terminal) BDDs. Finally, we empirically compare the techniques and show that some of the new algorithms can produce up to 85% smaller controllers than those obtained with the previous technique.

at March 21, 2018 12:40 AM UTC

Authors: Jinshi Yu, Guoxu Zhou, Andrzej Cichocki, Shengli Xie
Download: PDF
Abstract: Nonsmooth Nonnegative Matrix Factorization (nsNMF) is capable of producing more localized, less overlapped feature representations than other variants of NMF while keeping a satisfactory fit to the data. However, nsNMF, like other existing NMF methods, is unable to learn hierarchical features of complex data due to its shallow structure. To fill this gap, we propose a deep nsNMF method, so named because it possesses a deeper architecture than standard nsNMF. The deep nsNMF not only gives parts-based features, owing to the nonnegativity constraints, but also creates higher-level, more abstract features by combining lower-level ones. We present an in-depth description of how the deep architecture helps to efficiently discover abstract features in deep nsNMF, and we show that deep nsNMF is closely related to the deep autoencoder, suggesting that the proposed model inherits the major advantages of both deep learning and NMF. Extensive experiments demonstrate the standout performance of the proposed method in clustering analysis.
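
For context, a minimal Python sketch of the classical multiplicative-update NMF (Lee-Seung style) is given below; it is the shallow building block only, not the deep nsNMF proposed in the paper.

  import numpy as np

  def nmf(X, r, iters=200, eps=1e-9):
      # Classical multiplicative updates for X ~ W @ H with W, H >= 0 (Frobenius loss).
      m, n = X.shape
      rng = np.random.default_rng(0)
      W = rng.random((m, r))
      H = rng.random((r, n))
      for _ in range(iters):
          H *= (W.T @ X) / (W.T @ W @ H + eps)
          W *= (X @ H.T) / (W @ H @ H.T + eps)
      return W, H

  X = np.random.default_rng(1).random((20, 30))
  W, H = nmf(X, r=5)
  print(np.linalg.norm(X - W @ H))   # reconstruction error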

at March 21, 2018 12:40 AM UTC

Authors: Sharath Raghvendra
Download: PDF
Abstract: In the online metric bipartite matching problem, we are given a set $S$ of server locations in a metric space. Requests from a set $R$ arrive one at a time, and on its arrival a request must immediately and irrevocably be matched to a server, at a cost equal to the distance between the two locations. An $\alpha$-competitive algorithm assigns requests to servers so that the total cost is at most $\alpha$ times the cost of $M_{OPT}$, where $M_{OPT}$ is the minimum-cost matching between $S$ and $R$.

We consider this problem in the adversarial model for the case where $S$ and $R$ are points on a line and $|S|=|R|=n$. We improve the analysis of the deterministic Robust Matching Algorithm (RM-Algorithm, Nayyar and Raghvendra FOCS'17) from $O(\log^2 n)$ to an optimal $\Theta(\log n)$. Previously, only a randomized algorithm under a weaker oblivious adversary achieved a competitive ratio of $O(\log n)$ (Gupta and Lewi, ICALP'12). The well-known Work Function Algorithm (WFA) has a competitive ratio of $O(n)$ and $\Omega(\log n)$ for this problem. Therefore, WFA cannot achieve an asymptotically better competitive ratio than the RM-Algorithm.
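
For intuition only, the following Python sketch implements the naive greedy baseline on the line, matching each request to the nearest free server; this is not the RM-Algorithm analyzed in the paper, and the toy example hints at why greedy alone is not competitive.

  def greedy_online_matching(servers, requests):
      # servers: list of server positions on the line; requests arrive one at a time.
      free = sorted(servers)
      total = 0.0
      for r in requests:
          s = min(free, key=lambda x: abs(x - r))   # nearest free server (ties to the left)
          total += abs(s - r)
          free.remove(s)
      return total

  # Toy usage: servers at 0 and 2, requests at 1 then 0.
  # Greedy pays 3 (ties broken toward the left server), while the optimal matching pays 1.
  print(greedy_online_matching([0, 2], [1, 0]))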

at March 21, 2018 12:00 AM UTC

Authors: Ali Dasdan
Download: PDF
Abstract: The Fibonacci numbers are a sequence of integers in which every number after the first two, 0 and 1, is the sum of the two preceding numbers. These numbers are well known and algorithms to compute them are so easy that they are often used in introductory algorithms courses. In this paper, we present eleven of these well-known algorithms and some of their properties. These algorithms, though very simple, illustrate multiple concepts from the algorithms field, so we highlight them. We also present the results of a small-scale experimental comparison of their runtimes on a personal laptop. Finally, we provide a list of homework questions for the students. We hope that this paper can serve as a useful resource for the students learning the basics of algorithms.
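
As a small taste of the spread in asymptotic cost, here are two standard textbook variants in Python: the linear-time iterative method and the O(log n) fast-doubling method. Both are folklore algorithms and not necessarily identical to the eleven presented in the paper.

  def fib_iterative(n):
      # O(n) additions: the classic loop.
      a, b = 0, 1
      for _ in range(n):
          a, b = b, a + b
      return a

  def fib_fast_doubling(n):
      # O(log n) arithmetic operations via the identities
      # F(2k) = F(k) * (2*F(k+1) - F(k)),  F(2k+1) = F(k)^2 + F(k+1)^2.
      def fd(k):
          if k == 0:
              return (0, 1)
          f, g = fd(k // 2)            # (F(m), F(m+1)) with m = k // 2
          c = f * (2 * g - f)          # F(2m)
          d = f * f + g * g            # F(2m+1)
          return (d, c + d) if k % 2 else (c, d)
      return fd(n)[0]

  assert [fib_iterative(i) for i in range(10)] == [fib_fast_doubling(i) for i in range(10)]
  print(fib_fast_doubling(100))   # 354224848179261915075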

at March 21, 2018 12:54 AM UTC

Authors: Tolga Birdal, Benjamin Busam, Nassir Navab, Slobodan Ilic, Peter Sturm
Download: PDF
Abstract: This paper proposes a segmentation-free, automatic and efficient procedure to detect general geometric quadric forms in point clouds, where clutter and occlusions are inevitable. Our everyday world is dominated by man-made objects which are designed using 3D primitives (such as planes, cones, spheres, cylinders, etc.). These objects are also omnipresent in industrial environments. This gives rise to the possibility of abstracting 3D scenes through primitives, thereby positioning these geometric forms as an integral part of perception and high-level 3D scene understanding.

In contrast to the state of the art, where a tailored algorithm treats each primitive type separately, we propose to encapsulate all types in a single robust detection procedure. At the center of our approach lies a closed-form 3D quadric fit, operating in both primal and dual spaces and requiring as few as 4 oriented points. Around this fit, we design a novel, local null-space voting strategy to reduce the 4-point case to 3. Voting is coupled with RANSAC and makes our algorithm orders of magnitude faster than its conventional counterparts. This is the first method capable of performing generic cross-type multi-object primitive detection in difficult scenes. Results on synthetic and real datasets support the validity of our method.
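
The closed-form quadric fit and the null-space voting are specific to the paper; as a generic reminder of the RANSAC ingredient only, here is a minimal Python sketch that fits a plane to a noisy point cloud (a stand-in example, not the authors' quadric detector).

  import numpy as np

  def ransac_plane(points, iters=500, tol=0.02, rng=np.random.default_rng(0)):
      # points: (N, 3) array. Returns (normal, offset) of the plane with the most inliers.
      best_inliers, best_model = 0, None
      for _ in range(iters):
          idx = rng.choice(len(points), size=3, replace=False)
          p0, p1, p2 = points[idx]
          n = np.cross(p1 - p0, p2 - p0)
          if np.linalg.norm(n) < 1e-12:
              continue                       # degenerate (collinear) sample
          n = n / np.linalg.norm(n)
          d = -n @ p0
          inliers = np.sum(np.abs(points @ n + d) < tol)
          if inliers > best_inliers:
              best_inliers, best_model = inliers, (n, d)
      return best_model

  # Toy usage: noisy samples of the plane z = 0 plus a few outliers.
  rng = np.random.default_rng(1)
  pts = np.column_stack([rng.uniform(-1, 1, (200, 2)), 0.01 * rng.normal(size=200)])
  pts = np.vstack([pts, rng.uniform(-1, 1, (20, 3))])
  print(ransac_plane(pts))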

at March 21, 2018 12:00 AM UTC

Authors: Timothy M. Chan
Download: PDF
Abstract: We make progress on a number of open problems concerning the area requirement for drawing trees on a grid. We prove that

1. every tree of size $n$ (with arbitrarily large degree) has a straight-line drawing with area $n2^{O(\sqrt{\log\log n\log\log\log n})}$, improving the longstanding $O(n\log n)$ bound;

2. every tree of size $n$ (with arbitrarily large degree) has a straight-line upward drawing with area $n\sqrt{\log n}(\log\log n)^{O(1)}$, improving the longstanding $O(n\log n)$ bound;

3. every binary tree of size $n$ has a straight-line orthogonal drawing with area $n2^{O(\log^*n)}$, improving the previous $O(n\log\log n)$ bound by Shin, Kim, and Chwa (1996) and Chan, Goodrich, Kosaraju, and Tamassia (1996);

4. every binary tree of size $n$ has a straight-line order-preserving drawing with area $n2^{O(\log^*n)}$, improving the previous $O(n\log\log n)$ bound by Garg and Rusu (2003);

5. every binary tree of size $n$ has a straight-line orthogonal order-preserving drawing with area $n2^{O(\sqrt{\log n})}$, improving the previous $O(n^{3/2})$ bound by Frati (2007).

at March 21, 2018 12:00 AM UTC

Authors: Ryo Ashida, Kotaro Nakagawa
Download: PDF
Abstract: The directed graph reachability problem takes as input an $n$-vertex directed graph $G=(V,E)$ and two distinguished vertices $s$ and $t$. The problem is to determine whether there exists a path from $s$ to $t$ in $G$. This is a canonical complete problem for the class NL. Asano et al. proposed an $\tilde{O}(\sqrt{n})$-space, polynomial-time algorithm for the directed grid and planar graph reachability problem. The main result of this paper is to show that the directed graph reachability problem restricted to grid graphs can be solved in polynomial time using only $\tilde{O}(n^{1/3})$ space.
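
For contrast with the paper's sublinear-space result, the textbook approach uses linear space and time; the Python sketch below runs a standard BFS on a grid with some cells removed (for simplicity the grid is treated as undirected here, unlike the directed grid graphs considered in the paper).

  from collections import deque

  def grid_reachable(w, h, blocked, start, target):
      """Standard BFS reachability on a w x h grid with some cells removed;
      uses O(wh) space, in contrast to the paper's O~(n^{1/3})-space algorithm."""
      if start in blocked or target in blocked:
          return False
      seen, queue = {start}, deque([start])
      while queue:
          x, y = queue.popleft()
          if (x, y) == target:
              return True
          for cell in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
              nx, ny = cell
              if 0 <= nx < w and 0 <= ny < h and cell not in blocked and cell not in seen:
                  seen.add(cell)
                  queue.append(cell)
      return False

  print(grid_reachable(8, 8, {(1, 0), (1, 1), (1, 2)}, (0, 0), (7, 7)))   # True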

at March 21, 2018 12:54 AM UTC

On behalf of the organizing committee: Ashish Goel, Jason Hartline, Gabriel Carroll, and Nicole Immorlica.

This year, the ACM Conference on Economics and Computation (EC) will host a festival highlighting some of the best work in economics and computation that typically appears in conferences and journals adjacent to EC. The intention of this festival is to expose EC attendees to related work just beyond the boundary of their current awareness.

We seek nominations for papers that have made breakthrough advances, opened up new questions or areas, made unexpected connections, or had significant impact on practice or other sciences. Examples of conferences and journals that publish papers relevant for our festival include STOC/FOCS/SODA, AAAI/IJCAI/AAMAS, NIPS/ICML/COLT, WWW/KDD, AER/Econometrica/QJE/RESTUD/TE/AEJ Micro/JET/GEB, and Math of OR/Management Science/Operations Research. Please email nominations to the organizing committee. Anyone is welcome to contact us, but we especially invite members of PCs or editorial boards in various venues to send us suggestions. Nominations should include:

  • Name of paper and authors.
  • Publication venue or online working version. Preference will be given to papers that have appeared in a related conference or journal within the past two years, or have a working version circulated within the past two years.
  • Short (1-3 paragraph) explanation of the paper and its importance.
  • (Optional) Names of 1-3 knowledgeable experts on the area of the paper.

Note that at least one of the authors of a selected paper will be required to present the paper at EC 2018 and so should be available to travel to the conference, which is taking place in Ithaca, NY, from June 19-21, 2018. To ensure maximum consideration, please send all nominations by March 31, 2018.

by Jason Hartline at March 20, 2018 02:17 AM UTC

We prove that estimating the number of defective items to within a constant factor with a non-adaptive group testing algorithm requires at least $\tilde\Omega((\log n)(\log(1/\delta)))$ tests. This solves the open problem posed by Damaschke and Sheikh Muhammad.

at March 19, 2018 12:44 PM UTC

Authors: Jérémie Chalopin, Victor Chepoi, Feodor F. Dragan, Guillaume Ducoffe, Abdulhakeem Mohammed, Yann Vaxès
Download: PDF
Abstract: In this paper, we study Gromov hyperbolicity and related parameters, which represent how close (locally) a metric space is to a tree from a metric point of view. The study of Gromov hyperbolicity for geodesic metric spaces can be reduced to the study of graph hyperbolicity. Our main contribution in this note is a new characterization of hyperbolicity for graphs (and for complete geodesic metric spaces). This characterization has algorithmic implications in the field of large-scale network analysis, which was one of our initial motivations. A sharp estimate of graph hyperbolicity is useful, e.g., in embedding an undirected graph into hyperbolic space with minimum distortion [Verbeek and Suri, SoCG'14]. The hyperbolicity of a graph can be computed in polynomial time; however, it is unlikely that this can be done in subcubic time. This makes the parameter difficult to compute or to approximate on large graphs. Using our new characterization of graph hyperbolicity, we provide a simple factor-8 approximation algorithm for computing the hyperbolicity of an $n$-vertex graph $G=(V,E)$ in optimal time $O(n^2)$ (assuming that the input is the distance matrix of the graph). This algorithm leads to constant-factor approximations of other graph parameters related to hyperbolicity (thinness, slimness, and insize). We also present the first efficient algorithms for exact computation of these parameters. All of our algorithms can be used to approximate the hyperbolicity of a geodesic metric space.
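
The paper's $O(n^2)$ approximation is not reproduced here; for reference, Gromov's four-point condition can be evaluated exactly by brute force in $O(n^4)$ time from the distance matrix, as in the Python sketch below.

  from itertools import combinations

  def hyperbolicity(D):
      """Exact Gromov hyperbolicity from a distance matrix D (list of lists),
      via the four-point condition; brute force over all quadruples, O(n^4)."""
      n = len(D)
      delta = 0
      for x, y, u, v in combinations(range(n), 4):
          s = sorted([D[x][y] + D[u][v], D[x][u] + D[y][v], D[x][v] + D[y][u]])
          delta = max(delta, (s[2] - s[1]) / 2)
      return delta

  # Toy usage: the 4-cycle has hyperbolicity 1.
  C4 = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
  print(hyperbolicity(C4))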

at March 19, 2018 12:00 AM UTC

Authors: Boris Bukh, Xavier Goaoc, Alfredo Hubard, Matthew Trager
Download: PDF
Abstract: We consider incidences among colored sets of lines in $\mathbb{R}^d$ and examine whether the existence of certain concurrences between lines of $k$ colors forces the existence of at least one concurrence between lines of $k+1$ colors. This question is relevant for problems in 3D reconstruction in computer vision.

at March 19, 2018 12:00 AM UTC

Authors: Sayan Bandyapadhyay, Anil Maheshwari, Saeed Mehrabi, Subhash Suri
Download: PDF
Abstract: We consider the Dominating Set (DS) problem on the intersection graphs of geometric objects. Surprisingly, for simple and widely used objects such as rectangles, the problem is NP-hard even when all the rectangles are "anchored" at a line with slope -1. It is easy to see that for the anchored rectangles, the problem reduces to one with even simpler objects: L-frames. An L-frame is the union of a vertical and a horizontal segment that share one endpoint (corner of the L-frame). In light of the above discussion, we consider DS on the intersection graphs of L-frames.

In this paper, we consider three restricted versions of the problem. First, we consider the version in which the corners of all input L-frames are anchored at a line with slope -1, and obtain a polynomial-time $(2+\epsilon)$-approximation. Furthermore, we obtain a PTAS in case all the input L-frames are anchored at the diagonal from one side. Next, we consider the version where all input L-frames intersect a vertical line, and prove APX-hardness of this version. Moreover, we prove NP-hardness of this version even when the horizontal and vertical segments of each L-frame have the same length. Finally, we consider the version where every L-frame intersects a vertical and a horizontal line, and show that this version is linear-time solvable. We also consider these versions of the problem in the so-called "edge intersection model", and obtain several interesting results. One of the results is an NP-hardness proof for the third version, which answers a question posed by Mehrabi (WAOA 2017).
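
To make the objects concrete, here is a small Python sketch of an intersection test for two axis-parallel L-frames; the corner convention and arm lengths are assumptions chosen for illustration, not taken from the paper.

  def _overlap(a1, a2, b1, b2):
      # 1D closed-interval overlap test.
      return max(min(a1, a2), min(b1, b2)) <= min(max(a1, a2), max(b1, b2))

  def _seg_intersect(s, t):
      # s, t: axis-parallel segments ((x1, y1), (x2, y2)); bounding-box overlap
      # is exact for horizontal/vertical segments.
      (sx1, sy1), (sx2, sy2) = s
      (tx1, ty1), (tx2, ty2) = t
      return _overlap(sx1, sx2, tx1, tx2) and _overlap(sy1, sy2, ty1, ty2)

  def l_frame(corner, horiz_len, vert_len):
      # Assumed convention: horizontal arm to the right, vertical arm upward.
      cx, cy = corner
      return [((cx, cy), (cx + horiz_len, cy)), ((cx, cy), (cx, cy + vert_len))]

  def l_frames_intersect(f1, f2):
      return any(_seg_intersect(s, t) for s in f1 for t in f2)

  # Toy usage: B's vertical arm crosses A's horizontal arm at (1, 0).
  A = l_frame((0, 0), 3, 3)
  B = l_frame((1, -1), 3, 3)
  print(l_frames_intersect(A, B))   # True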

at March 19, 2018 12:00 AM UTC

Authors: Yuko Kuroki, Tomomi Matsui
Download: PDF
Abstract: Transportation networks frequently employ hub-and-spoke architectures to route flows between many origin and destination pairs. Hub facilities work as switching points for flows in large networks. In this study, we deal with a problem called the single allocation hub-and-spoke network design problem. In this problem, the goal is to allocate each non-hub node to exactly one of the given hub nodes so as to minimize the total transportation cost. The problem is essentially equivalent to another combinatorial optimization problem, called the metric labeling problem. The metric labeling problem was first introduced by Kleinberg and Tardos in 2002, motivated by applications to segmentation problems in computer vision and related areas. In this study, we deal with the case where the set of hubs forms a star, which arises especially in telecommunication networks. We propose a polynomial-time randomized approximation algorithm for the problem whose approximation ratio is less than 5.281. Our algorithm solves a linear relaxation and applies dependent rounding procedures.

at March 19, 2018 12:55 AM UTC

Authors: Fedor V. Fomin, Petr A. Golovach, Fahad Panolan
Download: PDF
Abstract: We provide a number of algorithmic results for the following family of problems: For a given binary $m\times n$ matrix $A$ and an integer $k$, decide whether there is a "simple" binary matrix $B$ which differs from $A$ in at most $k$ entries. For an integer $r$, the "simplicity" of $B$ is characterized as follows.

- Binary $r$-Means: Matrix $B$ has at most $r$ different columns. This problem is known to be NP-complete already for $r=2$. We show that the problem is solvable in time $2^{O(k\log k)}\cdot(nm)^{O(1)}$ and thus is fixed-parameter tractable parameterized by $k$. We prove that the problem admits a polynomial kernel when parameterized by $r$ and $k$, but it has no polynomial kernel when parameterized by $k$ only unless NP $\subseteq$ coNP/poly. We also complement these results by showing that, when parameterized by $r$ and $k$, the problem admits an algorithm with running time $2^{O(r\cdot \sqrt{k\log{(k+r)}})}(nm)^{O(1)}$, which is subexponential in $k$ for $r\in O(k^{1/2 -\epsilon})$ for any $\epsilon>0$.

- Low GF(2)-Rank Approximation: Matrix $B$ is of GF(2)-rank at most $r$. This problem is known to be NP-complete already for $r=1$. It is also known to be W[1]-hard when parameterized by $k$. Interestingly, when parameterized by $r$ and $k$, the problem is not only fixed-parameter tractable, but is solvable in time $2^{O(r^{3/2}\cdot \sqrt{k\log{k}})}(nm)^{O(1)}$, which is subexponential in $k$.

- Low Boolean-Rank Approximation: Matrix $B$ is of Boolean rank at most $r$. The problem is known to be NP-complete for $k=0$ as well as for $r=1$. We show that it is solvable in time $2^{O(r2^r\cdot \sqrt{k\log k})}(nm)^{O(1)}$, which is subexponential in $k$.
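
The fixed-parameter and kernelization results above are the paper's contribution; purely to illustrate the Binary $r$-Means objective, here is a simple Lloyd-style heuristic in Python over the Hamming metric with majority-vote centers (a baseline sketch, not the authors' algorithm).

  import numpy as np

  def binary_r_means(A, r, iters=20, seed=0):
      """Heuristic for Binary r-Means: find a binary matrix B with at most r distinct
      columns, trying to minimize the number of entries where B differs from A."""
      rng = np.random.default_rng(seed)
      m, n = A.shape
      centers = A[:, rng.choice(n, size=r, replace=False)].copy()
      for _ in range(iters):
          # Assign each column of A to its nearest center in Hamming distance.
          dists = np.array([[np.count_nonzero(A[:, j] != centers[:, c]) for c in range(r)]
                            for j in range(n)])
          labels = dists.argmin(axis=1)
          # Recompute each center as the entrywise majority of its assigned columns.
          for c in range(r):
              cols = A[:, labels == c]
              if cols.size:
                  centers[:, c] = (cols.mean(axis=1) >= 0.5).astype(A.dtype)
      B = centers[:, labels]
      return B, int(np.count_nonzero(A != B))

  A = (np.random.default_rng(1).random((6, 10)) > 0.5).astype(int)
  B, cost = binary_r_means(A, r=2)
  print(cost)   # number of entries changed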

at March 19, 2018 12:44 AM UTC

Authors: Ahmed Abdelkader, Chandrajit L. Bajaj, Mohamed S. Ebeida, Ahmed H. Mahmoud, Scott A. Mitchell, John D. Owens, Ahmad A. Rushdi
Download: PDF
Abstract: We study the problem of decomposing a volume bounded by a smooth surface into a collection of Voronoi cells. Unlike the dual problem of conforming Delaunay meshing, a principled solution to this problem for generic smooth surfaces remained elusive. VoroCrust leverages ideas from $\alpha$-shapes and the power crust algorithm to produce unweighted Voronoi cells conforming to the surface, yielding the first provably-correct algorithm for this problem. Given an $\epsilon$-sample on the bounding surface, with a weak $\sigma$-sparsity condition, we work with the balls of radius $\delta$ times the local feature size centered at each sample. The corners of this union of balls are the Voronoi sites, on both sides of the surface. The facets common to cells on opposite sides reconstruct the surface. For appropriate values of $\epsilon$, $\sigma$ and $\delta$, we prove that the surface reconstruction is isotopic to the bounding surface. With the surface protected, the enclosed volume can be further decomposed into an isotopic volume mesh of fat Voronoi cells by generating a bounded number of sites in its interior. Compared to clipping-based methods, VoroCrust cells are full Voronoi cells, with convexity and fatness guarantees. Compared to the power crust algorithm, VoroCrust cells are not filtered, are unweighted, and offer greater flexibility in meshing the enclosed volume by either structured grids or random samples.
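
VoroCrust itself relies on sample sparsity conditions and the corners of a union of balls; purely as a 2D toy of the underlying idea of placing Voronoi sites on both sides of the surface and reading the reconstruction off the shared facets, here is a small Python sketch using scipy (the circle, offsets, and site placement are illustrative assumptions, not the paper's construction).

  import numpy as np
  from scipy.spatial import Voronoi

  # Sample a smooth closed curve (a circle) and place sites slightly inside and outside
  # along the normals; Voronoi ridges between opposite-side sites approximate the curve.
  t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
  samples = np.column_stack([np.cos(t), np.sin(t)])   # unit circle; normals equal the points
  delta = 0.1
  sites = np.vstack([samples * (1 - delta), samples * (1 + delta)])
  side = np.array([0] * len(samples) + [1] * len(samples))   # 0 = inside, 1 = outside

  vor = Voronoi(sites)
  surface_ridges = [v for (p, q), v in zip(vor.ridge_points, vor.ridge_vertices)
                    if side[p] != side[q]]
  print(len(surface_ridges), "Voronoi facets approximate the curve")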

at March 19, 2018 12:00 AM UTC