Theory of Computing Blog Aggregator

Authors: Ryo Asai, Andrey Vladimirov
Download: PDF
Abstract: In this paper we demonstrate the methodology for parallelizing the computation of large one-dimensional discrete fast Fourier transforms (DFFTs) on multi-core Intel Xeon processors. DFFTs based on the recursive Cooley-Tukey method have to control cache utilization, memory bandwidth and vector hardware usage, and at the same time scale across multiple threads or compute nodes. Our method builds on single-threaded Intel Math Kernel Library (MKL) implementation of DFFT, and uses the Intel Cilk Plus framework for thread parallelism. We demonstrate the ability of Intel Cilk Plus to handle parallel recursion with nested loop-centric parallelism without tuning the code to the number of cores or cache metrics. The result of our work is a library called EFFT that performs 1D DFTs of size 2^N for N>=21 faster than the corresponding Intel MKL parallel DFT implementation by up to 1.5x, and faster than FFTW by up to 2.5x. The code of EFFT is available for free download under the GPLv3 license. This work provides a new efficient DFFT implementation, and at the same time demonstrates an educational example of how computer science problems with complex parallel patterns can be optimized for high performance using the Intel Cilk Plus framework.
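A toy sketch of the radix-2 Cooley-Tukey recursion that such implementations parallelize (illustrative Python only, not the EFFT code, which builds on MKL's single-threaded DFT and Cilk Plus; the function name and structure here are ours):

    import cmath

    def fft(x):
        # Radix-2 Cooley-Tukey FFT of a sequence whose length is a power of two.
        # The two recursive calls are independent subproblems, which is what a
        # task-parallel framework (Cilk Plus in EFFT) can run on separate threads.
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])  # independent recursive subproblem
        odd = fft(x[1::2])   # independent recursive subproblem
        out = [0j] * n
        for k in range(n // 2):
            t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + t
            out[k + n // 2] = even[k] - t
        return out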

at September 22, 2014 12:41 AM UTC

Authors: Dmitry Kosolobov
Download: PDF
Abstract: The complexity of computing the Lempel-Ziv factorization and the set of all runs (= maximal repetitions) is studied in the decision tree model of computation over ordered alphabet. It is known that both these problems can be solved by RAM algorithms in $O(n\log\sigma)$ time, where $n$ is the length of the input string and $\sigma$ is the number of distinct letters in it. We prove an $\Omega(n\log\sigma)$ lower bound on the number of comparisons required to construct the Lempel-Ziv factorization and thereby conclude that a popular technique of computation of runs using the Lempel-Ziv factorization cannot achieve an $o(n\log\sigma)$ time bound. In contrast with this, we exhibit an $O(n)$ decision tree algorithm finding all runs in a string. Therefore, in the decision tree model the runs problem is easier than the Lempel-Ziv factorization. Thus we support the conjecture that there is a linear RAM algorithm finding all runs.
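For readers unfamiliar with the object being computed, here is a naive quadratic-time sketch of one standard (self-referential) variant of the Lempel-Ziv factorization; the paper concerns $O(n\log\sigma)$ RAM algorithms and comparison lower bounds, not this toy version, and the function name is ours:

    def lz_factorization(s):
        # Each factor is either a fresh letter or the longest prefix of the
        # remaining suffix that also occurs starting at an earlier position
        # (overlaps allowed).  Quadratic time, for illustration only.
        factors = []
        i, n = 0, len(s)
        while i < n:
            length = 0
            for j in range(i):               # candidate earlier start positions
                l = 0
                while i + l < n and s[j + l] == s[i + l]:
                    l += 1
                length = max(length, l)
            if length == 0:
                factors.append(s[i])         # letter not seen before
                i += 1
            else:
                factors.append(s[i:i + length])
                i += length
        return factors

    print(lz_factorization("abab"))  # ['a', 'b', 'ab']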

at September 22, 2014 12:41 AM UTC

Authors: Leslie Ann Goldberg, Heng Guo
Download: PDF
Abstract: We study the complexity of approximating the Ising and Tutte partition functions with complex parameters. Our results are motivated by the study of the quantum complexity classes BQP and IQP. Recent results show how to encode quantum computations as partition functions. These results rely on interesting and deep results about quantum computation in order to obtain hardness results about the difficulty of (classically) evaluating the partition functions. The main motivation for this paper is to go the other way around. Partition functions are combinatorial in nature and classifying the difficulty of approximating these partition functions does not require a detailed understanding of quantum computation. Using combinatorial arguments, we give a full classification of the complexity of multiplicatively approximating the norm of the partition function for complex edge interactions. We also classify the complexity of additively approximating the argument of the partition function (and of approximating the partition function according to a natural complex metric). Using our classifications, we then revisit the connections to quantum computation, drawing conclusions that are different from (and incomparable to) the results in the quantum complexity literature. We show that strong simulation of IQP within any constant factor is #P-hard, even for the restricted class of circuits studied by Bremner et al. We also show that computing the sign of the Tutte polynomial is #P-hard at a certain point related to the simulation of BQP. Finally, motivated by further BQP-hardness results, we study the problem of approximating the norm of the Ising partition function in the presence of external fields. We give a complete dichotomy for the case in which the parameters are roots of unity. Previous results were known just for a few such points, and we strengthen these results from BQP-hardness to #P-hardness.
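For reference, one common convention (among equivalent variants) writes the Ising partition function with a single complex edge interaction $y$ as $$Z_{\mathrm{Ising}}(G;y)\;=\;\sum_{\sigma:V\to\{0,1\}}\;\prod_{\{u,v\}\in E} y^{\,[\sigma(u)=\sigma(v)]},$$ where $[\cdot]$ equals 1 when the two endpoints agree and 0 otherwise; the classification in the paper is in terms of where the complex edge interaction lies.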

at September 22, 2014 12:40 AM UTC

Authors: Takuro Fukunaga
Download: PDF
Abstract: We consider a network design problem called the generalized terminal backup problem. Whereas earlier work investigated the edge-connectivity constraints only, we consider both edge- and node-connectivity constraints for this problem. A major contribution of this paper is the development of a strongly polynomial-time 4/3-approximation algorithm for the problem. Specifically, we show that a linear programming relaxation of the problem is half-integral, and that the half-integral optimal solution can be rounded to a 4/3-approximate solution. We also prove that the linear programming relaxation of the problem with the edge-connectivity constraints is equivalent to minimizing the cost of half-integral multiflows that satisfy flow demands given from terminals. This observation presents a strongly polynomial-time algorithm for computing a minimum cost half-integral multiflow under flow demand constraints.

at September 22, 2014 12:41 AM UTC

Authors: Liangyue Li, Hanghang Tong, Nan Cao, Kate Ehrlich, Yu-Ru Lin, Norbou Buchler
Download: PDF
Abstract: In this paper, we study the problem of Team Member Replacement: given a team of people embedded in a social network working on the same task, find a good candidate who can fit in the team after one team member becomes unavailable. We conjecture that a good team member replacement should have good skill matching as well as good structure matching. We formulate this problem using the concept of graph kernel. To tackle the computational challenges, we propose a family of fast algorithms by (a) designing effective pruning strategies, and (b) exploring the smoothness between the existing and the new team structures. We conduct extensive experimental evaluations on real world datasets to demonstrate the effectiveness and efficiency. Our algorithms (a) perform significantly better than the alternative choices in terms of both precision and recall; and (b) scale sub-linearly.

at September 22, 2014 12:41 AM UTC

Authors: Ahmad Biniaz, Anil Maheshwari, Michiel Smid
Download: PDF
Abstract: We consider an extension of the triangular-distance Delaunay graphs (TD-Delaunay) on a set $P$ of points in the plane. In TD-Delaunay, the convex distance is defined by a fixed-oriented equilateral triangle $\triangledown$, and there is an edge between two points in $P$ if and only if there is an empty homothet of $\triangledown$ having the two points on its boundary. We consider higher-order triangular-distance Delaunay graphs, namely $k$-TD, which contains an edge between two points if the interior of the homothet of $\triangledown$ having the two points on its boundary contains at most $k$ points of $P$. We consider the connectivity, Hamiltonicity and perfect-matching admissibility of $k$-TD. Finally we consider the problem of blocking the edges of $k$-TD.

at September 22, 2014 12:41 AM UTC

Authors: Michael J. Bannister, William E. Devanny, Michael T. Goodrich, Joseph A. Simons, Lowell Trott
Download: PDF
Abstract: We study geometric data structures for sets of point-based temporal events, answering time-windowed queries, i.e., given a contiguous time interval we answer common geometric queries about the point events with time stamps in this interval. The geometric queries we consider include queries based on the skyline, convex hull, and proximity relations of the point set. We provide space efficient data structures which answer queries in polylogarithmic time.

at September 22, 2014 12:41 AM UTC

Authors: Guy Bresler, David Gamarnik, Devavrat Shah
Download: PDF
Abstract: We consider the problem of learning the canonical parameters specifying an undirected graphical model (Markov random field) from the mean parameters. For graphical models representing a minimal exponential family, the canonical parameters are uniquely determined by the mean parameters, so the problem is feasible in principle. The goal of this paper is to investigate the computational feasibility of this statistical task. Our main result shows that parameter estimation is in general intractable: no algorithm can learn the canonical parameters of a generic pair-wise binary graphical model from the mean parameters in time bounded by a polynomial in the number of variables (unless RP = NP). Indeed, such a result has been believed to be true (see the monograph by Wainwright and Jordan (2008)) but no proof was known.

Our proof gives a polynomial time reduction from approximating the partition function of the hard-core model, known to be hard, to learning approximate parameters. Our reduction entails showing that the marginal polytope boundary has an inherent repulsive property, which validates an optimization procedure over the polytope that does not use any knowledge of its structure (as required by the ellipsoid method and others).
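For concreteness, in standard exponential-family notation (not specific to this paper), a pairwise binary graphical model has the form $$p_\theta(x)\;\propto\;\exp\Big(\sum_i \theta_i x_i + \sum_{(i,j)\in E}\theta_{ij}x_i x_j\Big),\qquad x\in\{0,1\}^n,$$ with canonical parameters $\theta$ and mean parameters $\mu_i=\mathbb{E}_\theta[x_i]$, $\mu_{ij}=\mathbb{E}_\theta[x_i x_j]$; the hardness result concerns inverting the map $\theta\mapsto\mu$.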

at September 22, 2014 12:40 AM UTC

After my lectures in the “boot camp” of the spectral graph theory program at the Simons Institute, I promised I would post some references, because I stated all results without attribution.

Here is a first draft.

If you notice that work that you know of (for example, your work) is misrepresented or absent, please let me know and I will edit the document. (If possible, when you do so, do not compare me to Stalin and cc your message to half a dozen prominent people — true story.)


by luca at September 21, 2014 03:12 AM UTC

Joe Malkevitch recently asked me: which polycubes have planar graphs?

By a polycube, I mean a set of unit cubes in the three-dimensional integer lattice whose dual graph (with vertices for cubes and edges for cubes that share a square with each other) is connected; Malkevitch had a more restrictive definition in mind in which the boundary of the union of the cubes is a connected manifold, but that turns out not to make a difference for this problem. The graph of a polycube is formed by the vertices and edges of the cubes in this set.

The answer turns out to be: the polycubes that have a tree-like structure that can be formed by adding cubes one at a time, at each step adding a cube that touches the previous cubes in exactly one of its squares. One direction is easy: if you add cubes in this way, you end up changing the graph by replacing a quadrilateral face by five quadrilaterals, which preserves planarity.



In the other direction, suppose you have a polycube that cannot be constructed in this way, and build it up by adding cubes one at a time in a preorder traversal of a spanning tree of the dual graph. At some point, you will add a cube c that is adjacent not only to its parent p in the tree, but to some other part of the already-constructed polycube. If you contract the parts of c and p that are not on their shared square, and contract the path in the polycube connecting those parts, you can get a $K_{3,3}$ graph minor out of the graph of the polycube, showing that it must be nonplanar.
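A small sketch that checks the condition in the characterization for a given building order of cubes (illustrative code, not part of the original argument; cubes are integer triples and the function names are ours):

    def shared_faces(cube, others):
        # Number of unit squares the cube at integer position `cube` shares
        # with the set `others` (each face-adjacent cube contributes one square).
        x, y, z = cube
        neighbors = [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                     (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)]
        return sum(1 for nb in neighbors if nb in others)

    def is_valid_building_order(order):
        # Each cube after the first must touch the previously placed cubes in
        # exactly one of its squares -- the condition that keeps the graph planar.
        placed = set()
        for k, cube in enumerate(order):
            if k > 0 and shared_faces(cube, placed) != 1:
                return False
            placed.add(cube)
        return True

    # Example: an L-shaped tromino of three cubes, added one at a time.
    print(is_valid_building_order([(0, 0, 0), (1, 0, 0), (1, 1, 0)]))  # True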



The same question is also interesting when you ask about the graph of the boundary of the polycube, but it has a simpler answer: if the boundary is connected, then it is planar if and only if the polycube is a topological ball.

at September 21, 2014 01:05 AM UTC

I am still in shock at the news that Microsoft decided to shut down MSR SVC and fire, literally from one day to the next, almost all the scientific staff.

It is shocking that the company decided that it was a good idea to destroy an investment they had made over several years; I am only familiar with the theory side of MSR SVC, which had great success in nurturing young talent and developing breakthrough ideas and results. The work on privacy led by Cynthia Dwork, for example, informed the thinking on privacy of higher management, and on how one could find new ways to balance European privacy law with the data collection necessary for advertising. This is one of the returns of having academic research in a company: not so that they can implement their algorithms in C++, but so that they can generate and communicate new ways of thinking about problems. (Cynthia is one of the few people retained by Microsoft, but she has lost all her collaborators.) Microsoft's loss will be other places' win as the MSR SVC diaspora settles down elsewhere.

It is also shocking that, instead of planning an orderly shutdown, they simply threw people out overnight, which shows a fundamental indifference to the way the academic job market works (it can take a full year for an academic job offer to materialize).

I am not shocked by the class demonstrated by Omer Reingold; the moral stature of people is best seen in difficult moments. Omer has written a beautiful post about the lab, whose comment section has become a memorial to the lab, with people posting their personal remembrances.

Here at Berkeley and Stanford we will do our best to help, and we will make sure that everybody has some space to work. There will also be some kind of community-wide response, but it will take some time to figure out what we can do. Meanwhile, I urge everybody to reach out to their friends formerly at MSR SVC, make them feel the love of their community, and connect them to opportunities for both short term and long term positions, as they weigh their options.


by luca at September 20, 2014 08:36 PM UTC

ACSL (ANSI/ISO C Specification Language) is a specification language for C code: the code is annotated with special comments, which allows it to be formally verified.

I have not looked into it, but I imagine that the formal methods used in ACSL verifiers would be similar to Hoare Logic. For pure functional languages though, such as Haskell, I can't imagine what sort of formalism would be used for formal verification.

Has anyone made something similar to ACSL, but for a pure functional language? If not, has there been any research into specification-annotation-style formal verification for functional languages?

I know that there's dependent typing, which many languages (Agda, Idris, etc...) support, but in Haskell dependent typing is difficult without doing some (unreadable?) type-wizardry. With that in mind, and since Haskell has so much better library support than Agda and Idris, I believe such a system for functional formal verification might be useful, but I don't know if research has been done on this or not.

by Sintrastes at September 19, 2014 12:25 PM UTC

Today, I choose to remember the five amazing years I spent in MSR-SV Labs (which are unfortunately closing). In a place with no borders between research areas, I was free to follow my intellectual curiosity with colleagues I wouldn't normally have the great fortune of working with. My non-theory colleagues have left me a much more complete computer scientist than I have ever been.

My theory colleagues left me in absolute awe! Being surrounded by the creativity and brilliance of a unique collection of young scientists was such a rush. I initiated Windows on Theory because I thought that this rush must be shared with everyone. I hope that the readers of this blog got a glimpse of the breadth and depth of my theory colleagues. I am confident that they will make many departments and research groups much better in the following months and years. My only regret is every free minute I didn’t spend learning from these wonderful colleagues and friends.

My email for now will be omer.reingold@gmail.com, so drop me a line.


by Omer Reingold at September 19, 2014 10:12 AM UTC

(If I mispell anything in this post, well, that's why I'm not a MacArthur Genius, or any other kind of Genius.)

Craig Gentry, of homomorphic encryption fame, won a 2014 MacArthur Genius award. Here is an article about it and a description of his work, which is understandable to most non-geniuses. It is great work and I am glad to see the work and him honored.

Have other computer scientists won it? Yes. Have other CS theorists won it? Yes. Here is a list, though it may be incomplete: Peter Shor (1999), Erik Demaine (2003), Jon Kleinberg (2005), Daniel Spielman (2012).

Jacob Lurie, who does very abstract mathematics, also won a 2014 MacArthur genius award. He won the Breakthrough Prize earlier (3 million dollars) and the MacArthur (650,000), so he owes me 3650 lunches (I mentored him in HS and hence get a free lunch for every 1000 he wins). Here is an article about it and a description of his work, which is not understandable even to most math geniuses. It's not the article's fault: it's just very hard to describe very hard and deep mathematics unless you can relate it to a very concrete thing like crypto (or other applications) or some things in number theory, where you can at least tell someone the statement of Fermat's Last Theorem. I am sure it's great work: he seems to be generalizing math to an unprecedented degree. Note that the generality does pay off in solving real problems, for example this paper.

Yitang Zhang, who proved that there is a constant c such that there are infinitely many pairs of primes at most c apart (he proved c ≤ 70,000,000, but it has since been brought down to 246 - see here), also won a 2014 MacArthur genius award. While the proof is hard, the result can be explained to anyone. See here for an article about his prize and his result.
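In symbols, writing $p_n$ for the $n$-th prime, the theorem (with the later Polymath improvement of the constant) reads $$\liminf_{n\to\infty}\,(p_{n+1}-p_n)\ \le\ 246,$$ where Zhang's original bound on the right-hand side was $7\times 10^7$.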

Have other mathematicians won it? Yes, around 31 in total, including the two this year. Two that I will note: Terry Tao and Andrew Wiles.

by GASARCH (noreply@blogger.com) at September 19, 2014 04:12 AM UTC

Authors: Jalaj Upadhyay
Download: PDF
Abstract: The focus of this paper is on differential privacy of streaming data using sketch-based algorithms. Previous works, like Dwork et al. (ICS 2010, STOC 2010), explored random-sampling-based streaming algorithms. We work in the well-studied streaming model of computation, where the database is stored in the form of a matrix and a curator can access the database row-wise or column-wise.

Dwork et al. (STOC 2010) gave an impossibility result for any non-trivial query on streamed data with respect to user-level privacy. Therefore, in this paper, we restrict our attention to event-level privacy. We provide differentially private mechanisms in the streaming model that are optimal in space, up to logarithmic factors, for three basic linear algebraic tasks: matrix multiplication, linear regression, and low-rank approximation, while incurring significantly less additive error.

Our approach for matrix multiplication and linear regression has some similarities with Blocki et al. (FOCS 2012) and Upadhyay (ASIACRYPT 2013) on a superficial level, but there are some subtle differences. For example, they perform an affine transformation to convert the private matrix into a set of $\{\sqrt{w/n},1\}^n$ vectors for some appropriate $w$, while we perform an input perturbation that raises the singular values of the private matrix. On a high level, the mechanisms for linear regression and matrix multiplication can be seen as private analogues of the known streaming algorithms. In order to get a streaming algorithm for low-rank approximation, we have to reuse the random Gaussian matrix in a specific way. We prove that the resulting distribution also preserves differential privacy.

at September 19, 2014 12:40 AM UTC

Authors: Mohammed El-Kebir, Gunnar W. Klau
Download: PDF
Abstract: Given an undirected node-weighted graph, the Maximum-Weight Connected Subgraph problem (MWCS) is to identify a subset of nodes of maximal sum of weights that induce a connected subgraph. MWCS is closely related to the well-studied Prize-Collecting Steiner Tree problem and has many applications in different areas, including computational biology, network design and computer vision. The problem is NP-hard and even hard to approximate within a constant factor. In this work we describe an algorithmic scheme for solving MWCS to provable optimality, which is based on preprocessing rules, new results on decomposing an instance into its bi- and triconnected components and a branch-and-cut approach combined with a primal heuristic. We demonstrate the performance of our method on the benchmark instances of the 11th DIMACS implementation challenge consisting of MWCS as well as transformed PCST instances.

at September 19, 2014 12:40 AM UTC

Authors: David Gamarnik, Mathieu Hemery, Samuel Hetterich
Download: PDF
Abstract: We are going to analyze local algorithms over sparse random graphs. These algorithms are based on local information, where "local" refers to a decision made by exploring a small neighbourhood of a given vertex, together with a belief about the structure of the whole graph, and possibly some added randomness. Algorithms of this kind can be a natural response to the given problem or an efficient approximation, such as the Belief Propagation Algorithm.

at September 19, 2014 12:41 AM UTC

Authors: Hu Qin, Zizhen Zhang, Yubin Xie, Andrew Lim
Download: PDF
Abstract: This paper introduces a multi-period inspector scheduling problem (MPISP), which is a new variant of the multi-trip vehicle routing problem with time windows (VRPTW). In the MPISP, each inspector is scheduled to perform a route in a given multi-period planning horizon. At the end of each period, each inspector is not required to return to the depot but has to stay at one of the vertices for recuperation. If the remaining time of the current period is insufficient for an inspector to travel from his/her current vertex $A$ to a certain vertex $B$, he/she can choose either to wait at vertex $A$ until the start of the next period or to travel to a vertex $C$ that is closer to vertex $B$. Therefore, the shortest transit time between any vertex pair is affected by the length of the period and the departure time. We first describe an approach for computing the shortest transit time between any pair of vertices with an arbitrary departure time. To solve the MPISP, we then propose several local search operators adapted from classical operators for the VRPTW and integrate them into a tabu search framework. In addition, we present a constrained knapsack model that is able to produce an upper bound for the problem. Finally, we evaluate the effectiveness of our algorithm with extensive experiments based on a set of test instances. Our computational results indicate that our approach generates high-quality solutions.

at September 19, 2014 12:40 AM UTC

I'm one of those professor types that ends up defending the joys of life as an academic versus a career in industry -- some of you who read this blog have probably seen me comment at Matt Welsh's blog or Daniel Lemire's blog/Google+ chain.  And to be clear I'm not anti-industry, I just want to make sure there's a fair discussion.

In that vein, I'm sad to hear that Microsoft Research Silicon Valley will be closing, as described in this article.  I know many people at MSRSV, and have visited there often.  It's a loss to the community to have it close.  I have sympathy for those who work there -- this must be stressful -- but this sympathy is happily diminished because I know the people there are so talented, they will quickly move on to other employment.  (When Digital Systems Research Center closed long ago, a number of the people I worked with in the lab just moved on to some small company that seemed to be just starting to really get going, called Google.  I wonder how it worked out for them.) 

After a moment of silence for the lab, I do feel it necessary to point out that this is one of the "issues" in the academia vs industry life choice.  The life cycle for companies moves rapidly.  That's not a bad thing, but it's a thing.  Disruptions like this are a non-trivial risk in industry, much less so in academia.  (Just look at the history of research labs.)  Again, I'm sure it will work out for the people at MSRSV, but any life shift like this -- even if it ends positively -- is stressful.  Without wanting to overstate the consequences, it's worth pointing out.

by Michael Mitzenmacher (noreply@blogger.com) at September 18, 2014 10:48 PM UTC


Congratulating Dick on the 2014 Knuth Prize


Dick Lipton is of course the founder and driving writer of this weblog. He is also a computer scientist with a great record of pathbreaking research. The latter has just been recognized, I am delighted and proud to say, with the award of the 2014 Knuth Prize. The prize is awarded jointly by the ACM Special Interest Group on Algorithms and Computation Theory and the IEEE Technical Committee on the Mathematical Foundations of Computing, and was instituted in 1996, shortly after the formal retirement of the great—and very much active—Donald Knuth.

Today I am glad to give my congratulations in public, and also my thanks for a wonderful and long association.

Only in reading around while writing this post did I learn that Dick won another major honor earlier this year: election to the American Academy of Arts and Sciences. The end of April was crazy with personal and unusual professional matters for both of us, and it didn’t come up. In the 2014 list for CS he is alongside Jennifer Chayes, Ron Fagin, David Harel, Daphne Koller, Leslie Lamport, Leonid Levin, and Mendel Rosenblum.

The Knuth Prize is “for outstanding contributions to the foundations of computer science.” The description also says it may be partly based on educational contributions such as textbooks and students, but “is not based on service work for the community, although service might be included in the citation for a winner if it is appropriate.” The ACM release does not mention this blog. So just for now I will imitate Basil Fawlty’s advice in a famous episode of the immortal 1970’s BBC TV sitcom “Fawlty Towers”:

Don’t mention the blog.

But I will mention foundations, since that is an aspect of the recognition that comes with a price but is a longstanding purpose of this—wait, don’t mention the…

Ideas and Foundations

The release includes some of Dick's major contributions, and some others have been noted on StackExchange. I can add the first work by Dick that I was aware of: two papers in 1979–80 with Rich DeMillo—also now at Georgia Tech—which were part of various efforts to answer questions about $\mathsf{P} \overset{?}{=} \mathsf{NP}$ via logic and model theory. There is also his involvement with time-space tradeoffs for $\mathsf{SAT}$, which represent the best lower bounds—well, in a broad sense, kind-of lower bounds—known for any major $\mathsf{NP}$-complete problem. Most recent and potentially important in my purview—but you might have to comb through his DBLP page to realize they're there—are papers on multivariable polynomials modulo composite numbers, which are at the frontier of circuit complexity lower bounds.

What generates all this is a powerful beacon of ideas that have had a photoelectric effect on many parts of computer science, more than a typical theoretician encounters. I should know—it has been my privilege to do radiographic processing for many more, even some that haven’t yet appeared on this—oops—anyway, more ideas than I can often keep up with. Not to mention that I’m also pursuing (and opening) some ideas of my own. The point is that these ideas are almost all aimed at foundations. The examples above are aimed at foundations of complexity theory. That goes double for some ideas that haven’t progressed well, at least not yet.

The Problem

The problem—with the problem and much more—is that the foundations of complexity are set in sturdy concrete. They basically haven't budged. Beams of ideas bounce off or just get absorbed. Major related parts of algorithms and crypto and other fields are likewise shielded. This is the polar opposite of the situation that Bill Thurston recounted about his field of manifold foliations in his famous essay, "On Proof and Progress in Mathematics."

The difficulty of direct progress has been so discouraging that through the theory applications grapevine I’ve heard whispers that one can phrase as, “Don’t mention foundations…” We tried to present aspects of this more positively in a recent post featuring Chayes.

However, if you aim a beam hot enough and keep it focused for long enough, events do happen. The question should not be whether it’s worth aiming the beam, but rather what’s needed to establish and maintain the intensity and focus. What we suspect is that just as in particle physics, a century on from when the likes of Joseph Thomson and Ernest Rutherford and Robert Millikan could work alone or with one associate or two, progress will require greater co-ordination of multiple researchers and shared focus on ideas than our field is yet accustomed to. Hence this…well, we can mention that various people we know and admire are trying to foster this in their own ways, not just proposed but some activated to a wonderful degree.

As I said, aiming at foundations has a price—and the payment is alas more evenly distributed among those who try than the rewards. That’s why it is good to remember that ever so often, in small as well as big, a prize comes with the price. The above, too, has an added price, but perhaps it will be ushered in with a new model for appreciating the credits.

Open Problems

The Knuth Prize has the unusual period of being awarded every 1-1/2 years officially. The last five awards are dated 2010, 2011, 2012, 2013, and now 2014. Can this seeming contradiction be resolved mathematically? Perhaps the "1-1/2" is governed by left-and-right approximands in the manner of John Conway's surreal numbers, which Knuth described in an engaging book. A left-bound of 1.25 would just barely allow it. However, I suspect that any such involvement by Knuth would have instead fixed the period at $2^9 = 512$ days, and then I don't see a solution…unless maybe arithmetic is inconsistent after all.

Of course we congratulate Dick without any inconsistency.


by KWRegan at September 18, 2014 03:18 PM UTC

Authors: Vincenzo Fioriti, Marta Chinnici
Download: PDF
Abstract: Identifying the nodes of small sub-graphs with no a priori information is a hard problem. In this work, we want to find each node of a sparse sub-graph embedded in both dynamic and static background graphs, of larger average degree. We show that exploiting the summability over several background realizations of the Estrada-Benzi communicability and the Krylov approximation of the matrix exponential, it is possible to recover the sub-graph with a fast algorithm with computational complexity O(N n). Relaxing the problem to complete sub-graphs, the same performance is obtained with a single background. The worst case complexity for the single background is O(n log(n)).
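A toy illustration of the quantity the method aggregates, using a dense matrix exponential in place of the paper's Krylov approximation (illustrative only; function names are ours):

    import numpy as np
    from scipy.linalg import expm

    def communicability_scores(adjacency_matrices):
        # Sum the diagonal of exp(A) (Estrada-Benzi subgraph communicability of
        # each node) over several background realizations; nodes of the embedded
        # sub-graph are expected to stand out in the aggregated scores.
        n = adjacency_matrices[0].shape[0]
        total = np.zeros(n)
        for A in adjacency_matrices:
            total += np.diag(expm(A))
        return total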

at September 18, 2014 12:41 AM UTC

Authors: Pratik Ghoshal, Meghana Nasre, Prajakta Nimbhorkar
Download: PDF
Abstract: Let $G = (A \cup P, E)$ be a bipartite graph, where $A$ denotes a set of agents, $P$ denotes a set of posts, and ranks on the edges denote preferences of the agents over posts. A matching $M$ in $G$ is rank-maximal if it matches the maximum number of applicants to their top-rank post; subject to this, the maximum number of applicants to their second-rank post; and so on.

In this paper, we develop a switching graph characterization of rank-maximal matchings, which is a useful tool that encodes all rank-maximal matchings in an instance. The characterization leads to simple and efficient algorithms for several interesting problems. In particular, we give an efficient algorithm to compute the set of rank-maximal pairs in an instance. We show that the problem of counting the number of rank-maximal matchings is #P-Complete and also give an FPRAS for the problem. Finally, we consider the problem of deciding whether a rank-maximal matching is popular among all the rank-maximal matchings in a given instance, and give an efficient algorithm for the problem.

at September 18, 2014 12:41 AM UTC

Authors: Hsien-Kuei Hwang, Alois Panholzer, Nicolas Rolin, Tsung-Hsi Tsai, Wei-Mei Chen
Download: PDF
Abstract: We give a detailed analysis of the cost used by the (1+1)-evolutionary algorithm. The problem has been approached in the evolutionary algorithm literature from various viewpoints and formulations, and with varying degrees of rigor. Our asymptotic approximations for the mean and the variance represent the strongest of their kind. The approach we develop is also applicable to characterizing the limit laws and is based on an asymptotic resolution of the underlying recurrence. While most of the approximations have a simple formal nature, we elaborate on the delicate error analysis required for rigorous justifications.
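For readers unfamiliar with the algorithm being analyzed, here is a minimal simulation of the (1+1)-EA on the standard OneMax benchmark (illustrative only; the paper's contribution is the asymptotic analysis, not a simulation, and the function name is ours):

    import random

    def one_plus_one_ea_onemax(n):
        # (1+1)-EA on OneMax: start from a random bit string, flip each bit
        # independently with probability 1/n, accept the offspring if its number
        # of ones does not decrease; return the iterations to reach all ones.
        # (The expected cost is Theta(n log n); the paper gives sharper asymptotics.)
        x = [random.randint(0, 1) for _ in range(n)]
        steps = 0
        while sum(x) < n:
            y = [b ^ (random.random() < 1.0 / n) for b in x]
            if sum(y) >= sum(x):
                x = y
            steps += 1
        return steps

    print(one_plus_one_ea_onemax(100))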

at September 18, 2014 12:41 AM UTC

Authors: Prachi Goyal, Pranabendu Misra, Fahad Panolan, Geevarghese Philip, Saket Saurabh
Download: PDF
Abstract: Problems of the following kind have been the focus of much recent research in the realm of parameterized complexity: Given an input graph (digraph) on $n$ vertices and a positive integer parameter $k$, find if there exist $k$ edges (arcs) whose deletion results in a graph that satisfies some specified parity constraints. In particular, when the objective is to obtain a connected graph in which all the vertices have even degrees---where the resulting graph is \emph{Eulerian}---the problem is called Undirected Eulerian Edge Deletion. The corresponding problem in digraphs where the resulting graph should be strongly connected and every vertex should have the same in-degree as its out-degree is called Directed Eulerian Edge Deletion. Cygan et al. [\emph{Algorithmica, 2014}] showed that these problems are fixed parameter tractable (FPT), and gave algorithms with the running time $2^{O(k \log k)}n^{O(1)}$. They also asked, as an open problem, whether there exist FPT algorithms which solve these problems in time $2^{O(k)}n^{O(1)}$. In this paper we answer their question in the affirmative: using the technique of computing \emph{representative families of co-graphic matroids} we design algorithms which solve these problems in time $2^{O(k)}n^{O(1)}$. The crucial insight we bring to these problems is to view the solution as an independent set of a co-graphic matroid. We believe that this view-point/approach will be useful in other problems where one of the constraints that need to be satisfied is that of connectivity.

at September 18, 2014 12:41 AM UTC

Authors: Jiecao Chen, Qin Zhang
Download: PDF
Abstract: Modern data management systems often need to deal with massive, dynamic and inherently distributed data sources: we collect the data using a distributed network, and at the same time try to maintain a global view of the data at a central coordinator. Such applications have been captured by the distributed monitoring model, which has attracted a lot of attention recently in both theory and database communities. However, all proposed algorithms in distributed monitoring with provable guarantees are ad-hoc in nature, each being designed for a specific problem. In this paper we propose the first generic algorithmic approach, by adapting the celebrated AMS-sampling framework from the streaming model to distributed monitoring. We also show how to use this framework to monitor entropy functions. Our results significantly improve the previous best results by Arackaparambil et al. [2] for entropy monitoring.
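A minimal sketch of the AMS-sampling estimator in the plain (single-stream) setting; adapting this framework to distributed monitoring is the paper's contribution, and the function names here are ours:

    import random

    def ams_estimate(stream, g):
        # One AMS-sampling estimate of sum_i g(f_i), where f_i is the frequency
        # of item i and g(0) = 0: sample a uniform position, count how often the
        # sampled item re-occurs from that position onward (r), and return
        # m * (g(r) - g(r-1)).  The estimate is unbiased; averaging independent
        # copies reduces the variance.
        m = len(stream)
        j = random.randrange(m)
        a = stream[j]
        r = sum(1 for x in stream[j:] if x == a)
        return m * (g(r) - g(r - 1))

    # Example: estimate the second frequency moment F2 = sum_i f_i^2.
    stream = ['a', 'b', 'a', 'c', 'a', 'b']
    estimates = [ams_estimate(stream, lambda f: f * f) for _ in range(10000)]
    print(sum(estimates) / len(estimates))   # close to 9 + 4 + 1 = 14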

at September 18, 2014 12:42 AM UTC

Authors: Ho-Lin Chen, David Doty, Dhiraj Holden, Chris Thachuk, Damien Woods, Chun-Tao Yang
Download: PDF
Abstract: We study the power of uncontrolled random molecular movement in the nubot model of self-assembly. The nubot model is an asynchronous nondeterministic cellular automaton augmented with rigid-body movement rules (push/pull, deterministically and programmatically applied to specific monomers) and random agitations (nondeterministically applied to every monomer and direction with equal probability all of the time). Previous work on the nubot model showed how to build simple shapes such as lines and squares quickly---in expected time that is merely logarithmic in their size. These results crucially make use of the programmable rigid-body movement rule: the ability for a single monomer to control the movement of large objects quickly, and only at a time and place of the programmers' choosing. However, in engineered molecular systems, molecular motion is largely uncontrolled and fundamentally random. This raises the question of whether similar results can be achieved in a more restrictive, and perhaps easier to justify, model where uncontrolled random movements, or agitations, are happening throughout the self-assembly process and are the only form of rigid-body movement. We show that this is indeed the case: we give a polylogarithmic expected time construction for squares using agitation, and a sublinear expected time construction to build a line. Such results are impossible in an agitation-free (and movement-free) setting and thus show the benefits of exploiting uncontrolled random movement.

at September 18, 2014 12:00 AM UTC

Authors: Juan Bermejo-Vega, Cedric Yen-Yu Lin, Maarten Van den Nest
Download: PDF
Abstract: This work presents a precise connection between Clifford circuits, Shor's factoring algorithm and several other famous quantum algorithms with exponential quantum speed-ups for solving Abelian hidden subgroup problems. We show that all these different forms of quantum computation belong to a common new restricted model of quantum operations that we call \emph{black-box normalizer circuits}. To define these, we extend the previous model of normalizer circuits [arXiv:1201.4867v1,arXiv:1210.3637,arXiv:1409.3208], which are built of quantum Fourier transforms, group automorphism and quadratic phase gates associated to an Abelian group $G$. In previous works, the group $G$ is always given in an explicitly decomposed form. In our model, we remove this assumption and allow $G$ to be a black-box group. While standard normalizer circuits were shown to be efficiently classically simulable [arXiv:1201.4867v1,arXiv:1210.3637,arXiv:1409.3208], we find that normalizer circuits are powerful enough to factorize and solve classically-hard problems in the black-box setting. We further set upper limits to their computational power by showing that decomposing finite Abelian groups is complete for the associated complexity class. In particular, solving this problem renders black-box normalizer circuits efficiently classically simulable by exploiting the generalized stabilizer formalism in [arXiv:1201.4867v1,arXiv:1210.3637,arXiv:1409.3208]. Lastly, we employ our connection to draw a few practical implications for quantum algorithm design: namely, we give a no-go theorem for finding new quantum algorithms with black-box normalizer circuits, a universality result for low-depth normalizer circuits, and identify two other complete problems.

at September 18, 2014 12:40 AM UTC

Let us call a function $f(n)$ superpolynomial if $\lim_{n\rightarrow\infty} n^c/f(n)=0$ holds for every $c>0$.

It is clear that for any language $L\in {\mathsf P}$ it holds that $L\in {\mathsf {DTIME}}(f(n))$ for every superpolynomial time bound $f(n)$. I wonder whether the converse of this statement is also true. That is, if we know $L\in {\mathsf {DTIME}}(f(n))$ for every superpolynomial time bound $f(n)$, does it imply $L\in {\mathsf P}$? In other words, is it true that $${\mathsf P} = \bigcap_f {\mathsf {DTIME}}(f(n)),$$ where the intersection is taken over every superpolynomial $f(n)$?

by Andras Farago at September 17, 2014 10:50 PM UTC

(1) The new NSF Secure and Trustworthy Cyberspace (SaTC) solicitation has been released.  Note that Frontiers (up to $10M) have been replaced by Large (up to $3M) proposals.  The submission deadlines are:

  • Small: January 14, 2015
  • Medium: November 10, 2014
  • Large: November 20, 2014
  • Education: December 19, 2014

(2) NSF and the US-Israel Binational Science Foundation (BSF) have developed a collaboration arrangement whereby US researchers may receive funding from the NSF and collaborating Israeli researchers may receive funding from the BSF. Proposals may be submitted to the SaTC Small category, with the identical proposal submitted to BSF to support Israeli collaborators.  Proposals will be reviewed by NSF; those selected for funding will have separate agreements with NSF (for US researchers) and BSF (for Israeli researchers).

by salilvadhan at September 17, 2014 04:31 PM UTC

Here is the call for nominations for one of the EATCS awards that is closest to my heart. Do put pen to paper and nominate your favourite young TCS researcher! He/She might join a truly impressive list of previous award recipients.
 
Presburger Award for Young Scientists 2015

   Call for Nominations

   Deadline: December 31st, 2014

Starting in 2010, the European Association for Theoretical Computer Science (EATCS) established the Presburger Award. The Award is conferred annually at the International Colloquium on Automata, Languages and Programming (ICALP) to a young scientist (in exceptional cases to several young scientists) for outstanding contributions in theoretical computer science, documented by a published paper or a series of published papers. The Award is named after Mojzesz Presburger who accomplished his path-breaking work on decidability of the theory of addition (which today is called Presburger arithmetic) as a student in 1929.

Nominations for the Presburger Award can be submitted by any member or group of members of the theoretical computer science community except the nominee and his/her advisors for the master thesis and the doctoral dissertation. Nominated scientists have to be at most 35 years old at the time of the nomination deadline (i.e., for the Presburger Award of 2015 the date of birth should be in 1979 or later). The Presburger Award Committee of 2015 consists of Zoltan Esik (Szeged), Claire Mathieu (Paris), and Peter Widmayer (Zürich, chair).

Nominations, consisting of a two page justification and (links to) the respective papers, as well as additional supporting letters, should be sent by e-mail to:

   Peter Widmayer
   widmayer@inf.ethz.ch

The subject line of every nomination should start with Presburger Award 2015, and the message must be received before December 31st, 2014.

The award includes an amount of 1000 Euro and an invitation to ICALP 2015 for a lecture.

Previous Winners:
  • Mikołaj Bojanczyk, 2010
  • Patricia Bouyer-Decitre, 2011
  • Venkatesan Guruswami and Mihai Patrascu, 2012
  • Erik Demaine, 2013
  • David Woodruff, 2014
Official website: http://www.eatcs.org/index.php/presburger

by Luca Aceto (noreply@blogger.com) at September 17, 2014 10:05 AM UTC

My hearty congratulations to the MacArthur Fellowship for handing down the right decision and naming Craig Gentry its fellow, better known as a genius. What a truly deserving winner! As the readers of this blog know full well, Craig has done seminal work in cryptography – time and time again. In his prize-winning Ph.D. work in 2009 Craig achieved what many had considered to be impossible – fully-homomorphic encryption. In three short years he (with co-authors Sanjam Garg and Shai Halevi) proposed another object – a cryptographic multilinear map whose existence I'd been willing to bet against. Last year Craig (with several more co-authors) constructed an obfuscation mechanism with two amazing properties: it looks impossibly difficult to achieve and useless for any cryptographic applications. Both statements – you see the pattern here – are wrong. Indistinguishability obfuscation, as it has become known, quite plausibly exists and we are still in the process of grasping its true potential.

Congratulations!


by Ilya Mironov at September 17, 2014 08:03 AM UTC


Univ of Maryland at College Park is having a Theory Day on Friday, October 10.

Free Registration and Free Lunch! (there are no economists coming to tell us there is no such thing).

For information and registration, go here

A good way to learn lots of current theory in a short time.

Schedule:

8:30-9:00 Light Breakfast and Intro Remarks

9:00-9:20  Gasarch, UMCP
NIM with Cash

9:25-9:45  Mount, UMCP
A New Algorithm for Approximating the Euclidean Minimum Spanning Tree

9:50-10:10 Samir, UMCP:
To do or not to do: scheduling to minimize energy

10:20-11:00 Coffee Break

11:00-12:00 Distinguished Invited Speaker Avrim Blum, CMU
Reconstructing preferences and priorities from opaque transactions

12:00-1:00 Catered Lunch

1:00-2:00 Distinguished Invited Speaker Sanjeev Arora, Princeton
Overcoming the intractability bottleneck in unsupervised learning.

2:00-2:30 Coffee Break

2:30-2:50 Elaine Shi, UMCP
Circuit ORAM and Tightness of the Goldreich-Ostrovsky bound

2:55-3:15 David Harris, UMCP
The Moser-Tardos Framework with Partial Resampling

3:20-3:40 Mohammad Hajiaghayi, UMCP
Streaming Algorithms for Estimating the Matching Size in Planar Graphs and Beyond

3:45-4:05 Michael Dinitz, JHU
Explicit Expanding Expanders

4:10-5:00 Poster Session in 2120 (Grad Students)


by GASARCH (noreply@blogger.com) at September 17, 2014 04:48 AM UTC

Authors: Pritam Bhattacharya, Subir Kumar Ghosh, Bodhayan Roy
Download: PDF
Abstract: The art gallery problem enquires about the least number of guards that are sufficient to ensure that an art gallery, represented by a polygon $P$, is fully guarded. In 1998, the problems of finding the minimum number of point guards, vertex guards, and edge guards required to guard $P$ were shown to be APX-hard by Eidenbenz, Widmayer and Stamm. In 1987, Ghosh presented approximation algorithms for vertex guards and edge guards that achieved a ratio of $\mathcal{O}(\log n)$, which was improved up to $\mathcal{O}(\log\log OPT)$ by King and Kirkpatrick in 2011. It has been conjectured that constant-factor approximation algorithms exist for these problems. We settle the conjecture for the special class of polygons that are weakly visible from an edge and contain no holes by presenting a 6-approximation algorithm for finding the minimum number of vertex guards that runs in $\mathcal{O}(n^2)$ time. On the other hand, when holes are allowed, we show that the problem of vertex guarding a weak visibility polygon is APX-hard by constructing a reduction from the Set Cover problem.

at September 17, 2014 12:41 AM UTC

Authors: M. I. Schlesinger, E. V. Vodolazskiy, V. M. Yakovenko
Download: PDF
Abstract: The article analyzes similarity of closed polygonal curves in Frechet metric, which is stronger than the well-known Hausdorff metric and therefore is more appropriate in some applications. An algorithm that determines whether the Frechet distance between two closed polygonal curves with m and n vertices is less than a given number is described. The described algorithm takes O(mn) time whereas the previously known algorithms take O(mn log(mn)) time.
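For intuition, here is the textbook dynamic program for the simpler discrete Fréchet distance between two open polygonal chains (in the style of Eiter and Mannila); this is illustrative only and is not the paper's algorithm, which decides the continuous Fréchet distance for closed curves:

    from math import dist              # Python 3.8+
    from functools import lru_cache

    def discrete_frechet(P, Q):
        # O(mn) dynamic program for the discrete Frechet distance between the
        # point sequences P and Q (tuples of coordinate pairs).
        @lru_cache(maxsize=None)
        def c(i, j):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                return d
            if i == 0:
                return max(c(0, j - 1), d)
            if j == 0:
                return max(c(i - 1, 0), d)
            return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
        return c(len(P) - 1, len(Q) - 1)

    print(discrete_frechet(((0, 0), (1, 0), (2, 0)),
                           ((0, 1), (1, 1), (2, 1))))   # 1.0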

at September 17, 2014 12:40 AM UTC

Authors: Manuel Bodirsky, Michael Pinsker, András Pongrácz
Download: PDF
Abstract: It is known that a countable $\omega$-categorical structure interprets all finite structures primitively positively if and only if its polymorphism clone maps to the clone of projections on a two-element set via a continuous clone homomorphism. We investigate the relationship between the existence of a clone homomorphism to the projection clone, and the existence of such a homomorphism which is continuous and thus meets the above criterion.

at September 17, 2014 12:40 AM UTC

Authors: Emanuele Tron
Download: PDF
Abstract: In this note we prove that, if $S_n$ is the greatest area of a rectangle which can be covered with $n$ unit disks, then $2\leq S_n/n<3 \sqrt{3}/2$, and these are the best constants; moreover, for $\Delta(n):=(3\sqrt{3}/2)n-S_n$, we have $0.727384<\liminf\Delta(n)/\sqrt{n}<2.121321$ and $0.727384<\limsup\Delta(n)/\sqrt{n}<4.165064$.

at September 17, 2014 12:40 AM UTC

Authors: Erel Segal-Halevi, Avinatan Hassidim, Yonatan Aumann
Download: PDF
Abstract: We consider the problem of fairly dividing a two dimensional heterogeneous good among multiple players. Applications include division of land as well as ad space in print and electronic media. Classical cake cutting protocols primarily consider a one-dimensional resource, or allocate each player multiple infinitesimally small "pieces". In practice, however, the two dimensional \emph{shape} of the allotted piece is of crucial importance in many applications (e.g. squares or bounded aspect-ratio rectangles are most useful for building houses, as well as advertisements). We thus introduce and study the problem of fair two-dimensional division wherein the allotted plots must be of some restricted two-dimensional geometric shape(s). Adding this geometric constraint re-opens most questions and challenges related to cake-cutting. Indeed, even the elementary \emph{proportionality} fairness criteria can no longer be guaranteed in all cases. In this paper we thus examine the \emph{level} of proportionality that \emph{can} be guaranteed, providing both impossibility results (for proportionality that cannot be guaranteed), and algorithmic constructions (for proportionality that can be guaranteed). We focus primarily on the case when the cake is a rectilinear polygon and the allotted plots must be squares or bounded aspect-ratio rectangles.

at September 17, 2014 12:41 AM UTC

Proof systems for quantified Boolean formulas (QBFs) provide a theoretical underpinning for the performance of important QBF solvers. However, the proof complexity of these proof systems is currently not well understood, and in particular lower bound techniques are missing. In this paper we exhibit a new and elegant proof technique for showing lower bounds in QBF proof systems based on strategy extraction. This technique provides a direct transfer of circuit lower bounds to lower bounds on proof length. We use our method to show the hardness of a natural class of parity formulas for Q-resolution. Variants of the formulas are hard for even stronger systems such as long-distance and universal Q-resolution. With a completely different lower bound argument we show the hardness of the prominent formulas of Kleine Büning et al. for the strong expansion-based calculus IR-calc, thus also confirming the hardness of the formulas for Q-resolution. Our lower bounds imply new exponential separations between two different types of resolution-based QBF calculi: proof systems for DPLL-based solvers (Q-resolution, long-distance Q-resolution) and proof systems for expansion-based solvers ($\forall$Exp+Res and its generalizations IR-calc and IRM-calc). The relations between proof systems from the two different classes were not known before.

at September 16, 2014 03:31 PM UTC

Richard J. Lipton has been selected as the winner of the 2014 Knuth Prize "for Introduction of New Ideas and Techniques".

What are to your minds the main new ideas and techniques that Lipton developed?

Note. This question shall become community wiki; please put one such idea, technique, or result per answer.

by Bruno at September 16, 2014 08:47 AM UTC
