Theory of Computing Blog Aggregator

The NIPS experiment is making waves.  If you are unaware, for the last NIPS conference, the PC was broken into two independent halves A and B.  A random selection of the submissions was assigned to both committees.  The result: 57% of the papers that were accepted by committee A were rejected by committee B (and […]

The post The negative impacts of random conference decisions appeared first on Glencora Borradaile.

by Glencora Borradaile at December 18, 2014 04:23 PM UTC

The NIPS (machine learning) conference ran an interesting experiment this year. They had two separate and disjoint program committees with the submissions split between them. 10% (166) of the submissions were given to both committees. If either committee accepted one of those papers it was accepted to NIPS.

According to an analysis by Eric Price, of those 166, about 16 (about 10%) were accepted by both committees, 43 (26%) by exactly one of the committees, and 107 (64%) were rejected by both committees. Price notes that of the accepted papers, over half (57%) would not have been accepted with a different PC. On the flip side, 83% of the rejected papers would still be rejected.
More details of the experiment here.

No one who has ever served on a program committee should be surprised by these results. Nor is there anything really wrong or bad going on here. A PC will almost always accept the great papers and almost always reject the mediocre ones, but the papers in the middle are at a similar quality level and personal tastes come into play. There is no objective perfect ordering of the papers, and that's why we task a program committee to make those tough choices. The only completely fair committees would either accept all the papers or reject all the papers.

These results can lead to a false sense of self-worth. If your paper is accepted, you might think you had a great submission; more likely you had a good submission and got lucky. If your paper was rejected, you might think you had a good submission and were unlucky; more likely you had a mediocre paper that would never get in.

In the few days since NIPS announced these results, I've already seen people try to use them not only to trash program committees but also to attack many other kinds of subjective decision making. In the end we have to make choices on who to hire, who to promote and who to give grants. We need to make subjective decisions, and those made by our peers aren't always consistent, but they work much better than the alternatives. Even the machine learning conference doesn't use machine learning to choose which papers to accept.

by Lance Fortnow (noreply@blogger.com) at December 18, 2014 12:44 PM UTC

Authors: René van Bevern, Christian Komusiewicz, Rolf Niedermeier, Manuel Sorge, Toby Walsh
Download: PDF
Abstract: Google Scholar allows merging multiple article versions into one. This merging affects the H-index computed by Google Scholar. We analyze the parameterized complexity of maximizing the H-index using article merges. Herein, multiple possible measures for computing the citation count of a merged article are considered. Among others, for the measure used by Google Scholar, we give an algorithm that maximizes the H-index in linear time if there is only a constant number of versions of the same article. In contrast, if we are allowed to merge arbitrary articles, then already increasing the H-index by one is NP-hard.
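
For concreteness, here is a minimal sketch (in Python; my own illustration, not the paper's algorithm) of how the H-index is computed from a list of citation counts, together with a toy merge that raises it. Summing the citation counts of the merged versions is used here purely for illustration; it is only one of the merge measures the abstract considers.

    def h_index(citations):
        """Largest h such that at least h articles have >= h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Toy example: merging the last two versions (3 and 2 citations) into one
    # article with 5 citations raises the H-index from 4 to 5.
    print(h_index([5, 5, 5, 5, 3, 2]))      # 4
    print(h_index([5, 5, 5, 5, 3 + 2]))     # 5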

at December 18, 2014 01:41 AM UTC

Authors: Sheela Devadas, Ronitt Rubinfeld
Download: PDF
Abstract: We present simple, self-contained proofs of correctness for algorithms for linearity testing and program checking of linear functions on finite subsets of integers represented as n-bit numbers. In addition we explore two generalizations of self-testing to multiple variables - the case of multilinear functions and homomorphisms on a multidimensional vector space.

We show that our self-testing algorithm for the univariate case can be directly generalized to vector space domains. The number of queries made by our algorithms is independent of domain size. However, linearity testing for multilinear functions requires a different testing algorithm. We give an algorithm for the k-linearity problem with queries independent of the size of the domain.
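
The linearity test in question follows the classical Blum-Luby-Rubinfeld query pattern: pick random x and y and check f(x) + f(y) = f(x + y). The sketch below (my own Python illustration, ignoring the finite-domain and n-bit-representation subtleties the paper actually addresses) just shows that query pattern.

    import random

    def blr_style_test(f, bound, trials=100):
        """Pick random x, y and check f(x) + f(y) == f(x + y); reject on any
        violation.  The paper's contribution is the analysis of how few such
        queries suffice, not the query pattern itself."""
        for _ in range(trials):
            x = random.randrange(bound)
            y = random.randrange(bound)
            if f(x) + f(y) != f(x + y):
                return False        # a witness that f is not linear
        return True                 # consistent with linearity on all samples

    print(blr_style_test(lambda t: 7 * t, 1 << 16))                 # True
    print(blr_style_test(lambda t: 7 * t + (t % 5 == 0), 1 << 16))  # almost surely False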

at December 18, 2014 12:00 AM UTC

Authors: Tobias Brunsch, Anna Großwendt, Heiko Röglin
Download: PDF
Abstract: We show that the shadow vertex simplex algorithm can be used to solve linear programs in strongly polynomial time with respect to the number $n$ of variables, the number $m$ of constraints, and $1/\delta$, where $\delta$ is a parameter that measures the flatness of the vertices of the polyhedron. This extends our recent result that the shadow vertex algorithm finds paths of polynomial length (w.r.t. $n$, $m$, and $1/\delta$) between two given vertices of a polyhedron.

Our result also complements a recent result due to Eisenbrand and Vempala who have shown that a certain version of the random edge pivot rule solves linear programs with a running time that is strongly polynomial in the number of variables $n$ and $1/\delta$, but independent of the number $m$ of constraints. Even though the running time of our algorithm depends on $m$, it is significantly faster for the important special case of totally unimodular linear programs, for which $1/\delta\le n$ and which have only $O(n^2)$ constraints.

at December 18, 2014 01:41 AM UTC

Authors: Daniel Bundala, Michael Codish, Luís Cruz-Filipe, Peter Schneider-Kamp, Jakub Závodný
Download: PDF
Abstract: We solve a 40-year-old open problem on the depth optimality of sorting networks. In 1973, Donald E. Knuth detailed, in Volume 3 of "The Art of Computer Programming", sorting networks of the smallest depth known at the time for n ≤ 16 inputs, quoting optimality for n ≤ 8. In 1989, Parberry proved the optimality of the networks with 9 ≤ n ≤ 10 inputs. In this article, we present a general technique for obtaining such optimality results, and use it to prove the optimality of the remaining open cases of 11 ≤ n ≤ 16 inputs. We show how to exploit symmetry to construct a small set of two-layer networks on n inputs such that if there is a sorting network on n inputs of a given depth, then there is one whose first layers are in this set. For each network in the resulting set, we construct a propositional formula whose satisfiability is necessary for the existence of a sorting network of a given depth. Using an off-the-shelf SAT solver we show that the sorting networks listed by Knuth are optimal. For n ≤ 10 inputs, our algorithm is orders of magnitude faster than the prior ones.
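
For readers who want to experiment with small cases, here is a brute-force checker (my own illustration, not the authors' SAT-based method) that verifies a comparator network using the zero-one principle on which such Boolean encodings typically rely: a network on n wires sorts every input if and only if it sorts every 0/1 input.

    from itertools import product

    def sorts_all_inputs(n, layers):
        """Check a comparator network by the zero-one principle.  `layers` is
        a list of layers, each a list of comparators (i, j) with i < j."""
        for bits in product((0, 1), repeat=n):
            w = list(bits)
            for layer in layers:
                for i, j in layer:
                    if w[i] > w[j]:
                        w[i], w[j] = w[j], w[i]
            if any(w[k] > w[k + 1] for k in range(n - 1)):
                return False
        return True

    # A depth-3 sorting network on 4 wires (depth 3 is optimal for n = 4).
    net4 = [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(1, 2)]]
    print(sorts_all_inputs(4, net4))   # True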

at December 18, 2014 01:40 AM UTC

Authors: Catherine Greenhill
Download: PDF
Abstract: The problem of efficiently sampling from a set of (undirected) graphs with a given degree sequence has many applications. One approach to this problem uses a simple Markov chain, which we call the switch chain, to perform the sampling. The switch chain is known to be rapidly mixing for regular degree sequences. We prove that the switch chain is rapidly mixing for any degree sequence with minimum degree at least 1 and with maximum degree $d_{\max}$ which satisfies $3\leq d_{\max}\leq \frac{1}{4}\, \sqrt{M}$, where $M$ is the sum of the degrees. The mixing time bound obtained is only an order of $n$ larger than that established in the regular case, where $n$ is the number of vertices.
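
For intuition, here is a sketch (my own Python illustration, on a made-up five-edge graph) of a single proposed move of the switch chain; the content of the paper is the mixing-time analysis, not the move itself.

    import random

    def switch_step(edges):
        """One proposed move of the switch chain on a simple undirected graph,
        represented as a set of frozenset({u, v}) edges.  Pick two distinct
        edges uniformly at random and try to rewire them; every vertex keeps
        its degree.  The move is rejected (graph returned unchanged) if it
        would create a loop or a multi-edge."""
        (a, b), (c, d) = (tuple(e) for e in random.sample(list(edges), 2))
        if random.random() < 0.5:        # choose one of the two possible rewirings
            c, d = d, c
        if len({a, b, c, d}) < 4:
            return edges                 # edges share an endpoint: reject
        new1, new2 = frozenset({a, c}), frozenset({b, d})
        if new1 in edges or new2 in edges:
            return edges                 # proposed edge already present: reject
        return (edges - {frozenset({a, b}), frozenset({c, d})}) | {new1, new2}

    # Example: run the chain on a small graph; the degree sequence never changes.
    G = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]}
    for _ in range(1000):
        G = switch_step(G)
    print(sorted(tuple(sorted(e)) for e in G))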

at December 18, 2014 01:41 AM UTC

Authors: Esther Ezra
Download: PDF
Abstract: We refine the bound on the packing number, originally shown by Haussler, for shallow geometric set systems. Specifically, let $\mathcal{V}$ be a finite set system defined over an $n$-point set $X$; we view $\mathcal{V}$ as a set of indicator vectors over the $n$-dimensional unit cube. A $\delta$-separated set of $\mathcal{V}$ is a subcollection $\mathcal{W}$, s.t. the Hamming distance between each pair $u, v \in \mathcal{W}$ is greater than $\delta$, where $\delta > 0$ is an integer parameter. The $\delta$-packing number is then defined as the cardinality of the largest $\delta$-separated subcollection of $\mathcal{V}$. Haussler showed an asymptotically tight bound of $\Theta((n/\delta)^d)$ on the $\delta$-packing number if $\mathcal{V}$ has VC-dimension (or \emph{primal shatter dimension}) $d$. We refine this bound for the scenario where, for any subset $X' \subseteq X$ of size $m \le n$ and for any parameter $1 \le k \le m$, the number of vectors of length at most $k$ in the restriction of $\mathcal{V}$ to $X'$ is only $O(m^{d_1} k^{d-d_1})$, for a fixed integer $d > 0$ and a real parameter $1 \le d_1 \le d$ (this generalizes the standard notion of \emph{bounded primal shatter dimension} when $d_1 = d$). In this case, when $\mathcal{V}$ is "$k$-shallow" (all vector lengths are at most $k$), we show that its $\delta$-packing number is $O(n^{d_1} k^{d-d_1}/\delta^d)$, matching Haussler's bound for the special cases where $d_1=d$ or $k=n$. As an immediate consequence we conclude that set systems of halfspaces, balls, and parallel slabs defined over $n$ points in $d$-space admit better packing numbers when $k$ is smaller than $n$. Last but not least, we describe applications to (i) spanning trees of low total crossing number, and (ii) geometric discrepancy, based on previous work by the author.

at December 18, 2014 01:41 AM UTC

Authors: Mikkel Abrahamsen
Download: PDF
Abstract: We describe new methods for the construction of spiral toolpaths for high-speed machining. In the simplest case, our method takes a polygon as input and a number $\delta>0$ and returns a spiral starting at a central point in the polygon, going around towards the boundary while morphing to the shape of the polygon. The spiral consists of linear segments and circular arcs, it is $G^1$ continuous, it has no self-intersections, and the distance from each point on the spiral to each of the neighboring revolutions is at most $\delta$. Our method has the advantage over previously described methods that it is easily adjustable to the case where there is an island in the polygon to be avoided by the spiral. In that case, the spiral starts at the island and morphs the island to the outer boundary of the polygon. It is shown how to apply that method to make significantly shorter spirals in polygons with no islands. Finally, we show how to make a spiral in a polygon with multiple islands by connecting the islands into one island.

at December 18, 2014 01:41 AM UTC

Authors: Kunal Dutta, Arijit Ghosh
Download: PDF
Abstract: We prove a size-sensitive version of Haussler's Packing lemma~\cite{Haussler92spherepacking} for set-systems with bounded primal shatter dimension, which have an additional {\em size-sensitive property}. This answers a question asked by Ezra~\cite{Ezra-sizesendisc-soda-14}. We also partially address another point raised by Ezra regarding overcounting of sets in her chaining procedure. As a consequence of these improvements, we get an improvement on the size-sensitive discrepancy bounds for set systems with the above property. Improved bounds on the discrepancy for these special set systems also imply an improvement in the sizes of {\em relative $(\varepsilon, \delta)$-approximations} and $(\nu, \alpha)$-samples.

at December 18, 2014 01:41 AM UTC

I am happy to inform you that the PC chairs for ICALP 2016 will be
Many thanks to these colleagues for their willingness to serve as PC chairs for the conference, which will be held in Rome.


by Luca Aceto (noreply@blogger.com) at December 17, 2014 01:31 PM UTC

In my recent posting on four-dimensional polytopes containing linked or knotted cycles of edges, I showed pictures of linked cycles in three examples: the (3,3)-duopyramid, the hypercube, and (in the comments) the truncated 5-cell. All three of these have some much more special properties: the two linked cycles are induced cycles (there are no edges between two non-consecutive vertices in the same cycle), they include all the vertices in the graph, and their intersection with any two- or three-dimensional face of the polytope forms a connected path.

When this happens, we can use it to construct a nice two-dimensional grid representation of the polytope. The set of pairs (x,y) where x is a position on one of the cycles (at a vertex or along an edge) and y is a position on the other cycle forms a two-dimensional space, topologically a torus. We can think of this as a grid with wrap-around boundary conditions, where the grid lines correspond to vertex positions on one or the other cycle. The number of grid lines in each dimension is just the length of the cycle. Then, each non-cycle edge of the polytope connects one point from each cycle, so it can be represented as a grid point on this torus. Each two-dimensional face of the polytope has two non-cycle edges, and can be represented as a line segment connecting the corresponding two grid points (perhaps wrapping around from one side of the grid to the other). And when we draw these grid points and line segments, they divide the grid into cells (again, perhaps wrapping around) that turn out to correspond to the 3-dimensional faces of the polytope. So all the features of the polytope that are not part of the two cycles instead show up somewhere on this grid.

For instance, shown below in this two-dimensional grid representation are the duopyramid (two 3-cycles, so a 3 × 3 grid), the hypercube (8 × 8 grid), and the truncated 5-cell (10 × 10 grid) again. I've drawn these with the wraparound points halfway along an edge of each cycle in order to avoid placing a grid line on the boundary of the drawing. In the hypercube and truncated 5-cell, the axes are labeled by numberings of the vertices. For the hypercube, the vertices can be numbered by the 16 hexadecimal digits, where two digits are adjacent if they differ in a single bit in their binary representations. The two eight-vertex cycles can be obtained by cycling through the order of which bit changes. For the truncated 5-cell, the vertices can be numbered by ordered pairs of unequal digits from 1 to 5, where the neighbors of each vertex are obtained by changing the second digit or swapping the two digits.



Another way of thinking about this is that, on the three-dimensional surface of a 4d unit sphere, we can draw two linked unit circles, one in the xy plane and the other in the wz plane. The medial axis of these circles (the points on the sphere equally distant from both of them) is a torus, and what we're drawing in this diagram is how a polyhedral version of the same torus slices through the faces of the polytope.

You can read off the structure of each 3-dimensional cell in the polytope from the corresponding polygon in the diagram. Recall that these cells are themselves three-dimensional polyhedra whose vertices have been divided into two induced paths. So (just as in the four-dimensional case) we can make a grid from the product of these two paths, represent non-path edges as grid points, and represent two-dimensional faces as line segments connecting grid points. Each two-dimensional face has two non-path sides, and a number of path sides given by the difference in coordinates between the corresponding two grid points. So, the total number of sides of the face is just two plus the Manhattan length of the line segment representing the face. For instance, the unit line segments in the duopyramid diagram represent triangles, and the squares formed by four of these segments represent tetrahedra (four triangles). The segments of Manhattan length two in the hypercube diagram represent quadrilaterals, and the hexagons formed by six of these segments represent cubes (six quadrilaterals). In order to represent a polyhedron in this way, a grid polygon has to have vertical edges at its left and right extremes, and horizontal edges at its top and bottom extremes, because otherwise a vertex at the end of one of the two paths would have only two incident edges, impossible in a polyhedron. For the same reason each intermediate grid line must have a grid point (representing a non-path edge of the polyhedron) on it. We can make a dictionary of small polyhedra that can be decomposed into two induced paths, and their associated grid polygons:



Notice that the grid polygons don't have to be strictly convex: the octahedron has eight grid points, four of which are at the corners of a 2 × 2 square but the other four of which are in the middle of the edges of this square. But in order for a collection of polyhedra to meet up to form the faces of a four-dimensional polytope, each grid point needs at least three line segments connecting to it (each polytope edge has to be surrounded by three or more two-dimensional faces). This can only happen if each grid polygon has at most one slanted side in each of the four corners of its bounding box. So these polygons are convex except for possibly having vertices on their horizontal and vertical sides. There are also some other constraints on their shape; for instance, a hexagon with two diagonal sides within a 2 × 2 square doesn't correspond to a polyhedron, because it forms a shape that is not 3-vertex-connected.

Given this dictionary, we can form new patterns by tessellating a rectangular wrap-around grid by these grid polygons, and then ask: does the tessellation represent a 4-dimensional polytope? We have to be a little careful here, because we cannot place horizontal or vertical sides that are subdivided (e.g. in the octahedron) next to similar-looking sides that are not subdivided (e.g. in the cube). There are infinitely many possibilities, some of which give known polytopes, and some of which are unknown to me. For instance, extending the grid of squares shown for the (3,3)-duopyramid to a grid of squares in a larger rectangle produces the diagram for another kind of duopyramid.

In the drawing below, the left grid tessellation represents a linked-cycle decomposition of the hypersimplex with five tetrahedra and five octahedra (one of the polytopes Gil Kalai asked about in the comments on my previous post). It can be formed from the truncated 5-cell by contracting all of the edges that are not part of tetrahedra; because the cycle edges of the linked cycles of the truncated 5-cell alternate between tetrahedral and non-tetrahedral edges, this contraction preserves the cycle decomposition. The right grid tessellation represents the octahedral prism, with eight triangular-prism cells and two octahedral cells. Therefore, both of these polytopes are linked. In both cases I found the infinite tessellation first, found its smallest period of horizontal and vertical translation, and then was able to identify the corresponding polytope with the help of the low number of cells and high symmetry of these tessellations. But I'm confused by the brick wall in the middle. It has the right number of vertices (10) and the right shape of 3-cells (triangular dipyramids) to be the dual hypersimplex, but the number of bricks is wrong: it should be 10 and is instead 12. (A herringbone pattern of bricks will also tessellate nicely, but then the number of vertices in both cycles would be a multiple of four.) It would be nice to have a theorem characterizing which tessellations give polytopes more generally.



One thing is clear: the polytopes that have linked-cycle decompositions are a very special subclass of the 4-polytopes. For instance, for general 4-polytopes, it remains unknown whether they can have fat face lattices. That is, can a polytope with a small number of vertices and 3-cells have a large number of edges and 2-faces? This can't happen in 3d, by a calculation involving Euler's formula, but the same calculation in 4d doesn't rule out this possibility. But in linked-cycle-decomposable polytopes, the number of cycle edges equals the number of vertices. And because the 3-cells are faces of a torus graph, the number of non-cycle edges (vertices of the torus graph) and 2-faces (edges of the torus graph) are bounded by linear functions of the number of 3-cells. In particular, if there are v vertices and c 3-cells, then there can be at most v + 2c edges and at most 3c 2-cells. This bound is tight whenever the torus diagram is simple (has exactly three edges at each vertex), as it is in the hypercube, truncated 5-cell, and hypersimplex cases.

at December 17, 2014 08:17 AM UTC

When I was asked earlier this year to write a short survey on k-best enumeration algorithms for the Springer Encyclopedia of Algorithms, I wrote a first draft before checking the formatting requirements. It ended up being approximately five pages of text and seven more pages of references, and I knew I would have to cut some of that. But then I did check the format, and saw that it needed to be much shorter, approximately two pages of text and a dozen references. I don't regret doing it this way; I think having a longer version to cut down helped me to organize the results and figure out which parts were important. But then I thought: why not make the long(er) version available too? I added a few more references, so now it's about six pages of text and ten of references, still closer to an annotated bibliography than an in-depth survey. Here it is: arXiv:1412.5075.

at December 17, 2014 04:46 AM UTC

Last week I finally saw The Imitation Game, the movie with Benedict Cumberbatch as Alan Turing.

OK, so for those who haven’t yet seen it: should you?  Here’s my one paragraph summary: imagine that you told the story of Alan Turing—one of the greatest triumphs and tragedies of human history, needing no embellishment whatsoever—to someone who only sort-of understood it, and who filled in the gaps with weird fabrications and Hollywood clichés.  And imagine that person retold the story to a second person, who understood even less, and that that person retold it to a third, who understood least of all, but who was charged with making the movie that would bring Turing’s story before the largest audience it’s ever had.  And yet, imagine that enough of the enormity of the original story made it through this noisy channel, that the final product was still pretty good.  (Except, imagine how much better it could’ve been!)

The fabrications were especially frustrating to me, because we know it’s possible to bring Alan Turing’s story to life in a way that fully honors the true science and history.  We know that, because Hugh Whitemore’s 1986 play Breaking the Code did it.  The producers of The Imitation Game would’ve done better just to junk their script, and remake Breaking the Code into a Hollywood blockbuster.  (Note that there is a 1996 BBC adaptation of Breaking the Code, with Derek Jacobi as Turing.)

Anyway, the movie focuses mostly on Turing’s codebreaking work at Bletchley Park, but also jumps around in time to his childhood at Sherborne School, and to his arrest for “homosexual indecency” and its aftermath.  Turing’s two world-changing papers—On Computable Numbers and Computing Machinery and Intelligence—are both mentioned, though strangely, his paper about computing zeroes of the Riemann zeta function is entirely overlooked.

Here are my miscellaneous comments:

  • The boastful, trash-talking, humor-impaired badass-nerd of the movie seems a lot closer to The Big Bang Theory‘s Sheldon Cooper, or to some other Hollywood concept of “why smart people are so annoying,” than to the historical Alan Turing.  (At least in Sheldon’s case, the archetype is used for laughs, not drama or veracity.)  As portrayed in the definitive biography (Andrew Hodges’ Alan Turing: The Enigma), Turing was eccentric, sure, and fiercely individualistic (e.g., holding up his pants with pieces of string), but he didn’t get off on insulting the intelligence of the people around him.
  • In the movie, Turing is pretty much singlehandedly responsible for designing, building, and operating the Bombes (the codebreaking machines), which he does over the strenuous objections of his superiors.  This, of course, is absurd: Bletchley employed about 10,000 people at its height.  Turing may have been the single most important cog in the operation, but he was still a cog.  And by November 1942, the operation was already running smoothly enough that Turing could set sail for the US (in waters that were now much safer, thanks to Bletchley!), to consult on other cryptographic projects at Bell Labs.
  • But perhaps the movie’s zaniest conceit is that Turing was also in charge of deciding what to do with Bletchley’s intelligence (!).  In the movie, it falls to him, not the military, to decide which ship convoys will be saved, and which sacrificed to prevent spilling Bletchley’s secret.  If that had any historicity to it, it would surely be the most military and political power ever entrusted to a mathematician (update: see the comments section for potential counterexamples).
  • It’s true that Turing (along with three other codebreakers) wrote a letter directly to Winston Churchill, pleading for more funding for Bletchley Park—and that Churchill saw the letter, and ordered “Action this day! Make sure they have all they want on extreme priority.”  However, the letter was not a power play to elevate Turing over Hugh Alexander and his other colleagues: in fact, Alexander co-signed the letter.  More broadly, the fierce infighting between Turing and everyone else at Bletchley Park, central to the movie’s plot, seems to have been almost entirely invented for dramatic purposes.
  • The movie actually deserves a lot of credit for getting right that the major technical problem of Bletchley Park was how to get the Bombes to search through keys fast enough—and that speeding things up is where Turing made a central contribution.  As a result, The Imitation Game might be the first Hollywood movie ever made whose plot revolves around computational efficiency.  (Counterexamples, anyone?)  Unfortunately, the movie presents Turing’s great insight as being that one can speed up the search by guessing common phrases, like “HEIL HITLER,” that are likely to be in the plaintext.  That was, I believe, obvious to everyone from the beginning.
  • Turing never built a computer in his own home, and he never named a computer “Christopher,” after his childhood crush Christopher Morcom.  (On the other hand, Christopher Morcom existed, and his early death from tuberculosis really did devastate Turing, sending him into morbid-yet-prescient ruminations about whether a mind could exist separately from a brain.)
  • I found it ironic that The Imitation Game, produced in 2014, is far more squeamish about on-screen homosexuality than Breaking the Code, produced in 1986.  Turing talks about being gay (which is an improvement over 2001’s Enigma, which made Turing straight!), but is never shown embracing another man.  However, the more important problem is that the movie botches the story of the burglary of Turing’s house (i.e., the event that led to Turing’s arrest and conviction for homosexual indecency), omitting the role of Turing’s own naiveté in revealing his homosexuality to the police, and substituting some cloak-and-dagger spy stuff.  Once again, Breaking the Code handled this perfectly.
  • In one scene, Euler is pronounced “Yooler.”

For more, see an excellent piece in Slate, How Accurate Is The Imitation Game?.  And for other science bloggers’ reactions, see this review by Christos Papadimitriou (which I thought was extremely kind, though it focuses more on Turing himself than on the movie), this reaction by Peter Woit, which largely echoes mine, and this by Clifford Johnson.

by Scott at December 17, 2014 04:06 AM UTC

Authors: Pablo Pérez-Lantero
Download: PDF
Abstract: Given a set $P$ of $n$ points in the plane, we study the computation of the probability distribution function of both the area and perimeter of the convex hull of a random subset $S$ of $P$. The random subset $S$ is formed by drawing each point $p$ of $P$ independently with a given rational probability $\pi_p$. For both measures of the convex hull, we show that it is NP-hard to compute the probability that the measure is at least a given bound $w$. For $\eps\in(0,1)$, we provide an algorithm that runs in $O(n^{9}/\eps)$ time and returns a value that is between the probability that the area is at least $w$, and the probability that the area is at least $(1-\eps)w$. For the perimeter, we show a similar algorithm running in $O(n^{9}/\eps)$ time. Finally, given $\eps,\delta\in(0,1)$ and for any measure, we show an $O(n\log n+ (n/\eps^2)\log(1/\delta))$-time Monte Carlo algorithm that returns a value that with probability at least $1-\delta$ differs at most $\eps$ from the probability that the measure is at least $w$.
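
A minimal sketch of the Monte Carlo estimator mentioned in the last sentence (my own illustration using SciPy, on a made-up five-point example; the abstract's precise sample-size bound is not reproduced here):

    import random
    import numpy as np
    from scipy.spatial import ConvexHull

    def estimate_prob_area_at_least(points, probs, w, samples=20000):
        """Monte Carlo estimate of Pr[area(conv(S)) >= w], where S contains
        each point independently with its given probability.  For 2-D input,
        SciPy's ConvexHull stores the enclosed area in `volume` (the hull's
        `area` attribute would give the perimeter instead)."""
        hits = 0
        for _ in range(samples):
            S = [p for p, q in zip(points, probs) if random.random() < q]
            area = ConvexHull(np.array(S)).volume if len(S) >= 3 else 0.0
            hits += (area >= w)
        return hits / samples

    # Made-up example: five points, each kept with probability 0.9.
    pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 1)]
    print(estimate_prob_area_at_least(pts, [0.9] * 5, 8.0))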

at December 17, 2014 01:41 AM UTC

Authors: David Eppstein
Download: PDF
Abstract: We survey $k$-best enumeration problems and the algorithms for solving them, including in particular the problems of finding the $k$ shortest paths, $k$ smallest spanning trees, and $k$ best matchings in weighted graphs.

at December 17, 2014 01:41 AM UTC

Authors: Jens Maßberg
Download: PDF
Abstract: We consider the problem of embedding the Steiner points of a Steiner tree with given topology into the rectilinear plane. Thereby, the length of the path between a distinguished terminal and each other terminal must not exceed given length restrictions. We want to minimize the total length of the tree.

The problem can be formulated as a linear program and therefore it is solvable in polynomial time. In this paper we analyze the structure of feasible embeddings and give a combinatorial polynomial time algorithm for the problem. Our algorithm combines a dynamic programming approach and binary search and relies on the total unimodularity of a matrix appearing in a sub-problem.

at December 17, 2014 12:00 AM UTC

Authors: Benjamin A. Burton, Éric Colin de Verdière, Arnaud de Mesmay
Download: PDF
Abstract: Normal surface theory, a tool to represent surfaces in a triangulated 3-manifold combinatorially, is ubiquitous in computational 3-manifold theory. In this paper, we investigate a relaxed notion of normal surfaces where we remove the quadrilateral conditions. This yields normal surfaces that are no longer embedded. We prove that it is NP-hard to decide whether such a surface is immersed. Our proof uses a reduction from Boolean constraint satisfaction problems where every variable appears in at most two clauses, using a classification theorem of Feder. We also investigate variants, and provide a polynomial-time algorithm to test for a local version of this problem.

at December 17, 2014 01:41 AM UTC

Authors: Hartmut Klauck, Supartha Podder
Download: PDF
Abstract: We show new results about the garden-hose model. Our main results include improved lower bounds based on non-deterministic communication complexity (leading to the previously unknown $\Theta(n)$ bounds for Inner Product mod 2 and Disjointness), as well as an $O(n\cdot \log^3 n)$ upper bound for the Distributed Majority function (previously conjectured to have quadratic complexity). We show an efficient simulation of formulae made of AND, OR, XOR gates in the garden-hose model, which implies that lower bounds on the garden-hose complexity $GH(f)$ of the order $\Omega(n^{2+\epsilon})$ will be hard to obtain for explicit functions. Furthermore we study a time-bounded variant of the model, in which even modest savings in time can lead to exponential lower bounds on the size of garden-hose protocols.

at December 17, 2014 01:40 AM UTC

Authors: Prabhakar Ragde
Download: PDF
Abstract: Efficient implementations of sets and maps (dictionaries) are important in computer science, and balanced binary search trees are the basis of the best practical implementations. Pedagogically, however, they are often quite complicated, especially with respect to deletion. I present complete code (with justification and analysis not previously available in the literature) for a purely-functional implementation based on AA trees, which is the simplest treatment of the subject of which I am aware.
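
To give a flavor of how small the core of an AA tree is, here is a sketch of insertion in a purely functional style (my own Python illustration, not the paper's code; deletion, the part the paper treats carefully, is omitted). Nodes are tuples (level, left, key, right), and every operation returns a new tree.

    def skew(t):
        """Remove a left horizontal link by a right rotation."""
        if t is None:
            return None
        lvl, l, k, r = t
        if l is not None and l[0] == lvl:
            llvl, ll, lk, lr = l
            return (llvl, ll, lk, (lvl, lr, k, r))
        return t

    def split(t):
        """Remove two consecutive right horizontal links by a left rotation,
        promoting the middle node one level."""
        if t is None:
            return None
        lvl, l, k, r = t
        if r is not None and r[3] is not None and r[3][0] == lvl:
            rlvl, rl, rk, rr = r
            return (rlvl + 1, (lvl, l, k, rl), rk, rr)
        return t

    def insert(t, x):
        """Standard BST insertion followed by skew and split on the way up."""
        if t is None:
            return (1, None, x, None)
        lvl, l, k, r = t
        if x < k:
            t = (lvl, insert(l, x), k, r)
        elif x > k:
            t = (lvl, l, k, insert(r, x))
        return split(skew(t))

    def to_list(t):
        return [] if t is None else to_list(t[1]) + [t[2]] + to_list(t[3])

    t = None
    for x in [5, 2, 8, 1, 9, 3]:
        t = insert(t, x)
    print(to_list(t))   # [1, 2, 3, 5, 8, 9]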

at December 17, 2014 01:40 AM UTC

Authors: Dean Foster, Howard Karloff, Justin Thaler
Download: PDF
Abstract: Variable selection for sparse linear regression is the problem of finding, given an m x p matrix B and a target vector y, a sparse vector x such that Bx approximately equals y. Assuming a standard complexity hypothesis, we show that no polynomial-time algorithm can find a k'-sparse x with ||Bx-y||^2<=h(m,p), where k'=k*2^{log^{1-delta} p} and h(m,p)<=p^(C_1)*m^(1-C_2), where delta>0, C_1>0,C_2>0 are arbitrary. This is true even under the promise that there is an unknown k-sparse vector x^* satisfying Bx^*=y. We prove a similar result for a statistical version of the problem in which the data are corrupted by noise.

To the authors' knowledge, these are the first hardness results for sparse regression that apply when the algorithm simultaneously has k'>k and h(m,p)>0.

at December 17, 2014 01:40 AM UTC


What to do when afraid to see if what you want is true

[Photo of Edmund Bergler, cropped from a Canadian Bergler Society source]

Edmund Bergler coined the term in 1947; the great writers Francis Fitzgerald—F. Scott to most—and Joseph Conrad, among many others, suffered from it, as did the great cartoonist Charles Schulz. The problem is writer’s block.

Today Ken and I want to write about something that I wonder if any of you have ever had.

I will call it prover’s block. It is related to, but different from, writer’s block. Of course writer’s block is the condition that makes one unable to write, unable to create new sentences, unable to produce. It is the fear of the blank sheet of paper, which today is more likely the fear of that blank laptop screen in front of you.

There are many suggestions on how to overcome writer’s block. One I like is from the poet William Stafford who offered this advice to poets:

There is no such thing as writer’s block for writers whose standards are low enough.

The point is not to write garbage. The point is to write something: get started and be prepared to throw away lots, but write. Start getting your ideas down and trust that later, with much re-writing and edits, the writing will be okay. Of all the advice I find this one very useful. I certainly use it for GLL. I hope we do enough re-writes and edits so that most of what gets out is not garbage.

Prover’s Block

So what is prover’s block? Let me explain in a personal way, since I am just about over a bad case. I actually hope that writing this piece will help me overcome my block.

I have been working for a long time—let’s not say how long right now—to prove a certain Lemma X. I have thought at least a hundred times I had found a proof of X, but alas each time I started to work out the details the proof failed. After a while I began to doubt that X was true, but I really want X to be true. If it is true I will have proved something quite nice. No not that—not a “breakthrough”—but something that is still quite important.

A few weeks ago I looked at the statement of X from a new angle. How I missed this angle before who knows; somehow I did miss it. A quick rough check showed that this new approach should yield a proof of X. So I ran right off to the computer to write up the LaTeX version of the full details of the proof. Right.

No. I did nothing. I am afraid. I want this new approach to work so very much. I think it will. But the fear is that, as with all the previous ones, this approach will collapse when I start hashing out all the details. This is prover’s block. I am stuck right here.

I have a great new approach to X, but am afraid to work out all the details. Perhaps this is one of the advantages of working with co-authors. On this one, however, I am alone.

Some Observations by Ken

Sometimes I, Ken writing now, find it helps even just to define a few new LaTeX macros in a document header to get rolling. A similar idea definitely works for “programmer’s block”: define a few routines to make the problem smaller.

The T³ typesetting program, which I introduced to Oxford in 1985 and which is still going strong today as "Scientific Word," had the philosophy that nothing is ever started from scratch. There was no "New Document" menu item—every document had to begin as a modification of another document. I still do that with many LaTeX documents, including solo posts for this blog.

Personality-wise I work better in the mode of modifying and extending over creating ex nihilo. It may not be simplistic to ascribe this trait to the ‘P’ versus ‘J’ component of the Myers-Briggs typology. The ‘P’ stands for “perceiving” but may as well stand for “perfectionistic” or “procrastinating,” whereas those with high ‘J’ (for “judging”) may align with those able to generate content quickly from scratch with less concern over errors or polish.

Specifically with regard to proofs, one thing I’ve noticed comes in trying to prove a "simple" lemma on-the-fly while typing. Often the details mill around and cause backtracking to the extent that I’m not even sure the lemma is true anymore. I still find I need to sit with a notebook or sheets of paper to nail it down.

Open Problems

Is Lemma X proved by this new method? I, Dick, am about to find out. This has energized me to delve into seeing whether it works or not. The worst that can happen is that I will have a new angle on X and potentially new ideas will emerge. The best that can happen is that I will finally prove X.

I will let you know. Thanks for listening.


by Pip at December 16, 2014 10:35 PM UTC

Pick a number, N, then try searching for it on the web via Bing or Google (or maybe the leet version of Google). What can you expect to learn? I wasn’t quite sure of the answer, so I ran some experiments.

When N is a small positive integer—less than 100, say—the leading results tend to be mass-audience web pages that happen to display the numeral N in some prominent way, such as in a headline or a title. There are news stories (Packers 43, Falcons 37), TV stations (WXMI Fox 17), a few brand names (Motel 6), references to iconic events (9/11, Apollo 13), listings of Bible verses (Romans 3:23).

With somewhat larger integers—three or four digits—I see a lot of street addresses, area codes, tax forms, statutes and ordinances. With five-digit numbers, Zip codes become prominent. At six digits we enter the land of hex colors, accompanied by a baffling variety of part numbers, account numbers, serial numbers, patent numbers, error numbers, lottery numbers. With a search string of 8 to 10 digits, telephone directories dominate the results. Still further out on the number line, you eventually come to a numerical desert where Google and Bing usually come up empty.


To get a more quantitative sense of how numbers are distributed across the web, I decided to do some sampling. I randomly selected 2,000 positive integers of 1 to 12 decimal digits, and submitted them to Google as web search queries. To construct the query integers I started with 2,000 floating-point numbers selected uniformly at random (with replacement) from the range \(0 \le m \lt 12\). For each \(m\) I calculated \(N = \lfloor 10^{m}\rfloor\), then ran a Google search for N. The work was done by a Python script with a politeness pause of one second between queries. From the results of each search I extracted \(H(N)\), the number of hits, which Google reports near the top of the page. Here’s what I found, plotted on log-log scales:
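
A sketch of the query-construction step just described (my own reconstruction; the actual hit-count scraping is not reproduced here):

    import random
    from collections import Counter

    # 2,000 query integers N = floor(10**m), with m uniform in [0, 12), so
    # each decimal length from 1 to 12 digits is roughly equally represented.
    random.seed(0)
    queries = [int(10 ** (12 * random.random())) for _ in range(2000)]
    print(Counter(len(str(n)) for n in queries))

    # The hit counts H(N) were then scraped from Google's result pages, with
    # a one-second politeness pause between queries (not shown here).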

[Figure: log-log plot of Google hit counts H(N) for the 2,000 sampled integers N of 1 to 12 digits]

What an intriguing graph! Over most of the range in the log-log plot, the broad trend looks very nearly linear. What does that mean? If the Google data accurately reflect the state of the web, and if my sampling of the data can be trusted, it means the number of web pages mentioning numbers of magnitude \(10^k\) is roughly constant for all k in the range from \(k = 2\) to \(k = 10\). I don’t mean to suggest that specific large numbers appear just as frequently as specific small numbers. That’s obviously untrue: A typical two- or three-digit number might be found on a billion web pages, whereas a specific nine- or ten-digit number is likely to appear on only one or two pages. But there are only 90 two-digit numbers, compared with 90 billion 10-digit numbers, so the overall number of pages in those two classes is approximately the same.

Here’s another way of saying the same thing: The product of \(N\) and \(H(N)\) is nearly constant, with a geometric mean of roughly \(7 \times 10^{10}\). An equivalent statement is that:

\[\log_{10}{N} + \log_{10}{H(N)} \approx 10.86.\]

You can visualize this fact without doing any arithmetic at all. Just print a series of \(N, H(N)\) tuples in a column and observe that the total number of digits in a tuple is seldom less than 11 or greater than 13.

    N,        H(N)
    96964835, 2120
    2048, 164000000
    476899618, 214
    96416, 374000
    75555964, 3020
    171494, 182000
    154045436, 2160
    1206, 112000000
    761088, 50200
    7500301034, 24
    13211445, 10900
    1289, 77000000
    1507549, 18100
    3488, 3330000
    7507624475, 10
    17592745, 2830
    1430187656, 30
    691, 265000000
    41670244642, 2
    326, 52900000
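
Here is a quick check of that observation on the first ten tuples listed above (my own snippet): it prints each tuple's total digit count alongside \(\log_{10}{N} + \log_{10}{H(N)}\).

    import math

    pairs = [(96964835, 2120), (2048, 164000000), (476899618, 214),
             (96416, 374000), (75555964, 3020), (171494, 182000),
             (154045436, 2160), (1206, 112000000), (761088, 50200),
             (7500301034, 24)]
    for N, H in pairs:
        digits = len(str(N)) + len(str(H))
        print(digits, round(math.log10(N) + math.log10(H), 2))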

Although the vast majority of the 2,000 data points lie near the 10.86 “main sequence” line, there are some outliers. One notable example is 25898913. Most numbers of this magnitude garner a few thousand hits on Google, but 25898913 gets 29,500,000. What could possibly make that particular sequence of digits 10,000 times more popular than most of its neighbors? Apparently it’s not just an isolated freak. About half the integers between 25898900 and 25898999 score well below 10,000 hits, and the other half score above 20 million. I can’t discern any trait that distinguishes the two classes of numbers. Sampling from other nearby ranges suggests that such anomalies are rare.


A straight line on a log-log plot often signals a power-law distribution. The classic example is the Zipfian distribution of word frequencies in natural-language text, where the kth most common word can be expected to appear with frequency proportional to \(k^{-\alpha}\), with \(\alpha \approx 1\). Does a similar rule hold for integers on the web? Maybe. I tried fitting a power law to the data with the powerlaw Python package from Jeff Alstott et al. The calculated value of \(\alpha\) was about 1.17, which seems plausible enough, but other diagnostic indicators were not so clear. Identifying power laws in empirical data is notoriously tricky, and I don’t have much confidence in my ability to get it right, even with the help of a slick code library.
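
For reference, the package is invoked roughly as follows (on stand-in heavy-tailed data here, since the scraped hit counts are not included in the post):

    import random
    import powerlaw   # the package by Alstott et al. mentioned above

    # Stand-in for the 2,000 scraped H(N) values: synthetic heavy-tailed ints.
    random.seed(0)
    hits = [int(random.paretovariate(1.2)) + 1 for _ in range(2000)]

    fit = powerlaw.Fit(hits, discrete=True)
    print(fit.power_law.alpha, fit.power_law.xmin)

    # One of the package's diagnostics: compare the power-law fit against a
    # lognormal; R > 0 favors the power law, and p indicates its significance.
    R, p = fit.distribution_compare('power_law', 'lognormal')
    print(R, p)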

I’m actually surprised that the pattern in the graph above looks so Zipfian, because the data being plotted don’t really represent the frequencies of the numbers. Google’s hit count \(H(N)\) is an approximation to the number of web pages on which \(N\) appears, not the number of times that \(N\) appears on the web. Those two figures can be expected to differ because a page that mentions \(N\) once may well mention it more than once. For example, a page about the movie 42 has eight occurrences of 42, and a page about the movie 23 has 13 occurrences of 23. (By the way, what’s up with all these numeric movie titles?)

Another distorting factor is that Google apparently implements some sort of substring matching algorithm for digit strings. If you search for 5551212, the results will include pages that mention 8005551212 and 2125551212, and so on. I’m not sure how far they carry this practice. Does a web page that includes the number 1234 turn up in search results for all nonempty substrings: 1234, 123, 234, 12, 23, 34, 1, 2, 3, 4? That kind of multiple counting would greatly inflate the frequencies of numbers in the Googleverse.

It’s also worth noting that Google does some preprocessing of numeric data both in web pages and in search queries. Commas, hyphens, and parentheses are stripped out (but not periods/decimal points). Thus searches for 5551212, 555-1212, and 5,551,212 all seem to elicit identical results. (Enclosing the search string in quotation marks suppresses this behavior, but I didn’t realize that until late in the writing of this article, so all the results reported here are for unquoted search queries.)


In the graph above, the linear trend seems to extend all the way to the lower righthand corner, but not to the upper lefthand corner. If we take seriously the inferred equation \(N \times H(N) = 7 \times 10^{10}\), then the number of hits for \(N = 1\) should obviously be \(7 \times 10^{10}\). In fact, searches for integers in the range \(1 \le N \le 25\) generally return far fewer hits. Many of the results are clustered around \(10^{7}\), four or five orders of magnitude smaller than would be expected from the trend line.

To investigate this discrepancy, I ran another series of Google searches, recording the number of hits for each integer from 0 through 100. Note that in this graph the y axis is logarithmic but the x axis is linear.

[Figure: Google hit counts for each integer from 0 through 100 (logarithmic y axis, linear x axis)]

There’s no question that something is depressing the abundance of most numbers less than 25. The abruptness of the dip suggests that this is an artifact of an algorithm or policy imposed by the search engine, rather than a property of the underlying distribution. I have a guess about what’s going on. Small numbers may be so common that they are treated as “stop words,” like “a,” “the,” “and,” etc., and ignored in most searches. Perhaps the highest-frequency numbers are counted only when they appear in an <h1> or <h2> heading, not when they’re in ordinary text.

But much remains unexplained. Why do 2, 3, 4, and 5 escape the too-many-to-count filter? Same question for 23. What’s up with 25 and 43, which stand more than 10 times taller than their nearest neighbors? Finally, in this run of 101 Google searches, the hit counts for small integers are mostly clustered around \(10^6\), whereas the earlier series of 2,000 random searches produced a big clump at \(10^7\). In that earlier run I also noticed that searching repeatedly for the same \(N\) could yield different values of \(H(N)\), even when the queries were submitted in the space of a few seconds. For example, with \(N=1\) I saw \(H(N)\) values ranging from 10,400,000 to 1,550,000,000. Presumably, the different values are coming from different servers or different data centers in Google’s distributed database.

I was curious enough about the inconsistencies to run another batch of random searches. In the graph below the 2,000 data points from the first search are light blue and the 2,000 new points are dark blue.

[Figure: the two batches of 2,000 random searches overlaid; the first run in light blue, the second in dark blue]

Over most of the range, the two data sets are closely matched, but there’s a conspicuous change in the region between \(10^2\) and \(10^4\). In the earlier run, numbers in that size range were split into two populations, with frequencies differing by a factor of 10. I was unable to identify any property that distinguishes members of the two populations; they are not, for example, just odd and even numbers. In the new data, the lower branch of the curve has disappeared. Now there is a sharp discontinuity at \(N = 10^4\), where the typical frequency falls by a factor of 10. I have no idea what this is all about, but I strongly suspect it’s something in the Google algorithms, not in the actual distribution of numbers on the web.


The limitations of string matching—or even regular-expression matching—are more troublesome when you go beyond searching for simple positive integers. I’ve hardly begun to explore this issue, but the following table hints at one aspect of the problem.

    N          top hit
    17.3       HP Anodized Silver 17.3″ Pavilion
    17.30      17.30j Syllabus – MIT
    17.300     Chapter 17.300 COMPLIANCE
    17.3000    Map of Latitude: 17.3000, Longitude: -62.7333
    17.30000   41 25 0 0 2.000000 4.000000 6.000000 8.000000
    17.300000  17.300000 [initially -35.600000] gi_24347982 (+) RNA

Search queries that are mathematically equal (when interpreted as decimal representations of real numbers) yield quite different results. And 4.999… is definitely not equal to 5.000… in the world of Google web search.

It gets even worse with fractions. A search for 7/3 brought me a calculator result correctly announcing that “7/3 = 2.33333333333” but it also gave me articles headlined “7^3 – Wolfram Alpha”, “Matthew 7:3”, “49ers take 7-3 lead”, and “Hokua 7’3″ LE – Naish”. (Enclosing the search term in quotation marks doesn’t help in this case.)


Before closing the book on this strange numerical diversion that has entertained me for the past couple of weeks, I want to comment on one more curious discovery. If you run enough searches for large numbers, you’ll eventually stumble on web sites such as numberworld, numberempire, numbersbase, each-number, every-number, all-numbers, integernumber, numbersaplenty, and numberopedia. A few of these sites appear to be created and curated by genuine number-lore enthusiasts, but others have a whiff of sleazy search-engine baiting. (For that reason I’m not linking to any of them.)

Here’s part of a screen capture from Numbers Aplenty, which is one of the more interesting sites:

[Screen capture from the Numbers Aplenty site]

Each of the numbers displayed on the page is a link to another Numbers Aplenty page, and the site is apparently equipped to display such a page for any positive integer less than \(10^{16}\). A few years ago, Google reported that they had indexed a trillion unique URLs on the world wide web. Evidently they hadn’t yet worked their way through the 10,000 trillion URLs at Numbers Aplenty. (But I’m pretty sure the server doesn’t have \(10^{16}\) HTML files stored on disk, patiently waiting for someone to request the information.)


And, finally, a trivia question: What is the smallest positive integer for which a Google search returns zero results? The smallest I have recorded so far is 10,041,295,923. (Of course that could change after the Googlebot indexes this page.) Can anyone find an example with 10 or fewer digits?

by Brian Hayes at December 16, 2014 09:38 PM UTC

We give a new simple proof of the time hierarchy theorem for heuristic BPP originally proved by Fortnow and Santhanam [FS04] and then simplified and improved by Pervyshev [P07]. In the proof we use a hierarchy theorem for sampling distributions recently proved by Watson [W13]. As a byproduct we get that P is not included in BPTime[$n^k$] if one-way functions exist. As far as we know this statement was not known before. We also show that our technique may be extended to time hierarchies in some other heuristic classes.

at December 16, 2014 07:33 PM UTC

We examine visibly counter languages, which are languages recognized by visibly counter automata (a.k.a. input-driven counter automata). We are able to effectively characterize the visibly counter languages in AC$^0$, and show that they are contained in FO[+].

at December 16, 2014 07:31 PM UTC

We consider variants of the Minimum Circuit Size Problem MCSP, where the goal is to minimize the size of oracle circuits computing a given function. When the oracle is QBF, the resulting problem MCSP$^{QBF}$ is known to be complete for PSPACE under ZPP reductions. We show that it is not complete under logspace reductions, and indeed it is not even hard for TC$^0$ under uniform AC$^0$ reductions. We obtain a variety of consequences that follow if oracle versions of MCSP are hard for various complexity classes under different types of reductions. We also prove analogous results for the problem of determining the resource-bounded Kolmogorov complexity of strings, for certain types of Kolmogorov complexity measures.

at December 16, 2014 07:28 PM UTC

Authors: James Caldwell, Philip Hölzenspies, Peter Achten
Download: PDF
Abstract: The goal of TFPIE is to gather researchers, professors, teachers, and all professionals interested in functional programming in education. This includes the teaching of functional programming, but also the application of functional programming as a tool for teaching other topics. The post-workshop review process received 13 submissions, which were vetted by the program committee, assuming scientific journal standards of publication. The six articles in this volume were selected for publication as the result of this process.

at December 16, 2014 02:00 AM UTC

Authors: Abhishek Bhowmick, Shachar Lovett
Download: PDF
Abstract: The problem of constructing explicit functions which cannot be approximated by low degree polynomials has been extensively studied in computational complexity, motivated by applications in circuit lower bounds, pseudo-randomness, constructions of Ramsey graphs and locally decodable codes. Still, most of the known lower bounds become trivial for polynomials of super-logarithmic degree. Here, we suggest a new barrier explaining this phenomenon. We show that many of the existing lower bound proof techniques extend to nonclassical polynomials, an extension of classical polynomials which arose in higher order Fourier analysis. Moreover, these techniques are tight for nonclassical polynomials of logarithmic degree.

at December 16, 2014 01:41 AM UTC

Authors: Flávio K. Miyazawa, Lehilton L. C. Pedrosa, Rafael C. S. Schouery, Maxim Sviridenko, Yoshiko Wakabayashi
Download: PDF
Abstract: We give an asymptotic approximation scheme (APTAS) for the problem of packing a set of circles into a minimum number of unit square bins. To obtain rational solutions, we use augmented bins of height $1+\gamma$, for some arbitrarily small number $\gamma > 0$. Our algorithm is polynomial in $\log(1/\gamma)$, and thus $\gamma$ is part of the problem input. For the special case that $\gamma$ is constant, we give a (one dimensional) resource augmentation scheme, that is, we obtain a packing into bins of unit width and height $1+\gamma$ using no more than the number of bins in an optimal packing. Additionally, we obtain an APTAS for the circle strip packing problem, whose goal is to pack a set of circles into a strip of unit width and minimum height. These are the first approximation and resource augmentation schemes for these problems.

Our algorithm is based on novel ideas of iteratively separating small and large items, and may be extended to a wide range of packing problems that satisfy certain conditions. These extensions comprise problems with different kinds of items, such as regular polygons, or with bins of different shapes, such as circles and spheres. As an example, we obtain APTAS's for the problems of packing d-dimensional spheres into hypercubes under the $L_p$-norm.

at December 16, 2014 02:00 AM UTC

Authors: Maxime Crochemore, Robert Mercas
Download: PDF
Abstract: The work takes another look at the number of runs that a string might contain and provides an alternative proof for the bound.

at December 16, 2014 02:00 AM UTC

Authors: Dmitry Kosolobov
Download: PDF
Abstract: The online repetition detection problem is to detect the first occurrence of substring of a given exponent in a string that is read sequentially from left to right. Recall that a string $s$ has an exponent $e$ if $|s| = ep$ for a period $p$ of $s$. In this paper we present two algorithms solving this problem.

1). The first one works in $O(n\log\sigma)$ time and linear space, where $\sigma$ is the number of distinct letters in the input string. This algorithm is relatively simple and doesn't require much memory, unlike the previously known solution with the same working time and space.

2). The second one supports the backtrack operation removing the last letter of input string. This algorithm takes $O(n\log m)$ time and $O(m)$ space, where $m$ is the length of a longest string generated during the execution of given $n$ backtrack and read operations.

at December 16, 2014 02:00 AM UTC

Authors: Jop Briët, Oded Regev
Download: PDF
Abstract: We prove that for any $\varepsilon > 0$ it is Unique-Games-hard to approximate the non-commutative Grothendieck problem to within a factor $1/2 + \varepsilon$, which matches the approximation ratio of the algorithm of Naor, Regev, and Vidick (STOC'13). Our proof uses an embedding of $\ell_2$ into the space of matrices endowed with the trace norm with the property that the image of standard basis vectors is longer than that of unit vectors with no large coordinates.

at December 16, 2014 01:40 AM UTC

Authors: Maciej Drwal
Download: PDF
Abstract: In this paper we consider the problem of scheduling jobs on parallel identical machines, where the processing times of jobs are uncertain: only interval bounds of processing times are known. The optimality criterion of a schedule is the total flow time. In order to cope with uncertainty we make use of the robust optimization framework: we seek a schedule which performs well under all possible instantiations of processing times. Consequently, the goal is to minimize the maximum regret value of solution. Although the deterministic version of this problem is solvable in polynomial time, the minmax regret version is known to be weakly NP-hard even for a single machine, and strongly NP-hard for parallel unrelated machines. In this paper we show that the problem is strongly NP-hard also in the case of parallel identical machines.

at December 16, 2014 12:00 AM UTC

Authors: Michael Blondin, Alain Finkel, Stefan Göller, Christoph Haase, Pierre McKenzie
Download: PDF
Abstract: Determining the complexity of the reachability problem for vector addition systems with states (VASS) is a long-standing open problem in computer science. Long known to be decidable, the problem to this day lacks any complexity upper bound whatsoever. In this paper, reachability for two-dimensional VASS is shown PSPACE-complete. This improves on a previously known doubly exponential time bound established by Howell, Rosier, Huynh and Yen in 1986. The coverability and boundedness problems are also noted to be PSPACE-complete. In addition, some complexity results are given for the reachability problem in two-dimensional VASS and in integer VASS when numbers are encoded in unary.
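
For readers new to the model, the following Python sketch spells out what a 2-dimensional VASS is and checks reachability by a naive breadth-first search with the counters capped at an arbitrary bound; it is nothing like a PSPACE procedure, and a negative answer is only meaningful up to the cap. The encoding and names are invented for the illustration.

    from collections import deque

    def reachable(transitions, start, target, cap=1000):
        # A 2-VASS is a finite set of control states together with transitions
        # (q, (a, b), q') that move from state q to state q' while adding the
        # integer vector (a, b) to two counters, which must stay non-negative.
        # Configurations are triples (state, counter1, counter2).
        # Naive BFS with counters capped at `cap`: True proves reachability,
        # False only means the target was not reached within the cap.
        seen = {start}
        queue = deque([start])
        while queue:
            q, x, y = queue.popleft()
            if (q, x, y) == target:
                return True
            for (q1, (a, b), q2) in transitions:
                if q1 != q:
                    continue
                nx, ny = x + a, y + b
                if 0 <= nx <= cap and 0 <= ny <= cap and (q2, nx, ny) not in seen:
                    seen.add((q2, nx, ny))
                    queue.append((q2, nx, ny))
        return False

    # Example: transfer counter 1 into counter 2, then switch state.
    # trans = [("p", (-1, 1), "p"), ("p", (0, 0), "q")]
    # reachable(trans, ("p", 3, 0), ("q", 0, 3))  ->  True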

at December 16, 2014 01:40 AM UTC

Authors: Nikolaos Kariotoglou, Kostas Margellos, John Lygeros
Download: PDF
Abstract: We discuss the computational complexity and feasibility properties of scenario-based techniques for uncertain optimization programs. We consider different solution alternatives, ranging from the standard scenario approach to recursive variants, and compare feasibility as a function of the total computational burden. We identify trade-offs between the different methods depending on the problem structure and the desired probability of constraint satisfaction. Our motivation for this work stems from the applicability and complexity reduction offered by making decisions via recursive algorithms. We illustrate our results on an example from the area of approximate dynamic programming.
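
As a bare-bones illustration of the standard scenario approach (not of the recursive variants compared in the paper), the sketch below replaces a chance constraint by N sampled constraints and solves the resulting linear program with scipy; the toy problem data, names, and parameters are all invented for the example.

    import numpy as np
    from scipy.optimize import linprog

    def scenario_program(n_scenarios, seed=0):
        # Toy uncertain LP: minimize x1 + x2 over x >= 0 subject to the chance
        # constraint that d^T x >= 1 holds with high probability, where d is
        # uniform on [0.5, 1.5]^2.  Scenario approach: draw N samples of d and
        # enforce the constraint for every sample, i.e. d_i^T x >= 1 for all i.
        rng = np.random.default_rng(seed)
        D = rng.uniform(0.5, 1.5, size=(n_scenarios, 2))
        # linprog expects A_ub @ x <= b_ub, so rewrite d^T x >= 1 as -d^T x <= -1.
        res = linprog(c=[1.0, 1.0], A_ub=-D, b_ub=-np.ones(n_scenarios),
                      bounds=[(0, None), (0, None)])
        return res.x

    # More scenarios give a more conservative (more reliably feasible) solution:
    # print(scenario_program(10), scenario_program(1000))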

at December 16, 2014 01:41 AM UTC

Authors: Jan Kynčl, Bernard Lidický, Tomáš Vyskočil
Download: PDF
Abstract: An irreversible $k$-threshold process (also a $k$-neighbor bootstrap percolation) is a dynamic process on a graph where vertices change color from white to black if they have at least $k$ black neighbors. An irreversible $k$-conversion set of a graph $G$ is a subset $S$ of vertices of $G$ such that the irreversible $k$-threshold process starting with $S$ black eventually changes all vertices of $G$ to black. We show that deciding the existence of an irreversible 2-conversion set of a given size is NP-complete, even for graphs of maximum degree 4, which answers a question of Dreyer and Roberts. In contrast, we show that for graphs of maximum degree 3, the minimum size of an irreversible 2-conversion set can be computed in polynomial time. Moreover, we find an optimal irreversible 3-conversion set for the toroidal grid, simplifying constructions of Pike and Zou.
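
The process itself is easy to simulate directly from the definition (this has nothing to do with the hardness reduction or the degree-3 algorithm in the paper). A minimal Python sketch, with an invented adjacency-list encoding:

    def is_conversion_set(adj, seed, k):
        # adj: dict mapping each vertex to an iterable of its neighbors.
        # Run the irreversible k-threshold process starting from `seed`:
        # a white vertex turns black once it has >= k black neighbors.
        # Returns True iff every vertex is eventually black.
        black = set(seed)
        changed = True
        while changed:
            changed = False
            for v in adj:
                if v not in black and sum(u in black for u in adj[v]) >= k:
                    black.add(v)
                    changed = True
        return len(black) == len(adj)

    # Example: a 4-cycle; two opposite vertices form an irreversible 2-conversion set.
    # cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    # is_conversion_set(cycle, {0, 2}, 2)  ->  True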

at December 16, 2014 01:40 AM UTC

The problem of constructing explicit functions which cannot be approximated by low degree polynomials has been extensively studied in computational complexity, motivated by applications in circuit lower bounds, pseudo-randomness, constructions of Ramsey graphs and locally decodable codes. Still, most of the known lower bounds become trivial for polynomials of superlogarithmic degree. Here, we suggest a new barrier explaining this phenomenon. We show that many of the existing lower bound proof techniques extend to nonclassical polynomials, an extension of classical polynomials which arose in higher order Fourier analysis. Moreover, these techniques are tight for nonclassical polynomials of logarithmic degree.

at December 15, 2014 11:16 PM UTC

One issue I keep seeing in comments here and elsewhere on this issue is that academia is very competitive, with everyone worried about their rank. In my last post, I admitted that it would be hard to completely deny that there is competition, particularly when people are younger, which tends to come out when jobs are at stake. But, for the most part, I think the role of competition is completely exaggerated, strangely so. Academia -- at least, certainly, my branch of it -- thrives on collaboration, and I believe people who go into it thinking that it is a big competition are going to lose out, both professionally and in their enjoyment of the profession. (Indeed, one of the reasons I write these posts is that I don't want undergraduate and graduate students getting what I think is a one-sided, incorrect view of how academics work.)

First, I would again like to compare academics with other professions.  I've seen comments (including on this blog) that other professions are less competitive than academia, and people are less worried about rank.  I think people who are making that suggestion need to check in with people working in those professions, because I think they're ridiculously wrong.  Lawyers coming out of school to get jobs, doctors trying to get fellowships or residencies, consultants at consulting firms -- all very competitive.  As you move up, this continues.  For lawyers, there's who can bill the most hours, and the challenge to make partner;  for doctors, who can get the best positions at hospitals;  and for businesspeople, every promotion is a step up.  Even for software engineering types, there's competition.  When I was a graduate student, I recall visiting friends who had gone to a large well-known company, and for a large part of the evening all they talked about was what "level" they and others were in the company and the annual reviews and who was doing what project that might get them ahead.  So let's not be delusional and start by acknowledging that there's competition everywhere, and that's unsurprising when jobs and money are at stake.  While I'm not suggesting I have deep knowledge of all of these careers, I think academics have much less competition than most.  

If academics appear to be concerned about ranking, perhaps it's because they appear to be easy to rank. First, as I pointed out last post, there's not that many of us. Second, there are obvious metrics everyone can understand: number of papers published, number of papers appearing in "top" conferences, and h-index stand out. I'm not suggesting these are good metrics -- but they're easy and, to a first order, give (potentially) some useful information. They're a quick way of bucketing or sorting people, particularly those without an established track record, who are therefore not necessarily widely known or visible in the field; as a result, these metrics make more of an impression on, and have more of an impact on, younger academics.

But very quickly after your PhD, this sort of ranking loses its importance, and the very idea of ranking starts to lose its value -- as many have noted in a variety of venues. In academia, there are lots of ways to be successful, many points on the Pareto frontier. There are many great results waiting to be found and many different subfields to work in. At the end of the day, a history of good work and specific achievements is what people look for; there's not really a finite pool of such things for which to compete. Indeed, I'm not sure how I would go about competing with the top people in the field, except to try to do interesting work, which is what I'm trying to do anyway. (A secondary way to compete is just to make sure people know about your work. But giving talks is less important to being successful than doing the work that goes into the talks in the first place; again, it can have a bigger impact for people in the early stages of their career.)

Against this idea of competition, just look at how people in academia work together.  In computer science theory, in particular, most papers have several authors working together.  In a number of cases these are students with their advisors, but a closer look reveals that in many cases, they are not.  Credit can be shared easily in academia, and collaborations can lead to results that individuals could not get alone.  Working in groups is a way for people to get more done.  Instead of competition, collaboration often yields the path to having a greater impact on the field.  Rather than being a "competitive game", research is more a "cooperative game".  (As an aside, this is why theory's approach of alphabetical order for authors rather than some sort of implicit "credit scheme" based on author order makes such great sense.)  In most areas of computer science that I've participated in, a similar spirit prevails.

I encourage graduate students to team up and pick out projects to work on together (and have seen this at other places, also -- one of my best experiences as a graduate student was such a project). It gives them something to do and a sense of ownership outside of working with their advisor. And, importantly, it reinforces that these other students are their colleagues, and that working together is a great idea that can be a gain for everyone. Hopefully, they also learn that working together is more fun and generally more productive than working alone. When it comes to hiring time, it's nice to see students who have worked on such team projects, because I typically prefer colleagues with a track record of working well with others.

Sometimes small competitions break out, sure -- multiple groups are working on the same or similar problems. Often, though, this is a very healthy competition, pushing progress forward in an area over a series of papers. I remember an argument with another group some years back, when we were working on similar problems and an issue of "credit" in the write-up of the various papers came up. A week later, we were starting collaborations together on new ideas. That's not exactly the sign of a super-competitive landscape.

It could be that I've just got a mistaken impression of the field. Harvard is a wonderfully collaborative place, and personally I've found that, overall, I like working with others more than on my own. But when I think of cutthroat competition, I don't think of the job I'm in.

To conclude the post, I think what may be going on is that people confuse "competition" with "drive". Most academics are smart, successful people, who are internally driven by a desire to do great work. To many, that must appear like "competition", but if so, it's internal competition -- you're not out to beat others, but to be your best. And I think it's very possible academia does have more "Type A" personalities who have this internal drive, which is surely not always healthy. It's not clear to me that this is academia's fault -- such people would be similarly driven in any career -- but, if it is true, then it suggests we might consider whether this is best for our field, and how we might open up the field to a wider set of personalities or how we might make work in this field healthier for this type of personality.

by Michael Mitzenmacher (noreply@blogger.com) at December 15, 2014 09:59 PM UTC

Last Friday in our theory reading group, Yael Kalai observed that there was only one other woman in the room. She noticed it because in cryptography meetings, at least in the Boston area, there is a significantly higher female presence. Make no mistake: cryptography, even in Boston, still has a very lopsided gender ratio. But I think it is still a bit better than some of the other areas of theoretical computer science, largely due to a few strong role models such as Shafi Goldwasser. The gender ratio in TCS is sometimes thought of as an unfortunate but constant fact of life, due to larger forces in society beyond our control. Examples such as this show that actions by small communities or even individuals can make a difference.

I truly believe that theoretical computer science, and computer science at large, will grow significantly in importance over the coming decades, as it occupies a much more central place in many of the sciences and other human activities. We’ll have many more problems to solve, and we can’t do it without using half of the world’s talent pool.

by Boaz Barak at December 15, 2014 07:53 PM UTC