Theory of Computing Blog Aggregator

at March 01, 2015 02:09 AM UTC

This is the fifteenth time I will be voting, and the third time we face elections since I started writing this blog. In the past I have not publicly expressed my personal choice. For me, election day is a festive day; the decisions between opposing worldviews and conflicting interests are not easy, and there are different ways, which I respect, of judging the reality and the candidates.

This is the first time I am giving public expression to my choice.

As always, any government formed after the elections will have to contend with difficulties, threats, and perhaps even wars. But this time, without a change in the country's leadership, I see a grave possibility of a deterioration of our society's core values, together with the deterioration and corruption of the institutions of state and society.

This is the time for change.

In these elections I support the Zionist Union (HaMachane HaTzioni) list.

 

 


 

 


by Gil Kalai at March 01, 2015 02:00 AM UTC

It would appear that there’s a substantial academic literature on social groupings amongst adolescents at schools. I learned about this while attending a seminar by Robert Akerlof on “Social norm formation: the role of esteem”. The paper considers a simplified model of social interaction between adolescents1 in which (in a 2-player version) each player makes 3 choices: effort at 2 activities (academic achievement versus rock music), which one to value, and whether to interact with the other player. There are exogenous costs of effort and of interaction; the latter may be positive or negative. If both players choose to interact with effort at the same activity, then the weaker player grants esteem to the other player; self-esteem may also be derived from valuing the activity you really prefer. Esteem is what everyone wants. The model is somewhat reminiscent of social network models of opinion adoption, but without an underlying graph and neighbourhood structure.

The paper aims to be “the first model to capture the conflict between conformity and differentiation, which is at the heart of social interaction in many economic settings”. One key feature of the real world that the model aims to capture is the way we give up on activities where there are poor prospects of being competitive; competition for academic achievement is high amongst pupils of similar ability, but a big disparity may cause the weaker ones to switch to something else entirely. Apparently this can be used to explain why Catholic schools have lower drop-out rates; the lower cost of interaction helps (according to the equilibrium of the simultaneous-move game) pupils who are weak academically, and would be most likely to drop out. The model also predicts a “smart set”, and a “cool set” who have high self-esteem, and the middle, who have low self-esteem. That was probably you, if you read this far.

1No jokes now. I’m willing to accept that it’s possible to simplify interaction amongst adolescents.

by Paul Goldberg (noreply@blogger.com) at February 27, 2015 04:20 PM UTC

Does the information complexity of a function equal its communication complexity? We examine whether any currently known techniques might be used to show a separation between the two notions. Recently, Ganor et al. provided such a separation in the distributional setting for a specific input distribution. We show that in the non-distributional setting, the relative discrepancy bound they defined is, in fact, smaller than the information complexity, and hence it cannot be used to separate information and communication complexity. In addition, in the distributional case, we provide an equivalent linear program formulation for relative discrepancy and relate it to variants of the partition bound, also resolving an open question regarding the relation of the partition bound and information complexity. Lastly, we prove the equivalence between the adaptive relative discrepancy and the public-coin partition bound, which implies that the logarithm of the adaptive relative discrepancy bound is quadratically tight with respect to communication.

at February 27, 2015 01:59 PM UTC

We are advertising a 24-month postdoctoral research fellowship in theoretical computer science shared between the University of Sydney and the University of New South Wales, Australia. The expected start date is September 2015.

The successful applicant will conduct research on algorithms with Serge Gaspers, Joachim Gudmundsson and Julian Mestre in the USYD SACT research group and the UNSW Algorithms group.

Webpage: http://sydneyalgorithms.wordpress.com/postdoctoral-fellowship/

by theorycsjobs at February 27, 2015 01:47 PM UTC

Combinatorics and More’s Greatest Hits

First Month

Combinatorics, Mathematics, Academics, Polemics, …
Helly’s Theorem, “Hypertrees”, and Strange Enumeration I (with 3 follow-up posts)
Extremal Combinatorics I: Extremal Problems on Set Systems (with 4 follow-up posts: II, III, IV, VI)
Drachmas
Rationality, Economics and Games

Open problems

Five Open Problems Regarding Convex Polytopes
Seven Problems Around Tverberg’s Theorem
F ≤ 4E
The AC0 Prime Number Conjecture
Coloring Simple Polytopes and Triangulations
Some old and new problems in combinatorics and geometry
Noise Stability and Threshold Circuits

Taxi and other Stories

Cosmonaut: Michal Linial
Michal Linial: No Witches in Portugal
Tel-Aviv’s “Jerusalem Beach”
Coffee, Cigarettes, and Aggression

Polytopes

Ziegler’s Lecture on the Associahedron
The Simplex, the Cyclic polytope, the Positroidron, the Amplituhedron, and Beyond
Satoshi Murai and Eran Nevo proved the Generalized Lower Bound Conjecture.
Telling a Simple Polytope From its Graph

Combinatorics

When Do a Few Colors Suffice?
Fractional Sylvester-Gallai
My Mathematical Dialogue with Jürgen Eckhoff
Influence, Threshold, and Noise

Quantum

A Few Slides and a Few Comments From My MIT Lecture on Quantum Computers
Why Quantum Computers Cannot Work: The Movie!
Symplectic Geometry, Quantization, and Quantum Noise
Aaronson and Arkhipov’s Result on Hierarchy Collapse

Computing

Is Backgammon in P?
Four Derandomization Problems
Navier-Stokes Fluid Computers
Subexponential Lower Bound for Randomized Pivot Rules!
Impagliazzo’s Multiverse

 

Economics and Rationality

Arrow’s Economics 1
Social Choice Talk

Sex, football, and other sports

Academic Degrees and Sex
The Intermediate Value Theorem Applied to Football
Is More Sex Safe? A book review.

Guest posts

Emmanuel Abbe: Erdal Arıkan’s Polar Codes
János Pach: Guth and Katz’s Solution of Erdős’s Distinct Distances Problem
Amir Ban on Deep Junior
About Conjectures: Shmuel Weinberger
(Eran Nevo) The g-Conjecture I
Christine Bjorner: The Golden Room and the Golden Mountain

Polls

What does “beyond a reasonable doubt” practically mean?
Would you decide the election if you could?
The Generation Gap

People

Oded
Happy Birthday Richard Stanley!
Happy Birthday Ron Aharoni!
Nati’s Influence

Polymath and online research

The Polynomial Hirsch Conjecture: A proposal for Polymath3
Polymath 8 – a Success!
Polymath8: Bounded Gaps Between Primes
Mathematics, Science, and Blogs
Polymath1: Success!

Opinions

Chess can be a Game of Luck
Amy Triumphs* at the Shtetl

Test your intuition

Test Your Intuition (17): What does it Take to Win Tic-Tac-Toe
Mittag-Leffler Institute and Yale, Winter 2005; Test your intuition: Who Played the Piano?

Mathematics

The Amitsur-Levitzki Theorem for a Non-Mathematician.
About Mathematics
Can Category Theory Serve as the Foundation of Mathematics?
Buffon’s Needle and the Perimeter of Planar Sets of Constant Width
A Few Mathematical Snapshots from India (ICM2010)
The Beauty of Mathematics

Science

Answer: Lord Kelvin, The Age of the Earth, and the Age of the Sun
Itamar Pitowsky: Probability in Physics, Where does it Come From?

Riddles

Two Math Riddles
The Ultimate Riddle
Greg’s Dinosaurs Riddle

Discussions of various kinds

When It Rains It Pours
Why is mathematics possible?
Chomskian Linguistics (The phrase “more or less” in various languages)

News

The Kadison-Singer Conjecture has been Proved by Adam Marcus, Dan Spielman, and Nikhil Srivastava
Francisco Santos Disproves the Hirsch Conjecture
Amazing: Peter Keevash Constructed General Steiner Systems and Designs
Exciting News on Three Dimensional Manifolds
Roth’s Theorem: Tom Sanders Reaches the Logarithmic Barrier
Greg Kuperberg: It is in NP to Tell if a Knot is Knotted! (under GRH!)
From Oberwolfach: The Topological Tverberg Conjecture is False
New Ramanujan Graphs!

Updates

Celebrations in Sweden and Norway
Mabruk Elon, India, and More

Teaching

A Proof by Induction with a Difficulty
In how many ways can you choose a committee of three students from a class of ten students?

MathOverflow related

Fundamental Examples
Why are Planar Graphs so Exceptional
Joel David Hamkins’ 1000th MO Answer is Coming
Joe’s 100th MO question

Gina says

My Book: “Gina Says,” Adventures in the Blogosphere String War
Anat Lotan: Who is Gina I
Anat Lotan: Who is Gina II, My Own Shocking Revelation
Praise For ‘Gina says’

Humor (?)

Believing that the Earth is Round When it Matters
Annotating Kimmo Eriksson’s Poem

Public service

Optimism – two quotes
The Simplified Pill Algorithm

Useful links

Stand Clear of The Closing Doors, Please

Personal

Ilan and me
Geometry, Equality, and Luck


by Gil Kalai at February 27, 2015 10:29 AM UTC

Applications are invited for a postdoctoral position at the University of Waterloo in algorithms and optimization, hosted by Joseph Cheriyan and Lap Chi Lau. The position is for one year (with a possible extension for a second year), starting in Fall 2015.

Applicants should email their CV, Research Statement, and names of three references to Joseph Cheriyan (jcheriya@uwaterloo.ca) and Lap Chi Lau by April 30.

Email: lapchi@uwaterloo.ca

by theorycsjobs at February 27, 2015 01:17 AM UTC

Authors: Meirav Zehavi
Download: PDF
Abstract: Given an undirected graph $G$ and a parameter $k$, the $k$-Leaf Spanning Tree ($k$-LST) problem asks if $G$ contains a spanning tree with at least $k$ leaves. This problem has been extensively studied over the past three decades. In 2000, Fellows et al. [FSTTCS'00] explicitly asked whether it admits a klam value of 50. Steady progress towards an affirmative answer continued until 5 years ago, when an algorithm of klam value 37 was discovered. In this paper, we present an $O^*(3.188^k)$-time parameterized algorithm for $k$-LST, which shows that the problem admits a klam value of 39. Our algorithm is based on an interesting application of the well-known bounded search trees technique, where the correctness of rules crucially depends on the history of previously applied rules in a non-standard manner.
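The paper's history-dependent branching rules are not reproduced here; as a generic illustration of the bounded search trees paradigm it builds on, here is the textbook $O^*(2^k)$ branching algorithm for a different problem, Vertex Cover (a sketch of the technique, not of Zehavi's algorithm):

```python
# Bounded search tree illustration on Vertex Cover: every edge needs one of its
# endpoints in the cover, so branch on the two choices; the recursion depth is
# at most k, so the search tree has at most 2^k leaves.
def has_vertex_cover(edges, k):
    """True iff the graph given as an edge list has a vertex cover of size <= k."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    for w in (u, v):  # branch: put w into the cover and delete its edges
        rest = [(a, b) for (a, b) in edges if w not in (a, b)]
        if has_vertex_cover(rest, k - 1):
            return True
    return False

print(has_vertex_cover([(1, 2), (2, 3), (1, 3)], 1))  # False: a triangle needs 2
```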

at February 27, 2015 01:44 AM UTC

Authors: Pawel Gawrychowski, Shay Mozes, Oren Weimann
Download: PDF
Abstract: We present an optimal data structure for submatrix maximum queries in n x n Monge matrices. Our result is a two-way reduction showing that the problem is equivalent to the classical predecessor problem. This gives a data structure of O(n) space that answers submatrix maximum queries in O(log log n) time. It also gives a matching lower bound, showing that O(log log n) query-time is optimal for any data structure of size O(n polylog(n)).

Our result concludes a line of improvements that started in SODA'12 with O(log^2 n) query-time and continued in ICALP'14 with O(log n) query-time. Finally, we show that partial Monge matrices can be handled in the same bounds as full Monge matrices. In both previous results, partial Monge matrices incurred additional inverse-Ackermann factors.

at February 27, 2015 01:44 AM UTC

Authors: Nadia Alshahwan, Earl T. Barr, David Clark, George Danezis
Download: PDF
Abstract: This work focuses on a specific front of the malware detection arms-race, namely the detection of persistent, disk-resident malware. We exploit normalised compression distance (NCD), an information theoretic measure, applied directly to binaries. Given a zoo of labelled malware and benign-ware, we ask whether a suspect program is more similar to our malware or to our benign-ware. Our approach classifies malware with 97.1% accuracy and a false positive rate of 3%. We achieve our results with off-the-shelf compressors and a standard machine learning classifier and without any specialised knowledge. An end-user need only collect a zoo of malware and benign-ware and then can immediately apply our techniques.
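For concreteness, here is a minimal sketch of the NCD-based nearest-neighbour idea, using zlib as a stand-in off-the-shelf compressor (the paper's exact compressor and classifier choices may differ):

```python
# Sketch of NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)) with zlib.
import zlib

def csize(data: bytes) -> int:
    """Length of the compressed representation of data."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalised compression distance: values near 0 mean 'more similar'."""
    cx, cy = csize(x), csize(y)
    return (csize(x + y) - min(cx, cy)) / max(cx, cy)

def classify(suspect: bytes, malware_zoo, benign_zoo) -> str:
    """1-nearest-neighbour over the two zoos: smaller NCD means more similar."""
    d_mal = min(ncd(suspect, m) for m in malware_zoo)
    d_ben = min(ncd(suspect, b) for b in benign_zoo)
    return "malware" if d_mal < d_ben else "benign"
```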

We apply statistical rigour to our experiments and our selection of data. We demonstrate that accuracy can be optimised by combining NCD with the compressibility rates of the executables. We demonstrate that malware reported within a narrower time frame of a few days is more homogeneous than malware reported over a longer one of two years, but that our method still classifies the latter with 95.2% accuracy and a 5% false positive rate. Due to the use of compression, the time and computation cost of our method is non-trivial. We show that simple approximation techniques can improve the time complexity of our approach by up to 63%.

We compare our results to the results of applying the 59 anti-malware programs used on the VirusTotal web site to our malware. Our approach does better than any single one of them as well as the 59 used collectively.

at February 27, 2015 01:40 AM UTC

Authors: Luis Pedro Montejano, Ignasi Sau
Download: PDF
Abstract: The \emph{$k$-restricted edge-connectivity} of a graph $G$, denoted by $\lambda_k(G)$, is defined as the minimum size of an edge set whose removal leaves exactly two connected components each containing at least $k$ vertices. This graph invariant, which can be seen as a generalization of a minimum edge-cut, has been extensively studied from a combinatorial point of view. However, very little is known about the complexity of computing $\lambda_k(G)$. Very recently, in the parameterized complexity community the notion of \emph{good edge separation} of a graph has been defined, which happens to be essentially the same as the $k$-restricted edge-connectivity. Motivated by the relevance of this invariant from both combinatorial and algorithmic points of view, in this article we initiate a systematic study of its computational complexity, with special emphasis on its parameterized complexity for several choices of the parameters. We provide a number of NP-hardness and W[1]-hardness results, as well as FPT-algorithms.

at February 27, 2015 01:43 AM UTC

Authors: Cristopher Moore
Download: PDF
Abstract: A $k$-uniform, $d$-regular instance of Exact Cover is a family of $m$ sets $F_{n,d,k} = \{ S_j \subseteq \{1,\ldots,n\} \}$, where each subset has size $k$ and each $1 \le i \le n$ is contained in $d$ of the $S_j$. It is satisfiable if there is a subset $T \subseteq \{1,\ldots,n\}$ such that $|T \cap S_j|=1$ for all $j$. Alternately, we can consider it a $d$-regular instance of Positive 1-in-$k$ SAT, i.e., a Boolean formula with $m$ clauses and $n$ variables where each clause contains $k$ variables and demands that exactly one of them is true. We determine the satisfiability threshold for random instances of this type with $k > 2$. Letting $d^\star = \frac{\ln k}{(k-1)(- \ln (1-1/k))} + 1$, we show that $F_{n,d,k}$ is satisfiable with high probability if $d < d^\star$ and unsatisfiable with high probability if $d > d^\star$. We do this with a simple application of the first and second moment methods, boosting the probability of satisfiability below $d^\star$ to $1-o(1)$ using the small subgraph conditioning method.
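For a concrete feel for where the threshold sits, here is a short evaluation of the stated formula for a few small $k$ (my illustration):

```python
# Evaluate the threshold d* = ln(k) / ((k-1) * (-ln(1 - 1/k))) + 1 numerically.
import math

def d_star(k: int) -> float:
    return math.log(k) / ((k - 1) * (-math.log(1 - 1 / k))) + 1

for k in (3, 4, 5, 10):
    print(k, round(d_star(k), 4))  # e.g. k = 3 gives d* ~ 2.3548
```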

at February 27, 2015 01:41 AM UTC

Authors: Hamida Seba, Sofiane Lagraa, Elsen Ronando
Download: PDF
Abstract: Graph comparison is fundamentally important for many applications such as the analysis of social networks and biological data and has been a significant research area in the pattern recognition and pattern analysis domains. Nowadays, the graphs are large, they may have billions of nodes and edges. Comparison issues in such huge graphs are a challenging research problem.

In this paper, we survey the research advances of comparison problems in large graphs. We review graph comparison and pattern matching approaches that focus on large graphs. We categorize the existing approaches into three classes: partition-based approaches, search space based approaches and summary based approaches. All the existing algorithms in these approaches are described in detail and analyzed according to multiple metrics such as time complexity, type of graphs or comparison concept. Finally, we identify directions for future research.

at February 27, 2015 01:43 AM UTC

Authors: Feng Pan, Jie Qi
Download: PDF
Abstract: In this paper, using two equivalent representations of the information contained in a SAT formula, we first explain, based on Kolmogorov complexity theory, why sequences generated by succinct SAT formulas can be greatly compressed. We then classify and discuss which sequences can be greatly compressed.

at February 27, 2015 01:41 AM UTC

Authors: Samir Datta, Raghav Kulkarni, Anish Mukherjee, Thomas Schwentick, Thomas Zeume
Download: PDF
Abstract: We consider the dynamic complexity of some central graph problems such as Reachability and Matching and linear algebraic problems such as Rank and Inverse. As elementary change operations we allow insertion and deletion of edges of a graph and the modification of a single entry in a matrix, and we are interested in the complexity of maintaining a property or query. Our main results are as follows:

1. Rank of a matrix is in DynFO(+,x);

2. Reachability is in DynFO;

3. Maximum Matching (decision) is in non-uniform DynFO.

Here, DynFO allows updates of the auxiliary data structure defined in first-order logic, DynFO(+,x) additionally has arithmetics at initialization time and non-uniform DynFO allows arbitrary auxiliary data at initialization time. Alternatively, DynFO(+,x) and non-uniform DynFO allow updates by uniform and non-uniform families of poly-size, bounded-depth circuits, respectively.

The second result confirms a two decade old conjecture of Patnaik and Immerman (1997). The proofs rely mainly on elementary Linear Algebra.

at February 27, 2015 12:00 AM UTC

Authors: Karthekeyan Chandrasekaran, Ameya Velingker
Download: PDF
Abstract: In a breakthrough work, Marcus-Spielman-Srivastava recently showed that every $d$-regular bipartite Ramanujan graph has a 2-lift that is also $d$-regular bipartite Ramanujan. As a consequence, a straightforward iterative brute-force search algorithm leads to the construction of a $d$-regular bipartite Ramanujan graph on $N$ vertices in time $2^{O(dN)}$. Shift $k$-lifts studied by Agarwal-Kolla-Madan lead to a natural approach for constructing Ramanujan graphs more efficiently. The number of possible shift $k$-lifts of a $d$-regular $n$-vertex graph is $k^{nd/2}$. Suppose the following holds for $k=2^{\Omega(n)}$:

There exists a shift $k$-lift that maintains the Ramanujan property of $d$-regular bipartite graphs on $n$ vertices for all $n$. (*)

Then, by performing a similar brute-force search algorithm, one would be able to construct an $N$-vertex bipartite Ramanujan graph in time $2^{O(d \log^2 N)}$. Furthermore, if (*) holds for all $k \geq 2$, then one would obtain an algorithm that runs in $\mathrm{poly}_d(N)$ time. In this work, we take a first step towards proving (*) by showing the existence of shift $k$-lifts that preserve the Ramanujan property in $d$-regular bipartite graphs for $k=3,4$.
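For intuition, a shift $k$-lift assigns a single cyclic shift in $\mathbb{Z}_k$ to each base edge (hence the $k^{nd/2}$ count for a $d$-regular $n$-vertex graph). Here is a minimal sketch of sampling a random shift $k$-lift (my illustration, not the authors' search procedure):

```python
# Random shift k-lift: vertex (u, i) connects to (v, (i + s) mod k), where s is
# the shift chosen for the base edge (u, v).
import random

def random_shift_k_lift(edges, k):
    """Lift an edge list to the vertex set {vertices} x {0..k-1}."""
    lifted = []
    for (u, v) in edges:
        s = random.randrange(k)  # one cyclic shift per base edge
        for i in range(k):
            lifted.append(((u, i), (v, (i + s) % k)))
    return lifted

# Example: a 4-cycle lifts to a shift 3-lift with 12 vertices and 12 edges.
print(len(random_shift_k_lift([(0, 1), (1, 2), (2, 3), (3, 0)], 3)))  # 12
```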

at February 27, 2015 01:40 AM UTC

Authors: Satoru Iwata, Shin-ichi Tanigawa, Yuichi Yoshida
Download: PDF
Abstract: This paper presents a polynomial-time $1/2$-approximation algorithm for maximizing nonnegative $k$-submodular functions. This improves upon the previous $\max\{1/3, 1/(1+a)\}$-approximation by Ward and \v{Z}ivn\'y~(SODA'14), where $a=\max\{1, \sqrt{(k-1)/4}\}$. We also show that for monotone $k$-submodular functions there is a polynomial-time $k/(2k-1)$-approximation algorithm while for any $\varepsilon>0$ a $((k+1)/2k+\varepsilon)$-approximation algorithm for maximizing monotone $k$-submodular functions would require exponentially many queries. In particular, our hardness result implies that our algorithms are asymptotically tight.

We also extend the approach to provide constant factor approximation algorithms for maximizing skew-bisubmodular functions, which were recently introduced as generalizations of bisubmodular functions.

at February 27, 2015 01:43 AM UTC

After my post last week on the Complexity accepts, a friend of Shuichi Hirahara sent Shuichi an email saying that I was interested in his paper. Shuichi contacted me, sent me his paper and we had a few good emails back and forth. He posted his paper Identifying an Honest EXP^NP Oracle Among Many on the arXiv yesterday.

Shuichi asks the following question: Given two oracles both claiming to compute a language L, figure out which oracle is correct. For which languages does there exist such a selector?

For deterministic polynomial-time selectors, every such L must sit in PSPACE and all PSPACE-complete languages have selectors. The question gets much more interesting if you allow probabilistic computation.

Shuichi shows that every language that has a probabilistically poly-time selector sits in S2EXP, the exponential analogue of S2P. His main theorem shows that EXP^NP-complete sets have this property. His proof is quite clever, using the EXP^NP-complete problem of finding the lexicographically least witness of a succinctly-described exponential-size 3-SAT question. He uses PCP techniques to have each oracle produce a witness and then he has a clever way of doing binary search to find the least bit where these witnesses differ. I haven't checked all the details carefully but the proof ideas look good.
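As a toy illustration of that last step only (Hirahara's actual protocol uses PCP machinery and is not reproduced here), here is how binary search locates the first bit where two claimed witnesses differ, assuming hypothetical prefix-fingerprint oracles fp_a and fp_b, where fp_x(m) summarizes the first m bits of x's witness:

```python
# Toy sketch: find the first differing bit of two claimed witnesses using
# O(log n) prefix-fingerprint comparisons.
def first_difference(fp_a, fp_b, length):
    if fp_a(length) == fp_b(length):
        return None  # the two witnesses agree everywhere
    lo, hi = 0, length  # invariant: prefixes of length lo agree, of length hi differ
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if fp_a(mid) == fp_b(mid):
            lo = mid
        else:
            hi = mid
    return hi - 1  # 0-indexed position of the first differing bit

w1, w2 = "110100", "110110"  # here the "fingerprint" is just the prefix itself
print(first_difference(lambda m: w1[:m], lambda m: w2[:m], 6))  # -> 4
```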

This still leaves an interesting gap between EXP^NP and S2EXP. Is there a selector for Promise-S2EXP-complete languages?

by Lance Fortnow (noreply@blogger.com) at February 26, 2015 10:22 PM UTC

A highly abundant number is a positive integer n that holds the record, among all numbers up to and including it, for the biggest sum of divisors σ(n). While cleaning up some citations on the Wikipedia article, I ran across an unsolved problem concerning these numbers, posed by Jaycob Coleman and listed on the OEIS entry for them: are all sufficiently large highly abundant numbers practical?

A practical number n has the property that all numbers up to n can be expressed as sums of distinct divisors of n. This can be tested by looking at the factorization of n: define the <p-smooth part of n to be the product of the prime factors of n that are less than p. Then n is practical if and only if, for each prime factor p of n, p is at most one more than the sum of divisors of the <p-smooth part of n. So, for instance, the highly abundant number 10 is not practical: the <5-smooth part of 10 is 2, and 5 is too big compared to σ(2) = 3. Also, 3 is not practical as its <3-smooth part is only one. Are these the only exceptions?
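This test is easy to run directly. Below is a minimal Python sketch of it (my own illustration, using naive trial division, which is fine at this scale):

```python
# Minimal sketch of the practicality test described above.
def prime_factorization(n):
    """Factor n by trial division; returns (prime, exponent) pairs, primes ascending."""
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def sigma(n):
    """Sum of divisors of n."""
    total = 1
    for p, e in prime_factorization(n):
        total *= (p ** (e + 1) - 1) // (p - 1)
    return total

def is_practical(n):
    """n is practical iff each prime factor p is at most 1 + sigma(<p-smooth part)."""
    smooth = 1  # running product of the prime powers below the current prime
    for p, e in prime_factorization(n):
        if p > sigma(smooth) + 1:
            return False
        smooth *= p ** e
    return True

print(is_practical(10), is_practical(3))  # False False, as in the examples above
```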

As with other questions involving record-holders for multiplicative functions, the highly abundant numbers can be thought of as solutions to special instances of the knapsack problem: if we define the size of a prime power p^i to be log p, and we define its profit to be the logarithm of the factor (p^(i+1) − 1)/(p^i − 1) by which including p^i as a divisor of n would cause σ to increase (relative to the next lower power of p), then the factorization of n is given by the set of prime powers whose sizes add to at most log n and whose profits add to the largest number possible. I don't know how to use this knapsack view of the problem directly (in part because knapsack is a hard problem) but it is helpful in thinking about showing that certain factors must be present or absent because they would lead to a better knapsack solution.

For instance, suppose that n is highly abundant, let p be the smallest prime that does not divide n, and let P be the largest prime factor of n. Then it must be true that P < p^2. For, if not, let q = floor(P/p). We could replace P in the factorization of n by pq, giving a smaller number than n with a bigger contribution to σ: at least (p + 1)q, versus at most P + 1, or smaller if P appears to a higher power than one.

Based on this fact, it's straightforward to show that all highly abundant numbers that are divisible by four are practical. More strongly, the same is true for other numbers n that are divisible by four and satisfy the same inequality for p and P. For, if the first missing prime p is 3, then the sum of divisors of the <p-smooth part is at least 7, big enough to cover any prime factor P that satisfies the inequality. And for each additional prime factor of n smaller than p, the bound on P grows by a factor of at most four (by Bertrand's postulate) and the sum of divisors of the smooth part grows by a factor of at least four, so this sum of divisors always remains large enough to satisfy the condition for being practical.

But in their early work on highly abundant numbers, Alaoglu and Erdős observed that 210 is the largest highly abundant number to include only one factor of two in its prime factorization. All larger highly abundant numbers are divisible by four, and by the argument above they are all practical. The remaining cases are small enough to test individually, and they are all practical. So Jaycob Coleman's conjecture is true.

Update: this claim about 210 is obviously wrong. 630 is highly abundant and is also not divisible by four. So here's a better argument along the same lines. The case p = 2 is easy to handle: P can only be 3, so n is a power of three. If it is not 3 itself, we could replace a factor of 9 in it by a factor of 8, getting a smaller number with a bigger contribution to σ (15 vs 13). So the only odd highly abundant number is 3. Similarly, if the first missing prime is 3, then n must be {2,5,7}-smooth. If it is divisible by 25, we can replace this factor by 24 (with a contribution of at least 32 to σ instead of 31) and if it is divisible by 7, we can replace this factor by 6 (with a contribution greater than 8 to σ instead of 8). So the only possible highly abundant numbers that are even but not divisible by 3 are powers of two and their multiples by five, and the only one of those that can be impractical is 10.

Next, suppose that the first missing prime is 5, and there is only one factor of two. The <5-smooth part is at least 6 and its sum of divisors is 12, big enough to cover all primes less than 11, and if any of these primes is a factor of n then including it in the smooth part boosts the sum of divisors to large enough to cover all remaining factors. Similarly, if there is more than one factor of three, then the sum of divisors of the smooth part is at least 39, covering all possible prime factors. So the only possible impractical numbers in this case are not divisible by 5, 7, or 11 but are divisible by exactly one factor of 2 or 3 and may be divisible by 13, 17, 19, or 23. A factor of 13 can be replaced by a factor of 10 (contributing 14 to σ in either case, so giving a smaller number with the same sum of divisors). A factor of 17 can be replaced by a factor of 15 (contributing 19.5 to σ instead of 18). A factor of 19 can be replaced by a factor of 18 (contributing 23 1/2 instead of 20) and a factor of 23 can be replaced by a factor of 20 (contributing 30 instead of 24). So none of these cases give rise to new exceptions.

Finally, if the first missing prime is 7, then the <7-smooth part is at least 30 and its sum of divisors is at least 72, big enough to cover all primes less than 49, and from here we can use the same Bertrand postulate argument.
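And as a sanity check on the whole case analysis, one can simply enumerate the highly abundant numbers up to some bound and test each one, reusing sigma and is_practical from the sketch above:

```python
# Brute-force check (my illustration): enumerate highly abundant numbers, i.e.
# record-holders for sigma(n), and report the impractical ones.
def highly_abundant_up_to(limit):
    record, out = 0, []
    for n in range(1, limit + 1):
        s = sigma(n)
        if s > record:
            record = s
            out.append(n)
    return out

print([n for n in highly_abundant_up_to(20000) if not is_practical(n)])
# expected, per the argument above: [3, 10]
```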

at February 26, 2015 06:49 PM UTC

This question is about problems for which there is a big open complexity gap between known lower bound and upper bound, but not because of open problems on complexity classes themselves.

To be more precise, let's say a problem has gap classes $A,B$ (with $A\subseteq B$, not uniquely defined) if $A$ is a maximal class for which we can prove it is $A$-hard, and $B$ is a minimal known upper bound, i.e. we have an algorithm in $B$ solving the problem. This means that if we end up finding out that the problem is $C$-complete with $A\subseteq C\subseteq B$, it will not impact complexity theory in general, as opposed to finding a $P$ algorithm for an $NP$-complete problem.

I am not interested in problems with $A\subseteq P$ and $B=NP$, because it is already the object of this question.

I am looking for examples of problems with gap classes that are as far apart as possible. To limit the scope and make the question precise, I am especially interested in problems with $A\subseteq P$ and $B\supseteq EXPTIME$, meaning both membership in $P$ and $EXPTIME$-completeness are consistent with current knowledge, without making known classes collapse (say classes from this list).

by Denis at February 26, 2015 12:11 PM UTC

Authors: Mehryar Mohri, Michael Riley, Ananda Theertha Suresh
Download: PDF
Abstract: We present a theoretical framework for the compression of automata, which are widely used in speech processing and other natural language processing tasks. The framework extends to graph compression. Similar to stationary ergodic processes, we formulate a probabilistic process of graph and automata generation that captures real world phenomena and provide a universal compression scheme LZA for this probabilistic model. Further, we show that LZA significantly outperforms other compression techniques such as gzip and the UNIX compress command for several synthetic and real data sets.

at February 26, 2015 01:47 AM UTC

Authors: Shuichi Hirahara
Download: PDF
Abstract: We provide a general framework to remove short advice by formulating the following computational task for a function f: given two oracles at least one of which is honest (i.e. correctly computes f on all inputs) as well as an input, the task is to compute f on the input with the help of the oracles by a probabilistic polynomial-time machine, which we shall call a selector. We characterize the languages for which short advice can be removed by the notion of selector: a paddable language has a selector if and only if short advice of a probabilistic machine that accepts the language can be removed in any relativized world.

Previously, instance checkers have served as a useful tool to remove short advice of probabilistic computation. We indicate that existence of instance checkers is a property stronger than that of removing short advice: there exists a selector for EXP^NP-complete languages, whereas no instance checker for these languages exists unless EXP^NP = NEXP.

at February 26, 2015 01:41 AM UTC

Authors: Giorgio Ausiello, Paolo G. Franciosa, Giuseppe F. Italiano, Andrea Ribichini
Download: PDF
Abstract: We present a new algorithm for maintaining a DFS tree of an arbitrary directed graph under any sequence of edge insertions. Our algorithm requires a total of $O(m\cdot n)$ time in the worst case to process a sequence of edge insertions, where $n$ is the number of vertices in the graph and $m$ is the total number of edges in the final graph. We also prove lower bounds for variations of this problem.

at February 26, 2015 01:48 AM UTC

Authors: I. V. Oseledets, G. V. Ovchinnikov, A. M. Katrutsa
Download: PDF
Abstract: This paper presents a deterministic linear-time-complexity IDE-SimRank method to approximately compute SimRank with a proved error bound. SimRank is a well-known similarity measure between graph vertices which relies on graph topology only and is built on the intuition that "two objects are similar if they are related to similar objects". The fixed-point equation for direct SimRank computation is the discrete Lyapunov equation with a specific diagonal matrix on the right-hand side. The proposed method is based on estimating this diagonal matrix with GMRES and using this estimate to answer single-source and single-pair queries. These computations are executed with partial sums of the series converging to the solution of the discrete Lyapunov equation.
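The GMRES-based IDE-SimRank method itself is not reproduced here; for orientation, the following is a minimal numpy sketch of the standard SimRank fixed-point iteration that such methods are designed to speed up:

```python
# Plain SimRank iteration: S <- C * W^T S W with the diagonal pinned to 1,
# where W is the adjacency matrix column-normalised by in-degree.
import numpy as np

def simrank(A, C=0.8, iters=20):
    indeg = A.sum(axis=0)
    W = A / np.maximum(indeg, 1)   # columns with in-degree 0 stay all zero
    S = np.eye(A.shape[0])
    for _ in range(iters):
        S = C * (W.T @ S @ W)
        np.fill_diagonal(S, 1.0)   # s(a, a) = 1 by definition
    return S

A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)  # directed 3-cycle
print(simrank(A).round(3))
```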

at February 26, 2015 01:47 AM UTC

Authors: Andreas Haas, Thomas A. Henzinger, Andreas Holzer, Christoph M. Kirsch, Michael Lippautz, Hannes Payer, Ali Sezgin, Ana Sokolova, Helmut Veith
Download: PDF
Abstract: Linearizability is the most accepted consistency condition used to describe the semantics of concurrent data structures. One of its benefits is its intuitive definition that is well understandable to system designers and programmers. However, linearizability is very strong and often linearizable implementations show unsatisfactory performance and scalability. To open up possibilities for more efficient implementations of concurrent data structures, a plethora of consistency conditions has been considered in the literature. In this paper, we utilise the idea of a decomposition principle and define local linearizability, a new consistency condition that is closely connected to linearizability. Local linearizability is weaker than linearizability for pools, queues, and stacks. Local linearizability is intuitive, verifiable, and opens up possibilities for new efficient implementations. In particular, we provide new well-performing and scalable implementations of locally linearizable concurrent queues and stacks.

at February 26, 2015 01:41 AM UTC

Authors: Asghar Asgharian Sardroud, Alireza Bagheri
Download: PDF
Abstract: Although the Hamiltonicity of solid grid graphs is polynomial-time decidable, the complexity of the longest cycle problem in these graphs is still open. In this paper, by presenting a linear-time constant-factor approximation algorithm, we show that the longest cycle problem in solid grid graphs is in APX. More precisely, our algorithm finds a cycle of length at least $\frac{2n}{3}+1$ in 2-connected $n$-node solid grid graphs.

Keywords: Longest cycle, Hamiltonian cycle, Approximation algorithm, Solid grid graph.

at February 26, 2015 01:41 AM UTC

Authors: Andrew R. Francis, Mike Steel
Download: PDF
Abstract: A binary phylogenetic network may or may not be obtainable from a tree by the addition of directed edges (arcs) between tree arcs. Here, we establish a precise and easily tested criterion (based on `antichains') that efficiently determines whether or not any given network can be realized in this way. Moreover, the proof provides a polynomial-time algorithm for finding one or more trees (when they exist) on which the network can be based. A number of interesting consequences are presented as corollaries; these lead to some further relevant questions and observations, which we outline in the conclusion.

at February 26, 2015 01:41 AM UTC

Authors: David Felber, Rafail Ostrovsky
Download: PDF
Abstract: We consider the problem of tracking with small relative error an integer function $f(n)$ defined by a distributed update stream $f'(n)$. Existing streaming algorithms with worst-case guarantees for this problem assume $f(n)$ to be monotone; there are very large lower bounds on the space requirements for summarizing a distributed non-monotonic stream, often linear in the size $n$ of the stream.

Input streams that give rise to large space requirements are highly variable, making relatively large jumps from one timestep to the next. However, streams often vary slowly in practice. What has heretofore been lacking is a framework for non-monotonic streams that admits algorithms whose worst-case performance is as good as existing algorithms for monotone streams and degrades gracefully for non-monotonic streams as those streams vary more quickly.

In this paper we propose such a framework. We introduce a new stream parameter, the "variability" $v$, deriving its definition in a way that shows it to be a natural parameter to consider for non-monotonic streams. It is also a useful parameter. From a theoretical perspective, we can adapt existing algorithms for monotone streams to work for non-monotonic streams, with only minor modifications, in such a way that they reduce to the monotone case when the stream happens to be monotone, and in such a way that we can refine the worst-case communication bounds from $\Theta(n)$ to $\tilde{O}(v)$. From a practical perspective, we demonstrate that $v$ can be small in practice by proving that $v$ is $O(\log f(n))$ for monotone streams and $o(n)$ for streams that are "nearly" monotone or that are generated by random walks. We expect $v$ to be $o(n)$ for many other interesting input classes as well.

at February 26, 2015 01:45 AM UTC

Authors: Christopher T. Chubb, Steven T. Flammia
Download: PDF
Abstract: Given a gapped Hamiltonian of a spin chain, we give a polynomial-time algorithm for finding the degenerate ground space projector. The output is an orthonormal set of matrix product states that approximate the true ground space projector up to an inverse polynomial error in any Schatten norm, with a runtime exponential in the degeneracy. Our algorithm is an extension of the recent algorithm of Landau, Vazirani, and Vidick for the nondegenerate case, and it includes the recent improvements due to Huang. The main new idea is to incorporate the local distinguishability of ground states on the half-chain to ensure that the algorithm returns a complete set of global ground states.

at February 26, 2015 01:41 AM UTC

Longtime friend and colleague Boaz Barak sent me a fascinating New York Times Magazine article that profiles people who lost their jobs or otherwise had their lives ruined, because of a single remark that then got amplified a trillionfold in importance by social media.  (The author, Jon Ronson, also has a forthcoming book on the topic.)  The article opens with Justine Sacco: a woman who, about to board a flight to Cape Town, tweeted “Going to Africa.  Hope I don’t get AIDS.  Just kidding.  I’m white!”

To the few friends who read Sacco’s Twitter feed, it would’ve been obvious that she was trying to mock the belief of many well-off white people that they live in a bubble, insulated from the problems of the Third World; she wasn’t actually mocking black Africans who suffer from AIDS.  In a just world, maybe Sacco deserved someone to take her aside and quietly explain that her tweet might be read the wrong way, that she should be more careful next time.  Instead, by the time she landed in Cape Town, she learned that she’d become the #1 worldwide Twitter trend and a global symbol of racism.  She lost her career, she lost her entire previous life, and tens of thousands of people expressed glee about it.  The article rather heartbreakingly describes Sacco’s attempts to start over.

There are many more stories like the above.  Some I’d already heard about: the father of three who lost his job after he whispered a silly joke involving “dongles” to the person next to him at a conference, whereupon Adria Richards, a woman in front of him, snapped his photo and posted it to social media, to make an example of him as a sexist pig.  (Afterwards, a counter-reaction formed, which successfully got Richards fired from her job: justice??)  Other stories I hadn’t heard.

Reading this article made it clear to me just how easily I got off, in my own recent brush with the online shaming-mobs.  Yes, I made the ‘mistake’ of writing too openly about my experiences as a nerdy male teenager, and the impact that one specific aspect of feminist thought (not all of feminism!) had had on me.  Within the context of the conversation that a few nerdy men and women were having on this blog, my opening up led to exactly the results I was hoping for: readers thoughtfully sharing their own experiences, a meaningful exchange of ideas, even (dare I say it?) glimmers of understanding and empathy.

Alas, once the comment was wrested from its original setting into the clickbait bazaar, the story became “MIT professor explains: the real oppression is having to learn to talk to women” (the title of Amanda Marcotte’s hit-piece, something even some in Marcotte’s ideological camp called sickeningly cruel).  My photo was on the front page of Salon, next to the headline “The plight of the bitter nerd.”  I was subjected to hostile psychoanalysis not once but twice on ‘Dr. Nerdlove,’ a nerd-bashing site whose very name drips with irony, rather like the ‘Democratic People’s Republic of Korea.’  There were tweets and blog comments that urged MIT to fire me, that compared me to a mass-murderer, and that “deduced” (from first principles!) all the ways in which my parents screwed up in raising me, and that my female students cower in fear of me.  And yes, when you Google me, this affair now more-or-less overshadows everything else I’ve done in my life.

But then … there were also hundreds of men and women who rose to my defense, and they were heavily concentrated among the people I most admire and respect.  My supporters ranged from the actual female students who took my classes or worked with me or who I encouraged in their careers, from whom there was only kindness, not a single negative word; to the shy nerds who thanked me for being one of the only people to acknowledge their reality; to the lesbians and bisexual women who told me my experience also resonated with them; to the female friends and colleagues who sent me notes urging me to ignore the nonsense.  In the end, not only have I not lost any friends over this, I’ve gained new ones, and I’ve learned new sides of the friends I had.

Oh, and I didn’t get any death threats: I guess that’s good!  (Once in my life I did get death threats—graphic, explicit threats, about which I had to contact the police—but it was because I refused to publicize someone’s P=NP proof.)

Since I was away from campus when this blew up, I did feel some fear about the professional backlash that would await me on my return.  Would my office be vandalized?  Would activist groups be protesting my classes?  Would MIT police be there to escort me from campus?

Well, you want to know what happened instead?  Students and colleagues have stopped me in the hall, or come by my office, just to say they support me.  My class has record enrollment this term.  I was invited to participate in MIT’s Diversity Summit, since the organizers felt it would mean a lot to the students to see someone there who had opened up about diversity issues in STEM in such a powerful way.  (I regretfully had to decline, since the summit conflicted with a trip to Stanford.)  And an MIT graduate women’s reading group invited me for a dinner discussion (at my suggestion, Laurie Penny participated as well).  Imagine that: not only are MIT’s women’s groups not picketing me, they’re inviting me over for dinner!  Is there any better answer to the claim, urged on me by some of my overzealous supporters, that the bile of Amanda Marcotte represents all of feminism these days?

Speaking of which, I met Laurie Penny for coffee last month, and she and I quickly hit it off.  We’ve even agreed to write a joint blog post about our advice for shy nerds.  (In my What I Believe post, I had promised a post of advice for shy female nerds—but at Laurie’s urging, we’re broadening the focus to shy nerds of both sexes.)  Even though Laurie’s essay is the thing that brought me to the attention of the Twitter-mobs (which wasn’t Laurie’s intent!), and even though I disagreed with several points in her essay, I knew on reading it that Laurie was someone I’d enjoy talking to.  Unlike so much writing by online social justice activists, which tends to be encrusted with the specialized technical terms of that field—you know, terms like “asshat,” “shitlord,” “douchecanoe,” and “precious feefees of entitled white dudes”—Laurie’s prose shone with humanity and vulnerability: her own, which she freely shared, and mine, which she generously acknowledged.

Overall, the response to my comment has never made me happier or more grateful to be part of the STEM community (I never liked the bureaucratic acronym “STEM,” but fine, I’ll own it).  To many outsiders, we STEM nerds are a sorry lot: we’re “sperglords” (yes, slurs are fine, as long as they’re directed against the right targets!) who might be competent in certain narrow domains, but who lack empathy and emotional depth, and are basically narcissistic children.  Yet somehow when the chips were down, it’s my fellow STEM nerds, and people who hang out with STEM nerds a lot, who showed me far more empathy and compassion than many of the “normals” did.  So if STEM nerds are psychologically broken, then I say: may I surround myself, for the rest of my life, with men and women who are psychologically broken like I am.  May I raise Lily, and any future children I have, to be as psychologically broken as they can be.  And may I stay as far as possible from anyone who’s too well-adjusted.

I reserve my ultimate gratitude for the many women in STEM, friends and strangers alike, who sent me messages of support these past two months.  I’m not ashamed to say it: witnessing how so many STEM women stood up for me has made me want to stand up for them, even more than I did before.  If they’re not called on often enough in class, I’ll call on them more.  If they’re subtly discouraged from careers in science, I’ll blatantly encourage them back.  If they’re sexually harassed, I’ll confront their harassers myself (well, if asked to).  I will listen to them, and I will try to improve.

Is it selfish that I want to help female STEM nerds partly because they helped me?  Here’s the thing: one of my deepest moral beliefs is in the obligation to fight for those among the disadvantaged who don’t despise you, and who wouldn’t gladly rid the planet of everyone like you if they could.  (As I’ve written before, on issue after issue, this belief makes me a left-winger by American standards, and a right-winger by academic ones.)  In the present context, I’d say I have a massive moral obligation toward female STEM nerds and toward Laurie Penny’s version of feminism, and none at all toward Marcotte’s version.

All this is just to say that I’m unbelievably lucky—privileged (!)—to have had so many at MIT and elsewhere willing to stand up for me, and to have reached in a stage in life where I’m strong enough to say what I think and to weather anything the Internet says back.  What worries me is that others, more vulnerable, didn’t and won’t have it as easy when the Twitter hate-machine turns its barrel on them.  So in the rest of this post, I’d like to discuss the problem of what to do about social-media shaming campaigns that aim to, and do, destroy the lives of individuals.  I’m convinced that this is a phenomenon that’s only going to get more and more common: something sprung on us faster than our social norms have evolved to deal with it.  And it would be nice if we could solve it without having to wait for a few high-profile suicides.

But first, let me address a few obvious questions about why this problem is even a problem at all.

Isn’t social shaming as old as society itself—and permanent records of the shaming as old as print media?

Yes, but there’s also something fundamentally new about the problem of the Twitter-mobs.  Before, it would take someone—say, a newspaper editor—to make a conscious decision to the effect, “this comment is worth destroying someone’s life over.”  Today, there might be such an individual, but it’s also possible for lives to be destroyed in a decentralized, distributed fashion, with thousands of Twitterers collaborating to push a non-story past the point of no return.  And among the people who “break” the story, not one has to intend to ruin the victim’s life, or accept responsibility for it afterward: after all, each one made the story only ε bigger than it already was.  (Incidentally, this is one reason why I haven’t gotten a Twitter account: while it has many worthwhile uses, it’s also a medium that might as well have been designed for mobs, for ganging up, for status-seeking among allies stripped of rational arguments.  It’s like the world’s biggest high school.)

Don’t some targets of online shaming campaigns, y’know, deserve it?

Of course!  Some are genuine racists or misogynists or homophobes, who once would’ve been able to inflict hatred their entire lives without consequence, and were only brought down thanks to social media.  The trouble is, the participants in online shaming campaigns will always think they’re meting out righteous justice, whether they are or aren’t.  But there’s an excellent reason why we’ve learned in modern societies not to avenge even the worst crimes via lynch mobs.  There’s a reason why we have trials and lawyers and the opportunity for the accused to show their innocence.

Some might say that no safeguards are possible or necessary here, since we’re not talking about state violence, just individuals exercising their free speech right to vilify someone, demand their firing, that sort of thing.  Yet in today’s world, trial-by-Internet can be more consequential than the old kind of trial: would you rather spend a year in jail, but then be free to move to another town where no one knew about it, or have your Google search results tarnished with lurid accusations (let’s say, that you molested children) for the rest of your life—to have that forever prevent you from getting a job or a relationship, and have no way to correct the record?  With trial by Twitter, there’s no presumption of innocence, no requirement to prove that any other party was harmed, just the law of the schoolyard.

Whether shaming is justified in a particular case is a complicated question, but for whatever it’s worth, here are a few of the questions I would ask:

  • Did the person express a wish for anyone (or any group of people) to come to harm, or for anyone’s rights to be infringed?
  • Did the person express glee or mockery about anyone else’s suffering?
  • Did the person perpetrate a grievous factual falsehood—like, something one could prove was a falsehood in a court of law?
  • Did the person violate anyone else’s confidence?
  • How much does the speaker’s identity matter?  If it had been a man rather than a woman (or vice versa) saying parallel things, would we have taken equal offense?
  • Does the comment have what obscenity law calls “redeeming social value”?  E.g., does it express an unusual viewpoint, or lead to an interesting discussion?

Of course, even in those cases where shaming campaigns are justified, they’ll sometimes be unproductive and ill-advised.

Aren’t society’s most powerful fair targets for public criticism, even mocking or vicious criticism?

Of course.  Few would claim, for example, that we have an ethical obligation to ease up on Todd Akin over his “legitimate rape” remarks, since all the rage might give Akin an anxiety attack.  Completely apart from the (de)merits of the remarks, we accept that, when you become (let’s say) an elected official, a CEO, or a university president, part of the bargain is that you no longer get to complain if people organize to express their hatred of you.

But what’s striking about the cases in the NYT article is that it’s not public figures being gleefully destroyed: just ordinary people who in most cases, made one ill-advised joke or tweet, no worse than countless things you or I have probably said in private among friends.  The social justice warriors try to justify what would otherwise look like bullying by shifting attention away from individuals: sure, Justine Sacco might be a decent person, but she stands for the entire category of upper-middle-class, entitled white women, a powerful structural force against whom the underclass is engaged in a righteous struggle.  Like in a war, the enemy must be fought by any means necessary, even if it means picking off one hapless enemy foot-soldier to make an example to the rest.  And anyway, why do you care more about this one professional white woman, than about the millions of victims of racism?  Is it because you’re a racist yourself?

I find this line of thinking repugnant.  For it perverts worthy struggles for social equality into something callous and inhuman, and thereby undermines the struggles themselves.  It seems to me to have roughly the same relation to real human rights activism as the Inquisition did to the ethical teachings of Jesus.  It’s also repugnant because of its massive chilling effect: watching a few shaming campaigns is enough to make even the most well-intentioned writer want to hide behind a pseudonym, or only offer those ideas and experiences that are sure to win approval.  And the chilling effect is not some accidental byproduct; it’s the goal.  This negates what, for me, is a large part of the promise of the Internet: that if people from all walks of life can just communicate openly, everything made common knowledge, nothing whispered or secondhand, then all the well-intentioned people will eventually come to understand each other.


If I’m right that online shaming of decent people is a real problem that’s only going to get worse, what’s the solution?  Let’s examine five possibilities.

(1) Libel law.  For generations, libel has been recognized as one of the rare types of speech that even a liberal, democratic society can legitimately censor (along with fraud, incitement to imminent violence, national secrets, child porn, and a few others).  That libel is illegal reflects a realistic understanding of the importance of reputation: if, for example, CNN falsely reports that you raped your children, then it doesn’t really matter if MSNBC later corrects the record; your life as you knew it is done.

The trouble is, it’s not clear how to apply libel law in the age of social media.  In the cases we’re talking about, an innocent person’s life gets ruined because of the collective effect of thousands of people piling on to make nasty comments, and it’s neither possible nor desirable to prosecute all of them.  Furthermore, in many cases the problem is not that the shamers said anything untrue: rather, it’s that they “merely” took something true and spitefully misunderstood it, or blew it wildly, viciously, astronomically out of proportion.  I don’t see any legal remedies here.

(2) “Shame the shamers.”  Some people will say the only answer is to hit the shamers with their own weapons.  If an overzealous activist gets an innocent jokester fired from his job, shame the activist until she’s fired from her job.  If vigilantes post the jokester’s home address on the Internet with crosshairs overlaid, find the vigilantes’ home addresses and post those.  It probably won’t surprise many people that I’m not a fan of this solution.  For it only exacerbates the real problem: that of mob justice overwhelming reasoned debate.  The most I can say in favor of vigilantism is this: you probably don’t get to complain about online shaming, if what you’re being shamed for is itself a shaming campaign that you prosecuted against a specific person.

(In a decade writing this blog, I can think of exactly one case where I engaged in what might be called a shaming campaign: namely, against the Bell’s inequality denier Joy Christian.  Christian had provoked me over six years, not merely by being forehead-bangingly wrong about Bell’s theorem, but by insulting me and others when we tried to reason with him, and by demanding prize money from me because he had ‘proved’ that quantum computing was a fraud.  Despite that, I still regret the shaming aspects of my Joy Christian posts, and will strive not to repeat them.)

(3) Technological solutions.  We could try to change the functioning of the Internet, to make it harder to use it to ruin people’s lives.  This, more-or-less, is what the European Court of Justice was going for, with its much-discussed recent ruling upholding a “right to be forgotten” (more precisely, a right for individuals to petition for embarrassing information about them to be de-listed from search engines).  Alas, I fear that the Streisand effect, the Internet’s eternal memory, and the existence of different countries with different legal systems will forever make a mockery of all such technological solutions.  But, OK, given that Google is constantly tweaking its ranking algorithms anyway, maybe it could give less weight to cruel attacks against non-public-figures?  Or more weight (or even special placement) to sites explaining how the individual was cleared of the accusations?  There might be scope for such things, but I have the strong feeling that they should be done, if at all, on a voluntary basis.

(4) Self-censorship.  We could simply train people not to express any views online that might jeopardize their lives or careers, or at any rate, not to express those views under their real names.  Many people I’ve talked to seem to favor this solution, but I can’t get behind it.  For it effectively cedes to the most militant activists the right to decide what is or isn’t acceptable online discourse.  It tells them that they can use social shame as a weapon to get what they want.  When women are ridiculed for sharing stories of anorexia or being sexually assaulted or being discouraged from careers in science, it’s reprehensible to say that the solution is to teach those women to shut up about it.  I not only agree with that but go further: privacy is sometimes important, but is also an overrated value.  The respect that one rational person affords another for openly sharing the truth (or his or her understanding of the truth), in a spirit of sympathy and goodwill, is a higher value than privacy.  And the Internet’s ability to foster that respect (sometimes!) is worth defending.

(5) Standing up.  And so we come to the only solution that I can wholeheartedly stand behind.  This is for people who abhor shaming campaigns to speak out, loudly, for those who are unfairly shamed.

At the nadir of my own Twitter episode, when it felt like my life was finished and it was time to throw in the towel, the psychiatrist Scott Alexander wrote a 10,000-word essay in my defense, which also ranged controversially into numerous other issues.  In a comment on his girlfriend Ozy’s blog, Alexander now says that he regrets aspects of Untitled (then again, it was already tagged “Things I Will Regret Writing” when he posted it!).  In particular, he now feels that the piece was too broad in its critique of feminism.  However, he then explains as follows what motivated him to write it:

Scott Aaronson is one of the nicest and most decent people in the world, who does nothing but try to expand human knowledge and support and mentor other people working on the same in a bunch of incredible ways. After a lot of prompting he exposed his deepest personal insecurities, something I as a psychiatrist have to really respect. Amanda Marcotte tried to use that to make mincemeat of him, casually, as if destroying him was barely worth her time. She did it on a site where she gets more pageviews than he ever will, among people who don’t know him, and probably stained his reputation among nonphysicists permanently. I know I have weird moral intuitions, but this is about as close to pure evil punching pure good in the face just because it can as I’ve ever seen in my life. It made me physically ill, and I mentioned in the comments of the post that I lost a couple pounds pacing back and forth and shaking and not sleeping after I read it. That was the place I was writing from. And it was part of what seemed to me to be an obvious trend, and although “feminists vs. nerds” is a really crude way of framing it, I couldn’t think of a better one in that mental state and I couldn’t let it pass.

I had three reactions on reading this.  First, if there is a Scott in this discussion who’s “pure good,” then it’s not I.  Second, maybe the ultimate solution to the problem of online shaming mobs is to make a thousand copies of Alexander, and give each one a laptop with an Internet connection.  But third, as long as we have only one of him, the rest of us have a lot of work cut out for us.  I know, without having to ask, that the only real way I can thank Alexander for coming to my defense, is to use this blog to defend other people (anywhere on the ideological spectrum) who are attacked online for sharing in a spirit of honesty and goodwill.  So if you encounter such a person, let me know—I’d much prefer that to letting me know about the latest attempt to solve NP-complete problems in polynomial time with some analog contraption.


Unrelated Update: Since I started this post with Boaz Barak, let me also point to his recent blog post on why theoretical computer scientists care so much about asymptotics, despite understanding full well that the constants can overwhelm them in practice.  Boaz articulates something that I’ve tried to say many times, but he’s crisper and more eloquent.


Update (Feb. 27): Since a couple people asked, I explain here what I see as the basic problems with the “Dr. Nerdlove” site.


Update (Feb. 28): In the middle of this affair, perhaps the one thing that depressed me the most was Salon‘s “Plight of the bitter nerd” headline. Random idiots on the Internet were one thing, but how could a “serious,” “respectable” magazine lend its legitimacy to such casual meanness? I’ve now figured out the answer: I used to read Salon sometimes in the late 90s and early 2000s, but not since then, and I simply hadn’t appreciated how far the magazine had descended into clickbait trash. There’s an amusing fake Salon Twitter account that skewers the magazine with made-up headlines (“Ten signs your cat might be racist” / “Nerd supremacism: should we have affirmative action to get cool people into engineering?”), mixed with actual Salon headlines, in such a way that it would be difficult to tell many of them apart were they not marked. (Indeed, someone should write a web app where you get quizzed to see how well you can distinguish them.) “The plight of the bitter nerd” is offered there as one of the real headlines that’s indistinguishable from the parodies.

by Scott at February 25, 2015 01:15 PM UTC

Constant parallel-time cryptography makes it possible to perform complex cryptographic tasks at an ultimate level of parallelism, namely, by local functions in which each output bit depends on a constant number of input bits. A natural way to obtain local cryptographic constructions is to use \emph{random local functions} in which each output bit is computed by applying some fixed $d$-ary predicate $P$ to a randomly chosen $d$-size subset of the input bits. In this work, we will study the cryptographic hardness of random local functions. In particular, we will survey known attacks and hardness results, discuss different flavors of hardness (one-wayness, pseudorandomness, collision resistance, public-key encryption), and mention applications to other problems in cryptography and computational complexity. We also present some open questions with the hope of developing a systematic study of the cryptographic hardness of local functions.

at February 25, 2015 08:46 AM UTC

Paul Erdös would be 102 years old this year, and in celebration of this the Notices of the AMS have published a two-part series of essays on his life and his work: [part 1] and [part 2].

Of particular interest to me is the story of the problem of finding large gaps between primes; recently Maynard, Ford, Green, Konyagin, and Tao solved an Erdös $10,000 question in this direction. It is probably the Erdös open question with the highest associated reward ever solved (I don’t know where to look up this information — for comparison, Szemeredi’s theorem was a $1,000 question), and it is certainly the question whose statement involves the most occurrences of “\log”.


by luca at February 25, 2015 02:16 AM UTC

Authors: Song Yang, Stojan Trajanovski, Fernando A. Kuipers
Download: PDF
Abstract: Solving the shortest path and the min-cut problems is key in achieving high performance and robust communication networks. Those problems have often been studied in deterministic and independent networks, both in their original formulations and in several constrained variants. However, in real-world networks, link weights (e.g., delay, bandwidth, failure probability) are often correlated due to spatial or temporal reasons, and these correlated link weights together behave in a different manner and are not always additive.

In this paper, we first propose two correlated link-weight models, namely (i) the deterministic correlated model and (ii) the (log-concave) stochastic correlated model. Subsequently, we study the shortest path problem and the min-cut problem under these two correlated models. We prove that these two problems are NP-hard under the deterministic correlated model, and cannot even be approximated to within an arbitrary factor in polynomial time. However, these two problems are polynomial-time solvable under the (constrained) nodal deterministic correlated model, and can be solved by convex optimization under the (log-concave) stochastic correlated model.

at February 25, 2015 01:42 AM UTC

Authors: Ching-Lueh Chang
Download: PDF
Abstract: We give a deterministic $O(hn^{1+1/h})$-time $(2h)$-approximation nonadaptive algorithm for $1$-median selection in $n$-point metric spaces, where $h\in\mathbb{Z}^+\setminus\{1\}$ is arbitrary. Our proof generalizes that of Chang.

at February 25, 2015 01:42 AM UTC

Authors: Mike Behrisch, Miki Hermann, Stefan Mengel, Gernot Salzer
Download: PDF
Abstract: We investigate the complexity of three optimization problems in Boolean propositional logic related to information theory: Given a conjunctive formula over a set of relations, find a satisfying assignment with minimal Hamming distance to a given assignment that satisfies the formula ($\mathsf{NearestOtherSolution}$, $\mathsf{NOSol}$) or that does not need to satisfy it ($\mathsf{NearestSolution}$, $\mathsf{NSol}$). The third problem asks for two satisfying assignments with a minimal Hamming distance among all such assignments ($\mathsf{MinSolutionDistance}$, $\mathsf{MSD}$).

For all three problems we give complete classifications with respect to the relations admitted in the formula. We give polynomial-time algorithms for several classes of constraint languages. For all other cases we prove hardness or completeness regarding APX, NPO, or equivalence to well-known hard optimization problems.

at February 25, 2015 01:40 AM UTC

Authors: Gabor Ivanyos, Marek Karpinski, Miklos Santha, Nitin Saxena, Igor Shparlinski
Download: PDF
Abstract: We consider the problem of recovering (that is, interpolating) and identity testing of a "hidden" monic polynomial $f$, given oracle access to $f(x)^e$ for $x\in{\mathbb F_q}$ (access to extension fields is not permitted). The naive interpolation algorithm needs $O(e\, \mathrm{deg}\, f)$ queries and thus requires $e\, \mathrm{deg}\, f<q$. We design algorithms that are asymptotically better in certain cases, requiring only $e^{o(1)}$ queries to the oracle. In the randomized (and quantum) setting, we give a substantially better interpolation algorithm that requires only $O(\mathrm{deg}\, f \log q)$ queries. Such results have been known before only for the special case of a linear $f$, called the hidden shifted power problem.

We use techniques from algebra, such as effective versions of Hilbert's Nullstellensatz, and analytic number theory, such as results on the distribution of rational functions in subgroups and character sum estimates.

at February 25, 2015 01:42 AM UTC

Authors: Quentin Bramas, Sébastien Tixeuil
Download: PDF
Abstract: We consider the problem of scattering $n$ robots in a two-dimensional continuous space. As this problem is impossible to solve in a deterministic manner, all solutions must be probabilistic. We investigate the amount of randomness (that is, the number of random bits used by the robots) that is required to achieve scattering. We first prove that $n \log n$ random bits are necessary to scatter $n$ robots in any setting. Also, we give a sufficient condition for a scattering algorithm to be random bit optimal. As previous solutions for scattering turn out to satisfy our condition, they are hence proved random bit optimal for the scattering problem. Then, we investigate the time complexity of scattering when strong multiplicity detection is not available. We prove that such algorithms cannot converge in constant time in the general case, nor in $o(\log \log n)$ rounds for random bit optimal scattering algorithms. However, we present a family of scattering algorithms that converge as fast as needed without using multiplicity detection. Also, we put forward a specific protocol of this family that is random bit optimal ($n \log n$ random bits are used) and time optimal ($\log \log n$ rounds are used). This improves the time complexity of previous results in the same setting by a $\log n$ factor. Aside from characterizing the random bit complexity of mobile robot scattering, our study also closes its time complexity gap with and without strong multiplicity detection (that is, $O(1)$ time complexity is only achievable when strong multiplicity detection is available, and it is possible to approach it as needed otherwise).

at February 25, 2015 12:00 AM UTC

Define $io$-$SUBEXP$ to be the class of languages $L$ such that there is a language $L' \in \cap_{\varepsilon > 0} TIME(2^{n^{\varepsilon}})$ and for infinitely many $n$, $L$ and $L'$ agree on all instances of length $n$. (That is, this is the class of languages which can be "solved infinitely often, in subexponential time".)

Is there an oracle $A$ such that $NP^A \not\subset io$-$SUBEXP^A$? If we equip SAT with the oracle $A$ in the usual way, can we say that $SAT^A$ is not in this class?

(I'm asking separate questions here, because we have to be careful with infinitely-often time classes: just because you have a reduction from problem $B$ to problem $C$ and $C$ is solvable infinitely often, you may not actually get that $B$ is solvable infinitely often without further assumptions on the reduction: what if your reduction from $B$ "misses" the input lengths that you can solve $C$ on?)

by Ryan Williams at February 23, 2015 10:59 PM UTC


How ∅ versus {ε} can be a life-and-death difference

[Image: Jeff Skiles in the cockpit; cropped from source]

Jeff Skiles was the co-pilot on US Airways Flight 1549 from New York’s LaGuardia Airport headed for Charlotte on January 15, 2009. The Airbus A320 lost power in both engines after striking birds at an altitude of about 850 meters and famously ditched in the Hudson River with no loss of life. As Skiles’s website relates, he had manual charge of the takeoff, but upon losing his instrument panel when the engines failed,

“Captain Chesley Sullenberger took over flying the plane and tipped the nose down to retain airspeed.”

Skiles helped contact nearby airports for emergency landing permission but within 60 seconds Sullenberger and he determined that the Hudson was the only option. His front page does not say he did anything else.

Today we tell some stories about the technical content of forms of emptiness.

I am teaching Buffalo’s undergraduate theory of computation course again. I like to emphasize early on that an alphabet need not be only a set of letter or digit symbols, even though it will be {\{0,1\}} or {\{a,b\}} or similar in nearly all instances. The textbook by Mike Sipser helps by having examples where tokens like “REAR” or “<RESET>” denoting actions are treated as single symbols. An alphabet can be the set of atomic actions from an aircraft or video game console. Some controls such as joysticks may be analog, but their output can be transmitted digitally. What’s important is that any sequence of actions is represented by a string over an appropriately chosen alphabet.

I go on to say that strings over any alphabet can be re-coded over {\{0,1\}}. Or over ASCII or UTF-8 or Unicode, but those in turn are encoded in 8-bit or 16-bit binary anyway. I say all this justifies flexible thinking in that we can regard {\{0,1\}} as “the” alphabet for theory but can speak in terms of a generic char type for practice. Then in terms of the C++ standard library I write alphabet = set<char>, string = list<char>, language = set<string>, and class = set<language>. I go on to say how “{M = (Q,\Sigma,\delta,s,F)}” abbreviates object-oriented class notation in which set<State> Q; and alphabet Sigma; and State start; and set<State> finalStates; are fields and delta can be variously a map or a function pointer or a set of tuples regarded as instructions.
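
To make the analogy concrete, here is a minimal C++ sketch along these lines; the names (DFA, accepts) are illustrative choices rather than the course’s actual code, and std::string stands in for list<char>:

#include <map>
#include <set>
#include <string>
#include <utility>

using State = int;
using alphabet = std::set<char>;           // Sigma as a set of chars
using language = std::set<std::string>;    // a language is a set of strings

// M = (Q, Sigma, delta, s, F) as a class; delta is held as a map,
// that is, as a set of instructions ((q, a) -> q').
struct DFA {
    std::set<State> Q;
    alphabet Sigma;
    std::map<std::pair<State, char>, State> delta;
    State start;
    std::set<State> finalStates;

    // Run M on input x and report whether it halts in a final state.
    bool accepts(const std::string& x) const {
        State q = start;
        for (char a : x) {
            auto it = delta.find({q, a});
            if (it == delta.end()) return false;  // missing instruction: reject
            q = it->second;
        }
        return finalStates.count(q) > 0;
    }
};

For instance, with Q = {0, 1}, start 0, finalStates = {1}, and delta sending (q, '0') to 0 and (q, '1') to 1 for both states q, the machine accepts exactly the binary strings that end in 1.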

Emptiness

In the course I’m glad to go into examples of DFAs and NFAs and regular expressions right away, but reaching the last of these makes it high time to say more about formal language theory. I’ve earlier connected {\cup} and {\cap} to Boolean or and and, but concatenation of languages needs explaining as a kind of “and then.” One point needing reinforcement is that the concatenation of a language {A} with itself, written {A\cdot A} or {A^2}, equals {\{xy: x \in A \wedge y \in A\}}, not {\{x^2: x \in A\}}. The most confusion and error I see, however, arises from the empty language {\emptyset} versus the empty string {\epsilon} (or {\lambda} in other sources).
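
Both points are easy to demonstrate in code. Continuing the set<string> view above, here is a small sketch (concat is an illustrative helper name, not fixed notation):

#include <iostream>
#include <set>
#include <string>

using language = std::set<std::string>;

// Concatenation lifted from strings to languages:
// A . B = { xy : x in A and y in B }.
language concat(const language& A, const language& B) {
    language result;
    for (const std::string& x : A)
        for (const std::string& y : B)
            result.insert(x + y);
    return result;
}

int main() {
    language A = {"a", "ab"};
    language empty;          // the empty language
    language eps = {""};     // { epsilon }: one string, of length zero

    // A.A = {aa, aab, aba, abab}, four strings, not {aa, abab}.
    std::cout << concat(A, A).size() << "\n";      // 4
    std::cout << concat(empty, A).size() << "\n";  // 0: empty acts like 0
    std::cout << concat(eps, A).size() << "\n";    // 2: {epsilon} acts like 1
}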

I explain the analogy between multiplication and concatenation, although the latter is not commutative, and that the operation between languages naturally “lifts” the definition of {x \cdot y} for strings. I then say that {\emptyset} behaves like {0} and {\{\epsilon\}} behaves like {1} under this analogy, but I don’t know how well that catches on with the broad range of students. So, not always but a few times, when lecture prep and execution have left 6–10 minutes in the period, I wrap a story into an example:

Let {\text{\tt CESSNA}} denote the alphabet of controls on a typical twin-engine Cessna business jet. I will define two languages over this alphabet—you tell me what they are:

  1. {L_1 = \{x \in \text{\tt CESSNA}^*: x} represents an appropriate sequence of actions for the co-pilot to initiate if 1 engine fails at 100 meters altitude, and with {x} there is a non-miraculous chance of saving the plane{\}}.

  2. {L_2 = \{x \in \text{\tt CESSNA}^*: x} represents an appropriate sequence of actions for the co-pilot to initiate if 2 engines fail at 2000 meters altitude, and with {x} there is a non-miraculous chance of saving the plane{\}}.

After carefully writing this on board or slide, I say, “you have enough information to answer this; it could be a pop quiz.” I let 15–20 seconds go by to see if someone raises a hand amid bewildered looks in silence, and then I say, “OK—I’ll tell a real-life story.”

The Story

My father Robert Regan was a financial reporter specializing in aluminum and magnesium. Once in the 1970s he covered a meeting of aluminum company executives in North Carolina. One of the executives failed to show for the first evening, and the news of why was not conveyed until he appeared in splint and bandages at breakfast the next morning.

He told how his twin-engine crew-of-two jet had lost all power immediately after takeoff. With no time or room for turning back, the pilot spotted the flat roof of a nearby bowling alley and steered for it as best he could. The jet pancaked on the roof and bounced into the mercifully empty parking lot. Everyone survived and could thank the way the force of impact had been broken into two lesser jolts. The end of the executive’s tale and interaction with my father in the breakfast-room audience went about as follows:

Exec: I have never seen such a great piece of quick thinking and calm control in my lifetime of business, to say nothing of the sheer flying skill. That pilot ought to get a medal.

RR: The co-pilot deserves a medal too.

Exec: Why? He didn’t do anything.

RR: Exactly.

Maybe only then the significance of the words “appropriate for the co-pilot to initiate” in my definitions of {L_1} and {L_2} dawns on the class, as well as the Boolean and. The appropriate string {x} is {\epsilon} in both cases: the co-pilot should not “initiate” any actions.

As witnessed by the stories above, in the case of {L_1} there is a good chance of saving the plane even if both engines fail, so the second clause is certainly satisfied. Thus {L_1 = \{\epsilon\}}. Perhaps the example of Sullenberger and Skiles at 850 meters makes it too pessimistic for me to say {L_2} is a goner at 2,000 meters, but the point of the example is that an unsatisfied conjunct in a set definition makes the whole predicate false even if the part depending on {x} is true. Thus the intent is {L_2 = \emptyset}.

There it is: the difference between {\{\epsilon\}} and {\emptyset} can be one of life and death. How much the story helps burnish the difference is hard to quantify, but at least much of the class tends to get a later test question involving this difference right.

Zero to the Zero

Whether I tell the story or not, I next have to convey why {\emptyset^*} turns around and becomes {\{\epsilon\}}. I say that the convention {A^0 = \{\epsilon\}} helps make the power law {A^{m+n} = A^m \cdot A^n} true for all {m,n \geq 0}, but why is this law relevant for {A = \emptyset}? Why do we need to define {\emptyset^0} anyway, let alone stipulate that it equals {\{\epsilon\}}?

If I say it’s like {0^0 = 1} in arithmetic, the students can find various sources saying {0^0 = 1} is a “convention” and “controversial.” So I say it’s like the convention that a for-loop


for (int i = 0; i < n; i++) { ... }


naturally “falls through” when {n = 0}. Even if the loop is checking for conditions that might force your code to terminate—and even if the body is definitely going to kill your program on entry—if the loop executes 0 times then you’re still flying. It’s a no-op represented by {\{\epsilon\}} rather than a killing {\emptyset}, so the whole flow-of-control analysis is

\displaystyle  \emptyset^0 = \{\epsilon\}.

Thus it comes down to the logical requirement that a universally quantified test on an empty domain defaults to true. Not just the students but I too can feel this requirement better in programming terms.
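
In that spirit, here is a small sketch (power is an illustrative name; concat is as in the earlier sketch) in which {A^0 = \{\epsilon\}} is literally the loop falling through:

#include <iostream>
#include <set>
#include <string>

using language = std::set<std::string>;

language concat(const language& A, const language& B) {
    language result;
    for (const std::string& x : A)
        for (const std::string& y : B)
            result.insert(x + y);
    return result;
}

// A^n with base case A^0 = {epsilon}: when n == 0 the loop falls
// through, so the possibly "killing" concatenation with A never runs.
language power(const language& A, int n) {
    language result = {""};              // A^0 = { epsilon }
    for (int i = 0; i < n; i++)
        result = concat(result, A);
    return result;
}

int main() {
    language empty;                                // the empty language
    std::cout << power(empty, 0).size() << "\n";   // 1: empty^0 = {epsilon}
    std::cout << power(empty, 3).size() << "\n";   // 0: empty^n = empty for n >= 1
}

With this base case the power law {A^{m+n} = A^m \cdot A^n} holds for all {m,n \geq 0} even when {A = \emptyset}, and the union of {\emptyset^n} over all {n \geq 0} is {\{\epsilon\}}, which is {\emptyset^*}.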

To go deeper—usually as notes for TAs if time permits in recitations or as a note in the course forum—I connect {0^0} to logic and relations. I’ve defined a function from a set {A} to a set {B} as a relation {R \subseteq A \times B} that satisfies the test

\displaystyle  T = (\forall x \in A)(\exists y \in B)\!: xRy \wedge (\forall y' \in B)[xRy' \longrightarrow y' = y].

Now we can ask:

Is the empty relation a function?

There’s an impulse to answer, “of course it isn’t—there aren’t any function values.” But when {A = \emptyset} the test {T} becomes a universally quantified formula over an empty domain, and so it defaults to true. Thus {\emptyset: \emptyset \longrightarrow B} counts as a function regardless of what {B} is, even if {B = \emptyset} too.

Because {\emptyset \times B = \emptyset}, the only possible relation on {\emptyset \times B} is {R = \emptyset}. So the cardinality of the set of functions from {\emptyset} to {B} is {1}. The notation for the set of functions from a set {A} to a set {B}, namely {B^A}, is motivated by examples like {\{0,1\}^5} being the set of binary functions on {[5] = \{0,1,2,3,4\}}. There are {2^5 = 32} such functions, and in general

\displaystyle  \left|B^A\right| = |B|^{|A|}.

With {A = B = \emptyset} all this gives {0^0 = 1}. Thus {0^0 = 1} and {\emptyset^* = \{\epsilon\}} are needed for the universe of mathematics based on sets and logic to come out right.
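
The counting can even be checked by brute force. The following sketch (countFunctions is an illustrative name; it enumerates all {2^{|A||B|}} relations, so only tiny sets are feasible) encodes each relation {R \subseteq A \times B} as a bitmask and applies the test {T} verbatim:

#include <iostream>

// Count relations R in A x B that pass the function test T:
// every x in A must relate to exactly one y in B.
int countFunctions(int sizeA, int sizeB) {
    int pairs = sizeA * sizeB;                   // |A x B|
    int count = 0;
    for (long long mask = 0; mask < (1LL << pairs); mask++) {
        bool isFunction = true;
        for (int x = 0; x < sizeA && isFunction; x++) {
            int images = 0;                      // number of y with xRy
            for (int y = 0; y < sizeB; y++)
                if (mask & (1LL << (x * sizeB + y))) images++;
            isFunction = (images == 1);
        }
        if (isFunction) count++;
    }
    return count;
}

int main() {
    std::cout << countFunctions(3, 2) << "\n";  // 8 = 2^3, i.e., |B|^|A|
    std::cout << countFunctions(0, 0) << "\n";  // 1: only R = empty, and T
                                                // holds vacuously, so 0^0 = 1
    std::cout << countFunctions(2, 0) << "\n";  // 0 = 0^2: the empty relation
                                                // on nonempty A fails T
}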

Emptiness and Type

The same analysis shows that an empty relation on a nonempty domain is not a function. This means that even when stuff is empty, the type signature of the stuff matters too. One student in my course told me last week that the realization that “empty” could come with a type helped him figure things out.

Real advances in mathematics have come from structure channeling content even when the content is degenerate or empty. There are more hints of deeper structure even in basic formal language theory. I generally encourage the notation {r + s} for regular expressions over Sipser’s {r \cup s} in order to leverage the analogy between concatenation and multiplication, even though {r + r} equals {r}, not any notion of “{2r}.” The property {(\forall r)\;r + r = r} does not hold over any field, except for the possibility of the “field of one element” {\mathbb{F}_1} which we discussed some time back.

Now consider the suggestive analogy

\displaystyle  \begin{array}{rcl}  \;A^*\; &=& \{\epsilon\} \cup A \cup A^2 \cup A^3 \cup \cdots\\ \frac{1}{1-a} &=& \;\,1 \,\;+\, a + a^2 + a^3 \,+ \cdots \end{array}

What substance does it have beyond optics? The latter equation holds provided {|a| < 1} even over the complex numbers, and also holds in a sense for {a = 1}. The analogy {A = \emptyset}, {a = 0} works in both equations to yield {\{\epsilon\}} and {1}. We then find it disturbing, however, that substituting {A = \{\epsilon\}}, {a = 1} fails because {\{\epsilon\}^* = \{\epsilon\}} which is not infinite.

Does it really fail? Perhaps it succeeds in some structure that embraces both equations—perhaps involving {\mathbb{F}_1}? Our earlier post and its links noted that {\mathbb{F}_1} has an awful lot of structure and connections to other parts of mathematics despite its quasi-empty content.

We know several ways to build a universe on emptiness. In them the supporting cast of structure rather than {\emptyset} is the real lead. The new actor in town, Homotopy Type Theory, aims to find the right stuff directly in terms of types and the identity relation and a key notion and axiom of univalence. As related in a recent survey by Álvaro Pelayo and Martin Warren in the AMS Bulletin, the object is to make {\emptyset} and other sets emerge from the framework rather than form its base.

Open Problems

Does handling {\emptyset} and {\epsilon} right take care of everything?


by KWRegan at February 23, 2015 09:36 PM UTC

The position is under the ERC Consolidator Grant ALUnif (Algorithms and Lower Bounds: A Unified Approach), held by Rahul Santhanam.

The goal of the project is to use recently discovered connections between complexity lower bound techniques and algorithmic design and analysis to design new and improved algorithms for SAT and other NP-complete problems, as well as prove new complexity lower bounds.

Webpage: http://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=032615

Email: rsanthan@inf.ed.ac.uk

by theorycsjobs at February 23, 2015 07:31 PM UTC