Guest post from Nikhil Devanur:
Pinyan Lu and I were the PC chairs for WINE 2017, and we decided to conduct an experiment. We asked the PC members to score the submissions in the same way they would score an EC 2017 submission. In fact, we sent them the instructions that were given to the PC for EC 2017 and asked them to follow the same guidelines.
Right off the bat, there is the question of how well these instructions were followed. We sent these instructions multiple times, and yet some PC members seem to have ignored them initially. This would invariably come up during discussions. Someone would ask, “Is this the EC scale, or the WINE scale?”, and every now and then there would be a “Doh! I forgot.”
With that aside, I want to present the results of this experiment. Moshe Babaioff kindly gave me the statistics for EC submissions, so we can compare. The key question is, of course, how the submission qualities differ. We all know EC gets stronger submissions, but by how much? We sorted the papers into the following buckets and compared the percentage of submissions in each bucket. The scoring scale for EC was from 1 to 7. Here’s the result.
Average score  EC  WINE 
6+  6%  0% 
5.5 to 6  9%  3% 
5 to 5.5  17%  14% 
4.5 to 5  14%  11% 
4 to 4.5  11%  19% 
3 to 4  27%  38% 
1 to 3  16%  15% 
This table doesn’t tell the whole story, so I did something else: I added 0.5 to the average score of each WINE paper and then computed the cumulative distribution, i.e., the fraction of papers at or above each score. Here’s what that looks like.
Average score  EC  WINE + 0.5 
6+  6%  3% 
5.5+  16%  17% 
5+  32%  28% 
4.5+  47%  48% 
4+  58%  65% 
3+  84%  89% 
1+  100%  100% 
Now you can see that the two columns are very close to each other. What this tells me is that there is about a 0.5 to 1 point difference between EC and WINE submissions, on a scale of 1 to 7. (My guess is that this would hold even after taking into consideration a bit of grade inflation in the experiment, which is hard to measure.)
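The bucketing and the shift-by-0.5 comparison are easy to reproduce. Here is a minimal sketch; the per-paper score lists below are made up for illustration, since the real per-paper averages were not published:

```python
# Sketch of the shift-and-compare computation from the post.
# The score lists are invented; only the method matches the post.
ec_scores = [6.2, 5.7, 5.1, 4.8, 4.2, 3.5, 2.4]
wine_scores = [5.6, 5.2, 4.7, 4.3, 3.6, 3.1, 2.2]

def cdf_at_or_above(scores, thresholds):
    """Fraction of papers scoring at or above each threshold."""
    n = len(scores)
    return {t: sum(s >= t for s in scores) / n for t in thresholds}

thresholds = [6, 5.5, 5, 4.5, 4, 3, 1]
ec_cdf = cdf_at_or_above(ec_scores, thresholds)
# Shift each WINE average up by 0.5 before comparing.
wine_cdf = cdf_at_or_above([s + 0.5 for s in wine_scores], thresholds)
```

Comparing `ec_cdf` and `wine_cdf` threshold by threshold is exactly the second table above.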
Other than this experiment, here is some feedback that I wanted to give to the community.
There were other interesting proposals that came up in the WINE business meeting, but I chose not to include any of those so that this blog post is as short as possible. You can watch a recording of this (as well as that of all the talks) here:
http://lcm.csa.iisc.ernet.in/wine2017/
Finally, I want to give a shout-out once again to the local organizers of WINE 2017. I got uniformly and overwhelmingly positive feedback from many of the attendees that this was one of the most enjoyable conferences they had attended. The local organizers (Y. Narahari and his group at IISc, Bangalore) get all the credit for this.
This post is about l2w version 1.0, a LaTeX-to-WordPress converter painstakingly put together by me with big help from the LaTeX community. Click here to download it. Below is an example of what you can do, taken at random from my class notes, which were compiled with this script. I also used this in conjunction with LyX for several posts, such as “I believe P=NP”, so you can also call it a LyX-to-WordPress converter: I just export to LaTeX and then run l2w.
It might work out of the box. In more detail, it needs tex4ht (included, e.g., in MiKTeX distributions) and Perl (the script only uses minimalistic Perl commands invoked from the shell). Simply unzip l2w.zip, which contains four files. The file post.tex is this document, which you can edit. To compile, run l2w.bat (which calls myConfig5.cfg). This will create the output post.html, which you can copy and paste into the WordPress HTML editor. I have tested it on an old Windows XP machine and on a more recent Windows 7 machine with MiKTeX 2.9. I haven’t tested it on Linux, which might require some simple changes to l2w.bat. For LyX I add certain commands in the preamble; as an example, the .lyx source of the post “I believe P=NP” is included in the zip archive.
The non-math source is compiled using full-fledged LaTeX, which means you can use your own macros and bibliography. The math source is not compiled but more or less left as-is for WordPress, which has its own LaTeX interpreter. This means that you can’t use your own macros in math mode. For the same reason, \label and \ref for equations are a problem. To make them work, the script fetches their values from the .aux file and then crudely applies them. This is a hack with a rather unreadable script; however, it works for me. One catch: your labels should start with eq:.
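The .aux hack can be illustrated as follows. This is not the actual Perl script, just a minimal Python sketch; the \newlabel lines shown follow the standard LaTeX .aux format:

```python
import re

# Minimal illustration (not the actual l2w Perl script) of the hack:
# read equation numbers for labels starting with "eq:" out of a
# LaTeX .aux file, then substitute them for \ref{...} occurrences.
AUX = r"""
\newlabel{eq:main}{{1}{3}}
\newlabel{eq:aux}{{2}{4}}
"""

def read_labels(aux_text):
    r"""Map each eq: label to its equation number via \newlabel lines."""
    pattern = re.compile(r"\\newlabel\{(eq:[^}]*)\}\{\{([^}]*)\}")
    return {label: number for label, number in pattern.findall(aux_text)}

def apply_refs(text, labels):
    r"""Crudely replace \ref{eq:...} with the recorded number."""
    return re.sub(r"\\ref\{(eq:[^}]*)\}",
                  lambda m: labels.get(m.group(1), "??"), text)

labels = read_labels(AUX)
html_ready = apply_refs(r"see equation (\ref{eq:main})", labels)
```

Unknown labels fall back to "??", which is roughly the "crude" behavior described above.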
I hope this will spare you the enormous amount of time it took me to arrive at this solution. Let me know if you use it!
First, some of the problematic math references:
Equation (1).
Next, some weird font stuff: , , .
Lemma 1. Suppose that distributions A0 and A1 over {0,1}^n are k-wise indistinguishable, and distributions B0 and B1 over {0,1}^m are l-wise indistinguishable. Define C0 and C1 over {0,1}^{nm} as follows:
Cb: draw a sample x from Ab, and replace each bit xi of x by a sample of B_{xi} (independently).
Then C0 and C1 are kl-wise indistinguishable.
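The composition in the lemma is easy to phrase as a sampler. A minimal sketch, where the outer and inner distributions are made-up toy examples:

```python
import random

def compose_sample(outer, inner0, inner1):
    """One sample of the composed distribution from the lemma:
    draw x from `outer`, then replace each bit b of x by an
    independent sample from inner0 or inner1 accordingly."""
    x = outer()
    return [bit for b in x for bit in (inner1() if b else inner0())]

# Toy instantiation (made up for illustration): outer is uniform on
# {000, 111}; inner0 is constant 00; inner1 is uniform on {01, 10}.
outer = lambda: random.choice([(0, 0, 0), (1, 1, 1)])
inner0 = lambda: (0, 0)
inner1 = lambda: random.choice([(0, 1), (1, 0)])
sample = compose_sample(outer, inner0, inner1)
```

With these toy choices, a composed sample has weight 0 (outer drew 000) or weight 3 (outer drew 111).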
To finish the proof of the lower bound on the approximate degree of the AND-OR function, it remains to show that AND-OR can distinguish the two composed distributions well. For this, we begin by observing that we can assume, without loss of generality, that the distributions have disjoint supports.
Claim 2. For any function f, and for any k-wise indistinguishable distributions D0 and D1, if f can distinguish D0 and D1 with advantage ε, then there are distributions D0′ and D1′ with the same properties (k-wise indistinguishability, yet distinguishable by f with advantage at least ε) and also with disjoint supports. (By disjoint supports we mean that for every x, either D0′(x) = 0 or D1′(x) = 0.)
Proof. Let the distribution ν be the “common part” of D0 and D1. That is, define ν(x) = min{D0(x), D1(x)}, multiplied by the constant that normalizes ν into a distribution.
Then we can write D0 and D1 as
D0 = p·ν + (1−p)·D0′ and D1 = p·ν + (1−p)·D1′,
where p ∈ [0,1], and D0′, D1′ are two distributions. Clearly D0′ and D1′ have disjoint supports.
Then we have
E[f(D0)] − E[f(D1)] = (1−p)·(E[f(D0′)] − E[f(D1′)]).
Therefore if f can distinguish D0 and D1 with advantage ε, then it can also distinguish D0′ and D1′ with at least such advantage.
Similarly, for every monomial M of degree at most k, we have
E[M(D0′)] − E[M(D1′)] = (E[M(D0)] − E[M(D1)]) / (1−p) = 0.
Hence, D0′ and D1′ are k-wise indistinguishable.
Equipped with the above lemma and claim, we can finally prove the following lower bound on the approximate degree of AND-OR.
Theorem 3. The AND-OR function has approximate degree Ω(n).
Proof. Let A0, A1 be k-wise indistinguishable distributions for AND with the required advantage. Let B0, B1 be l-wise indistinguishable distributions for OR with the required advantage. By the above claim, we can assume that A0, A1 have disjoint supports, and the same for B0, B1. Compose them by the lemma, obtaining kl-wise indistinguishable distributions C0, C1. We now show that AND-OR can distinguish C0 and C1:
Therefore we have the claimed lower bound on the approximate degree of AND-OR.
In this subsection we discuss the approximate degree of the surjectivity function. This function is defined as follows.
Definition 4. The surjectivity function SURJ takes an input x = (x_1, …, x_n) where x_i ∈ [R] for all i, and has value 1 if and only if every element of [R] appears among the x_i.
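A direct evaluation of this definition fits in a few lines; the encoding of the range as range(R) is just for illustration:

```python
def surj(x, R):
    """SURJ from Definition 4: 1 iff the tuple x, with entries in
    range(R), hits every element of the range."""
    return 1 if set(x) == set(range(R)) else 0
```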
First, some history. Aaronson first proved a polynomial lower bound on the approximate degree of SURJ and of other functions on n bits, including “the collision problem”. This was motivated by an application in quantum computing. Before this result, even a super-constant lower bound had not been known. Later Shi improved the lower bound for collision to Ω(n^{1/3}), see [AS04]. The instructor believes that the quantum framework may have blocked some people from studying this problem, though it may very well have attracted others. Recently Bun and Thaler [BT17] re-proved the lower bound in a quantum-free paper, introducing some different intuition. Soon after, together with Kothari, they proved [BKT17] that the approximate degree of SURJ is Θ̃(n^{3/4}).
We shall now prove the lower bound, though one piece is only sketched. Again we present some things in a different way from the papers.
For the proof, we consider the AND-OR function under a promise that the Hamming weight of the input bits is bounded. Call the approximate degree of AND-OR under this promise its promise approximate degree. Then we can prove the following theorems.
Theorem 6. The promise approximate degree of AND-OR is Ω(n^{3/4}) for some suitable setting of the parameters.
In our setting, we consider suitable parameters. Theorem 5 shows, surprisingly, that we can somehow “shrink” the input into fewer bits while maintaining the approximate degree of the function, under the promise. Without this promise, we just showed in the last subsection that the approximate degree of AND-OR is higher than the bound in Theorem 6.
Proof of Theorem 5. Define an R × n matrix of 0/1 variables y, such that y_{ij}, the entry in the i-th row and j-th column, satisfies y_{ij} = 1 iff x_j = i. We can prove this theorem in the following steps:
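Assuming the indicator encoding just described (y_{ij} = 1 iff x_j = i; the helper name is made up), the substitution and the weight promise can be illustrated as:

```python
def to_indicators(x, R):
    # Hypothetical helper illustrating the encoding from the proof:
    # an R x n 0/1 matrix y with y[i][j] = 1 iff x[j] == i.
    n = len(x)
    return [[1 if x[j] == i else 0 for j in range(n)] for i in range(R)]

x = (0, 2, 1, 2)
y = to_indicators(x, 3)

# Each column has exactly one 1, so the Hamming weight is exactly n;
# this is why a bounded-weight promise holds automatically.
weight = sum(map(sum, y))

# SURJ(x) equals AND over rows of OR over each row's entries.
surj_via_and_or = int(all(any(row) for row in y))
```

The last line is exactly the SURJ = AND-OR connection used in the reduction.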
Now we prove this theorem step by step.
Then the polynomial for AND-OR is the given polynomial with the variables replaced as above; thus the degree won’t increase. Correctness follows from the promise.
Clearly it is a good approximation of AND-OR. It remains to show that it’s a polynomial of low degree in the new variables if the original is a polynomial of low degree in the old variables.
Let’s look at one monomial of degree in : . Observe that all ’s are distinct by the promise, and by over . By chain rule we have
By symmetry we have , which is linear in ’s. To get , we know that every other entry in row is , so we give away row , average over ’s such that under the promise and consistent with ’s. Therefore
In general we have
which has degree in ’s. Therefore the degree of is not larger than that of .
Proof idea for Theorem 6. First, by the duality argument we can verify that the lower bound holds if and only if there exist k-wise indistinguishable distributions such that:
Claim 7. OR.
The proof needs a little more information about the weight distribution of the indistinguishable distributions corresponding to this claim. Basically, their expected weight is very small.
Now we combine these distributions with the usual ones for AND using the lemma mentioned at the beginning.
What remains to show is that the final distribution is supported on strings of the promised Hamming weight. Because, by construction, the copies of the distributions for OR are sampled independently, we can use concentration of measure to prove a tail bound. This gives that all but an exponentially small measure of the distribution is supported on strings of the right weight. The final step of the proof consists of slightly tweaking the distributions to make that measure zero.
Groups have many applications in theoretical computer science. Barrington [Bar89] used the permutation group S_5 to prove a very surprising result: the majority function can be computed efficiently using only a constant number of bits of memory (something which was conjectured to be false). More recently, catalytic computation [BCK14] shows that if we have a lot of memory, but it is full of junk that cannot be erased, we can still compute more than we could with little memory. We will see some interesting properties of groups in the following.
Some famous groups used in computer science are:
An example is . Generally we have
The group SL(2, q) was invented by Galois. (If you haven’t already, read his biography on Wikipedia.)
Quiz. Among these groups, which is the “least abelian”? The latter can be defined in several ways. We focus on this: if we have two high-entropy distributions X and Y over a group G, does the product X·Y have more entropy? For example, if X and Y are uniform over some subset of the elements, is X·Y close to uniform over G? By “close to” we mean that the statistical distance from the uniform distribution is less than a small constant. For an abelian group, if X = Y is uniform over a subgroup with half the elements, then X·Y is the same distribution, so there is no entropy increase even though X and Y are uniform on half the elements.
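The abelian counterexample can be checked exactly. A small sketch in Z_8 (the group and subgroup are chosen just for illustration):

```python
from itertools import product

# Abelian counterexample in Z_8 (addition mod 8): X = Y uniform over
# the subgroup H = {0, 2, 4, 6}.  The product (here: sum) X + Y is
# again uniform over H, so no entropy is gained.
m = 8
H = [0, 2, 4, 6]

# Exact distribution of X + Y for X, Y independent and uniform on H.
dist = {g: 0.0 for g in range(m)}
for x, y in product(H, H):
    dist[(x + y) % m] += 1 / len(H) ** 2

uniform_on_H = {g: (1 / len(H) if g in H else 0.0) for g in range(m)}
```

Since `dist` equals `uniform_on_H`, the product is no closer to uniform over the whole group than X was.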
Definition 8 [Measure of Entropy]. For a distribution P over G, define ||P||_2 := (Σ_g P(g)²)^{1/2}; we think of a small ||P||_2 as “high entropy”.
Note that ||P||_2² is exactly the “collision probability”, i.e. Pr[x = y] for independent samples x, y from P. The uniform distribution U has the smallest norm, ||U||_2² = 1/|G|, which we regard as very small. Then we have
Theorem 9 [[Gow08], [BNP08]]. If X, Y are independent distributions over G, then
||X·Y − U||_2 ≤ ||X||_2 · ||Y||_2 · (|G|/d)^{1/2},     (2)
where d is the minimum dimension of an irreducible representation of G.
By this theorem, for high-entropy distributions X and Y, say with ||X||_2², ||Y||_2² ≤ O(1/|G|), we get ||X·Y − U||_2 ≤ O(1/(|G|·d)^{1/2}); by Cauchy–Schwarz, the statistical distance of X·Y from uniform is then O(1/d^{1/2}). If d is large, then X·Y is very close to uniform. The following table shows the d’s for the groups we’ve introduced.
[Table of the values of d for each group, lost in conversion; the only surviving annotation reads “should be very small”.]
Here A_n is the alternating group of even permutations. We can see that for the first groups, Equation (2) doesn’t give nontrivial bounds.
But for A_n we get a nontrivial bound, and for SL(2, q) we get a strong bound: there d is polynomial in |G|.
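For completeness, here is how the ℓ2 bound of Theorem 9 yields closeness to uniform, written as a sketch assuming X and Y each have collision probability O(1/|G|):

```latex
\[
2\,\mathrm{SD}(X\cdot Y,\,U)
  = \sum_{g\in G}\Bigl|\Pr[X\cdot Y=g]-\frac{1}{|G|}\Bigr|
  \le \sqrt{|G|}\,\|X\cdot Y-U\|_2
  \le \sqrt{|G|}\,\|X\|_2\|Y\|_2\sqrt{|G|/d}
  = O\!\left(1/\sqrt{d}\right).
\]
```

The first inequality is Cauchy–Schwarz over the |G| terms; the last step uses ||X||_2 · ||Y||_2 = O(1/|G|).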
[Amb05] Andris Ambainis. Polynomial degree and lower bounds in quantum complexity: Collision and element distinctness with small range. Theory of Computing, 1(1):37–46, 2005.
[AS04] Scott Aaronson and Yaoyun Shi. Quantum lower bounds for the collision and the element distinctness problems. J. of the ACM, 51(4):595–605, 2004.
[Bar89] David A. Mix Barrington. Bounded-width polynomial-size branching programs recognize exactly those languages in NC¹. J. of Computer and System Sciences, 38(1):150–164, 1989.
[BCK14] Harry Buhrman, Richard Cleve, Michal Koucký, Bruno Loff, and Florian Speelman. Computing with a full memory: catalytic space. In ACM Symp. on the Theory of Computing (STOC), pages 857–866, 2014.
[BKT17] Mark Bun, Robin Kothari, and Justin Thaler. The polynomial method strikes back: Tight quantum query bounds via dual polynomials. CoRR, arXiv:1710.09079, 2017.
[BNP08] László Babai, Nikolay Nikolov, and László Pyber. Product growth and mixing in finite groups. In ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 248–257, 2008.
[BT17] Mark Bun and Justin Thaler. A nearly optimal lower bound on the approximate degree of AC0. CoRR, arXiv:1703.05784, 2017.
[Gow08] W. T. Gowers. Quasirandom groups. Combinatorics, Probability & Computing, 17(3):363–387, 2008.
Authors: Mark A. Hallen, Bruce R. Donald
Download: PDF
Abstract: We review algorithms for protein design in general. Although these algorithms
have a rich combinatorial, geometric, and mathematical structure, they are
almost never covered in computer science classes. Furthermore, many of these
algorithms admit provable guarantees of accuracy, soundness, complexity,
completeness, optimality, and approximation bounds. The algorithms represent a
delicate and beautiful balance between discrete and continuous computation and
modeling, analogous to that which is seen in robotics, computational geometry,
and other fields in computational science. Finally, computer scientists may be
unaware of the almost direct impact of these algorithms for predicting and
introducing molecular therapies that have gone in a short time from mathematics
to algorithms to software to predictions to preclinical testing to clinical
trials. Indeed, the overarching goal of these algorithms is to enable the
development of new therapeutics that might be impossible or too expensive to
discover using experimental methods. Thus the impact of these algorithms on
individual, community, and global health has the potential to be quite
significant.