Special Topics in Complexity Theory, Fall 2017. Instructor: Emanuele Viola

### 1 Lecture 6-7, Scribe: Willy Quach

In these lectures, we introduce k-wise indistinguishability and link this notion to the approximate degree of a function. Then, we study the approximate degree of some functions, namely, the AND function and the AND-OR function. For the latter function we begin to see a proof that is different (either in substance or language) from the proofs in the literature. We begin with some LaTeX tips.

#### 1.1 Some LaTeX tips.

- Mind the punctuation. Treat equations as part of the phrases; add commas, colons, etc. accordingly.
- In math mode, it is usually better to use \ell (ℓ) rather than a regular l. The latter can be confused with 1.
- Align equations with \begin{align} \end{align} with the alignment character &.
- For set inclusion, use \subset (⊂) only if you mean *proper* inclusion (which is uncommon). Otherwise use \subseteq (⊆). (Not everybody agrees with this, but this seems the most natural convention by analogy with < and ≤.)

#### 1.2 Introducing k-wise indistinguishability.

We studied previously the following questions:

- What is the minimum k such that any k-wise independent distribution over {0,1}^n fools AC0 (i.e., |E[C(D)] − E[C(U)]| ≤ ε for all size-s circuits C with constant depth d)?
We saw that k = log^{O(d)}(s/ε) is enough.

- What is the minimum k such that k-wise independence fools the AND function?
Taking k = O(log(1/ε)) suffices (more precisely, we saw that k-wise independence fools the AND function with error ε = 2^{−Ω(k)}).
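As a toy illustration of the second bullet, here is a small exact Python check; the example and its parameters are our own, not from the lecture. The distribution on 3 bits given by two uniform bits together with their XOR is 2-wise independent, yet it does not fool AND perfectly.

```python
from itertools import product

# D: draw x1, x2 uniformly and set x3 = x1 XOR x2.
# D is 2-wise independent (every pair of bits is uniform) but not 3-wise.
D = [((a, b, a ^ b), 1 / 4) for a, b in product([0, 1], repeat=2)]
U = [(x, 1 / 8) for x in product([0, 1], repeat=3)]  # uniform on {0,1}^3

def marginal(dist, positions):
    """Exact distribution of the projection onto the given positions."""
    out = {}
    for x, p in dist:
        key = tuple(x[i] for i in positions)
        out[key] = out.get(key, 0) + p
    return out

# Every pair of coordinates of D matches the uniform distribution...
for pos in [(0, 1), (0, 2), (1, 2)]:
    assert marginal(D, pos) == marginal(U, pos)

# ...yet D does not fool AND: x1 = x2 = 1 forces x3 = 0 under D.
e_D = sum(p for x, p in D if all(x))  # E[AND(D)] = 0
e_U = sum(p for x, p in U if all(x))  # E[AND(U)] = 1/8
print(e_D, e_U)
```

So a 2-wise independent distribution can still be off by 1/8 on AND, consistent with the error decaying only as 2^{−Ω(k)}.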

Consider now two distributions P and Q over {0,1}^n that are k-wise indistinguishable, that is, such that any projection of P and of Q onto k bits has the same distribution. We can ask similar questions:

- What is the minimum k such that AC0 cannot distinguish P and Q (i.e., |E[C(P)] − E[C(Q)]| ≤ ε for all poly(n)-size circuits C with constant depth)?
It turns out this requires k ≥ n^{1−o(1)}: there are some k-wise indistinguishable distributions with k = n^{1−o(1)} that AC0 distinguishes. (Whether k = Ω(n) is necessary or not is an open question.)

Also, k = n·(1 − o(1)) suffices to fool AC0 (in which case ε is essentially exponentially small).

- What is the minimum k such that the AND function (on n bits) cannot distinguish P and Q?
It turns out that k = Θ(√n) is necessary and sufficient. More precisely:

- There exist P, Q over {0,1}^n that are c·√n-wise indistinguishable, for some small constant c > 0, but such that: |E[AND(P)] − E[AND(Q)]| ≥ 1/3.
- For all P, Q that are C·√n-wise indistinguishable, for some bigger constant C, we have: |E[AND(P)] − E[AND(Q)]| ≤ 1/3.

#### 1.3 Duality.

These questions are actually equivalent to questions about approximation by real-valued polynomials:

**Theorem 1.** Let f : {0,1}^n → {0,1} be a function, and let k be an integer. Then:

max |E[f(P)] − E[f(Q)]| = 2 · min max_{x ∈ {0,1}^n} |f(x) − g(x)|,

where the maximum is over all pairs (P, Q) of k-wise indistinguishable distributions over {0,1}^n, and the minimum is over all g ∈ Deg_k. Here Deg_k denotes the set of real polynomials of degree at most k. We will denote the minimum on the right-hand side by ε_k(f).

Some examples:

- k = n: then P and Q are the same distribution, so that both sides of the equality are 0 (every function on {0,1}^n is computed exactly by a multilinear polynomial of degree n).
- f = Parity, the parity function on n bits.
Then for k = n − 1, the left-hand side is at least 1/2: take P to be uniform, and Q to be uniform on the first n − 1 bits, defining the n-th bit to be the parity of the first n − 1 bits. Then E[Parity(P)] = 1/2 but E[Parity(Q)] = 0.
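This example can be checked exactly for a small n (a quick sketch; the choice n = 4 is ours):

```python
from itertools import combinations, product

n = 4
# P: uniform on {0,1}^n.  Q: uniform on the first n-1 bits, with the
# last bit set to the parity of the first n-1 bits.
P = [(x, 1 / 2**n) for x in product([0, 1], repeat=n)]
Q = [(x + (sum(x) % 2,), 1 / 2**(n - 1)) for x in product([0, 1], repeat=n - 1)]

def marginal(dist, positions):
    out = {}
    for x, p in dist:
        key = tuple(x[i] for i in positions)
        out[key] = out.get(key, 0) + p
    return out

# P and Q agree on every projection onto n-1 coordinates...
for pos in combinations(range(n), n - 1):
    assert marginal(P, pos) == marginal(Q, pos)

# ...but Parity tells them apart: E[Parity(P)] = 1/2, E[Parity(Q)] = 0.
gap = abs(sum(p * (sum(x) % 2) for x, p in P)
          - sum(p * (sum(x) % 2) for x, p in Q))
print(gap)  # 0.5
```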

**Claim 2.** ε_{n−1}(Parity) = 1/2.

Proof. The constant polynomial 1/2 shows ε_{n−1}(Parity) ≤ 1/2. Suppose now by contradiction that some polynomial p has degree at most n − 1 and approximates Parity within error 1/2 − δ for some δ > 0.

The key ingredient is to symmetrize a polynomial p, by letting

p_sym(x) := (1/n!) · Σ_π p(π·x),

where π ranges over the permutations of the n coordinates. Note that p_sym(x) only depends on the Hamming weight |x| of x.

Now we claim that there is a *univariate* polynomial q, also of degree at most deg(p), such that q(|x|) = p_sym(x) for every x ∈ {0,1}^n.

To illustrate, let m be a monomial of p. For instance, if m = x_1, then m_sym(x) = (x_1 + ⋯ + x_n)/n = |x|/n, where |x| is the Hamming weight of the input. (For this we think of the input as being in {0,1}^n. Similar calculations can be done for inputs in {−1,1}^n.)

If m = x_1·x_2, then m_sym(x) = (Σ_{i ≠ j} x_i·x_j) / (n(n − 1)) = |x|(|x| − 1)/(n(n − 1)), which is quadratic in |x|.

And so on.

More generally, p_sym is a symmetric polynomial. As the elementary symmetric polynomials e_1, …, e_d form a basis of the symmetric polynomials of degree at most d, p_sym can be written as a linear combination in this basis. Now note that on {0,1}^n, e_j(x) only depends on |x|; substituting gives e_j(x) = (|x| choose j), which is a polynomial of degree j in |x|. Hence p_sym(x) = q(|x|) for some univariate q of degree at most deg(p).

(Note that the degree of q can be strictly less than the degree of p: e.g., for p = x_1 − x_2, we have p_sym = 0.)

Then, applying symmetrization to p: if p is a real polynomial that is (1/2 − δ)-close to Parity (in ℓ∞ norm over {0,1}^n), then q is also (1/2 − δ)-close to Parity′ on {0, 1, …, n}, where Parity′(w) := w mod 2 (each value of q is a convex combination of values of p at inputs of the same weight).

Finally, remark that for every integer 0 ≤ w ≤ n we have q(w) ≤ 1/2 − δ if w is even and q(w) ≥ 1/2 + δ if w is odd. In particular, q − 1/2 changes sign between any two consecutive integers, so q − 1/2 must have at least n zeroes, and q must therefore have degree at least n, which is a contradiction.
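The symmetrization step can be checked numerically; here is a toy verification of the m = x_1·x_2 computation above (the choice n = 4 is ours):

```python
from itertools import permutations, product
from math import factorial

n = 4
p = lambda x: x[0] * x[1]  # the monomial m = x1 * x2

def symmetrize(poly, x):
    """Average poly over all permutations of the coordinates of x."""
    perms = permutations(range(len(x)))
    return sum(poly([x[i] for i in pi]) for pi in perms) / factorial(len(x))

# p_sym(x) depends only on the Hamming weight w, and equals w(w-1)/(n(n-1)).
for x in product([0, 1], repeat=n):
    w = sum(x)
    assert abs(symmetrize(p, x) - w * (w - 1) / (n * (n - 1))) < 1e-12
print("p_sym(x) = |x|(|x|-1)/(n(n-1)) checked for n = 4")
```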

We will now focus on proving the theorem.

Note that one direction is easy: if a function f is approximated within ε by a polynomial g of degree k, then f cannot distinguish two k-wise indistinguishable distributions P and Q with advantage larger than 2ε:

|E[f(P)] − E[f(Q)]| ≤ |E[f(P)] − E[g(P)]| + |E[g(P)] − E[g(Q)]| + |E[g(Q)] − E[f(Q)]| ≤ ε + 0 + ε,

where E[g(P)] = E[g(Q)] comes from the fact that P and Q are k-wise indistinguishable: every monomial of g involves at most k variables, so its expectation is the same under P and under Q.
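The middle equality E[g(P)] = E[g(Q)] can be sanity-checked on a small ad-hoc example (our choice: P uniform on {0,1}^3, Q uniform on its even-parity strings, which are 2-wise indistinguishable, and a random polynomial g of degree 2):

```python
import random
from itertools import combinations, product
from math import prod

P = list(product([0, 1], repeat=3))          # uniform on {0,1}^3
Q = [x for x in P if sum(x) % 2 == 0]        # uniform on even-parity strings

# A random multilinear polynomial g of degree <= 2.
random.seed(0)
monomials = [()] + [(i,) for i in range(3)] + list(combinations(range(3), 2))
coeffs = {m: random.uniform(-1, 1) for m in monomials}
g = lambda x: sum(c * prod(x[i] for i in m) for m, c in coeffs.items())

# Since P and Q are 2-wise indistinguishable, every monomial of degree <= 2
# has the same expectation under both, hence so does g.
eP = sum(g(x) for x in P) / len(P)
eQ = sum(g(x) for x in Q) / len(Q)
assert abs(eP - eQ) < 1e-12
```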

The general proof goes by Linear Programming duality (a.k.a. the finite-dimensional Hahn–Banach theorem, min-max, etc.). This states that:

If A ∈ R^{m×n}, b ∈ R^m and c ∈ R^n, then:

max { ⟨c, z⟩ : z ∈ R^n, z ≥ 0, A·z ≤ b } = min { ⟨b, y⟩ : y ∈ R^m, y ≥ 0, Aᵀ·y ≥ c }.

We can now prove the theorem:

Proof.

The proof will consist in rewriting the two sides of the equality in the theorem as the optima of a pair of dual linear programs. Let us focus on the left-hand side: max |E[f(P)] − E[f(Q)]| over k-wise indistinguishable P and Q.

We will introduce 2·2^n variables, namely P(x) and Q(x) for every x ∈ {0,1}^n, which will represent Pr[P = x] and Pr[Q = x].

We will also use the following, which can be proved similarly to the Vazirani XOR Lemma:

**Claim 3.** Two distributions P and Q over {0,1}^n are k-wise indistinguishable if and only if: E[χ_S(P)] = E[χ_S(Q)] for every S ⊆ [n] with 1 ≤ |S| ≤ k, where χ_S(x) := (−1)^{Σ_{i ∈ S} x_i} is the Fourier basis of boolean functions.
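Here is a small illustration of Claim 3 (our example: P uniform on {0,1}^3, Q uniform on its even-parity strings; exactly the characters with |S| ≤ 2 agree):

```python
from itertools import combinations, product

P = [(x, 1 / 8) for x in product([0, 1], repeat=3)]
Q = [(x, 1 / 4) for x in product([0, 1], repeat=3) if sum(x) % 2 == 0]

def chi_mean(dist, S):
    """E[chi_S] where chi_S(x) = (-1)^{sum_{i in S} x_i}."""
    return sum(p * (-1) ** sum(x[i] for i in S) for x, p in dist)

# Characters of size 1 and 2 agree: P and Q are 2-wise indistinguishable.
for k in (1, 2):
    for S in combinations(range(3), k):
        assert abs(chi_mean(P, S) - chi_mean(Q, S)) < 1e-12

# The full character distinguishes them, so they are not 3-wise indist.
assert abs(chi_mean(P, (0, 1, 2)) - chi_mean(Q, (0, 1, 2))) > 0.99
```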

The quantity max E[f(P)] − E[f(Q)] can then be rewritten as a linear program:

maximize Σ_x f(x)·(P(x) − Q(x)) subject to:

Σ_x P(x) = 1 and Σ_x Q(x) = 1;

Σ_x χ_S(x)·(P(x) − Q(x)) = 0 for every S with 1 ≤ |S| ≤ k;

P(x) ≥ 0 and Q(x) ≥ 0 for every x ∈ {0,1}^n.

Following the syntax of the LP duality stated above, we have:

the vector of variables is (P(x), Q(x))_{x ∈ {0,1}^n} (where x goes over {0,1}^n),

c = (f(x), −f(x))_{x ∈ {0,1}^n},

b = (1, 1, 0, …, 0),

the first two rows of A encode the normalization constraints Σ_x P(x) = 1 and Σ_x Q(x) = 1,

where the rows of A except the first two correspond to the characters χ_S with 1 ≤ |S| ≤ k, encoding Σ_x χ_S(x)·(P(x) − Q(x)) = 0. (Each equality constraint stands for a pair of inequality constraints; equivalently, the corresponding dual variable is unrestricted in sign.)

We apply LP duality. We shall denote the new set of variables by

u, v, (d_S)_{1 ≤ |S| ≤ k},

with u and v corresponding to the two normalization constraints and d_S to the character constraints.

We have the following dual program (with one variable u, v for each normalization constraint, and one variable d_S for each character χ_S with 1 ≤ |S| ≤ k):

minimize u + v subject to:

u + Σ_S d_S·χ_S(x) ≥ f(x) for every x ∈ {0,1}^n;

v − Σ_S d_S·χ_S(x) ≥ −f(x) for every x ∈ {0,1}^n.

Writing g(x) := Σ_{1 ≤ |S| ≤ k} d_S·χ_S(x), the objective becomes to minimize u + v, while the second set of constraints can be rewritten:

f(x) − g(x) ≤ u and g(x) − f(x) ≤ v for every x.

The expression g is (up to an additive constant, which can be absorbed into u and v) an arbitrary degree-k polynomial. So our constraints become

g(x) − v ≤ f(x) ≤ g(x) + u for every x,

where g ranges over all degree-k polynomials, and we are trying to minimize u + v. Because g − v is always below f, and g + u is always above f, the polynomial g + (u − v)/2 is always within (u + v)/2 of f; conversely, any degree-k polynomial within ε of f yields a feasible solution with u = v = ε. Hence the optimum is 2·min_{g ∈ Deg_k} max_x |f(x) − g(x)| = 2·ε_k(f), as claimed.

#### 1.4 Approximate Degree of AND.

Let us now study the AND function on n bits. Let us denote by deg_ε(AND) the minimal degree of a real polynomial approximating AND pointwise within error ε.

We will show that deg_{1/3}(AND) = Θ(√n).

Let us first show the upper bound:

**Claim 4.** We have: deg_{1/3}(AND) = O(√n).

To prove this claim, we will consider a special family of polynomials:

**Definition 5.** **(Chebyshev polynomials of the first kind.)**

The Chebyshev polynomials (of the first kind) are the family of polynomials (T_d)_{d ≥ 0} defined inductively as:

- T_0(x) := 1,
- T_1(x) := x,
- T_{d+1}(x) := 2x·T_d(x) − T_{d−1}(x) for every d ≥ 1.

These polynomials satisfy some useful properties:

1. T_d(cos x) = cos(d·x) for every x;
2. |T_d(x)| ≤ 1 for every x ∈ [−1, 1];
3. T_d(1) = 1;
4. T′_d(x) ≥ d² for every x ≥ 1.

Properties 2 and 3 follow from Property 1, and Property 4 follows from a direct induction. For a nice picture of these polynomials you should have come to class (or I guess you can check Wikipedia). We can now prove our upper bound:

Proof. We construct a univariate polynomial q such that:

- deg(q) = O(√n);
- |q(w)| ≤ 1/3 for every integer 0 ≤ w ≤ n − 1;
- q(n) = 1.

In other words, q will be close to 0 on {0, …, n − 1}, and equal to 1 at n. Then, we can naturally define the polynomial for the AND function on n bits to be p(x) := q(x_1 + ⋯ + x_n), which also has degree O(√n). Indeed, we want p to be close to 0 if x has Hamming weight less than n, while being close to 1 on x of Hamming weight n (by definition of AND). This will conclude the proof.

Let us define q as follows:

q(w) := T_d(w/(n − 1)) / T_d(n/(n − 1)), with d := ⌈√(2n)⌉.

Intuitively, this uses the fact that Chebyshev polynomials are bounded in [−1, 1] (Property 2) and then increase very fast beyond 1 (Properties 3 and 4).

More precisely, we have:

- q(n) = 1 by construction;
- for 0 ≤ w ≤ n − 1, we have w/(n − 1) ∈ [0, 1], so that |T_d(w/(n − 1))| ≤ 1 by Property 2;
- T_d(n/(n − 1)) = T_d(1 + 1/(n − 1)) ≥ 1 + d²/(n − 1) ≥ 3 by Properties 3 and 4 (together with the mean value theorem), and therefore for d = ⌈√(2n)⌉ we have |q(w)| ≤ 1/3 for every 0 ≤ w ≤ n − 1.
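The whole construction is easy to test numerically. The sketch below uses the normalization q(w) = T_d(w/(n−1)) / T_d(n/(n−1)) with d ≈ 2√n; the concrete constants are our choices for the demo:

```python
import math

def T(d, x):
    """Chebyshev polynomial T_d(x) via T_{d+1}(x) = 2x T_d(x) - T_{d-1}(x)."""
    a, b = 1.0, float(x)
    if d == 0:
        return a
    for _ in range(d - 1):
        a, b = b, 2 * x * b - a
    return b

# Spot-check Properties 1-4: T_d(cos t) = cos(dt), boundedness on [-1, 1],
# T_d(1) = 1, and growth T_d(1 + delta) >= 1 + d^2 delta beyond 1.
for d in range(1, 9):
    assert abs(T(d, math.cos(0.7)) - math.cos(d * 0.7)) < 1e-9
    assert all(abs(T(d, -1 + t / 500)) <= 1 + 1e-9 for t in range(1001))
    assert T(d, 1) == 1
    assert T(d, 1 + 1 / d**2) >= 2

# The univariate polynomial from the proof: small on 0, ..., n-1, equal
# to 1 at n; then p(x) = q(x_1 + ... + x_n) approximates AND on n bits.
n = 100
d = math.ceil(2 * math.sqrt(n))  # degree O(sqrt(n))
q = lambda w: T(d, w / (n - 1)) / T(d, n / (n - 1))

err = max(abs(q(w) - (1 if w == n else 0)) for w in range(n + 1))
print(d, err)  # degree 20; the error is well below 1/3
```

For n = 100 this gives degree 20 and error about 0.12, matching deg_{1/3}(AND) = O(√n).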

Let us now prove the corresponding lower bound:

**Claim 6.** We have: deg_{1/3}(AND) = Ω(√n).

Proof. Let p be a polynomial that approximates the AND function within error 1/3. Consider q, the univariate symmetrization of p (as in the proof of Claim 2); it has degree at most deg(p).

We have the following result from approximation theory:

**Theorem 7.** Let p be a real univariate polynomial such that:

- |p(i)| ≤ 1 for every integer i ∈ {0, 1, …, n};
- p′(x) ≥ c for some constant c > 0 and some real x ∈ [0, n].

Then deg(p) = Ω(√(c·n)).

To prove our claim, it is therefore sufficient to check that q satisfies conditions 1 and 2 (up to rescaling q by a constant factor so that it is bounded by 1 at the integers), as we saw that deg(q) ≤ deg(p):

- We have |q(i)| ≤ 1 + 1/3 for every integer 0 ≤ i ≤ n, by assumption on p: q takes values within 1/3 of AND, whose values are 0 and 1;
- We have q(n − 1) ≤ 1/3 and q(n) ≥ 2/3 (by assumption), so that the mean value theorem gives some x ∈ (n − 1, n) such that q′(x) ≥ 1/3.

This concludes the proof.

#### 1.5 Approximate Degree of AND-OR.

Consider the AND function on R bits and the OR function on N bits. Let AND-OR : {0,1}^{R·N} → {0,1} be their composition (which outputs the AND of the outputs of the OR function applied to R disjoint blocks of N bits).

It is known that deg_{1/3}(AND-OR) = Θ(√(R·N)). To prove the upper bound, we will need a technique to compose approximating polynomials which we will discuss later.

Now we focus on the lower bound. This lower bound was recently proved independently by Sherstov and by Bun and Thaler. We present a proof that is different (either in substance or in language) and which we find more intuitive. Our proof replaces the “dual block method” with the following lemma.

**Lemma 8.** Suppose that

A and B are distributions over {0,1}^R that are k_A-wise indistinguishable; and

C and D are distributions over {0,1}^N that are k_C-wise indistinguishable.

Define the distribution A′ over {0,1}^{R·N} as follows: draw a sample x from A, and replace each bit x_i of x by a sample of C if x_i = 1, or by a sample of D if x_i = 0, independently across the R bits. Define B′ in the same way, starting from a sample of B.

Then A′ and B′ are k_A·k_C-wise indistinguishable.

Proof. Consider any set S of k_A·k_C bit positions; let us show that the bits in S have the same distribution under A′ and B′.

View the R·N positions as R *blocks* of N bits. Call a block heavy if it contains more than k_C positions of S; call the other blocks light.

There are at most k_A heavy blocks, since |S| ≤ k_A·k_C. Hence, by k_A-wise indistinguishability of A and B, the bits of x indexing the heavy blocks have the same distribution under A and B, so that the distribution of the (entire) heavy blocks is the same in A′ and B′.

Furthermore, conditioned on any outcome for the samples in the heavy blocks, the light blocks have the same distribution in both A′ and B′: a light block contains at most k_C positions of S, and by k_C-wise indistinguishability of C and D these positions have the same distribution whether the block is a sample of C or a sample of D, that is, regardless of the underlying bit of x.

Therefore the bits in S have the same distribution under A′ and B′, and A′ and B′ are k_A·k_C-wise indistinguishable.
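The lemma can be verified exhaustively in a small case. Below (our choice of parameters), A and B are uniform on the even- and odd-parity strings of {0,1}^3, hence 2-wise indistinguishable, and likewise C and D; the composed distributions on 9 bits are then 4-wise indistinguishable, even though the global 9-bit parity separates them perfectly:

```python
from itertools import combinations, product

even = [x for x in product([0, 1], repeat=3) if sum(x) % 2 == 0]
odd = [x for x in product([0, 1], repeat=3) if sum(x) % 2 == 1]
# A = even, B = odd are 2-wise indistinguishable, and so are C = even, D = odd.

def compose(outer, one, zero):
    """Replace each bit of each (uniform) outer sample by a uniform sample
    of `one` (bit 1) or `zero` (bit 0); return {string: probability}.
    Assumes len(one) == len(zero), so every branch has equal weight."""
    dist = {}
    for x in outer:
        for blocks in product(*[(one if b else zero) for b in x]):
            y = tuple(bit for block in blocks for bit in block)
            dist[y] = dist.get(y, 0) + 1 / (len(outer) * len(one) ** len(x))
    return dist

def marginal(dist, positions):
    out = {}
    for y, p in dist.items():
        key = tuple(y[i] for i in positions)
        out[key] = out.get(key, 0) + p
    return out

Ap = compose(even, even, odd)  # A' built from A = even
Bp = compose(odd, even, odd)   # B' built from B = odd

# Every projection onto 4 of the 9 positions is identical: 4-wise indist.
for S in combinations(range(9), 4):
    mA, mB = marginal(Ap, S), marginal(Bp, S)
    assert all(abs(mA.get(k, 0) - mB.get(k, 0)) < 1e-12 for k in set(mA) | set(mB))

# Globally, the two are perfectly distinguishable by the parity of all 9 bits.
assert all(sum(y) % 2 == 1 for y in Ap) and all(sum(y) % 2 == 0 for y in Bp)
```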

by Manu at October 17, 2017 02:08 PM UTC