# Polynomials in combinatorics and in complexity theory

Instructors: Pascal Koiran and Stéphan Thomassé

## Polynomials in combinatorics

One of the most successful algorithmic paradigms is certainly linear optimization, where one casts a discrete problem P as a linear program and then tries to infer integral solutions. A natural generalization is to ask for a higher-order formulation and thus encode P using polynomials. There are several ways to achieve this; for instance, deciding whether problem P has a solution usually reduces to one of the following:

1) Deciding whether a polynomial ideal contains 1 (this is the so-called Hilbert's Nullstellensatz).

2) Polynomial Identity Testing (PIT): deciding whether a given polynomial is identically 0.
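As a minimal sketch of how PIT is attacked in practice, here is the classical randomized test based on the Schwartz-Zippel lemma, for a polynomial given as a black box (the function names and parameter choices below are illustrative, not from the course):

```python
import random

def is_identically_zero(poly, num_vars, degree_bound, trials=20, field_size=10**9 + 7):
    """Randomized PIT via the Schwartz-Zippel lemma.

    `poly` is a black box mapping a tuple of integers to an integer.
    If the polynomial is nonzero and has total degree <= degree_bound,
    a uniformly random evaluation point vanishes with probability at
    most degree_bound / field_size, so repeated zero evaluations are
    strong evidence that the polynomial is identically zero.
    """
    for _ in range(trials):
        point = tuple(random.randrange(field_size) for _ in range(num_vars))
        if poly(point) % field_size != 0:
            return False  # a nonzero evaluation certifies nonzeroness
    return True  # identically zero with high probability

# Example: (x+y)^2 - (x^2 + 2xy + y^2) is the zero polynomial,
# while (x+y)^2 - (x^2 + y^2) = 2xy is not.
zero_poly = lambda p: (p[0] + p[1])**2 - (p[0]**2 + 2*p[0]*p[1] + p[1]**2)
nonzero_poly = lambda p: (p[0] + p[1])**2 - (p[0]**2 + p[1]**2)
```

Note that the test only ever evaluates the polynomial, so it applies even when the polynomial is given by a formula or circuit whose expanded form would be exponentially large.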

Applications of these polynomial methods are very exciting, since they often provide the only known proof of a combinatorial result, or yield the fastest known algorithm.

For instance, every graph on the vertices 1, 2, ..., 3n which is the union of the cycle 1, 2, ..., 3n and a collection of n pairwise vertex-disjoint triangles on these same vertices has chromatic number 3, but the only known proof of this fact uses polynomials (and no polynomial-time algorithm to 3-color such a graph is known).
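To get a feel for the statement, one can verify the smallest nontrivial instance by brute force. The sketch below (our own toy check, not the polynomial proof) takes n = 2: the 6-cycle on vertices 0..5 together with the two disjoint triangles {0, 2, 4} and {1, 3, 5}:

```python
from itertools import product

def is_proper(coloring, edges):
    """A coloring is proper when no edge joins two equal colors."""
    return all(coloring[u] != coloring[v] for u, v in edges)

# n = 2: the 6-cycle plus the two disjoint triangles {0,2,4} and {1,3,5}.
cycle = [(i, (i + 1) % 6) for i in range(6)]
triangles = [(0, 2), (2, 4), (4, 0), (1, 3), (3, 5), (5, 1)]
edges = cycle + triangles

# Exhaustive search over all 3^6 assignments of the colors {0, 1, 2}.
colorings = [c for c in product(range(3), repeat=6) if is_proper(c, edges)]
```

The search confirms that proper 3-colorings exist (for instance coloring vertex i with i mod 3); the point of the theorem is that this works for every n and every placement of the triangles, which brute force cannot show.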

We will survey these techniques and try to provide a broad landscape of applications (error-correcting codes, information theory, graph theory, machine learning, discrete geometry, Ramsey theory, etc.).

## Polynomials in complexity theory

Arithmetic circuits provide a natural model for the computation of polynomials. These circuits are made of addition and multiplication gates, instead of the Boolean gates which form the building blocks of Boolean circuits. We will see several upper and lower bound results for this model. The ultimate goal in this area is to obtain a superpolynomial lower bound for some “explicit” polynomial. This would be an arithmetic analogue of “P ≠ NP”.

More surprisingly, polynomials also appear in Boolean complexity (for instance, in interactive proofs or in lower bounds for Boolean circuits). We will see some of these Boolean applications in this course.

Tensors can be viewed as multidimensional arrays. In 2 dimensions these are just the matrices well known from linear algebra, but tensors can also be three-dimensional, four-dimensional, and so on. They have become important in diverse areas such as algebraic complexity (in particular, for the study of matrix multiplication), machine learning and signal processing. They will appear naturally in this course since a tensor can be viewed as the array of coefficients of a multivariate polynomial.
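The coefficient-array viewpoint can be sketched in a few lines (pure Python nested lists for illustration; a real implementation would use a numerical library): a 2×2×2 tensor T defines the trilinear polynomial p(x, y, z) = Σ T[i][j][k] · x_i · y_j · z_k.

```python
# A 2x2x2 tensor, viewed as the coefficient array of the trilinear
# polynomial p(x, y, z) = sum over i, j, k of T[i][j][k] * x_i * y_j * z_k.
T = [[[1, 0], [0, 1]],
     [[0, 1], [1, 0]]]

def trilinear(T, x, y, z):
    """Evaluate the trilinear polynomial whose coefficients are T."""
    return sum(T[i][j][k] * x[i] * y[j] * z[k]
               for i in range(len(T))
               for j in range(len(T[0]))
               for k in range(len(T[0][0])))
```

In the same spirit, a symmetric matrix is the coefficient array of a quadratic form, and the tensor of matrix multiplication (whose rank governs the exponent of matrix multiplication) encodes the bilinear map (A, B) ↦ AB.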