# Smoothed Analysis of Algorithms: Why The Simplex Algorithm Usually Takes Polynomial Time

## ABSTRACT

Linear programming, the problem of maximizing a linear function over a convex polytope, has been one of the most interesting points of contact between Combinatorics and Computer Science. The most popular algorithm for solving linear programs is the simplex algorithm, which works by walking from vertex to vertex along the skeleton of the polytope.

While the simplex algorithm is known to work very well in practice, its theoretical analyses have been negative or inconclusive. Attempts to prove the Hirsch conjecture on the diameter of polytopes have been a fascinating approach to understanding the simplex algorithm, but even the best results of Kalai and Kleitman do not come close to practical experience. Even if better bounds on the diameters of polytopes exist, worst-case analyses of the simplex algorithm point out that there are inputs on which variants of the algorithm take exponential time. As these analyses do not match practical experience, average-case analyses were performed. In the 1980s, the simplex algorithm was shown to converge in expected polynomial time on various distributions of random inputs, most notably by Borgwardt and Smale. However, the last 20 years of research in probability, combinatorics, and numerical analysis have taught us that these random instances have very special properties that one should not expect to find in practice.

To help resolve the discrepancy between theoretical analysis and practical experience, we propose a new approach to analyzing algorithms that we call smoothed analysis. In smoothed analysis, we measure the performance of an algorithm under slight random perturbations of arbitrary inputs. In particular, we consider Gaussian perturbations of inputs to algorithms that take real and complex inputs, and we measure the running time of algorithms in terms of the input size and the variance of the perturbations.
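One way to make this measure precise (the notation here is ours, following the description above) is to define the smoothed complexity of an algorithm with running time T as the worst case, over inputs x, of the expected running time under a Gaussian perturbation of x:

```latex
% smoothed complexity at input size n and perturbation scale \sigma:
% worst case over inputs x, expectation over the Gaussian perturbation g
C_{\mathrm{smoothed}}(n, \sigma)
  = \max_{x} \; \mathbb{E}_{g \sim \mathcal{N}(0, \sigma^2 I)}
      \left[ T(x + g) \right]
```

Polynomial smoothed complexity then means this quantity is bounded by a polynomial in n and $1/\sigma$.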

We show that the simplex algorithm has polynomial smoothed complexity. That is, under a slight random perturbation of the facets of a polytope, there is probably a path of polynomial length from a starting vertex to the vertex optimizing the objective function.

In particular, for every matrix A with entries of absolute value at most 1, every vector c, and every vector b whose entries are 1 or -1, we show that the simplex algorithm using the shadow-vertex pivot rule takes expected time polynomial in $1/\sigma$ and the sizes of A and c to solve

$$\text{minimize } c^{T} x \quad \text{subject to} \quad (A + \sigma G)\, x \le b,$$

where G is a matrix of independent Gaussian entries. If A is well-scaled, then the solution to this perturbed program is an approximation to the solution of the original one.
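To make the perturbed program concrete, here is a minimal numpy sketch (our own toy instance, solved by brute-force vertex enumeration rather than by the simplex method the talk analyzes): A has entries of absolute value at most 1, b has entries equal to 1, and A is perturbed by a Gaussian matrix scaled by sigma.

```python
import itertools
import numpy as np

# Toy instance of   min c^T x  s.t.  (A + sigma*G) x <= b
# with |A_ij| <= 1, b_i in {+1,-1}, and G a standard Gaussian matrix.
rng = np.random.default_rng(0)

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # |entries| <= 1
b = np.ones(4)            # all entries +1: the unit box [-1, 1]^2
c = np.array([1.0, 1.0])  # objective to minimize
sigma = 0.01

G = rng.standard_normal(A.shape)  # Gaussian perturbation
A_pert = A + sigma * G

# A tiny LP can be solved by enumerating vertices: each vertex of the
# polytope is the intersection of n linearly independent tight constraints.
n = A.shape[1]
best_val, best_x = np.inf, None
for rows in itertools.combinations(range(A.shape[0]), n):
    M = A_pert[list(rows)]
    if abs(np.linalg.det(M)) < 1e-12:
        continue  # constraints not independent, no vertex here
    x = np.linalg.solve(M, b[list(rows)])
    if np.all(A_pert @ x <= b + 1e-9):  # keep only feasible vertices
        val = c @ x
        if val < best_val:
            best_val, best_x = val, x

# The unperturbed optimum is -2 at (-1, -1); the perturbed one is nearby.
print(best_val, best_x)
```

With small sigma, the perturbed polytope is a slight tilt of the original box, so the optimal value stays close to the unperturbed value of -2, illustrating the "well-scaled" remark above.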

(This talk is joint with the Applied Math Colloquium)

Speaker's Contact Info: spielman(at-sign)math.mit.edu