Publications

Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs

with Michael B. Cohen, John Peebles, Richard Peng, Anup Rao, Aaron Sidford, and Adrian Vladu.
Full version. Extended abstract appeared in Symposium on the Theory of Computing (STOC) 2017.
In this paper we introduce a notion of spectral approximation for directed graphs. While there are many potential ways one might define approximation for directed graphs, most of them are too strong to allow sparse approximations in general. In contrast, we prove that for our notion of approximation, such sparsifiers do exist, and we show how to compute them in almost linear time.

Using this notion of approximation, we provide a general framework for solving asymmetric linear systems that is broadly inspired by the work of [Peng-Spielman, STOC'14]. Applying this framework in conjunction with our sparsification algorithm, we obtain an almost linear time algorithm for solving directed Laplacian systems associated with Eulerian graphs. Using this solver in the recent framework of [Cohen-Kelner-Peebles-Peng-Sidford-Vladu, FOCS'16], we obtain almost linear time algorithms for solving a directed Laplacian linear system, computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, and more.

For each of these problems, our algorithms improve the previous best running time of $O((nm^{3/4} + n^{2/3} m) \log^{O(1)} (n \kappa \epsilon^{-1}))$ to $O((m + n2^{O(\sqrt{\log{n}\log\log{n}})}) \log^{O(1)} (n \kappa \epsilon^{-1}))$, where $n$ is the number of vertices in the graph, $m$ is the number of edges, $\kappa$ is a natural condition number associated with the problem, and $\epsilon$ is the desired accuracy. We hope these results open the door for further studies into directed spectral graph theory, and will serve as a stepping stone for designing a new generation of fast algorithms for directed graphs.

A Nearly Tight Sum-of-Squares Lower Bound for the Planted Clique Problem

with Boaz Barak, Samuel B. Hopkins, Pravesh K. Kothari, Ankur Moitra, and Aaron Potechin.
Full version. Extended abstract appeared in Foundations of Computer Science (FOCS) 2016.
We prove that with high probability over the choice of a random graph $G$ from the Erdős-Rényi distribution $G(n,1/2)$, the $n^{O(d)}$-time degree $d$ Sum-of-Squares semidefinite programming relaxation for the clique problem will give a value of at least $n^{1/2-c(d/\log n)^{1/2}}$ for some constant $c>0$. This yields a nearly tight $n^{1/2 - o(1)}$ bound on the value of this program for any degree $d = o(\log n)$. Moreover we introduce a new framework that we call pseudo-calibration to construct Sum of Squares lower bounds. This framework is inspired by taking a computational analog of Bayesian probability theory. It yields a general recipe for constructing good pseudo-distributions (i.e., dual certificates for the Sum-of-Squares semidefinite program), and sheds further light on the ways in which this hierarchy differs from others.

Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More

with Michael B. Cohen, John Peebles, Richard Peng, Aaron Sidford, and Adrian Vladu.
Full version. Extended abstract appeared in Foundations of Computer Science (FOCS) 2016.
In this paper, we provide faster algorithms for computing various fundamental quantities associated with random walks on a directed graph, including the stationary distribution, personalized PageRank vectors, hitting times, and escape probabilities. In particular, on a directed graph with $n$ vertices and $m$ edges, we show how to compute each quantity in time $\tilde{O}(m^{3/4}n+mn^{2/3})$, where the $\tilde{O}$ notation suppresses polylogarithmic factors in $n$, the desired accuracy, and the appropriate condition number (i.e. the mixing time or restart probability).

Our result improves upon the previous fastest running times for these problems; previous results either invoke a general purpose linear system solver on an $n\times n$ matrix with $m$ non-zero entries, or depend polynomially on the desired error or natural condition number associated with the problem (i.e. the mixing time or restart probability). For sparse graphs, we obtain a running time of $\tilde{O}(n^{7/4})$, breaking the $O(n^{2})$ barrier of the best running time one could hope to achieve using fast matrix multiplication.

We achieve our result by providing a similar running time improvement for solving directed Laplacian systems, a natural directed or asymmetric analog of the well studied symmetric or undirected Laplacian systems. We show how to solve such systems in time $\tilde{O}(m^{3/4}n+mn^{2/3})$, and efficiently reduce a broad range of problems to solving $\tilde{O}(1)$ directed Laplacian systems on Eulerian graphs. We hope these results and our analysis open the door for further study into directed spectral graph theory.
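
As a concrete illustration of the objects involved (a sketch, not the paper's solver), the snippet below builds a directed Laplacian under one common convention, $L = D - A^T$ with $A_{ij}$ the weight of edge $i \rightarrow j$ and $D$ the diagonal matrix of out-degrees, and checks numerically that the stationary distribution $\pi$ of the associated random walk spans its kernel via $L(D^{-1}\pi) = 0$. All parameter choices are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    A = rng.random((n, n)) * (rng.random((n, n)) < 0.5)  # random weighted digraph
    A += 0.01                      # small weight on every arc, so the chain is irreducible
    np.fill_diagonal(A, 0.0)
    d = A.sum(axis=1)              # out-degrees
    L = np.diag(d) - A.T           # directed Laplacian (one common convention)

    P = A / d[:, None]             # random-walk matrix, P[i, j] = Pr(step i -> j)
    pi = np.ones(n) / n
    for _ in range(5000):          # power iteration for pi^T P = pi^T
        pi = P.T @ pi
        pi /= pi.sum()

    print(np.linalg.norm(L @ (pi / d)))  # ~ 0: D^{-1} pi spans the kernel of L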

Dictionary Learning and Tensor Decomposition via the Sum-of-Squares Method

with Boaz Barak and David Steurer.
Full version. Extended abstract appeared in Symposium on the Theory of Computing (STOC) 2015.
We give a new approach to the dictionary learning (also known as "sparse coding") problem of recovering an unknown $n\times m$ matrix $A$ (for $m \geq n$) from examples of the form \[ y = Ax + e, \] where $x$ is a random vector in $\mathbb R^m$ with at most $\tau m$ nonzero coordinates, and $e$ is a random noise vector in $\mathbb R^n$ with bounded magnitude. For the case $m=O(n)$, our algorithm recovers every column of $A$ within arbitrarily good constant accuracy in time $m^{O(\log m/\log(\tau^{-1}))}$, in particular achieving polynomial time if $\tau = m^{-\delta}$ for any $\delta>0$, and time $m^{O(\log m)}$ if $\tau$ is (a sufficiently small) constant. Prior algorithms with comparable assumptions on the distribution required the vector $x$ to be much sparser---at most $\sqrt{n}$ nonzero coordinates---and there were intrinsic barriers preventing these algorithms from applying for denser $x$.
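
The generative model above is easy to instantiate. A minimal sketch follows; the dimensions, sparsity level, and noise magnitude are arbitrary illustration choices, not the paper's parameters.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, tau = 64, 128, 0.05
    A = rng.standard_normal((n, m)) / np.sqrt(n)   # the unknown dictionary

    def sample():
        x = np.zeros(m)                            # x has at most tau*m nonzero coordinates
        support = rng.choice(m, size=int(tau * m), replace=False)
        x[support] = rng.choice([-1.0, 1.0], size=len(support))
        e = 0.01 * rng.standard_normal(n)          # noise of bounded magnitude
        return A @ x + e                           # one example y = A x + e

    ys = np.stack([sample() for _ in range(1000)]) # inputs to a recovery algorithm
    print(ys.shape)                                # (1000, 64)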

We achieve this by designing an algorithm for noisy tensor decomposition that can recover, under quite general conditions, an approximate rank-one decomposition of a tensor $T$, given access to a tensor $T'$ that is $\tau$-close to $T$ in the spectral norm (when considered as a matrix). To our knowledge, this is the first algorithm for tensor decomposition that works in the constant spectral-norm noise regime, where there is no guarantee that the local optima of $T$ and $T'$ have similar structures.

Our algorithm is based on a novel approach to using and analyzing the Sum of Squares semidefinite programming hierarchy (Parrilo 2000, Lasserre 2001), and it can be viewed as an indication of the utility of this very general and powerful tool for unsupervised learning problems.

Rounding Sum-of-Squares Relaxations

with Boaz Barak and David Steurer.
Full version. Extended abstract appeared in Symposium on the Theory of Computing (STOC) 2014.
We present a general approach to rounding semidefinite programming relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our approach is based on using the connection between these relaxations and the Sum-of-Squares proof system to transform a combining algorithm -- an algorithm that maps a distribution over solutions into a (possibly weaker) solution -- into a rounding algorithm that maps a solution of the relaxation to a solution of the original problem.

Using this approach, we obtain algorithms that yield improved results for natural variants of three well-known problems:

1) We give a quasipolynomial-time algorithm that approximates the maximum of a low degree multivariate polynomial with non-negative coefficients over the Euclidean unit sphere. Beyond being of interest in its own right, this is related to an open question in quantum information theory, and our techniques have already led to improved results in this area (Brandão and Harrow, STOC '13).

2) We give a polynomial-time algorithm that, given a $d$-dimensional subspace of $\mathbb{R}^n$ that (almost) contains the characteristic function of a set of size $n/k$, finds a vector $v$ in the subspace satisfying $|v|_4^4 > c(k/d^{1/3}) |v|_2^4$, where $|v|_p = (E_i v_i^p)^{1/p}$. Aside from being a natural relaxation, this is also motivated by a connection to the Small Set Expansion problem shown by Barak et al. (STOC 2012), and our results yield a certain improvement for that problem.

3) We use this notion of $L_4$ vs. $L_2$ sparsity to obtain a polynomial-time algorithm with substantially improved guarantees for recovering a planted $\mu$-sparse vector $v$ in a random $d$-dimensional subspace of $\mathbb{R}^n$. If $v$ has $\mu n$ nonzero coordinates, we can recover it with high probability whenever $\mu < O(\min(1,n/d^2))$, improving, for $d < n^{2/3}$, on prior methods, which intrinsically required $\mu < O(1/\sqrt{d})$. (A small numerical sanity check of this sparsity measure follows the list.)
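
The sanity check promised above: with the normalized norms of item 2, the characteristic vector of a set of size $n/k$ has scale-invariant ratio $|v|_4^4 / |v|_2^4 = k$, while a dense Gaussian vector has a ratio close to 3. This is only a quick numerical confirmation of why the $L_4$ vs. $L_2$ ratio serves as a sparsity measure, not a computation from the paper.

    import numpy as np

    n, k = 10000, 100
    norm = lambda u, p: np.mean(np.abs(u) ** p) ** (1.0 / p)  # |u|_p = (E_i u_i^p)^{1/p}

    v = np.zeros(n)
    v[: n // k] = 1.0                                  # characteristic vector of a set of size n/k
    print(norm(v, 4) ** 4 / norm(v, 2) ** 4)           # k = 100: very "L4-sparse"

    g = np.random.default_rng(2).standard_normal(n)    # dense, spread-out vector
    print(norm(g, 4) ** 4 / norm(g, 2) ** 4)           # ~ 3: not sparse at all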

An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations

with Yin Tat Lee, Lorenzo Orecchia, and Aaron Sidford.
Full version. Extended abstract appeared in Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA) 2014 and received Best Paper Award.
In this paper, we introduce a new framework for approximately solving flow problems in capacitated, undirected graphs and apply it to provide asymptotically faster algorithms for the maximum $s$-$t$ flow and maximum concurrent multicommodity flow problems. For graphs with $n$ vertices and $m$ edges, it allows us to find an $\epsilon$-approximate maximum $s$-$t$ flow in time $O(m^{1+o(1)}\epsilon^{-2})$, improving on the previous best bound of $\tilde{O}(mn^{1/3} \mathrm{poly}(1/\epsilon))$. Applying the same framework in the multicommodity setting solves a maximum concurrent multicommodity flow problem with $k$ commodities in $O(m^{1+o(1)}\epsilon^{-2}k^2)$ time, improving on the existing bound of $\tilde{O}(m^{4/3} \mathrm{poly}(k,\epsilon^{-1}))$.

Our algorithms utilize several new technical tools that we believe may be of independent interest:
  • We give a non-Euclidean generalization of gradient descent and provide bounds on its performance. Using this, we show how to reduce approximate maximum flow and maximum concurrent flow to the efficient construction of oblivious routings with a low competitive ratio.
  • We define and provide an efficient construction of a new type of flow sparsifier. In addition to providing the standard properties of a cut sparsifier our construction allows for flows in the sparse graph to be routed (very efficiently) in the original graph with low congestion.
  • We give the first almost-linear-time construction of an $O(m^{o(1)})$-competitive oblivious routing scheme. No previous such algorithm ran in time better than $\tilde{\Omega}(mn)$.
We note that independently Jonah Sherman produced an almost linear time algorithm for maximum flow and we thank him for coordinating arXiv submissions.

A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time

with Lorenzo Orecchia, Aaron Sidford, and Zeyuan Allen Zhu.
Full version. Extended abstract appeared in Symposium on the Theory of Computing (STOC), 2013.
In this paper, we present a simple combinatorial algorithm that solves symmetric diagonally dominant (SDD) linear systems in nearly-linear time. It uses very little of the machinery that previously appeared to be necessary for such an algorithm. It does not require recursive preconditioning, spectral sparsification, or even the Chebyshev Method or Conjugate Gradient. After constructing a "nice" spanning tree of a graph associated with the linear system, the entire algorithm consists of the repeated application of a simple (non-recursive) update rule, which it implements using a lightweight data structure. The algorithm is numerically stable and can be implemented without the increased bit-precision required by previous solvers. As such, the algorithm has the fastest known running time under the standard unit-cost RAM model. We hope that the simplicity of the algorithm and the insights yielded by its analysis will be useful in both theory and practice.
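
The flavor of the update rule can be conveyed in a few dozen lines. The toy solver below is an illustration only: it uses a BFS tree, uniform cycle sampling, and unit resistances, whereas the actual algorithm needs a low-stretch spanning tree and carefully chosen sampling probabilities to achieve its nearly-linear running time.

    import random
    from collections import deque

    def toy_cycle_solver(n, edges, b, iters=20000):
        # Toy cycle-repair solver for L x = b on a connected unit-weight graph
        # with vertices 0..n-1; assumes sum(b) == 0.
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        parent, depth, order = [-1] * n, [0] * n, [0]   # BFS spanning tree from 0
        seen = [False] * n
        seen[0] = True
        dq = deque([0])
        while dq:
            u = dq.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    parent[v], depth[v] = u, depth[u] + 1
                    order.append(v)
                    dq.append(v)
        flow = [0.0] * n                 # flow[v] lives on the tree edge v -> parent[v]
        surplus = list(b)
        for v in reversed(order[1:]):    # route the demands b through the tree
            flow[v] = surplus[v]
            surplus[parent[v]] += surplus[v]
        tree = {(v, parent[v]) for v in order[1:]} | {(parent[v], v) for v in order[1:]}
        off = [e for e in edges if e not in tree]
        f_off = {e: 0.0 for e in off}

        def path(u, v):                  # signed tree-edge steps from u to v
            up, down = [], []
            while u != v:
                if depth[u] >= depth[v]:
                    up.append((u, +1.0))     # step u -> parent[u], along flow orientation
                    u = parent[u]
                else:
                    down.append((v, -1.0))   # step parent[v] -> v, against flow orientation
                    v = parent[v]
            return up + down[::-1]

        for _ in range(iters):           # the entire solver: repeat one simple update
            u, v = random.choice(off)
            p = path(v, u)               # cycle = off-tree edge u -> v plus tree path v -> u
            gap = f_off[(u, v)] + sum(s * flow[w] for w, s in p)  # net voltage around cycle
            delta = gap / (1 + len(p))   # cycle resistance = number of its (unit) edges
            f_off[(u, v)] -= delta       # push -delta around the cycle to cancel the gap
            for w, s in p:
                flow[w] -= delta * s
        x = [0.0] * n                    # read potentials off the tree flows (r = 1)
        for v in order[1:]:
            x[v] = x[parent[v]] + flow[v]
        return x

    # 3x3 grid graph; inject one unit of current at vertex 0, extract it at vertex 8.
    edges = [(i, i + 1) for i in (0, 1, 3, 4, 6, 7)] + [(i, i + 3) for i in range(6)]
    x = toy_cycle_solver(9, edges, [1.0] + [0.0] * 7 + [-1.0])
    print([round(xi - x[8], 3) for xi in x])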

Hypercontractivity, Sum-of-Squares Proofs, and their Applications

with Boaz Barak, Fernando G. S. L. Brandão, Aram W. Harrow, David Steurer, and Yuan Zhou.
Full version. Extended abstract appeared in Symposium on the Theory of Computing (STOC), 2012.
We study the computational complexity of approximating the $2\rightarrow q$ norm of linear operators (defined as $\left\|A\right\|_{2\rightarrow q}=\sup_{v} \left\|Av\right\|_q/\left\|v\right\|_2$), as well as connections between this question and issues arising in quantum information theory and the study of Khot's Unique Games Conjecture (UGC). We show the following:
  1. For any constant even integer $q\geq 4$, a graph $G$ is a small-set expander if and only if the projector onto the span of the top eigenvectors of $G$'s adjacency matrix has bounded $2 \rightarrow q$ norm. As a corollary, a good approximation to the $2\rightarrow q$ norm will refute the Small-Set Expansion Conjecture, a close variant of the UGC. We also show that such a good approximation can be obtained in $\exp(n^{2/q})$ time, thus obtaining a different proof of the known subexponential algorithm for Small Set Expansion.
  2. Constant rounds of the "Sum of Squares" semidefinite programming hierarchy certify an upper bound on the $2\rightarrow 4$ norm of the projector to low-degree polynomials over the Boolean cube, as well as certify the unsatisfiability of the "noisy cube" and "short code" based instances of Unique Games considered by prior works. This improves on the previous upper bound of $\exp(\mathrm{poly}\log n)$ rounds (for the "short code"), as well as separates the "Sum of Squares"/"Lasserre" hierarchy from weaker hierarchies that were known to require $\omega(1)$ rounds.
  3. We show reductions between computing the $2\rightarrow 4$ norm and computing the injective tensor norm of a tensor, a problem with connections to quantum information theory. Three corollaries are: (i) the $2\rightarrow 4$ norm is NP-hard to approximate to precision inverse-polynomial in the dimension, (ii) the $2\rightarrow 4$ norm does not have a good approximation (in the sense above) unless 3-SAT can be solved in time $\exp(\sqrt{n}\,\mathrm{polylog}(n))$, and (iii) known algorithms for the quantum separability problem imply a non-trivial additive approximation for the $2\rightarrow 4$ norm. (A small numerical illustration of the $2\rightarrow 4$ norm follows this list.)
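
The promised illustration: since $\|A\|_{2\rightarrow 4} = \sup_v \|Av\|_4/\|v\|_2$, normalized gradient ascent on the unit sphere gives a quick numerical lower bound on the norm (it only finds a local maximum; certifying a matching upper bound is exactly the hard direction in item 3). A sketch with an arbitrary random matrix:

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((50, 20))

    v = rng.standard_normal(20)
    v /= np.linalg.norm(v)
    for _ in range(500):
        g = A.T @ (A @ v) ** 3          # gradient of ||Av||_4^4 is 4 A^T (Av)^3
        v = g / np.linalg.norm(g)       # ascent step for a convex function on the sphere
    print(np.linalg.norm(A @ v, 4))     # a lower bound on ||A||_{2->4}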

Faster Approximate Multicommodity Flow Using Quadratically Coupled Flows

with Gary Miller and Richard Peng.
Full version. Extended abstract appeared in Symposium on the Theory of Computing (STOC), 2012.
The maximum multicommodity flow problem is a natural generalization of the maximum flow problem to route multiple distinct flows. Obtaining a $1-\epsilon$ approximation to the multicommodity flow problem on graphs is a well-studied problem. In this paper we present an adaptation of recent advances in single-commodity flow algorithms to this problem. As the linear systems underlying the electrical problems that arise from multicommodity flows are no longer Laplacians, our approach is tailored to generate specialized systems which can be preconditioned and solved efficiently using Laplacians. Given an undirected graph with $m$ edges and $k$ commodities, we give algorithms that find $1-\epsilon$ approximate solutions to the maximum concurrent flow problem and the maximum weighted multicommodity flow problem in time $\tilde{O}(m^{4/3}\mathrm{poly}(k,\epsilon^{-1}))$.

Global Computation in a Poorly Connected World: Fast Rumor Spreading with No Dependence on Conductance

with Keren Censor-Hillel, Bernhard Haeupler, and Petar Maymounkov.
Full version. Extended abstract appeared in Symposium on the Theory of Computing (STOC), 2012.
In this paper, we study the question of how efficiently a collection of interconnected nodes can perform a global computation in the widely studied GOSSIP model of communication. In this model, nodes do not know the global topology of the network, and they may only initiate contact with a single neighbor in each round. This model contrasts with the much less restrictive LOCAL model, where a node may simultaneously communicate with all of its neighbors in a single round. A basic question in this setting is how many rounds of communication are required for the information dissemination problem, in which each node has some piece of information and is required to collect all others.

In the LOCAL model, this is quite simple: each node broadcasts all of its information in each round, and the number of rounds required will be equal to the diameter of the underlying communication graph. In the GOSSIP model, each node must independently choose a single neighbor to contact, and the lack of global information makes it difficult to make any sort of principled choice. As such, researchers have focused on the uniform gossip algorithm, in which each node independently selects a neighbor uniformly at random. When the graph is well-connected, this works quite well. In a string of beautiful papers, researchers proved a sequence of successively stronger bounds on the number of rounds required in terms of the conductance $\phi$, culminating in a bound of $O(\phi^{-1} \log n)$.
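
A small simulation makes the contrast concrete. The sketch below runs synchronous PUSH-PULL uniform gossip on a "dumbbell" graph, whose conductance is on the order of $1/n^2$ even though its diameter is 3. It illustrates the baseline algorithm discussed above, not our algorithm, and its parameters are arbitrary.

    import random

    def uniform_gossip_rounds(adj):
        # Rounds of synchronous PUSH-PULL uniform gossip until every node
        # has collected every rumor.
        n = len(adj)
        know = [{v} for v in range(n)]
        rounds = 0
        while any(len(k) < n for k in know):
            rounds += 1
            targets = [random.choice(adj[v]) for v in range(n)]  # one uniform neighbor each
            new = [set(k) for k in know]
            for v, u in enumerate(targets):
                new[v] |= know[u]                                # pull from the contacted node
                new[u] |= know[v]                                # push to the contacted node
            know = new
        return rounds

    # Two 30-cliques joined by a single edge: conductance ~ 1/n^2, diameter 3.
    m = 30
    clique = lambda off: [[off + j for j in range(m) if j != i] for i in range(m)]
    adj = clique(0) + clique(m)
    adj[0].append(m)
    adj[m].append(0)
    print(uniform_gossip_rounds(adj))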

In this paper, we give an algorithm that solves the information dissemination problem in at most $O(D+\mathrm{polylog}(n))$ rounds in a network of diameter $D$, with no dependence on the conductance. This is at most an additive polylogarithmic factor from the trivial lower bound of $D$, which applies even in the LOCAL model.

In fact, we prove that something stronger is true: any algorithm that requires $T$ rounds in the LOCAL model can be simulated in $O(T +\mathrm{polylog}(n))$ rounds in the GOSSIP model. We thus prove that these two models of distributed computation are essentially equivalent.

Quantum money

with Scott Aaronson, Edward Farhi, David Gosset, Avinatan Hassidim, and Andrew Lutomirski.
Communications of the ACM 55(8): 84-92 (2012).

Randomized accuracy-aware program transformations for efficient approximate computations

with Zeyuan Allen Zhu, Sasa Misailovic, and Martin C. Rinard.
Proceedings of the 39th Annual ACM Symposium on Principles of Programming Languages (POPL), 2012.
Despite the fact that approximate computations have come to dominate many areas of computer science, the field of program transformations has focused almost exclusively on traditional semantics-preserving transformations that do not attempt to exploit the opportunity, available in many computations, to acceptably trade off accuracy for benefits such as increased performance and reduced resource consumption.

We present a model of computation for approximate computations and an algorithm for optimizing these computations. The algorithm works with two classes of transformations: implementation selection for functions (which selects one of a number of available implementations for a given function, with each implementation offering a different combination of accuracy and resource consumption) and reduction sampling (which randomly discards some of the inputs to a given reduction). The algorithm produces an optimal randomized program that minimizes resource consumption subject to an error bound specified in the form of a maximum expected error. The algorithm runs in time polynomial in the size of the program. It reduces the optimization problem to a higher-order nonlinear convex program, using a variety of techniques to eliminate non-convex and integer constraints, which would otherwise render the optimization problem intractable.
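
As a toy illustration of reduction sampling (invented for exposition, not taken from the paper): randomly discarding inputs to a reduction, here a mean, trades accuracy for work, and the optimizer's task is to choose such sampling rates, together with implementation selections, so as to minimize cost subject to a bound on expected error.

    import random

    def sampled_mean(xs, keep_prob):
        # Reduction sampling: keep each input independently with probability keep_prob.
        kept = [x for x in xs if random.random() < keep_prob]
        return sum(kept) / len(kept) if kept else 0.0

    data = [random.gauss(10.0, 2.0) for _ in range(100000)]
    for p in (1.0, 0.1, 0.01):   # lower keep probability: less work, more expected error
        print(p, sampled_mean(data, p))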

Topology Discovery of Sparse Random Graphs With Few Participants

with Animashree Anandkumar and Avinatan Hassidim.
Random Structures and Algorithms 43(1): 16-48, 2013. Extended abstract appeared in ACM SIGMETRICS 2011 and received Best Paper Award.
We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants.

Electrical Flows, Laplacian Systems, and Faster Approximation of Maximum Flow in Undirected Graphs

with Paul Christiano, Aleksander Madry, Daniel A. Spielman, and Shang-Hua Teng.
Full version. Extended abstract appeared in STOC '11 and received Best Paper Award.
We introduce a new approach to computing an approximately maximum s-t flow in a capacitated, undirected graph. This flow is computed by solving a sequence of electrical flow problems. Each electrical flow is given by the solution of a system of linear equations in a Laplacian matrix, and thus may be approximately computed in nearly-linear time.
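
In miniature, one step of this approach looks as follows: an electrical $s$-$t$ flow is obtained by solving a Laplacian system $Lx = b$. The sketch below uses a dense pseudoinverse for the solve; the point of the paper is that such systems admit nearly-linear-time approximate solvers. The graph and resistances are arbitrary.

    import numpy as np

    def electrical_flow(n, edges, resistances, s, t):
        B = np.zeros((len(edges), n))                 # signed edge-vertex incidence matrix
        for k, (u, v) in enumerate(edges):
            B[k, u], B[k, v] = 1.0, -1.0
        W = np.diag(1.0 / np.asarray(resistances))    # conductances
        L = B.T @ W @ B                               # graph Laplacian
        b = np.zeros(n)
        b[s], b[t] = 1.0, -1.0                        # one unit of current from s to t
        x = np.linalg.pinv(L) @ b                     # vertex potentials
        return W @ B @ x                              # flow on edge (u, v): (x_u - x_v)/r_e

    edges = [(0, 1), (1, 3), (0, 2), (2, 3), (1, 2)]
    print(electrical_flow(4, edges, [1.0] * 5, s=0, t=3))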

Using this approach, we develop the fastest known algorithm for computing approximately maximum $s$-$t$ flows. For a graph having $n$ vertices and $m$ edges, our algorithm computes a $(1-\epsilon)$-approximately maximum $s$-$t$ flow in time $\tilde{O}(mn^{1/3} \epsilon^{-11/3})$. A dual version of our approach computes a $(1+\epsilon)$-approximately minimum $s$-$t$ cut in time $\tilde{O}(m+n^{4/3}\epsilon^{-8/3})$, which is the fastest known algorithm for this problem as well. Previously, the best dependence on $m$ and $n$ was achieved by the algorithm of Goldberg and Rao (J. ACM 1998), which can be used to compute approximately maximum $s$-$t$ flows in time $\tilde{O}(m\sqrt{n}\epsilon^{-1})$, and approximately minimum $s$-$t$ cuts in time $\tilde{O}(m+n^{3/2}\epsilon^{-3})$.

Spectral Sparsification in the Semi-Streaming Setting

with Alex Levin.
Theory of Computing Systems 53(2): 243-262 (2013). Preliminary conference version appeared in the 28th International Symposium on Theoretical Aspects of Computer Science (STACS), 2011.
Let $G$ be a graph with $n$ vertices and $m$ edges. A sparsifier of $G$ is a sparse graph on the same vertex set approximating $G$ in some natural way. It allows us to say useful things about $G$ while considering much fewer than $m$ edges. The strongest commonly-used notion of sparsification is spectral sparsification; $H$ is a spectral sparsifier of $G$ if the quadratic forms induced by the Laplacians of $G$ and $H$ approximate one another well. This notion is strictly stronger than the earlier concept of combinatorial sparsification.

In this paper, we consider a semi-streaming setting, where we have only $\tilde{O}(n)$ storage space, and we thus cannot keep all of $G$. In this case, maintaining a sparsifier instead gives us a useful approximation to $G$, allowing us to answer certain questions about the original graph without storing all of it. In this paper, we introduce an algorithm for constructing a spectral sparsifier of $G$ with $O(n\log n/\epsilon^2)$ edges (where $\epsilon$ is a parameter measuring the quality of the sparsifier), taking $\tilde{O}(m)$ time and requiring only one pass over $G$. In addition, our algorithm has the property that it maintains at all times a valid sparsifier for the subgraph of $G$ that we have received.

Our algorithm is natural and conceptually simple. As we read edges of $G$, we add them to the sparsifier $H$. Whenever $H$ gets too big, we resparsify it in $\tilde{O}(n)$ time. Adding edges to a graph changes the structure of its sparsifier's restriction to the already existing edges. It would thus seem that the above procedure would cause errors to compound each time that we resparsify, and that we should need to either retain significantly more information or reexamine previously discarded edges in order to construct the new sparsifier. However, we show how to use the information contained in $H$ to perform this resparsification using only the edges retained by earlier steps in nearly linear time.
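
The overall loop is easy to sketch. The version below is an illustration only: it resparsifies with Spielman-Srivastava-style effective-resistance sampling computed from a dense pseudoinverse, whereas the paper shows how to resparsify using only the information in $H$, in nearly linear time and without compounding error. The function names and the budget parameter are invented for the sketch.

    import numpy as np

    def sparsify(n, edges, weights, samples, rng):
        # Sample edges with probability proportional to w_e * R_eff(e), reweight to
        # keep the Laplacian quadratic form approximately unchanged in expectation.
        B = np.zeros((len(edges), n))
        for k, (u, v) in enumerate(edges):
            B[k, u], B[k, v] = 1.0, -1.0
        Lpinv = np.linalg.pinv(B.T @ (weights[:, None] * B))
        reff = np.einsum('ki,ij,kj->k', B, Lpinv, B)          # effective resistances
        p = weights * reff
        p /= p.sum()
        idx = rng.choice(len(edges), size=samples, p=p)
        new = {}
        for k in idx:
            new[edges[k]] = new.get(edges[k], 0.0) + weights[k] / (samples * p[k])
        return list(new), np.array([new[e] for e in new])

    def streaming_sparsifier(n, stream, budget, rng):
        H_edges, H_w = [], np.zeros(0)
        for e in stream:                                       # one pass over G
            H_edges, H_w = H_edges + [e], np.append(H_w, 1.0)
            if len(H_edges) > budget:                          # H got too big: resparsify it
                H_edges, H_w = sparsify(n, H_edges, H_w, budget // 2, rng)
        return H_edges, H_w

    rng = np.random.default_rng(0)
    stream = [(i, j) for i in range(30) for j in range(i + 1, 30)]  # K_30, edge by edge
    print(len(streaming_sparsifier(30, stream, budget=120, rng=rng)[0]))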

Metric uniformization and spectral bounds for graphs

with James R. Lee, Gregory N. Price, and Shang-Hua Teng.
Geometric and Functional Analysis 21(5): 1117-1143, October 2011. Extended abstract appeared in IEEE Foundations of Computer Science (FOCS) 2009 under the title "Higher eigenvalues of graphs".
We present a method for proving upper bounds on the eigenvalues of the graph Laplacian. A main step involves choosing an appropriate "Riemannian" metric to uniformize the geometry of the graph. In many interesting cases, the existence of such a metric is shown by examining the combinatorics of special types of flows. This involves proving new inequalities on the crossing number of graphs.

In particular, we use our method to show that for any positive integer $k$, the $k$th smallest eigenvalue of the Laplacian on an $n$-vertex, bounded-degree planar graph is $O(k/n)$. This bound is asymptotically tight for every $k$, as it is easily seen to be achieved for square planar grids. We also extend this spectral result to graphs with bounded genus, and graphs which forbid fixed minors. Previously, such spectral upper bounds were only known for the case $k=2$.
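
The bound is easy to probe numerically. The following sketch (a sanity check, not part of the paper) computes the Laplacian spectrum of a square grid, the tight case mentioned above, and confirms that $\lambda_k/(k/n)$ stays bounded as $k$ grows.

    import numpy as np

    side = 20
    n = side * side
    L = np.zeros((n, n))
    for i in range(side):                      # build the grid-graph Laplacian
        for j in range(side):
            for di, dj in ((0, 1), (1, 0)):
                if i + di < side and j + dj < side:
                    a, b = i * side + j, (i + di) * side + (j + dj)
                    L[a, a] += 1; L[b, b] += 1; L[a, b] -= 1; L[b, a] -= 1
    eig = np.sort(np.linalg.eigvalsh(L))
    for k in (2, 5, 10, 50):
        print(k, eig[k - 1] / (k / n))         # ratio lambda_k / (k/n) stays O(1)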

Breaking and making quantum money: toward a new quantum cryptographic protocol

with Andrew Lutomirski, Scott Aaronson, Edward Farhi, David Gosset, Avinatan Hassidim, and Peter Shor.
Innovations in Computer Science (ICS), 2010.
Public-key quantum money is a cryptographic protocol in which a bank can create quantum states which anyone can verify but no one except possibly the bank can clone or forge. There are no secure public-key quantum money schemes in the literature; as we show in this paper, the only previously published scheme [1] is insecure. We introduce a category of quantum money protocols which we call collision-free. For these protocols, even the bank cannot prepare multiple identical-looking pieces of quantum money. We present a blueprint for how such a protocol might work as well as a concrete example which we believe may be insecure.

Faster generation of random spanning trees

with Aleksander Madry.
IEEE Foundations of Computer Science (FOCS) 2009.
In this paper, we set forth a new algorithm for generating approximately uniformly random spanning trees in undirected graphs. We show how to sample from a distribution that is within a multiplicative factor of $(1+\delta)$ of uniform in expected time $\tilde{O}(m\sqrt{n}\log(1/\delta))$. This improves the sparse graph case of the best previously known worst-case bound of $O(\min \{mn, n^{2.376}\})$, which has stood for twenty years.

To achieve this goal, we exploit the connection between random walks on graphs and electrical networks, and we use this to introduce a new approach to the problem that integrates discrete random walk-based techniques with continuous linear algebraic methods. We believe that our use of electrical networks and sparse linear system solvers in conjunction with random walks and combinatorial partitioning techniques is a useful paradigm that will find further applications in algorithmic graph theory.
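
For context, the simplest instance of the random-walk connection is the classical Aldous-Broder sampler, which produces an exactly uniform spanning tree in expected cover time. Our algorithm is very different, but the sketch below shows the baseline object of study.

    import random

    def aldous_broder(adj, root=0):
        # Run a random walk; the first-entrance edge of each vertex forms a
        # uniformly random spanning tree (classical Aldous-Broder algorithm).
        n = len(adj)
        tree, seen, u = [], {root}, root
        while len(seen) < n:
            v = random.choice(adj[u])       # one step of the random walk
            if v not in seen:               # first visit: record the entering edge
                seen.add(v)
                tree.append((u, v))
            u = v
        return tree

    cycle = [[(i - 1) % 8, (i + 1) % 8] for i in range(8)]  # the 8-cycle
    print(aldous_broder(cycle))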

Local graph partitions for approximation and testing

with Avinatan Hassidim, Huy Nguyen, and Krzysztof Onak.
IEEE Foundations of Computer Science (FOCS) 2009.
We introduce a new tool for approximation and testing algorithms called partitioning oracles. We develop methods for constructing them for any class of bounded-degree graphs with an excluded minor, and in general, for any hyperfinite class of bounded-degree graphs. These oracles utilize only local computation to consistently answer queries about a global partition that breaks the graph into small connected components by removing only a small fraction of the edges.

We illustrate the power of this technique by using it to extend and simplify a number of previous approximation and testing results for sparse graphs, as well as to provide new results that were unachievable with existing techniques. For instance:

  • We give constant-time approximation algorithms for the size of the minimum vertex cover, the minimum dominating set, and the maximum independent set for any class of graphs with an excluded minor.
  • We show a simple proof that any minor-closed graph property is testable in constant time in the bounded degree model.
  • We prove that it is possible to approximate the distance to almost any hereditary property in any bounded-degree hereditary family of graphs. Hereditary properties of interest include bipartiteness, $k$-colorability, and perfectness.

Electric routing and concurrent flow cutting

with Petar Maymounkov.
Theoretical Computer Science 412(32): 4123-4135 (2011). Extended abstract published in Springer LNCS Book No. 5878, Proceedings of the 20th International Symposium on Algorithms and Computation (ISAAC'09).
We investigate an oblivious routing scheme, amenable to distributed computation and resilient to graph changes, based on electrical flow. Our main technical contribution is a new rounding method which we use to obtain a bound on the $\ell_1\rightarrow \ell_1$ operator norm of the inverse graph Laplacian. We show how this norm reflects both latency and congestion of electric routing.

Large-scale identification of genetic design strategies using local search

with Desmond S. Lun, Graham Rockwell, Nicholas J. Guido, Michael Baym, Bonnie Berger, James E. Galagan, and George M. Church.
Molecular Systems Biology, vol. 5, August 2009.
In the past decade, computational methods have been shown to be well suited to unraveling the complex web of metabolic reactions in biological systems. Methods based on flux-balance analysis (FBA) and bi-level optimization have been used to great effect in aiding metabolic engineering. These methods predict the result of genetic manipulations and allow for the best set of manipulations to be found computationally. Bi-level FBA is, however, limited in applicability because the required computational time and resources scale poorly as the size of the metabolic system and the number of genetic manipulations increase. To overcome these limitations, we have developed Genetic Design through Local Search (GDLS), a scalable, heuristic, algorithmic method that employs an approach based on local search with multiple search paths, which results in effective, low-complexity search of the space of genetic manipulations. Thus, GDLS is able to find genetic designs with greater in silico production of desired metabolites than can feasibly be found using a globally optimal search and performs favorably in comparison with heuristic searches based on evolutionary algorithms and simulated annealing.

Fitting a graph to vector data

with Samuel Daitch and Daniel Spielman.
International Conference on Machine Learning (ICML) 2009.
We introduce a measure of how well a combinatorial graph fits a collection of vectors. The optimal graphs under this measure may be computed by solving convex quadratic programs and have many interesting properties. For vectors in $d$-dimensional space, the graphs always have average degree at most $2(d+1)$, and for vectors in $2$ dimensions they are always planar. We compute these graphs for many standard data sets and show that they can be used to obtain good solutions to classification, regression and clustering problems.

On the hardness and smoothed complexity of quasi-concave minimization

with Evdokia Nikolova.
IEEE Foundations of Computer Science (FOCS), 2007.
In this paper, we resolve the smoothed and approximation complexity of low-rank quasi-concave minimization, providing both upper and lower bounds. As an upper bound, we provide the first smoothed analysis of quasi-concave minimization. The analysis is based on a smoothed bound for the number of extreme points of the projection of the feasible polytope onto a $k$-dimensional subspace, where $k$ is the rank (informally, the dimension of nonconvexity) of the quasi-concave function. Our smoothed bound is polynomial in the original dimension $n$ of the problem and the perturbation size $\rho$, and it is exponential in the rank $k$ of the function. From this, we obtain the first randomized fully polynomial-time approximation scheme for low-rank quasi-concave minimization under broad conditions. In contrast with this, we prove $\log n$-hardness of approximation for general quasi-concave minimization. This shows that our smoothed bound is essentially tight, in that no polynomial smoothed bound is possible for quasi-concave functions of general rank $k$.

The tools that we introduce for the smoothed analysis may be of independent interest. All previous smoothed analyses of polytopes analyzed projections onto two-dimensional subspaces and studied them using trigonometry to examine the angles between vectors and $2$-planes in $\mathbb{R}^n$. In this paper, we provide what is, to our knowledge, the first smoothed analysis of the projection of polytopes onto higher-dimensional subspaces. To do this, we replace the trigonometry with tools from random matrix theory and differential geometry on the Grassmannian. Our hardness reduction is based on entirely different proofs that may also be of independent interest: we show that the stochastic $2$-stage minimum spanning tree problem has a supermodular objective and that supermodular minimization is hard to approximate.

Stochastic shortest paths via quasi-convex maximization

with Matthew Brand, Michael Mitzenmacher, and Evdokia Nikolova.
In Proceedings of the 14th Annual European Symposium on Algorithms, published as Springer Lecture Notes in Computer Science, 4168, pages 552--563, 2006.
We consider the problem of finding shortest paths in a graph with independent randomly distributed edge lengths. Our goal is to maximize the probability that the path length does not exceed a given threshold value (deadline). We give a surprising exact $n^{\Theta(\log n)}$ algorithm for the case of normally distributed edge lengths, which is based on quasi-convex maximization. We then prove average and smoothed polynomial bounds for this algorithm, which also translate to average and smoothed bounds for the parametric shortest path problem, and extend to a more general non-convex optimization setting. We also consider a number of other edge-length distributions, giving a range of exact and approximation schemes.
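
For normally distributed edge lengths the objective has a closed form: a path with total mean $\mu$ and total variance $\sigma^2$ meets the deadline $t$ with probability $\Phi((t-\mu)/\sigma)$. The brute-force sketch below (an illustration only; the paper's point is doing much better than enumeration) evaluates this over a hand-made set of paths.

    from math import erf, sqrt

    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF

    def best_path(paths, t):
        # Each path is a list of (mean, variance) pairs, one per edge.
        def p_on_time(path):
            mu = sum(m for m, _ in path)
            var = sum(v for _, v in path)
            return Phi((t - mu) / sqrt(var))
        return max(paths, key=p_on_time), max(map(p_on_time, paths))

    paths = [
        [(5.0, 0.1), (5.0, 0.1)],    # longer in expectation but reliable
        [(4.0, 4.0), (4.0, 4.0)],    # shorter in expectation but risky
    ]
    print(best_path(paths, t=10.5))  # the reliable path wins for a tight deadline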

A randomized polynomial-time simplex algorithm for linear programming

with Daniel Spielman.
STOC '06: Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, May 2006. Invited to appear in special issue of SIAM Journal on Computing (declined invitation).
We present the first randomized polynomial-time simplex algorithm for linear programming. Like the other known polynomial-time algorithms for linear programming, its running time depends polynomially on the number of bits used to represent its input.

We begin by reducing the input linear program to a special form in which we merely need to certify boundedness. As boundedness does not depend upon the right-hand-side vector, we run the shadow-vertex simplex method with a random right-hand-side vector. Thus, we do not need to bound the diameter of the original polytope.

Our analysis rests on a geometric statement of independent interest: given a polytope $A \mathbf{x} \leq \mathbf{b}$ in isotropic position, if one makes a polynomially small perturbation to $\mathbf{b}$ then the number of edges of the projection of the perturbed polytope onto a random 2-dimensional subspace is expected to be polynomial.

Spectral partitioning, eigenvalue bounds, and circle packings for graphs of bounded genus

SIAM J. Comput. 35(4): 882-902 (2006). (Special issue for STOC 2004). Preliminary conference version appeared in the proceedings of STOC '04 and received the Best Student Paper Award.
In this paper, we address two long-standing questions about finding good separators in graphs of bounded genus and degree:

  1. It is a classical result of Gilbert, Hutchinson, and Tarjan [J. Algorithms, 5 (1984), pp. 391-407] that one can find asymptotically optimal separators on these graphs if given both the graph and an embedding of it onto a low genus surface. Does there exist a simple, efficient algorithm to find these separators, given only the graph and not the embedding?
  2. In practice, spectral partitioning heuristics work extremely well on these graphs. Is there a theoretical reason why this should be the case?
We resolve these two questions by showing that a simple spectral algorithm finds separators of cut ratio $O(\sqrt{g/n})$ and vertex bisectors of size $O(\sqrt{gn})$ in these graphs, both of which are optimal. As our main technical lemma, we prove an $O(g/n)$ bound on the second smallest eigenvalue of the Laplacian of such graphs and show that this is tight, thereby resolving a conjecture of Spielman and Teng. While this lemma is essentially combinatorial in nature, its proof comes from continuous mathematics, drawing on the theory of circle packings and the geometry of compact Riemann surfaces.
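
The "simple spectral algorithm" follows the standard sweep-cut pattern, sketched below for concreteness. The sketch is generic spectral partitioning under arbitrary example data; the paper's contribution is the $O(g/n)$ eigenvalue bound that makes the guarantee work for bounded-genus graphs.

    import numpy as np

    def spectral_sweep(L):
        # Sweep over level sets of the second eigenvector, keep the best cut ratio.
        n = L.shape[0]
        vals, vecs = np.linalg.eigh(L)
        order = np.argsort(vecs[:, 1])              # sort vertices by Fiedler value
        best, best_ratio = None, np.inf
        for i in range(1, n):
            mask = np.zeros(n, dtype=bool)
            mask[order[:i]] = True
            cut = -L[np.ix_(mask, ~mask)].sum()     # off-diagonal entries of L are -w(u,v)
            ratio = cut / min(i, n - i)             # cut ratio of this sweep cut
            if ratio < best_ratio:
                best, best_ratio = sorted(order[:i].tolist()), ratio
        return best, best_ratio

    # Two triangles joined by a single edge: the sweep finds the bridge cut.
    A = np.zeros((6, 6))
    for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        A[u, v] = A[v, u] = 1.0
    print(spectral_sweep(np.diag(A.sum(1)) - A))    # ([0, 1, 2], 1/3) up to eigenvector sign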

The surgery theoretic classification of high-dimensional smooth and piecewise linear simply-connected manifolds

My Harvard undergraduate senior thesis, which has been used as a text in graduate seminars at several universities.

Multiple description vector quantization with a coarse lattice

with Vivek Goyal and Jelena Kovacevic.
IEEE Transactions on Information Theory, vol. 48, no. 3, pp. 781-788, March 2002.
A multiple description (MD) lattice vector quantization technique for two descriptions was recently introduced in which fine and coarse codebooks are both lattices. The encoding begins with quantization to the nearest point in the fine lattice. This encoding is an inherent optimization for the decoder that receives both descriptions; performance can be improved with little increase in complexity by considering all decoders in the initial encoding step. The altered encoding relies only on the symmetries of the coarse lattice. This allows us to further improve performance without a significant increase in complexity by replacing the fine lattice codebook with a nonlattice codebook that respects many of the symmetries of the coarse lattice. Examples constructed with the two-dimensional (2-D) hexagonal lattice demonstrate large improvement over time sharing between previously known quantizers.

Quantized frame expansions with erasures

with Vivek K Goyal and Jelena Kovacevic.
Applied and Computational Harmonic Analysis (special issue on engineering applications), 10(3):203-233, 2001.
Frames have been used to capture significant signal characteristics, provide numerical stability of reconstruction, and enhance resilience to additive noise. This paper places frames in a new setting, where some of the elements are deleted. Since proper subsets of frames are sometimes themselves frames, a quantized frame expansion can be a useful representation even when some transform coefficients are lost in transmission. This yields robustness to losses in packet networks such as the Internet. With a simple model for quantization error, it is shown that a normalized frame minimizes mean-squared error if and only if it is tight. With one coefficient erased, a tight frame is again optimal among normalized frames, both in average and worst-case scenarios. For more erasures, a general analysis indicates some optimal designs. Being left with a tight frame after erasures minimizes distortion, but considering also the transmission rate and possible erasure events complicates optimizations greatly.

An analysis of front-facing representations of orientable surfaces

with Christopher Mihelich.
Harvard Computer Science Technical Reports, 2002.

Multiple description lattice vector quantization: variations and extensions

with Vivek K Goyal and Jelena Kovacevic.
Proceedings of the Data Compression Conference (DCC), pages 480-489, 2000.
Multiple description lattice vector quantization (MDLVQ) is a technique for two-channel multiple description coding. We observe that MDLVQ, in the form introduced by Servetto, Vaishampayan and Sloane in 1999, is inherently optimized for the central decoder; i.e., for a zero probability of a lost description. With a nonzero probability of description loss, performance is improved by modifying the encoding rule (using nearest neighbors with respect to "multiple description distance") and by perturbing the lattice codebook. The perturbation maintains many symmetries and hence does not significantly affect encoding or decoding complexity. An extension to more than two descriptions with attractive decoding properties is outlined.

The near-zero microscopic eigenvalue spectrum of random matrix ensembles of finite variance

Unpublished manuscript, 1997. (Written in high school and submitted to several science competitions. Won the Grand Prize in the Intel International Science and Engineering Fair (ISEF) and 8th place in the Westinghouse Science Talent Search.)

Evolution of a vibrational wave packet on a disordered chain

with Philip B. Allen.
American Journal of Physics, Jan. 1998.
A linear chain of point masses coupled by harmonic springs is a standard model used to introduce concepts of solid state physics. The well-ordered chain has sinusoidal standing wave normal modes (if the ends are fixed) or traveling wave normal modes (if the ends are connected in a ring). Ballistically propagating wave packets can be built from these normal modes, and illustrate the mechanism of heat propagation in insulating crystals. When the chain is disordered, new effects arise. Ballistic propagation is replaced by diffusive propagation on length scales larger than the mean free path for ballistic motion. However, a new length scale, the localization length, also enters. On length scales longer than the localization length, neither ballistic nor diffusive propagation occurs, and energy is trapped unless there are anharmonic forces. These ideas are illustrated by a computer experiment.
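
The computer experiment is simple enough to reproduce in outline. The sketch below (all parameters arbitrary) evolves a displacement wave packet on a ring of masses and unit springs, with and without mass disorder, using the kinetic-energy profile as a rough proxy for how far the packet's energy has traveled; disorder typically traps the energy near the starting site.

    import numpy as np

    def evolve(masses, steps=2500, dt=0.02):
        n = len(masses)
        u = np.exp(-0.5 * ((np.arange(n) - n // 4) / 3.0) ** 2)  # localized packet
        v = np.zeros(n)
        for _ in range(steps):                                   # symplectic Euler steps
            force = np.roll(u, 1) + np.roll(u, -1) - 2 * u       # unit springs on a ring
            v += dt * force / masses
            u += dt * v
        energy = 0.5 * masses * v**2                             # kinetic-energy profile
        return energy / energy.sum()

    n = 200
    rng = np.random.default_rng(4)
    ordered = evolve(np.ones(n))
    disordered = evolve(rng.uniform(0.5, 1.5, size=n))           # random masses
    spread = lambda e: np.sum(e * np.abs(np.arange(n) - n // 4)) # mean distance traveled
    print(spread(ordered), spread(disordered))                   # disorder hinders transport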