|10:00AM||Manuel Blum||Toward a Theory of Humanly Computable Protocols||Abstract
Can a (competent) human compute something of great value to him/herself -- PRIVATELY -- entirely in his/her head -- that NO ADVERSARY -- BE IT HUMAN, COMPUTER, OR COMBO -- can reasonably get hold of? In particular, can a human compute a private hash function h of challenges (typically 4- to 6-letter words) x_1, x_2, x_3,...
• with just a few (3?) hours of preprocessing to learn h and just 20 seconds of processing per challenge word x_i to compute h(x_i), so that
• an observant human/computer combo that
- does not know h (though it does know the class H from which h was chosen) and
- has seen only a few pairs <x,h(x)> cannot compute h(x) on a new x?
The purpose of this talk is to exhibit such a scheme, presented in the form of a PASSWORD SCHEME, that is
1. WELL DEFINED (a mathematical concept),
2. HUMANLY USABLE by a normal dedicated human being, namely me (an experimental concept), and
3. MACHINE UNCRACKABLE to a TINY but WELL-DEFINED EXTENT (an information theoretic concept).
This theory inevitably builds on ideas from both computer and cognitive science. Joint work with Jeremiah Blocki, Anupam Datta, and Santosh Vempala.
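As a rough illustration of the challenge/response setting (a toy sketch of my own, NOT the scheme presented in the talk): suppose the memorized secret h is a random map from letters to digits, and the response to a challenge word is the digit-sum mod 10.

```python
import random

# Toy stand-in for a "humanly computable" hash -- an illustration of the
# setting only, not Blum's actual scheme. The secret is a memorized
# random map from the 26 letters to digits 0-9.
def make_secret(seed=0):
    rng = random.Random(seed)
    return {chr(ord('a') + i): rng.randrange(10) for i in range(26)}

# Computable in one's head: add the digit for each letter, mod 10.
def respond(secret, word):
    return sum(secret[c] for c in word.lower()) % 10
```

Note that this additive toy is linear, so an observer who sees enough challenge/response pairs can recover the secret by solving linear equations mod 10; the point of the schemes discussed in the talk is precisely to resist that kind of inference while staying within human working memory.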
|10:30AM||Avi Wigderson||Lower Bounds, Anyone?|
|11:30AM||Leonard Schulman||Channel Coding for Interactive Communication||Abstract
Communications between interacting parties need to be protected from channel errors. This is a harder challenge than the classical one of protecting one-way message transmissions. I will discuss why, and then introduce the core combinatorial object that is needed: a class of error-correcting codes known as tree codes. These codes are known to exist but for two decades have eluded explicit construction. I will describe a conjecture about somewhat unusual exponential sums, which if correct will yield the first construction of tree codes.
Joint work with Cris Moore.
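As a rough illustration (my sketch, not Schulman's construction): a tree code labels the edges of a binary tree so that any two root-to-leaf paths disagree, after their divergence point, on a constant fraction of their edge labels. A brute-force check of this relative-distance property for a small random labeling might look like:

```python
import itertools
import random

def edge_labels(depth, alphabet, seed=0):
    # Assign a random label to every edge of the binary tree of the given
    # depth; an edge is identified by (node-as-bit-prefix, child bit).
    rng = random.Random(seed)
    return {(node, bit): rng.randrange(alphabet)
            for level in range(depth)
            for node in itertools.product((0, 1), repeat=level)
            for bit in (0, 1)}

def path_word(labels, path):
    # The sequence of edge labels along the root-to-leaf path given as bits.
    return [labels[(path[:i], path[i])] for i in range(len(path))]

def min_relative_distance(labels, depth):
    # Over all pairs of distinct paths: Hamming distance of the labels
    # after the divergence point, divided by the diverging suffix length.
    # A tree code keeps this bounded away from 0 at every depth.
    best = 1.0
    for p in itertools.product((0, 1), repeat=depth):
        for q in itertools.product((0, 1), repeat=depth):
            if p >= q:
                continue
            w1, w2 = path_word(labels, p), path_word(labels, q)
            split = next(i for i in range(depth) if p[i] != q[i])
            dist = sum(a != b for a, b in zip(w1[split:], w2[split:]))
            best = min(best, dist / (depth - split))
    return best
```

A random labeling will typically fail this test at some depth; the open problem the abstract refers to is an explicit labeling whose minimum relative distance stays constant at all depths.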
|12:00PM||Alex Russell||The History (and Future?) of Computational Complexity: From Prodigal Child to Prospector and Imperialist|
|1:45PM||Dan Spielman||Adiabatic Quantum Computing||Abstract
The adiabatic approach to quantum computing was introduced in a provocative paper of Farhi, Goldstone, Gutmann and Sipser. While it is known to be equivalent in computational power to the model of quantum circuits, it seems much easier to design algorithms using the adiabatic approach. The designer knows exactly what the adiabatic algorithm will compute; the hard part is figuring out how long it will take. This is very different from the design of algorithms in the quantum circuit model, in which the number of computation steps is determined, but it can be very difficult to figure out what a given circuit actually computes.
We will demonstrate the elegance of adiabatic quantum computing by presenting a simple adiabatic algorithm for solving large systems of linear equations. A quantum circuit for solving this problem was originally described by Harrow, Hassidim and Lloyd.
This is joint work with Nicholas Read.
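To make the "design is easy, analysis is hard" contrast concrete, recall the standard formulation (general background, not specific to this talk): the algorithm interpolates between an easy-to-prepare initial Hamiltonian $H_B$ and a problem Hamiltonian $H_P$ whose ground state encodes the answer,

```latex
H(s) = (1 - s)\,H_B + s\,H_P, \qquad s = t/T,\quad 0 \le s \le 1 .
```

The adiabatic theorem guarantees the system tracks the instantaneous ground state provided the total running time $T$ scales roughly like $1/g_{\min}^2$, where $g_{\min} = \min_s \bigl(E_1(s) - E_0(s)\bigr)$ is the minimum spectral gap along the path. Designing $H_P$ fixes what is computed; estimating $g_{\min}$ is the hard part the abstract refers to.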
|2:15PM||Sofya Raskhodnikova||Sublinear Algorithms for Real Data|
|2:45PM||Drew Sutherland||Counting Points in Average Polynomial Time||Abstract
Given a polynomial equation with integer coefficients, say of the form f(x,y)=0, one may ask for the number of solutions modulo a prime p. The difficulty of this problem depends crucially on the genus g of the curve defined by the reduction of the equation modulo p. When g=0, the problem is trivial. When g=1 the problem is definitely not trivial, but it can be solved in deterministic polynomial time using an algorithm due to Schoof. For g > 1 there is a generalization of Schoof's algorithm whose running time is polynomial in log p but exponential in g, which makes it exponential in terms of the input f mod p and, in practical terms, infeasible for g > 2. No deterministic or randomized algorithm is known for this problem whose complexity is polynomial in both g and log p.
I will present a new approach that allows one to solve this problem simultaneously for all primes p < N in time that is quasi-linear in N and polynomial in g, yielding a complexity that is polynomial in the size of f mod p when averaged over primes p < N. Moreover, this algorithm is quite practical, and has already achieved several record-breaking computations for g = 3.
This is joint work with David Harvey.
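For a sense of the baseline being improved upon: the naive way to count solutions of f(x,y)=0 mod p tries all p^2 pairs, which is exponential in log p. (The curve and prime below are my own illustrative choices.)

```python
def count_points(f, p):
    """Count affine solutions of f(x, y) == 0 (mod p) by brute force --
    the O(p^2) baseline that Schoof-type algorithms dramatically improve."""
    return sum(1 for x in range(p) for y in range(p) if f(x, y) % p == 0)

# Genus-1 example: the elliptic curve y^2 = x^3 + x + 1 over F_5.
curve = lambda x, y: y * y - (x ** 3 + x + 1)
print(count_points(curve, 5))  # -> 8 affine points
```

Schoof's algorithm obtains this count in time polynomial in log p; the talk's average polynomial time approach amortizes the work across all primes p < N at once.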
|3:45PM||Johan Hastad||Some (Mostly Old) Thoughts on Small-Depth Circuits|
|4:15PM||Shafi Goldwasser||The Surprising Use for Lower Bounding a Set|