Applied Math Colloquium

Subscribe to the mailing list to receive talk announcements

For more information, contact Philippe Rigollet and Laurent Demanet

Spring 2003

Most talks are at 4:15 pm in Room 2-105. Refreshments are typically served at 3:45 pm in Room 2-349.

Spring 2003 Organizers: Alan Edelman and Daniel Spielman

Date Speaker / Abstract
Feb 10

Sinan Gunturk (Courant)

One-bit quantization: how good are equally important bits?

One-bit quantization is a method of representing bounded signals by {+1,-1} sequences that are computed from regularly spaced samples of these signals; as the sampling density is increased, convolving these one-bit sequences with appropriately chosen averaging kernels must produce increasingly close approximations of the original signals. This method is widely used for analog-to-digital conversion because of the many advantages its implementation presents over the classical and more familiar method of fine-resolution quantization.

A striking feature of one-bit quantization is that all bits are given equal importance. This brings challenges with it, one of which is to achieve high accuracy. A fundamental open problem is the determination of the best possible behavior of the approximation error as a function of the sampling density for various function classes, and most importantly for the class of bandlimited functions, which is a model space for audio signals. Some of the other open problems ask for precise error estimates for particular popular algorithms used in practice.

In this talk, we present recent progress toward the solution of these problems and discuss the interplay of various types of mathematics in achieving these results.
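To fix ideas, here is a minimal sketch (in Python, purely illustrative and not necessarily the scheme analyzed in the talk) of first-order sigma-delta quantization, the most familiar one-bit scheme: each sample is replaced by +1 or -1 while a running error is carried forward, and convolving the bit stream with an averaging kernel recovers the signal with accuracy that improves as the sampling density grows.

```python
import numpy as np

def one_bit_quantize(samples):
    """First-order sigma-delta: replace each sample in [-1, 1] by +1 or -1,
    carrying the accumulated quantization error forward."""
    bits = np.empty_like(samples)
    u = 0.0  # internal state: running quantization error
    for n, x in enumerate(samples):
        bits[n] = 1.0 if u + x >= 0 else -1.0
        u += x - bits[n]
    return bits

def reconstruct(bits, width):
    """Convolve the one-bit sequence with a simple averaging kernel."""
    kernel = np.ones(width) / width
    return np.convolve(bits, kernel, mode="same")

# Oversample a smooth test signal and reconstruct it from its bit stream.
t = np.linspace(0.0, 1.0, 4000, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 3 * t) + 0.3 * np.cos(2 * np.pi * 5 * t)
bits = one_bit_quantize(signal)
approx = reconstruct(bits, width=100)
interior = slice(200, -200)  # ignore boundary effects of the convolution
print("max interior error:", np.max(np.abs(approx - signal)[interior]))
```

Increasing the sampling density (and the kernel width with it) drives the reconstruction error down; the open problems described above concern the best possible rate of this decay.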

Feb 17

(President's Day)

Feb 24

Horst Simon (NERSC)

The divergence problem in high performance computing

The divergence problem refers to the fact that in recent years scientific computing in America has been handicapped by its dependence on hardware that is designed and optimized for commercial applications. The performance of the recently completed Earth Simulator in Japan, which is five times faster than the fastest American supercomputer, dramatically exposed the seriousness of this problem. Typical scientific applications are now able to extract only 5 to 10 percent of the power of American supercomputers built from commercial web and data servers. By contrast, the design of the Earth Simulator makes 30 to 50 percent of its power accessible to the majority of scientific calculations. Reliance on commercial web servers as building blocks for scientific supercomputers has served the scientific community well for the last eight years, but events in 2002 demonstrated that the needs of commercial and scientific applications are diverging.

In this talk I will describe several steps that should be taken in the U.S. to overcome the reliance on purely commercial approaches to supercomputing, and to develop science-driven computer architectures. These steps include new partnerships with vendors, and the development of projects such as Red Storm (Cray/AMD) and Blue Planet (IBM), which are based on commodity components, with customization in some key areas.

Reference: "Creating Science-Driven Computer Architecture: A New Path to Scientific Leadership" at http://www.nersc.gov/news/blueplanet.html

Mar 3

Dan Meiron (Caltech)

Mar 10

no colloquium

Mar 17

no colloquium

We recommend instead the Killian Faculty Achievement Award Lecture, "The robot within us: neural mechanisms underlying habit formation," by Ann M. Graybiel, at 4:30 pm in Room 10-250.

Apr 7

Bruce Hendrickson (Sandia National Labs)

Combinatorial Scientific Computing: The role of discrete algorithms in scientific computing

Although scientific computing is generally viewed as the province of differential equations and numerical analysis, combinatorial techniques have long played a crucial role. For instance, graph theory is essential to the study of molecular structures and material science, many problems in linear algebra involve discrete algorithms, and the parallelization of scientific computations leads to numerous combinatorial problems. I will review some of these many successes, and offer suggestions for new areas in which work is needed at this exciting intersection of disciplines.
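As one small, generic illustration of the kind of discrete kernel alluded to above (chosen here for concreteness, not taken from the talk): greedy graph coloring of the dependency graph of a sparse computation lets all unknowns of one color be updated simultaneously, a standard device for parallelizing Gauss-Seidel-type sweeps and sparse matrix assembly.

```python
def greedy_coloring(adj):
    """Greedy graph coloring: neighbors receive different colors, so each
    color class can be processed in parallel (e.g., in a relaxation sweep
    or when assembling a sparse matrix)."""
    colors = {}
    for v in adj:  # fixed vertex order, for simplicity
        taken = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in taken:
            c += 1
        colors[v] = c
    return colors

# Dependency graph of a tiny 1-D mesh: each point depends on its neighbors.
mesh = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(greedy_coloring(mesh))  # alternating colors -> two parallel sweeps
```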

Apr 14

Dmitry Panchenko (M.I.T.)

Bounds on the generalization error of some classification algorithms

In this talk we will present a number of bounds on the generalization error of several classification algorithms such as voting algorithms, support vector machines and neural networks. These bounds explain the generalization ability of the classifiers in terms of several important parameters, e.g., confidence of classification, sparsity and clustering, and some measures of complexity of families of classifiers.
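For orientation (this is a generic statement of the flavor of such results, not the specific bounds of the talk): "confidence of classification" is usually measured by the margin, and a classical margin bound for voting classifiers, in the spirit of Schapire, Freund, Bartlett and Lee, reads: with probability at least 1 - \delta,

    P\{\, y f(x) \le 0 \,\} \;\le\; P_n\{\, y f(x) \le \theta \,\} \;+\; C \sqrt{\frac{d \log^2(n/d)}{n\,\theta^2} + \frac{\log(1/\delta)}{n}},

where f is a convex combination of classifiers from a base class of VC dimension d, P_n is the empirical measure on the n training examples, and \theta > 0 is the margin level. Bounds of this general type, sharpened using the sparsity and clustering parameters mentioned above, are the kind of results the talk describes.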

Apr 28

Thomas C. Halsey (ExxonMobil Research and Engineering)

May 5

Elliott Lieb (Princeton)

The Stability of Matter and Quantum Electrodynamics, or There Is More to Physics Than The Hydrogen Atom

Ordinary matter is held together with electromagnetic forces, and the dynamical laws governing the constituents (electrons and nuclei) are those of quantum mechanics. These laws, found in the beginning of the last century, were able to account for the fact that electrons do not fall into the nuclei and thus atoms are quite robust. It was only in 1967 that Dyson and Lenard were able to show that matter in bulk was also stable and that two stones had a volume twice that of one stone. Simple as this may sound, the conclusion is not at all obvious and hangs by a thread -- namely Pauli's "exclusion principle." In the ensuing three decades much was accomplished to clarify, simplify and extend this result. We now understand that matter can, indeed, be unstable when relativistic effects and magnetic fields are taken into account -- unless the electron's charge is small enough (which it is, fortunately). These delicate and non-intuitive conclusions will be summarized. We can now hope to begin an analysis of the half-century old question about the ultimate theory of ordinary matter, called quantum electrodynamics (QED). This is an experimentally successful theory, but one without a decent mathematical foundation. Some recent, preliminary steps in resolving some of the problems of QED will be presented.

May 12 (Room 35-225)

Craig Silverstein (Google)

Mathematical Analysis of Hyperlinks in the World Wide Web