Applied Math Colloquium

Subscribe to the mailing list to receive talk announcements

For more information, contact Laurent Demanet

Fall 1996

All the talks are at 4:15 pm in Room 4-163. Refreshments are typically served at 3:45 pm in Room 2-349.

Fall 1996 Organizers: Alan Edelman for the continuous side (fluids, physics, dynamics, numerical algorithms, mechanics, ...) and Sergey Fomin for the discrete side (combinatorics, optimization, complexity, theoretical computer science, computational biology, ...).

Date Speaker / Abstract
Sep 9

Hung Cheng (MIT)

Re-Visiting Quantum Gauge Field Theories -- Trouble with the Standard Model?

A review will be given of the two pillars of the modern formulation of quantum gauge field theories: path integration and dimensional regularization. I shall argue that they should be relegated to a complementary role in the theory of quantized fields. Indeed, while the Faddeev-Popov formalism of path integration has serious faults, dimensional regularization is deficient in various respects. A calculation on the basis of renormalized perturbation theory and the Ward-Takahashi identities shows that the next-order triangular anomaly is not zero, contradicting the Adler-Bardeen theorem.

Sep 16

Dan Spielman (MIT)

Highly Fault-Tolerant Parallel Computation

When the function of a processor is critical, it is common to use three processors in place of one, so that even if one fails, its instructions will be overridden by the other two. Similarly, one could protect against the failure of k processors by following the instructions of a majority of 2k+1. But what if these processors are components of a large parallel machine? Is it necessary to replicate each processor 2k+1 times to insure against the loss of any k? We show that, in many cases, one can do better.
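The 2k+1 majority scheme described above can be sketched in a few lines. This is a hypothetical illustration of the replication idea only, not anything from the talk; the function name and example values are mine.

```python
# Sketch of the k-wise repetition scheme: run 2k+1 replicas of a
# processor and follow the majority, so that up to k replicas can
# fail arbitrarily without changing the chosen instruction.
from collections import Counter

def majority_output(replica_outputs):
    """Return the instruction emitted by the largest group of replicas.

    With 2k+1 replicas and at most k failures, the correct
    instruction always holds a strict majority, so it wins.
    """
    value, _count = Counter(replica_outputs).most_common(1)[0]
    return value

# Example with k = 2, hence 2k+1 = 5 replicas; two replicas fail.
correct = "ADD r1, r2"
outputs = [correct, correct, "HALT", correct, "NOP"]
assert majority_output(outputs) == correct
```

The cost Spielman's talk questions is exactly this factor of 2k+1 per protected processor.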

The k-wise repetition scheme was studied by von Neumann in his classic paper, "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components". Elias pointed out that von Neumann's results, while a "triumph", had "the character of pre-information theory results on the reliable transmission of information over unreliable channels." We use techniques recently developed to study complexity theory and parallel computation to construct fault-tolerant computations with the character of modern information theory: at each time step, the states of our parallel machines are close to codewords in Reed-Solomon codes.

Sep 23

--- Student Holiday ---

Sep 30

Gina Kolata (New York Times)

How does a Math Story Get Into the NY Times?

It's a rare day indeed when mathematics research makes the news. The situation is due in part to the culture of mathematics and in part to the definition of "news." By discussing a few newspaper articles that were about mathematics, I will tell the stories behind the stories at the N.Y. Times.

Oct 7

Charles S. Peskin (Courant Institute NYU)

Random walks along microtubules: the stochastic dynamics of two molecular motors

Microtubules are stiff protein polymers that form tracks inside biological cells along which material can be transported from one place to another both faster and with greater specificity than by diffusion through the cytoplasm. In this talk, we shall consider two molecular motors that run along these tracks. One of these is the motor protein kinesin, a two-headed molecule that literally walks along the microtubule towing its cargo (such as a membrane-bound vesicle) by means of a long elastic tether. The other motor involves the depolymerization of the microtubule itself, which can be used to drive the transport of structures that diffuse along the microtubule but cannot get off its depolymerizing end. We shall introduce mathematical models of these two motors and show how these models can be used to characterize the random walks that the motors generate. A synergistic and somewhat paradoxical interaction of both motors in chromosome transport during cell division will be proposed.

Oct 8 SPECIAL COLLOQUIUM
4:15 pm in Room 2-338

A. Fokas (Imperial College, London and Clarkson, Potsdam NY)

A Unified Transform Method for Solving Linear and Certain Nonlinear PDE's

The inverse spectral method is a nonlinear Fourier transform method for solving initial value problems for certain nonlinear PDE's in 2 and 3 dimensions. It is based on the solution of the so-called Riemann-Hilbert and \bar{\partial} problems. After reviewing this method, we will present a new transform method for solving initial boundary value problems for both linear and integrable nonlinear PDE's in 2 dimensions. This method provides a unified approach to solving linear equations with simple boundary conditions, linear equations with complicated boundary conditions (Wiener-Hopf type problems), and nonlinear integrable equations.

Oct 14

--- Columbus Day ---

Oct 21

Albert Cohen (U. Paris)

Adaptive Multiscale Approximations and PDEs

Multiscale methods are used in numerical analysis for the preconditioning of elliptic operators and the adaptive numerical treatment of PDE's, in particular when the solution admits singularities. We shall present theoretical results on adaptive multiscale approximations of the solutions of elliptic PDE's, as well as fast algorithms that aim to construct these approximations.

Oct 28

James Demmel (UC Berkeley)

Fast or Accurate Algorithms for Eigenvalue Problems

We discuss the fastest algorithms now available for computing the symmetric eigendecomposition or SVD, as well as faster algorithms that we expect to be available soon. We discuss other algorithms designed to be much more accurate for computing tiny eigenvalues, which are needed in some quantum mechanics and finite element problems. We use tools ranging from the fast multipole method to the combinatorial structure of totally positive matrices.

Nov 4

Joel Koplik (CUNY)

Superfluid Vortex Scattering

We study the interactions between superfluid vortex filaments and rings, with emphasis on annihilation and reconnection events. The calculation is based on efficient numerical solutions of the Gross-Pitaevskii nonlinear Schroedinger equation model of superfluid He~II, so as to include physically reasonable core-scale dynamics. The motivation is firstly to consider the maintenance and generation of vorticity in superfluid flows, and secondly to compare the behavior of superfluids and normal (Navier-Stokes) fluids with regard to the evolution of vortex structures. In general it is found that neighboring vortex filament sections of opposite orientation always annihilate smoothly at the core scale, and otherwise the behavior is very similar to that of normal fluid vortices. Collisions of vortex rings resemble the scattering of elementary particles.

Nov 11

--- Veterans Day ---

Nov 18

Stuart Geman (Brown University)

Maximum-Likelihood Decoding

Most decoding algorithms employ nearest-neighbor decoding. In most cases this is suboptimal. Motivated by the problem of reading some new two-dimensional barcodes from camera images, I will explore a code representation that permits maximum-likelihood decoding. Maximum-likelihood decoding achieves the minimum attainable error rate when code words are uniformly distributed. The idea is to view the set of code words as the language (set of sentences) associated with a context-free grammar. Variations on standard parsing algorithms are then available for 1) maximum-likelihood decoding, 2) computation of the probability that a given decoding is correct, and 3) decoding from channels that are characterized by Markov (instead of independent) noise. Very large codes can be feasibly decoded through a "thinning algorithm" that trades information rate (encoded bits divided by transmitted bits) for exponential improvements in decoding efficiency. Results from coding theory, formal grammars, etc. will be introduced as needed so as to make the talk more or less self-contained.

Nov 25

Dmitri Burago (Pennsylvania State University)

Collisions of hard balls and geometry of non-positive curvature

Consider a system of several hard balls moving in empty space and colliding elastically. It is intuitively clear that the number of collisions cannot be arbitrarily large while the number of balls, their masses, and their radii are fixed. Surprisingly, this problem remained open for over twenty years. One also expects the existence of an analogous bound on the number of collisions in unit time for a hard ball gas in a box. (This bound may then have a certain thermodynamical meaning.) Proofs of these statements (obtained jointly with S. Ferleger and A. Kononenko) are based on purely geometrical considerations, which become very transparent from the viewpoint of the geometry of non-positively-curved spaces. The same approach also yields other dynamical information, e.g., estimates of topological entropy, and raises new problems of geometrical, dynamical, and combinatorial character.

Dec 2

Steve Grossberg (BU)

The Attentive Brain: Perception, Learning, and Consciousness

Are there universal computational principles that the brain uses to self-organize its intelligent properties? This lecture suggests that common principles are used in brain systems for early vision, spatial attention, visual object recognition, auditory source identification, and variable-rate speech perception, among others. These are principles of matching and nonlinear resonance that form part of Adaptive Resonance Theory, or ART. These principles are realized in neural networks that are defined by systems of nonlinear differential equations. This talk will provide a self-contained intuitive introduction to this subject. In particular, bottom-up signals in an ART system can automatically activate target cells to levels capable of generating suprathreshold output signals. Top-down expectation signals can only excite, or prime, target cells to subthreshold levels. When both bottom-up and top-down signals are simultaneously active, only the bottom-up signals that receive top-down support can remain active. All other cells, even those receiving large bottom-up inputs, are inhibited. Top-down matching hereby generates a focus of attention that can resonate across processing levels, including those that generate the top-down signals. Such a resonance acts as a trigger that activates learning processes within the system. Resonance is also proposed to be a necessary condition for conscious awareness.

In the models described herein, these effects are due to a top-down nonspecific inhibitory gain control signal that is released in parallel with specific excitatory signals. One model concerns how the brain's auditory system constructs coherent representations of acoustic objects from the jumble of noise and harmonics that relentlessly bombards our ears throughout life. This model, called the ARTSTREAM model, suggests an approach to solving the cocktail party problem. Another model, the ARTPHONE model, helps to clarify the neural representation of a speech code as it evolves in real time, and is used to simulate data about variable-rate speech categorization. Yet another model, called a FACADE theory model, helps to explain how visual thalamocortical interactions give rise to boundary percepts such as illusory contours and surface percepts such as filled-in brightnesses. Top-down corticogeniculate feedback interactions select and synchronize LGN activities that are consistent with cortical activations. In addition to these systems, there are the more familiar ART systems for explaining visual object recognition, including how recognition categories are learned whose number, size, and shape fit the statistics of a nonstationary environment. Taken together, these results provide accumulating evidence that the brain parsimoniously specifies a small set of computational principles to ensure its stability and adaptability in responding to many different types of environmental challenges. The talk will also discuss why procedural memories, or memories that control movements, are not conscious, by specifying why they are not designed to achieve resonance.

Dec 9

Oded Goldreich (Weizmann Institute)

Probabilistic Proof Systems

Various types of probabilistic proof systems have played a central role in the development of computer science in the last decade. In the talk, we concentrate on three such proof systems --- interactive proofs, zero-knowledge proofs, and probabilistically checkable proofs --- stressing the essential role of randomness in each of them.