ONE-BIT QUANTIZATION:
HOW GOOD ARE EQUALLY IMPORTANT BITS?

SINAN GUNTURK

Courant Institute of Mathematical Sciences
New York University

February 10, 4:15pm
2-105

ABSTRACT 



One-bit quantization is a method of representing bounded signals by
{+1,-1} sequences that are computed from regularly spaced samples of these
signals; as the sampling density is increased, convolving these one-bit
sequences with appropriately chosen averaging kernels must produce
increasingly close approximations of the original signals.  This method is
widely used for analog-to-digital conversion because its implementation
offers many advantages over the classical and more familiar method of
fine-resolution quantization.
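
As a purely illustrative sketch of the scheme described above (not an
algorithm from the talk), the following Python snippet implements
first-order sigma-delta quantization, a standard way of producing such
{+1,-1} sequences, and reconstructs the signal by convolving the bits with
an averaging kernel.  The test signal, oversampling rate, and Hanning
kernel are hypothetical choices made only for demonstration.

import numpy as np

def sigma_delta_one_bit(samples):
    # First-order sigma-delta recursion:
    #   q[n] = sign(u[n-1] + x[n]),   u[n] = u[n-1] + x[n] - q[n].
    # When |x[n]| <= 1 the state u stays bounded, which is what makes
    # the averaged reconstruction below approximate the signal.
    u = 0.0
    bits = np.empty_like(samples)
    for n, x in enumerate(samples):
        q = 1.0 if u + x >= 0 else -1.0
        u = u + x - q
        bits[n] = q
    return bits

# Hypothetical test signal, bounded by 1, oversampled on [0, 1].
oversampling = 2048                        # samples per unit time (illustrative)
t = np.arange(0.0, 1.0, 1.0 / oversampling)
x = 0.5 * np.sin(2 * np.pi * 3 * t) + 0.3 * np.cos(2 * np.pi * 5 * t)

bits = sigma_delta_one_bit(x)              # the {+1,-1} representation

# Reconstruction: convolve the one-bit sequence with a normalized
# averaging kernel (here a Hanning window, one possible choice).
kernel = np.hanning(128)
kernel /= kernel.sum()
x_hat = np.convolve(bits, kernel, mode="same")

# Worst-case error away from the boundary; it shrinks as the sampling
# density grows and the kernel is adapted accordingly.
interior = slice(128, len(t) - 128)
print("max interior error:", np.max(np.abs(x_hat - x)[interior]))
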

A striking feature of one-bit quantization is that all bits are given
equal importance. This brings with it certain challenges, one of which is
achieving high accuracy. A fundamental open problem is to determine the
best possible behavior of the approximation error as a function of the
sampling density for various function classes, most importantly for the
class of bandlimited functions, which is a model space for audio signals.
Other open problems ask for precise error estimates for particular
algorithms that are popular in practice.

In this talk, we present recent progress towards the solution of these
problems and discuss the interplay of various areas of mathematics in
achieving these results.



