Imaging and Computing Seminar — Spring 2010
-
Tu 01/19, at 2:00pm, in room 2-142 (Special)
Yajun Zhou, Chemistry, Harvard.
Robust Solution to Integral Equations via Topological Compactness: Applications in Chemical Kinetics and Electromagnetic Scattering
Abstract: We study the inverse Laplace problem in chemical kinetics and the vector wave scattering problem in dielectric optics. We show that topological compactness plays an essential role in obtaining robust solutions to the integral equations associated with the two problems under investigation. By exploiting the $w^*$-compactness guaranteed by the Banach-Alaoglu theorem, we devise a numerically stable method to extract the probability distribution of kinetic rate constants from noisy measurements in the time domain. By constructing a compact operator $\hat{\mathcal G}+2\hat{\mathcal G}^2$ from the non-compact Green operator $\hat{\mathcal G}$ of light scattering, we demonstrate robust solutions to the scattering of electromagnetic waves by non-accretive homogeneous dielectric media, except at a critical susceptibility value $\chi=\epsilon_r-1=-2$ that is independent of the dielectric shape.
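As a toy illustration of how a constraint set stabilizes this kind of inverse Laplace problem, the sketch below recovers a nonnegative distribution of rate constants from a noisy multi-exponential decay via projected-gradient nonnegative least squares. It is not the speaker's method; all grids, rates, and noise levels are made up for the example, and it assumes numpy.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.01, 5.0, 200)              # time samples (assumed)
k = np.logspace(-1, 1, 40)                   # grid of candidate rate constants
A = np.exp(-np.outer(t, k))                  # Laplace-type design matrix

p_true = np.zeros_like(k)
p_true[10], p_true[30] = 1.0, 0.5            # a two-rate "distribution"
y = A @ p_true + 0.01 * rng.standard_normal(t.size)

# Projected-gradient nonnegative least squares: restricting the weights to a
# bounded nonnegative set (a compactness-style constraint, in the spirit of
# the abstract) keeps the otherwise ill-posed inversion stable.
p = np.zeros_like(k)
step = 1.0 / np.linalg.norm(A, 2)**2         # 1 / (largest singular value)^2
for _ in range(20000):
    p = np.maximum(0.0, p - step * (A.T @ (A @ p - y)))
print(np.linalg.norm(A @ p - y))             # data misfit of the constrained fit
```

Without the nonnegativity constraint, the exponential design matrix is so ill-conditioned that an unconstrained least-squares fit amplifies the noise wildly.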
-
Th 02/04, at 4:15pm, in room 36-112
Cancelled
-
We 02/10, at 4:00pm, in room 54-915 (Special; EAPS colloquium)
Carl Tape, Earth and planetary sciences, Harvard.
Seismic Tomography and Imaging of the Southern California Crust
-
Th 02/11, at 4:15pm, in room 36-112
Paul Barbone, Mathematics, Boston University.
Direct imaging via time-reversal, Krylov methods, and sparse signals.
Abstract: MUltiple SIgnal Classification (MUSIC) has been used to form images and identify sound sources since 1986 [Schmidt, R.O., "Multiple Emitter Location and Signal Parameter Estimation," IEEE Trans. Antennas Propagation, Vol. AP-34 (March 1986), pp. 276-280]. In active imaging of point targets, the MUSIC method can be used to estimate the range of the time-reversal operator. Kirsch's factorization method extends these ideas to extended targets. Typical implementations of these methods utilize measurements of the entire time-reversal operator. They then require computation of its eigenvalues and eigenvectors, or an estimate of its pseudoinverse.
By contrast, we show how iterative Krylov subspace methods can be used to compute these images with relatively few measurements. With the Lanczos technique, no eigenvalues, eigenvectors, or pseudoinverses need be computed. Rather, an orthonormal basis for the range of the time-reversal operator can be constructed directly from the received data. Most of the necessary computing is performed by the array itself, acting as a kind of "analog computer." Furthermore, we show that useful images can be formed from one ``iteration'' (i.e. measurement) to the next, while the data are being collected, and the process stops automatically when the full space is spanned.
Used in this way, the Lanczos method provides a perfect reconstruction of the target with a minimal number of measurements. As such, this approach provides a new mode of compressed sensing without an $\ell^1$ constraint.
Contributors: G.R. Feijoo, A.A. Oberai, and P.E. Barbone
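The core idea can be sketched in a few lines: build an orthonormal basis for the range of a synthetic rank-r "time-reversal operator" one matrix-vector product (i.e. one physical measurement) at a time, stopping automatically when the range is spanned, then test whether a steering vector lies in that range. This toy uses random stand-in steering vectors and full reorthogonalization; it illustrates the concept, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 64, 3                        # array elements; number of point targets
G = rng.standard_normal((n, r))     # stand-in steering vectors of the targets
T = G @ G.T                         # "time-reversal operator": symmetric, rank r

# Start from a received echo (already in the range of T); each iteration needs
# only one application of T, which the physical array itself can perform.
q = T @ rng.standard_normal(n)
Q = [q / np.linalg.norm(q)]
for _ in range(n):
    w = T @ Q[-1]
    for qq in Q:                    # orthogonalize against the basis so far
        w -= (qq @ w) * qq
    beta = np.linalg.norm(w)
    if beta < 1e-8:                 # range of T spanned: stop automatically
        break
    Q.append(w / beta)
Q = np.column_stack(Q)              # orthonormal basis for range(T)

# MUSIC-style test: a steering vector lies in range(Q) iff it is a target.
resid = lambda g: np.linalg.norm(g - Q @ (Q.T @ g)) / np.linalg.norm(g)
print(Q.shape[1], resid(G[:, 0]))   # 3 basis vectors; target residual ~ 0
```

An image would be formed by evaluating `1 / resid(g(x))` over a grid of trial locations x, which peaks at the targets.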
-
Th 02/18, at 4:15pm, in room 36-112
Misha Kilmer, Mathematics, Tufts.
Hybrid and generalized hybrid regularization for pMRI reconstruction
Abstract: The reconstruction of MR images from accelerated parallel MR data is a familiar inverse problem, and regularization techniques are often employed to ensure robust solutions. To be clinically viable, however, regularized solutions must be computed efficiently and in a manner consistent with the speed of data acquisition. Image quality in the regularized solution depends directly on the value of the regularization parameter, and therefore the regularization parameter(s) must be selected in a stable and efficient manner while candidate solutions are simultaneously constructed. We present an LSQR-hybrid iterative algorithm that allows us to quickly and efficiently select a regularization parameter value for a class of parallel MRI image reconstruction problems. To achieve higher acceleration rates, parallel MRI can be combined with partial-Fourier acquisition techniques. For this special case, we propose a two-parameter constrained reconstruction problem. A generalization of the LSQR-hybrid approach for this special case allows us to efficiently select both parameters via information generated during various runs of the LSQR-hybrid algorithm. In both versions of the reconstruction problem, our algorithm gives high-quality results for high acceleration factors in a fraction of the time required by the naive approach. Phantom and in-vivo results will be presented.
This is joint work with Dr. W. Scott Hoge, Brigham and Women's Hospital.
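As a much-simplified stand-in for the parameter-selection problem the talk addresses, the sketch below applies Tikhonov regularization to a small synthetic deblurring problem and picks the parameter by a discrepancy criterion (stopping when the residual matches the known noise level), all via the SVD. The actual LSQR-hybrid method instead works on a projected bidiagonal subproblem; every detail here (operator, noise level, grids) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 80
grid = np.linspace(0, 1, n)
# Hypothetical ill-posed problem: Gaussian blurring of a boxcar signal.
A = np.exp(-(grid[:, None] - grid[None, :])**2 / (2 * 0.03**2))
A /= A.sum(axis=1, keepdims=True)
x_true = (np.abs(grid - 0.5) < 0.2).astype(float)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

# Tikhonov in the SVD basis: filter factors s^2 / (s^2 + lam^2).
U, s, Vt = np.linalg.svd(A)
beta = U.T @ b
resid = lambda lam: np.linalg.norm((lam**2 / (s**2 + lam**2)) * beta)

# Discrepancy principle: the largest parameter whose residual stays at the
# noise level (assumed known here, with a small safety factor).
delta = 1e-3 * np.sqrt(n)
lam = max(l for l in np.logspace(-6, 0, 121) if resid(l) <= 1.2 * delta)
x_reg = Vt.T @ (s * beta / (s**2 + lam**2))      # regularized reconstruction
```

A hybrid method obtains the same kind of parameter sweep almost for free, because the filtered solutions live in a small Krylov subspace built once.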
-
Th 02/25, at 4:15pm, in room 36-112
Bruce Fischl, Harvard Medical School
Computational Modeling of Neuroanatomy using MRI
Abstract: Computational neuroanatomy can refer either to the manner in which the computational architecture of the brain helps it to carry out computations, or to the application of computational techniques to build models of neuroanatomical structures. While the former definition is the one that is of ultimate interest, it most likely requires the models indicated in the latter definition. In this talk I will discuss research at MGH with the goal of building such models. The neuroanatomical structures of interest can be broadly subdivided into two categories - cortical and non-cortical.
Cortical structures (particularly the cerebral cortex) are typically highly folded, thin sheets of gray matter. Functionally, the cerebral cortex has been shown to have a "columnar" architecture. For this reason, we construct surface-based models for analysis of cortical properties. The construction of such models is a difficult task due to the high degree of folding of the cortical manifold in conjunction with the limited (~1 mm) resolution of current neuroimaging technologies. Once constructed, the cortical models can be deformed for morphometry, visualization and registration purposes. I will show some results of this type of analysis, including the morphometric changes that the cortex undergoes in disorders such as schizophrenia, Alzheimer's disease, and Huntington's disease, as well as in healthy aging.
A different set of techniques has been developed for the construction of models of subcortical structures. Here, we build on the work of Kapur, Grimson and Wells, and model the segmentation as an anisotropic nonstationary Markov random field. The anisotropy lets us model the local spatial relationships that exist between neuroanatomical structures (e.g. the hippocampus is anterior and inferior to the amygdala), while the nonstationarity facilitates the encoding of inhomogeneous properties of the tissue within a structure. This approach is based on extracting the relevant model parameters from a manually labeled training set, and has been shown to be comparable in accuracy to manual labeling.
-
Fr 03/05, at 3:00pm, in room 2-131 (special time, date, room)
Chris Stucchio, Courant Institute, NYU
Phase Space Analysis in MRI
Abstract: An MRI scanner is a device that physically computes the continuous Fourier transform of an image. The challenge in MRI is to use these data to reconstruct an image of the body. I'll discuss how phase space analysis can be used to improve the imaging process. I'll introduce wavefront filters, an extension of concentration kernels to multiple dimensions. I'll then show how they can be used to match a parametric model of an image to physical data, and discuss future directions in improving imaging.
-
Fr 03/12, at 12:00pm, in room 54-517 (joint with the FISH-ERL seminar)
Tristan van Leeuwen, Dept of Geotechnology, Delft University
Correlation-based inversion of seismic data
Abstract: The aim of waveform inversion is to infer subsurface medium properties from seismic (reflection) data. Such an inverse problem may be posed as a PDE-constrained least-squares optimization problem. However, because the data are band-limited, they depend very nonlinearly on the medium parameters that control their kinematics. If a kinematically accurate initial guess is not provided, a gradient-based optimization algorithm will most likely get stuck in a local minimum of the least-squares objective functional. This calls for a reformulation of the inverse problem as a traveltime tomography problem. The basic idea is that the traveltimes of the data depend more linearly on the velocity than the waveforms themselves. A standard technique involves a correlation of the modeled and observed waveforms to extract the traveltime difference. We propose a novel way to measure the traveltime misfit from the correlation and show that it can be used immediately to define an alternative objective functional that mitigates the local minima that plague the least-squares approach. The proposed method can be adapted to both transmission and reflection tomography. We illustrate the method on synthetic and real data examples.
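The standard correlation step mentioned in the abstract can be illustrated directly: correlate a modeled and an observed trace and read the traveltime difference off the lag of the correlation maximum. The wavelets and all parameters below are made up for the example (synthetic 25 Hz Ricker pulses; numpy assumed).

```python
import numpy as np

dt = 0.004                            # sample interval in seconds (assumed)
t = np.arange(512) * dt
ricker = lambda t0: (1 - 2*(np.pi*25*(t - t0))**2) * np.exp(-(np.pi*25*(t - t0))**2)
d_obs = ricker(1.00)                  # "observed" arrival at 1.00 s
d_mod = ricker(1.06)                  # "modeled" arrival at 1.06 s

# Correlate the traces; the lag of the correlation maximum is the traveltime
# difference between the observed and modeled arrivals.
corr = np.correlate(d_obs, d_mod, mode="full")
lags = (np.arange(corr.size) - (d_mod.size - 1)) * dt
shift = lags[np.argmax(corr)]
print(shift)                          # -0.06: observed arrives 0.06 s earlier
```

A tomographic objective then penalizes such shifts over all source-receiver pairs instead of the raw waveform difference.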
-
Th 03/18, at 4:15pm, in room 36-112
Lorenzo Rosasco, Brain and cognitive sciences, MIT
Spectral Methods for Learning from High Dimensional Data
Abstract: Learning can be described as the problem of making inferences from (possibly small) samples of noisy, high dimensional data. Such a problem is typically ill-posed, and ensuring stability is key to finding models that generalize to new data. In this talk I will discuss a general class of learning techniques that draw on the study of spectral properties of suitably defined, data-dependent matrices. Stability of the methods is achieved via spectral filtering, that is, by discarding components corresponding to small eigenvalues. The efficiency of the approach can be proved using concentration inequalities to study the stability properties of the empirical matrices and their spectra. Such results are dimension independent and suggest that the proposed methods can be used efficiently to learn in high-dimensional problems. Indeed, an empirical analysis shows that spectral filtering methods achieve state-of-the-art performance in a variety of real world applications.
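A minimal sketch of spectral filtering in a kernel regression setting (illustrative data and parameters, not from the talk): form a data-dependent Gaussian kernel matrix, discard eigencomponents with small eigenvalues, and fit noisy samples with the filtered inverse.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = rng.uniform(-3, 3, n)
y = np.sin(X) + 0.2 * rng.standard_normal(n)   # noisy samples of sin

# Gaussian kernel matrix: the data-dependent matrix whose spectrum we filter.
K = np.exp(-(X[:, None] - X[None, :])**2 / 2.0)
evals, evecs = np.linalg.eigh(K)

# Spectral cutoff: keep only components with large eigenvalues; inverting K
# on that subspace is stable, while the full inverse would fit the noise.
keep = evals > 1e-3 * evals.max()
c = evecs[:, keep] @ ((evecs[:, keep].T @ y) / evals[keep])   # filtered K^{-1} y
f_hat = K @ c                                                 # fitted values
```

Replacing the hard cutoff by other filter functions of the eigenvalues recovers Tikhonov (ridge) regularization, iterative Landweber, and related schemes.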
-
Th 03/25: Spring break
-
We 04/07, at 4:15pm, in room 2-105 (special date, time, room)
Liliana Borcea, Computational and Applied Mathematics, Rice University
Source localization in random acoustic waveguides
Abstract: Mode coupling due to scattering by weak random inhomogeneities in waveguides leads to loss of coherence of wave fields at long distances of propagation. This in turn leads to serious deterioration of coherent source localization methods, such as matched field. I will show with analysis and numerical simulations how such deterioration occurs, and introduce a novel incoherent approach for long range source localization in random waveguides. It is based on a special form of transport theory for the incoherent fluctuations of the wave field. We study theoretically the statistical stability of the method and illustrate its performance with numerical simulations. I will also show how it can be used to estimate the correlation function of the random fluctuations of the wave speed.
-
Th 04/08, at 4:15pm, in room 36-112
Mark Lyon, Mathematics and Statistics, U of New Hampshire
Efficient solution of PDEs on complex domains: Fourier Continuation - alternating direction methods
Abstract: A new methodology for the numerical solution of Partial Differential Equations (PDEs) in smoothly bounded domains will be presented. Our algorithms are based on the use of a certain "Fourier Continuation" (FC) method for the resolution of the Gibbs phenomenon in conjunction with well-known alternating direction (AD) strategies. This FC-AD methodology will be demonstrated through a variety of examples including the Heat, Laplace, Wave and Elastic Wave Equations and both Dirichlet and Neumann problems. The high-order algorithms possess the desirable property of unconditional stability for general domains in computational time that grows in an essentially linear manner with the number of unknowns. Some results for nonlinear PDEs will also be presented. The significant improvements that these new algorithms can provide over the computing times required by alternative general-domain solvers will be discussed. In particular, the debilitating spatial "pollution error", which arises as finite-difference and finite-element solvers are applied to the solution of wave propagation problems, is essentially absent from our calculations due to the Fourier basis used in the FC-AD calculations.
-
Th 04/15, at 4:15pm, in room 36-112
Facundo Memoli, Mathematics, Stanford
Characterization, stability and convergence of Hierarchical Clustering Schemes
Abstract: Clustering is one of the most basic data analysis techniques, yet not much is known about the theoretical properties of most clustering schemes. Jon Kleinberg has proved that there exists no clustering scheme that simultaneously satisfies three very natural axioms that he identifies. We prove that in the more relaxed context of hierarchical clustering methods, a set of axioms very similar to Kleinberg's yields existence and uniqueness instead of non-existence.
Other properties of this unique scheme can be established, namely stability and convergence. In order to do this, we represent dendrograms as ultrametric spaces and use tools from metric geometry to quantify the degree to which perturbations in the input metric space affect the result of hierarchical methods.
The unique scheme that we characterize turns out to be single linkage hierarchical clustering, which is commonly criticized for exhibiting the so-called 'chaining effect'. As a refinement of a classical observation by Hartigan, we prove that when the input to our unique scheme comes from sampling an underlying metric measure space in an i.i.d. fashion, our convergence results imply that one asymptotically recovers a certain multiscale decomposition of the support of the underlying probability measure.
In a related vein, hierarchical clustering can be seen as the 0-dimensional version of persistent homology. We show how our stability results extend to k-dimensional persistent homology invariants coming from Vietoris-Rips simplicial constructions.
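To make the dendrogram-as-ultrametric correspondence concrete, the sketch below computes single-linkage merge heights with a Kruskal-style sweep; the resulting matrix U is an ultrametric (it satisfies the strong triangle inequality). This is a generic illustration, not the authors' code.

```python
import numpy as np

def single_linkage_ultrametric(D):
    """Single-linkage merge heights: u(i, j) is the height at which i and j
    first join a common cluster (equivalently, the maximum edge weight on the
    minimum-spanning-tree path between them)."""
    n = D.shape[0]
    parent = list(range(n))
    def find(i):                                  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    members = {i: {i} for i in range(n)}
    U = np.zeros((n, n))
    # Kruskal-style sweep: process pairs in order of increasing distance.
    for dij, i, j in sorted((D[i, j], i, j) for i in range(n) for j in range(i + 1, n)):
        ri, rj = find(i), find(j)
        if ri != rj:
            for a in members[ri]:
                for b in members[rj]:
                    U[a, b] = U[b, a] = dij       # the clusters merge at height dij
            members[ri] |= members.pop(rj)
            parent[rj] = ri
    return U

pts = np.array([0.0, 0.1, 5.0, 5.1])              # two tight pairs, far apart
U = single_linkage_ultrametric(np.abs(pts[:, None] - pts[None, :]))
print(U[0, 1], U[0, 2])                           # 0.1 within a pair, 4.9 across pairs
```

Reading off the connected components of the sublevel relation U <= r for each height r recovers the dendrogram.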
-
Th 04/22, at 4:15pm, in room 36-112
David Colton, Mathematics, University of Delaware
Transmission Eigenvalues and Inverse Scattering Theory
Abstract: The transmission eigenvalue problem is a new class of non-selfadjoint eigenvalue problems that first appeared in inverse scattering theory. This problem can be viewed as the dual of the well known "cloaking problem" where now, for a given inhomogeneous medium, one seeks an incident wave for which the inhomogeneous medium is invisible, i.e. there is no scattered field. It can be shown that this can occur for at most a discrete set of values of the wave number, and such values are called transmission eigenvalues. It has only recently been shown that for a non-absorbing medium real transmission eigenvalues can exist and that these eigenvalues can be determined from a knowledge of the far field pattern of the scattered wave. Through the derivation of Faber-Krahn type inequalities for transmission eigenvalues one can obtain estimates on the index of refraction of the medium, thus opening up new possibilities for investigating the inverse scattering problem for both acoustic and electromagnetic waves. It can further be shown that for a spherically stratified medium the transmission eigenvalues uniquely determine the index of refraction up to a normalizing constant. This talk will provide a brief survey of the above results as well as the formulation of open problems whose solution is necessary for further progress.
-
Th 04/29, at 4:15pm, in room 36-112
Birsen Yazici, Mathematics, RPI
Passive Synthetic Aperture Imaging
Abstract: I will present novel synthetic-aperture imaging modalities for radar/sonar that rely on sources of opportunity. In particular, I will present two new modalities, which we refer to as "hitchhiker" and "Doppler hitchhiker". Both modalities involve receivers only and use sources of opportunity, such as TV, radio, or WiFi signals (or ambient sound), to form an image. While hitchhiker uses delayed and correlated measurements and is suitable for wideband sources of opportunity, Doppler hitchhiker uses scaled and correlated measurements and can form high resolution images using single-frequency sources of opportunity. I will derive forward maps in the form of Fourier integral operators for both modalities, present the underlying imaging geometry, and describe image formation using microlocal techniques.
-
We 05/05, at 4:00pm, in room 32-D463 (Stata center, D tower.)
Joint with the CSAIL Brains & Machines Seminar Series.
Mauro Maggioni, Mathematics, Duke
Intrinsic dimensionality estimation and multiscale geometry of data sets
Abstract: The analysis of large data sets, modeled as point clouds in high dimensional spaces, is needed in a wide variety of applications such as recommendation systems, search engines, molecular dynamics, machine learning, and statistical modeling, just to name a few. Oftentimes it is claimed or assumed that many data sets, while lying in high dimensional spaces, indeed have a low-dimensional structure. It may come as a surprise that only very few, and rather sample-inefficient, algorithms exist to estimate the intrinsic dimensionality of these point clouds. We present a recent multiscale algorithm for estimating the intrinsic dimensionality of data sets, under the assumption that they are sampled from a rather tame low-dimensional object, such as a manifold, and perturbed by high dimensional noise. Under natural assumptions, this algorithm can be proven to estimate the correct dimensionality with a number of points that is merely linear in the intrinsic dimension. Experiments on synthetic and real data will be discussed. Furthermore, this algorithm opens the way to novel algorithms for exploring, visualizing, compressing and manipulating certain classes of high-dimensional point clouds.
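The multiscale principle can be illustrated with local singular values: for points sampled from a 2-d plane embedded in R^10 with small noise, the singular values in the tangent directions grow with the scale r while the noise directions stay flat, so counting the dominant singular values at moderate scales recovers the intrinsic dimension. Everything below (dimensions, noise level, thresholds) is an illustrative assumption, not the speaker's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n, D, d = 2000, 10, 2                    # points, ambient dim, intrinsic dim
coords = rng.standard_normal((n, d))     # points on a 2-d plane...
basis = np.linalg.qr(rng.standard_normal((D, d)))[0]
X = coords @ basis.T + 0.01 * rng.standard_normal((n, D))   # ...embedded, plus noise

# Around one reference point, take the singular values of the local point
# cloud at several scales r and count the dominant ones at each scale.
x0 = X[0]
dist = np.linalg.norm(X - x0, axis=1)
ests = []
for r in [0.3, 0.6, 1.2]:
    nb = X[dist < r] - x0
    s = np.linalg.svd(nb, compute_uv=False) / np.sqrt(len(nb))
    ests.append(int((s > 0.2 * s[0]).sum()))
print(ests)   # the estimate 2 should dominate at moderate scales
```

The multiscale view is what makes this robust: at too small a scale the noise masquerades as extra dimensions, at too large a scale curvature would, but across intermediate scales the correct count persists.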
-
Th 05/13, at 4:15pm, in room 36-112
Eric Miller, ECE, Tufts
Geometric Methods for Inverse Problems and Image Segmentation
Abstract: One of the most fundamental problems in image processing is that of segmentation, broadly defined as determining the structure of objects in a given scene. In almost all application areas where multidimensional data are acquired, some portion of the processing pipeline requires the identification and quantification of specific regions in the field of regard, either as the ultimate solution to the underlying problem or as an intermediate step toward the extraction of higher-level information. While many methods have been proposed in the last 30 years for solving such problems, here we concentrate on the use of level-set methods. These techniques have attracted significant attention both for their mathematical elegance and for their ability to identify disconnected areas easily. We will start by discussing level-set basics and then move on to cover some extensions we have developed to improve the performance of such methods in addressing challenges arising in their application both to inverse problems and to image segmentation. In the case of inverse problems, we examine the use of low-order, parametrically defined level-set functions for application to severely ill-posed problems (electrical resistance tomography for subsurface contaminant remediation) and problems where one is interested in recovering small anomalies embedded in a cluttered background (dual-energy X-ray CT for luggage inspection). Additionally, motivated by a problem in electron microscopy segmentation, we will discuss our recent work in building "active ribbons" out of multiple level sets.
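As a toy illustration of region-based level-set segmentation (a stripped-down Chan-Vese-style update with the curvature regularization omitted; not the speaker's method), the sketch below evolves a level-set function phi so that the region {phi > 0} captures a bright disk in a noisy synthetic image.

```python
import numpy as np

rng = np.random.default_rng(5)
yy, xx = np.mgrid[0:64, 0:64]
truth = (xx - 32)**2 + (yy - 32)**2 < 15**2          # bright disk on dark background
I = truth + 0.1 * rng.standard_normal((64, 64))      # noisy synthetic image

# The segmented region is {phi > 0}. Start from a deliberately misplaced
# circle and evolve phi with the region-competition term
# (I - c_out)^2 - (I - c_in)^2, which moves each pixel toward the region
# whose mean intensity it matches better.
phi = 10.0 - np.sqrt((xx - 20.0)**2 + (yy - 20.0)**2)
for _ in range(200):
    inside = phi > 0
    c_in, c_out = I[inside].mean(), I[~inside].mean()
    phi += 0.5 * ((I - c_out)**2 - (I - c_in)**2)
seg = phi > 0
```

Because the region is defined implicitly by the sign of phi, it can split and merge freely during the evolution, which is exactly the ease with disconnected areas noted in the abstract.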