Room 2-449 (unless otherwise noted)
Wednesday 4:30 PM - 5:30 PM (unless otherwise noted)
The NMPDE seminar covers numerical and data-driven methods for solving differential equations and modeling physical systems. To receive seminar announcements and Zoom links, please write to yfa@mit.edu.
March 6: Pengning Chao (MIT Math)
A general framework for computing fundamental limits in photonics inverse design
Advances in computing power and numerical algorithms have led to a paradigm shift in photonics engineering towards inverse design: achieving desired functional characteristics by solving a PDE-constrained optimization problem over a large number of structural parameters. This approach has been very fruitful, with increases in device performance often measured in orders of magnitude. However, the resulting structures can be highly complex, and a natural question is to what extent further improvement is possible. Unfortunately, the high-dimensional, non-convex nature of the PDE-constrained optimization precludes the exact determination of its global optimum. To address this issue, this talk presents a general framework for computing fundamental limits to photonics device performance based on a systematic convex relaxation of the original optimization. The efficacy of the framework is demonstrated on canonical problems such as maximizing scattering cross-sections and manipulating the photonic local density of states. Further extensions and improvements will also be discussed. Based on joint work with the groups of Sean Molesky and Alejandro Rodriguez.
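As a rough illustration of the relaxation idea (a generic sketch, not necessarily the exact formulation used in this work): many photonics design objectives can be cast as quadratically constrained quadratic programs over an unknown field x, and lifting the unknown to a matrix variable yields a convex semidefinite program,

\[
\max_{x \in \mathbb{C}^n} \; x^\dagger A x \;\;\text{s.t.}\;\; x^\dagger B_j x \le c_j
\quad\longrightarrow\quad
\max_{X \succeq 0} \; \operatorname{Tr}(A X) \;\;\text{s.t.}\;\; \operatorname{Tr}(B_j X) \le c_j,
\]

where the relaxation drops the rank-one condition X = x x†. Since every feasible x yields a feasible X, the relaxed optimum upper-bounds every physically realizable design, which is what makes it a fundamental limit.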
April 3: Chris Rackauckas (JuliaHub)
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Machine Learning
The dream is to be able to write down a symbolic expression of a PDE and just get a solution, and to get that solution efficiently. The question is: how do we effectively build a software ecosystem to achieve this goal, and what remaining mathematical problems must be solved to fill the gaps? In this talk we will discuss the Julia SciML ecosystem's approach to a general PDE solving framework. We will focus on five aspects: (1) a symbolic structural hierarchical classification of PDEs for shuttling high-level PDE descriptions to appropriate solution approaches, (2) new time-stepping methods for accelerating the solution of semi-discretized equations and generalizing approaches from PDEs to structured ODE forms, (3) symbolic-numeric approaches for automating the transformation of semi-discretizations into simpler and more numerically stable equations before solving, (4) new improvements to adjoint methods to decrease the memory requirements for differentiation of PDE solutions, and (5) scientific machine learning approaches to generate accelerated approximations (surrogates) to PDE simulators. We will demonstrate these methods with open-source software that starts from symbolic expressions and solves industrial-scale PDEs with minimal lines of code.
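For a taste of this symbolic-to-numeric workflow, here is a minimal sketch in the spirit of the SciML documentation, writing a 1D heat equation symbolically and semi-discretizing it with MethodOfLines.jl (API details may vary across package versions):

```julia
using ModelingToolkit, MethodOfLines, OrdinaryDiffEq, DomainSets

@parameters t x
@variables u(..)
Dt = Differential(t)
Dxx = Differential(x)^2

# Heat equation u_t = u_xx with Dirichlet boundaries and a sine initial condition.
eq = Dt(u(t, x)) ~ Dxx(u(t, x))
bcs = [u(0, x) ~ sin(pi * x),
       u(t, 0) ~ 0.0,
       u(t, 1) ~ 0.0]
domains = [t ∈ Interval(0.0, 1.0),
           x ∈ Interval(0.0, 1.0)]

@named pdesys = PDESystem([eq], bcs, domains, [t, x], [u(t, x)])

# Finite-difference semi-discretization in x; the result is an ODEProblem in t.
discretization = MOLFiniteDifference([x => 0.05], t)
prob = discretize(pdesys, discretization)
sol = solve(prob, Tsit5())
```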
May 1: Oswald Knoth (Leibniz Institute for Tropospheric Research)
Finite element and finite volume discretization of the shallow water equation on the sphere
There is ongoing research into the design of numerical methods for numerical weather prediction, connected with increasing spatial resolution and the intensive use of graphics processing units (GPUs). The shallow water equation is a prototype for testing numerical methods in atmospheric and ocean sciences. I will describe different numerical schemes for solving this equation on the sphere with different grid types, from triangular to fully unstructured polygonal grids. The focus lies on finite element and finite volume discretizations with a staggered or collocated arrangement of the unknowns. The implementation is done in the Julia language, whereby the different grids are described within the same data structure.
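For reference, the rotating shallow water equations in their standard advective form read

\[
\partial_t h + \nabla \cdot (h\,\mathbf{u}) = 0, \qquad
\partial_t \mathbf{u} + (\mathbf{u} \cdot \nabla)\mathbf{u} + f\,\mathbf{k} \times \mathbf{u} = -g\,\nabla h,
\]

where h is the fluid depth, u the horizontal velocity tangent to the sphere, f the Coriolis parameter, k the local vertical, and g the gravitational acceleration.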
In more detail, I will outline the implementation of a spectral continuous Galerkin method on conforming quad grids. The implementation follows the HOMME DyCore and uses the packages MPI.jl and KernelAbstractions.jl for running the code in parallel on GPUs.
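As a hedged sketch of the portability that KernelAbstractions.jl affords (illustrative only, not code from the talk; the API has shifted somewhat across versions), a single kernel definition can target CPU or GPU backends:

```julia
using KernelAbstractions

# One kernel definition, runnable on CPU arrays and GPU arrays alike.
@kernel function scale!(A, a)
    I = @index(Global)
    @inbounds A[I] *= a
end

A = ones(Float32, 1024)
backend = get_backend(A)   # CPU here; a CuArray would select the CUDA backend
kernel! = scale!(backend)
kernel!(A, 2.0f0; ndrange = length(A))
KernelAbstractions.synchronize(backend)
```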
May 8: Ahmad Peyvan (Brown University)
Modeling High-Speed Flows: Approaches for Hypersonic Flows with Strong Shocks, Real Chemistry, and Interpretable Neural Operators for Riemann Problems
In this talk, we first explore the development of a high-order discontinuous Galerkin Spectral Element Method (DGSEM) for solving hypersonic flows. Various high-order methods, such as Spectral Difference (SD) and Flux Reconstruction (FR), are compared to select the most suitable method for simulating the strong shock waves typical of hypersonic flows. Through simulations of the three-species Sod problem with simplified chemistry, we find that the entropy-stable DGSEM scheme exhibits superior stability and minimal numerical oscillations, and requires less computational effort for resolving reactive flow regimes with strong shock waves. Consequently, we extend this scheme to the hypersonic Euler equations by deriving new entropy conservative fluxes for a five-species gas model.
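For context, a two-point numerical flux f* is entropy conservative in Tadmor's sense if, for the entropy variables v = η'(u) and flux potential ψ = vᵀ f(u) − q(u) associated with an entropy pair (η, q), it satisfies

\[
(v_R - v_L)^\top f^{*}(u_L, u_R) = \psi_R - \psi_L .
\]

The new fluxes mentioned above are, presumably, conditions of this type worked out for the five-species thermodynamics.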
Secondly, we focus on interpretable neural operators capable of learning Riemann problems, particularly those involving strong shock waves. We utilize neural operators for compressible flows with extreme pressure jumps to address the challenge of accurately simulating high-speed flows with various discontinuities. We employ DeepONet, trained in a two-stage process, which significantly enhances accuracy, efficiency, and robustness compared to the vanilla version. This modified approach allows for a physical interpretation of results and accurately reflects flow features. We also compare results with another neural operator based on U-Net, which proves accurate for Riemann problems, especially with large pressure ratios, albeit computationally more intensive. Our study showcases the potential of simple neural network architectures, when properly trained, in achieving precise solutions for real-time forecasting of Riemann problems.
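As a minimal sketch of the vanilla DeepONet structure (the two-stage training refinement discussed in the talk is not shown; the sizes m, d, p below are hypothetical), the operator is approximated as an inner product of a branch network acting on the sampled input function and a trunk network acting on the query point:

```julia
using Flux

# Hypothetical sizes: m sensor points for the input function,
# d query dimensions, p latent basis functions.
m, d, p = 100, 1, 64
branch = Chain(Dense(m => 128, tanh), Dense(128 => p))  # encodes u at sensors
trunk  = Chain(Dense(d => 128, tanh), Dense(128 => p))  # encodes query point y

# G(u)(y) ≈ Σ_k b_k(u) t_k(y): dot product of branch and trunk features.
deeponet(u, y) = vec(sum(branch(u) .* trunk(y); dims = 1))

u_batch = randn(Float32, m, 32)   # 32 sampled input functions
y_batch = randn(Float32, d, 32)   # one query point per sample
out = deeponet(u_batch, y_batch)  # 32 predicted solution values
```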
May 15: Mohammed Alhashim (Harvard University)
Towards designing the flow behavior of complex fluids using automatic differentiation
Automatic differentiation is a key driver of the recent innovations in machine learning, allowing for deep learning of complex neural network architectures with billions of parameters. While open-source libraries like JAX, TensorFlow, or PyTorch have greatly simplified the implementation of this technique to compute gradients of computer programs, its application to PDE-constrained optimal control problems, a mathematical structure prevalent in science, remains relatively little known within the scientific community. This presentation aims to shed light on automatic differentiation: what it entails, and its recent applications in expediting direct numerical solvers, formulating innovative models, and targeting intricate flows. A special focus will be directed towards the utilization of automatic differentiation in the design of complex flows. While breakthroughs in direct numerical simulation over the last century have substantially deepened our understanding of the fundamental physics governing the motion of particles or objects within fluids, the task of designing or optimizing flows for specific applications remains a formidable challenge. This challenge stems from the necessity of solving high-dimensional optimization problems, making it an intriguing application for automatic differentiation.
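As a toy illustration of the idea (a hedged sketch using Zygote.jl rather than the libraries named above, and not the speaker's setup), reverse-mode AD can differentiate straight through a time-stepping PDE solve, giving the sensitivity of a flow objective to a control parameter in one call:

```julia
using Zygote

# Explicit finite-difference heat solve; the control is a source amplitude a.
function objective(a; n = 50, steps = 100, dt = 1e-4)
    dx = 1 / (n + 1)
    src = sin.(pi .* (1:n) .* dx)      # fixed source shape; a scales it
    u = zeros(n)
    for _ in 1:steps                    # forward time stepping
        up = [u[2:end]; 0.0]            # homogeneous Dirichlet padding
        um = [0.0; u[1:end-1]]
        u = u .+ dt .* ((up .- 2 .* u .+ um) ./ dx^2 .+ a .* src)
    end
    sum(abs2, u .- 0.1 .* src)          # mismatch with a target profile
end

grad_a = Zygote.gradient(objective, 1.0)[1]  # d(objective)/da via reverse mode
```

The same pattern, applied to a real Navier-Stokes solver, is what turns flow design into a gradient-based optimization problem.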