Subscribe to the mailing list to receive talk announcements
For more information, contact Laurent Demanet
Fall 2025
Fall semester talks are held 4:30pm-5:30pm in room 2-190
| Date | Speaker | Abstract |
|---|---|---|
| September 11 | Shay Moran | Differentially Private Linear Algebra. Abstract: Differential privacy (DP) has emerged as a powerful framework for designing algorithms that protect sensitive data. In this talk, I will present our work at the intersection of differential privacy and linear algebra, introducing efficient DP algorithms for fundamental algebraic tasks: solving systems of linear equations over arbitrary fields, solving linear inequalities over the reals, and computing affine spans and convex hulls. Our algorithms for equalities are strongly polynomial, while those for inequalities are only weakly polynomial, and this gap is provably inherent. As applications, we obtain the first efficient DP algorithms for learning halfspaces and affine subspaces. The talk will not assume prior familiarity with differential privacy; I will begin with a review of the definition. |
| October 9 | Eitan Tadmor | Swarm-Based Gradient Descent: A Multi-Agent Approach for Non-Convex Optimization. Abstract: We discuss a novel class of swarm-based gradient descent (SBGD) methods for non-convex optimization. The swarm consists of agents, each identified with a position $x$ and a mass $m$. There are two key ingredients in the SBGD dynamics. The interplay between positions and masses leads to a dynamic distinction between 'leaders' and 'explorers': heavier agents lead the swarm near local minima with small time steps, while lighter agents use larger time steps to explore the landscape in search of an improved global minimum, reducing the overall 'loss' of the swarm. Convergence analysis and numerical simulations demonstrate the effectiveness of the SBGD method as a global optimizer. (An illustrative toy sketch of such swarm dynamics appears after the table.) |
| October 24 | George Barbastathis | Humans and AI in the physical world: some ongoing work and future opportunities. Abstract: The 2024 Nobel Prizes in Physics and Chemistry were both awarded in the field of Artificial Intelligence (AI). John J. Hopfield and Geoffrey E. Hinton pioneered most of the machine learning methods that we nowadays take for granted, and established “the physicist’s way of thinking” in developing and understanding early neural networks. David Baker, Demis Hassabis and John M. Jumper used advanced computational tools, including contemporary supervised and unsupervised learning with built-in chemical principles, to model and design complex protein structures: one of the most vexing problems in the life sciences. Were the back-to-back awards a coincidence or planned? |
| November 6 | Javier Gomez-Serrano | AI-Driven Mathematical Discovery: Singularities, Algorithms, and Beyond. Abstract: Machine learning is transforming mathematical discovery, enabling advances on longstanding open problems. This talk explores two complementary approaches illustrating different paradigms for AI and mathematics. |
| November 20 | Mike O'Neil | Fast Direct Solvers: Foundations and Challenges. Abstract: Fast Direct Solvers (FDS) address the problem of solving a system of linear equations $A x = b$ arising from the discretization of either an elliptic PDE or an associated integral equation. The matrix $A$ will be sparse when the PDE is discretized directly, and dense when an integral equation formulation is used. For decades, industry practice for large-scale problems has been to use iterative solvers such as multigrid, GMRES, or conjugate gradients. In contrast, a direct solver builds an approximation to the inverse of $A$ or an easily invertible factorization of $A$ (e.g., LU or Cholesky). A major development in numerical analysis over the last couple of decades has been the emergence of algorithms for constructing such factorizations or performing such inversions in linear or close-to-linear time. Such methods must necessarily exploit the fact that the inverse of $A$ is "data-sparse," e.g., that it can be tessellated into blocks that have low numerical rank. This talk will cover the development of FDS for both sparse and dense matrices, recent developments in the field, as well as future challenges and opportunities. (A small numerical illustration of this data-sparsity appears after the table.) |
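
The following is a minimal toy sketch (in Python/NumPy) of the kind of swarm dynamics described in the Tadmor abstract above: agents carry positions and masses, heavier agents take smaller gradient steps, and mass drifts toward the agent with the lowest loss. The objective, step-size rule, and mass-transfer rule here are illustrative placeholders, not the actual SBGD scheme from the paper.

```python
import numpy as np

# Toy non-convex objective (a 1D Rastrigin-like function) and its gradient.
def loss(x):
    return x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

def grad(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

rng = np.random.default_rng(0)
n_agents = 20
x = rng.uniform(-5.0, 5.0, n_agents)    # agent positions
m = np.full(n_agents, 1.0 / n_agents)   # agent masses, summing to 1

h0 = 0.02        # base step size (arbitrary tuning choice)
transfer = 0.05  # fraction of mass shifted per step (arbitrary tuning choice)

for step in range(500):
    best = np.argmin(loss(x))

    # Heavier agents ("leaders") take smaller steps near local minima;
    # lighter agents ("explorers") take larger steps and roam the landscape.
    h = h0 * (1.0 - m)
    x = x - h * grad(x)

    # Shift a small fraction of every agent's mass toward the current best agent,
    # so the best agent gradually becomes the heaviest "leader".
    shifted = transfer * m
    m = m - shifted
    m[best] += shifted.sum()

print("best position found:", x[np.argmin(loss(x))])
print("best loss found:", loss(x).min())
```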
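As a small numerical illustration of the "data-sparse inverse" property mentioned in the O'Neil abstract above, the sketch below (Python/NumPy; the grid size and tolerance are arbitrary choices) discretizes a simple 1D second-difference operator, forms its dense inverse, and reports the numerical rank of an off-diagonal block of that inverse. In this 1D model problem the off-diagonal blocks of the inverse are exactly rank one; in higher dimensions the ranks grow, but remain small relative to the block size, which is what fast direct solvers exploit. Actual fast direct solvers never form the dense inverse as done here; they build compressed factorizations directly.

```python
import numpy as np

n = 400
# Standard second-difference discretization of -u'' on a uniform grid
# with homogeneous Dirichlet boundary conditions (a simple elliptic model problem).
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A *= (n + 1) ** 2               # scale by 1/h^2

Ainv = np.linalg.inv(A)         # dense inverse (affordable only at this toy size)

# Off-diagonal block of the inverse: rows from the first half of the grid,
# columns from the second half.
block = Ainv[: n // 2, n // 2 :]

# Numerical rank of the block at a modest relative tolerance.
s = np.linalg.svd(block, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))
print("block shape:", block.shape)
print("numerical rank at relative tolerance 1e-10:", rank)
```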