Ten lectures by mathematics faculty members on interesting topics from both classical and modern mathematics. All lectures are accessible to students with a calculus background and an interest in mathematics. At each lecture, reading and exercises are assigned. Students prepare these for discussion in a weekly problem session.
Organizer: Prof. Alan Edelman
Teaching assistant: Minjae Park (minj@mit.edu)
Lectures are held MWF 1-2:30pm in room 2-190.
Problem sets are due each Friday by 4pm and must be turned in at the pset boxes next to 4-174 (to the right of HQ). They will be posted on this site shortly after each lecture.
Recitations are held every Thursday at 10:30am and 1pm in room 2-131 (both sessions cover the same material).
Office hours are Tuesday 10-11am in room 2-361.
Grading is strictly P/D/F. To receive a passing grade, you must attend lectures and demonstrate solid effort on the problem sets.
Monday, January 6: Prof. Haynes Miller
Knots and Numbers
Abstract: Is a granny knot really different from a square knot? I'll explain how a mathematician addresses this question, and then focus attention on the related but less knotty theory of tangles. We'll end with a verification of a theorem of Horst Schubert by means of a square dance.
Lecture note:
Exercises:
PSET PDF due Jan 10
Wednesday, January 8: Prof. David Vogan
Sixty-six miles per hour: breaking the commutative law by just a bit
Abstract: One of the things that makes arithmetic easy is the commutative law. You've learned by now to manage without commutativity when you multiply matrices, where AB has very little to do with BA. I'll talk about places where AB is only a little different from BA, and some of the things that this lets you do.
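As a concrete warm-up (my own illustration, not necessarily the lecture's example), the $n \times n$ clock and shift matrices satisfy $CS = \omega SC$ with $\omega = e^{2\pi i/n}$, so for large $n$ the two products differ by just a bit:

```python
import numpy as np

n = 8
omega = np.exp(2j * np.pi / n)

# Clock matrix: diagonal of n-th roots of unity.
C = np.diag(omega ** np.arange(n))
# Shift matrix: cyclic permutation sending e_j to e_{j+1}.
S = np.roll(np.eye(n), 1, axis=0)

# CS = omega * SC: the commutative law fails by exactly the phase omega.
print(np.allclose(C @ S, omega * (S @ C)))  # True
```

As $n$ grows, the phase $2\pi/n$ shrinks to zero, so $AB$ and $BA$ agree ever more closely.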
Exercises:
PSET PDF due Jan 10
Friday, January 10: Dr. Chris Rackauckas
Incorporating Scientific Knowledge for Data-Efficient Machine Learning
Abstract: A lot of machine learning research has focused on developing AI systems that "learn from scratch". However, this has turned out to be a fairly difficult task that requires a large amount of data. Additionally, in science, we don't need to learn from scratch because we already have models of how biological/physical/mechanical systems work, so could we use this knowledge to pretrain our machine learning in such a manner that it needs to learn less? This is the topic of scientific machine learning, or physics-informed learning. In this lecture I will walk through the use of neural differential equation techniques to incorporate known biophysical models into an AI toolchain, allowing one to augment data with structural knowledge. Automatic discovery of differential equation models with accurate extrapolation from a handful of data points is shown to be possible with these kinds of techniques. After showcasing some of the utility of these methods, we will briefly peel back the hood and see what kinds of mathematical methodologies are employed to make training these mixed systems efficient, and what research still needs to be done.
Lecture slides:
LECTURE PPT (the paper discussed in lecture just appeared on arXiv!)
Exercises: (Due Jan 17 - either email me the result or print it)
Only one problem. Get DiffEqFlux.jl running (https://github.com/JuliaDiffEq/DiffEqFlux.jl) and train the basic neural ordinary differential equation example (https://github.com/JuliaDiffEq/DiffEqFlux.jl#neural-ordinary-differential-equations). Optional: start incorporating a research system into a scientific ML toolchain. If interested in pursuing this research, please follow up after the course.
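If you would like a language-agnostic picture of the idea before installing Julia, here is a deliberately tiny Python sketch (my own toy setup, not the DiffEqFlux.jl API): the "network" inside the ODE is a single parameter $w$ in $dx/dt = wx$, fit by gradient descent to data generated from the true rate $w = -0.5$.

```python
import numpy as np

# Data from the true dynamics dx/dt = -0.5 x with x(0) = 1.
ts = np.linspace(0.0, 2.0, 21)
data = np.exp(-0.5 * ts)

def integrate(w, dt=0.1, steps=20):
    # Forward Euler solve of dx/dt = w * x: a one-parameter "neural ODE".
    xs = [1.0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * w * xs[-1])
    return np.array(xs)

def loss(w):
    return np.sum((integrate(w) - data) ** 2)

w, lr, h = 0.0, 0.005, 1e-6
for _ in range(1000):
    grad = (loss(w + h) - loss(w - h)) / (2 * h)  # finite-difference gradient
    w -= lr * grad

# w now recovers a rate close to the true -0.5.
```

A real neural ODE replaces the single parameter by a neural network and the finite-difference gradient by adjoint sensitivity analysis; the fitting loop is the same shape.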
Monday, January 13: Prof. Scott Sheffield
Tug of War and Infinity Laplacian
Abstract: I will discuss several games whose analysis involves interesting mathematics. First, in the mathematical version of tug of war, play begins at a game position $x_0$. At each turn a coin is tossed, and the winner gets to move the game position to any point within $\epsilon$ units of the current point. (One can imagine the two players holding a rope, where the "winner" of the coin toss is the one who gets a foothold and then has the chance to pull one step in any desired direction.) Play ends when the game position reaches a boundary set, and player two pays player one the value of a "payoff function" defined on the boundary set.
So... what is the optimal strategy? How much does player one expect to win (in the $\epsilon \to 0$ limit) when both players play optimally? We will answer this question and also explain how this game is related to the "infinity Laplacian," to "optimal Lipschitz extension theory" and to a random turn version of a game called Hex.
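In one dimension the answer is easy to see. On $[0,1]$ with payoff 0 at the left endpoint and 1 at the right, each player's optimal move is a full step toward their own boundary, so optimal play is a fair $\pm\epsilon$ random walk, and by the gambler's-ruin argument the value at $x$ is simply $x$: the linear (here, infinity-harmonic) extension of the boundary values. A Monte Carlo sketch of this (my illustration, working in integer steps $k$ of $n$ so the walk hits the boundary exactly):

```python
import random

def play(k, n, trials=20000, seed=0):
    # Optimal 1-D tug of war from position k/n: player one pulls right,
    # player two pulls left, so the position is a fair +/-1 step walk
    # on {0, 1, ..., n}, absorbed at the endpoints.
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        pos = k
        while 0 < pos < n:
            pos += 1 if rng.random() < 0.5 else -1
        wins += (pos == n)
    return wins / trials

print(play(6, 20))  # close to 6/20 = 0.3: the value at x is simply x
```

In higher dimensions no such elementary argument is available, which is where the infinity Laplacian enters.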
Lecture slides:
Exercises: (Due Jan 17)
1. Read the paper at https://arxiv.org/pdf/math/0508580.pdf which describes the "Random-turn Hex" and "Team Captains" problems discussed in lecture. Complete the exercise given at the bottom of page 4 and top of page 5 of that paper, where you are asked to extend the optimal strategy description arguments to two other games: "Restaurant Selection" and "Balanced Team Captains."
2. Read the first two sections of the paper at https://arxiv.org/pdf/math/0605002.pdf which describes the tug of war games discussed in lecture. Page 11 contains the phrase "The reader may check that" and page 12 contains the phrase "It is easy to check that." Justify the claims being made in these two places by providing a paragraph of explanation for each one.
Wednesday, January 15: Prof. Philippe Rigollet
Statistical and Computational Optimal Transport
Abstract: Optimal transport is a fundamental concept in probability theory which defines a geometry on the space of measures via optimal couplings. Over the past few years, optimal transport has been applied to images, point clouds, and other objects which can be viewed as probability distributions, leading to breakneck advances across machine learning, computer vision, computer graphics, and computational biology. The increasing scale of modern data presents two challenges: to provide theoretical guarantees for the performance of optimal transport techniques, and to develop faster algorithms for solving optimal transport problems in practice. In this lecture, we investigate both the statistical and computational aspects of optimal transport and highlight an application to single cell genomic data analysis.
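To make the computational side concrete, here is a minimal NumPy sketch (my illustration, not taken from the lecture) of the entropically regularized approach: Sinkhorn iteration alternately rescales the rows and columns of the kernel $K = e^{-C/\epsilon}$ until the coupling has the prescribed marginals.

```python
import numpy as np

# Two discrete distributions supported on five points of [0, 1],
# with squared-distance cost between support points.
x = np.linspace(0.0, 1.0, 5)
a = np.full(5, 0.2)                    # source weights
b = np.full(5, 0.2)                    # target weights
C = (x[:, None] - x[None, :]) ** 2     # cost matrix

eps = 0.2                              # entropic regularization strength
K = np.exp(-C / eps)

# Sinkhorn: alternately rescale rows and columns to match the marginals.
u = np.ones(5)
v = np.ones(5)
for _ in range(2000):
    u = a / (K @ v)
    v = b / (K.T @ u)

P = u[:, None] * K * v[None, :]        # approximate optimal coupling
print(np.allclose(P.sum(axis=1), a), np.allclose(P.sum(axis=0), b))
```

Each iteration costs only matrix-vector products, which is why Sinkhorn scales to the large problems mentioned in the abstract; smaller $\epsilon$ approximates the unregularized problem more closely but converges more slowly.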
Exercises:
PSET PDF due Jan 17
Friday, January 17: Dr. Jeremy Kepner
Mathematics of Big Data & Machine Learning
Abstract: Big Data describes a new era in the digital age where the volume, velocity, and variety of data created across a wide range of fields (e.g., internet search, healthcare, finance, social media, defense, ...) are increasing at a rate well beyond our ability to analyze the data. Machine Learning has emerged as a powerful tool for transforming this data into usable information. Many technologies (e.g., spreadsheets, databases, graphs, linear algebra, deep neural networks, ...) have been developed to address these challenges. The common theme amongst these technologies is the need to store and operate on data as whole collections instead of as individual data elements. This talk describes the common mathematical foundation of these data collections (associative arrays) that applies across a wide range of applications and technologies. Associative arrays unify and simplify Big Data and Machine Learning. Understanding these mathematical foundations allows the student to see past the surface differences among Big Data and Machine Learning applications and technologies, and to leverage their core mathematical similarities to solve the hardest Big Data and Machine Learning challenges.
Supplementary lectures, text, and software are available at: https://mitpress.mit.edu/books/mathematics-big-data
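A toy version of the idea (my own sketch, not the notation of the book linked above): store an associative array as a dict keyed by (row, col) pairs, and define matrix multiplication over it, so the same operation works whether the keys are integers, strings, or anything hashable.

```python
def assoc_matmul(A, B):
    # A, B: associative arrays as {(row_key, col_key): value} dicts.
    # Standard matrix product over arbitrary hashable keys:
    # C[i, j] = sum over k of A[i, k] * B[k, j].
    C = {}
    for (i, k1), a in A.items():
        for (k2, j), b in B.items():
            if k1 == k2:
                C[(i, j)] = C.get((i, j), 0) + a * b
    return C

# Example: (documents x words) times (words x topics).
A = {("doc1", "graph"): 2, ("doc1", "matrix"): 1, ("doc2", "matrix"): 3}
B = {("graph", "math"): 1, ("matrix", "math"): 1}
print(assoc_matmul(A, B))  # {('doc1', 'math'): 3, ('doc2', 'math'): 3}
```

This is exactly how spreadsheets, sparse matrices, and graph adjacency structures can be treated uniformly: they are all associative arrays under different key types.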
Exercises: (Due Jan 24)
Exercise 1: Derive the gradient descent update algorithm for deep neural networks.
Exercise 2: Write this equation (include the sum over errors) in terms of matrix notation.
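Once you have derived the update, you can sanity-check your formula numerically. Here is a sketch for the smallest possible "network" (one weight, one bias, sigmoid output, squared error; my setup, not the one from lecture): the analytic gradient from the chain rule should match a finite-difference estimate.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x=0.7, y=1.0):
    return 0.5 * (sigmoid(w * x + b) - y) ** 2

# Analytic gradient from the chain rule:
# dL/dw = (p - y) * p * (1 - p) * x, with p = sigmoid(w x + b).
w, b, x, y = 0.3, -0.1, 0.7, 1.0
p = sigmoid(w * x + b)
grad_w = (p - y) * p * (1 - p) * x

# Finite-difference check of the derived formula.
h = 1e-6
numeric = (loss(w + h, b) - loss(w - h, b)) / (2 * h)
print(abs(grad_w - numeric) < 1e-8)  # True: the two gradients agree
```

The same check works layer by layer for a deep network, and is a standard way to debug a hand-derived backpropagation update.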
Wednesday, January 22: Prof. Elchanan Mossel
Mathematical aspects of voting
Abstract: I will survey some mathematical aspects of voting, from its paradoxical nature, observed by mathematicians in the 18th century (the early days of modern democracy), to very modern aspects such as gerrymandering, opinion campaigns, and polarization.
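The classical 18th-century paradox (Condorcet's) can be checked in a few lines. With my illustrative three-voter electorate ranking A>B>C, B>C>A, and C>A>B, pairwise majorities form a cycle:

```python
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(p, q):
    # True if a strict majority of ballots rank p above q.
    return sum(b.index(p) < b.index(q) for b in ballots) > len(ballots) / 2

# A beats B, B beats C, and yet C beats A: a majority cycle,
# so this electorate has no Condorcet winner.
print(majority_prefers("A", "B"), majority_prefers("B", "C"),
      majority_prefers("C", "A"))  # True True True
```

Even though each voter's own preferences are perfectly transitive, the majority preference is not.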
Exercises: (Due Jan 24)
Problem 1. Consider three 3-sided dice: $A = (2,4,9), B=(1,6,8), C=(3,5,7)$. Prove that $A$ beats $B$, $B$ beats $C$, and $C$ beats $A$, all with probability $5/9$. (This is an example of the intransitive dice that appear in the story of Warren Buffett and Bill Gates.)
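Before writing the proof, you can confirm the three probabilities by brute force over all nine face pairs (a sanity check, not a substitute for the proof):

```python
from fractions import Fraction

A, B, C = (2, 4, 9), (1, 6, 8), (3, 5, 7)

def beats(X, Y):
    # Exact probability that die X rolls strictly higher than die Y.
    wins = sum(x > y for x in X for y in Y)
    return Fraction(wins, len(X) * len(Y))

print(beats(A, B), beats(B, C), beats(C, A))  # 5/9 5/9 5/9
```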
Friday, January 24: Prof. Paul Seidel
The Lefschetz fixed point number
Abstract: Solomon Lefschetz lost both hands in an industrial accident, successfully retrained from an engineer into a pure mathematician, and made crucial contributions to algebraic geometry and topology. Among them is the Lefschetz fixed point number. A fixed point of a continuous map $f$ is a point $x$ such that $f(x) = x$. The Lefschetz number is a topological quantity which counts (and thereby can be used to prove the existence of) fixed points. The lecture will introduce you to this number and its properties.
Preliminary work: (to be done before the lecture)
Problem 1. Let $f$ be a continuous function, defined for all $x \in [0,1]$, and such that $f(x) \in [0,1]$ for all $x \in [0,1]$. Use the intermediate value theorem to show that there is some $x$ such that $f(x) = x$.
Problem 2. Write down explicit examples of functions $f$, satisfying the properties from Problem 1, which have exactly 1, 2, or 3 fixed points.
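For intuition (not a proof), the intermediate value theorem argument in Problem 1 is effectively constructive: bisect on $g(x) = f(x) - x$, which satisfies $g(0) \geq 0$ and $g(1) \leq 0$. A sketch with my choice $f = \cos$, which maps $[0,1]$ into $[\cos 1, 1] \subset [0,1]$:

```python
import math

f = math.cos  # maps [0, 1] into [cos 1, 1], which sits inside [0, 1]

# Bisect on g(x) = f(x) - x: g(0) >= 0 and g(1) <= 0, so the
# intermediate value theorem gives a zero of g, i.e. a fixed point of f.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) - mid > 0:
        lo = mid
    else:
        hi = mid

print(round(lo, 6))  # 0.739085, the unique fixed point of cos on [0, 1]
```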
Exercises:
PSET PDF due Jan 31
Monday, January 27: Prof. Chenyang Xu
Elliptic function and elliptic curve
Abstract: Elliptic functions were a subject studied by many giants in the history of mathematics. To understand them, mathematicians developed a geometric theory of their geometric counterpart, the elliptic curve. This can arguably be called the starting point of the subject of algebraic geometry. We aim to give a survey of the basic theory.
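To get a first feel for why elliptic curves carry so much structure, here is the chord construction for adding two points on $y^2 = x^3 + ax + b$ (my illustrative example; the generic case of distinct $x$-coordinates only, ignoring doubling and the point at infinity):

```python
def add_points(P, Q, a=-1.0, b=1.0):
    # Chord law on y^2 = x^3 + a x + b, for points with distinct x:
    # the line through P and Q meets the curve in a third point,
    # which is then reflected across the x-axis.
    (x1, y1), (x2, y2) = P, Q
    s = (y2 - y1) / (x2 - x1)          # slope of the chord through P, Q
    x3 = s * s - x1 - x2               # x of the third intersection
    y3 = s * (x1 - x3) - y1            # reflected y-coordinate
    return (x3, y3)

# Both points lie on y^2 = x^3 - x + 1; so does their sum.
P, Q = (0.0, 1.0), (1.0, 1.0)
R = add_points(P, Q)
print(R)  # (-1.0, -1.0), and (-1)^2 == (-1)^3 - (-1) + 1 checks out
```

This operation makes the points of the curve into an abelian group, the fact underlying much of the theory surveyed in the lecture.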
Exercises:
PSET PDF due Jan 31
Wednesday, January 29: Prof. Justin Solomon
Expanding the Scope and Dimensionality of Field Design
Abstract: The design of vector fields and frame fields is a computational problem central to applications in scientific computing and computer graphics. Recent models and algorithms for this problem employ ideas from topology and discrete differential geometry to design fields that are smooth and obey special constraints that arise in applications like quadrilateral remeshing and physical simulation. But, many of these techniques only work for static design problems on two-dimensional surfaces. In this talk, I will describe our efforts to tackle field design problems with broader applications and in dimensions larger than two. Along the way, we will introduce modern tools from computational geometry processing, numerical differential geometry, and optimization.
Exercises: (due Jan 31)
Problem 1. (1) Draw a quadrilateral mesh of the unit disk where all vertices are regular (valence 4 in the interior of the disk, valence 3 on the boundary) with the exception of five valence 3 vertices and one valence 5 vertex. (2) Draw the corresponding orthogonal 4-direction field.