IAP 2018 Classes

Non-credit activities and classes:

Check out the IAP pages at http://web.mit.edu/iap/listings/

For-credit subjects:

Check out the course catalog at http://student.mit.edu/catalog/m18a.html. You can use the Subject Search functionality to limit the search to IAP listings or simply click the “IAP only” option near the top of the page. Our main offerings in Mathematics are:

18.02A Calculus

Dr. Thomas Beck and staff

Dates: Jan. 8 - Feb. 2

Lectures: MTWRF12
Recitation: TR10-11.30 (2-132, 2-136, 2-142) or TR2-3.30 (2-132, 2-136, 2-146)
+final

Room: 54-100

This is the second half of 18.02A and can be taken only by students who took the first half in the fall term; it covers the remaining material in 18.02.

18.095 Mathematics Lecture Series

Lecture: MWF1-2.30
Recitation: R10.30-12 (2-131) or R1-2.30 (2-131)

Room: 2-190

Ten lectures by mathematics faculty members on interesting topics from both classical and modern mathematics. All lectures are accessible to students with a calculus background and an interest in mathematics. At each lecture, reading and exercises are assigned. Students prepare these for discussion in a weekly problem session.

Here is a tentative list of speakers and dates:

  • January 8 Heather Macbeth

    Tangent developables

    Abstract: A piece of paper can be twisted into a cone, or a cylinder. Euler, in 1772, showed that there is a third kind of surface that can be made in this way: the "tangent developables." I will explain what tangent developables are, and show how to find the pieces of paper that twist into them. Bring scissors!

  • January 10 Gil Strang

    Linear algebra and neural networks

  • January 12 Steven Johnson

    The brachistochrone problem and the calculus of variations

  • [January 15 - Martin Luther King Day]

  • January 17 Jeremy Kepner

    Mathematics of Big Data & Machine Learning

    Abstract: Big Data describes a new era in the digital age in which the volume, velocity, and variety of data created across a wide range of fields (e.g., internet search, healthcare, finance, social media, defense, ...) are increasing at a rate well beyond our ability to analyze the data. Machine Learning has emerged as a powerful tool for transforming this data into usable information. Many technologies (e.g., spreadsheets, databases, graphs, linear algebra, deep neural networks, ...) have been developed to address these challenges. The common theme among these technologies is the need to store and operate on data as whole collections instead of as individual data elements. This lecture describes the common mathematical foundation of these data collections, associative arrays, which applies across a wide range of applications and technologies. Associative arrays unify and simplify Big Data and Machine Learning. Understanding these mathematical foundations allows the student to see past the surface differences among Big Data and Machine Learning applications and technologies and to leverage their core mathematical similarities to solve the hardest Big Data and Machine Learning challenges.

  • January 19 Dan Stroock

    Some thoughts about taking square roots

    Abstract: The traditional algorithm for computing square roots is cumbersome. In this lecture, I will discuss an alternative approach, one that involves some elementary number theory.

  • January 22 Juan Pablo Vielma

    Modeling and Solving Discrete Optimization Problems in Practice

    Abstract: Discrete optimization can model a wide range of problems in business, science, and engineering. Some discrete optimization problems can be very computationally challenging in theory. However, solutions of guaranteed high quality can sometimes be found quickly in practice using state-of-the-art solvers. In this lecture we will see how the power of these solvers can be easily accessed through the Julia-based modeling language JuMP. We will also give an overview of the mathematical concepts and tools that underlie the effectiveness of these solvers.

  • January 24 Thomas Beck

    Eigenvalues of the Laplacian on domains

    Abstract: We will define eigenvalues and eigenfunctions of the Laplacian on intervals and two-dimensional domains, and see in particular why they play a role in understanding the vibrations of an elastic string or membrane. In certain cases these eigenfunctions can be computed explicitly, and we will see to what extent the eigenvalues (vibration modes) determine the membrane itself.

  • January 26 Philippe Rigollet

    The Birthday Problem(s)

    Abstract: What is the probability that two students in this class were born on the same day (independently of the year)? This is the birthday problem. We are going to answer this question under the assumption that the probability of being born on a given day is the same for all 365 days of the year. What if this assumption is not true? How many students does it take to test whether it holds? We will also give an algorithm that does so. The class will use elementary probability.

  • January 29 Scott Sheffield

  • January 31 Joern Dunkel

    Brownian dynamics in physics and biology

    Abstract: The dynamics of small particles in fluids affects a wide range of physical and biological phenomena, from sedimentation processes in the oceans to transport of chemical messenger substances between and within microorganisms. After discussing these and other relevant examples, we will introduce the mathematical equations that describe such particle motions and study their solutions for basic test cases.

Titles and abstracts for the remaining talks will follow.
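As a rough illustration of the associative arrays in the Kepner abstract above, here is a minimal sketch in Python. The class name and the (row key, column key) layout are invented for this example; this is not the implementation the lecture will present.

```python
# A minimal sketch of an associative array: a sparse 2-D array whose rows and
# columns are keyed by strings, with element-wise addition over the union of
# keys. (Illustrative only; real Big Data systems use far richer operations.)
class AssocArray:
    def __init__(self, entries=None):
        # entries maps (row_key, col_key) -> numeric value; zeros are not stored
        self.entries = dict(entries or {})

    def __getitem__(self, key):
        return self.entries.get(key, 0)

    def __add__(self, other):
        # Element-wise sum over the union of stored keys
        keys = set(self.entries) | set(other.entries)
        return AssocArray({k: self[k] + other[k] for k in keys})

# Two "document x word" count tables combine by key, not by position:
a = AssocArray({("doc1", "data"): 2, ("doc1", "math"): 1})
b = AssocArray({("doc1", "data"): 3, ("doc2", "math"): 5})
c = a + b
print(c[("doc1", "data")])  # 5
print(c[("doc2", "math")])  # 5
```

The point of the sketch is the "whole collections" theme from the abstract: addition is defined on entire keyed collections at once, so tables with different rows and columns still combine meaningfully.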
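The Stroock abstract does not say which number-theoretic approach the lecture will take; one classical route from elementary number theory to square roots is through continued-fraction convergents, sketched here purely as an illustration of that flavor of argument.

```python
from fractions import Fraction
from math import isqrt

def sqrt_convergents(n, k):
    """Yield the first k continued-fraction convergents of sqrt(n),
    for non-square n, as exact fractions (classical algorithm)."""
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    p_prev, p = 1, a0          # convergent numerators
    q_prev, q = 0, 1           # convergent denominators
    yield Fraction(p, q)
    for _ in range(k - 1):
        # Standard recurrence for the continued fraction of sqrt(n)
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        yield Fraction(p, q)

approx = list(sqrt_convergents(2, 6))
# Convergents 1, 3/2, 7/5, 17/12, 41/29, 99/70 approach sqrt(2)
print(approx[-1], float(approx[-1]))
```

Each convergent is the best rational approximation to sqrt(n) among all fractions with no larger denominator, which is what makes this an appealing alternative to the cumbersome digit-by-digit algorithm.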
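The opening question of the Rigollet abstract has a short computational answer. Under the uniform-365-days assumption stated there, the probability of a shared birthday among n students can be computed directly:

```python
# Probability that, among n students, at least two share a birthday,
# assuming each of the 365 days is equally likely (as in the abstract).
def shared_birthday_prob(n):
    p_distinct = 1.0
    for k in range(n):
        # k people so far have k distinct birthdays; the next person
        # avoids all of them with probability (365 - k) / 365
        p_distinct *= (365 - k) / 365
    return 1.0 - p_distinct

# With 23 students the probability already exceeds 1/2:
print(round(shared_birthday_prob(23), 3))
```

Famously, the crossover happens at n = 23, far smaller than most people guess.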
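A basic test case in the spirit of the Dunkel abstract can be simulated in a few lines. The diffusion constant, step size, and sample count below are arbitrary illustrative values, not taken from the lecture.

```python
import math
import random

# Overdamped Brownian motion of a particle in one dimension:
# x_{k+1} = x_k + sqrt(2*D*dt) * xi_k, with xi_k a standard normal draw.
def brownian_path(n_steps, D=1.0, dt=1e-3, seed=0):
    rng = random.Random(seed)
    x = 0.0
    path = [x]
    for _ in range(n_steps):
        x += math.sqrt(2 * D * dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Diffusive scaling: the mean squared displacement grows like 2*D*t.
# Here t = 1000 * 1e-3 = 1.0, so the MSD should be near 2.0.
paths = [brownian_path(1000, seed=s)[-1] for s in range(500)]
msd = sum(x * x for x in paths) / len(paths)
print(msd)
```

Averaging the squared endpoint over many independent paths recovers the hallmark of diffusion mentioned in the abstract: displacement grows like the square root of time, not linearly.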

18.S096 Special Subject in Mathematics: Performance Computing in a High Level Language

Professors Steven G. Johnson and Alan Edelman

Dates: Jan. 9 - Jan. 26

Lecture: TWRF2-4

Room: 2-135

Many programmers are familiar with high-level dynamic/interactive computer languages such as Python, R, or Matlab. Traditionally, such languages approach the computer at a high level of abstraction, and performance optimization is mainly a matter of finding fast “black-box” library routines. In this course, we bridge the gap between high-level “dynamic” languages and what is really happening at a low level. Using a new language called Julia, we show how one can simultaneously write high-level, generic, interactive programs that are also optimized for performance, and which implement their own “inner loops” without relying on external libraries.

Topics include how program objects are represented in memory (types, “boxes,” registers, etc.), processor architectures, memory locality, metaprogramming and moving computations from runtime to compile time, parallel computing, sparse and dense linear algebra, machine learning, GPU programming, and applications of numerical analysis.

Students should be comfortable with programming.

18.S998 Special Subject in Mathematics: Building efficient numerical simulations of multiscale physics

Drs. Aimé Fournier and Philippe Ricoux

Dates: Jan. 9 - Feb. 2

Lecture: TWR10-12

Room: 2-146

Numerical analysis of partial differential equations governing physical phenomena having features that cover a wide range of space and time scales, and may also be spatiotemporally localized, such as waves, pulses, and fronts. Topics include: what do "scale" and "local" mean; how well are such features resolved by the best numerical methods (finite difference, finite element, spectral element etc.); how is efficient high-performance computation achieved in practice (domain decomposition, mesh refinement, solver factorization etc.); and how do spectral and wavelet analyses inform these questions. Application examples include: reservoir modeling (multi-phase Darcy flow in anisotropic porous media); fluid cracking catalysis; and atmospheric fluid dynamics.
Prerequisite: undergraduate applied mathematics at the level of 2.087 or equivalent.