IAP 2017 Classes
Non-credit activities and classes:
Check out the IAP pages at http://web.mit.edu/iap/listings/
For-credit subjects:
Check out the course catalog at http://student.mit.edu/catalog/m18a.html. You can use the Subject Search functionality to limit the search to IAP listings. Our main offerings in Mathematics are:
18.02A Calculus
Dr. Norbert Stoop and staff
This is the second half of 18.02A and can be taken only by students who took the first half in the fall term; it covers the remaining material in 18.02.
18.031 System Functions and the Laplace Transform
Dr. Norbert Stoop
Studies basic continuous control theory as well as representation of functions in the complex frequency domain. Covers generalized functions, unit impulse response, and convolution; and Laplace transform, system (or transfer) function, and the pole diagram. Includes examples from mechanical and electrical engineering. Check out the course website at http://math.mit.edu/~stoopn/18.031/.
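As a small illustration (with arbitrary example values, not taken from the course materials), here is how the transfer function and pole diagram of a damped spring-mass system can be computed in a few lines of Julia:

```julia
# For m*x'' + b*x' + k*x = f(t) with rest initial conditions, the system (transfer)
# function is H(s) = 1 / (m*s^2 + b*s + k), and the pole diagram is the set of roots
# of the denominator. The values of m, b, k are arbitrary, chosen only for illustration.
m, b, k = 1.0, 2.0, 10.0
disc  = complex(b^2 - 4 * m * k)                    # discriminant, allowed to be negative
poles = [(-b + sqrt(disc)) / (2m), (-b - sqrt(disc)) / (2m)]
# poles: -1.0 + 3.0im and -1.0 - 3.0im. Negative real parts mean the unforced system
# decays; the imaginary parts give the frequency of the damped oscillation.
```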
18.095 Mathematics Lecture Series
Ten lectures by mathematics faculty members on interesting topics from both classical and modern mathematics. All lectures are accessible to students with a calculus background and an interest in mathematics. At each lecture, reading and exercises are assigned. Students prepare these for discussion in a weekly problem session.
Lecture Schedule
January 9 | David Vogan | Counting primes
The sequence of prime numbers has fascinated mathematicians for thousands of years. Euclid proved that there are infinitely many prime numbers, but how they are distributed is mysterious. Around 1800, Gauss and Legendre noticed that the number of primes less than a large number N seems to be approximately N/ln N. This observation is the Prime Number Theorem, and it was proved by Hadamard and de la Vallée Poussin in 1896. Even today, nobody understands the error term: the difference between N/ln N and the number of primes less than N. There is a very famous conjecture that the size of the error term is something like the square root of N. This conjecture is one version of the "Riemann hypothesis," which is certainly the biggest unsolved problem in mathematics. When faced with an impossibly hard problem, mathematicians like to change the subject, and that's my plan. I'll talk not about prime numbers (used to factor integers) but about irreducible polynomials (used to factor polynomials). I'll explain how to formulate a problem about counting irreducible polynomials that's analogous to the prime number theorem. The difference is that this counting problem has a precise answer. The main term of the answer looks just like the main term in the prime number theorem, and the error term behaves just like the Riemann hypothesis says it should.
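As a quick illustration of the estimate (not part of the lecture itself), one can compare the exact prime count with N/ln N in a few lines of Julia:

```julia
# Count the primes up to N with a simple sieve of Eratosthenes and compare the exact
# count with the Prime Number Theorem estimate N / ln N.
function prime_count(N::Int)
    is_prime = trues(N)
    is_prime[1] = false
    for p in 2:isqrt(N)
        if is_prime[p]
            for m in p*p:p:N
                is_prime[m] = false
            end
        end
    end
    return count(is_prime)
end

N = 10^6
prime_count(N), N / log(N)   # (78498, 72382.4...): same order of magnitude, visible error term
```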
January 11 | Mike Brenner | TBA
January 13 | Tom Mrowka | TBA
January 18 | Joern Dunkel | TBA
January 20 | Jeremy Kepner | Mathematics of Big Data
"Big Data" describes a new era in the digital age in which the volume, velocity, and variety of data created across a wide range of fields (e.g., internet search, healthcare, finance, social media, defense, ...) is increasing at a rate well beyond our ability to analyze the data. Many technologies (e.g., spreadsheets, databases, graphs, linear algebra, ...) have been developed to address these challenges. The common theme among these technologies is the need to store and operate on data as whole collections instead of as individual data elements. This lecture describes the common mathematical foundation of these data collections (associative arrays) that applies across a wide range of applications and technologies. Associative arrays unify and simplify Big Data, leading to rapid solutions to Big Data volume, velocity, and variety problems. Understanding these mathematical foundations allows the student to see past the differences that lie on the surface of Big Data applications and technologies and to leverage their core mathematical similarities to solve the hardest Big Data challenges.
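For the flavor of the idea, here is a minimal toy model of an associative array in Julia, a map from (row key, column key) pairs to values; this is only an illustrative sketch, not the lecture's own formalism:

```julia
# A toy associative array: a map from (row key, column key) pairs to numeric values.
# Adding two such arrays merges them entry by entry, just like adding sparse matrices.
const Assoc = Dict{Tuple{String,String},Float64}

function assoc_add(A::Assoc, B::Assoc)
    C = copy(A)
    for (key, value) in B
        C[key] = get(C, key, 0.0) + value
    end
    return C
end

logs = Assoc(("alice", "login") => 3, ("bob", "search") => 5)
more = Assoc(("alice", "login") => 1, ("carol", "login") => 2)
assoc_add(logs, more)   # alice/login => 4.0, bob/search => 5.0, carol/login => 2.0
```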
January 23 | Laurent Demanet | Wavelets
Wavelets are the smartest choice of basis for functions since Fourier analysis. They are localized in both space and frequency. It was a major discovery in the 1980s that one could build orthonormal bases of wavelets with favorable properties (small support, smoothness). They are now in extensive use in image processing and data analysis.
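The oldest example is the Haar wavelet, which long predates the 1980s constructions; a short illustrative sketch of one transform step in Julia:

```julia
# One step of the orthonormal Haar wavelet transform: pairwise averages capture the
# coarse shape of the signal, pairwise differences capture the local detail.
function haar_step(x::AbstractVector{<:Real})
    @assert iseven(length(x)) "signal length must be even"
    n = length(x) ÷ 2
    smooth = [(x[2i-1] + x[2i]) / sqrt(2) for i in 1:n]
    detail = [(x[2i-1] - x[2i]) / sqrt(2) for i in 1:n]
    return smooth, detail
end

haar_step([4.0, 4.0, 6.0, 8.0])   # ([5.66, 9.90], [0.0, -1.41]) up to rounding
```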
January 25 | Vadim Gorin | Counting cubes through symmetric polynomials
How many ways are there to stack a pile of cubes in the corner of the room? The answer is given by MacMahon's elegant triple product formula. We will discuss how to address this enumeration question and its relatives using symmetric polynomials.
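The formula can be checked directly for small boxes; a short illustrative computation in Julia (not part of the lecture):

```julia
# MacMahon's formula: the number of stackings of cubes that fit inside an a x b x c box
# is the triple product over 1<=i<=a, 1<=j<=b, 1<=k<=c of (i+j+k-1)/(i+j+k-2).
macmahon(a, b, c) = Int(prod((i + j + k - 1) // (i + j + k - 2)
                             for i in 1:a, j in 1:b, k in 1:c))

macmahon(1, 1, 1), macmahon(2, 2, 2), macmahon(3, 3, 3)   # (2, 20, 980)
```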
January 27 | Andrew Sutherland | PRIMES is in P
At the exhortation of Gauss, number theorists throughout the 19th and 20th centuries pursued a variety of methods for determining whether or not a given integer is prime; some methods were fast but not always correct, while others were correct but not always fast. It was not until the 21st century that an algorithm was found that was provably both fast and correct, when Agrawal, Kayal, and Saxena announced that "PRIMES is in P" (Annals of Mathematics 160 (2004), 781-793). We will briefly review the history of this problem and then prove their remarkable result.
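The fast-versus-correct trade-off is easy to demonstrate with two toy tests in Julia (an illustration only, not the AKS algorithm):

```julia
# Trial division is always correct, but its running time grows exponentially in the
# number of digits of n.
function is_prime_trial(n::Integer)
    n < 2 && return false
    for d in 2:isqrt(n)
        n % d == 0 && return false
    end
    return true
end

# The base-2 Fermat test runs in polynomial time, but it is fooled by pseudoprimes.
fermat_probable_prime(n::Integer) = n > 2 && powermod(2, n - 1, n) == 1

is_prime_trial(341), fermat_probable_prime(341)   # (false, true): 341 = 11 * 31 fools the fast test
```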
January 30 | David Spivak | The pixel array method for solving non-linear systems of equations
It's far easier to solve linear systems of equations than non-linear ones. In the former case, you can find the entire solution set all at once. In contrast, the state of the art in solving a nonlinear system is still Isaac Newton's method (or a sophisticated variant thereof), in which you start with an initial guess, approximate the derivative there, and hope it leads you iteratively toward a solution. In this talk I'll present a totally different way to solve systems of equations: plot each equation, think of this plot as an array of pixels, each either on or off (1 or 0), and multiply these arrays together in a prescribed way. It's surprising at first that multiplying arrays (or matrices) returns the approximate solution set for the whole system. But it does, and in fact it is surprisingly fast. As a final selling point, the pixel array method seems to be one of those rare examples of an idea which was born out of category theory but which (a) requires no category theory to learn, and (b) solves a real-world problem numerically.
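A toy version of the idea, sketched in Julia from the description above (not Spivak's actual implementation):

```julia
# Discretize two equations that share the variable y: f(x, y) = y - x^2 = 0 and
# g(y, z) = z^2 + y - 1 = 0. Each becomes a Boolean pixel array over a grid.
grid = range(-2, 2, length = 81)
tol  = 0.1

F = [abs(y - x^2)     < tol for x in grid, y in grid]   # pixels of the first curve
G = [abs(z^2 + y - 1) < tol for y in grid, z in grid]   # pixels of the second curve

# Multiplying the arrays sums over the shared variable y: the (x, z) pixel is "on"
# exactly when some grid value of y approximately satisfies both equations.
S = (F * G) .> 0
count(S)   # number of (x, z) pixels in the approximate solution set
```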
February 1 | Victor-Emmanuel Brunel | TBA
18.S096 Special Subject in Mathematics: Performance Computing in a High Level Language
- Professors Steven G. Johnson, Alan Edelman, David Sanders
- Tuesday-Friday, January 10-27
- 2pm-4pm (on Tuesday, Wednesday, and Friday, 3:30-4pm will be in-class office hours)
- 2-135
Many programmers are familiar with high-level dynamic/interactive computer languages such as Python, R, or Matlab. Traditionally, such languages approach the computer at a high level of abstraction, and performance optimization is mainly a matter of finding fast “black-box” library routines. In this course, we bridge the gap between high-level “dynamic” languages and what is really happening at a low level. Using a new language called Julia, we show how one can simultaneously write high-level, generic, interactive programs that are also optimized for performance, and which implement their own “inner loops” without relying on external libraries.
Topics include how program objects are represented in memory (types, “boxes,” registers, etc.), processor architectures, memory locality, metaprogramming and moving computations from runtime to compile time, parallel computing, sparse and dense linear algebra, machine learning, GPU programming, and applications of numerical analysis.
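A small taste of the theme, as an illustrative sketch rather than course material:

```julia
# A hand-written "inner loop" in Julia: generic over the element type, yet it compiles
# to tight machine code, so its speed is comparable to the built-in sum.
function mysum(a::AbstractArray{T}) where {T<:Number}
    s = zero(T)
    @inbounds @simd for i in eachindex(a)   # drop bounds checks, allow SIMD vectorization
        s += a[i]
    end
    return s
end

a = rand(10^7)
mysum(a) ≈ sum(a)                # same result as the library routine
# @time mysum(a); @time sum(a)   # and the timings are in the same ballpark
```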
Students should be comfortable with programming.
Course Website: https://math.mit.edu/18.S096