Some MIT Faculty Newsletter pieces
"Sharing the how as well as the what of MIT education" with Eric Klopfer and Karen Willcox:
MIT Faculty Newsletter, Vol 26 No 5, 2014.
"The MIT-Haiti Initiative: An international engagement":
MIT Faculty Newsletter, Vol 29 No 1, 2016.
"A hole in the flag":
MIT Faculty Newsletter, Vol 30 No 1, 2017.
MIT Faculty Newsletter, Vol 31 No 2, 2018:
A commentary on the creation of the "College of Computing."
"Core Values" with Catherine Drennan, Linda Griffith, and Peter Shor:
MIT Faculty Newsletter, Vol 31 No 3, 2019.
A response to the change in the definition of the Pass/No Record grade.
MIT Faculty Newsletter, Vol 31 No 4, 2019:
A reaction to the recent spate of joint majors.
"The Schwartzman College of Computing, Giving Back":
MIT Faculty Newsletter Vol 32 No 2, 2019.
Here is a New York Times article about Blackstone's predation, and here's a review of Aaron Glantz's book on the topic, by the same reporter, Francesca Mari.
"Neither Fire nor Ice", a plea for MIT leadership in regulating generative AI:
MIT Faculty Newsletter Vol 35 No 4, May/June 2023.
Here are some of the dozens of opinion pieces and draft regulations on this issue published after this went to press.
Tristan Harris and Aza Raskin's Center for Humane Technology show
Cade Metz: How could A.I. destroy humanity?
Vera Jourova, deputy head of the European Commission, says A.I. content should be labelled
Lina Khan: We must regulate A.I. Here's how
Europeans Take a Major Step Toward Regulating A.I.
Scott Aaronson, formerly at MIT but now at UT Austin and OpenAI, has a blog post and a long interview on the subject, discussing watermarking, or provenance tracing, and the need for regulation. Where is the CoC?
Finally, in August 2023, our administration (not the College) produced a call for proposals to "develop impact papers that articulate effective roadmaps, policy recommendations, and calls for action across the broad domain of generative AI." No hint of alarm there.
Here is the list of funded projects.
October 9, 2023: Here is a careful statement of some of the dangers of AI from the Center for AI Safety.
Back to Haynes Miller's home page.