Date Sept. 3, 2010
Speaker James Cuff (Harvard University)
Topic "I think we are going to need a bigger boat" (Federating university research computing assets)

For the past four and a half years, we have been slowly coordinating and compiling existing and net-new computing assets at Harvard University. Growth from 200 to over 12,000 processing cores has put significant strain on both the traditional data center and the requisite wide-area networking infrastructure available within the Cambridge campus. In summary, the team had to become, and continues to be, rather "creative" to meet the demands of the science.

I will discuss the tactics for building both the organizational and physical infrastructure that now supports over 2,000 researchers in fields such as astrophysical modeling of the early universe, modern high-speed genomic sequencing, the continuing search for the Higgs boson, and advanced economic and financial modeling algorithms. Each of these areas of research is now carried out at large scale, on shared physical infrastructure operated by a core team of research computing associates and staff.

The research computing group has deployed approximately 2 PB of assorted storage, alongside 40 TF of GPGPU computing, to support and complement the traditional 12,000-core x86_64 InfiniBand-connected systems. I will also explain the clear need for Harvard's active involvement in the new multi-institutional Massachusetts Green High Performance Computing Center (MGHPCC).

We thank MIT IS&T, CSAIL, and the Department of Mathematics for their generous support of this series.

MIT Math CSAIL EAPS Lincoln Lab Harvard Astronomy