COMPUTATIONAL RESEARCH in BOSTON and BEYOND (CRIBB)
Date | April 5, 2019 |
---|---|
Speaker | Hang Liu, University of Massachusetts Lowell |
Topic | Software-Hardware Co-Optimized Data Analytics |
Abstract | We are increasingly awash in data, both connected and disconnected, as a growing array of "sensors" integrated into our daily lives continues to generate an explosive amount of data. Notably, IBM recently estimated that we create roughly 2.5 quintillion bytes of data per day. Buried in this rapidly growing flood of data are key insights for resolving critical societal issues, improving productivity, creating new economic opportunities, and uncovering novel discoveries in science and engineering. While traditional queries such as word count are straightforward to accommodate, emerging data-analytics applications (e.g., graph analytics) tend to be more complex and thus place a severe tax on conventional hardware systems. This talk unveils the enormous potential of emerging hardware such as Graphics Processing Units (GPUs), Field-Programmable Gate Arrays (FPGAs), and Non-Volatile Memory Express (NVMe) storage. Along the way, Dr. Liu demonstrates how his work (USENIX FAST '17, DAC '19, and a recent submission to SIGMOD '20) leverages this new hardware to accelerate complex analytical applications on both connected and disconnected data. Looking toward future work, the talk also outlines the mounting challenges, as well as potential solutions, for deploying popular graph-learning frameworks on hardware accelerators. |
Biography | Dr. Hang Liu is currently an assistant professor in the Department of Electrical and Computer Engineering at the University of Massachusetts Lowell. He received his Ph.D. from the George Washington University in 2017 and his B.E. from Huazhong University of Science and Technology in 2011. His research interests include exploiting emerging hardware to build high-performance systems for graph computing, machine learning, data compression, numerical simulation, cloud computing, and software debugging. His publications appear in top-tier venues such as SC, SIGMOD, USENIX FAST, and DAC. He is the recipient of an NSF CRII award and was a champion of the 2018 DARPA/MIT/Amazon Graph Challenge. He also won the Best Dissertation Award from the Department of Electrical and Computer Engineering at the George Washington University. Notably, his graph-traversal systems rank highly on both the Graph500 and Green Graph500 benchmarks, which measure the performance and energy efficiency of the world's most powerful supercomputers. |
Acknowledgements
We thank MIT IS&T, CSAIL, and the Department of Mathematics for their generous support of this series.