Cornell Systems Lunch
CS 7490 Fall 2022
The Systems Lunch is a seminar for discussing recent, interesting papers in the systems area, broadly defined to span operating systems, distributed systems, networking, architecture, databases, and programming languages. The goal is to foster technical discussions among the Cornell systems research community. We meet once a week on Fridays at 11:45 AM in Gates 114.

The systems lunch is open to all Cornell Ph.D. students interested in systems. First-year graduate students are especially welcome. Non-Ph.D. students must obtain permission from the instructor. Student participants are expected to sign up for CS 7490, Systems Research Seminar, for one credit.

To join the systems lunch mailing list, please send an empty message to cs-systems-lunch-l-request@cornell.edu with the subject line "join". More detailed instructions can be found here.

Links to papers and abstracts below are unlikely to work outside the Cornell CS firewall. If you have trouble viewing them, this is the likely cause.

The Zoom link is https://cornell.zoom.us/j/99505666321?pwd=SFAxN242K1F4TXpVNDNlcTVGVitQUT09.
Date | Paper | Presenter |
---|---|---|
August 26 | The nanoPU: A Nanosecond Network Stack for Datacenters. Stephen Ibanez, Alex Mallery, Serhat Arslan, and Theo Jepsen, Stanford University; Muhammad Shahbaz, Purdue University; Changhoon Kim and Nick McKeown, Stanford University. OSDI 2021 | Shubham Chaudhary |
September 2 | ORION and the Three Rights: Sizing, Bundling, and Prewarming for Serverless DAGs. Ashraf Mahgoub and Edgardo Barsallo Yi, Purdue University; Karthick Shankar, Carnegie Mellon University; Sameh Elnikety, Microsoft Research; Somali Chaterji and Saurabh Bagchi, Purdue University. OSDI 2022 | Yueying Li |
September 9 | Protean: VM Allocation Service at Scale. Ori Hadary, Luke Marshall, Ishai Menache, Abhisek Pan, Esaias E Greeff, David Dion, Star Dorminey, Shailesh Joshi, Yang Chen, Mark Russinovich, and Thomas Moscibroda, Microsoft Azure and Microsoft Research. OSDI 2020 | Alicia Yang |
September 16 | Ryoan: A Distributed Sandbox for Untrusted Computation on Secret Data. Tyler Hunt, Zhiting Zhu, Yuanzhong Xu, Simon Peter, and Emmett Witchel, The University of Texas at Austin. OSDI 2016 | Yunhao Zhang |
September 23 | Enabling High Performance, Efficient, and Sustainable Deep Learning Systems At Scale | Udit Gupta (Harvard) |
September 30 | Speaker cancelled | |
October 7 | Oort: Efficient Federated Learning via Guided Participant Selection. Fan Lai, Xiangfeng Zhu, Harsha V. Madhyastha, and Mosharaf Chowdhury, University of Michigan. OSDI 2021 | Tiancheng Yuan |
October 14 | Systematic Testing of High-Speed RDMA Networks | Danyang Zhuo (Duke) |
October 21 | ML-Centric Computer Systems. Abstract: The past decade has seen unprecedented growth in computing demand due to the rise of deep neural networks (DNNs) as a prevalent computing paradigm. In this talk, we will go over a number of research threads that address the common challenge of DNN efficiency. First, we will discuss algorithm-level optimizations with neural architecture search; second, we will describe our efforts in co-designing both DNNs and hardware; and third, we will give an overview of our ongoing hardware/software efforts in specializing datacenter servers for DNNs. | Mohamed Abdelfattah (Cornell Tech) |
October 28 | Owl: Scale and Flexibility in Distribution of Hot Content. Jason Flinn, Xianzheng Dou, Arushi Aggarwal, Alex Boyko, Francois Richard, Eric Sun, Wendy Tobagus, Nick Wolchko, and Fang Zhou, Meta. OSDI 2022 | Jason Flinn (Meta) |
November 4 | No lecture -- Juris Hartmanis celebration | |
November 11 | Learning with differentiable and amortized optimization | Brandon Amos (Meta) |
November 18 | Every Bot for Itself: Zero-Trust Distributed Systems in Adversarial Environments. Abstract: The components that make up a distributed system are not necessarily designed to be part of a coordinated, distributed network of assets that interoperate. We network these multi-purpose assets and compel them to work together to achieve a unified purpose such as processing or storage. While we expect these systems to work together for defined use cases, their distributed system capabilities are underutilized – particularly when operating in digitally or physically harsh environments. This talk articulates the underutilized capabilities of distributed assets and describes a suite of distributed system security projects that aim to maximize a distributed system’s potential in an adversarial operating environment. Bio: Dr. Gregory Falco is an Assistant Professor at Johns Hopkins University’s Institute for Autonomy and an incoming Assistant Professor at Cornell University, with a joint appointment in the Sibley School of Mechanical and Aerospace Engineering and the Systems Engineering Program. His lab, the Aerospace ADVERSARY, designs and develops next-generation autonomous, secure, and resilient space infrastructure. He has been named a DARPA RISER, listed in Forbes 30 Under 30, and awarded the DARPA Young Faculty Award. Dr. Falco received his PhD in Cybersecurity from MIT. | Greg Falco (MAE/Systems Engineering) |
November 25 | No lecture -- Thanksgiving | |
December 2 | A Decade of Machine Learning Accelerators: Lessons Learned and Carbon Footprints (simulcast to Phillips 233). Abstract: The success of deep neural networks (DNNs) from Machine Learning (ML) has inspired domain-specific architectures (DSAs) for them. Google’s first-generation DSA offered a 50x improvement over conventional architectures for ML inference in 2015. Google next built the first production DSA supercomputer for the much harder problem of training. Subsequent generations greatly improved the performance of both phases. We start the talk with ten lessons learned from such efforts. The rapid growth of DNNs rightfully raised concerns about their carbon footprint. The second part of the talk identifies the “4Ms” (Model, Machine, Mechanization, Map) that, if optimized, can reduce ML training energy by up to 100x and carbon emissions by up to 1000x. By improving the 4Ms, ML held steady at <15% of Google’s total energy use despite consuming ~75% of its floating-point computation. With a continuing focus on the 4Ms, we can realize the amazing potential of ML to positively impact many fields in a sustainable way. Bio: David Patterson is a UC Berkeley Pardee professor emeritus, a Google distinguished engineer, and the RISC-V International Vice-Chair. His most influential Berkeley projects were likely RISC and RAID. His best-known book is Computer Architecture: A Quantitative Approach. He and his co-author John Hennessy shared the 2017 ACM A.M. Turing Award and the 2022 NAE Charles Stark Draper Prize for Engineering. The Turing Award is often referred to as the “Nobel Prize” of Computing, and the Draper Prize is considered a “Nobel Prize” of Engineering. | Dave Patterson (Berkeley) |
December 9 | TBD | Emmett Witchel (UT Austin) |