I am a Professor in the Department of Computer Science at Cornell University in Ithaca, NY. I'm a member of the Systems and Networking group. I'm interested in distributed systems, particularly in their fault tolerance and scalability aspects. I'm chair of ACM SIGOPS and associate editor for ACM Computing Surveys. I also play jazz guitar and do sound for the Ageless Jazz Band, play traditional jazz banjo in The JazzHappensBand, am co-founder of and advisor to The Cornell Dutch Club and The Cornell Ukulele Club, and am co-founder of The Finger Lakes One Wheelers, a unicycling club. I'm also the designer and webmaster of GigKeeper.com, a site that helps with the management side of playing in bands.
My Current Research
Transformations
The last decade or two has seen wild growth of large-scale systems with strong responsiveness requirements, including cloud services as well as sensor networks. Their scalability and reliability requirements mandate that these systems be both sharded and replicated. These systems also evolve quickly as a result of changing workloads, added functionality, newly deployed hardware, and so on. While such systems are useful, they can behave in erratic ways, and it is not clear that one can build mission- and life-critical systems this way.
I am investigating new techniques for modular development of reliable and scalable systems, which I call transformations. A simple example of a transformation is to take a client-server system and turn it into a system in which the server is replicated in some fashion (chain replication, Paxos, etc.). Another kind of transformation that I have worked on turns a system that can survive crash failures into one that can survive the same number of Byzantine failures. Yet another kind of transformation shards a server in order to increase its scalability.
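For illustration only, here is a minimal Python sketch of the first kind of transformation, assuming a hypothetical single-node key-value server: the unreplicated server is wrapped, unchanged, into a chain-replicated version in which writes propagate from the head down the chain and reads are served by the tail. The class and method names are invented for this example and are not part of any actual framework.

```python
class KVServer:
    """A hypothetical unreplicated key-value server (the input of the transformation)."""
    def __init__(self):
        self.store = {}

    def put(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)


class ChainReplicatedKV:
    """Transformation output: the same interface, backed by a chain of replicas.

    Writes enter at the head and are forwarded replica by replica; reads are
    answered by the tail, so a read only returns values that reached every replica.
    """
    def __init__(self, num_replicas=3):
        self.chain = [KVServer() for _ in range(num_replicas)]

    def put(self, key, value):
        for replica in self.chain:          # head -> ... -> tail
            replica.put(key, value)

    def get(self, key):
        return self.chain[-1].get(key)      # the tail serves reads


kv = ChainReplicatedKV()
kv.put("x", 42)
print(kv.get("x"))  # 42
```

The point of the transformation is that the original server code is reused as-is; a different transformation could wrap the same server for Byzantine fault tolerance or shard it across instances.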
Joint work with Deniz Altinbuken and Stavros Nikolaou.
Supercloud: Combining Cloud Resources into a Single Cloud
Infrastructure as a Service (IaaS) clouds couple applications tightly with the underlying infrastructure and services. This vendor lock-in forces users to adopt ad-hoc deployment strategies in order to tolerate cloud failures, and limits their ability to migrate virtual machines (VMs) and scale resources across different clouds, or even across availability zones of the same cloud.
The Supercloud is a cloud service comprising resources obtained from several diverse IaaS cloud providers. It is currently deployed using resources from several major providers, including Amazon EC2, Rackspace, and HP Cloud, as well as private clouds including our own Fractus cluster. VMs run in a virtual network and can be migrated seamlessly across clouds with different hypervisors and device models. We go well beyond simply federating clouds, as witnessed by our ability to migrate across cloud boundaries. We can demonstrate that, by deploying applications to more regions and granting more control to end users, the Supercloud can reduce latency, improve network throughput, achieve higher availability, and reduce cost compared to the underlying cloud providers.
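As a rough sketch of the kind of decision that cross-cloud migration enables (with made-up provider names and latency numbers, and not the real Supercloud control plane), a placement loop might periodically measure client latency from each candidate cloud and migrate a VM only when another provider would serve clients noticeably better:

```python
# Hypothetical placement decision: move a VM to whichever cloud currently offers
# the lowest client latency, but only if the improvement exceeds a threshold,
# since live migration itself has a cost.

def best_placement(latencies_ms, current, improvement_threshold_ms=20.0):
    """latencies_ms: dict mapping cloud name -> measured client latency in ms."""
    best = min(latencies_ms, key=latencies_ms.get)
    if latencies_ms[current] - latencies_ms[best] > improvement_threshold_ms:
        return best          # worth migrating
    return current           # stay put

measurements = {"cloud-a": 95.0, "cloud-b": 40.0, "private": 120.0}  # made-up numbers
print(best_placement(measurements, current="cloud-a"))  # -> "cloud-b"
```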
Joint work with Qin Jia, Zhiming Shen, Weijia Song, and Hakim Weatherspoon.
GridControl: Monitoring and Control of the Smart Power Grid
There are pressing economic and environmental arguments for overhauling the current power grid and replacing it with a Smart Grid that integrates new kinds of green power generation, monitors power use, and adapts consumption to match power costs and system load. Many promising power management ideas demand scalability of a kind that seems a good fit for cloud computing, but they also have requirements (real-time, consistency, privacy, security, etc.) that cloud computing does not currently support.
In cooperation with the New England ISO, we are developing a monitoring system that captures streaming data from a large collection of so-called Phasor Measurement Units installed in the power grid, collects the data in a datacenter, and fits it on the fly against a model of the power grid (so-called state estimation). This way we can provide snapshots of the state of the power grid several times a second.
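To make the state-estimation step concrete, here is a minimal sketch of linearized weighted-least-squares estimation, the textbook formulation behind this kind of model fitting; the matrices and numbers below are tiny made-up examples, not the New England grid model or our actual pipeline.

```python
import numpy as np

def wls_state_estimate(H, z, weights):
    """Solve min_x (z - H x)^T W (z - H x) via the normal equations.

    H: measurement matrix relating the grid state x to expected measurements,
    z: vector of (noisy) PMU measurements,
    weights: per-measurement weights (typically inverse measurement variances).
    """
    W = np.diag(weights)
    A = H.T @ W @ H
    b = H.T @ W @ z
    return np.linalg.solve(A, b)

# Tiny illustrative example: three measurements of a two-dimensional state.
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0]])
z = np.array([1.02, 0.97, 0.06])                      # made-up measurements
x_hat = wls_state_estimate(H, z, weights=[100.0, 100.0, 50.0])
print(x_hat)
```

In the real system the model is nonlinear and much larger, and the fit has to keep up with streaming PMU data arriving several times a second.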
Joint work with, among others, Ken Birman, David Bindel, Z Teo, Colin Ponce, Theodoros Gkountouvas, and, at WSU, Carl Hauser, David Bakken, and David Anderson.
Configuring Distributed Computations
Configuring large distributed computations is a challenging task. Efficient use of a cluster requires configuration tuning and optimization based on careful examination of application and hardware properties. Given the large number of parameters and the impracticality of trial and error in a production environment, cluster operators tend to make these decisions based on experience and rules of thumb. Such configurations can lead to underutilized and costly clusters.
We are working on a new methodology for determining desired hardware and software configuration parameters for distributed computations. The key insight behind this methodology is to build a response surface methodology (RSM) model that captures how applications perform under different hardware and software configurations. Such a model can be built through iterated experiments using the real system or, more efficiently, using a simulator. The resulting RSM model can then generate recommendations for configuration parameters that are likely to yield the desired results even if they have not been tried, either in simulation or in real life. The process can be iterated to refine previous predictions and achieve better results.
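A minimal sketch of the idea follows (the parameter names and timings are invented, and the real system uses a much richer model and a simulator): fit a quadratic response surface to a handful of observed (configuration, runtime) pairs, then recommend the candidate configuration with the lowest predicted runtime, including configurations that were never actually tried.

```python
import itertools
import numpy as np

def quad_features(x):
    """Quadratic response-surface features: 1, x_i, and all products x_i * x_j."""
    x = np.asarray(x, dtype=float)
    cross = [x[i] * x[j]
             for i, j in itertools.combinations_with_replacement(range(len(x)), 2)]
    return np.concatenate(([1.0], x, cross))

def fit_surface(configs, runtimes):
    """Least-squares fit of a quadratic surface to observed (config, runtime) pairs."""
    X = np.array([quad_features(c) for c in configs])
    y = np.array(runtimes, dtype=float)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def recommend(coeffs, candidates):
    """Return the candidate configuration with the lowest predicted runtime."""
    preds = [quad_features(c) @ coeffs for c in candidates]
    return candidates[int(np.argmin(preds))]

# Hypothetical example: two parameters (number of mappers, container memory in GB)
observed = [(4, 2), (8, 4), (16, 8), (8, 8), (16, 4), (32, 8), (32, 16)]
runtimes = [420.0, 310.0, 290.0, 300.0, 275.0, 295.0, 330.0]   # made-up seconds
coeffs = fit_surface(observed, runtimes)
grid = [(m, g) for m in (4, 8, 16, 32) for g in (2, 4, 8, 16)]
print("recommended (mappers, memory):", recommend(coeffs, grid))
```

Iterating this loop, i.e. measuring the recommended configuration and refitting the surface, refines the predictions over time.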
We have implemented this methodology in a configuration recommendation system for MapReduce 2.0 applications. Performance measurements show that representative applications achieve up to 5× performance improvement when they use the recommended configuration parameters compared to the default ones.
Joint work with Efe Gencer and Gun Sirer.
Reliable Broadcast for Geographically Dispersed Datacenters
Sprinkler is a reliable high-throughput broadcast facility for geographically dispersed datacenters. To scale cloud services, datacenters use caching throughout their infrastructure; Sprinkler can be used to broadcast update events that invalidate cache entries, and the number of recipients can scale to many thousands in such scenarios. The Sprinkler infrastructure consists of two layers: one layer disseminates events among datacenters, and a second layer disseminates events among machines within a datacenter. A novel garbage collection interface saves storage space and network bandwidth.
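A toy sketch of the two-layer structure (in-process Python objects rather than real networked daemons, with invented class names, and leaving out reliability and garbage collection entirely): one proxy per datacenter receives each event and fans it out to the machines inside its datacenter.

```python
class Machine:
    """A machine inside a datacenter; here it just records delivered events."""
    def __init__(self, name):
        self.name = name
        self.delivered = []

    def deliver(self, event):
        self.delivered.append(event)


class DatacenterProxy:
    """Second layer: disseminates an event to all machines within one datacenter."""
    def __init__(self, machines):
        self.machines = machines

    def deliver(self, event):
        for m in self.machines:
            m.deliver(event)


class Broadcaster:
    """First layer: disseminates an event to every datacenter proxy."""
    def __init__(self, proxies):
        self.proxies = proxies

    def broadcast(self, event):
        for proxy in self.proxies:
            proxy.deliver(event)


dc1 = DatacenterProxy([Machine("dc1-m1"), Machine("dc1-m2")])
dc2 = DatacenterProxy([Machine("dc2-m1")])
Broadcaster([dc1, dc2]).broadcast({"invalidate": "user:42"})
```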
Joint work with Haoyan Geng.
Heterogeneous Failure Models
The robustness of distributed systems is usually phrased in terms of the number of failures of certain types that they can withstand. However, these failure models are too crude to describe the different kinds of trust and expectations of participants in the modern world of complex, integrated systems extending across different owners, networks, and administrative domains. Modern systems often exist in an environment of heterogeneous trust, in which different participants may have different opinions about the trustworthiness of other nodes, and a single participant may consider other nodes to differ in their trustworthiness.
We explore how to construct distributed protocols that meet the requirements of all participants, even in heterogeneous trust environments. The key to our approach is using lattice-based information flow to analyze and prove protocol properties. Using this approach, and with the help of SMT solvers, we can adapt existing protocols to work in this model. Through simulations, we can show that customizing a protocol to a heterogeneous trust configuration yields performance improvements over the conventional protocol designed for homogeneous trust.
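As a rough illustration of the lattice idea, and not the actual label model from this work: a label can be represented as the set of nodes that must be trusted for a piece of state to be valid, labels are ordered by set inclusion, and a participant accepts a result only if it trusts every node the result depends on.

```python
# Toy lattice of trust labels: a label is the set of nodes that must be
# trusted (assumed non-faulty) for some piece of state to be valid.
# Labels are ordered by set inclusion; join and meet are union and intersection.

def join(a, b):
    """Least upper bound: relying on data from both a and b means trusting both sets."""
    return frozenset(a) | frozenset(b)

def meet(a, b):
    """Greatest lower bound: the assumptions common to both labels."""
    return frozenset(a) & frozenset(b)

def acceptable(label, participant_trusts):
    """A participant accepts a result only if it trusts every node the result depends on."""
    return frozenset(label) <= frozenset(participant_trusts)

result_label = join({"replica1", "replica2"}, {"replica3"})
print(acceptable(result_label, {"replica1", "replica2", "replica3", "replica4"}))  # True
print(acceptable(result_label, {"replica1", "replica2"}))                          # False
```

Tracking how such labels combine through a protocol is what lets us check, per participant, whether the protocol meets that participant's trust requirements.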
Joint work with Isaac Sheff and Andrew Myers.
Correct-by-construction Fault-tolerant Distributed Systems
Fault-tolerant distributed systems, algorithms, and protocols are notoriously hard to build. Going from a specification to an implementation involves many subtle steps that are easy to get wrong. Moreover, fault-tolerant algorithms require many sources of diversity in order to survive a variety of failures, yet different programmers tend to make the same mistakes when implementing code from a specification.
I'm developing techniques to derive distributed algorithms from a specification by stepwise refinement, where each step can be formally checked. Moreover, one can often make different design choices during these refinements, leading to sources of diversity that can be exploited for fault tolerance. With the NuPrl (formal methods) group at Cornell, I'm working to make this process largely automated, and we are already able to synthesize a variety of executable consensus protocols.
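A very small illustration of the flavor of refinement (the real work uses NuPrl and proves each step formally; this Python sketch merely checks one refinement by testing a few inputs): a specification says only that the decided value must be one of the proposed values, a refinement makes one concrete design choice, and we check that every behavior of the refinement is allowed by the specification.

```python
# Specification: consensus may decide any value that was actually proposed.
def spec_allows(proposals, decision):
    return decision in proposals

# Refinement: one concrete design choice -- decide the smallest proposed value.
# (A different choice, e.g. a designated leader's value, is another valid
#  refinement; such design choices are a source of diversity.)
def refined_decide(proposals):
    return min(proposals)

# Check the refinement against the specification on a few inputs.
for proposals in [{3, 1, 2}, {"b", "a"}, {42}]:
    assert spec_allows(proposals, refined_decide(proposals))
print("refinement respects the specification on all tested inputs")
```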
Joint work with Fred Schneider, Robert Constable, and Mark Bickford.