Speaker: Stefan Savage
Affiliation: Univ. of Washington
Date: 3/7/00
Time & Location: 4:15 PM, B14 Hollister Hall
Host: Robbert van Renesse
Title: Network Services in an Uncooperative Internet
Unlike traditional distributed system environments, the Internet has the property that different system components are frequently under the control of different administrative authorities -- each with its own policies and objectives. This characteristic has profound implications for service designers, since important system-wide properties can no longer be engineered explicitly, but instead must be established implicitly through observation, inference, incentives and careful protocol design. In this talk I explore these issues by way of three case studies.
First, I consider the problem of measuring one-way network path characteristics, such as packet loss rate. These measurements require observations at both endpoints, but unfortunately there is little motivation for most Internet sites to cooperate in such experiments. By creatively exploiting the standard behavior of existing protocols, I demonstrate that it is possible to obtain these measurements without requiring explicit cooperation.
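The intuition behind this measurement trick can be illustrated with a toy Monte Carlo model (this is only a sketch of the idea, not the actual measurement tool; the function and parameter names are illustrative). Because TCP delivers data reliably, every forward-path loss eventually surfaces as a retransmission the sender can count, and every data packet that did arrive elicits an acknowledgment, so missing ACKs reveal reverse-path loss:

```python
import random

def one_way_loss_estimate(forward_loss, reverse_loss, n=10000, seed=1):
    """Toy simulation of inferring one-way loss rates from one endpoint.

    Phase 1: send n data packets. TCP's reliable delivery means each
    forward-path loss eventually shows up as a retransmission, so the
    forward loss rate is (retransmissions / n).

    Phase 2: every packet that arrived triggers an ACK back to the
    sender; the reverse loss rate is the fraction of those ACKs that
    never arrive.
    """
    rng = random.Random(seed)
    delivered = sum(rng.random() >= forward_loss for _ in range(n))
    retransmitted = n - delivered            # inferred via retransmissions
    acks_seen = sum(rng.random() >= reverse_loss for _ in range(delivered))
    f_est = retransmitted / n
    r_est = 1 - acks_seen / delivered
    return f_est, r_est
```

The key point the sketch captures is that only the sender's side of the connection is instrumented; the remote host cooperates unknowingly, simply by following the TCP specification.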
Next, I explore the fragility of congestion control protocols. Today's Internet depends on every host voluntarily limiting the rate at which it sends data. This good faith approach to resource sharing was appropriate during the Internet's "kinder and gentler" days, but is not dependable in today's competitive environment. Fortunately, the servers sending most data (i.e. Web servers) have a natural economic incentive to share bandwidth fairly among their customers. Unfortunately, as I will show, today's congestion control protocols have design weaknesses that allow receivers of data (i.e. Web clients) to coerce remote servers into sending data at arbitrary rates. As one might suspect, many users would happily improve their own performance at the expense of others. I show that this weakness is not a fundamental limitation of end-to-end congestion control and that simple modifications can allow servers to enforce correct behavior.
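One such weakness can be illustrated with a deliberately simplified slow-start model (a sketch of the "ACK division" idea, not an actual TCP implementation). A TCP sender grows its congestion window by one segment per ACK received; a misbehaving receiver that splits each acknowledgment into several smaller ones makes the window grow several times faster, without violating reliable delivery:

```python
def cwnd_after(acks_per_segment, rtts, mss=1):
    """Toy slow-start model: the sender adds one MSS to its congestion
    window (cwnd) for every ACK received. A conforming receiver sends
    one ACK per segment (acks_per_segment=1); a misbehaving receiver
    'divides' each acknowledgment into many partial ACKs, multiplying
    the window's growth rate.
    """
    cwnd = mss
    for _ in range(rtts):
        segments = cwnd // mss           # segments sent this round trip
        cwnd += mss * segments * acks_per_segment
    return cwnd
```

In this model a well-behaved receiver doubles the window each round trip, while a receiver sending four ACKs per segment multiplies it by five, letting the client dictate the server's sending rate.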
Finally, as recent events demonstrate, the Internet is vulnerable to malicious denial-of-service attacks. These attacks present a unique challenge because the Internet architecture relies on each host to voluntarily indicate a packet's origin. Attackers ignore this convention, and consequently determining the source of such attacks is difficult and time-consuming at best -- and frequently impossible. To address this problem, I describe an efficient, incrementally deployable, and (mostly) backwards compatible network mechanism that allows victims to trace denial-of-service attacks back to their source.
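The flavor of such a traceback mechanism can be sketched with a small simulation (a simplified node-sampling variant for illustration only; the router names and probability are made up). Each router on the attack path probabilistically overwrites a single mark field in passing packets with its own address; because routers nearer the victim overwrite earlier marks more often, the victim can recover the path order from mark frequencies alone, without trusting the spoofed source address:

```python
import random

def collect_marks(path, p, n_packets, rng):
    """Node-sampling sketch: each router on the path (attacker side
    first) overwrites the packet's single mark field with its own
    address with probability p; the victim records the surviving mark."""
    counts = {router: 0 for router in path}
    for _ in range(n_packets):
        mark = None
        for router in path:              # traversal order: attacker -> victim
            if rng.random() < p:
                mark = router            # later routers overwrite earlier marks
        if mark is not None:
            counts[mark] += 1
    return counts

rng = random.Random(7)
path = ["R1", "R2", "R3", "R4"]          # hypothetical routers, attacker side first
counts = collect_marks(path, p=0.2, n_packets=20000, rng=rng)
# A router at distance d from the victim survives with probability
# p*(1-p)**d, so sorting by mark frequency orders the path,
# victim-nearest router first.
recovered = sorted(path, key=counts.get, reverse=True)
```

The attacker cannot forge this ordering without controlling the routers themselves, which is what makes the marks trustworthy where source addresses are not.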