Congestion Avoidance and Control

Paper by Van Jacobson and Michael J. Karels, published in November 1988
Notes by Daniel B. Witriol (Mar. 1998) and Cristian Estan (April 1999)

Introduction

This paper was written to address network stability issues resulting from congested networks. The analysis suggests that common intuition, when used to construct network protocols, can lead to serious and even crippling problems. As the authors put it, "The 'obvious' ways to implement a window-based transport protocol can result in exactly the wrong behavior in response to network congestion". The paper examines the problems with common transport protocols (excessive retransmissions, poor link utilization under congestion) and then derives algorithms to address them.

Packet Conservation Philosophy

The guiding philosophy of this paper is the idea of 'conservation of packets'. Simply put, this means that a transport protocol should not put a new packet onto the network until an old packet has left it. The idea is borrowed from the physics of flows: a system in which flow is conserved tends toward a stable state.

Problems that a Packet Conserving Algorithm must address

Relevant parameters that have to be estimated

Vary the time between retransmissions according to a recursive prediction-error algorithm that estimates both the mean round-trip time and its variation. The result is a better retransmit timeout than the old scheme, which multiplied the estimated mean by a fixed constant and effectively assumed a constant variance.
Variables: rtt, rto
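
A minimal sketch of such an estimator, in the spirit of the algorithm in the paper's appendix: it keeps an exponentially weighted mean (srtt) and mean deviation (rttvar) of the measured round-trip time and derives the retransmit timeout (rto) from both. The gains 1/8 and 1/4, the floating-point arithmetic, and the function names are illustrative assumptions, not the paper's exact fixed-point code.

    #include <math.h>

    /* Recursive prediction-error RTT estimator (sketch).
       srtt   - smoothed round-trip time estimate
       rttvar - smoothed mean deviation of the RTT
       rto    - retransmit timeout derived from both            */
    struct rtt_state { double srtt, rttvar, rto; };

    void update_rtt(struct rtt_state *s, double measured_rtt)
    {
        double err = measured_rtt - s->srtt;           /* prediction error                       */
        s->srtt   += 0.125 * err;                      /* srtt   <- srtt + g*err                 */
        s->rttvar += 0.25  * (fabs(err) - s->rttvar);  /* rttvar <- rttvar + h*(|err| - rttvar)  */
        s->rto     = s->srtt + 4.0 * s->rttvar;        /* timeout = mean + 4 * deviation         */
    }

    /* On a retransmit timeout, the timer is backed off exponentially
       (doubled) until an acknowledgment for new data arrives.          */
    void backoff_rto(struct rtt_state *s)
    {
        s->rto *= 2.0;
    }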

Window management

Begin transmitting packets at a slow rate with a small window size. Algorithms often begin transmitting with window sizes three to four times larger than the network connection can sustain. This burst of packets creates a snowball effect, where the network flow continues to oscillate and never reaches a true equilibrium. (Described on page 2; a sketch of the resulting window rules follows the list below.)
 
  • Exponential backoff of the retransmit timer on packet loss

  • Reduce the window size multiplicatively on packet loss (repeated losses shrink it exponentially). This provides stability and prevents the congestion from becoming worse.
     

  • Use acknowledgment timing to improve RTT estimates, which leads to better decisions at the sender.


Variables: ssthresh, cwnd
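
A minimal sketch of how these rules fit together, assuming the slow-start and congestion-avoidance behavior described in the paper: the congestion window (cwnd) grows exponentially below the threshold (ssthresh) and linearly above it, and a loss cuts the threshold and restarts slow start. The units, constants, and function names are simplifying assumptions, not the paper's code.

    /* Window-management sketch.  cwnd and ssthresh are kept in units of
       maximum-sized segments for clarity; real implementations use bytes. */
    struct cc_state { double cwnd, ssthresh; };

    /* Called on each acknowledgment of new data. */
    void on_ack(struct cc_state *s)
    {
        if (s->cwnd < s->ssthresh)
            s->cwnd += 1.0;               /* slow start: window doubles roughly once per RTT */
        else
            s->cwnd += 1.0 / s->cwnd;     /* congestion avoidance: about +1 segment per RTT  */
    }

    /* Called when a retransmit timeout signals that a packet was lost. */
    void on_loss(struct cc_state *s)
    {
        s->ssthresh = s->cwnd / 2.0;      /* multiplicative decrease */
        if (s->ssthresh < 2.0)
            s->ssthresh = 2.0;
        s->cwnd = 1.0;                    /* start over with slow start */
    }

Repeated losses keep halving ssthresh, which is the exponential window reduction mentioned in the list above.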

Gateways and Congestion Control

For the purposes of these algorithms, congestion notification comes for free: gateways simply drop packets when the network becomes congested. Because the vast majority of packet loss is due to dropped packets rather than damaged ones, a dropped packet effectively tells the sender that the network is congested. The algorithms above can then take effect and relieve the congestion.

That's nice, but perhaps gateways could play a larger, more direct role. One area over which these end-to-end algorithms have little control is fairness. For example, suppose system 'A' is the only sender on a network. System A starts sending data and eventually ends up with a very large window. Now system 'B' joins the network. System B also wants to send data, so it starts increasing its window size. Before very long, though, System B's packets start being dropped, and it is forced to settle into equilibrium with a very small window. System B may never get equal access to this network link. Connections with long round-trip times are at a similar disadvantage.

Now enter the gateway algorithm. The gateway detects that System A is using more than its fair share and starts dropping A's packets. A then decreases its window size, and B is finally able to increase its own. Before you know it, everyone is living happily ever after in System Land.
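
The paper largely leaves the gateway algorithm as future work, so the following is only a hypothetical sketch of the idea described above: a gateway queue that, when it overflows, drops a randomly chosen queued packet, so the connection contributing the most traffic is the most likely to be penalized. The queue size and identifiers are made up for illustration.

    #include <stdlib.h>

    #define QLEN 64                          /* arbitrary queue capacity (assumption) */

    /* Each queued entry records which connection sent the packet. */
    struct gw_queue { int conn[QLEN]; int n; };

    /* Enqueue a packet from connection conn_id; on overflow, drop a
       randomly chosen queued packet and take its slot.  A heavy sender
       owns most of the slots, so its packets are dropped most often.   */
    void gw_enqueue(struct gw_queue *q, int conn_id)
    {
        if (q->n == QLEN) {
            int victim = rand() % QLEN;      /* random drop as a congestion/fairness signal */
            q->conn[victim] = conn_id;
            return;
        }
        q->conn[q->n++] = conn_id;
    }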

Performance

Thanks in part to these changes, the Internet is still working today.