Notes by Daniel B. Witriol, Mar. 1998
Paper by Van Jacobson and Michael J. Karels, published in November 1988
This paper was written to address network stability issues resulting from congested networks. The analysis in this paper suggests that common intuition, when used to construct network protocols, can lead to serious and even crippling problems. As the authors put it, "The 'obvious' ways to implement a window-based transport protocol can result in exactly the wrong behavior in response to network congestion". This paper examines the problems with common transport protocols and then derives algorithms to address them.
The guiding philosophy of this paper is the idea of 'conservation of packets'. Simply put, this means that a transport protocol should not put a new packet onto the network until an old packet leaves. The idea is borrowed from the physics of flow: a system in which flow is conserved tends toward a stable state. From this principle the authors derive several concrete algorithms.
Vary the time between retransmissions according to a recursive prediction error algorithm that tracks both the mean and the variation of the round-trip time. The result should be a reduction in spurious retransmissions.
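A minimal sketch of such a mean-and-deviation estimator in Python, using floating point for clarity (the paper's appendix gives an equivalent scaled-integer version, and the gains of 1/8 and 1/4 follow it; the multiplier of 4 on the deviation is an assumption here, since implementations have used different small multiples):

    class RTTEstimator:
        def __init__(self, first_sample):
            self.srtt = first_sample        # smoothed RTT estimate
            self.rttvar = first_sample / 2  # smoothed mean deviation

        def update(self, sample):
            # Prediction error: how far the new measurement falls
            # from the current estimate.
            err = sample - self.srtt
            # Fold a fixed fraction of the error back into both
            # estimates (gains of 1/8 and 1/4, per the appendix).
            self.srtt += err / 8
            self.rttvar += (abs(err) - self.rttvar) / 4

        def rto(self):
            # Timeout = mean plus a multiple of the deviation, so the
            # timer adapts to both the load and the load's variance.
            return self.srtt + 4 * self.rttvar

Because the timeout widens with the measured deviation rather than a fixed multiple of the mean, the timer stays honest even when load (and therefore RTT variance) is high.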
Space successive retransmissions of the same packet exponentially far apart (exponential backoff). This provides stability and prevents the congestion from becoming worse.
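A sketch of the backoff rule, assuming a hypothetical 64-second ceiling on the timeout:

    def backed_off_rto(base_rto, n_retransmits, max_rto=64.0):
        # Timeout before the n-th consecutive retransmission of the
        # same packet: doubling per attempt spaces retransmissions
        # exponentially far apart.  The 64 s cap is an assumption
        # here, not a value taken from the paper.
        return min(base_rto * (2 ** n_retransmits), max_rto)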
Begin transmitting packets at a slow rate with a small window size ('slow-start'). Algorithms often begin transmitting with window sizes three to four times larger than the network connection can accommodate. This burst of packets often creates a snowball effect, where network flow continues to oscillate and never reaches a true equilibrium. (Described on page 2)
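A toy sketch of the slow-start opening, under the simplifying assumption that every segment sent in a round trip is acknowledged. The window doubles each round trip, so it reaches size W in about log2(W) round trips instead of arriving on the wire as a single burst:

    def slow_start(target_window):
        # Open the window from a single segment, adding one segment
        # per ACK; since every in-flight segment earns an ACK, the
        # window doubles each round trip.  Returns the per-round
        # window sizes.
        cwnd, history = 1, [1]
        while cwnd < target_window:
            cwnd = min(2 * cwnd, target_window)
            history.append(cwnd)
        return history

    # e.g. slow_start(8) -> [1, 2, 4, 8]: three gentle doublings
    # instead of one eight-packet burst.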
Use the timing of acknowledgments to improve round-trip-time estimates; the acks returning from the receiver supply the samples that feed the estimator above.
Adjust the window size of the transmission according to the performance of the network. When the network is congested, reduce the window at an exponential rate; this is implemented by applying a multiplicative factor. When the congestion has subsided, increase the window size slowly in constant steps; this is implemented with an additive factor. (Details on pages 5 and 6)
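A sketch of the two adjustments using the paper's constants (halve the window on congestion; open it by one segment per round trip otherwise). Here cwnd is measured in segments and kept as a float so the per-ACK increase of 1/cwnd can be fractional:

    def on_congestion(cwnd):
        # Multiplicative decrease: a drop signals congestion, so
        # halve the window; repeated signals shrink it exponentially.
        return max(1.0, cwnd / 2)

    def on_ack(cwnd):
        # Additive increase: probe for spare capacity by opening the
        # window one segment per round trip, spread as 1/cwnd per ACK.
        return cwnd + 1.0 / cwnd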
For the purposes of these algorithms, congestion notification comes for free: gateways simply drop packets when the network becomes congested. Because the vast majority of packet loss is due to dropped packets rather than damaged packets, a dropped packet effectively 'tells' the sender that the network is congested. This allows the algorithms above to go into effect and relieve the congestion.
That's nice, but perhaps gateways could play a larger, more direct role. One area these end-to-end algorithms have little control over is fairness. For example, suppose System A is the only sender on a network. System A starts sending data and eventually ends up with a very large window size. But now System B joins the network. System B also wants to send data, so it starts increasing its window size. Before very long, though, System B's packets start being dropped, and it is forced into equilibrium at a very small window size. System B may never get equal access to the link.
Now enter the gateway algorithm. It detects that System A is behaving unfairly and starts dropping A's packets. A then decreases its window size, and B is finally allowed to increase its own. Before you know it, everyone is living happily ever after in System Land.
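The paper leaves the details of gateway-side policing to future work, so the following is purely a hypothetical illustration of the idea: when its queue overflows, a gateway could drop a packet from whichever flow holds the largest share of the queue, which in effect tells the heaviest sender (System A above) to shrink its window:

    from collections import Counter

    def pick_drop_victim(queued_flow_ids):
        # queued_flow_ids: the flow id of each packet now in the
        # queue.  Dropping from the most-represented flow penalizes
        # the sender using the most buffer, nudging the competing
        # window sizes toward fairness.
        return Counter(queued_flow_ids).most_common(1)[0][0]

    # e.g. pick_drop_victim(['A', 'A', 'A', 'B']) -> 'A'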
The performance of the modified TCP algorithm looks to be nothing less than amazing. There are nine pages of charts showing how these simple changes drastically improve the stability of TCP; everything from slow-start to dynamic window resizing is demonstrated. I believe Jacobson and Karels have proven that TCP can be drastically improved through some simple modifications. However, they provide little analysis to show that their modifications are the best ones possible. See questions below for more on this.