“Congestion Avoidance and Control”

This paper discusses techniques for congestion control in TCP: that is, ensuring that network performance is stable and near-optimal even when the network is “congested” (near its maximum capacity). The basic principle is conservation of packets: when the network is in equilibrium, a new packet should not be added to the system until an old packet leaves. Achieving these properties requires solving several practical problems:

  • at connection establishment, the connection should be brought to equilibrium quickly: the implementation should rapidly discover the sustainable data transfer rate
  • at equilibrium, packets should not be added to the network before old packets have left
  • the protocol should respond to changes in network bandwidth/latency/utilization, increasing or decreasing the data transfer rate appropriately

Reaching equilibrium

To address connection establishment, the paper introduces the “slow-start” mechanism to gradually increase the transfer rate:

  1. Initially, the “congestion window” is set to a single packet
  2. For each ACK received, the window is incremented by one packet (so the window roughly doubles every round-trip time)
  3. When sending, the window used is the minimum of the congestion window and the receiver’s advertised window

This addresses two problems: (1) it grows the window quickly, so as to utilize available network resources, and (2) it avoids overloading the network by sending a full receiver-advertised window at the outset. The latter matters because two hosts on fast networks may be connected by a slow intermediate link; if the initial window is too large, the intermediate gateways will drop packets, forcing many retransmissions.
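
A minimal sketch of the mechanism, assuming a hypothetical sender object (the names SlowStartSender, cwnd, usable_window, and on_ack are illustrative, not from the paper):

    class SlowStartSender:
        def __init__(self, advertised_window):
            self.cwnd = 1  # congestion window, in packets
            self.advertised_window = advertised_window  # receiver's limit

        def usable_window(self):
            # Send no more than either the network (cwnd) or the
            # receiver (advertised window) can absorb.
            return min(self.cwnd, self.advertised_window)

        def on_ack(self):
            # Each ACK opens the window by one packet; a full window of
            # ACKs arrives per round trip, so cwnd doubles every RTT.
            self.cwnd += 1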

Maintaining equilibrium

To keep the connection in an equilibrium state, the TCP implementation should track the round-trip time (RTT) between sender and receiver. As the RTT increases, the transfer rate should be decreased, and vice versa. Traditional implementations get this wrong by not allowing for enough variance in the RTT. During periods of congestion, RTT variance increases; if the sender responds to slowly-acknowledged packets by retransmitting them, it will often be retransmitting packets that are not lost but merely delayed. This forces the network to do useless work (needless retransmits), which is particularly harmful during a period of congestion. The paper presents a more accurate method for estimating RTT variance, which reduces the impact of this phenomenon.
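
The estimator tracks both a smoothed RTT and a smoothed mean deviation, and derives the retransmit timeout from both. A sketch, assuming the commonly cited gains of 1/8 and 1/4 and a 4x deviation multiplier (the paper's appendix gives a scaled integer version, and the paper originally used a 2x multiplier):

    class RttEstimator:
        def __init__(self, first_rtt):
            self.srtt = first_rtt        # smoothed RTT estimate
            self.rttvar = first_rtt / 2  # smoothed mean deviation

        def update(self, measured_rtt):
            err = measured_rtt - self.srtt
            self.srtt += err / 8                         # gain 1/8
            self.rttvar += (abs(err) - self.rttvar) / 4  # gain 1/4

        def rto(self):
            # Leave headroom of several mean deviations, so packets that
            # are delayed (not lost) during congestion aren't retransmitted.
            return self.srtt + 4 * self.rttvar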

Detecting over- and under-utilization

Finally, a TCP implementation must be able to respond to periods of increased congestion. If the RTT estimation improvements described above have been adopted, the paper argues, then most retransmissions will correspond to packets actually lost to congestion (e.g. insufficient buffer capacity at a gateway), rather than to packets merely delayed. Therefore, the paper proposes that packet loss be used as an implicit indicator of network congestion. The paper argues that the correct response to congestion is a multiplicative decrease in the current window size: whenever a packet is lost, the window is multiplied by a factor k (k < 1; the paper uses 0.5). If congestion persists, repeated multiplicative decreases shrink the window exponentially.

While network over-utilization (congestion) is signaled by packet loss, network under-utilization is not explicitly signaled. Therefore, to avoid under-utilization, the sender should periodically increase its window to “probe” for additional network capacity. The paper argues for an additive increase: the window grows by at most one packet per round-trip time (implemented by adding 1/cwnd to the window on each ACK). Note that this additive increase is not symmetric with the multiplicative decrease on packet loss. The paper argues that this asymmetry is sensible because (1) over-utilization is more harmful than under-utilization, and takes longer to recover from, and (2) a multiplicative increase would overshoot available capacity, leading to wild oscillations in the window size and poor average throughput.
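
A sketch combining the two rules, with k = 0.5 as in the paper and a per-ACK increase of 1/cwnd (the class and method names are illustrative, not from the paper):

    class AimdWindow:
        def __init__(self):
            self.cwnd = 1.0  # congestion window, in packets

        def on_ack(self):
            # Additive increase: probe gently for spare capacity,
            # gaining about one packet per round-trip time.
            self.cwnd += 1.0 / self.cwnd

        def on_loss(self, k=0.5):
            # Multiplicative decrease: back off sharply on congestion.
            # Repeated losses shrink the window exponentially (cwnd * k^n).
            self.cwnd = max(1.0, self.cwnd * k)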

Comments

Overall, I thought this was a very strong paper: clearly, it represented a marked improvement over the prior state of the art. In a modern environment, some of the paper’s assumptions may not hold: for example, the paper suggests that, given an accurate RTT estimate, most packet losses will be due to congestion. This may not hold on media with significant corruption rates (e.g. wireless links), where a packet may be discarded due to a checksum mismatch that has nothing to do with congestion. Similarly, I wonder whether the suggested “slow start” algorithm might be too slow for short-lived connections on modern high-bandwidth links (e.g. a web browser fetching images).

The use of packet loss as an implicit signal of congestion is interesting. I can see why avoiding a protocol modification would be expedient, but for something as important as congestion, I wonder if adding an explicit congestion signal would be wiser in the long run. I believe that Explicit Congestion Notification was proposed after this work was done—is that a win only when active queue management is in use?
