“Analysis and Simulation of a Fair Queueing Algorithm”

The traditional Internet architecture places all responsibility for congestion control at the endpoints of a flow, e.g. via TCP’s congestion control mechanism. Routers have traditionally managed their buffer space with a simple “first-come, first-served, tail drop” (FCFS) policy: packets are forwarded in the order they arrive, and if no buffer space remains, the router drops the newly-arrived packet (the tail of the queue).
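
As a rough illustration (a minimal Python sketch of my own, not from the paper; the buffer size and the transmit callback are assumptions), FCFS tail drop amounts to:

    from collections import deque

    BUFFER_SIZE = 64        # arbitrary buffer capacity, in packets
    queue = deque()

    def on_arrival(packet):
        if len(queue) >= BUFFER_SIZE:
            return          # buffer full: drop the newly-arrived packet ("tail drop")
        queue.append(packet)

    def on_link_ready(transmit):
        if queue:
            transmit(queue.popleft())   # serve packets strictly in arrival order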

It was subsequently realized that this arrangement is insufficient, for several reasons:

  1. Malicious users might deliberately subvert host-resident congestion control techniques, e.g. by modifying their TCP stack. This would allow users to obtain an unfair allocation of network bandwidth.
  2. In a heterogeneous network, different host-resident congestion control techniques might interact in unpredictable or undesirable ways.
  3. If a low-bandwidth, latency-sensitive connection (e.g. an interactive Telnet session) shares the same link as a high-bandwidth, latency-insensitive connection (e.g. file transfer), FCFS queue management will yield poor promptness (high latency) for the interactive connection.

This paper describes a “fair queueing” (FQ) algorithm for managing router buffer space that attempts to solve these problems. The proposed algorithm essentially emulates a theoretical “bit-by-bit round robin” (BR) scheme, which ensures fairness by giving each user a separate queue and transmitting one bit from each queue in round-robin fashion. Since bit-by-bit transmission is obviously impractical, FQ emulates this scheme at packet granularity: it transmits whole packets in increasing order of the finishing times they would have under BR. If a packet arrives and no more buffer space is available, the last packet from the largest queue is dropped.
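
A minimal sketch of that emulation (my own Python rendering; it assumes the BR round number R(t) is maintained elsewhere, and it omits the buffer-management and drop logic):

    import heapq
    import itertools
    from collections import defaultdict

    last_finish = defaultdict(float)   # per-flow finishing time of its most recent packet
    send_queue = []                    # packets ordered by their would-be BR finishing time
    _seq = itertools.count()           # tiebreaker so the heap never compares flow objects

    def enqueue(flow, length, round_number):
        # Under BR, this packet would start being served either when the flow's
        # previous packet finishes, or immediately if the flow is currently idle...
        start = max(last_finish[flow], round_number)
        finish = start + length        # ...and would need `length` more rounds to finish.
        last_finish[flow] = finish
        heapq.heappush(send_queue, (finish, next(_seq), flow, length))

    def dequeue():
        # FQ transmits whole packets in increasing order of BR finishing time.
        finish, _, flow, length = heapq.heappop(send_queue)
        return flow, length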

The FQ scheme also provides more prompt routing to users who utilize less than their fair share of bandwidth, following the intuition that latency-sensitive (e.g. interactive) applications typically use less bandwidth than high-throughput, latency-insensitive applications. This is achieved by giving a slight preference to routing packets that arrive for idle connections (packets inserted into empty queues). Even without this tweak, the round-robin strategy followed by FQ tends to provide much better promptness for low-throughput connections that share a network with high-throughput sessions.
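
In the paper, this preference comes from a small offset (δ) in the quantity used to order packets for transmission. A hedged sketch (the parameter name DELTA and its value are mine; the flow’s previous finishing time is passed in explicitly):

    DELTA = 100.0   # promptness parameter; larger values favour newly-active flows more

    def bid(prev_finish, length, round_number):
        # For a flow that has been idle for a while, its previous finishing time
        # lags the current round number, so the max() is dominated by
        # (round_number - DELTA) and the packet is ordered slightly ahead of where
        # plain finishing-time order would place it. Packets are sent in
        # increasing order of this bid.
        return length + max(prev_finish, round_number - DELTA)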

The authors simulate the performance of the FQ and FCFS policies with several different host-resident flow control algorithms, and show that FQ effectively controls congestion, ensures more fair allocation of bandwidth, and provides better promptness for low-throughput connections.

The paper references an interesting game-theoretic argument: a queueing policy should encourage “good” behavior from users, in the sense that self-optimizing behavior by individual users results in desirable global behavior. For example, while FCFS encourages each user to flood the network with as much traffic as possible, FQ essentially punishes a user with a misbehaving flow control algorithm by dropping that user’s packets more aggressively.

Comments

FQ routers would require considerably more state and per-packet computation than FCFS routers, which would make them more expensive. Is it cost-effective to build FQ routers that serve high-bandwidth links? I wonder if this issue is less pressing nowadays, given advances in technology — however, bandwidth has also increased at approximately Moore’s Law rates, so this might still be a problem. If routers need to track flows to perform FQ anyway, one wonders whether retaining datagrams as the basic unit of routing is still optimal.

While the FQ policy promises greater fairness than FCFS, in practice it could still be defeated quite easily. Since the authors define a “user” as a source-destination pair, a malicious source could starve other users of bandwidth by sending packets to many different destinations. Similarly, a malicious user might be able to penalize a particular user by spoofing packets with that user’s (source, destination) address pair.
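
To make the objection concrete (an illustrative sketch of my own, not from the paper): with flows keyed on the (source, destination) pair, a source spraying traffic at N distinct destinations is treated as N independent users, so its aggregate share of the bottleneck link grows with N.

    def flow_key(packet):
        # The paper's notion of a "user": one queue per (source, destination) pair.
        return (packet.src, packet.dst)

    def aggregate_share(num_destinations, num_other_flows):
        # A source addressing N destinations gets N queues, hence roughly N shares.
        n = num_destinations
        return n / (n + num_other_flows)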
