“ExOr: Opportunistic Multi-Hop Routing for Wireless Networks”

Wired networks are traditionally viewed as a unicast medium: packets are sent from A to B. Because the underlying medium can have a degree of sharing (e.g. multiple hosts connected to the same Ethernet hub), steps must be taken to avoid interference between concurrent senders (e.g. CSMA).

In a wireless network, the medium is shared: any hosts in range of a radio transmission can receive it. If the goal is to use a traditional unicast routing protocol over a wireless link, this makes interference between senders a more challenging problem (e.g. as discussed in the MACAW paper). However, rather than viewing broadcast as an inconvenience we need to work around, the broadcast nature of wireless can also be leveraged to design new protocols that would be infeasible for a wired network.

A nice example of a broadcast-oriented protocol for wireless is “ExOr: Opportunistic Multi-Hop Routing for Wireless Networks”. In a traditional routing design, a sender chooses the next hop that should receive a packet, and then unicasts the packet to that destination. In ExOr, a sender broadcasts a batch of packets to a group of nodes simultaneously. The set of batch recipients coordinate to try to ensure that each packet in the batch is forwarded onward. The recipient of a packet is only chosen after the packet is sent, which allows ExOr to “opportunistically” take advantage of links that have high loss rates. ExOr assumes that node reception probabilities are mostly independent and mostly decrease with distance — both of which are probably pretty reasonable assumptions.


The source collects a batch of packets destined for the same host, and then chooses a forwarding list for the batch. The forwarding list is a list of nodes, sorted by the expected cost of delivering packets from that node to the eventual destination node. The cost metric is similar to ETX (expected number of transmissions required to send the packet to the destination via unicast, including retransmissions); unlike ETX, it only considers the forward delivery probability. While it would be possible to include all possible recipient nodes in the forwarding list, this would increase the coordination cost among the forwarders, so ExOr only includes “likely” recipients in the list (estimated 10%+ chance of receiving a broadcast packet).
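As a rough illustration, forwarding-list construction might look like the following sketch. The node names, costs, and the helper itself are illustrative (only the 10% reception threshold and forward-only cost metric come from the paper):

```python
# Sketch of ExOr forwarding-list construction. The 10% threshold and
# forward-only cost follow the paper; the helper is illustrative.

def build_forwarding_list(candidates, min_recv_prob=0.1):
    """candidates: {node: (cost_to_dest, recv_prob_from_source)}.

    Keep only "likely" recipients (>= 10% chance of hearing the
    broadcast), then sort by the forward-only ETX-like cost so the
    node closest to the destination gets the highest priority.
    """
    likely = {n: v for n, v in candidates.items() if v[1] >= min_recv_prob}
    return sorted(likely, key=lambda n: likely[n][0])

fwd = build_forwarding_list({
    "A": (1.2, 0.9),   # near the destination, usually hears the source
    "B": (2.5, 0.6),
    "C": (4.0, 0.3),
    "D": (5.0, 0.05),  # unlikely to hear the broadcast, so excluded
})
# fwd == ["A", "B", "C"], highest priority first
```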

Each packet contains a batch map, which holds the sender’s estimate of the highest priority (according to the cost metric) node to have received each packet in the batch. When a node receives a packet, it uses the packet’s batch map to update its local batch map. This means that batch map information propagates through the nodes, carrying information about packet reception from high priority nodes to lower priority nodes.
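The merge rule can be sketched as follows (a minimal illustration; the function and the rank encoding are my own names, assuming a lower rank means higher priority):

```python
# Minimal sketch of batch-map merging: for each packet, keep the
# highest-priority node believed to have received it. Priority here
# is a rank (lower = higher, i.e. closer to the destination); the
# names are mine, not the paper's.

def merge_batch_maps(local, received, rank):
    """local/received: {packet_id: node}; rank: {node: priority}."""
    merged = dict(local)
    for pkt, node in received.items():
        if pkt not in merged or rank[node] < rank[merged[pkt]]:
            merged[pkt] = node
    return merged

rank = {"A": 0, "B": 1, "C": 2}     # A is closest to the destination
local = {1: "C", 2: "B"}
heard = {1: "A", 3: "C"}            # batch map carried in a received packet
merged = merge_batch_maps(local, heard, rank)
# merged == {1: "A", 2: "B", 3: "C"}: packet 1 upgraded, packet 3 learned
```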

After the source broadcasts a batch, each member of the forwarding list broadcasts, ordered by descending priority (ETX value). Each node broadcasts the packets it received, along with its updated batch map. Nodes coordinate to schedule their transmissions to try to avoid interference (e.g. by estimating when each node’s expected transmission time is, according to ETX value and batch map contents). The protocol continues cycling through the nodes in priority order. At the end of each cycle, the ultimate destination broadcasts its batch map 10 times; at the beginning of each cycle, the source resends packets that weren’t received by any node (by observing batch map contents). The ExOr scheme stops when 90% of the packets in a batch have been transferred, and uses a traditional routing policy to deliver the remainder of the batch.


Overall, ExOr is a really neat idea. That said, it is at best a special-purpose solution, because of the high latency it incurs for batching and multiple transmission rounds. Similarly, the need for a split TCP proxy is pretty ugly. A complete solution to wireless routing would perhaps adaptively switch between latency-oriented and bandwidth-oriented routing techniques, depending on the nature of the traffic.

It seems arbitrary to me that ExOr works on a batch-by-batch basis; that almost seems like establishing a new TCP connection for each window’s worth of data. The amount of useful work done in each transmission round decreases, until the protocol imposes an arbitrary cutoff at 90% and switches to a traditional routing protocol. Instead, wouldn’t it be more sensible for ExOr to allow new packets to be inserted into the batch as the destination confirms the delivery of packets? This would essentially emulate the “conservation of packets” principle from the Van Jacobson paper.


Filed under Paper Summaries

“White Space Networking with Wi-Fi like Connectivity”

“White Space Networking with Wi-Fi like Connectivity” discusses how to achieve wireless networking using so-called “white space”: unused portions of the UHF spectrum (approximately 512–698 MHz). This is attractive, because using the UHF spectrum can allow wireless networking over a relatively long range (1 mile or greater, compared to 300–600 feet for 802.11n). UHF signals are also better able to penetrate walls and buildings in urban environments.

The difficulty with using UHF spectrum is that the spectrum has already been assigned for use by two “incumbents”: analog TV and wireless microphones (although analog TV’s use of the spectrum should have decreased since this paper was written, due to the recent conversion of analog TV to digital). Per FCC rules, non-incumbent use of the UHF spectrum must avoid interfering with incumbent use. While TV signals are relatively stable, wireless microphone transmissions can begin without warning, which poses some challenges to providing wireless networking using the UHF spectrum. To gain improved throughput, the authors also suggest that multiple contiguous free channels should be aggregated together and used for networking.

To enable networking over UHF, several problems must be solved:

  • Spectrum Assignment: What portion of the 180 MHz UHF band should be used for networking in a given locale? The UHF spectrum is typically “fragmented”: some portions of the spectrum are in use, while others are free. Furthermore, to increase throughput, multiple contiguous free channels should be used by a single network. The right spectrum to use will also change over time, e.g. as hosts move and wireless microphones are activated. The channel used must be free for both the AP and all clients, which is made challenging by the larger spatial extent of UHF networks.
  • AP Discovery: WiFi APs emit beacons every 100ms on the WiFi channel they are using. To find APs, clients can easily scan each channel, looking for AP beacons. This is more challenging for UHF, because there are more possibilities to scan (30 UHF channels, and 3 possible channel widths, yielding 84 combinations).
  • Handling Disconnections: If incumbent use of a channel is detected, networking transmissions must immediately stop using the channel (the authors found that even transmitting control packets can result in audible interference for wireless microphones using the channel). If clients and APs are prone to abruptly disconnect from a channel, a method is needed to choose another channel and continue communication.


To solve the spectrum assignment problem, clients and APs exchange information about which channels are free near them. APs probe for new channels when they detect incumbent use on their current channel, or if they observe a performance drop. Channel probing is done by considering which channels are free for all clients and the AP, and by then estimating the available bandwidth on each channel.
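A hedged sketch of this channel-selection step, assuming the free-channel sets have already been exchanged. The paper’s actual algorithm also estimates available bandwidth per candidate; here the widest common run of channels simply stands in for that estimate, and all names are illustrative:

```python
# Hedged sketch of spectrum assignment: find a run of contiguous UHF
# channels that is free at the AP *and* at every client. The widest
# such run stands in for the paper's bandwidth estimate.

def pick_spectrum(free_sets, max_width=3, num_channels=30):
    common = set.intersection(*map(set, free_sets))  # free everywhere
    best = None
    for start in range(1, num_channels + 1):
        for width in range(max_width, 0, -1):
            span = set(range(start, start + width))
            if span <= common and (best is None or width > len(best)):
                best = sorted(span)
    return best

ap      = {1, 2, 3, 7, 8}
client1 = {2, 3, 4, 7, 8}
client2 = {2, 3, 7, 8, 9}
# Channels free everywhere are {2, 3, 7, 8}; the widest contiguous
# runs are {2, 3} and {7, 8}, and the first one found wins.
assert pick_spectrum([ap, client1, client2]) == [2, 3]
```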

To enable AP discovery, the authors propose some signal processing magic. They propose a technique called “SIFT” which essentially allows them to scan the spectrum range and cheaply determine the presence of an AP on a channel; they can then tune the radio transceiver to the identified channel, and decode the beacon packet as usual.

To handle disconnections due to incumbent use of the spectrum, they propose using a separate 5 MHz backup channel, which is advertised as part of AP beacon packets. If a client senses that a disconnection has occurred (e.g. because no data packets have been received recently), it switches to the backup channel, and both listens for and emits chirps. Chirps contain information about the available white spaces near the chirping node, which is used to pick a new channel for communication. APs periodically scan for chirps.

It’s possible that the backup channel is already in use by another incumbent. In this case, the client picks an arbitrary available channel as the secondary backup channel, and emits chirps on it. The AP periodically scans all channels to attempt to find such chirps. The same signal processing magic (SIFT) that is used to enable cheap AP discovery is used to make scanning all channels periodically feasible.

Related Reading

Matt Welsh discusses this paper on his blog. The slides for the SIGCOMM talk on this paper are also available online.

“Interactive Wi-Fi Connectivity for Moving Vehicles”

“Interactive Wi-Fi Connectivity for Moving Vehicles” proposes ViFi, a system for achieving connectivity via WiFi from moving vehicles.

A moving vehicle is in range of different WiFi base stations over time. Due to host mobility, the best base station to use will change, so the client must do “handoff” to move between the base stations as appropriate. The authors describe traditional handoff policies as “hard handoff,” because a client is only ever connected to one base station at a time. The authors present experimental and analytical results to suggest that any handoff policy based on connecting to a single base station will be outperformed by a policy that opportunistically utilizes multiple base stations that are in range at once. Utilizing multiple base stations at once is effective for two reasons:

  1. A client is often in range of multiple base stations, at least in the urban environments they study.
  2. Packet loss is roughly independent across senders and receivers. In the upstream direction, utilizing multiple BSs is effective because losses are roughly independent across base stations: a packet that is not received by one BS is likely to be received by at least one other BS. In the downstream direction, BS diversity is effective because downstream losses tend to be “bursty” and path-dependent, but receiver independent: while one BS will often fail to deliver a burst of packets, other BSs are likely to be able to send packets to the receiver.


Therefore, the authors propose a scheme to allow WiFi clients to opportunistically utilize multiple in-range base stations. They assume that base stations have some ability to communicate, but do not have a high-bandwidth link. Hence, they focus on a scheme that allows the base stations to coordinate to enable base station diversity and avoids duplicate or missed packet deliveries, but requires minimal additional load on the inter-base station link and doesn’t hurt user-perceived latency. Their basic approach is to combine opportunistic reception (clients broadcast to all BSs in range) with probabilistic relaying (base stations compute relay probabilities independently and probabilistically).

In ViFi, each vehicle designates one nearby base station as the anchor, and the rest as auxiliaries. The anchor and auxiliary list is periodically broadcast by the vehicle via a beacon, which allows base stations to learn the identity and addresses of peer BSs.

Packet Transmission

The basic ViFi transmission algorithm is simple. To send a packet upstream, the vehicle broadcasts it to all in-range base stations. If the anchor receives the packet, it responds with an ACK broadcast. Any auxiliaries that overhear the packet broadcast wait for a short window; if they don’t see an ACK, they probabilistically relay the packet to the anchor via the inter-BS backplane. When the anchor receives relayed packets that it hasn’t already ACK’ed, it ACKs them. Finally, the vehicle retransmits packets if it doesn’t receive an ACK within a certain period of time.

The transmission algorithm for downstream packets (base station => vehicle) works similarly (and almost symmetrically). The anchor broadcasts the packet to the vehicle. If the vehicle hears it, it broadcasts an ACK. If any auxiliaries overhear the broadcast but don’t see the ACK after a short window of time, they probabilistically broadcast the packet themselves. The anchor retransmits the packet if it doesn’t hear an ACK during the retransmission interval.

The magic is in the probabilistic retransmission technique, which must balance between retransmitting too often (wasting bandwidth) and not often enough (risking packet drops). The intuition behind their probabilistic retransmission technique is that they want the expected number of packet relays amongst all auxiliaries to be 1; amongst auxiliaries, they prefer transmission by auxiliaries that are better connected to the destination (i.e. either the anchor BS or the vehicle). Each BS computes packet reception probabilities between itself and the other BSs and the vehicle, using a periodic beacon.
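The intuition can be sketched as follows. This is a simplified version, not the paper’s exact formula (which also conditions on which auxiliaries likely overheard each other), and the names are mine:

```python
# Simplified sketch of the relay-probability intuition. Auxiliary i,
# having overheard an unACKed packet, relays with probability
#   r_i = c_i / sum_j(h_j * c_j),
# where h_j is the chance auxiliary j heard the packet and c_j is j's
# delivery probability toward the destination (learned from beacons).
# Absent capping, the expected number of relays sum_i(h_i * r_i) is 1,
# and better-connected auxiliaries relay more often.

def relay_probs(heard_prob, conn_prob):
    norm = sum(h * c for h, c in zip(heard_prob, conn_prob))
    return [min(1.0, c / norm) for c in conn_prob]

h = [1.0, 0.5]   # aux 0 always overhears; aux 1 half the time
c = [0.8, 0.4]   # aux 0 is better connected to the destination
r = relay_probs(h, c)
expected_relays = sum(hi * ri for hi, ri in zip(h, r))
assert abs(expected_relays - 1.0) < 1e-9
```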

Other Details

802.11 link-layer acknowledgments are sent immediately after a packet is delivered to the recipient, which allows the sender to predict the expected arrival time of the ACK. In ViFi, ACKs may be delayed due to relaying, which depends on current network conditions (e.g. behavior of the inter-BS backplane). Hence, ViFi uses its own acknowledgment scheme, with the retransmission interval set based on recent history (99th percentile measured delay for ACKs).
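A tiny sketch of that timeout rule (an illustrative helper, not ViFi’s code):

```python
# Sketch of the retransmission interval: the 99th percentile of
# recently observed ACK delays.

def retx_interval(ack_delays_ms, pct=0.99):
    ordered = sorted(ack_delays_ms)
    idx = min(len(ordered) - 1, int(pct * len(ordered)))
    return ordered[idx]

recent = list(range(1, 101))       # last 100 ACK delays, in ms
assert retx_interval(recent) == 100
```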

ViFi also implements salvage. When a vehicle moves out of range before the anchor BS can deliver packets for it, those packets are “stranded” on the old anchor BS. When the vehicle obtains a new anchor BS, the beacon it sends out also includes the previous anchor BS. The new anchor contacts the old anchor, and asks for any unacknowledged packets for the vehicle to be relayed.

“Modeling Wireless Links for Transport Protocols”

This paper tackles an interesting research problem: it is well-known that traditional transport protocols (e.g. TCP) don’t perform well over wireless links (e.g. because they conflate packet losses with congestion; I reviewed an earlier paper that evaluates TCP improvements for wireless links). But in order for transport protocol designers to take wireless network performance into account, they need simple models of wireless performance that are nonetheless accurate over the wide array of different wireless variants and environments. This was reflected in the MACAW paper, for example: the proposals made by the paper are very dependent on their assumptions about the behavior of the wireless link layer, and it is hard to know how well those assumptions will generalize.

Modeling is something of an art: a good model should balance realism, generality, and detail. There are four basic parameters to a wireless model, from the perspective of the transport protocol designer:

  • Type of link: Cellular, wireless LAN, or satellite.
  • Network topology: where does the wireless link(s) appear in the network topology?
  • Traffic model: for example, is traffic bidirectional and how large is a typical transfer size?
  • Performance metrics: what should the designer optimize for? For example: throughput, goodput, fairness, or delay.

Additionally, wireless links possess some additional characteristics that are typically reflected in models:

  • Error losses and packet corruption. This should be modeled by having the link drop packets according to some random distribution (e.g. a uniform distribution over packets or bits).
  • Variation in delay. Modeled by temporarily suspending data transmission over the simulated link.
  • Packet reordering. This can be modeled by swapping the position of two packets in a queue, but this has the disadvantage that there must be a non-zero queue length for reordering to occur. Perhaps better is to simulate reordering by imposing delay on one packet but not others.
  • On-demand resource allocation. For example, in GPRS, radio channels are released when the queue size falls below a certain threshold; the channel must be allocated when new data arrives, introducing delay. This can be modeled by introducing an additional delay into the model when data arrives and the channel has been idle for longer than a configurable period.
  • Bandwidth variation. This can be modeled by simply changing the bandwidth of the link.
  • Asymmetry in bandwidth and latency. This can be directly reflected in the model: uplink and downlink bandwidths and latencies can be allowed to differ.
  • Queue management. To model current cellular and wireless LAN networks, the authors suggest using a “Drop-Tail” policy. To model satellite or “future cellular and wireless LAN networks”, RED may be more appropriate.
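Two of these characteristics (uniform error losses and delay variation) can be sketched with a toy link model. This is purely illustrative; an actual study would use a network simulator:

```python
# Toy model of two link characteristics: uniform random error losses
# and delay variation (a temporarily suspended link).

import random

def transmit(packets, loss_rate, base_delay_ms, suspend_at=None,
             suspend_ms=0, seed=0):
    """Return (packet, arrival_ms) pairs for delivered packets."""
    rng = random.Random(seed)
    delivered, clock = [], 0.0
    for i, pkt in enumerate(packets):
        if i == suspend_at:
            clock += suspend_ms          # link suspended: delay spike
        clock += base_delay_ms
        if rng.random() >= loss_rate:    # uniform per-packet error loss
            delivered.append((pkt, clock))
    return delivered

out = transmit(range(10), loss_rate=0.5, base_delay_ms=10,
               suspend_at=5, suspend_ms=200)
# Packet i, if delivered, arrives at (i + 1) * 10 ms, plus 200 ms
# for packets sent after the link suspension.
```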

Modeling mobility depends on the goals of the researcher. For those concerned only with the effect that intersystem handoff has on transport protocols, it suffices to change the link characteristics in the network model to effect a handoff. If the researcher wants to model handoffs in more detail, a more exhaustive model is required (accounting for topology, home and foreign agents, Mobile IP, etc.)

“Architecture and Evaluation of an Unplanned Mesh Network”

This paper discusses the design of the Roofnet, a 37-node “mesh network” that covers four square kilometers in Cambridge, MA. The authors are trying to get “the best of both worlds”: the performance of a carefully-planned wireless deployment, with the lack of planning and easy deployability of a collection of independent “hot spots.”


A small fraction of Roofnet nodes are connected to the Internet; the remainder of the Roofnet nodes access the Internet by finding a route through the Roofnet that terminates at a gateway node (4 of the 37 Roofnet nodes are gateways). Each Roofnet node is assigned a non-routable IP address, based on the node’s MAC. Gateway nodes use NAT to route traffic into and out of Roofnet. Similarly, each Roofnet node runs a local “hot spot,” and assigns local (192.168.1.x) addresses to clients via DHCP. NAT is used to rewrite these addresses into the Roofnet node’s address, before they are routed through Roofnet. Using two levels of NAT means that no manual configuration of the address space is required: both Roofnet node IPs and node-local client IPs can be automatically assigned, without risk of conflict.

Each node keeps track of the performance of all the gateway nodes it is connected to, and chooses the best route for each new TCP session. Roofnet uses dynamic source routing (DSR): each node maintains a database of link metrics for each pair of nodes in the network. It chooses the best route for each new connection by applying Dijkstra’s algorithm, and “source routes” (the entire route for a flow is assigned by the originating end point, not intermediate routers). The link metrics are periodically updated: link metrics for the source route are included by each node that forwards a packet; periodic flooding of link metrics is done; and nodes “snoop” and overhear link metrics sent by other traffic they can listen to. Flooding is only necessary when no route to the destination host can be found—the authors argue that this will rarely occur, since most routes terminate in a gateway.

To choose among routes, Roofnet uses an “estimated transmission time” (ETT) metric: it combines the link’s highest-throughput bit rate with the probability that a packet sent over the link will be successfully delivered. The authors found that this policy often results in using relatively high-bandwidth routes with relatively high link-level loss rates. This situation was unanticipated by the firmware designers of their wireless NIC, so they had to use a custom policy they call “SampleRate” for choosing which 802.11b transfer bit rate to use (high-bandwidth, high-loss rates are often preferable). SampleRate varies the 802.11b transfer rate dynamically, measuring the current throughput vs. delivery probability for the different rate options. SampleRate is similar in principle to the ETT routing metric they use, but differs because:

  1. SampleRate selects only the link-local transfer rate, not the route.
  2. SampleRate is based on measurements of actual packet transmissions rather than broadcasting or snooping, and hence can respond to changes in network conditions much more rapidly than ETT can.
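The ETT idea can be illustrated with a toy calculation. This is a simplification (the real metric also has to account for the ACK’s delivery probability on the reverse path), and the numbers are made up:

```python
# Toy illustration of an ETT-style link metric: expected delivery time
# is packet airtime at the link's best bit rate divided by delivery
# probability at that rate.

def ett_us(pkt_bits, bitrate_mbps, delivery_prob):
    airtime_us = pkt_bits / bitrate_mbps   # Mbit/s == bits per microsecond
    return airtime_us / delivery_prob      # folds in expected retransmissions

fast_lossy = ett_us(12000, 11.0, 0.60)     # 11 Mbit/s link, 60% delivery
slow_clean = ett_us(12000, 2.0, 0.95)      # 2 Mbit/s link, 95% delivery
assert fast_lossy < slow_clean             # the fast, lossy link wins
```

This is exactly the situation the authors describe: a high-bandwidth route with noticeable link-level loss can still have lower expected transmission time than a slow, clean one.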


The authors found that the average throughput between a node and its closest gateway is 1395 kbit/sec, with a latency of 22 msec: this is quite impressive, and matches typical budget consumer-grade DSL performance at the time (2005). Roofnet routes most traffic over short, high-bandwidth hops, ignoring most of the long-distance links available that perform poorly.

The authors found that a density of about 5 nodes per square kilometer is necessary (in this environment) for all-pairs connectivity. Increasing node density leads to higher throughput, because there are more alternate routes to choose from.

Roofnet is a true “mesh” network, in the sense that each node routes at least some packets through most of its neighbors (rather than preferring a single neighbor for most traffic, which would suggest that a directional, planned design would be better). On the other hand, eliminating the two best-connected nodes decreases average throughput by 43%, which suggests that, despite the mesh architecture, good network performance depends on having a few “supernodes”.


The authors are careful to brag more prominently about the mean all-pairs average throughput (627 kbits/sec) than about the median (400 kbits/sec). This is a little disingenuous, given the presence of a small number of high-bandwidth links that skew the distribution.

Using NAT is understandable, but doesn’t work well with some networking applications (e.g. BitTorrent, running a server inside Roofnet). It would be interesting to extend the Roofnet design to allow incoming connections from the Internet to an arbitrary Roofnet node, as well as allowing a TCP connection to remain connected as its route is changed on-the-fly.

It would be interesting to compare the connectivity of mesh networks with peer-to-peer networks: many of the concepts (auto-configuration, peer-to-peer topology, the importance of a small number of well-connected “supernodes”) seem to apply to both kinds of networks.

“Improving TCP Performance Over Wireless Links”

The traditional assumption behind the design of TCP is that most packet loss is due to congestion; this is stated explicitly in the Van Jacobson paper, for example. While this assumption mostly holds on wired networks, it causes problems with wireless links: packets dropped or corrupted due to the nature of the wireless medium lead to (1) slow recovery times because of the need to wait for duplicate ACKs or the RTO (2) poor subsequent performance, because TCP assumes that the packet drop was caused by congestion and reacts accordingly (e.g. reduces the congestion window).

“A Comparison of Mechanisms for Improving TCP Performance over Wireless Links” describes and compares several solutions to this problem. They classify the candidate solutions into three groups:

  1. end-to-end protocols modify the TCP stack to make the sender of a dropped packet more intelligent, for example by adding selective ACKs (SACK) or explicit loss notifications (ELNs). The idea here is both to avoid waiting for TCP-level timeouts and to recover faster from packet loss. ELNs also allow the sender to differentiate between losses due to congestion and losses due to other factors (e.g. noise, interference), so that the TCP sender doesn’t take congestion-avoidance steps.
  2. link-layer protocols add reliability mechanisms to the link layer that understand the nature of the wireless medium. For example, the base station might require a link-level ACK from wireless clients. Because such a protocol is specific to a single link, congestion is not a factor, and the RTT can be estimated more accurately; this allows a lower retransmission timeout, and means that the sender won’t misinterpret packet drops as congestion. Link-layer protocols hide much of the link lossiness from TCP: the lossy link just appears to be a more reliable, lower-bandwidth link. A major issue here is ensuring that these protocols play nicely with TCP-level reliable delivery mechanisms. More advanced link-layer protocols have knowledge of TCP semantics: for example, they might detect and suppress TCP-level duplicate ACKs and instead retransmit lost packets themselves.
  3. split-connection protocols are the most exotic: the base station acts as the endpoint for the TCP session, and it establishes a separate session with the wireless host. This second session may or may not use TCP; in either case, the base station is in a better position to manage the wireless session, because it knows more about the medium, and knows congestion cannot occur; hence, it can schedule retransmissions more effectively. The two transport-layer sessions act to insulate the TCP sender from any losses suffered on the wireless link.


The authors conduct a rigorous series of experiments, comparing the performance of several different variations from each class of protocols. They use an 8MB data transfer as the workload, and measure both LAN and WAN environments (in the latter, the sender is 16 hops away from the base station). Notable conclusions:

  • The authors observe a bad interaction between link-layer reliable delivery mechanisms and TCP. However, the problem does not lie in a simple mismatch between the TCP RTO and the link-layer RTO (the latter is much smaller than the TCP RTO of 500 msec). The problem is that the link-layer reliable delivery technique they used does not attempt to preserve delivery order; this yields duplicate ACKs from the destination TCP stack, and hence invokes TCP-level retransmission and recovery. They observed that 90% of the lost packets were redelivered by both the link-layer and TCP. This problem was solved by modifying the link layer to understand TCP semantics and suppress TCP-level duplicate ACKs for lost packets that the link-layer has already retransmitted.
  • Basic TCP achieves very poor throughput, especially for WAN environments. ELNs and SACKs both result in significant improvements; the authors hypothesize that a protocol that combines ELNs and SACKs would be the most effective.
  • They tested two variants of the “split connection” approach. When using TCP Reno for the wireless connection, they experienced poor performance: TCP Reno’s packet loss recovery is so slow that eventually the base station’s buffers are exhausted because packet loss limits goodput on the wireless link. They saw much better performance when they used SACKs for the wireless link, but even in this configuration, the split-connection approach did not significantly outperform the best reliable-delivery link-layer approach.


Using TCP Reno for the wireless half of the “SPLIT” configuration seems pretty silly: TCP Reno is close to the worst-performing protocol for wireless links, and the whole point of the split-connection approach is that the wireless link can use a more appropriate transport than TCP.

“MACAW: A Media Access Protocol for Wireless LAN’s”

This paper proposes an improved media access protocol for wireless LANs. A “media access protocol” is a link-layer protocol for allocating access to a shared medium (i.e. a wireless channel) among multiple transmitters. MACAW is a media access protocol for mobile pads that communicate wirelessly with base stations; each base station can hear transmissions from pads within a certain range of its location, called a cell.

This is a challenging problem for several reasons:

  • Transmitters are mobile, but location information is not communicated explicitly: a transmitter simply moves or a new transmitter comes into the area, and the media access protocol must adapt.
  • Connectivity is not transitive. For example, A and B can communicate, as can B and C, but A and C might not be able to; the protocol must be designed so that communications between B and C do not disrupt A, even though A has no direct knowledge of C.
  • Due to noise, communication may not be symmetric (A can hear B, but B cannot hear A), and transmitters may interfere with one another (A might be out of range of B, but A’s transmissions might still disrupt B’s ability to communicate).

The MACAW paper ignores both asymmetry and interference: they assume a simplified model in which two stations are either in range of one another or they aren’t; a transmission is successfully received iff there is only one active transmitter in range of the receiver; and no two base stations are within range of one another. Importantly, contention between transmissions occurs at the receiver of the transmission, not the sender: that is, two nodes can transmit different messages from the same area at once, but two nodes in the same area cannot simultaneously receive different messages. Their goal is a media access protocol that is both efficient (high utilization of the available media), and fair (each station gets its “fair share”).


MACAW is an improved version of an earlier wireless media access protocol called MACA. MACA proceeds as follows:

  • Before sending a message, the transmitter sends a RTS control message (“ready to send”), containing the length of the upcoming data message. The time taken to transmit a control message (~30 bytes) is called a slot.
  • If the receiver hears the RTS and is not currently “deferring,” it replies with a CTS control message (“clear to send”), which includes a copy of the length field from the RTS.
  • Any station that hears a CTS defers any transmissions for long enough to allow someone to send a data message of the specified length. This avoids colliding with the CTS sender (the receiver of the upcoming data message).
  • Any station that hears an RTS defers any transmissions for a single slot (long enough for the reply CTS to be received, but not long enough for the actual data message to be sent, because contention is receiver-local).
  • Backoff: if no CTS response is received for an RTS, the sender must retransmit the RTS. It waits an integer number of slots before retransmitting, where that integer is chosen randomly between 1 and BO, the backoff counter. BO is doubled for every retransmit, and reduced to 1 for every successful RTS-CTS pair.


MACAW is proposed as a series of improvements to the basic MACA algorithm. First, they suggest a less aggressive backoff algorithm: the exponential increase / reset to 1 policy of MACA leads to large oscillations in the retransmission interval. They propose multiplying BO by 1.5 after a timeout, and decreasing it by 1 after a successful RTS-CTS pair. They also arrange for the value of the backoff counter to be communicated between clients: this avoids a client with a lower backoff counter starving other clients from accessing the media. Finally, they changed the backoff counter to be per-destination, rather than a single counter: the observation here is that the backoff counter should measure congestion, and there is not a single homogeneous level of congestion in a typical wireless deployment.
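The gentler backoff can be sketched as follows (the bounds and helper names are illustrative, not the paper’s constants; the per-destination aspect would keep one such counter per peer):

```python
# Sketch of MACAW's gentler backoff: multiply the counter by 1.5 on
# timeout, decrease it by 1 on success, within illustrative bounds.

BO_MIN, BO_MAX = 2.0, 64.0

def on_timeout(bo):
    return min(BO_MAX, bo * 1.5)

def on_success(bo):
    return max(BO_MIN, bo - 1)

bo = 2.0
for _ in range(3):
    bo = on_timeout(bo)   # 3.0, 4.5, 6.75: no large oscillations
bo = on_success(bo)       # eases back to 5.75 after a success
assert bo == 5.75
```

Compare this with MACA, where three timeouts would have grown the counter exponentially and a single success would have reset it all the way to 1.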

Second, they propose that receivers should send an ACK to the sender after successfully receiving a data message. This is suggested because the minimum TCP retransmission timeout is relatively long (0.5 seconds at the time), so it takes a long time to recover from lost or corrupted messages. A link layer timeout can be more aggressive, because it can take advantage of knowledge of the latency of the individual link (rather than the end-to-end timeout in TCP). This isn’t inconsistent with the end-to-end argument: the link-layer timeout is not necessary for the correctness of TCP; it is just a performance optimization.

Third, they propose two related techniques for allowing transmitters to avoid contention more effectively:

  1. A DS packet should be sent after a successful RTS-CTS exchange, just before the data message itself. The idea here is to explicitly announce that the RTS-CTS succeeded, so that if a pad can hear an RTS but not the CTS response, it does not attempt to transmit a message during the subsequent data transfer period. The reasoning here is subtle: as noted before, contention is only at the receiver, so one wouldn’t think that a node that can hear the RTS but not the CTS should avoid transmitting. However, sending a message requires that the sender hear the CTS response (as well as the eventual ACK); therefore, if another node within range is sending, it would be pointless to also try to transmit.
  2. Suppose that two pads A and B in different cells both want to receive data rapidly, but are close enough that they contend with one another. If pad A “wins” (completes the RTS-CTS pair), it will effectively monopolize the channel: B will be unlikely to ever send a CTS response, because to do so it would need to receive the RTS at exactly the right time (just after A has finished receiving a data message). The authors propose a fix: if a receiver hears an RTS while it is deferring any transmissions, at the end of the deferral period it replies with an RRTS (“ready for RTS”) packet, prompting the sender to resend the RTS. However, the authors note that this solution is imperfect: if A is receiving data constantly, it is unlikely that B will ever hear the RTS, so it won’t know to send the RRTS.

Note that the best-case performance of MACAW is actually lower than that of MACA, because the additional ACK and DS messages sent by MACAW incur overhead. However, MACAW is much more resistant to interference, and ensures much fairer allocation of the medium among different transmitters.


I thought the paper was quite readable, and is a clear improvement over the prior work. However, there’s a downside to their presentation style: they begin with MACA, and then propose a series of incremental improvements that are each designed to address a shortcoming of MACA in a particular situation. However, some of their improvements actually degrade performance in other situations. It is hard to get an idea of how a given protocol change will behave in the full complexity of a field deployment from the paper.
