X-Trace

“X-Trace: A Pervasive Network Tracing Framework” describes a tool for understanding the behavior of distributed systems composed of layers of protocols. Traditional logging and diagnostic tools operate at a single layer of the stack, for example by tracing the flow of HTTP or TCP traffic in a network. This is insufficient for understanding many realistic failure scenarios, because application traffic typically traverses many different layers and protocols. When a client initiates an HTTP request, the following might happen:

  1. A DNS lookup is performed (via UDP, perhaps requiring recursive resolution)
  2. A TCP connection is established to the remote server, which requires transmitting IP packets and numerous Ethernet frames across different network links.
  3. The remote server handles the HTTP request, typically by running various application code and contacting multiple databases (e.g. a single Google search request is distributed to ~1000 machines)
  4. The HTTP response is returned to the client; the contents of the response may prompt the client to issue subsequent requests (e.g. additional HTTP requests to fetch resources like images and external CSS)

A failure at any point in this sequence can cause the original action to fail; hence, diagnostic tools that operate at a single layer of the stack cannot provide a complete picture of the operation of a distributed system.

Design

X-Trace works by tagging all the operations associated with a single high-level task with the same task ID. By modifying multiple layers of the protocol stack to record and propagate task IDs, all the low-level operations associated with a high-level task can be reconstructed. X-Trace also lets developers record causal relationships between operations, from which a “task tree” of related operations can be constructed.
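
To make this concrete, here is a minimal Python sketch of the propagation scheme, loosely modeled on the paper’s pushNext() and pushDown() primitives. The field layout, ID formats, and report tuples below are simplified assumptions for illustration, not the paper’s wire format:

    import os
    from dataclasses import dataclass

    def _fresh_id() -> str:
        return os.urandom(4).hex()

    @dataclass(frozen=True)
    class XTraceMetadata:
        """In-band metadata carried with every message in a task."""
        task_id: str  # shared by all operations in the high-level task
        op_id: str    # identifies the operation that sent this metadata

    def start_task() -> XTraceMetadata:
        """Called once at the origin of a high-level task (e.g. the HTTP client)."""
        return XTraceMetadata(task_id=os.urandom(8).hex(), op_id=_fresh_id())

    def push_next(md: XTraceMetadata, reports: list) -> XTraceMetadata:
        """Propagate to the next operation at the same layer (a 'next' edge)."""
        child = XTraceMetadata(md.task_id, _fresh_id())
        reports.append((md.task_id, child.op_id, md.op_id, "NEXT"))
        return child

    def push_down(md: XTraceMetadata, reports: list) -> XTraceMetadata:
        """Propagate to the layer below, e.g. HTTP -> TCP (a 'down' edge)."""
        child = XTraceMetadata(md.task_id, _fresh_id())
        reports.append((md.task_id, child.op_id, md.op_id, "DOWN"))
        return child

Offline, the task tree for a given task_id can then be rebuilt by joining each reported operation to its recorded parent.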

X-Trace metadata must be manually embedded into protocols by developers; protocols typically provide “extension”, “option”, or annotation fields that can hold X-Trace data. The “trace request” (tagging all the operations associated with a task) is done in-band, as part of the messages sent for the task. Data collection happens offline and out-of-band, which makes failure handling easier and allows the resources required for collection to be reduced (e.g. via batching and compression). The downside of offline collection is that it makes prompt diagnosis of problems more difficult.
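
As an illustration of both halves of this design, the sketch below (continuing the Python example above) embeds the metadata in an HTTP extension header and buffers reports for batched, best-effort delivery. The header name, collector endpoint, and report format are assumptions for illustration, not anything the paper specifies:

    import json
    import urllib.request

    XTRACE_HEADER = "X-Trace"  # assumed header name, for illustration only

    def send_with_metadata(url: str, md: XTraceMetadata) -> None:
        # In-band: the metadata rides inside the request itself, in an
        # extension field the protocol already provides.
        req = urllib.request.Request(url)
        req.add_header(XTRACE_HEADER, f"{md.task_id}:{md.op_id}")
        urllib.request.urlopen(req)

    class ReportBuffer:
        """Out-of-band collection: reports are buffered locally and shipped
        to a collector in batches, so tracing never blocks the traced task."""

        def __init__(self, collector_url: str, batch_size: int = 100):
            self.collector_url = collector_url  # hypothetical collector endpoint
            self.batch_size = batch_size
            self.pending = []

        def report(self, record: tuple) -> None:
            self.pending.append(record)
            if len(self.pending) >= self.batch_size:
                self.flush()

        def flush(self) -> None:
            if not self.pending:
                return
            body = json.dumps(self.pending).encode()
            try:
                urllib.request.urlopen(self.collector_url, data=body)
            except OSError:
                pass  # best-effort: losing reports must not fail the task
            self.pending = []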

Discussion

Overall, I think this is a really nice paper. The idea is obviously useful, and it is nicely explained.

The system appears to have only a limited ability to track causal relationships. In particular, situations in which multiple clients modify shared state don’t appear to be well supported. For example, suppose that request A inserts a row into a database table. Request B aggregates over that table; based on the output, it then tries to perform some action, which fails. Clearly, requests A and B are causally related in some sense, but X-Trace wouldn’t capture this relationship. Extending X-Trace to support full causal tracking would essentially amount to tracking data provenance.
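
To illustrate the gap, here is a toy sketch (not part of X-Trace) of what a provenance-style extension might record: rows are tagged with the task that wrote them, so a later read can emit the cross-task edge that X-Trace itself has no way to capture. All names here are hypothetical:

    rows = []        # stand-in for a database table
    provenance = []  # (reader_task, writer_task) edges X-Trace cannot record

    def insert_row(md: XTraceMetadata, value: int) -> None:
        rows.append({"value": value, "writer_task": md.task_id})

    def aggregate(md: XTraceMetadata) -> int:
        for row in rows:
            if row["writer_task"] != md.task_id:
                # Request B's output depends on data written by request A.
                provenance.append((md.task_id, row["writer_task"]))
        return sum(row["value"] for row in rows)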

It would be interesting to try to build a network-wide assertion checking utility on top of the X-Trace framework.
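
For instance, one simple invariant expressible over collected traces is that metadata propagation is never broken: every operation in a task’s trace should connect back to a single root. A sketch, assuming the report tuples from the earlier example:

    from collections import defaultdict

    def check_task_trees(reports: list) -> list:
        """Flag tasks whose trace is not a single connected tree, i.e. where
        metadata propagation was broken somewhere. Returns violations as
        (task_id, op_id) pairs."""
        ops_by_task = defaultdict(set)
        for task_id, op_id, _parent_id, _edge in reports:
            ops_by_task[task_id].add(op_id)

        violations = []
        missing_parents = defaultdict(int)
        for task_id, op_id, parent_id, _edge in reports:
            if parent_id not in ops_by_task[task_id]:
                missing_parents[task_id] += 1
                # One missing parent per task is expected (the root);
                # any more indicates a break in propagation.
                if missing_parents[task_id] > 1:
                    violations.append((task_id, op_id))
        return violations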


2 responses to “X-Trace”

  1. There have been several papers that address network debugging through invariant checking. I agree that it would be useful to characterize X-Trace graphs in terms of patterns and other graph-matching algorithms. What would it take to get you to use X-Trace in your pipelined map-reduce implementation?

  2. Using X-Trace in MapReduce (whether pipelined or not) is an interesting example, I think. A task would correspond to a MapReduce job. The initial job submission (client to JobTracker) could be tagged with the task ID, and that ID could be propagated through to each of the map or reduce tasks scheduled for the job. In turn, each map or reduce task could pass the task ID along as it executed, e.g. by encoding it when fetching data from HDFS, or when reduce tasks fetch map output from map tasks via HTTP. Finally, the HDFS output files generated by the reduce tasks could be tagged with the task ID.
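
    In rough pseudocode (reusing the propagation helpers sketched in the post above; the job structure and function names here are illustrative, not Hadoop’s actual API), the threading might look like this:

        def run_job(splits: list, reports: list) -> int:
            md = start_task()                     # tag the job submission itself
            md = push_next(md, reports)           # client -> JobTracker hop
            partials = []
            for split in splits:
                task_md = push_next(md, reports)  # one map task per input split
                # HDFS reads carry the ID down a layer, e.g. in the RPC:
                push_down(task_md, reports)
                partials.append(sum(split))       # stand-in for the map function
            push_next(md, reports)                # reduce fetches map output
            return sum(partials)                  # stand-in for the reduce function

        reports = []
        print(run_job([[1, 2], [3, 4]], reports), len(reports))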

    This example also illustrates that the X-Trace model isn’t quite powerful enough to capture all the tracing one would like to do, in my opinion. Suppose that the output of your MapReduce job is only the first stage in a more complex data analysis pipeline: for instance, suppose that the HDFS output files of multiple MapReduce jobs are used as input to another MapReduce job, and the output of that job is then used to, say, insert some data into a database. A user might like to get an idea of all the downstream work that was done as a result of their initial MapReduce job, which really ought to include the eventual database insertion and the rest of the workflow. But X-Trace makes this difficult: you’d essentially need a higher layer on top of X-Trace to correlate the fact that all these MapReduce jobs are part of the same workflow.

    I think it would be interesting to explore extending X-Trace to enable datacenter-wide data provenance and auditing.
