Friday, August 15, 2008

SIGCOMM 2008, Part 1

About a year ago, I took a look at some SIGCOMM 2007 papers. I won't be attending SIGCOMM this week, unfortunately, so in the interest of self-education, I thought I'd look at some of the papers this year. (The site currently has the papers up. Wow, what a neat idea...)

Before getting into papers, I thought I'd mention that Don Towsley is being given the ACM SIGCOMM award. This is a great choice, and well deserved. And relevant to this site's audience, Don is, in my mind, primarily a theorist. Not a FOCS/STOC theorist to be sure, but a theorist nonetheless. As the award announcement states:
Towsley, who is Distinguished Professor of Computer Science, has made innovative and pioneering contributions in developing foundational modeling and analysis techniques that have enabled a better understanding of some of the most important aspects of today's computer networks, network protocols and networked applications.
Modeling, analysis, understanding... that's what theory is all about. It's people like Don who made networking an open and welcoming place for people like me. Thanks! And hooray for Don!

Now for papers. As before, I'll give brief synopses (at the level of the posted abstracts :) ), as I'm just looking at these papers on the fly. The network coding crowd has struck again with the MIXIT system, which throws together a bunch of ideas in a clever fashion to improve performance on wireless mesh networks. Recall that the basic working definition of network coding is that intermediate nodes do more than store and forward; they can process packets as they come through (creating encoded packet variations). Here, the basic unit is not taken to be a packet but a symbol (a small collection of bits), with symbols being packed into packets. This allows nodes to "take apart" packets: if a whole packet doesn't arrive error-free, a node can take the symbols that appear correct with high enough probability (based on information from the physical layer), re-package them (via linear combinations, a la "standard" network coding), and forward only those symbols. Because erroneous symbols might still get through, an end-to-end error-correcting rateless code is also used. All of this appears to improve throughput.
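For intuition, here's a minimal sketch (my own toy version, not from the paper) of the core primitive: a node forms random linear combinations of the symbols it trusts, and a receiver decodes once it has collected enough linearly independent combinations. I use GF(2), so a "combination" is just an XOR of a random subset of symbols; MIXIT itself works with more machinery (larger fields, physical-layer confidence hints), and all names and parameters below are illustrative.

```python
import random

def encode(symbols, rng):
    """Form one coded symbol: a random GF(2) linear combination of the
    symbols a node believes are clean, plus its coefficient vector."""
    coeffs = [rng.randint(0, 1) for _ in symbols]
    if not any(coeffs):                      # skip the useless all-zero row
        coeffs[rng.randrange(len(coeffs))] = 1
    coded = 0
    for c, s in zip(coeffs, symbols):
        if c:
            coded ^= s                       # XOR is addition in GF(2)
    return coeffs, coded

def decode(packets, k):
    """Gauss-Jordan elimination over GF(2): recovers the k original symbols
    once k linearly independent coded symbols have arrived, else None."""
    rows = [(list(c), v) for c, v in packets]
    for col in range(k):
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col]),
                     None)
        if pivot is None:
            return None                      # rank too low; keep listening
        rows[col], rows[pivot] = rows[pivot], rows[col]
        pc, pv = rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                c, v = rows[i]
                rows[i] = ([a ^ b for a, b in zip(c, pc)], v ^ pv)
    return [v for _, v in rows[:k]]

rng = random.Random(0)
original = [0x41, 0x7F, 0x03, 0xC8]          # four one-byte symbols
received = []
while True:                                  # rateless flavor: keep collecting
    received.append(encode(original, rng))   # coded symbols until decoding works
    result = decode(received, len(original))
    if result is not None:
        break
print(result == original)                    # True
```

The "keep collecting until decoding succeeds" loop is the rateless idea in miniature: the receiver doesn't care which coded symbols it gets, only that it eventually has enough independent ones.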

The paper seems interesting -- another proof-of-concept paper for network coding in wireless systems, which is where I suspect network coding will be able to make the most inroads over the next few years. I can't tell yet how practical this really seems (without a more detailed reading), but the idea of taking apart packets and sending only the good pieces in combination with multiple coding techniques seems quite nice.

As an aside, the pdf for this paper seems to contain a picture or something that crashes my poor Mac around the 8th or 9th page. Help!


Unknown said...

This paper by Sharon Goldberg, Shai Halevi, Aaron D. Jaggard, Vijay Ramachandran, and Rebecca N. Wright is likely to appeal to many theorists.

---Andy Curtis

Unknown said...

The MIXIT paper (which won't crash your Mac!) is available at

Though the focus of this paper is building a practical system, there is a lot of theory underpinning it which we could not discuss in detail due to lack of space and the systems focus. Error-correcting codes combined with network coding are a very exciting area (see recent work by Ralf Koetter and Frank R. Kschischang), with many interesting algorithmic problems. There is a load-balancing algorithm in the MIXIT system which is connected to theoretical work on backpressure routing for distributed optimal congestion control in networks. There are other algorithmic components in the system as well, for example, to minimize the network code description overhead (based on dynamic programming).

Sachin Katti

ps: Yeah, I am one of the network coding crowd!