I am sure it sometimes seems on this blog that I'm "critical" of the theory community. I am! Because constructive criticism is, I think, a useful thing.
Today, I unleash some criticism of the networking community. This complaint is based on my own experience with my papers, as well as my experience on the other side (as a PC member), observing how, for example, data structure papers are regarded when submitted to these communities. Mind you, this criticism isn't universal -- several people seem willing to look at and accept more theoretical papers in these communities. But not everyone...
(Still, keep in mind, I'm taking an extreme point of view here, to make a point, which I understand may inspire some interesting comments.)
When a paper comes into a systems/networking conference where the main contribution is a general data structure (in my papers, this has often been a hash table construction) that naturally applies to various networking problems (e.g., hash tables are commonly used in MANY router-level applications), and the main point of the paper is how to make these data structures work better for those applications (e.g., reducing accesses to slow memory from say 2 per operation to 1, or examining pin count/hardware parallelization tradeoffs), THIS IS A NETWORKING (OR SYSTEMS) PAPER. I often hear the argument "This is a general data structure, it belongs in a theory conference," and this is just wrong, for multiple reasons.
1) Please don't try to make generality a negative. I know that in systems papers you are accustomed to seeing detailed results on a single application (and when a data structure paper also includes that, it can be a very good thing). But generality itself is a good thing, not a bad thing. (For SIGCOMM this year, one of the review fields is "longevity" -- sometimes these papers aim for a long expected lifetime, rather than focusing on the short-term impact of a particular application.)
2) In the same vein, don't dismiss a paper just because its evaluation judges the data structure on raw performance (time per insertion/deletion/lookup in a hash table) and/or on simulated data. Yes, I know, by measuring performance at the data structure level instead of the application level we haven't shown it is better for "any particular" application -- but we have tried instead to show it's better for a wide range of present and future applications! (If we haven't demonstrated this effectively, then it makes more sense to reject.) Similarly, in many settings (e.g., hashing), all data is effectively artificial. You may have to pick a good enough hash function for your data to look random (see e.g. Mitzenmacher/Vadhan), but after that it doesn't matter what your initial data was. (For a small illustrative sketch of this kind of data-structure-level evaluation, see the code after this list.)
The first two points can also be summarized the following way: your conference has page limits. We've decided to focus on using the pages available to show off the general, widely encompassing benefits of our approach; don't dismiss us just because we haven't dug into specific implementation details for a particular application (which can take pages to do, and months to program/gather data on, and really doesn't add nearly as much to the paper as what we've put in).
3) This type of work doesn't go to a theory conference: often, in such papers, there is little of "theoretical" interest -- these are engineering papers, not papers with new theoretical techniques or technically challenging proofs (when we have proofs in this type of paper at all). Theory people will wonder why we care about these details of the hardware/software/data model -- those details are only interesting in the context of your applications.
4) Indeed, the point of writing such a paper is that we are looking (or trying to look) precisely at the application and systems issues. This paper is meant for you! In fact, sometimes on the theory side it seems the only way we can get you to notice a new and powerful technique is to write papers for your community, so please try to let us in -- it will speed up adoption of these new, powerful techniques, helping your community overall. (My examples are Bloom filters, d-left hashing, etc.)
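(For readers who haven't run into the examples above: here is a minimal, illustrative Bloom filter sketch in Python, together with the kind of raw-performance microbenchmark on simulated keys that point 2 describes. The parameters, key format, and hash construction are my own choices, purely for illustration -- not taken from any particular paper.)

import hashlib
import random
import time

class BloomFilter:
    # Minimal Bloom filter: k hash functions over an m-bit array.
    # False positives are possible; false negatives are not.
    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _indexes(self, key):
        # Derive k indexes from two base hashes using the standard
        # double-hashing trick: h_i(x) = h1(x) + i*h2(x) mod m.
        digest = hashlib.sha256(key.encode()).digest()
        h1 = int.from_bytes(digest[:8], "little")
        h2 = int.from_bytes(digest[8:16], "little") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, key):
        for idx in self._indexes(key):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, key):
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(key))

# Raw-performance evaluation on purely synthetic keys, in the spirit of
# point 2: time per insertion and the measured false positive rate,
# with no particular application in sight.
n = 100_000
bf = BloomFilter(m_bits=10 * n, k_hashes=7)  # ~10 bits per element, 7 hashes
keys = ["flow-%x" % random.getrandbits(64) for _ in range(n)]

start = time.perf_counter()
for key in keys:
    bf.add(key)
print("insert: %.2f us/op" % ((time.perf_counter() - start) / n * 1e6))

probes = ["other-%x" % random.getrandbits(64) for _ in range(n)]
false_pos = sum(1 for p in probes if p in bf)
print("false positive rate: %.4f" % (false_pos / n))

A benchmark like this says nothing about "any particular" application; the point is that the structure's behavior on random-looking keys is exactly what carries over to a wide range of them.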
One worry seems to be that you'll accept a paper that turns out not to have the payoff you expected. That will happen! (By the way, it's also happening with all the other, less theoretical papers you're accepting now...) But you need a pipeline of these ideas coming in -- who knows where the next basic data structure idea with widespread applications (the next Bloom filter, say) will come from! I worry you're cutting off your pipeline.
I understand that these papers will face the same high bar everyone else does to get into, say, SIGCOMM; that's not my complaint. But in multiple settings I've seen such papers essentially dismissed out of hand, which seems unfortunate. I often hear (and even promote!) the idea that theory people could be working on more practical things. When we do try to meet you halfway, leave the door open a little for us...
Thursday, March 05, 2009
10 comments:
Bravo! I'll generalize further: the broader systems communities frequently have elaborate dogma about both the topics that are "in scope" and the particulars of "how" one describes a contribution. Some of this is inherent -- communities build culture, and as a submitter you have some responsibility to grok that culture to become part of that community (e.g., there are various unintuitive "sacred cow" issues in various communities: the evil of packet reordering at SIGCOMM, or the preeminence of privacy interests in the security community).
However, in my opinion there is far too much orthodoxy present -- and, in particular, a bias in service to the dominant commercial mindset (i.e., "this idea isn't practically deployable because..."). In addition to discouraging bolder papers, it also puts the community at risk of decreasing relevance.
There was a period when the core systems community (i.e., SOSP/OSDI) was moving towards defining itself out of existence (in the mid-to-late 90s there was a constituency arguing that systems papers needed to be about kernels or file systems or something similarly gung-ho... just when such topics were being overshadowed by the impact of the Internet).
That said, one point I'll push back on is the need to effectively argue for or demonstrate the value in a real application setting. This issue is deep in the DNA of systems people -- the need to understand how a real problem is solved. If it's already been established that hash performance is key to applications X, Y, and Z, then life is easy and you can just cite prior work. If it's not clear, then I think that -- for a systems conference -- it is your responsibility to convince the reviewer that this is so.
That said, undue orthodoxy is still a problem.
Another pushback on real application settings: far too often, "principled systems building" is a euphemism for premature optimization.
Stefan --
From the size and frequency of your posts, I think it's time to start your own blog ;)
-Ranjit.
Hi Ranjit,
Too much time in airports, I fear. That said, I think it's far, far better to hang out on other people's blogs.... no rent.
Having served on a thesis committee for one of Michael's students, I can understand his frustration at someone taking a "systems" view of "theory" work. (In this case, I was the source of the frustration.) The thesis in question had a very hand-wavy description of how the particular data structure could be used in all kinds of networking settings, but did not link the technique to a particular use case to demonstrate why the idea was so compelling.
In this case, there was no page limit (it's a dissertation, after all), so I pushed pretty hard to get a more concrete use scenario. And I got it: the final thesis did a nice job of making that connection.
I can imagine a SIGCOMM or NSDI reviewer having the same problem I did. It's great to do things that are general, but Stefan is right that it is deeply in our DNA to look at things from a "real problem" perspective.
Hey Matt. As we've discussed before, I totally agree that it was appropriate in the setting of a thesis to ask for a more concrete use scenario. As you point out, there's no page limit, and moreover one can expect a wide range of readership (to the extent people read theses!), and so one should try to give a clearer picture of the use story. (And it ended up helping us greatly that you made the student do this.)
For a conference (or page-limited journal -- like IEEE/ACM Transactions on Networking) I expect people to understand that you can't always have EVERYTHING in the paper. I admit I also expect the readers to be more knowledgeable about fundamentals -- I find it troublesome to think that I need to explain to networking folks why a better hash table (one that tries to take into account standard hardware issues) is something that fits squarely into a networking publication! Contrary to what Stefan says, I find that citing half a dozen papers that use hash tables doesn't seem to satisfy some reviewers; indeed, some reviewers aren't happy even if you list and describe standard applications, unless you specifically test your structure in a specific setting. (It also doesn't help, I suppose, that things like how Cisco uses hash tables in their routers are generally proprietary and not documented in a citeable way.)
"Contrary to what Stefan says, I find that citing half a dozen papers that use hash tables doesn't seem to satisfy some reviewers..."
To be clear, I wasn't suggesting that this would be sufficient. Rather, if there are papers demonstrating that the properties of hashing you address (e.g., space or time performance in the context of some input environment) are dominant or even large parts of a well-known systems or networking problem, then I think citation can get you a long way. If many systems simply use hashing and don't specifically make that point about hashing's importance, then the onus tends to fall on you.
Think about this from the standpoint of the systems reviewer. They actually may not know how important hashing is to application X. However, over time they do end up reading lots of papers that optimize components of systems that, considered holistically, really didn't matter that much. There are canonical examples of this in caching, where countless papers optimized eviction or replacement in settings where the most important component was compulsory misses (heavy tails). Now sure, you know to pick problems in which the data structures you are optimizing are the ones that matter, but the reviewer is frequently not that sophisticated unless the bottleneck has already been well established. For example, you wouldn't have a problem arguing why space-time optimization for DFA/NFA representations of classifiers is important for network security, because there's a significant literature demonstrating that already. If you wanted to argue that your modified splay tree is 5x faster than Sleator and Tarjan's, then it probably wouldn't suffice to say that splay trees are used in Windows NT... people would want to be convinced that the performance mattered.
That said, sometimes this can be done by fiat (i.e., a personal communication with someone at Cisco saying that this matters), and I personally think it's fine to do a measurement on an existing system to demonstrate the importance and then just do microbenchmarks on your solution (although I'll admit that the systems community certainly has its hardliners who want the "full monty," as you point out :-( )
Michael and Matt,
It's no secret that I'm the former student in question. If I recall correctly, the issue with my thesis was that Michael and I both originally thought that the connections to concrete applications were basically obvious given a few citations. Matt politely disagreed, and we eventually concluded that a brief exposition was in order. I did that, and the whole thing worked out nicely. (And that review became the foundation for a DIMACS book chapter that Michael plugged here a few months ago.)
You guys make it sound like you had to pull teeth from me :-) It was just a simple misunderstanding...
As for getting this work accepted by the networking community, I agree with Michael's frustration. In particular, we've encountered a lot of bizarre pushback (e.g., a conference paper is accepted to a high-profile networking conference with high reviews, but the journal version is initially rejected out of hand as out-of-scope; adding some more introductory material in a revision passes the "out-of-scope" test, but despite persistent effort, the journal has incredible trouble getting reviewers to even read the paper). Arg.
Sorry, Adam -- I don't think I (or Matt) meant to make it sound like pulling teeth -- it just wasn't in the first draft, and Matt did push (politely) to say it really should be in there. So you did it. The end, no exciting arguments or drama or anything. :)
And we agree it turned out great since it really was useful for that DIMACS chapter (which everyone should read!).
Great!! Great!!! Great!! The last MobiCom was reduced to an implementation-only conference. Finally, a post about this.