Tuesday, February 14, 2012

A Systems Complaint

I think of myself as a person who works both in systems and theory.  And I've found that's often a challenging position.  I've been vocal before that I think theory conferences, to their detriment, downplay experimental work -- most algorithms papers don't actually implement algorithms.  Today it's time for a complaint about the systems culture.

I think many systems people appreciate good and useful theoretical results.  However, they have, I think, some unusual expectations about what is good and useful, in that it has to be something they can absorb in the space of a few pages*.  (Really, in the space of the paper you submit, but given that you have to actually produce some implementation results for a systems paper, you really only have a couple of pages to get your theory across.)  For example, I was fairly disturbed when I got back reviews for a paper (some time back now...) sent to a systems conference with significant theoretical content that included comments like (edited to protect the conference):
While the program chairs were eventually able to locate reviewers who knew enough about the previous work in the area to understand and verify your work, this indicates to me that the XXXX audience will have a difficult time understanding this paper.   (The average XXXX attendee will not be familiar with Celebrated20YearOldTheoryResult, much less the more recent work you build on.)
Another review stated:
It is reasonable (though a bit of a stretch) to assume a XXXX reader is familiar with MAJOR20+YEAROLDAREAOFTHEORY and Celebrated20YearOldTheoryResult. It is less reasonable to assume the reader is familiar with [major paper cited several times in our submission], and not at all reasonable to assume the reader is familiar with [other work under submission and on the arxiv], an unpublished manuscript by the authors. Yet I found that this paper was not sufficiently self-contained to understand it without referring to those prior works.
Of course, this was a challenging paper, and in fact in the paper we were quite clear that we were building on lots of previous work.  We tended to state things in the form of, "It is known that X holds [citation], and using that we do..."  So if they didn't believe us, they could go look it up (though, admittedly, it's not easy stuff).  We had expected that they would take our word on background material;  as the reviews state, they did verify we were accurate in our use of the prior work, and that we had clearly cited it. 

But apparently (and I didn't get the memo on this), "self-containedness" is considered important for systems conferences.  Interestingly, this goes against most work in theory, where you're often trying to build on a substantial chain of previous work.

I find this worrisome for work spanning the systems/theory divide, and I plan to fight this attitude on whatever systems PCs I am on.  I think it's great that systems people are interested in and like to use theory.  But if they want to apply something more complex than, say, a Bloom filter (a truly wonderful data structure, easily described completely in 2-3 paragraphs; see the short sketch below), the community might have to accept that, yes, to fully grok the interesting stuff in our paper, they'll either have to take our word for what's in the past papers, find and trust reviewers who can vouch for them, or go read those papers themselves.  Not everything can be adequately spelled out in complete detail starting from a blank slate in a 10-page paper.
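To make that point concrete, here is roughly what those 2-3 paragraphs boil down to: a minimal Bloom filter sketch in Python.  (The parameters and the double-hashing scheme are illustrative choices on my part, not a tuned implementation.)

    # A minimal Bloom filter: a bit array plus k hash functions.
    # Inserting an item sets k bits; a lookup reports "possibly present"
    # only if all k bits are set.  False positives are possible,
    # false negatives are not.
    import hashlib

    class BloomFilter:
        def __init__(self, num_bits=1024, num_hashes=5):
            self.m = num_bits
            self.k = num_hashes
            self.bits = [False] * num_bits

        def _positions(self, item):
            # Simulate k hash functions via double hashing:
            # position_i = (h1 + i*h2) mod m, a standard trick.
            digest = hashlib.sha256(item.encode()).digest()
            h1 = int.from_bytes(digest[:8], "big")
            h2 = int.from_bytes(digest[8:16], "big") | 1  # odd step
            return [(h1 + i * h2) % self.m for i in range(self.k)]

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = True

        def might_contain(self, item):
            return all(self.bits[pos] for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("example.com")
    print(bf.might_contain("example.com"))  # True
    print(bf.might_contain("other.org"))    # False, with high probability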

Perhaps one could argue that this was an isolated incident, and certainly I know there are (plenty of) people in systems who do not think this way.  But I do think it's representative of (a non-trivial part of) the culture.

I find the idea that a paper involving theory beyond what you, or even your community, have yet encountered is automatically out of scope to be a disturbing framework, and I find the expectation that I should be able to get you up to speed on a complex area in a few pages so that the paper is "self-contained" to be unreasonable.

*I've heard some people more disparagingly say that you have to make them think they've understood it, not actually understand it.

7 comments:

Mikko Särelä said...

As a researcher starting out in the networking and systems community, with some small background in the theory community, I would say that what the systems community most needs is people like you who understand both sides and who are capable of popularizing theory.

As a reviewer, I would very much appreciate it if a paper grounding itself in theory tried to explain the theory in ways that are understandable without extensive knowledge of the theory behind it.

When it comes to proofs from existing work that need to be understood to validate the theory in such a paper, I will gladly go and read the original work.

The big bonus is in finding a conceptual way of presenting the theoretical construct so that it becomes useful for a non-theory person. Perhaps we could have more new ideas floating around in the systems community than just the same old Bloom filter thingy. :)

Just my two cents.

Maurice said...

I got a rejection once that said "I hope I never live long enough to see a proof in an ISCA paper" (the proof was in an appendix).

Anonymous said...

A somewhat biased opinion: if the results are exciting (e.g., Google's PageRank, IBM's Watson system, Einstein's gravitational lensing, Strassen's matrix multiplication), people are more than willing to read up on the theory, however onerous it seems.

Adam Smith said...

Perhaps one could argue that this was an isolated incident, and certainly I know there are (plenty of) people in systems who do not think this way. But I do think it's representative of (a non-trivial part of) the culture.

I've received reviews like this from KDD and VLDB, although in both cases the paper was accepted in light of another, more open-minded and positive review.

So my experience supports both your points: the point of view is held by a nontrivial fraction of the community, but that fraction is far less than 1 (my admittedly small sample leads to an estimate of 0.5, plus or minus 0.49).

Michael Mitzenmacher said...

Anon #3: I'm afraid I'm not clear on your point. PageRank and Strassen's algorithm are things I teach to undergrads in half a lecture, so if your intention is to say that systems people will learn difficult things if they need to, I don't think those are particularly compelling examples.
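To show just how small PageRank's core is, here's essentially the whole thing as a few lines of power iteration in Python.  (The damping factor 0.85 and the toy link graph are the usual illustrative choices; dangling nodes are ignored for simplicity.)

    # Minimal PageRank via power iteration on a toy link graph.
    # links[page] is the list of pages that page links to.
    def pagerank(links, d=0.85, iterations=50):
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            # Everyone gets a baseline (1-d)/n, then each page
            # distributes d * its rank evenly over its outlinks.
            new_rank = {p: (1 - d) / n for p in pages}
            for p, outlinks in links.items():
                if outlinks:
                    share = d * rank[p] / len(outlinks)
                    for q in outlinks:
                        new_rank[q] += share
            rank = new_rank
        return rank

    toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
    print(pagerank(toy_web))  # C accumulates the most rank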

Anonymous said...

I am not anon3, but I can see what he is trying to suggest.

Basically, systems people are willing to read theory only if that theory produces very impressive experimental results. Now I am not arguing whether that's a good scientific attitude or not, but it is both true and sad (based on personal experience). Again, my sample size is not large either, so systems people should not take my comments to heart.

Systems people have a tendency to look for impressive final numbers (e.g., a 20% performance improvement). Then they "reverse review" the paper, trying to compare it with previous (experimental) approaches.
Unfortunately, I have seen systems people read papers looking to reject them rather than to accept them. Systems people take great pride in showing off "the best technique so far".

Now, if you mix complex theory into a systems paper, they have a hard time reviewing it. They do not want to check proofs and the like, and simply want to dismiss the paper as "too complex for so little gain".

I had similar problems with my analytical modeling and statistical analysis paper. We figured out a way around this by presenting the "non-experimental" part as a black box that yields many useful insights that were previously unknown and cannot be discovered by experiments alone. We also showed that the black box can be used to improve performance. And, by the way, here are the details of the black box and the experiments that prove its validity.

Guess what: the systems reviewers were not crybabies any more. I am not sure whether "your theory" can be packaged into a "black box", but something similar may work. At the end of the day, systems people just want to see pretty graphs and previously unknown insights. So your flagship result has to be impressive numbers and an elegant system design: use the theory to build that system, but don't try to sell the foundation (the theory); sell the roof-top bar (the results). It is sad, in my opinion, to "sell" in that fashion, but I would think that's the key to getting a systems paper in.

Again, these are just my views (they may sound too pessimistic and unhelpful), but then not everything in this world is glorious (an experimental result).

--Dev

Unknown said...

FWIW, I haven't personally run into this particular self-containedness problem when publishing papers with theory in systems conferences, though I don't doubt it sometimes happens.

Other problems have come up, though, such as in interpreting what a theoretical statement means in practice. For example, there is occasionally a lack of intuition about randomness and how likely certain events are to happen, and this can cause reviewers to raise concerns that are not actually real problems in practice.