I think many systems people appreciate good and useful theoretical results. However, I think they have some unusual expectations about what counts as good and useful: it has to be something they can absorb in the space of a few pages*. (Really, in the space of the paper you submit, but given that you have to actually produce some implementation results for a systems paper, you really only have a couple of pages to get your theory across.) For example, I was fairly disturbed when I got back reviews for a paper (some time back now...) sent to a systems conference with significant theoretical content. The reviews included comments like the following (edited to protect the conference):
While the program chairs were eventually able to locate reviewers who knew enough about the previous work in the area to understand and verify your work, this indicates to me that the XXXX audience will have a difficult time understanding this paper. (The average XXXX attendee will not be familiar with Celebrated20YearOldTheoryResult, much less the more recent work you build on.)

Another review stated:
It is reasonable (though a bit of a stretch) to assume a XXXX reader is familiar with MAJOR20+YEAROLDAREAOFTHEORY and Celebrated20YearOldTheoryResult. It is less reasonable to assume the reader is familiar with [major paper cited several times in our submission], and not at all reasonable to assume the reader is familiar with [other work under submission and on the arXiv], an unpublished manuscript by the authors. Yet I found that this paper was not sufficiently self-contained to understand it without referring to those prior works.

Of course, this was a challenging paper, and in fact in the paper we were quite clear that we were building on a great deal of previous work. We tended to state things in the form, "It is known that X holds [citation], and using that we do..." So if the reviewers didn't believe us, they could go look it up (though, admittedly, it's not easy material). We had expected that they would take our word on background material; as the reviews state, they did verify that we were accurate in our use of the prior work, and that we had clearly cited it.
But apparently (and I didn't get the memo on this), "self-containedness" is considered important at systems conferences. This, interestingly, runs against most work in theory, where you're often trying to build on a substantial chain of previous results.
I find this worrisome for work spanning the systems/theory divide, and I plan to fight this attitude on whatever systems PCs I am on. I think it's great that systems people are interested in and like to use theory. But if they want to apply something more complex than, say, a Bloom filter (a truly wonderful data structure, easily described completely in 2-3 paragraphs), the community might have to accept that, yes, to fully grok the interesting stuff in our paper, they'll either have to take our word for what's in the past papers, find and trust reviewers who can vouch for it, or go read those papers themselves. Not everything can be adequately spelled out in complete detail, starting from a blank slate, in a 10-page paper.
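To make the aside concrete: a Bloom filter really is small enough to sketch in a few lines. Here is a minimal Python sketch (the parameters m=1024 bits and k=3 hash functions, and the salted-SHA-256 hashing scheme, are illustrative choices of mine, not part of any particular paper's construction):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array and k hash functions.
    Membership queries may yield false positives, never false negatives."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per "bit", for simplicity

    def _positions(self, item):
        # Derive k array positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # True iff every position for this item is set; items that were
        # never added will almost always miss at least one set bit.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("alice")
bf.add("bob")
```

The whole construction fits in one screen, which is exactly why it travels so well into systems papers: the full correctness story (no false negatives, tunable false-positive rate) follows directly from the few lines above.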
Perhaps one could argue that this was an isolated incident, and certainly I know there are plenty of people in systems who do not think this way. But I do think it's representative of (a non-trivial part of) the culture.
I find disturbing the idea that a paper is automatically out of scope if it involves theory beyond what you, or even your community, have yet encountered; and I find unreasonable the expectation that I should be able to get you up to speed on a complex area in a few pages so that the paper is "self-contained".
*I've heard some people more disparagingly say that you have to make them think they've understood it, not actually understand it.