Saturday, April 18, 2009

SIGCOMM PC, Post-Analysis

1) Setting a new high bar for local arrangements: outside the conference room were a cappuccino/espresso machine and, for most of the time, a barista just for the PC.
2) There seemed to be widespread agreement that the quality of submissions this year was not as high as in previous years. (Just my luck...) Whether this reflects reality or we were all just very grumpy is open to interpretation. (I do not think we were grumpy.)
3) Because of this feeling about overall paper quality, I believe there will be fewer accepted papers this year than in the previous few years.
4) I was amused to see, when the papers were classified into groups, that there was a whole category labelled "theory". Theoretical SIGCOMM papers generally present new and interesting algorithms or data structures for applications, but still generally require an implementation demonstrating the effectiveness of the idea in practice (or a suitable simulation). Overall, theory papers seem to do reasonably well at SIGCOMM. George Varghese is the master of writing such papers, if you are looking for an example to emulate.
5) Indeed, there seemed to be some enthusiasm on the committee for more openness to theoretical work, which seemed in line with my open complaint to networking/systems people. There may be a push (I'll be pushing!) to aim for a "cool algorithm/data structure implementation tricks and ideas" session next year. (The hard part of this is writing down the right criteria for such papers... clear practical utility in a network setting being what I'd aim for.)
6) The 5-point scale did seem to have its problems. There was the usual problem that people did not apply the scale consistently. The other problem was that, because of the impression that there were few strong submissions, there were very few 5's and fewer 4's than usual, effectively collapsing the scale. I'm not sure the 5-point scale itself is to blame for these problems.
7) The 5-point scale was very effective for initially dismissing a lot of papers quickly.
8) Probably because there were fewer papers to deal with at the PC meeting, there was more intense discussion of papers overall, and in particular of controversial papers, at the SIGCOMM PC than at, say, the STOC PC. Also, more people on average were able (and more than willing) to give an opinion on any given paper.
9) While there was plenty of discussion, there were no real fights -- at least while I was in the room. Indeed, the sharpest discussion -- all about what is expected of a SIGCOMM paper -- was probably instigated by me, regarding a more theoretical paper; it centered on what exactly is expected in terms of evaluation of an interesting idea for SIGCOMM. Again, I'm hoping this discussion might lead to a special session of possibly shorter papers with useful algorithmic tricks for networking people... though we'll have to see if enough such papers actually exist!
10) The PC dinner involved, among other food, an entire roast boar.

Overall, a very interesting PC experience.

26 comments:

Anonymous said...

2) There seemed to be widespread agreement that the quality of submissions this year was not as high as in previous years.
7) The 5-point scale was very effective for initially dismissing a lot of papers quickly.
For SIGCOMM 2006, the PC tried something "new". I don't remember exactly what it was, but I think more papers were discarded based on the 5-point scale alone and fewer papers were discussed, resulting in a number of good papers falling through the cracks. The number of accepted papers remained about the same, but their quality dropped. Do you think this happened again this year?

Here's a separate thought: do you think having a global feedback system for reviewers would help in some way? Sure, people with accepted papers are more likely to leave good feedback, and vice versa for the other group. But with sufficient samples, and after adjustments/normalization, it might provide a good-enough system.
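To make the "adjustments/normalization" part concrete, here is a minimal sketch of one possible adjustment: compare each author's rating of a review against the average rating from authors with the same accept/reject outcome. This is purely my own hypothetical illustration (the names and the baseline-by-decision rule are made up), not any existing system:

from collections import defaultdict
from statistics import mean

def adjusted_feedback(entries):
    # Hypothetical sketch only, not a real SIGCOMM mechanism.
    # entries: list of (reviewer, decision, score) tuples, where decision
    # is "accept" or "reject" and score is the author's 1-5 rating of the
    # review they received.
    baseline_scores = defaultdict(list)
    for _reviewer, decision, score in entries:
        baseline_scores[decision].append(score)
    baseline = {d: mean(s) for d, s in baseline_scores.items()}

    deltas = defaultdict(list)
    for reviewer, decision, score in entries:
        # Compare each rating to the average rating from authors with the
        # same decision, so accepted-paper goodwill (and rejected-paper
        # grumpiness) is factored out.
        deltas[reviewer].append(score - baseline[decision])
    return {r: mean(ds) for r, ds in deltas.items()}

# Example: r1 scores well below even the rejected-author baseline, so r1
# stands out even though rejected authors are grumpier on average.
print(adjusted_feedback([
    ("r1", "reject", 1), ("r1", "reject", 1),
    ("r2", "reject", 3), ("r2", "accept", 5),
    ("r3", "reject", 4), ("r3", "accept", 4),
]))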

What I'm hoping for is an indirect (and politically-correct) way of combating the impression that papers authored by the "Sigcomm mafia" are more likely to be accepted. If the feedback system works, it means that we get impartial PCs, and hence the mafia problem should not arise.

Unknown said...

Anon: There was actually a larger-than-average number of papers at SIGCOMM 2006.

I like the idea of short algorithmic papers, as long as "short" does not become incorrectly associated with "less interesting".

Anonymous said...

There seemed to be widespread agreement that the quality of submissions this year was not as high as in previous years.

That is strange, since there was an effort to renew the PC, so that the history from the year before is not as widely known. I would guess that all old timers are always glad to inform the newbies that "things ain't what they used to be."

Michael Mitzenmacher said...

Anon 1:
I can't speak for 2006. Here, each paper got at least 2 reviews. Some fraction were rejected after just 2 reviews; my impression, however, was that in this case (and indeed in all cases) the chairs were very thorough and would only reject a paper if there were clearly laid-out reasons in the reviews -- not based on the scores alone. Indeed, I realize I didn't say this in my original post, but I thought Dina and Luigi were remarkable in their thoroughness as chairs, watching over the various discussions.

I'm not sure how a feedback system for reviewers would help. If you're labelled as a good reviewer, then you get asked to do even more PCs, which is hardly what anyone wants...

I personally saw no signs of Mafia-dom preference in the meeting. That doesn't mean there weren't any, but perhaps it can be taken as a signal.

Michael Mitzenmacher said...

Brighten --

I'd argue such papers would be "more interesting". :)

The focus of such papers would, I imagine, be less on "building an entire system" than on "engineering ideas everyone should be aware of". The sort of thing you'd want to teach to students in a class, or might fit in a network algorithmics book. Things like better hash table designs, which I've been known to work on.
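To give a concrete sense of the flavor I mean, here's a toy sketch of the "power of two choices" insertion rule, the kind of classroom-sized trick such a session could cover. Illustrative code only, not from any particular paper:

class TwoChoiceTable:
    def __init__(self, num_buckets):
        self.buckets = [[] for _ in range(num_buckets)]

    def _candidates(self, key):
        # Two (roughly independent) hash locations via salting; a real
        # implementation would use a proper hash family.
        n = len(self.buckets)
        return hash(("salt1", key)) % n, hash(("salt2", key)) % n

    def insert(self, key):
        # The trick: put the key in whichever of its two candidate buckets
        # is currently less loaded. With n keys in n buckets this drops the
        # maximum load from about log n / log log n (one choice) to about
        # log log n.
        h1, h2 = self._candidates(key)
        target = h1 if len(self.buckets[h1]) <= len(self.buckets[h2]) else h2
        self.buckets[target].append(key)

    def lookup(self, key):
        # A key can only live in one of its two candidate buckets, so a
        # lookup checks at most two short lists.
        h1, h2 = self._candidates(key)
        return key in self.buckets[h1] or key in self.buckets[h2]

table = TwoChoiceTable(1024)
for i in range(1000):
    table.insert("flow-%d" % i)
print(table.lookup("flow-42"))   # True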

At the meeting, several PC members suggested they'd be very interested in such papers. But there was also a clear (majority) position stated that such papers didn't fit the current SIGCOMM standard, and that while it might be appropriate to change the standard in some way for next year (including them as short papers, for example), it would not be appropriate to change standards on the fly. I actually agree with this position; it will be interesting to see if something appropriate can be arranged for next year.

Michael Mitzenmacher said...

Anon 3:

The opinion on paper quality seemed to be shared by the "newbies" as well, who also widely avoided giving scores of 4 and 5, and who presumably understood the quality bar from published SIGCOMM papers.

In short, I don't think it was old-timer-itis. I'm not that old.

Anonymous said...

Thanks for posting on the review process; it goes a long way toward making it more transparent :)

I'm not sure how a feedback system for reviewers would help. If you're labelled as a good reviewer, then you get asked to do even more PCs, which is hardly what anyone wants...
I think it would help weed out those who give bad reviews because of personal agendas.

Also, I would imagine reviewers have some freedom in picking the conferences in which they wish to participate as PC members. So the process might be: pick the top (say) 10% of the reviewer pool and ask them if they want to serve on the PC. Those with prior commitments can decline, and then the next (say) 2% of the pool is asked. Of course, those who bite off more than they can chew are worthy of Darwin Awards :)

Anonymous said...

Transparent? This is just a means of justifying the fact that SIGCOMM is an "old boys' club" conference. What's the best way to get tenure? Organize your own conference and claim it's the hardest conference to get into. Oldest trick in academia. Not going for tenure? I'm sure there are many PC members' students who need some sort of a 'boost'.

Anonymous said...

5) Indeed, there seemed to be some enthusiasm on the committee for more openness to theoretical work, which seemed in line with my open complaint to networking/systems people. There may be a push (I'll be pushing!) to aim for a "cool algorithm/data structure implementation tricks and ideas" session next year. (The hard part of this is writing down the right criteria for such papers... clear practical utility in a network setting being what I'd aim for.)

I actually bumped into your previous post a week ago, and the funny thing is that I have exactly the same problem. I have an interesting (imho :) ) piece of work on a data structure/algorithm but no "real application", and I have a hard time finding where and how to publish it. So I am happy to read your thoughts about it. But I am not sure I would agree with the criteria you bring up here.

Think of it: if the Wright brothers had been told that their invention must show clear practical utility for mass transportation, they would never have invented the airplane... I would agree that the networking community is not the best place for innovations which lack any direct relation to networking, but what is?

Anonymous said...

Whoa, get this: a reviewer left the comment, "I just can't convince myself it works." No further elaboration is given: no careful analysis of the methodology in the paper, no refutation of the results, nothing. And the reviewer gives a score of 1. I guess he or she really wants to spend more time eating roast boar, or drinking coffee.

How did Sigcomm fall this low?

Anonymous said...

I cannot agree more. Based on what's said in the reviews, the average reviewer quality at this year's Sigcomm was pathetic at best. One hopes that this year was just a blip and that they are more careful in selecting the PC next year.

Seriously, someone should take charge and keep Sigcomm from falling into oblivion.

Michael Mitzenmacher said...

For those with complaints about the reviewing, I actually think the PC, the chairs, and the SIGCOMM executive committee would appreciate feedback. On the other hand, I feel the comments thus far have not been particularly constructive. It would be, I think, natural to dismiss them as sour grapes.

If people want to post comments (anonymous or not) on their impression of their reviews, please feel free to do so. But it would help if you could make them more constructive. Things like, "In previous years, reviews seemed longer and more detailed", or "The paper summaries did not properly reflect the contribution of the work, which was clearly explained in the Introduction and Conclusion" are helpful and informative. The stuff I've seen so far... not so much.

Anonymous said...

Just to illustrate for Michael what systems research actually means ...

I am actually going to try to tell a story.

SIGCOMM 2008, best paper award presentation.

To summarize what the paper is about: in wireless networking you have collisions (usually all the information sent during a collision is lost), and by doing some clever signal processing the authors manage to recover all the information in spite of the collisions.

Ok, the presentation is finished, Stefan Savage steps to the microphone and says something like: "Ok, you did all this and it's great, but you only did this for 1 Mbps wireless, which is not used much at all; sure, collisions are important at this speed, but wouldn't you think that at higher and more widely used speeds collisions aren't such a big problem?" And the presenter attempts a response, but after a while his adviser steps to the microphone and basically bails him out.

The first point?

Stefan's point was a pretty good one, and if he had made it during the PC meeting and insisted upon it, it would have just killed the paper. Why didn't he make it then?

The second point?

That's what we usually get in all of our reviews: factual things like this... And I am not saying that it wasn't a good paper. It was. It's just that most systems research is like this; it has clear advantages and clear drawbacks. So the line is pretty murky, and it depends a lot upon some people at a meeting pointing out the advantages in a paper (which might get you in) versus the disadvantages (which will definitely get you rejected).

Anonymous said...

As for my experience, I had 4 reviewers (out of 5) who rated their own confidence as 2. The fifth reviewer's confidence was 3. Looks a bit low, doesn't it?

Besides, the TPC meeting summary reported a potential flaw which is actually discussed in the paper.
So my feedback is pretty clear: the reviewers had low confidence in the field.

Michael Mitzenmacher said...

Anon 14:

Right, so your point seems to be, "the PC should aim to have multiple high confidence reviews for every paper."

Shall I point out that you seem to be an exceptional case? I suppose this is revealing information, but every paper I was assigned as a 1st or 2nd round reviewer had at least 2 reviewers with a confidence of 3. Looking over a random sampling of other papers, this seems to be true with very rare exceptions.

Do you think having a (self-labelled) more experienced reviewer would have changed the decision on your paper? You did have 5 reviews, and your paper was discussed at the PC meeting, where other PC members had the chance to look at the paper as well. It would seem your paper received a non-trivial amount of attention.

To be clear, the constructive criticism derived from your experience -- aim to have high-confidence reviewers -- is useful, and is the sort of comment I think is very worthwhile. I'm just pointing out that I believe this was already a goal of the chairs and PC, and your case seems to be a rare exception. Moreover, given the attention your paper received, it's hard for me to understand your specific case as a sign of global, systemic failure.

Anonymous said...

As a theory researcher, I always thought our community had a lot of bickering and infighting. But looking at these posts, it looks like the networking community is worse. Almost every post seems to have something against the reviews or work of their peers, and the posters seem to be pretty vocal in denouncing the quality of their peer reviewers. If they cannot respect each other, why should people outside their community respect them?

My guess would be that all these posts are sour grapes. Otherwise it spells doom for this community. Thankfully, we in the theory community are still a lot more coherent and civil.

Anonymous said...

Dismissing comments right away as a sour-grapes reaction, rather than hearing both sides of the story -- now, that isn't very open-minded, is it? ;)

When so many complaints are made, don't you think maybe, just maybe, there's really something bad going on?

And thus far, there's no official way for the authors to give feedback. We don't have a reviewer feedback system, emails to the PC chairs get sent to /dev/null. Those who belong to the inner clique, hey, they'll just stay in there forever. It's like... tenure in the Sigcomm committee.

On reviewer familiarity with an area: if I'm a PC member, what's to stop me from claiming I'm an expert in a particular area?

Sure, much more constructive, reviewer-specific feedback could be given. But we're still civil; we don't want to voice that sort of opinion in public, not in blogs, not saying precisely why a particular reviewer is not Doing His Job. :) But then again, there's no other avenue for feedback, is there?

Michael Mitzenmacher said...

Anon 17:

Dismissing comments right away as a sour-grapes reaction, rather than hearing both sides of the story -- now, that isn't very open-minded, is it? ;)

1) I didn't dismiss the comments as sour grapes. Please read what I actually wrote, then try again.

When so many complaints are made, don't you think maybe, just maybe, there's really something bad going on?

2) A few complaints in response to my post don't seem like "so many complaints" to me. So far, I've heard more complaints about the quality of the submissions (from other PC members) than I have heard about the quality of the reviews.

But I am certainly curious if there are ways the process can be improved, and if that information can be passed on appropriately.

It's like... tenure in the Sigcomm committee.

3) Since, from what I understand, there was a specific effort this year to have more "newbies" than usual on the PC -- indeed, some people have complained that that was a problem -- this seems like an incongruous statement.

But then again, there's no other avenue for feedback, is there?

I can perfectly understand people thinking this blog is not a suitable place for these sorts of comments or that sort of discussion. I suggest you e-mail the PC chairs, or the SIGCOMM executive committee, with your concerns; if you'd like to do so anonymously, pass them through a colleague. I'd like to believe the SIGCOMM higher-ups are willing to listen to concrete, well-thought-out criticism.

Luigi said...

Let me jump into the discussion because perhaps I have a bit more information. Sorry if my post is a bit long.

1. As Michael said, we aren't dismissing comments or feedback. But so far I have received only one email complaining about two specific reviews (as I replied to the author, I agree on one, I disagree on the other, and I think that even removing those two reviews the final evaluation of the paper would have been the same).
Feel free to email me if you have specific complaints.

2. If you want to give feedback, you need to be detailed (mentioning review numbers or other info that lets us check the facts), or I am afraid we cannot follow up.
Take as an example the poster complaining about "I just can't convince myself it works.": I cannot find this sentence, or part of it, in the reviews, and a sentence out of context tells me nothing.
On the contrary, another poster complained about four out of five reviews having low confidence; I can probably track down which paper that is, and I am reviewing the reviews to see if we did something wrong. I am sorry, you are right that in an ideal world you have many expert reviewers; however, note that even in this case the reviewers, despite their low confidence, did raise valid points about the evaluation part (I can be more specific if you guys contact me).

3. I do think a rebuttal phase would be useful to protect everyone from gross misunderstandings (which hopefully only hit a handful of papers, and this is pretty much unavoidable).

The reason we did not plan for rebuttals is that to properly handle them you'd need another couple of weeks, between the time for authors to reply and the time for reviewers (who have other business to take care of as well) to respond. There is a compromise among short turnaround, detailed reviews, and the size of the conference (300 submissions is pretty much borderline for what a single-layer committee can handle; more than that and you have to partition the load, and so introduce another kind of unfairness).

cheers
luigi rizzo

Anonymous said...

I have a question about the timestamps of the reviews.

My understanding is that there were a few phases involved in the review process.

Were the reviewers allowed to modify their reviews after the different phases, with the timestamps reflecting these modifications, or were reviews, and hence timestamps, frozen after each phase?

Luigi said...

Reviews were always open and could be updated by reviewers at all times, same as scores and comments. Typically this was done during the various online discussions and at the PC meeting. The timestamps reflect the last update. The whole process is described in some detail in the notification email sent to all authors.

luigi

Anonymous said...

Luigi,

Many thanks for your inputs.

Suppose we've got a paper that should have gotten in, but encountered poor reviewers. The reviewers could be well-respected members of the community but, for certain reasons, gave it a low score.

Granted that there's not enough time (or incentive/disincentive) for the reviewers to re-review this paper, it seems that there's no way the error can be corrected for this conference. But what do you do to prevent it from happening again at the next one?

Anonymous said...

Anon #n:
I am actually going to try to tell a story.

SIGCOMM 2008, best paper award presentation.
Here's another story, from a previous Sigcomm.

There was this paper, also on wireless protocols. It had two immediately-noticeable flaws:

1. It looked like it was put together a couple of hours before the deadline. As most of us know, if there's any slight ambiguity in our papers, it will be used to reject our work. I was one of the reviewers, and it read like an undergrad lab tech report.

2. As most people who dabble in wireless protocols know, it's important that a new protocol works with TCP. Even folks without real, large testbeds do their best with NS-2 simulations. Everyone tries. This paper's solution (opportunistic bulk transfer) will cause packet reordering, which kills TCP performance (reordered segments trigger duplicate ACKs and spurious fast retransmits). How do the authors counter this? IIRC, they say they use a proxy, at the wireless network exit point, to reconstruct the order. How did that work? Nobody knows; it's not in the submitted paper, nor in the final camera-ready copy.

But, this paper went on to win best paper award anyway. In Sigcomm. Strange world, eh?

Luigi said...

Of course we cannot take countermeasures if we don't know the specific problem in the first place. The way to address the hypothetical problem you mention is: 1. find the specific paper/review(s) you have in mind; 2. review the reviews; 3. verify that the complaint is justified; 4. verify that this was not just a glitch but a pattern common to most of the 10-20 reviews from the same person; 5. report the issue to next year's chairs so they can exclude the persons involved.
Of course this is all hypothetical because you give us no actual case, and I think that if we had a case that passed check #4, it would already have been noticed by other reviewers.

But I will repeat (for the last time) my comment: if you have an actual case, please contact us with the details and we will be happy to discuss it; otherwise don't expect any reply.

Anonymous said...

I have been on 3 Sigcomm TPCs (1997, 2008, 2009) and on many others over the last nearly two decades. My experience with all top-tier TPCs is the same: it is a bunch of people, all of whom have loads of other work, volunteering a couple of hundred hours of their time to pass as fair a judgment as possible on their peers' work. Sure, the end results may not be to everyone's liking, and sure, there are problems, but the TPC is trying its darndest to be fair, to set an even bar, and to avoid conflicts.

I am not naive: there are certainly some TPC members who use the inherent trust relationships to further their own interests. But, I think these folks are in a decided minority. My experience is that most mistakes that happen are unintentional, and when pointed out, are immediately corrected.

Here is an open invitation: Ernst Biersack and I are co-chairing the CoNext TPC this year and we will be putting together the rules for the TPC in the coming months. Tell us what you think we should do to make the TPC process better and we will be happy to incorporate it.

keshav

PS: [blatant self-promotion] for a recent paper on inherent conflicts in the paper review process, see a paper I co-authored (with Crowcroft and McKeown) in the Jan 2009 CACM.

Anonymous said...

Keshav,

Thanks for reaching out and soliciting feedback.

Firstly, I think it would be great if a formal channel were set up, rather than us blogging on Michael's site :) I don't expect anyone else who has issues with this year's Sigcomm reviews to voice their concerns here.

As Luigi mentioned, some sort of history is needed to ensure that what happened before (bad reviewers) doesn't happen in the future. It would be great if such a system existed; otherwise the effect lasts only for a year.

But with the current system, I think the root of the issue with PCs is that members have no incentive not to let personal bias get in the way. What's stopping them from saying, "Hey, if I reject this other similar paper, mine / my good friend's paper will have a higher chance of getting in"? Given this, why wouldn't a group of people collude, each spending more hours reviewing, when in the end it pays off because their papers have a better chance of getting in?