Tuesday, March 16, 2010

More on SIGCOMM

The first round of SIGCOMM is pretty much done. (As usual, many reviews are still out, though the deadline has passed.) I had mentioned earlier that my first-round papers, in general, seemed pretty terrible. My colleagues agreed. This year the rating scale was 1-10 (which I like less than the standard 5-point scale from previous years), and I had five papers that didn't get a score higher than a 3. (A score of 3 is "Reject"; scores of 1 and 2 are below reject. A score of 5, for comparison, is still only a borderline reject.) In some cases, I gave a 3 and it was the high score. I did (eventually) read a couple of good papers that may well be accepted. Hopefully, I'll see better papers in round 2.

Matt Welsh, perhaps at least partially inspired by being on the SIGCOMM PC as well, has suggested an approach for dealing with the large quantity of awful submissions by charging authors to submit;  Suresh disagrees.

Interestingly, the scores (and reviews) on my papers were generally consistent across the board, with a rare exception or two. Since there's always some overenthusiastic anonymous commenter who thinks it's important to call SIGCOMM an insider's club whenever I blog about it, I'll repeat that I'm not aware of belonging to any such club, and once again, the reviews I see from others not only make sense, they match my own opinions, which I view as independent, to a striking degree.

18 comments:

Anonymous said...

Why do you think SIGCOMM attracts so much garbage compared to theory conferences?

cak said...

Here is a coupon-based system that gives incentive for writing both good submissions and good reviews:

Every time you review a paper, you earn 1 coupon. Every time you submit a paper, you lose 1 coupon.

The coupons could be valid across all conf/journals, or for groups of conf/journals separately, or for each conf/journal separately. There could be weights: the number of coupons could be different for reviews vs. submissions, for one conf/journal vs. another, etc. There could be credit: OK, you don't have the necessary coupon to submit right now, but you must earn it back within a year.

No money involved. Equal opportunity for everyone to participate.
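
To make the bookkeeping concrete, here is a minimal Python sketch of the coupon ledger described above; the names (CouponLedger, record_review, try_submit) and the credit_limit parameter are purely illustrative assumptions, not part of any existing submission system.

```python
# Toy coupon ledger: +1 coupon per review, -1 per submission, with an
# optional "credit line" so an author may dip slightly below zero and
# earn the coupon back later.

class CouponLedger:
    def __init__(self, credit_limit=1):
        self.balance = {}               # author -> coupon balance
        self.credit_limit = credit_limit

    def record_review(self, author):
        """Earn one coupon for completing a review."""
        self.balance[author] = self.balance.get(author, 0) + 1

    def try_submit(self, author):
        """Spend one coupon on a submission; allow limited credit."""
        current = self.balance.get(author, 0)
        if current - 1 < -self.credit_limit:
            return False                # out of coupons and out of credit
        self.balance[author] = current - 1
        return True


ledger = CouponLedger(credit_limit=1)
ledger.record_review("alice")          # Alice reviews a paper, earns a coupon
print(ledger.try_submit("alice"))      # True: she spends it on a submission
print(ledger.try_submit("bob"))        # True: Bob submits on credit
print(ledger.try_submit("bob"))        # False: Bob has exhausted his credit
```

Weights across venues, or different rates for reviews vs. submissions, would just change the +1/-1 constants.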

Anonymous said...

Or actual money if you don't have enough coupons.

Anonymous said...

"... belonging to any such club..." Good deflection Mike. Then again, looking at your SIGCOMM track record, I'm inclined to agree. In other words, you're more of a one squirt fountain. Hence, I'm barking up the wrong tree.

- enthusiastic anon.

Stefan Savage said...

I may be alone in this, but I find it a bit unseemly to discuss the review process of a conference while reviewing is still in progress... especially so when the focus is on how bad the papers are. This doesn't seem helpful.

Michael Mitzenmacher said...

Stefan,

I certainly think you raise a valid point, and as you know I value your opinion. However, I am not clear on what your reasoning is for finding it unseemly (if you'd prefer to talk further, I'd certainly be happy to discuss it via e-mail instead).

Your point seems to be, "This doesn't seem helpful." Let me disagree, for the sake of argument if nothing else; generally, the feedback I've received in the past is that blogging about my experiences as a PC member is both interesting and helpful to the communities involved, as certain issues get aired for discussion that often don't get discussed in open forums, and it gives people who haven't served on PCs (especially graduate students) some insight into the process.

But perhaps the issue is more specifically about the timing than the content. I don't believe I've said anything in these posts that I wouldn't have said after the fact. Specifically, I avoid giving information that would reveal other reviewers, and believe I am avoiding giving sufficient information to even reveal myself as a reviewer for a specific paper. I would appreciate you letting me know (again, privately) if the content is an issue in any way; I aim to respect fully the double-blind process.

I think providing the posts now, in real-time, provides a clearer sense of the process as it unfolds, and hopefully a more accurate reading of my impressions at the actual time. I understand one could argue that the loss in summarizing after the fact is little to none, or that for even less loss I could write the posts now but release them after the process is over. I'm not clear, though, what you feel the gain is. My first guess is that you think I'm just exacerbating the stress for those who have submitted papers. If this actually is the case for anyone, I would certainly like to apologize now; that was not in any way the intent.

Given your suggestion, however, I will tentatively plan not to post further on the conference until decisions are released.

Thanks,
Michael

Anonymous said...

To sort out the junk submissions, how about:
* Initially assign each paper to one reviewer only, who reads the paper quickly (< 1 week) and decides if it is plausible enough to deserve multiple reviews.
* If that reviewer says "yes," the paper is reviewed in the usual way.
* If that reviewer says "no," the authors are immediately sent a brief review. The authors may choose to spend an "appeal coupon" to have their paper reviewed normally. Appeal coupons are earned when an appealed paper is accepted (probably a rare occurrence) and by attending the conference. (A rough sketch of this flow is below.)
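
For concreteness, here is a toy Python sketch of that triage-plus-appeal flow; all the names (quick_screen, full_review, appeal_coupons) and the accept/reject logic are hypothetical stand-ins, not any real conference system's interface.

```python
# Toy triage: one quick screen, then either full review or a brief review
# that the authors may appeal by spending a coupon.

def quick_screen(paper):
    """One reviewer reads the paper quickly (< 1 week) and decides whether
    it is plausible enough to deserve multiple reviews (stand-in logic)."""
    return paper.get("plausible", False)

def full_review(paper):
    """Stand-in for the usual multi-reviewer process."""
    return "accept" if paper.get("strong", False) else "reject"

def handle_submission(paper, appeal_coupons):
    if quick_screen(paper):
        return full_review(paper)
    # Screener says "no": authors immediately get a brief review and may
    # spend an appeal coupon to have the paper reviewed normally anyway.
    author = paper["author"]
    if appeal_coupons.get(author, 0) > 0:
        appeal_coupons[author] -= 1
        decision = full_review(paper)
        if decision == "accept":
            appeal_coupons[author] += 1   # accepted appeals earn the coupon back
        return decision
    return "reject (brief review only)"


coupons = {"alice": 1, "bob": 0}
print(handle_submission({"author": "alice", "plausible": False, "strong": True}, coupons))
print(handle_submission({"author": "bob", "plausible": False}, coupons))
```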

Matt Welsh said...

I think Stefan means it's not helpful to HIM being stressed about his (no doubt) multiple SIGCOMM submissions in the pipe :-)

Matt Welsh said...
This comment has been removed by the author.
Stefan Savage said...

I'll be clear... I think there are several kinds of issues here, some with the content and some with the timing:

1) Actual information leakage. I think this is a minor point, but still real, and particularly so if one knows anything about how PC reviews get assigned. Thus, if I write a paper that is heavily algorithmic, or perhaps has the title "How triple hashing can save the Internet," then I can reasonably assume that you'll end up being a reviewer, and hence your comments are more likely to refer to my paper.

2) Creating perceptions of arrogance. When you review a bunch of anything (papers, grad students, faculty candidates, schools, etc.), there will be some things you like and some you like less. However, if what you're heard to say is "I read these papers and they mostly suck," a fairly negative signal may be received by all those people who submitted papers (at least some of whom put a fair amount of time into their efforts), whether or not you even reviewed their paper. Imagine that we were talking about people instead of papers. "Yeah, I looked at all the grad school applicants this year and while there are a couple ok ones, boy do most of them suck," or "I was on Harvard's faculty recruiting committee, reviewing applications from grad students around the country, but y'know... they all seem to really be terrible. Not sure why they all suck this year." You might understand better why these kinds of sentiments might not win you friends among those populations. Moreover, I say this kind of feedback isn't "helpful" because it poisons some people's minds and doesn't provide much in the way of constructive guidance in exchange. What does one, as a paper author, learn from this?

3) PC dynamics. Putting together a program is increasingly a PITA. In addition to just the huge amount of work that must be done, there are jizillions of "issues" that come up these days (plagiarism, LPUs, conflict gaming, late reviewers, bad reviewers, blind review issues, PC balance), many of which can cause conflict. Moreover, there are always people unhappy about how things are done, even if it's truly puzzling what the conflict could be (as you can tell from the plethora of anonymous comments about SIGCOMM that show up on your blog :-) From my own experience, one of the un-fun parts of being chair is trying to stay ahead of the various whisper campaigns that may be going on at the time (and that can be influencing your PC to focus on issues other than doing the best reviews, which is what you want to help them do). Thus, I'm not a big fan of contemporaneous reporting on what's going on in the PC. I think it feeds the gossip beast and pulls others in. While your own post contains nothing really objectionable, as this kind of "live blogging the PC" meme grows, I think the envelope gets pushed as well. Certainly if I were chair, I think it would make my job harder and not easier. Moreover, it's not clear what benefit comes from making it contemporaneous instead of simply saving your thoughts and revealing them after the decisions are done.

Anyway, I recognize that reasonable people can disagree on such things. However, if you're ever on a PC that I chair, please no live blogging ok? :-)

Webcraft said...

I am very interested to know how you would judge these two papers:

a paper with a great idea but minor errors (something like typos) and insufficient experiments;

a paper with good presentation and exhaustive experiments, but an idea that is only so-so.

Which one would you give the better score? Or would you reject both, since neither is a "perfect" paper?

Michael Mitzenmacher said...

Stefan--

We're certainly agreed that as a PC chair you would have every right to institute a "no live-blogging" rule. I would certainly follow it, no hesitation. I can certainly understand, even if we disagree, why you would call for such a rule.

(Also, as always, I'd like to thank you for posting with your name -- not anonymously -- and for providing real, well-thought-out arguments. I wish there were more of that on blogs generally, but particularly on this one. Hence my taking time to respond.)

Regarding your specific points.

1) Information leakage: this seems rather restrictive. The information leakage you're concerned about seems to be who is reviewing which paper. That information is independent of timing, so it would seem that if this were a concern, I should never be able to make a statement of the type I've been making, even after the PC meeting. I can see your point that it may simply be desirable not to discuss anything that may reveal additional information about the reviewers. I hope you see my point that there's a tradeoff, and on the whole I think it's better to relay some of this information to the community.

Also, my take is that I'm not revealing significant information regarding who is reviewing what paper (including myself). As you've suggested, if someone's written a paper on why triple hashing will save the Internet, and they care enough, they'll be able to calculate the likelihood that I read their paper, and even estimate which of the actual reviews they received came from me. I don't think anything I've said gives much additional information away in that regard.

2) Arrogance: I think I understand what you're saying here, although again, I think the reviews themselves that authors receive speak far louder to everyone than anything I've said here. Eventually, authors who wrote bad papers will see their actual scores, and react accordingly.

In my defense, statements of the sort you're concerned about, "This year's papers really sucked" (or, conversely, "This year's papers were incredibly awesome"), are the sort of thing that is talked about quite frequently amongst the PC, and often more openly after the fact. So again, this seems to be an issue of timing; my bias is that shedding more light on the process is a worthwhile activity.

3) The Meme Problem: This, I think, is your strongest argument. Even if I could get you to agree that I had not actually done anything wrong to this point, you could rightfully respond, "But what about the next time? Or what about someone else who, following your example, messes things up?"

Essentially, that is why I stated in my last response that I wouldn't plan on posting further until after the PC meeting. So while we disagree on some points, I certainly see your arguments, and I can agree to postpone these posts going forward.

Anonymous said...

"(Also, as always, I'd like to thank you for posting with your name -- not anonymously -- and for providing real, well-thought out arguments. I wish there was more of that on blogs generally, but particularly on this one. Hence my taking time to respond.)"
--- You are still sulking... OMG this is so funny :) (I just realized that professors are also human beings!!!)

Anonymous said...

I appreciate information about the workings of the PC. However, I also find it a bit odd to comment while the PC discussion is going on, since it is supposed to be confidential. So, on the one hand I like your blogging about it, but on the other I think it is not appropriate.

Matt Welsh said...

Stefan has some very good points. One potential objection to "live blogging" a PC is that then there may be an expectation on the part of authors that they can get you to reveal details of the process or how particular papers (or categories of papers) are being considered, if you have been somewhat forthcoming in the past.

One problem I have observed in many recent PCs is a fair amount of information leakage prior to and during the PC. I'll admit that I have leaked information by email to students of mine when I knew the disposition of their paper during a PC meeting (at least when it was positive). On the other hand, I know people who somehow find out things like the content of the reviews and the scores well before the PC meeting is over. This, to me, is going too far and suggests that PC chairs need to do more to plug leaks. This is really about setting community standards for what is acceptable for PC members to share.

Live blogging perhaps creates a perception that it is OK to leak *some* information and lowers the standard of confidentiality until the decisions are made. So while I don't find anything objectionable in Michael's posts per se, it is a slippery slope that is perhaps best avoided.

Matt Welsh said...

Stefan's comments swayed me to delete my comment with my review score distribution so far - not that it revealed anything about specific papers.

Michael Mitzenmacher said...

Matt --

Agreed. I too was persuaded by the slippery slope argument.

Anonymous said...

Stefan,
It is not correct to compare papers with job candidates and students. No one ever gives any feedback to students and job candidates on why they were not selected. OTOH, paper writers do get feedback. I think it's okay if they get that feedback a few weeks earlier through a reviewer's blog. I don't think restricting live-blogging during the paper review process achieves anything.