Monday, November 16, 2009

Conference Reviewing Update

A few weeks ago, I talked about the reviewing process for NSDI and LATIN. I suppose now is a reasonable time for an update.

LATIN is nearing the end of its reviewing process. I think it went well -- my top-ranked papers seem to be getting accepted, and my low-ranked papers are not. There's been some electronic discussion of papers where there was wide disagreement, but we're not having an on-site PC meeting, and overall there's been surprisingly little discussion of my papers. Because LATIN is a "2nd tier" conference, I had previously suggested that I expected some wide deviations among review scores, "corresponding to different opinions about how interesting something is". There were in fact some wide scoring discrepancies, though this may not have been the primary reason. I was a reviewer on multiple papers where one reviewer really didn't seem to "get" the paper -- in most cases, ranking it high when I thought the ranking should be lower. (I imagine the scores will change before the reviews go back to the authors.) I've seen similar problems even in other, stronger theory conferences -- at a broad theory conference, it is very difficult to consistently select 3 reviewers who are expert on the subject of a paper, especially when subreviewers come into play -- though I think it was more problematic here, where the papers are weaker on average in any case. Finally, I still don't like the EasyChair interface that much.

The NSDI reviews have been, for me, substantially more interesting, no doubt in part because the papers are more interesting (to me). The "first round" is nearing its end, and at least on my papers, the review scores are remarkably consistent. In cases where they aren't, there's usually a clear reason for the discrepancy that comes out in the reviews, which tend to be longer and more detailed at systems conferences. While that's all very satisfying, at this point I'm hoping to be offered some dramatically more controversial papers to look at in Round 2, or I'll find the PC meeting pretty boring. (I should note I have a paper submitted to NSDI, so I reserve the right either to completely trash the reviewing system or to sing its praises ever higher, depending on the eventual outcome.) Finally, I still like the HotCRP interface a lot.

I get asked to serve on a number of PCs, and usually I make an effort to serve, because I believe such service is important to the community. But I must say, doing these two at roughly the same time has led me to think I'll be more circumspect in the future. The older I get, the more precious time seems to become, and perhaps I've just reached a point where I think I've done enough PC service that I can be choosier about what I agree to, aiming for things that are more enjoyable. At the same time, I wouldn't want everyone to start acting that way, since then I imagine it would be tough to assemble good PCs for the many conferences we have.

5 comments:

dot said...

I dream of living in a world with a logged peer-reviewing system that would give you access to the full versions of a number of papers proportional to the number and quality of the reviews you produced.

The quality could be evaluated directly, by slipping in among the unknown papers some whose quality is already known to be high or low and comparing each review against what is known; or indirectly, by peer-reviewing the reviews themselves and applying the same "hidden challenge" principle to them.

Such a system would solve three problems at once:
1) it provides an incentive for people to participate in the reviewing process,
2) it provides some quality control on reviews, and
3) it would provide access to published work based not on money but on quality of participation, which opens access to labs in developing or underdeveloped countries (Africa?).

I have been thinking about this for seven years, and was always told to wait for tenure before even thinking of working on something like it. What do people think?

Anonymous said...

Reviewing a paper takes so much time and effort, and then that work is discarded. I wish there were a way to share the reviews (at least for the accepted papers) with the community.

Anonymous said...

I agree with the second commenter -- a lot of work is wasted.

However, regarding jyby's comment: wouldn't people then have an incentive to review a paper the way they think *other* people are going to review it?

I think one marker of a good review is thoroughness. Judging a review by its outcome may work for extremely high-quality papers, but it would not be a good way of measuring how a reviewer does on an average paper. For most average papers, there is no right answer.

Anonymous said...

jyby, that would mostly cut off access to published work. It is already hard enough to get papers; we don't need more barriers. This seems crazy.

dot said...

> I wish there were a way
> to share the reviews

Me too! The summary written in a review is often better than the author's own, if only because the referee has a better perspective on the paper as a whole.

> For most average papers,
> there is no right answer.

In the current system we already have to decide whether or not to accept such a paper at a conference. Reviewing a paper could/should/would include proposing a category (i.e., conference or journal) for it, and evaluating the review would include judging that classification.

> that would mostly cut off access
> to published work. It is already
> hard enough to get papers; we
> don't need more barriers.

The idea is that it would change the barrier from money-based to cooperativeness-based (and capacity-based). One motivating example: neither I nor my university is paid for the reviews I do, and my university cannot afford to pay for access to publications older than 10 years; some other universities are even more limited.

Say the amount of work you currently do (say, 10 reviews and 4 publications a year) earned you access to 5000 pages (plus all the abstracts you want) -- that is, 500 10-page papers. That would be enough to give an initial "reading" credit (on top of all the papers you have already downloaded into your private system) to each of your students and still not feel limited.
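
To make the accounting concrete, here is a toy sketch in Python of the kind of credit ledger I have in mind. The class name and the exchange rates -- 250 pages earned per review, 625 per publication -- are my own illustrative assumptions, chosen only so the numbers above work out; they are not part of the proposal.

    # Toy sketch of the review-credit idea. The rates below are
    # made-up assumptions for illustration, not part of the proposal.
    PAGES_PER_REVIEW = 250       # page credit earned per completed review
    PAGES_PER_PUBLICATION = 625  # page credit earned per published paper

    class ReviewLedger:
        """Tracks page credit earned by reviewing/publishing and spent on downloads."""

        def __init__(self):
            self.page_credit = 0

        def credit_review(self, quality_weight=1.0):
            # A review judged to be higher quality could earn proportionally more.
            self.page_credit += int(PAGES_PER_REVIEW * quality_weight)

        def credit_publication(self):
            self.page_credit += PAGES_PER_PUBLICATION

        def download(self, pages):
            # Abstracts would always be free; full texts cost their page count.
            if pages > self.page_credit:
                raise ValueError("not enough page credit")
            self.page_credit -= pages

    # The example above: 10 reviews and 4 publications in a year.
    ledger = ReviewLedger()
    for _ in range(10):
        ledger.credit_review()
    for _ in range(4):
        ledger.credit_publication()
    print(ledger.page_credit)  # 5000 pages, i.e. 500 10-page papers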

I have my own worries about such a system: it is not obvious how to choose the virtual prices and rewards for submitting, reviewing, and downloading papers; whether those should be per paper or per page; or how to avoid coalitions promoting their own papers while still accepting and encouraging small communities around a specific topic. But these questions are already studied in mechanism design.

Anyway, this is not a formal proposal, only the stuff (my) dreams are made of. I am curious to hear about others' dreams too ;)