Friday, May 02, 2014

Reviewing Question: What's Important

Nick Feamster on Google+ recently shared (and gave me permission to blog about) the following review comment, which is super enlightening:

"I think there is a general problem with the overall goal of this study and area of research. This seems to be making data usage a critical thing people should be paying attention to. People have  real issues and I am not at all sure that this one deserves attention. They have real concerns like, are they going to lose their job, where should their children go to college, should they encourage their elderly parents to move into a retirement center."
It's so cringeworthy, it's funny.  You could substitute "data usage" with pretty much any research topic you feel like, and you have a review that's almost certainly accurate and justifies rejection.  Which is what makes it such a wonderful, terrible review.  

Without wanting in any way to excuse this reviewer, I do want to say that this review highlights an ever-increasing problem:  I believe review decisions are more and more dominated by subjective judgments about what topics are "important".  I realize some may say it has ever been thus, and I acknowledge that the importance of the underlying problem has always been a factor in judging a paper.  But I think the subjective judgment has become more significant in both systems and theory over the years, for multiple reasons.  As the field has expanded, there's less underlying agreement and common understanding of what's important.  Often the reviewer may not know the area well enough to judge its importance, and there is ever-growing potential for area bias:  the problems in my area are (more) important.  Further, there are far too many papers for the available slots, so reasons to reject have to be found.  As the above comment suggests, one can always call into question the importance of the problem a paper aims to solve.

But finally, I think, it fits in with an issue that keeps coming up for me:  reviewers are too arrogant.  If they don't see why the problem is important, then the issue must be with the research or the writing;  it couldn't be with their reading or understanding of the problem space.  Reviewers will have opinions regarding the importance of the works they read -- no getting around that -- and they should where possible give advice to authors on how to best present their results.  But they could often be a bit more judicious in recognizing that they are expressing their opinion and offering advice;  they are not, in the end, the final arbiter of a paper's eventual, actual importance.

I don't see subjective decisions in "importance" going away.  But I think they could be given a bit more care, both in how they are used in the final decision-making, and in how reviewers express their opinions on "importance" to authors.

If you'll excuse me now, I don't have time to blog further, I have to go plan where my children should go to college and where I'll eventually ship off my aging parents.  (Thank goodness I have a tenured position.)           

4 comments:

WithinTheContext said...

If I had to guess, I think the context here is important. Nick has given some talks at ONS on how to empower people to manage their home networks and control data usage. I would assume the comment here pertains to something like this: "here's a solution that allows a parent to set data usage limits for different kids so that the aggregate stays under the cap." And the solution may be technically valid, using SDN and nice cool techniques, but in the end, the question the reviewer is asking is: is this a feature that parents will use? Will they indeed spend time worrying about allocating 1Gb to Timmy and 1.5Gb to Betsie?

Not all papers would suffer from this criticism; not many target home users.

The criticism of: "why are you studying this problem that's not very relevant in practice" is relatively common. Just look at many energy efficiency papers which improve one phase of a process that is negligible in the big picture.

Anonymous said...

"fail!"... a review almost as good as one written by Eliza, lol... the review is quite laughable & probably by some grad student who is new to reviewing, and whose advisor didnt give them much guidelines on that. it would be interesting to know what paper it was in reference to.

However, the intense/narrow specialization of modern scientific fields in general, TCS included, is worth some consideration, and few within the fields comment on it. Striving to see, or not lose sight of, the "big picture" is a very worthwhile goal.

Anonymous said...

I think picking on the reviewers is wrong, at least in TCS conferences: currently PC members get 40-50 papers to review, they are asked to reject 80-90% of them, and then we act surprised when their reviews are completely random.

The fault lies with PC chairs and steering committees who have failed to let conferences grow along with their fields. Larger conferences reduce the number of "waterfall" papers that demand review cycles in each iteration, while at the same time reducing the incredibly high rejection rates of top conferences.

Additionally, we should take a page from other fields in CS where conferences have affiliated workshops/venues where a good-but-not-great paper can appear and be done within one review cycle. INFOCOM A and INFOCOM B are one example.

But I don't expect anything to change. If FOCS hasn't found a way to accept 5 more papers in 20 years, what hope do they have of achieving any meaningful change?

conti said...

A review is almost by definition an opinion, except when it points out factual errors. Is it really necessary to mention this specifically in every review? Isn't it enough to always read them as such?

Furthermore, unless you propose that all papers that are factually correct must be accepted to any journal or conference they are submitted to, these opinions in reviews matter.

And finally, yes, the quoted review is worded strongly, but it actually does express that this is just an opinion. It uses phrases like "I think" and "I am not at all sure". Of course you can rephrase it as "In my most humble opinion (which shouldn't matter of course because the authors probably know better) it could potentially be helpful if the authors considered..." but what is the point?