Wednesday, May 13, 2009

Life After Rejection (Guest Post)

Guest blog post by Aaron Sterling.

Michael M. asked me to post about the academic journey of one of my papers, so here goes.

In November 2008, I submitted a paper, "Distributed Agreement in Tile Self-Assembly," to STOC. The paper was rejected, but the comments I got from the reviewers were superb. They were extensive and specific -- and I agreed with everything they said. I changed the paper to address each of the concerns raised, and resubmitted it to DNA 15 (the 15th Annual Meeting of DNA Computing and Molecular Programming). This time around, my paper was accepted, and I recently learned that it won the Best Student Paper Award.

Two points seem important to me.

First, I didn't (and don't) take rejection personally. I view paper submission not as an event, but as part of a process. If I get a quality rejection letter, and I improve the paper based on the comments in the letter, it's just a matter of time before an improved version of my paper will get published somewhere.

Second, the STOC PC reviewers played a role in advancing computer science, beyond just putting together a program for STOC. My experience may be unusual. One congratulatory email I received, from a well known theoretical computer scientist, basically said, "Congrats on your award, and I'm shocked that you got useful feedback from STOC reviewers. So congrats on that too." Therefore, I'd like to emphasize to anyone reviewing a submission that even if it is not a good fit for your particular venue, providing useful feedback is scientifically important. I'm very grateful that I had reviewers who approached their role conscientiously.

I'll conclude by shifting gears into a soundbite of my paper's technical results. I was able to show connections between the geometry of self-assembling networks and the theory of multiprocessor synchronization. For example, three-dimensional self-assemblies can simulate strictly stronger shared objects than two-dimensional self-assemblies. These connections seem to intrigue both the "nano people" and researchers in distributed computing -- and I'm now investigating synchronization problems in several subareas of natural computing. If nothing else, it looks as though it'll be a lot of fun.
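To unpack what "strictly stronger shared objects" means in the standard wait-free sense (this is a textbook-style illustration, not a construction from the paper): a test-and-set bit can solve consensus for at most two processes, while a compare-and-swap register solves it for any number of processes. Here is a minimal Python sketch, with toy class names of my own choosing and locks standing in for hardware atomicity:

```python
import threading

class TestAndSet:
    """A one-shot test-and-set bit (lock simulates atomicity)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._set = False

    def test_and_set(self):
        with self._lock:
            old = self._set
            self._set = True
            return old          # returns False exactly once: that caller "wins"

class CompareAndSwap:
    """A single compare-and-swap register (lock simulates atomicity)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def cas(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

    def read(self):
        with self._lock:
            return self._value

def two_process_consensus(my_id, my_value, proposals, tas):
    """Wait-free consensus for two processes (ids 0 and 1) from one test-and-set bit."""
    proposals[my_id] = my_value        # announce my proposal first
    if not tas.test_and_set():         # the winner decides its own proposal
        return my_value
    return proposals[1 - my_id]        # the loser adopts the winner's proposal

def n_process_consensus(my_value, cas_reg):
    """Wait-free consensus for any number of processes from one CAS register.

    Assumes proposals are never None.
    """
    cas_reg.cas(None, my_value)        # only the first writer succeeds
    return cas_reg.read()              # everyone decides that first value
```

Test-and-set provably cannot solve consensus for three or more processes, which is the sense in which compare-and-swap is "strictly stronger." The paper's actual constructions are in terms of tile self-assembly rather than shared memory, and I won't attempt to sketch those here.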

A pre-proceedings version of the paper is available here.

6 comments:

Anonymous said...

My thoughts as well. I recently submitted a paper to SPAA and got rejected, but the comments were really helpful. Kudos to the SPAA PC.

Oliver said...

This is a very important issue that is stressed in this post. I personally find high-quality reviews very productive; if the paper/research is not in reasonable shape, then I'm thankful to everyone who provides constructive feedback on how to improve the presentation quality or the research process itself. However, the issue matters because most of the reviews I've seen so far (especially at tier-2, tier-3, ... conferences) are of poor quality and can simply piss one off. Papers get rejected because reviewers did not try to understand the technical contribution, because they think "it is known" while pointing to parallel work that is itself still under submission, or because they don't even bother to provide any useful references. There are many more examples of bad reviews, and there is even a Facebook group called Reviewer 2 Must Be Stopped! dealing with bad reviews.

However, it is not only the responsibility of every reviewer to write helpful rather than lame reviews; it is also the responsibility of the chair and TPC members to disregard such reviews during the TPC meeting and not take them into account when deciding whether to accept or reject a paper. (In some cases I have the strong feeling that this does not happen, and bad reviews are counted without asking an additional reviewer for a fresh opinion.)

Maybe one reason is that reviewing, as a community service, is in some sense not rewarding, since it is typically done anonymously (apart from some experiments with open review). Good reviews earn credit within the TPC, but not from the wider research community. I once read that in some fields it is common to publish the best reviews as a kind of companion paper. Maybe that would be an incentive to write high-quality reviews, rather than reviewing just for the sake of becoming a TPC member or chair at yet another conference.

What do you think?

Anonymous said...

"Maybe one reason might be that reviews as community services are in some sense not rewarding as they are typically done anonymously"

I don't think that this is really true. When you write a review, everyone on the PC sees the review (non-anonymously in theory conferences). I have certainly formed (or adjusted) my opinion of someone based on their very comprehensive or very lacking review comments on a submission that I was involved with, or even one in which I was not involved. If you submit a review reflecting your laziness, it will surely influence someone's opinion of you.

Anonymous said...

"If you submit a review reflecting your laziness, it will surely influence someone's opinion of you."

Many of the bad reviews I have received have the following property: they are lazy, but it is only apparent to someone who has read the paper that the review is lazy. For instance, a review may claim that the paper should be rejected because it fails to deal with a certain issue, when the paper does in fact deal with that issue. If the PC members did not read the paragraph that deals with the issue, they do not realize it is a bad review.

Anonymous said...

Having been on many PCs for 2nd/3rd-tier conferences, I find that the most common reason for short reviews, the kind that may be viewed as lame, is not so much laziness on the reviewer's part per se. Instead, many (maybe most) of the papers consider problems that don't seem that interesting and that look solvable in a straightforward manner using standard techniques; the papers then go on to show that indeed the problems can be solved by a straightforward application of standard techniques. It is hard to know what useful feedback you can provide beyond stating that the paper isn't interesting. The rarer type of paper, one that is flawed but addresses an interesting or novel problem, tends to get much more and better feedback, in part because there is more you can say as a reviewer that is helpful.

Anonymous said...

It is important to note that top-tier conferences tend to receive very polished papers, which makes a paper more enjoyable to read rather than something that pisses reviewers off.