[Mikkel Thorup has graciously offered a guest blog post discussing what he looks for in conference papers, and possible implications for refereeing.]
Results versus Techniques (Mikkel Thorup)
-------------------------
When I go to STOC/FOCS, I hope to see some great new results/theorems
and some great new techniques/proofs. I want both, and in some wonderful
cases, I even get both in the same paper. However, many of the
greatest results may be in papers with less interesting techniques, or
vice versa, and the worst thing is if they then get out-competed by papers
achieving semi-strong results using semi-interesting techniques.
I am saying this because many refereeing forms have fields like
"Submission's strengths" and "Submission's weakness" suggesting a
balanced evaluation with pros and cons. A common case I see is that
strong results get criticized for being obtained with too specialized
techniques. However, if somebody needs to cite a theorem/result,
then, typically, it doesn't matter if the underlying proof/technique
is specialized.
I am arguing that if we want an interesting program, then we should
not worry about a weakness unless it renders a paper unacceptable (e.g.,
a bug). I am proposing a max-evaluation rather than a sum. What
matters is whether a paper has an exciting contribution.
I am myself mostly in the problem-solving business, where we believe
that there are important computational problems. Sometimes general
techniques work, but other cases require a deep understanding of the
problem at hand, leading to very problem-specific techniques. Sometimes
a tiny but subtle change in existing techniques can make a huge
difference. Other times, as, e.g., for the 4-color theorem, it seems
that what is needed is a huge mess. The point here is that if we want
to succeed, then we have to be open to using whatever techniques it
takes. Later there may be very interesting technical work simplifying
and generalizing the techniques.
I am not saying that techniques are not important. I love papers
introducing great new techniques, and I am more than happy to accept them
on a technical basis. What I am saying is that if the main
contribution of a paper is a result, then the techniques may play a
more supporting role whose only obvious merit is that they prove
something interesting.
It often turns out that the techniques developed to solve a central
problem have other applications, but this may not be easy to guess up
front based on a quick review. My experience is that referees are
worse than random when it comes to guessing whether a technique will
later prove useful in solving other problems.
My suggestion is positive refereeing where the overall score is based
on the major contribution of a paper: the thing that would make
people come to the talk, and use and cite the paper in the future. If
a paper has something interesting to say, then it doesn't
matter if other aspects of it are less interesting.
Tuesday, April 09, 2013
2 comments:
This is a good time, however, to remind ourselves that good science is performed in many different ways. For example, there are people in all sciences who look at the big picture and open up new avenues, such as Langlands and Gromov, and then there are the deep specialized miners who reach the face of the mine faster and dig harder and deeper than anyone else. Both are impressive human feats, both produce useful science, and each deserves to be rewarded in its own way, even though the latter perform work that is harder to explain to the general public.
Within CS, regular conferences tend to favor tunnel diggers. The short review cycle, the requirement of technical difficulty, the lack of time to ponder the implications of a new area all work in favor of technical breakthroughs. As Mikkel points out, good results often get shot down if the techniques aren't very sophisticated (typical review: "yes, they cured cancer, but the proofs are trivial").
In addition to non-technical strong results, bold new ideas also get shot down, which is why many sub-disciplines have started HotXXX conferences (e.g., HotOS, HotNets, HotSDN, HotChips, HotWiSec, HotMobile). These conferences provide a welcoming environment for the half-baked ideas that might one day become a path-breaking field. Many of them do not even have proceedings, since the ideas might not be mature enough for the permanent scientific record. There, ideas receive candid constructive criticism, and proposals can be honed further on their way to becoming fully formed research projects. Think of the Langlands program, which, as best as I can recall, lived in a weird state for about a decade until the first set of results on the local Langlands conjectures showed that it was indeed likely to be fruitful.
As I understand it, ITCS was meant to be TCS's version of HotXXX, but judging from some recent experiences, the program committees have yet to fully grasp the speculative nature of path-breaking research and are still expecting completely developed theories and results instead of "Go West Young Man"-type papers.
As Mikkel said, a better solution would be to use a more expansive review system. SIGCOMM has been running an experiment in which the refereeing for their newsletter focuses on what is of value in a paper rather than on the minutiae that can be used as an excuse to shoot it down.
Very nice post by Mikkel.
To Alex,
"As Mikkel points out, good results often get shot down if the techniques aren't very sophisticated (typical review: 'yes, they cured cancer, but the proofs are trivial')."
Actually, Mikkel said the opposite: good results often get shot down if the techniques are too sophisticated, that is, "too local" or too specialized to be used elsewhere.