I very much like the conference-based publication system of computer science. But an obvious problem with the system -- which mathematicians sometimes throw back in the face of CS theory researchers -- is that this system leads to buggy results getting published and accepted by the community. (In my last post, I talked about the headaches this issue can cause PC members.)
This problem could be ameliorated if as a community we had some standard ways of reporting or dealing with such errors. But I don't think we really do. Occasionally an author will self-report an error or fix something for a journal version, but I imagine errors slip through more often than we'd care to think about. Perhaps it isn't really a problem; for big, important papers, bugs will be found and knowledge of them disseminated. But for smaller papers (which, let's face it, is most of what actually gets written -- even in the major conferences), there doesn't seem to be a process -- in fact, even trying to suggest that there's a bug in someone's work can get your own paper killed.
Yes, I'm unhappy to report, this happened to me. Once, on a paper, a student found a bug in some previous related work, and thought it important to mention in the conference submission to deal with possible questions about how our work related to this other paper. [Since he's job-hunting, I feel I should say this was NOT Adam.] After going back and forth, I agreed that we could in a footnote mention that there appeared to be an error that we were discussing with the author. (The author took a while to admit there was an error, and in fact the student helped suggest a fix.) The PC sent back nasty reviews, with one even suggesting that our action was unprofessional. I, obviously, disagree. This was a submission, ostensibly confidential, not for publication (the PC could ask us to remove the footnote if they objected). We were in contact with the author and trying to clarify and fix the bug we found. How the heck else were we supposed to let the committee know what was going on, if they felt it important? If they felt it wasn't important, it was just a footnote they were welcome to skip.
This attitude, I think, stems from the fact that, on the whole, we're a very pleasant, non-confrontational area of science. Fights in CS theory are rare; most people get along (professionally) quite well. From what I've seen, with rare exception, we're much less confrontational than other sciences. So somehow mentioning out loud that someone might have made a mistake is not considered good form. Again, I may be wrong, but other sciences seem less sanguine.
Of course, the underlying problem in this incident, and others I've seen as a PC member and in other contexts, is that we don't have an error-correction mechanism that allows one to report bugs or suspected bugs in published work. Perhaps we're better off without it -- maybe it would just take up time and provide little value. But I've never heard it even come up as an issue to be thought about and discussed by the community. Perhaps it should.
4 comments:
No, the problem is that it is perceived as shameful or humiliating to make a mistake, particularly in a symbol of prestige such as a paper. This fear of a tarnished reputation induces people to silently ignore mistakes; even third parties empathize with the authors and want to reinforce such a system so that it doesn't happen to them either.
I agree with anonymous 1: The problem is attitude. I know that brilliant people make mistakes, because I do. So if I tell you that you made a mistake I'm NOT telling you that you are not brilliant.
For some reason that I can't understand, most people consider pointing out an error in their work as an attack on their person. (Perhaps they are deluded and think that making mistakes is abnormal?)
There was an opinion piece in Science, a few years back, by Bruce Alberts (former president of NAS) who pointed out that he adopted a motto early in life to only do experiments that resulted in meaningful contributions whether they "succeeded" or "failed". If you take this view, then there are only two types of mistakes: (a) major issues with past claims that are worthy of refutation in a subsequent paper, and (b) minor bugs that don't impact the bigger picture, which should just get taken care of by a more thorough and detailed journal reviewer.
This discussion feels a bit foreign to me, because in cryptography (and security) people publish papers breaking other people's schemes all the time. This includes schemes that come with a claimed security proof. The most famous example I can think of in the field was Victor Shoup's 2001 paper reporting a bug in the proof of security for a scheme introduced by Bellare and Rogaway in 1996, but there are other examples. It is part of the way the field moves forward. I am sorry to hear that you and your student had such a terrible experience.
That being said, a "break paper" does not automatically get in - it depends partially on what else is submitted, what the impact of the original scheme and proof was, and other factors. Still, in my reviewing I am usually happy to see a paper that clearly explains a problem in previous work.
Now this cycle of having errors and then fixing them is of course not the ideal state of affairs. Opinions on what to do about it differ. Shai Halevi has a nice paper on eprint.iacr.org arguing that many of the steps in a modern cryptographic proof of security can be written down and machine-checked. Blanchet and Pointcheval actually published a paper and released code which does this for some of the standard proofs.
http://www.cryptoverif.ens.fr/
Others have proposed additional automated methods. There's a formal methods community that does work on verifying crypto protocols, as well, which I'm not doing justice in this comment.
One of the problems with the formal approach, unfortunately, is how tedious it can be to encode things that are simple to say in a paper. I worry particularly about arguments that require new insights or new techniques.
Still, the programming language community seems to be going in this direction. (One of the POPL tutorials this year was on how to write your next paper with Coq, a proof assistant!)
Neal Koblitz has argued, alternatively, that we should use journals more and be less dependent on conferences. (He also doesn't like formal methods or machine-checked proofs much.) I think people here and at the complexity blog have gone over this issue a few times, not sure what I could add.
Still, I agree that having some error correction mechanism is useful. A shift to allow "break papers" might be helpful. Of course, then you have to deal with the follow-on paper claiming the break paper is wrong...