Monday, November 03, 2008


In a recent review of a journal paper, I got the following comment:
In explaining why you are presenting simulation results, you say, "First we wish to check our theoretical analysis..." I don't understand this motivation. Your theoretical analysis is substantiated by mathematical proofs. What more evidence do you need of its validity?
Please keep this statement in mind.

I've stated frequently that theorists should actually implement their algorithms. I have some further recent anecdotal evidence to back up this claim.

I have students working on a variety of projects, and recently, I've had a strange convergence: in several of the projects, the students have found what appear to be non-trivial errors in recent theory papers (2 in CS theory, 1 in EE theory). It's not clear that any of the results actually break -- indeed, I don't think any of them do. I don't want to exaggerate. In one of them, a constant factor seems to be messed up -- not disastrous, but it is an important constant factor in context. And perhaps in one of the papers we could chalk it up to really just a sequence of confusing typos rather than an actual error.

Now I'm not naive enough to expect conference papers (or even journal papers) to be free of errors. But the key here is that these errors were found, or demonstrated to me, by the students implementing the algorithms described in the papers. Once implemented, the errors sort of jumped out.

Implementing your own algorithm is a good way of checking your work. If you aren't implementing your algorithm, arguably you're skipping a key step in checking your results. Why should a reviewer go to the trouble of checking your work carefully if you're not?
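To make this concrete, here's the kind of check I have in mind (a generic sketch of my own, not drawn from any of the papers in question): instrument an implementation to measure the quantity your analysis bounds, and compare against the theory. For instance, counting element-vs-pivot comparisons in randomized quicksort and holding them up against the classic ~2n ln n average-case analysis:

```python
import math
import random

def quicksort(items, stats):
    # Randomized quicksort that tallies element-vs-pivot comparisons
    # in stats['cmp'] (counted as one per element per partition step).
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    less, equal, greater = [], [], []
    for x in items:
        stats['cmp'] += 1
        if x < pivot:
            less.append(x)
        elif x > pivot:
            greater.append(x)
        else:
            equal.append(x)
    return quicksort(less, stats) + equal + quicksort(greater, stats)

random.seed(42)
n = 2000
data = [random.random() for _ in range(n)]
stats = {'cmp': 0}
result = quicksort(data, stats)

assert result == sorted(data)        # the output had better be sorted
predicted = 2 * n * math.log(n)      # ~2 n ln n average-case comparisons
print(stats['cmp'], round(predicted))
```

If the measured count drifts far from the predicted one as n grows, either the implementation or the analysis has a bug -- and in my students' experience, it's worth finding out which.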

Moreover, imagine not some student but an actual real systems-builder who decides your algorithm is worth implementing for their system. Imagine how they feel when they find things don't quite work as you've stated. That doesn't exactly inspire confidence, and is the sort of thing that discourages someone from using your algorithm. More globally, it gives systems people the feeling that theory (and theorists) aren't important. Why should they be debugging your proofs? You should give them evidence your algorithm can be implemented and works as claimed by actually implementing it if possible or reasonable to do so.

I'm not stating that you personally have to implement your algorithm. Hire a student to do it! It's good experience for them. You can apply for an REU, or other student research funding.

So, to answer the anonymous reviewer, I think there's always plenty of good reason to check our mathematical proofs by experiment and implementation, and it's appropriate to put those results in the journal paper as an aid to the reader (and reviewer!).


Anonymous said...

The way you write things down, one would believe that "designing the algorithm is the hard part" and "implementing it is the easy part".

When that is the case, the implementation is often really easy. If it takes 20 lines in a high-level language to implement the algorithm, why wouldn't the professor implement it? OK, giving a student some experience is a good thing, I guess. We have to find ways to keep them busy... but why not implement it yourself, and then also ask a student to do it, just to confirm your results?

Of course, quite often, implementation is a lot harder than design. In fact, I can point to several algorithms or systems that I am sure nobody ever implemented... and that nobody will ever implement, for good reasons!!! Sometimes one person implemented something very messy that closely matches what appears in the paper... but that could not be scrutinized by others...

Anyhow, in such cases, asking a student to do it is dangerous: (1) the student could waste a lot of time and get no result, partly through inexperience; (2) even if the student claims to have implemented the algorithm, how can you be sure he did?

Mark Wilson said...

It may not be epistemologically sound to claim to be checking a proof by implementing, but there are many good reasons to do so.

1) In reality, proofs of correctness can have errors, and implementing an algorithm often finds them quickly.

2) Algorithm analysis uses abstract models of resource consumption. Often an implementation will show that, for example, cache behaviour makes the running time behave quite differently from the model. Or the average case behaviour may be quite different from the proved upper bounds.

3) If we really want to have more connection between theory and practice (and I don't know anyone who doesn't), we need to have more confidence that theoretical results have some use. Many published algorithms are very complex, and more practical souls can dismiss them for that reason. But surely it is unreasonable to assume that only really easy-looking algorithms are of practical value!

If it's good enough for Don Knuth to implement his own algorithms, why can't we do it? Leaving it to a student might be OK but can lead to problems, and as Daniel says the details can be really hard. A proper implementation of a serious algorithm for an important problem should count as a publication. Maybe then more people would implement their own algorithms. Or can some algorithm theorists not program?

Anonymous said...

I agree wholeheartedly with the value of implementation. But just to play devil's advocate, how do you know the implementation was error-free? Or that you ran on sufficiently large inputs, so that the timings reflect the algorithm rather than the overhead?

Michael Mitzenmacher said...

All commenters:

Good comments.

I agree, sometimes implementations are hard, and one must take care to match the project to the level of the student. Whether one needs to verify that a student implementation is correct depends on what it's used for -- here, I'm using it for experiments/bug-finding, so it's less of a concern. But really, correctness is a concern for any program, no matter who implements it! (I don't think these arguments excuse not implementing it.)
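On the correctness question: one cheap, generic sanity check (a sketch of my own, not tied to any particular paper) is to test the fast implementation against a naive brute-force oracle on many small random inputs. For example, checking a linear-time maximum-subarray routine against the obvious quadratic one:

```python
import random

def max_subarray_fast(a):
    # Kadane's algorithm: O(n) maximum nonempty subarray sum.
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def max_subarray_naive(a):
    # Brute-force O(n^2) oracle: try every nonempty subarray.
    return max(sum(a[i:j + 1])
               for i in range(len(a))
               for j in range(i, len(a)))

random.seed(1)
for _ in range(200):
    a = [random.randint(-10, 10) for _ in range(random.randint(1, 30))]
    assert max_subarray_fast(a) == max_subarray_naive(a)
print("agrees with brute force on 200 random inputs")
```

This doesn't prove the implementation correct, but a fast version that agrees with a trivially-correct slow version on hundreds of random inputs is far more trustworthy than an untested one -- and a student can set this up without supervision.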

BTW, I think an argument to have students implement is not that theoreticians can't code (although I'm sure some can't), but that it's not the best use of a theory professor's time. A student might take 40 hours to code up something I could code in 2. But they aren't going to prove the theorem I proved in those two hours in any amount of time... in many cases, the best use of my time is to have a student code. (If it will take me more time to explain it to them than to code it up, then I'll do it myself. :) )

Anonymous said...

We had a somewhat similar experience. Proof checking is a human endeavor and as such equally open to error. In a particular paper we could present a lengthy, straightforward and unenlightening calculation which would give the reader no confidence in the result (due to its length), or we could leave this as an exercise and present orthogonal experimental evidence which for this specific problem was quite decisive.

We argued that providing the experimental evidence was the best way to go, considering that the calculation was likely to be nixed for space reasons anyway.

The referees would have none of it. Too much of a cultural shock I guess.

Anonymous said...

I think the last anonymous commenter brought up an important point: culture shock. There are (at least) two cultures when it comes to theoretical algorithms research. One comes from the pure mathematical side, where the algorithm is really just exemplifying that some mathematical structure has some particularly nice properties (that happen to be algorithmically exploitable). The other culture comes from the practical side. I think fundamentally the issue you're coming up against is the clash between these two cultures.

It's easy to conflate these two cultures by thinking that there is a single culture of algorithms. But that would be missing the main issue, I think.

By way of anecdotes: I am a theorist. But it was only by implementing an algorithm in someone else's paper that I realized that I could do it better, which rapidly led to my first publication.