After my first conference rant on competitive analysis (based on my current stints on the SPAA and ICALP committees), I feel it's only fair to spread the love and rant a bit about my fellow algorithmists.
I simply claim the following: if you are presenting an algorithm with the claim that it might be useful in practice, you should aim to include at least a short experimental section showing that you've implemented the algorithm and how it behaves.
1) If you are suggesting that your algorithm is potentially useful in practice, it's incumbent on you to provide some evidence for this claim. An implementation is the best evidence. I don't expect pages of simulation results examining corner cases (although, if there's space, that's certainly nice); but a couple of paragraphs explaining that you implemented it, tested it on basic data, and that the program actually finished goes a long way.
2) It's not that I don't trust your math. It's just that -- well, no, it is just that I don't trust your math. Maybe you've proven that the algorithm is O(n^2). I'd like to know if in practice it seems to be O(n log n) [even if it's O(n^2) in the worst case -- now you've given an average-case or special-case open problem!]. I'd like to know if there's a hidden constant of 10,000 there that makes it really not practical in its current form. I'd like to see that you didn't make an error, and that it doesn't look like Theta(n^3) when you run it. [I've got 46 papers to look over in a month. Make my job easier, and your paper's more likely to get in.]
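This kind of sanity check is cheap to run. Below is a minimal sketch of how one might estimate an algorithm's empirical growth rate: time it at a few input sizes and fit the slope on a log-log scale. The routine `quadratic_pairs` is a made-up O(n^2) stand-in, not any algorithm from a real paper.

```python
# Hypothetical sketch: empirically estimating a running-time exponent.
# We time a toy quadratic routine at increasing input sizes and fit
# log(time) = k * log(n) + c by least squares; the slope k approximates
# the exponent (near 2 here, since the routine is O(n^2)).
import math
import time

def quadratic_pairs(xs):
    # Deliberately O(n^2): count ordered pairs (a, b) with a < b.
    count = 0
    for a in xs:
        for b in xs:
            if a < b:
                count += 1
    return count

def estimate_exponent(func, sizes):
    # Collect (log n, log seconds) points, then fit the slope by
    # ordinary least squares.
    points = []
    for n in sizes:
        data = list(range(n))
        start = time.perf_counter()
        func(data)
        elapsed = time.perf_counter() - start
        points.append((math.log(n), math.log(elapsed)))
    mx = sum(x for x, _ in points) / len(points)
    my = sum(y for _, y in points) / len(points)
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

if __name__ == "__main__":
    k = estimate_exponent(quadratic_pairs, [500, 1000, 2000])
    print(f"estimated exponent: {k:.2f}")
```

If the fitted exponent comes out near 3 when the proof says O(n^2), something is wrong with either the code or the analysis -- which is exactly the kind of error a two-paragraph experimental section catches. (Timing at small sizes is noisy; a real evaluation would average several runs per size.)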
3) Maybe, just maybe, someone who might actually want to implement your algorithm will actually read your paper. A non-theorist. Don't you think they want to see that it seems to actually work before implementing it better themselves? Won't some experiments make talking about your work to non-theorists easier? Help make that connection...
I'm not saying every algorithms paper needs an implementation section. In many cases, "algorithmic" results are really "complexity" results -- we're just showing that something can be done; we don't actually expect anyone to do it, and in this case there's no need for simulation or experimental results. (Of course, in such cases, I expect the authors to limit their claims of interest/utility in practice.) In some cases, space won't permit a reasonable evaluation of the algorithm -- but do plan on it for the journal version. In some cases, the coding and evaluation of the algorithm are so interesting that they merit an entirely separate paper!
But I'm amazed at how few algorithms papers provide any actual experimental results (unless they're appearing in a networking/database/other systems conference, where it's better understood that such results are expected). I've actually submitted theory papers to theory conferences with experimental sections and had reviewers urge that they be taken out, which I find mystifying (and which I ignore).
And yes, if I'm reviewing your paper, and you don't have any numbers where I think you should, I'm probably mentally (and numerically) docking your paper score....