Now, being a quantitative sort of person, I decided the best way to bring something new to the discussion would be to gather some data. So I picked a non-random year -- 2000 -- and decided to look at the citation counts on Google Scholar for all of the papers from SODA, FOCS, and STOC and compare. I figured 2000 was far enough back that we'd get meaningful citation counts for comparison. Yes, I know, citation counts are not the end-all and be-all and all that, but that's an argument for a different day (or for the comment section). In aggregate, they must tell us something important.
You can imagine my surprise when I looked at the numbers and found that SODA was completely dominated. Here's a little chart:
|  | Papers | Median Cites | Max. Cites | Total Cites |
| --- | --- | --- | --- | --- |
FOCS seems to have had an absurdly good year, but the data speaks volumes. At least as of 2000, the best results went overwhelmingly to FOCS and STOC. Overall, I found the results of this experiment rather disappointing for SODA.
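For concreteness, here is a minimal sketch (in Python) of how the chart's per-conference summary statistics could be computed once you have a list of citation counts for each conference; the numbers below are placeholders for illustration, not the actual data.

```python
# Minimal sketch: compute papers, median, max, and total citations per
# conference. The citation counts here are hypothetical placeholders.
from statistics import median

counts = {
    "SODA": [12, 45, 3, 88],     # hypothetical per-paper citation counts
    "FOCS": [230, 15, 67, 410],
    "STOC": [98, 140, 22, 301],
}

for conf, cites in counts.items():
    print(f"{conf}: papers={len(cites)}, median={median(cites)}, "
          f"max={max(cites)}, total={sum(cites)}")
```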
It would be nice to check what the current trend is, but doing this by hand was annoyingly time-consuming, and I didn't feel like writing a script. (There seem to be many exceptional cases: if you try an exact match on the title, you'll often not find the paper because the title changed somewhere along the line, or there's a typo, or something similar. With a looser query, you often have to do some searching to find the paper. Perhaps someone more talented will be inspired to write a script?)
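For what it's worth, here's a rough sketch of what such a script might look like. It assumes the third-party scholarly package (and its search_pubs interface) plus simple fuzzy title matching to cope with the changed-title problem; Google Scholar rate-limits automated queries, so treat this as a starting point rather than a finished tool.

```python
# Sketch: look up a paper's citation count on Google Scholar by title,
# tolerating small title changes via fuzzy matching.
# Assumes the third-party "scholarly" package (pip install scholarly);
# its interface may change, and Scholar rate-limits scrapers.
from difflib import SequenceMatcher
from scholarly import scholarly

def citation_count(title, min_similarity=0.8, max_hits=5):
    """Return (matched_title, citations) for the closest title among the
    first few search hits, or None if nothing is similar enough."""
    results = scholarly.search_pubs(title)
    best = None
    for _ in range(max_hits):
        try:
            pub = next(results)
        except StopIteration:
            break
        found = pub.get("bib", {}).get("title", "")
        score = SequenceMatcher(None, title.lower(), found.lower()).ratio()
        if best is None or score > best[0]:
            best = (score, found, pub.get("num_citations", 0))
    if best and best[0] >= min_similarity:
        return best[1], best[2]
    return None

# Example (hypothetical title):
# print(citation_count("A faster algorithm for some problem"))
```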
I learned something interesting and useful by gathering this data. As a community, should we be paying more attention to numbers like these, so we can make our conferences better? For example, I can't recall anyone ever systematically trying to answer, after the fact, whether a PC made good decisions, but apparently we now have the data available to try to answer such questions in an empirical fashion. (We might check whether rejected papers were later accepted elsewhere and became highly cited, for example.) What possible lessons can we learn using these citation tools?