My colleague and frequent co-author John Byers knows that I can spend up to several hours a day actively worrying about my ranking as a computer scientist. While the automated daily Google Scholar updates (any new citations today?) are helpful, it's not always clear how I should interpret them. So John was happy to direct me to a new paper, Assessing Computer Scientists Using Citation Data, to help me and other ranking-obsessed individuals find their place in the world.
The methodology itself is actually quite interesting. The first question is which version of the h-index to use. In particular, what external information do you use to judge which version of the h-index appears most accurate? The method used here is to assume that department rankings accurately represent the quality of the faculty within the departments, and to use a regression between the reputation of departments and the mean citation score of their faculty to determine which versions of the h-index appear most accurate for assessment purposes. Several other factors are accounted for in the methodology, such as a prediction model for the probability that a computer scientist works at a given department depending on their quality "mismatch", and how to take into account things like the field and years-since-PhD of individual researchers. (Theory papers as a group receive far fewer citations on average; security and cryptography papers, far more.) The latter allows one to come up with variations of the h-index that are field- and age-adjusted for individual researchers. That is, the paper provides a systematic approach that attempts to correct for some of the known weaknesses of h-index scores. This sort of analysis is common in econometric papers, and is the same general type of analysis we did in our Groupon paper a while back.
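To give a concrete (if toy) sense of the two ingredients, here's a rough Python sketch of a field- and age-adjusted h-index and the department-level regression. To be clear, this is not the paper's actual model: the field multipliers, the square-root age discount, and all of the data below are my own illustrative assumptions.

```python
# Minimal sketch: (1) an h-index variant rescaled by field and career length,
# (2) a regression of department reputation on the mean adjusted score of its
# faculty. Multipliers, the age discount, and the data are illustrative only.
import numpy as np

def h_index(citations):
    """Classic h-index: the largest h such that h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

def adjusted_h_index(citations, field_factor, years_since_phd):
    """Field- and age-adjusted variant: rescale each paper's citations by a
    field multiplier, then discount the score by career length (assumed form)."""
    scaled = [c * field_factor for c in citations]
    return h_index(scaled) / max(years_since_phd, 1) ** 0.5

# Hypothetical faculty data: (citation counts, field multiplier, years since PhD).
faculty_by_dept = {
    "Dept A": [([120, 80, 40, 10], 1.4, 12), ([300, 150, 60, 20, 5], 1.0, 20)],
    "Dept B": [([30, 20, 10], 1.4, 6), ([90, 50, 25, 8], 0.8, 15)],
}
dept_reputation = {"Dept A": 4.5, "Dept B": 3.2}  # e.g., survey-based scores

# Regress reputation on the mean adjusted h-index of each department's faculty.
mean_scores = np.array([
    np.mean([adjusted_h_index(c, f, y) for c, f, y in faculty_by_dept[d]])
    for d in dept_reputation
])
reputations = np.array(list(dept_reputation.values()))
slope, intercept = np.polyfit(mean_scores, reputations, deg=1)
print(f"reputation ~ {intercept:.2f} + {slope:.2f} * mean adjusted h-index")
```

The paper, of course, estimates these adjustments from the data rather than positing them up front, which is exactly the point of the econometric machinery.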
I'm well aware that many people object to this type of ranking of individuals, some based on arguments of principle (this isn't how scientific research should be judged) and some based on technical arguments (these approaches are fundamentally flawed). This work doesn't really try to address the first type of argument, but arguably it goes a fair way toward addressing various technical concerns by showing a suitable application of econometric techniques. How well does it do? You'll have to look at the tables in the paper and decide for yourself.
I generally find these types of measurements useful, with the understanding that they're imperfect. (When asked to write a promotion letter for someone, for instance, I do examine citation counts.) To the extent that they become "more perfect", the implication, it seems to me, is that they will become the standard first-order approximation of quality. I don't see how such an outcome could reasonably be avoided. One argument is that it shouldn't be avoided, because such a metric is a useful guide to performance; if certain people don't perform well in relative terms under that metric, but should be thought of as exceptions to the first-order rule, then arguments for those exceptions (in letters and such) can be made when needed. But then not everyone can or will be an exception.