The discussion on citation counts reminded me of the related question of how much of a role this sort of data should play in tenure decisions.
In the world of extremes, one could imagine tenure decisions being based solely on letters without really looking at citation data. A motivation for this approach would be that letters give you a richer picture of how a person is viewed by their peers in the research community, what their work has been about, and what the potential impact of this work will be in the future. On the other hand, the system can be gamed, by making sure positively inclined people get chosen to write letters. (The person up for tenure might not exactly be able to game the system themselves, but certainly a friendly department chair could...) I have to admit, the letter-based approach feels rather "old-boy network" to me, which leaves me a bit uncomfortable.
At the other extreme, one could imagine tenure decisions being based solely on numerical data gathered from Google Scholar or other sources. A motivation for this approach would be that the numbers supposedly give you an unbiased picture of the impact of a researcher's work, allowing comparisons with comparable researchers. The data could also be used to gauge the derivative -- how one's work is changing and growing in impact over time. On the other hand, the system can be gamed, by forming groups that purposefully cite each other whenever possible, or by working on projects that yield better numbers without really having high impact. I have to admit, I like the numbers, and in some respects I trust them more than letters, but I still don't entirely trust them, either.
My limited experience with promotion decisions suggests that it makes sense to gather both types of data and check that they are consistent. When they are not consistent, the departmental arguments can begin. When asked to write letters, I know I look at the citation data, and I include it in the letter if I feel it is appropriate.