Conceivably, we could set up the reviews to have a score for each factor. For example, I'm on the PC for NSDI, a systems conference, where we have to give scores for Overall Merit, Technical Merit, Longevity (= how important this work will be over time), Novelty, and Writing (as if that score matters :) ). Personally, I don't like this approach, and I'm not intending to use it for STOC. It's more pain for me as a reviewer without, I think, giving meaningful information to the authors (instead of spending time deciding whether a paper is a 2 or a 3 on novelty, I'd rather give another comment in the review text!), and when it comes time to make decisions, I'm not really sure what I'm supposed to be (Pareto-)optimizing in this multidimensional space.
I'm a big believer, for conferences, in the "simple" method, as I've said before -- papers just get a score from 1 to 5, under the following scheme (a small code sketch of the mapping follows the list):
- 1: Bottom 1/2 of submissions.
- 2: Top 1/2 but not top 1/3 of submissions.
- 3: Top 1/3 but not top 1/5 of submissions.
- 4: Top 1/5 but not top 1/10 of submissions.
- 5: Top 1/10 of submissions.
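
If it helps to see the cutoffs concretely, here's a minimal sketch of how the scheme works -- the `score` function and its argument are just my illustration, not anything a PC actually runs. It takes a reviewer's estimate of where a paper falls among the submissions and spits out the 1-5 score:

```python
def score(top_fraction: float) -> int:
    """Return the 1-5 score for a paper estimated to be in the top
    `top_fraction` of submissions (0.0 = best paper, 1.0 = worst).

    Boundary cases (a paper exactly at a cutoff) are resolved upward
    here; the scheme itself doesn't say, and it hardly matters.
    """
    if top_fraction <= 0.10:   # top 1/10
        return 5
    if top_fraction <= 0.20:   # top 1/5 but not top 1/10
        return 4
    if top_fraction <= 1 / 3:  # top 1/3 but not top 1/5
        return 3
    if top_fraction <= 0.50:   # top 1/2 but not top 1/3
        return 2
    return 1                   # bottom 1/2
```

One nice side effect: the scheme pins down what a reviewer's score histogram should look like -- roughly 50% of papers get a 1, about 17% get a 2, about 13% get a 3, and 10% each get a 4 or a 5 -- so it's easy to sanity-check whether someone is scoring to the curve.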