Wednesday, February 20, 2008

Conference Reviewing: Another Rant on Competitive Analysis

Because of inadequate attention to conference deadlines on my part, I find myself currently reading lots of papers while serving concurrently on the SPAA and ICALP Program Committees.

And somehow, this means I'm reading lots of papers with competitive analyses of algorithms. (Both the standard variety and the more new-fangled but generally similar "price of anarchy" variety.) Looking back on it, I should simply have told the PC chairs that it might be best not to assign me any such papers, as I'm not sure it's fair to the authors who get me as a reviewer.

OK, I'm exaggerating. Don't get me wrong; I'm not automatically assigning negative scores to any such paper. Some of these papers I really like. But in general, as I've expressed frequently elsewhere, I'm a skeptic when it comes to competitive analysis. Why exactly should anyone care that you have an algorithm that in the worst case always gets within a factor of 6.5 of the optimal solution possible? Why is that the right way to judge an algorithm? (Never mind the offshoots, like those where you have an algorithm that's competitive if it has twice as much memory as you allow for the optimal algorithm, and so on...)
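For concreteness, here is what such a guarantee says: an online algorithm is c-competitive if, on every input, its cost is at most c times the cost of the optimal offline solution (possibly plus an additive constant). A minimal sketch in Python for the classic ski-rental problem, where the purchase price B and the range of horizons are my own illustrative parameters rather than anything from a particular paper:

    # Ski rental: rent skis for 1 per day, or buy them once for B.
    # The break-even strategy (rent until you have spent B - 1, then buy)
    # is (2 - 1/B)-competitive.

    def online_cost(days, B):
        """Break-even strategy: rent for the first B - 1 days, buy on day B."""
        return days if days < B else (B - 1) + B

    def offline_cost(days, B):
        """The offline optimum knows the horizon: rent throughout, or buy at once."""
        return min(days, B)

    B = 10
    worst = max(online_cost(d, B) / offline_cost(d, B) for d in range(1, 500))
    print(worst)  # 1.9, i.e., 2 - 1/B: always within a factor of 2 of optimal

The complaint here is not with the arithmetic, of course, but with whether this worst-case ratio is the right yardstick in the first place.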

In my mind, the problem is that competitive analysis has become such a standard in Theoretical Computer Science that people don't feel the need to justify anywhere in the paper why they are using competitive analysis, when in fact these papers cry out for motivation.

Here are some motivations that tend to work for me when reading competitive analysis papers.

1. The resulting problem is just so mathematically interesting that we really have to look at it. (One person I can think of who manages to pull off this motivation with me time and again is Susanne Albers.) Generally, to pull off this motivation, your problem should be simple, clean, and natural; if you start to bog down in details, claiming that you're giving a "more realistic" model of the problem, you've already missed the boat in my mind. (If you really wanted to be realistic, you wouldn't be using competitive analysis...)
2. You are considering the performance of real, existing algorithms. This seems reasonable to me; here's something people actually do (like, say, Shortest Job First scheduling), and you want to gain some insight into its worst-case performance, for this notion of worst case. Many price-of-anarchy results actually correspond to the behavior of reasonable algorithms one might consider in practice, which is why I admit I often find myself more sympathetic to price-of-anarchy results; they seem better motivated.
3. You are using competitive analysis for an essentially (complexity)-theoretical result.
4. You are using competitive analysis to gain significant insight into the problem (and algorithms for the problem) that could be useful in understanding real performance or designing real algorithms. I think most papers implicitly claim to fit into this last category, simply because they obviously don't fit into the other three, but usually this isn't explicitly discussed. I think this motivation works best when the problem is inherently very hard.

I'm sure there are other reasonable motivations. But overall I would call for more restraint from the community; not everything should be subjected to competitive analysis just because it can be.

14 comments:

Anonymous said...

Hi Michael,

You can make exactly the same argument for (off-line) approximation algorithms, and worst-case analysis (on time, space, solution quality, etc.) in general.

Some non-theory people have used this line of argument to claim that TCS is useless in practice.

Jonathan Katz said...

If I understand you correctly, everything you say would apply to *any* algorithms paper, not just those dealing with competitive analysis. Turning this around, I think competitive analysis is just as well-motivated as worst-case analysis (and different people may have different opinions as to whether worst-case analysis is well-motivated or not).

Michael Mitzenmacher said...

Oh, please. Yes, you can make the same high-level argument for any algorithms paper -- the paper should actually have some motivation. But just on the criteria I raised (mathematical niceness/interestingness, applicability and insight into practice, never mind its role in complexity), worst-case analysis of algorithms has proven, and in my mind actually is, much more valuable, robust, whatever you want to call it, compared to competitive analysis.

So yes, you could make the same argument against worst-case analysis, but it would be a much, much weaker argument. And yes, we should recognize that there are times when worst-case analysis isn't necessarily all that well-motivated, but as a community I think we already do that. (Individuals may vary.) As a community, I think we've failed to exercise restraint and say that for many problems, competitive analysis isn't really well motivated.

Anonymous said...

For many problems, competitive analysis isn't really well motivated.

This is more or less common knowledge within the on-line community. We have put pen to paper on this in two survey papers which summarize work in the field:

Reza Dorrigiv and Alejandro López-Ortiz. A Survey of Performance Measures for On-line Algorithms. SIGACT News, Vol. 36, No. 3, September 2005.

Reza Dorrigiv and Alejandro López-Ortiz. Closing the Gap Between Theory and Practice: New Measures for On-line Algorithm Analysis. WALCOM 2008, pp. 13-24.

Alex

Anonymous said...

I have found it interesting to explain the concept of competitive ratio to non-CS folk (from high school kids to EE PhDs). They could not be more confused by such a "meaningless" definition the first time they hear about it - "So you are saying an algorithm is still considered to be good if it only reaches 10% of the optimal solution?"

Anonymous said...

"So you are saying an algorithm is still considered to be good if it is reaching 10% of the optimal solution?"

Or what is worse, we may know of a very good online algorithm, in the sense that it compares favorably with every other online algorithm, but the competitive ratio gives it a low score because it compares badly against the idealized, unachievable offline optimum.

Alex
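A minimal sketch of this phenomenon in Python, using the standard paging problem (the cache size k and the cyclic request sequence are my own illustrative choices, not from the comment): LRU works well in practice, yet against the clairvoyant offline optimum it looks a factor of k worse.

    from collections import OrderedDict

    def lru_faults(requests, k):
        """Count page faults for LRU with a cache of size k."""
        cache = OrderedDict()
        faults = 0
        for p in requests:
            if p in cache:
                cache.move_to_end(p)           # p is now most recently used
            else:
                faults += 1
                if len(cache) == k:
                    cache.popitem(last=False)  # evict the least recently used
                cache[p] = True
        return faults

    def opt_faults(requests, k):
        """Belady's offline optimum: evict the page needed furthest in the future."""
        cache, faults = set(), 0
        for i, p in enumerate(requests):
            if p in cache:
                continue
            faults += 1
            if len(cache) == k:
                def next_use(q):
                    try:
                        return requests.index(q, i + 1)
                    except ValueError:
                        return float("inf")    # never requested again
                cache.remove(max(cache, key=next_use))
            cache.add(p)
        return faults

    k = 4
    adversarial = list(range(k + 1)) * 50      # cycle through k + 1 pages
    print(lru_faults(adversarial, k) / opt_faults(adversarial, k))  # close to k

On this sequence LRU faults on every request while the offline optimum faults only about once per k requests; and the standard lower bound says every deterministic online algorithm has a ratio of at least k on some sequence. That is the point above: the yardstick, not the algorithm, drives the score.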

Anonymous said...

I think hn has a point. Do you think the same way about approximation algorithms? At least in an off-line scenario, you may even be able to find the optimal solution in reasonable time for virtually every problem instance appearing in practice. Usually you cannot hope for something like that for an on-line problem.

I'm not saying that competitive analysis is the right thing all the time. I'm just saying that if competitive analysis is not the way to go, approximation algorithms aren't either.

Anonymous said...

Competitive analysis sounds like an engineering problem, not a theoretical problem.

Michael Mitzenmacher said...

Anonymous #7: I also have some skepticism regarding approximation algorithms, primarily in cases where the progress takes the form of a 6-approximation, then a 4-approximation, then a 3.5-approximation, and so on. (That is, the improvements are just in the constant.) Usually such problems are amenable to local search algorithms in practice, and nobody ever uses any of the approximation algorithms.

Again, other motivations (like those I gave in my post) may still apply.
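As one concrete instance of the local-search point (my own illustrative example, not the commenter's): simple local search for Max-Cut is the kind of algorithm people actually run, and at a local optimum it even carries its own worst-case factor-2 guarantee, since at least half of all edges end up cut.

    import random

    def local_search_maxcut(edges, n, seed=0):
        """Flip any vertex that improves the cut, until none does.
        At a local optimum at least half of all edges are cut, so the
        cut is within a factor of 2 of the maximum."""
        rng = random.Random(seed)
        side = [rng.randint(0, 1) for _ in range(n)]
        improved = True
        while improved:
            improved = False
            for v in range(n):
                incident = [(a, b) for (a, b) in edges if v in (a, b)]
                cut = sum(1 for a, b in incident if side[a] != side[b])
                if len(incident) - cut > cut:  # flipping v strictly improves
                    side[v] ^= 1
                    improved = True
        return side, sum(1 for a, b in edges if side[a] != side[b])

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a toy graph
    print(local_search_maxcut(edges, 4)[1])           # at least len(edges) / 2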

Anonymous said...

You raised some interesting points. But I am wondering: what are the other ways to analyze problems that are inherently online in nature, where you don't know all the input parameters in advance? I felt that competitive analysis is a natural way to analyze such online problems.

Anonymous said...

I felt that competitive analysis is a natural way to analyze such online problems.

If you think about it, what competitive analysis is doing is measuring the performance of an online algorithm relative to the complexity of the input. The proxy for "complexity of the input" is the performance of the offline optimum on said input, but there is nothing magical or indeed particularly natural about this measure of difficulty. Other measures make as much or more sense. Have a look at the surveys quoted above for a sample of alternatives.
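In symbols (this is the standard definition; the notation is mine, not the commenter's): for a cost-minimization problem, the competitive ratio of an online algorithm ALG is

    \mathrm{CR}(\mathrm{ALG}) \;=\; \sup_{\sigma} \frac{\mathrm{ALG}(\sigma)}{\mathrm{OPT}(\sigma)},

taken over all request sequences sigma, and the alternative measures in the surveys amount to replacing OPT(sigma) in the denominator with a different proxy for the difficulty of sigma.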

Anonymous said...

If 3.5-approximations are not so interesting, and PTASs take time that has functions of 1/epsilon in the exponent... does that imply that we should be skeptical about a large majority of the work in approximation algorithms?

Michael Mitzenmacher said...

Anonymous 12: I can understand the reasoning that it's potentially complexity-theoretically interesting to know if a problem has a constant-competitive algorithm or a PTAS. This fits with my possible motivation 3. If the problem's mathematically pretty, perhaps also motivation 1. If the competitive algorithm being studied is a "real" algorithm for the problem, motivation 2 would come into play. Or if the study gave some real insight into how to design algorithms for the problem, motivation 4.

I think it pays to be skeptical of all lines of research and question their value. (That's a way to figure out what YOU want to work on.) But yes, I think I've been on record as stating I'm skeptical about work in approximation algorithms when I feel it doesn't seem well-motivated (using one of the above motivations, or some other motivation).

Anonymous said...

The author of this blog appears to believe that improving constant factors for approximation algorithms may not be particularly interesting or useful. Yet is it not in this type of work, where it becomes harder and harder to improve the algorithm, that novel and creative techniques are discovered? Surely, then, such work is worthy of effort.