Friday, June 06, 2008

Is Pure Theory "Wasteful"?

Daniel Lemire has a post on his blog entitled "Why Pure Theory Is Wasteful".

I encourage people from here, if they are so inclined, to comment at his blog...

12 comments:

Anonymous said...

I actually kind of agree with him.

Anonymous said...

Too many people out there are always ready to be judgmental of others' work.

Pure theory is great by itself. It solves problems with new models and ideas; others then implement systems using approximations of the theory, only to uncover more problems. This is the way science advances. It is possible some of it never leads to practical implementations, but it may still have a second-order effect: inspiring work that leads to better systems. Such work cannot be considered wasteful.

In the end, let's just be good free citizens of the world and allow others to do what they want. If the systems people don't want to learn new theory, no one is forcing them. Why keep passing judgment on styles of work? It's just frustration.

Nuf said, now go back to more theory!

Anonymous said...

I thought for a moment that he was referring to theoretical computer science as pure theory.

Indeed, in areas like software engineering and machine learning, I actually see a lack of the theory needed before they can be successfully applied.

Daniel Lemire said...

Come on! What is this? You have 3 anonymous comments in a row. I must be more powerful than I thought.

Anonymous said...

Folks, it's called a polemic; controversy for its own sake. Please let's not encourage this kind of behavior.

Anonymous said...

Ok, I'll play the devil's advocate here. When was the last time a complexity theory paper (mind you, not algorithms, not theory in general) led to a big impact? (And by impact, I mean impact outside theory.)

Anonymous said...

Here's a recent example. The Valiant-Vazirani lemma (a complexity result from the 1980s) has been applied (with some engineering modifications) to get radically faster methods for counting the number of models of Boolean formulas. This has had impact in AI. See the recent work by Gomes, Selman, and others.
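For context, here is the standard textbook formulation of the lemma (a paraphrase, not a quote from the work cited above): there is a randomized polynomial-time reduction that maps a CNF formula $\varphi$ on $n$ variables to a formula $\varphi'$ such that

\[
\varphi \notin \mathrm{SAT} \;\Rightarrow\; \varphi' \notin \mathrm{SAT},
\qquad
\varphi \in \mathrm{SAT} \;\Rightarrow\; \Pr\bigl[\varphi' \text{ has exactly one satisfying assignment}\bigr] \ge \frac{1}{8n}.
\]

The reduction isolates a single assignment by conjoining random pairwise-independent XOR (parity) constraints; randomly adding XOR constraints is, roughly, also the engineering idea behind the model-counting methods mentioned above.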

Anonymous said...

I don't understand why the author of this blog feels that finding an algorithm with a better competitive ratio is "bad theory."

Michael Mitzenmacher said...

Hi anonymous #8. Perhaps you just haven't been reading my blog long enough. Take a look at this diatribe from February:

http://mybiasedcoin.blogspot.com/2008/02/conference-reviewing-another-rant-on.html

In regard to the specific issue of "better competitive ratio", I feel that a paper that improves things by showing a 4-competitive algorithm for a problem where there was previously a 5-competitive algorithm, when there's not much of a point to either algorithm according to the criteria of my post above, is fundamentally pretty uninteresting, and hence "bad theory".
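For concreteness, the standard definition: an online algorithm $\mathrm{ALG}$ is $c$-competitive if, for every request sequence $\sigma$,

\[
\mathrm{ALG}(\sigma) \le c \cdot \mathrm{OPT}(\sigma) + \alpha,
\]

where $\mathrm{OPT}(\sigma)$ is the optimal offline cost and $\alpha$ is a constant independent of $\sigma$. So going from 5-competitive to 4-competitive tightens the worst-case multiplicative guarantee, nothing more.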

Michael Mitzenmacher said...

If that link didn't come out right, it was here, hopefully.

Anonymous said...

Constant factor improvements in competitive ratio can be significant in the real world. Theory folks have not woken up to it yet. It is ironic (?) that a theory technique that is somewhat practically relevant is being labeled as useless by theory folks.

Anonymous said...

I don't agree with Daniel's views. How would you decide whether a piece of research is useful or not? I think research is research, whether it is pure theory or not.
Research that is not useful now may find applications in the next 50 years, or it may never find any. Much of it is done just out of curiosity.

And even if it is not useful, it at least records what has already been thought of.

The question of testing research against practical applications is a different one; not all research is meant for that. I don't recall exactly, but some lectures at the Clay Institute, I think by Timothy Gowers, would clarify this further.