Monday, November 18, 2013

Easy Now

In the past year or so, I've gotten reviews back on multiple papers with the complaint that the result was too simple.  My interpretation of the reviews: sure, you might have come up with an efficient data structure that solved an at least seemingly interesting and/or useful problem, but wasn't the underlying math and/or data structure approach "too easy"?  (The reviews were either from theory conferences or, by my reading, from theoretically minded reviewers in settings where there might have been a mix of reviewers.)  I'll admit, it's a bit disheartening.  And I'll also acknowledge that blogging about it probably seems like sour grapes.  But here it goes.

From my standpoint, easy is a plus, not a minus.  I've taken to describing it this way.  In computer science we're used to measuring algorithms and data structures in terms of the two basic tradeoff costs -- time and space.  The third basic cost -- error -- takes some people a while to get used to.  That is, your algorithm can be "wrong" (either probabilistically, or in that it gives an approximate answer) in some hopefully well-defined way in order to trade off against time and space needs -- many times you're willing to settle for a good answer instead of the optimal answer if it's much quicker.  But there's also a fourth basic tradeoff cost -- programmer time -- that I find the theory community is happy to just ignore.  Simple is good, because more people will use simple things, even if they're not the most efficient possible, because often the usual time/space efficiency isn't really the bottleneck.  Coding up something that works is.  This is why Bloom filters show up in most of my talks (Suresh, drink!); for me they provide an outstanding example of issues related to the third and fourth tradeoff costs.
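To make the tradeoff concrete, here is a toy Bloom filter sketch in Python. This is my own illustration, not code from any paper under discussion; the class name and the trick of slicing the hash indices out of a single SHA-256 digest are convenient choices, not the canonical construction.

```python
# A minimal Bloom filter sketch: set membership with a small, tunable
# false-positive rate, traded for compact space and very little code.
import hashlib

class BloomFilter:
    def __init__(self, num_bits, num_hashes):
        # num_hashes * 4 bytes must fit in one SHA-256 digest (32 bytes),
        # so this toy version supports up to 8 hash functions.
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _indexes(self, item):
        # Derive the hash indices by slicing one digest; a common shortcut.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[4 * i : 4 * i + 4]
            yield int.from_bytes(chunk, "big") % self.num_bits

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def __contains__(self, item):
        # May answer True for an item never added (a false positive),
        # but never answers False for an item that was added.
        return all(self.bits[idx] for idx in self._indexes(item))

bf = BloomFilter(num_bits=1024, num_hashes=4)
bf.add("bloom")
print("bloom" in bf)   # True
print("filter" in bf)  # False, with high probability
```

A production version would size num_bits and num_hashes from the expected number of items and the target false-positive rate, but even this toy conveys the point: a few dozen lines buy you a data structure that trades a little error for a lot of saved space and programmer time.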

But I was inspired to go ahead and post something about this because of the following from Paul Krugman's blog today.  (His title is "The Power of Two" -- if that's not a sign, what is?)  I figured he says it (everything?) better than I could, though you'll have to substitute CS keywords for economics keywords below.
If this strikes you as too easy, and you think that real economics should involve harder math, well, I feel sorry for you — you just don’t get what it’s all about. (You’re what Rudi Dornbusch used to call a “fearful plumber”). And by the way, coming up with a really simple formulation of what seems at first like a very hard problem can take a lot of work. It sure did in the case of the MF lecture, where I actually did start with a really ugly saddle-path thingie until I realized that formulating the sudden stop the right way would make all of that go away.

Simple doesn’t mean stupid. Thinking that it does, does.
     


20 comments:

D. Eppstein said...

These sorts of reviews also have a very detrimental effect on the field, in that they discourage researchers from finding the simplest way to present their results. (Even if the researchers themselves are not deliberately trying to dress up their results as more complicated than they really are, the complicated ones are the ones that make it through the review filter.)

That is one of the reasons I like the Canadian Conference on Computational Geometry as a model: it's not a high-prestige conference, because they take almost all the submissions that are on topic, novel, and not obviously wrong. But they're not afraid to take the small "easy" results that help build the literature but do not look deep enough to go to more prestigious conferences, and I think it's important to have a place for these results.

PS said...

Thankfully, in CS systems research, the opposite is true. Papers are rejected because the solution presented is too complex. Simple and somewhat correct wins over complex and correct.

The underlying system tends to be quite complex (take a look at any recent SOSP/OSDI paper), but the central idea tends to be really simple, and explainable in 2-3 sentences/bullet points.

Jeff P said...

I've started sometimes stating explicitly and prominently in the intro of papers something like:
"an important advantage of this result is that it's analysis and algorithm are very simple," when relevant.

The sample size is still too small to know if it has helped, but it has prevented reviews that directly complain it is not complicated enough.

Anonymous said...

To echo what MM says: if the question is deemed of interest, simplicity in the solution should be considered an asset not a drawback.

Sadly this is not the case in TCS today. The more selective the conference, the more enamored with proof difficulty they seem to be.

Piotr Indyk said...

Hi Michael,

I sympathize with the issue you raised. My "partial solution," which I have employed over the last few years, will not come as a novelty or surprise to you, but it seems to be reasonably effective. Specifically: if something is simple and looks like it could actually work, I try to code it up (well, get my students to do it) and get at least some empirical results. They could end up in the paper itself, or in a precursor or follow-up paper. Many (although perhaps not all) good theory venues will find an empirical validation to be a valuable addition to the paper. As an added bonus, the paper can attract a much wider audience, and ultimately have more impact.

Piotr

Michael Mitzenmacher said...

JeffP: I think it's very useful to do what you say (and I do it myself); I'm unconvinced that it's effective. One paper I submitted had "Simple" in the title, but the reviewers still thought it was too simple. :)

Piotr: Similarly, I completely agree with you that experiments are good to add to (algorithmic) theory papers. However, I don't think this is really a positive or a negative in terms of getting papers accepted (to theory conferences). In fact, I've had reviews where people have told me to remove simulations (that I felt were important) from theoretical venues, so at least some people view them as a negative. (Why do you need simulations? You have a proof.)

Mahdi said...

It's even worse when one reviewer thinks the paper is uninteresting for being "too simple" and another reviewer considers the same submission "too complicated".

Usually "simple" also means smaller constants, which is a definite plus.

Anonymous said...

We've tried JeffP's suggestion on our own, first with a small sentence, then with an entire paragraph. In both cases the paper got rejected with "too simple" as the sole reason, in spite of resolving several well-studied open questions. We then submitted it to a journal, and it was rejected again for the same reason.

This time we challenged the decision through the editor (the first time we had ever done this) and requested that the referee explain why simplicity was a drawback of our paper. Only then did the referee stop long enough to ponder this, and to his/her credit replied, "indeed simplicity in this case is a plus".

Total time elapsed from submission to the first conference to an accept decision from the journal? Over four years.

Anonymous said...

In defence of referees, it is sometimes very hard to tell the difference between simple and trivial. That is why, when refereeing, I usually try to figure out the main result of a paper on my own before I read the technical part.

And indeed, sometimes a thing looks easy, but it takes months to come up with it. Sometimes all you need is one simple definition.

However, it is not true that simple things get rejected automatically. You just need to add additional results to the paper to convince the referees that the mass is there.

But yes, I also got bitten by the "it's too easy" bug. I take it as the price of doing business...

--Sariel

Anonymous said...

The reason why papers with simple proofs are considered "uninteresting" boils down to the fact that it is very hard to estimate the significance of the problem itself. The reason for this is that most research in TCS has little relevance to practical problems and algorithms. What's important and interesting is guided primarily by the mathematical tastes and CV-building efforts of the community members. This is not unique to TCS: theoretical research in economics, physics, and several other areas has similar problems. In such a world the problems themselves become secondary to many other interests (both mathematical and personal).

So if you write a paper on a problem that has not been explored (much) before and has a simple solution then a natural reaction is to classify it as a shallow question which is not worth much attention.

Anonymous said...

Anon 11:41pm gives a good explanation of how the preference for technical results came to be. The problem is when referees solely prefer complexity even when there is evidence that the problem is interesting and/or well explored as was the case for both Michael ("solved an at least seemingly interesting and/or useful problem") and Anon 9:35pm ("resolving several well studied open questions").

Michael Mitzenmacher said...

@David Eppstein :

My version of your CCCG is the Allerton Conference. I don't think it's viewed as high-prestige, but I've more than once used it as an outlet for results that didn't seem to have another natural conference home. And it turns out it's a very good conference, in that there are a lot of networking/EE theory people there or paying attention to it, and they seem to have less of an aversion to "simple".

Grigory Yaroslavtsev said...

I agree with Piotr, but from a slightly different angle. In my experience, a typical dilemma with such papers is whether to submit them to a 2-tier theory conference or to a 1-tier applied conference. Some of the advantages of the latter choice are 1) wider dissemination and 2) the opportunity to express simple ideas simply rather than complying with established theory writing standards, which can obfuscate basic ideas quite a lot (e.g. by throwing in unnecessary levels of generality, formalism, etc.).

Anonymous said...

My experience with experimental sections for papers has been similarly negative. In one recent case, we had a worst-case analysis that at first looked fairly pessimistic and unlikely to occur, thus leaving open the question of whether or not to use the algorithm in practice. We then ran experiments showing that "typical" inputs (as opposed to worst-case ones) were as bad as the worst-case theorem had predicted, only for the referee to complain: "experiments are not needed, theorem is already there".

In this specific case the paper had other flaws so this wasn't a deciding factor in the rejection, but it does back up the point that not all referees see experimental sections favorably.

GZ said...

Is this a more recent phenomenon, or has there always been a bias against simpler/less surprising papers?

Piotr Indyk said...

@Anon 1:16 AM, @ Michael Mitzenmacher (regarding negative feedback on experiments described or referred to in submissions to theory conferences):

I guess all of us deal with limited sample sizes, which makes it hard to infer general trends. Still, in my experience, the feedback ranged from neutral (as in, the value was lost on the reviewers) to positive (the reviewers thought that experiments add value to the paper). I don't remember ever getting negative feedback, although I am sure there is a first time for everything.

This isn't all roses, of course: I did have a paper rejected as "impractical", despite a clear reference to a followup publication at an applied conference using the algorithm. Of course, it is often hard to tell what the main reasons for the decision were.

Anonymous said...

"I did have a paper rejected as 'impractical', despite a clear reference to a followup publication at an applied conference using the algorithm."

He, he. I have the distinction of having had three papers studying algorithms from actual commercial production systems rejected because said algorithms were, according to the referee, impractical. What makes it funnier is that the reviewer was likely in academia, with little or no insight into what is or isn't used in the real world.

One of the problems is that because of NDAs we cannot write: "this family of algorithms is in use in Cisco Catalysts". All we can say is "this is the family of algorithms running in various modern network switches".

Anonymous said...

I've always wondered who these bad referees are that so unjustly victimize our whole community. Their code of silence seems much stronger than the mafia's. The bad referees seem to be legion, but I'm not aware that anyone has ever admitted to being one.

Anonymous said...

This is yet another instance of "Worse is Better" vs "The Right Thing." (look these up) These two approaches to design do exist and are still as much at odds with each other as they ever were.

David desJardins said...

This gets at a universal condition in academia: the cult of difficulty. Because people are largely rewarded for solving hard problems, they start to think that anything that seems too easy is therefore not valuable or worthwhile. Of course, this ignores the reality that if no one has done it before then maybe it's not as easy as it seems. When it really becomes a problem is when people who have natural aptitudes in certain areas avoid working on the things that they are good at because those things don't seem hard enough to them. I've seen this happen over and over (including in my own work).