Wednesday, September 21, 2011

Beyond Worst Case Analysis Workshop

I'm just off a plane coming home from the Beyond Worst Case Analysis Workshop at Stanford.  It went really well, and I enjoyed it a great deal.

I think a variety of things made it work.  First, and I apologize to Tim Roughgarden for saying it, the execution was superb -- someone should make him organize more things.  Great room, great food, great staff on hand (thanks to Lynda Harris for being incredibly helpful), and a great collection of speakers.  Also, an excellent location.  Stanford is relatively easy to get to, with two major airports nearby (but avoid the cabs -- it's now over $100 to take a cab from SFO!), and the location guarantees an audience of Stanford, Microsoft, and Google folks.  (While not everyone was there all the time, I understand well over 100 people were registered.)

On to the content.  It was a great program -- apparently the talks will be on the web later, so you may want to check them out if you weren't there.  My favorites would have to be Dan Spielman's talk on smoothed analysis (perhaps because Dan just always gives great talks) and Kevin Leyton-Brown's talk on statistical methods (using learning theory) to predict an algorithm's performance on new instances.  (Can you accurately predict the running time of your satisfiability algorithm on a specific instance, before running it, just by taking a quick but careful look at the problem?  The answer seems to be -- yes, you can!)
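The rough idea, as I understand it, is to treat runtime prediction as a supervised learning problem: compute cheap features of each instance, record the solver's running time on instances you have already solved, and fit a regression model that predicts the time on new instances.  Here is a minimal sketch of that idea; the particular features, the synthetic data, and the choice of scikit-learn's RandomForestRegressor are all my own illustration, not the actual models or features from Kevin's talk.

```python
# Sketch of empirical runtime prediction: fit a regression model from
# cheap instance features to (log) running time, then predict on new
# instances.  Features and data below are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_instances = 500

# Pretend each SAT instance is summarized by cheap-to-compute features,
# e.g. number of variables, clause-to-variable ratio, fraction of binary clauses.
features = np.column_stack([
    rng.integers(100, 2000, n_instances),   # number of variables
    rng.uniform(3.0, 6.0, n_instances),     # clause/variable ratio
    rng.uniform(0.0, 0.5, n_instances),     # fraction of binary clauses
])

# Synthetic "observed" log running times: some structure plus noise,
# with a hardness peak near the clause/variable ratio 4.26.
log_runtime = (0.002 * features[:, 0]
               + 2.0 * np.abs(features[:, 1] - 4.26)
               + rng.normal(0.0, 0.3, n_instances))

X_train, X_test, y_train, y_test = train_test_split(
    features, log_runtime, test_size=0.2, random_state=0)

# Fit on instances already solved, predict on unseen ones.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print("mean absolute error (log time):", np.abs(predictions - y_test).mean())
```

The point of the sketch is just the workflow: the features are cheap to compute relative to actually running the solver, so an accurate model gives you a fast estimate of how long an instance will take before you commit to solving it.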

There was a panel that focused on questions like "Where could we point students to work on these sorts of problems?  What are the currently most glaring examples of algorithms whose properties are poorly explained by existing theoretical models, where there might be hope for progress?" and "To what extent can 'real-world data' be modeled?  Is it important to model accurately the properties of 'real-world data'?"  While the panel ended up being fairly uncontroversial -- I'm afraid no fights or even major disagreements broke out -- I found it very interesting to listen to the others on the panel give their opinions and insights on these questions.

I think people got into the spirit of the workshop.  Kevin, an AI person by trade, found it amazing that theorists were getting together to talk about heuristics and experiments.  (Kevin's talk was followed by Dick Karp talking about his work on tuning and validating heuristic algorithms -- though of course not all talks were on that theme.)  It will be interesting to see if this workshop inspires any specific new research -- but even if not, it was well organized, had strong content, and was well worth the trip. 

My talk slides are here.  

3 comments:

Anonymous said...

So what were the conclusions of the panel? Can you tell us more?

Peter said...

In order to sic students on these problems, it seems like access to either the aforementioned "real world data" or to a suitable model of such is required. Where can such things be generally found?

Anonymous said...

@Peter That question was asked on the panel. Dick Karp pointed out that some such data is available from contests conducted at DIMACS. Beyond that, the panel pointed out that the best resource is to talk directly to practitioners. Nevertheless, it does seem that compiling all this "real world data" would be useful for the community.