Wednesday, June 17, 2020

Algorithms with Predictions: Survey and Workshop

There's a whole new, interesting theory trend -- Algorithms with Predictions.  The idea, spurred by advances in machine learning, is that you assume you have a predictor that tells you something about your input.  For example, in caching, you might have a prediction of when the item you are currently accessing will next be accessed.  Of course, machine learning predictions aren't perfect.  Still, you'd like to use this prediction to improve your caching algorithm, but from the theory side, we'd like provable statements.  For example, you could say: if my prediction is THIS good (e.g., the error is bounded under some metric), then my caching performance will correspondingly be at least THIS good (e.g., performance bounded in some way).
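To make this concrete, here's a minimal sketch of how such a predictor might plug into a cache: on a miss with a full cache, evict the item whose predicted next access is farthest in the future, which is just Belady's offline rule applied to the predictions.  The predictor callable below is a hypothetical stand-in for a learned model; the errors it makes are exactly what the provable statements have to account for.

# Minimal sketch: a cache that trusts a (possibly imperfect) next-access predictor.
# On a miss with a full cache, evict the item predicted to be needed furthest
# in the future -- Belady's rule applied to predictions rather than the truth.

class PredictedCache:
    def __init__(self, capacity, predictor):
        self.capacity = capacity
        self.predictor = predictor   # predictor(item, now) -> predicted next access time
        self.cache = set()

    def access(self, item, now):
        """Serve a request for `item` at time `now`; return True on a hit."""
        if item in self.cache:
            return True              # hit
        if len(self.cache) >= self.capacity:
            # Evict the cached item predicted to be requested latest.
            victim = max(self.cache, key=lambda x: self.predictor(x, now))
            self.cache.remove(victim)
        self.cache.add(item)
        return False                 # miss

# Toy usage: here the "predictor" is just a made-up table of next-access times.
toy_next_access = {"a": 10, "b": 3, "c": 7}
cache = PredictedCache(capacity=2,
                       predictor=lambda item, now: toy_next_access.get(item, float("inf")))
for t, request in enumerate(["a", "b", "c", "b"]):
    cache.access(request, t)

If the predictions were exact, this would match Belady's optimal offline policy; the interesting theory is how gracefully performance degrades as the predictions get worse.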

If you haven't seen the burgeoning spread of this line of work and are interested, you're in luck.  First, Sergei Vassilvitskii and I have written a brief survey that's now on the arxiv.  We wrote it for a collection Tim Roughgarden is organizing on Beyond Worst-Case Analysis (which we thought would be out by now, and should be out from the publisher soon-ish), but we've gone ahead and put a version on the arxiv to make it available.  The area is moving fast, so there are already many new results -- we hope to update the "survey" with new material as the area grows.

Second, one of the STOC'20 Workshops will be on Algorithms with Predictions.  It will be on Friday from 1-4pm, with speakers Tim Roughgarden, Edith Cohen, Ravi Kumar, and me.  I'll be talking about some of my recent work (in submission) on queues with predictions, and on partitioned learned Bloom filters.  (Arxiv papers are here, here, and here, but maybe you want to see the talk first.)  I'll also do a blog post on partitioned learned Bloom filters in the near future.

1 comment:

td said...

Predicted variables (https://arxiv.org/abs/1810.00619v1) seem like a good interface for many such methods