Tuesday, June 08, 2021

Machine Learning for Algorithms Workshop (July 13-14)

We're having an online workshop on "Machine Learning for Algorithms" on July 13-14, with a great group of speakers.  Announcement below, link at https://fodsi.us/ml4a.html, free registration (but please register in advance)!

In recent years there has been increasing interest in using machine learning to improve the performance of classical algorithms in computer science, by fine-tuning their behavior to adapt to the properties of the input distribution. This "data-driven" or "learning-based" approach to algorithm design has the potential to significantly improve the efficiency of some of the most widely used algorithms. For example, learned approaches have been used to design better data structures, online algorithms, streaming and sketching algorithms, market mechanisms, and algorithms for combinatorial optimization, similarity search, and inverse problems.  This virtual workshop will feature talks from experts at the forefront of this exciting area.

The workshop is organized by the Foundations of Data Science Institute (FODSI), a project supported by the NSF TRIPODS program (see fodsi.us). To attend, please register at https://fodsi.us/ml4a.html

Sunday, November 29, 2020

ADAPT: Designing Activity-Informed Viral Diagnostic Assays

I wanted to give a pointer to a new preprint on bioRxiv on developing diagnostic assays for viruses, by (first author) Hayden Metsky (and others!) out of the Sabeti Lab at the Broad Institute (which I've been a bit involved with).  Hayden, who somehow is both a computer science PhD and an expert in virology, has devised a novel software pipeline for developing diagnostics that are designed from the start to deal with genomic diversity (a virus evolves to have many somewhat different variants) and the challenge of false matches (you don't want false positives from matching some other, different virus) -- also known as sensitivity and specificity.  Algorithmically, he uses machine learning to score candidate probes (tests for matches to small pieces of the genome), and uses locality-sensitive hashing, combinatorial optimization algorithms for submodular maximization, and sharding pattern matching across tries as substages in the overall design.
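To give a flavor of just one of those algorithmic substages, here is a minimal Python sketch of greedy probe selection under a cardinality budget -- the standard greedy approximation for a monotone submodular coverage objective.  To be clear, this is my own toy illustration, not code from ADAPT: the function name greedy_probe_selection, the covers dictionary, and the simple "number of genomes detected" objective are hypothetical stand-ins for a much richer design problem.

# A hedged illustration of one substage mentioned above: greedily choosing
# probes to maximize coverage of a set of genome variants, i.e., the standard
# greedy (1 - 1/e)-approximation for monotone submodular maximization under a
# cardinality constraint.  The data model here is hypothetical and far simpler
# than ADAPT's actual objective.
def greedy_probe_selection(covers, budget):
    """covers: dict mapping probe id -> set of genome ids that probe detects.
    budget: maximum number of probes to select."""
    selected = []
    covered = set()
    for _ in range(budget):
        best, best_gain = None, 0
        for probe, hits in covers.items():
            if probe in selected:
                continue
            gain = len(hits - covered)  # marginal number of newly covered genomes
            if gain > best_gain:
                best, best_gain = probe, gain
        if best is None:  # no remaining probe adds coverage
            break
        selected.append(best)
        covered |= covers[best]
    return selected, covered

# Toy usage (made-up data):
#   covers = {"p1": {"g1", "g2"}, "p2": {"g2", "g3"}, "p3": {"g4"}}
#   greedy_probe_selection(covers, budget=2)

The real pipeline layers machine-learned activity scores, specificity constraints, and clever indexing on top of anything this simple, but the greedy coverage step gives the basic flavor.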

I am always excited to see algorithmic ideas being used to solve real-world problems, and this is a deep and difficult example of the "algorithmic lens" at work.  I am hopeful that this type of technology will help drive the development of viral diagnostic and monitoring methods forward.

Thursday, November 26, 2020

TCS Connections Questionnaire

I wanted to link to a survey that is now up, entitled the Committee on TCS Connections Questionnaire.  The committee is examining possible changes to how the theoretical computer science community approaches publishing, with a focus on FOCS/STOC.

I personally approve of the idea of the committee, though I admit I am concerned that it's too little, too late.  For years, the FOCS/STOC culture has been concerned with some sense of "prestige" -- the number of accepted papers has to be kept low, because we want people outside of theory to take FOCS/STOC as an imprimatur for the top theory work.  Because of this, FOCS/STOC has stayed essentially the same size, while the field (whether you view the field as TCS or computer science writ large) has expanded.  This has led to a proliferation of additional conferences (ITCS, HALG, various theory workshops...) that reduce the importance of FOCS/STOC and their role in creating community cohesion.  It has also led to other areas (most notably AI) becoming the home for work that should be quite at home in major TCS conferences.

I don't think FOCS/STOC is what it used to be (the central home for theory results, when theory was smaller) or what it has supposedly wanted to be (the home for the best theory results).  I think it makes a lot of sense to stop and think about what they should be for the future.  Hence the survey is important, and I encourage the theoretical computer science community to respond.  I'm not sure, though, that there are great answers -- external forces, and the community's general aversion to change, may mean that there is not much to be done.

Wednesday, September 02, 2020

Broad Testing Thank You

I have a loose association with the Broad Institute, an institute created so that "complementary expertise of the genomic scientists and the chemical biologists across MIT and Harvard be brought together in one place to drive the transformation of medicine with molecular knowledge."  (See https://www.broadinstitute.org/history )

They recently passed an amazing milestone, having performed over 1 million Covid tests.  They weren't set up to be a Covid testing lab, but they converted their institute space to respond to the Covid crisis.  (See https://covid19-testing.broadinstitute.org/ )

In short, they stepped up.  They certainly didn't have to, but they did.  Maybe it will help them do good science, now and in the future.  But my understanding is that they saw a clear societal need (the Broad has lots of outbreak specialists) and realized they had the expertise and equipment to do good that went beyond science.  I just wanted to give a shout-out to the Broad for their good works, scientific and societal.

Saturday, June 27, 2020

STOC Workshop on "Algorithms with Predictions"

The STOC workshop on Algorithms with Predictions was on Friday, and I thought it went really well!  I can't speak for my talk, but the other three talks (by Tim Roughgarden, Edith Cohen, and Ravi Kumar) were fantastic and inspiring, and I really recommend them for anyone with an interest in "Beyond Worst-Case Analysis".

The talks are all on YouTube, and the workshop page is full of useful links and information.

Thursday, June 25, 2020

Writing Code for a Paper : A Note to Students

This post relates to some of the stuff I'll be presenting at Friday's STOC workshop on Algorithms with Predictions, but it is also for future students in my classes, who sometimes wonder why I have them write code on theory material.  (Somehow this might be a theme for a lecture in my grad class next semester, so maybe this is a first attempt at notes for the lecture.  Comments and suggestions welcome.)

Some of the work I'm doing looks at how queueing systems perform with various sorts of predictions of the times the jobs take.  This particular work I'm doing on my own.  (While most of my research has been and is with collaborators -- it's one of the things I enjoy about computer science; we're a very collaborative field, which seems to surprise many people -- I still sometimes like to do research projects on my own.  I've looked at queueing systems since my PhD thesis, it's a bit outside the research interests of most of my collaborator pool, and it's "fun" sometimes to do my own thing.  The reason "fun" is in quotes is described below.)

Often in my work in queueing I'm looking at mean-field limits (meant to model infinite systems of queues, which provide a good approximation for large finite systems under reasonable assumptions), where I can derive families of differential equations describing the system behavior.  I can also simulate the large finite system directly and make sure the results match.  I generally do this for all of these types of papers.
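To make the workflow concrete, here is a minimal sketch (not the model from the paper) of the kind of cross-check I mean, using the classic supermarket / power-of-d-choices model: simulate n queues directly, integrate the mean-field differential equations ds_i/dt = lambda*(s_{i-1}^d - s_i^d) - (s_i - s_{i+1}) (where s_i is the fraction of queues with at least i jobs), and compare.  The parameters and function names are just for illustration.

# A minimal sketch of the cross-check described above, using the classic
# supermarket (power-of-d-choices) model as a stand-in for the paper's model.
# s_i = fraction of queues with at least i jobs; the mean-field ODEs are
#   ds_i/dt = lam * (s_{i-1}^d - s_i^d) - (s_i - s_{i+1}),  with s_0 = 1.
import random

def simulate(n=5000, lam=0.9, d=2, t_end=50.0, max_len=30):
    """Direct simulation of n exponential-service queues, arrivals at total
    rate lam*n, each arrival joining the shortest of d randomly chosen queues."""
    queues = [0] * n
    t = 0.0
    total_rate = (lam + 1.0) * n  # arrival rate lam*n plus one service clock per queue
    while t < t_end:
        t += random.expovariate(total_rate)
        if random.random() < lam / (lam + 1.0):
            # arrival: join the shortest of d queues sampled with replacement
            choices = [random.randrange(n) for _ in range(d)]
            i = min(choices, key=lambda j: queues[j])
            queues[i] += 1
        else:
            # a service clock fires; if that queue is empty, nothing happens
            i = random.randrange(n)
            if queues[i] > 0:
                queues[i] -= 1
    return [sum(1 for q in queues if q >= i) / n for i in range(max_len)]

def mean_field(lam=0.9, d=2, t_end=50.0, dt=0.001, max_len=30):
    """Euler integration of the mean-field ODEs, starting from empty queues."""
    s = [1.0] + [0.0] * max_len  # s[0] = 1 always; s[i] for i >= 1
    for _ in range(int(t_end / dt)):
        new = s[:]
        for i in range(1, max_len):
            new[i] += dt * (lam * (s[i - 1] ** d - s[i] ** d) - (s[i] - s[i + 1]))
        s = new
    return s[:max_len]

if __name__ == "__main__":
    sim, ode = simulate(), mean_field()
    for i in range(1, 8):
        # the two columns should be close; agreement improves as n grows
        print(i, round(sim[i], 4), round(ode[i], 4))

In my actual papers the models are more involved, but the workflow is the same: two independent computations of the same numbers, checked against each other.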

Now the numbers I get from simulating the system directly and from simulating the differential equations should match (say within 1% or so).  If they don't, something is wrong.  In an effort to avoid wrongness, I won't consider the paper ready for outside consumption until I get a match.  Unfortunately, there are three ways things can go wrong.

1.  My simulation code for the queueing system might have bugs.
2.  My code to evaluate the differential equations might have bugs.
3.  My equations themselves might have bugs.

And I find there are two main categories of bugs.  Sometimes the bugs are simple/standard coding mistakes -- I'm off by 1 on an index, or I cut and paste and forget to change an i++ to a j++ in one of my double loops, or I type an x instead of a y.  Usually it's pretty easy to find these things, although I've had times where a hidden typo took hours to find.  But sometimes the bug is a thinking mistake -- I've forgotten a subcase, so my equations aren't complete (and so my code evaluating the equations won't give the right answer), or I've not handled a subcase correctly in my simulation.  That type usually takes longer.

Usually, the first time through, nearly all of these types of bugs happen -- my math is off, I've typed some stuff wrong, it can all happen.  And then, like coders everywhere, I go through and fix it.  And it's painful.  Sometimes everything goes right: a quick check or two and everything works.  For more complicated stuff, it's more time figuring out what went wrong than it took to set up the code to begin with.  And being the personality type not to let things sit, that can mean late nights figuring out what went wrong.

For my talk this week, there was one last problem I wanted to include, which meant finally taking the model and writing the equations and code.  I didn't even need it for the talk, but it's also the last bit before I put a paper draft on arXiv, so, taking advantage of a deadline, I figured now was the time.  Which means that over the last two days, I've spent many hours (and a late night) trying to resolve the disagreements.

On the plus side, when everything finally works, it's a wonderful feeling.  And it always makes me feel better when I have worked to verify my math this way;  this time, what kept me up well past midnight and took several hours to track down was actually a boundary case I had left out of the equations.  (I had looked at the equations over and over again without noticing I had left out the subcase;  I had to step through numbers from the differential equations one time step at a time to track down what was missing, and then the numbers told me what I had done wrong.)

On the down side, it's work, and debugging is never particularly fun.

For students out there, maybe I'm just explaining that I understand the pain that I am putting you through.  You may wonder why I have you do simulations that take a few hours if you do them well, but days if you don't think through the best approach.  But using programming and theory together can be powerful;  it's helped me countless times in my research.

(Related: I've written before on theory and experiments, along with a viewpoint by Jeffrey Ullman.)


Wednesday, June 17, 2020

Algorithms with Predictions: Survey and Workshop

There's a whole new, interesting theory trend -- Algorithms with Predictions.  The idea, spurred by advances in machine learning, is that you assume you have a predictor that tells you something about your input.  For example, in caching, you might have a prediction of when the item you are currently accessing will next be accessed.  Of course, machine learning predictions aren't perfect.  Still, you'd like to use this prediction to improve your caching algorithm, and from the theory side, we'd like provable statements.  For example, you could say: if my prediction is THIS good (e.g., the error is bounded under some metric), then my caching performance will correspondingly be at least THIS good (e.g., performance bounded in some way).
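As a toy illustration of how a prediction might be plugged in (this is a sketch of the general idea, not any specific paper's algorithm), here is a cache that evicts the item whose predicted next request time is furthest in the future -- a learned analogue of Belady's offline rule.  The PredictiveCache class and the predict_next_use interface are hypothetical.

# A toy sketch of prediction-based caching: evict the item whose *predicted*
# next request time is furthest in the future (a learned analogue of Belady's
# offline rule).  The predictor interface is a hypothetical stand-in; real
# algorithms in this area (e.g., robust variants of LRU/Marker) are more subtle.
class PredictiveCache:
    def __init__(self, capacity, predict_next_use):
        self.capacity = capacity
        self.predict_next_use = predict_next_use  # (item, now) -> predicted next request time
        self.contents = {}  # item -> predicted time of next request

    def access(self, item, now):
        """Return True on a hit; on a miss, insert the item, evicting if full."""
        hit = item in self.contents
        if not hit and len(self.contents) >= self.capacity:
            # evict the item predicted to be requested furthest in the future
            victim = max(self.contents, key=self.contents.get)
            del self.contents[victim]
        self.contents[item] = self.predict_next_use(item, now)
        return hit

With a perfect predictor this recovers the optimal offline policy; the interesting theory questions are how gracefully performance degrades as the predictions get worse, and how to stay robust (e.g., never much worse than LRU) when they're bad.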

If you haven't seen the burgeoning spread of this line of work and are interested, you're in luck.  First, Sergei Vassilvitskii and I have written a brief survey that's now on the arXiv.  We had written it for a collection Tim Roughgarden is organizing on Beyond Worst-Case Analysis (which we thought would be out by now, and should be out from the publisher soon-ish), but we've gone ahead and put a version on the arXiv to make it available.  The area is moving fast, so there are already many new results -- we hope to update the "survey" with new material as the area grows.

Second, one of the STOC'20 workshops will be on Algorithms with Predictions.  It will be on Friday from 1-4pm, with speakers Tim Roughgarden, Edith Cohen, Ravi Kumar, and me.  I'll be talking about some of my recent work (in submission) on queues with predictions and on partitioned learned Bloom filters.  (arXiv papers are here, here, and here, but maybe you want to see the talk first.)  I'll also do a blog post on partitioned learned Bloom filters in the near future.

Saturday, June 06, 2020

CATCS Visioning Workshop

Reposting an important call -- these events have had big impact in the past!
The CATCS will be hosting a virtual “Visioning Workshop” the week of July 20 in order to identify broad research themes within theoretical computer science (TCS) that have potential for a major impact in the future. The goals are similar to the workshop of the same name in 2008: to package these themes in a way that can be consumed by the general public, which we would deliver primarily to the Computing Community Consortium and others (e.g. funding agencies) to help them advocate for TCS.
While participation in the workshop is primarily through invitation, we have a few slots available for the broader community. If you are interested in participating, please see details of the application process below. The workshop will be organized according to area-wise breakout groups. Each breakout group will have 1-2 leads. Breakout groups will meet for 4-5 hours spread across several days and will be tasked with brainstorming ideas and preparing materials related to their topic. Leads are further expected to participate in plenary sessions held on Monday July 20 and Friday July 24 (4-5 hrs of additional time) where these materials will be discussed.
If you are interested in participating in the workshop, please fill out this Google form by Monday June 15. On this form, applicants are asked to contribute one or two major results in the last 10 years whose significance can be explained in layperson terms, and one or two major challenges for theory whose significance can be explained in layperson terms. These descriptions can be very brief. We will just use them to select participants and create breakout groups.

Sunday, February 02, 2020

Current CS 124 Stats

This is as much a personal record for me (and perhaps of interest to Harvard people who read the blog) as anything else.  But I'm also putting the numbers here so others have them for comparison.

I'm teaching the undergraduate algorithms and data structures class, CS 124, for the first time in a few years, so let's look at the initial enrollment numbers.  Harvard has its strange and wonderful shopping period, so I'm just getting the numbers now after the first week.  (Feel free to comment with comparative stats from your own institution!)

Current numbers are a bit over 280 students, about 20 more than last year, and about 60 more than the last time I taught it.  It seems like the course has been growing by about 20 people a year for the last few years.  This number may go up or down a few (most likely down), but is probably close to where the class will end up.  I keep thinking we've hit "peak 124", but it keeps going up.  Part of this seems to be because a few more grad students (especially various master's students) are taking it.  Something like 1/7 of the undergraduates take this course now, which to me always seems amazing.  When I was an undergraduate at Harvard, CS courses were just not this big.  My first year teaching at Harvard, CS 124 was about 90 students, and that was huge.  I do realize, of course, that many places have even larger CS classes; I'm not meaning to complain.

Surprisingly to me, about 1/4 of the course is first-years.  This is more than I remember from my previous years teaching it; it's a hard course that requires both math and programming background, so first-years usually wait.  On the other hand, the first-years taking it are self-selecting, and in past years the first-year grade average has been notably higher than the class average.

About 40% are sophomores, not surprisingly the largest group.  Then there's a mix of juniors, seniors, and grad students from various programs.

The course is also offered through the Extension School; there the numbers change a lot.  Right now it's about 45, but that will most likely drop further.

If I've counted right, it's my 18th time teaching the class.  I'm pretty sure I'll get to 20.  I suppose it's possible I'll get to 30 or more, but it's likely someone else will take it over at some point.