Sunday, November 29, 2020

ADAPT: Designing Activity-Informed Viral Diagnostic Assays

I wanted to give a pointer to a new preprint on bioRxiv on developing diagnostic assays for viruses, by (first author) Hayden Metsky (and others!) out of the Sabeti Lab at the Broad Institute (a lab I've been a bit involved with).  Hayden, who somehow is both a computer science PhD and an expert in virology, has devised a novel software pipeline for designing diagnostics that deal from the start with genomic diversity (a virus evolves into many somewhat different variants) and with the challenge of false matches (you don't want false positives from matching some other, different virus) -- also known as sensitivity and specificity.  Algorithmically, he uses machine learning to score candidate probes -- short sequences designed to match small pieces of the viral genome -- and uses locality-sensitive hashing, combinatorial optimization algorithms for submodular maximization, and sharded pattern matching across tries as substages in the overall design.
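
To give a flavor of the submodular-maximization substage, here is a toy sketch (emphatically not ADAPT's actual code or model): greedily choosing a small set of probes to cover a diverse collection of genome variants.  Coverage is a monotone submodular set function, so the classic greedy algorithm comes with a (1 - 1/e) approximation guarantee under a cardinality constraint.  The function name and the covers predicate below are hypothetical stand-ins for whatever detection criterion the real pipeline uses.

def greedy_probe_selection(probes, variants, covers, k):
    """probes: candidate probe ids; variants: genome-variant ids;
    covers(p, v): True if probe p is predicted to detect variant v;
    k: number of probes allowed."""
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, 0
        for p in probes:
            if p in chosen:
                continue
            # Marginal gain: how many not-yet-covered variants p would add.
            gain = sum(1 for v in variants if v not in covered and covers(p, v))
            if gain > best_gain:
                best, best_gain = p, gain
        if best is None:   # no remaining probe adds any coverage
            break
        chosen.append(best)
        covered.update(v for v in variants if covers(best, v))
    return chosen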

I am always excited to see algorithmic ideas being used to solve real-world problems, and this is a deep and difficult example of the "algorithmic lens" at work.  I am hopeful that this type of technology will help drive the development of viral diagnostic and monitoring methods forward.

Thursday, November 26, 2020

TCS Connections Questionnaire

I wanted to link to a survey that is now up, the Committee on TCS Connections Questionnaire.  The committee is examining possible changes to approaches to publishing in the theoretical computer science community, with a focus on FOCS/STOC.

I personally approve of the idea of the committee, though I admit I am concerned that it's too little, too late.  For years, the FOCS/STOC culture has been concerned with some sense of "prestige" -- the number of accepted papers has to be kept low, because we want people outside of theory to take FOCS/STOC as an imprimatur for the top theory work.  Because of this, FOCS/STOC has stayed essentially the same size while the field (whether you view the field as TCS or computer science writ large) has expanded.  This has led to a proliferation of additional conferences (ITCS, HALG, various theory workshops...) that reduce the importance of FOCS/STOC and their role in creating community cohesion.  It has also led to other areas (most notably AI) becoming the home for work that would be quite at home in major TCS conferences.

I don't think FOCS/STOC is what it used to be (the central home for theory results, when theory was smaller) or what it has supposedly wanted to be (the home for the best theory results).  I think it makes a lot of sense to stop and think about what they should be for the future.  Hence the survey is important, and I encourage the theoretical computer science community to respond.  I'm not sure, though, that there are great answers -- external forces, and the community's general aversion to change, may mean that there is not much to be done.

Wednesday, September 02, 2020

Broad Testing Thank You

 I have a loose association with the Broad Institute, an institute created so that "complementary expertise of the genomic scientists and the chemical biologists across MIT and Harvard be brought together in one place to drive the transformation of medicine with molecular knowledge."  (See https://www.broadinstitute.org/history )

They recently passed an amazing milestone, having performed over 1 million Covid tests.  They weren't set up to be a Covid testing lab, but they converted their institute space to respond to the Covid crisis.  (See https://covid19-testing.broadinstitute.org/ )

In short, they stepped up.  They certainly didn't have to, but they did.  Maybe it will help them do good science, now and in the future.  But my understanding is that they saw a clear societal need (lots of outbreak specialists there) and they realized they had the expertise and equipment to do good that went beyond science.  I just wanted to give a shout out to the Broad for their good works, scientific and societal.

Saturday, June 27, 2020

STOC Workshop on "Algorithms with Predictions"

The STOC workshop on Algorithms with Predictions was on Friday, and I thought it went really well!  I can't speak for my talk, but the other 3 talks (Tim Roughgarden, Edith Cohen, Ravi Kumar) were fantastic and inspiring, and I really recommend them for anyone with an interest in "Beyond Worst-Case Analysis".   

The talks are all on YouTube.  And the workshop page is full of useful links and information.

Thursday, June 25, 2020

Writing Code for a Paper: A Note to Students

This post relates to some of the stuff I'll be presenting at Friday's STOC workshop on Algorithms with Predictions, but it is also for future students in my classes, who sometimes wonder why I have them write code on theory material.  (This might be a theme for a lecture in my grad class next semester, so maybe this is a first attempt at notes for the lecture.  Comments and suggestions welcome.)

Some of the work I'm doing looks at how queueing systems perform with various sorts of predictions of the times the jobs take.  This particular work I'm doing on my own.  (Most of my research has been and is with collaborators -- it's one of the things I enjoy about computer science; we're a very collaborative field, which seems to surprise many people -- but I still sometimes like to do research projects on my own.  I've looked at queueing systems since my PhD thesis, it's a bit outside the research interests of most of my collaborator pool, and it's "fun" sometimes to do my own thing.  The reason "fun" is in quotes is described below.)

Often in my work in queueing I'm looking at mean-field limits (meant to model infinite systems of queues, which provides a good approximation for large finite systems under reasonable assumptions), where I can derive families of differential equations describing the system behavior.  I can also simulate the large finite system directly, and make sure the results match.  I generally do this for all of these types of papers.
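
To make the kind of check I mean concrete, here is a minimal sketch using the classic "supermarket" (power-of-two-choices) model rather than the models from the paper I'm working on; all the parameter choices are just illustrative.  The mean-field ODEs track s_i, the fraction of queues with at least i jobs, and the simulation runs a large finite system of N queues; the two sets of numbers should agree closely.

import random

lam, d = 0.9, 2          # arrival rate per queue, number of choices per arrival
N, T = 10000, 50.0       # number of queues and time horizon (illustrative)
K, dt = 30, 0.001        # ODE truncation level and Euler step size

def ode_tails():
    # s[i] = fraction of queues with at least i jobs; s[0] stays 1.
    s = [1.0] + [0.0] * K
    t = 0.0
    while t < T:
        new = s[:]
        for i in range(1, K):
            new[i] += dt * (lam * (s[i-1]**d - s[i]**d) - (s[i] - s[i+1]))
        s, t = new, t + dt
    return s

def sim_tails():
    q = [0] * N      # queue lengths
    busy = 0         # number of non-empty queues
    t = 0.0
    while t < T:
        rate = lam * N + busy
        t += random.expovariate(rate)
        if random.random() < lam * N / rate:
            # Arrival: sample d queues (with replacement), join the shortest.
            j = min((random.randrange(N) for _ in range(d)), key=lambda i: q[i])
            busy += (q[j] == 0)
            q[j] += 1
        else:
            # Departure from a uniformly random busy queue (rejection sampling).
            j = random.randrange(N)
            while q[j] == 0:
                j = random.randrange(N)
            q[j] -= 1
            busy -= (q[j] == 0)
    return [sum(1 for x in q if x >= i) / N for i in range(K + 1)]

ode, sim = ode_tails(), sim_tails()
for i in range(1, 8):
    print(f"i={i}: ode {ode[i]:.4f}  vs.  sim {sim[i]:.4f}")

If the two columns disagree by much more than the simulation's sampling noise, something is wrong -- which is exactly the situation described next.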

Now the numbers I get from simulating the system directly and from evaluating the differential equations should match (say within 1% or so).  If they don't, something is wrong.  In an effort to avoid wrongness, I won't consider the paper ready for outside consumption until I get a match.  Unfortunately, there are three ways things can go wrong.

1.  My simulation code for the queueing system might have bugs.
2.  My code to evaluate the differential equations might have bugs.
3.  My equations themselves might have bugs.

And I find there are two main categories of bugs.  Sometimes the bugs are simple/standard coding mistakes -- I'm off by 1 on an index, or I cut and paste and forget to change an i++ to a j++ in one of my double loops, or I type an x instead of a y.  Usually it's pretty easy to find these things, although I've had times where a hidden typo took hours to find.  But sometimes the bug is a thinking mistake -- I've forgotten a subcase, so my equations aren't complete (and so my code evaluating the equations won't give the right answer), or I've not handled a subcase correctly in my simulation.  That type usually takes longer.

Usually, the first time through, almost all of these types of bugs happen -- my math is off, I've typed some stuff wrong, it can all go awry.  And then, like coders everywhere, I go through and fix it.  And it's painful.  Sometimes everything goes right, and a quick check or two later everything works.  For more complicated stuff, I spend more time figuring out what went wrong than I spent setting up the code to begin with.  And being the personality type to not let things sit, that can mean late nights figuring out what went wrong.

For my talk this week, there was one last problem I wanted to include, which meant finally taking the model and writing the equations and code.  I didn't even need it for the talk, but it's also the last bit before I put a paper draft on arxiv, so taking advantage of a deadline, I figured now was the time.  Which means the last 2 days, I've spent many hours (and a late night) trying to remove the disagreements.

On the plus side, when everything finally works, it's a wonderful feeling.  And it always makes me feel better when I have worked to verify my math this way;  this time, what kept me up well past midnight and took several hours to track down was actually a boundary case I had left out of the equations.  (I had looked at the equations over and over again without noticing I had left out the subcase;  I had to step through numbers from the differential equations one time step at a time to track down what was missing, and then the numbers told me what I had done wrong.)

On the down side, it's work, and debugging is never particularly fun.

For students out there, maybe I'm just explaining that I understand the pain that I am putting you through.  You may wonder why I have you do simulations that take a few hours if you do them well, but days if you don't think through the best approach.  But using programming and theory together can be powerful;  it's helped me countless times in my research.

(Related: I've written before on theory and experiments, along with a viewpoint by Jeffrey Ullman.)


Wednesday, June 17, 2020

Algorithms with Predictions: Survey and Workshop

There's a whole new, interesting theory trend -- Algorithms with Predictions.  The idea, spurred by advances in machine learning, is that you assume you have a predictor that tells you something about your input.  For example, in caching, you might have a prediction of when the item you are currently accessing will next be accessed.  Of course, machine learning predictions aren't perfect.  Still, you'd like to use the prediction to improve your caching algorithm, and from the theory side, we'd like provable statements.  For example, you could say: if my prediction is THIS good (e.g., the error is bounded under some metric), then my caching performance will correspondingly be at least THIS good (e.g., performance bounded in some way).
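
As a toy illustration of the flavor of this setup (not any particular paper's algorithm), here's a small sketch of a cache whose eviction rule trusts a predictor of each item's next access time -- a prediction-guided analogue of Belady's offline rule.  The predict_next_use function is a hypothetical stand-in for a learned model.

def run_cache(requests, k, predict_next_use):
    """requests: list of item ids in arrival order; k: cache size;
    predict_next_use(t, item): predicted next time `item` is requested after time t."""
    cache, misses = set(), 0
    for t, item in enumerate(requests):
        if item in cache:
            continue
        misses += 1
        if len(cache) >= k:
            # Evict the item predicted to be needed furthest in the future.
            victim = max(cache, key=lambda x: predict_next_use(t, x))
            cache.discard(victim)
        cache.add(item)
    return misses

With a perfect predictor this is exactly the optimal offline policy; the "algorithms with predictions" question is how gracefully the miss rate degrades as the predictor's error grows.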

If you haven't seen the burgeoning spread of this line of work and are interested, you're in luck.  First, Sergei Vassilvitskii and I have written a brief survey that's now on the arxiv.  We had written it for a collection Tim Roughgarden is organizing on Beyond Worst-Case Analysis (which we thought would be out by now, and should be out from the publisher soon-ish), but we've gone ahead and put a version on the arxiv to make it available.  The area is moving fast, so there are already many new results -- we hope to update the "survey" with new material as the area grows.

Second, one of the STOC'20 Workshops will be on Algorithms with Predictions.  It will be on Friday from 1-4pm, with speakers Tim Roughgarden, Edith Cohen,  Ravi Kumar, and me.  I'll be talking about some of my recent work  (in submission) on queues with predictions, and partitioned learned Bloom filters.  (Arxiv papers are here, here, and here, but maybe you want to see the talk first.)  I'll also do a blog post on partitioned learned Bloom filters in the near future.
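
Ahead of that post, here's a rough sketch of the partitioned idea as I'd summarize it; the thresholds, filter sizes, and class names below are illustrative stand-ins, not the construction from the paper (which chooses the regions and false positive rates by optimization).  A learned model scores each query, the score range is split into regions, and each region gets its own backup Bloom filter, so queries the model is already confident about need little or no backup filtering.

import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _hashes(self, x):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{x}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, x):
        for h in self._hashes(x):
            self.bits[h] = 1
    def __contains__(self, x):
        return all(self.bits[h] for h in self._hashes(x))

class PartitionedLearnedBF:
    def __init__(self, keys, score, thresholds, bits_per_region, k=4):
        # thresholds (ascending) split scores into len(thresholds)+1 regions;
        # the top region (highest scores) is accepted with no backup filter,
        # so bits_per_region lists sizes for the remaining regions.
        self.score, self.thresholds = score, thresholds
        self.filters = [BloomFilter(m, k) for m in bits_per_region]
        for x in keys:
            r = self._region(x)
            if r < len(self.filters):
                self.filters[r].add(x)   # keys in lower regions go into that region's filter
    def _region(self, x):
        s = self.score(x)
        return sum(s >= t for t in self.thresholds)
    def __contains__(self, x):
        r = self._region(x)
        if r >= len(self.filters):       # model is confident enough: accept directly
            return True
        return x in self.filters[r]

# Hypothetical usage: any model mapping queries to scores in [0, 1] would work.
# plbf = PartitionedLearnedBF(key_set, model_score, thresholds=[0.3, 0.8],
#                             bits_per_region=[8 * n_keys, 2 * n_keys])
# print("maybe_a_key" in plbf)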

Saturday, June 06, 2020

CATCS Visioning Workshop

Reposting an important call -- these events have had big impact in the past!
The CATCS will be hosting a virtual “Visioning Workshop” the week of July 20 in order to identify broad research themes within theoretical computer science (TCS) that have potential for a major impact in the future. The goals are similar to the workshop of the same name in 2008: to package these themes in a way that can be consumed by the general public, which we would deliver primarily to the Computing Community Consortium and others (e.g. funding agencies) to help them advocate for TCS.
While participation in the workshop is primarily through invitation, we have a few slots available for the broader community. If you are interested in participating, please see details of the application process below. The workshop will be organized according to area-wise breakout groups. Each breakout group will have 1-2 leads. Breakout groups will meet for 4-5 hours spread across several days and will be tasked with brainstorming ideas and preparing materials related to their topic. Leads are further expected to participate in plenary sessions held on Monday July 20 and Friday July 24 (4-5 hrs of additional time) where these materials will be discussed.
If you are interested in participating in the workshop, please fill out this Google form by Monday June 15. On this form, applicants are asked to contribute one or two major results in the last 10 years whose significance can be explained in layperson terms, and one or two major challenges for theory whose significance can be explained in layperson terms. These descriptions can be very brief. We will just use them to select participants and create breakout groups.

Sunday, February 02, 2020

Current CS 124 Stats

This is mostly a personal record for me (and perhaps of interest to Harvard people who read the blog), but I'm also putting the numbers here so others can compare.

I'm teaching the undergraduate algorithms and data structures class, CS 124, for the first time in a few years, so let's look at the initial enrollment numbers.  Harvard has its strange and wonderful shopping period, so I'm just getting the numbers now after the first week.  (Feel free to comment with comparative stats from your own institution!)

Current numbers are a bit over 280 students, about 20 more than last year, and about 60 more than the last time I taught it.  It seems the course has been growing by about 20 people a year for the last few years.  This number may go up or down a few (most likely down), but is probably close to where the class will end up.  I keep thinking we've hit "peak 124", but it keeps going upwards.  Part of this seems to be because a few more grad students (especially various master's students) are taking it.  Something like 1/7 of the undergraduates take this course now, which to me always seems amazing.  When I was an undergraduate at Harvard, CS courses were just not this big.  My first year teaching at Harvard, CS 124 had about 90 students, and that was huge.  I do realize, of course, that many places have even larger CS classes;  I'm not meaning to complain.

Surprisingly to me, about 1/4 of the course is first-years.  This is more than I remember from my previous years teaching it;  it's a hard course that requires both math and programming background, so first-years usually wait.  On the other hand, the first-years taking it are self-selecting, and in past years the first-year grade average has been notably higher than the class average.

About 40% are sophomores, not surprisingly the largest group.  Then there is a mix of juniors, seniors, and grad students from various programs.

The course is also offered through the Extension School;  there the numbers change a lot.  Right now it's about 45, but that will most likely drop further.

If I've counted right, it's my 18th time teaching the class.  I'm pretty sure I'll get to 20.  I suppose it's possible I'll get to 30 or more, but it's likely someone else will take it over at some point.

Tuesday, January 14, 2020

ITCS 2020, Reflections

I've spent Sunday/Monday at ITCS, or Innovations in Theoretical Computer Science, where I am giving a talk on this paper on Scheduling with Prediction and the Price of Misprediction (LIPIcs page) (which is one of several recent works on Algorithms with Predictions (powerpoint slides)).

I'm told it's been 10 years since ITCS started as a conference, and I was one of the skeptics who really did not think it was a good idea 10 years ago.  So as I'm sitting in the sessions, what do I think now?  What are the pros and cons of ITCS?

On the negative side, ITCS is not very big.  Just over 100 people registered, so it's the size of a big workshop or a small conference.  (And considering that it's usually held in central places with lots of students, those numbers are buffered by locals.)  Somehow, it's scheduled the second week of January, right after SODA, which seems problematic, and certainly (if it's kept that way) may keep it from getting any larger.  The number of "senior people" around at any one time seemed generally small, a problem for things like "Graduating Bits" (see below).  As ITCS, at least 10 years ago, was supposed to build into another "premier" TCS conference focused on "innovative" work, the attendance seems a bit disappointing.

On the neutral side, the conference is single-session, and to make that work, talks this year were limited to 12 minutes.  Your mileage may vary on whether you think this is good or bad;  it seems to work.  (My take:  I slightly prefer parallel sessions, because they mean more slots where there's something I want to see, and that matters more to me than the occasional time when two talks I want to see conflict.  And 12 minutes is certainly workable, but maybe a few minutes short.  But again, that's just my opinion.)  This year (and possibly going forward), some papers were accepted without a full talk -- instead they got a 3-minute talk and a poster in a poster session.  Again, it's not clear to me if this is good or bad (though more paper acceptances make me lean to the good side), but it seemed to work fine and people were happy with it.  (Such papers are considered full publications and treated the same in the proceedings.)

On the positive side, the conference seems well run.  It's held in a university building, so no expensive hotel costs;  instead they're generous with food and keep registration costs reasonably low.  They use LIPIcs, which provides a good system and approach for publishing papers at low cost.  (Note, I was until recently part of the LIPIcs editorial board, so I'm biased there.)  They seem to be covering their expenses from what I understand.  The business meeting was blessedly short.  They're recording all the talks.  They're doing interesting things like "Graduating Bits" for people who are job-looking (where people graduating or coming out of a postdoc give short talks about their work).

In terms of content, it seems really good.  I've seen several good talks and interesting papers.  While I'm not sure how to quantify whether ITCS work is more "innovative" than the work at other major TCS conferences, I do think ITCS is noticeably more open than other conferences to accepting papers based on the paper's underlying idea rather than on "technical mastery".

My thoughts 10 years ago were that ITCS was not a great outcome for the community, and that instead the community should push for:

1)  Doing better at opening up the criteria for paper acceptance, including weighing innovation and practical relevance when reviewing papers at FOCS/STOC/SODA.
2)  Increasing the number of papers accepted to these conferences, as too many good papers were being rejected.

Viewed under this lens, ITCS could, I think, be viewed as a success.  The theory community seems unwilling to expand conferences by accepting more papers.  (I note that while the STOC theory-fest has changed and expanded STOC, it hasn't really seemed to increase attendance, and while the number of accepted papers has increased slightly, it hasn't kept pace with the growth in the field.)  ITCS provides another venue for high-quality theory papers, thereby increasing the number of such papers published each year within the theory community, and I think it is generally viewed as a high-quality conference.  And, as I mentioned, ITCS seems at least somewhat more flexible in its criteria for what is an acceptable paper.  ITCS has, I think, in these regards been a benefit for the theory community.

However, despite the success of ITCS, I think it's a band-aid on structural issues in the theory community.  While these issues are complex and many-sided, just comparing with the growth and excitement in the AI community is a little depressing.  Indeed, what I see is AI absorbing significant parts of the theory community;  lots of theory papers now end up at NeurIPS, AAAI, ICML, or AISTATS, because the theory community doesn't seem to have room for them, or doesn't judge them as sufficiently significant.  I view this as a problem for the theory community, the result of the problems I saw 10 years ago for which I didn't think ITCS was the right response.  (Though perhaps it was/is not really viewed as a problem by others;  all of CS, including theory, seems to continue growing;  theory just seems to me to be growing less than it could or should.)

To conclude, my thanks and congratulations to those many people that have organized and maintained ITCS over the years;  I appreciate your work, as I think the whole community does. 

By the way, I thought it never snowed in Seattle, so I'm confused; what is all the cold white stuff outside?