Wednesday, September 04, 2019

Happy New Academic Year: Teaching Randomized Algorithms

It seems I haven't written on this blog for a while.

Today was the start of a new semester.  I'll be teaching Randomized Algorithms and Probabilistic Analysis, using the new edition of my book with Eli Upfal as a base, and throwing in other material.  (Everyone should buy the book!  Here's a link.) 

It's a graduate-level class, but generally designed for first-year graduate students, and there were a lot of undergrads "shopping" it today.  (We don't do pre-registration at Harvard, and students get the first week to choose classes, known as shopping.)  So many that people were standing outside the doors of the room.  But because we have a bit of a shortage of classes this semester, I'm guessing a good fraction of students were just checking it out.  We'll see Thursday, but for now I'll predict we'll fit in the classroom, and wait to see if I'm wrong.  (If I'm wrong, that's wonderful too.)   

It's been four years since I last taught the course, so this time I'm trying something new.  When I've previously taught the course, I tried to make the class inviting and friendly by telling students we'd begin without assuming they knew probability, and so the first couple of weeks would be spent reviewing basics (like, say, linearity of expectations and union bounds), albeit in a CS algorithms context.  This time, I let the class know I'm assuming they know (or will pick up) basic probability, and so they should read chapters 1-4 on their own, and we'll start with Chapter 5, balls-and-bins models.  Over the last decade, I've seen a huge shift in probability knowledge -- Stat 110, Harvard's probability course, has become one of Harvard's biggest classes.  Many students have already taken AI or ML or even data science courses where they've done some (further) probability.  It feels appropriate (and safe) to assume students entering the class know probability, or can review what they need on their own, and to start the class further along.

Now finally, a request.  It's actually hard for me to teach using this book, because I don't want to just read the book to the students.  That's boring.  On the other hand, if I thought something was important, I most likely already put it in the book.  We have to mix up the standard lecturing format a bit.  So two things we'll be doing are:

1)  doing some "puzzle problems" at the beginning of most classes, so people can try working through problems themselves.  (Kind of a flipped-classroom approach, but not a full commitment.)
2)  reading papers related to the class topics.

So if you have any good suggestions of probability puzzle problems, or readable papers (particularly application papers) that use relatively basic probabilistic analysis in neat ways, send them over.  I've got a semester to fill.

For curious people, here's one of today's starting problems, which I first learned about in graduate school.  (I'm pretty sure I owe thanks to Claire Kenyon for teaching it to me.  Maybe I'll link to the corresponding Wikipedia page on the problem later.)

After lunch, Bob suggests the following game to see who pays.  Alice and Bob will each choose a different sequence of three flips.  (So they could choose "Heads-Tails-Heads" or "Tails-Tails-Tails", for example.)  After they choose, a fair coin will be tossed until one of their sequences appears as a consecutive subsequence of the coin tosses.  The player whose sequence appears first wins.  (Note that if they choose the above sequences, and the flips come up Heads-Tails-Tails-Tails, the player who chose Tails-Tails-Tails wins as soon as their subsequence appears;  it's not three flips and then start over again.)  Bob politely says that Alice can choose first, and after she chooses and tells him her sequence, he'll choose a different sequence.  What should Alice choose?
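For those who'd rather experiment before working out the math, here's a quick Monte Carlo sketch in Python (assuming a fair coin; the HHH-versus-THH matchup at the end is just one illustrative pairing, not the whole answer):

import random

def play(seq_a, seq_b):
    """Flip a fair coin until seq_a or seq_b shows up as a run of
    consecutive flips; return whichever sequence appeared first."""
    k = max(len(seq_a), len(seq_b))
    recent = ""
    while True:
        recent = (recent + random.choice("HT"))[-k:]  # keep only the last k flips
        if recent.endswith(seq_a):
            return seq_a
        if recent.endswith(seq_b):
            return seq_b

def estimate_win(seq_a, seq_b, trials=100_000):
    """Estimate the probability that seq_a appears before seq_b."""
    wins = sum(play(seq_a, seq_b) == seq_a for _ in range(trials))
    return wins / trials

# One sample matchup: if Alice picks HHH and Bob responds with THH...
print(estimate_win("HHH", "THH"))  # comes out near 1/8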

5 comments:

Philip Sterne said...

Hooray for your return to blogging!

I haven't seen this problem before, but it is strongly reminiscent of those efficient string-matching algorithms where you build a little state machine and never look at a letter twice. I'm going to guess that either HTT or THH is a good answer, because once you've seen at least one each of H and T, you will always be matching at least one symbol of the string. I can't think of other strings that have this property.

Feel free to delete this comment if I got it correct! (Or leave it up to lead others astray.)

Anonymous said...

The secretary problem sounds like one of those problems where the math is easy and students can develop their own ideas.
There is also a generalization to bipartite matching that reaches the same approximation bound of 1/e. The proof by Kesselheim et al. is short and uses only elementary probability arguments.

Michael Mitzenmacher said...

Thanks for the pointer -- the secretary problem is the start of the next class, though maybe treated in a less puzzle-oriented way.
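For anyone who wants to see the 1/e bound emerge experimentally, here's a minimal simulation sketch of the classic secretary rule (observe the first n/e candidates, then hire the first one who beats everyone seen so far); the choices of n and the trial count below are arbitrary, just for illustration:

import math
import random

def secretary_trial(n):
    """One run of the classic rule: watch the first n/e candidates,
    then hire the first candidate better than everyone seen so far.
    Returns True if the best candidate overall got hired."""
    ranks = list(range(n))      # larger value = better candidate
    random.shuffle(ranks)       # candidates arrive in uniformly random order
    cutoff = int(n / math.e)
    best_seen = max(ranks[:cutoff], default=-1)
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r == n - 1   # we hire this one; is it the best?
    return False                # the best was in the observation phase

n, trials = 100, 100_000
wins = sum(secretary_trial(n) for _ in range(trials))
print(wins / trials)            # hovers around 1/e, about 0.368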

Kodlu said...

I teach your pattern probability / expectation puzzle in my information theory class. It's got some nice combinatorics, and the fact that the expected time to first see a pattern depends on the pattern is counterintuitive at first. Non-rigorously, if a pattern self-overlaps, as HHH does (it overlaps with itself when shifted by one or two letters), it tends to take longer to appear at first "to keep the long-term probability rate the same". J. H. Conway also looked at this in terms of correlation polynomials, which represent the overlap properties. Also, whoever chooses first, the second person can choose a pattern that will beat the first pattern on average, but this is non-transitive: Pattern A can beat Pattern B, which beats Pattern C, which beats Pattern A.

Kodlu said...

I should have added "to keep the long-term probability rate the same, since it can occur in overlapping clusters".
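To make the Conway connection from Kodlu's comment concrete, here's a short sketch of his correlation method as I understand it: let corr(x, y) sum 2^(k-1) over every k where the last k symbols of x equal the first k symbols of y; then the odds that pattern B appears before pattern A work out to (corr(A,A) - corr(A,B)) : (corr(B,B) - corr(B,A)). The code checks one non-transitive cycle of the kind Kodlu describes:

from math import gcd

def corr(x, y):
    """Conway's correlation: sum 2^(k-1) over every k where the
    last k symbols of x equal the first k symbols of y."""
    return sum(2 ** (k - 1) for k in range(1, len(x) + 1) if x[-k:] == y[:k])

def odds_b_beats_a(a, b):
    """Odds (numerator : denominator) that pattern b appears before a."""
    return corr(a, a) - corr(a, b), corr(b, b) - corr(b, a)

# A non-transitive cycle: each pattern on the right beats the one on the left.
for a, b in [("HHT", "THH"), ("HTT", "HHT"), ("TTH", "HTT"), ("THH", "TTH")]:
    num, den = odds_b_beats_a(a, b)
    g = gcd(num, den)
    print(f"{b} beats {a} with odds {num // g}:{den // g}")
    # prints odds 3:1, 2:1, 3:1, 2:1 around the cycle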