tag:blogger.com,1999:blog-88902042023-05-24T10:34:46.552-04:00My Biased CoinMy take on computer science -- <br>
algorithms, networking, information theory -- <br>
and related items.Michael Mitzenmacherhttp://www.blogger.com/profile/06738274256402616703noreply@blogger.comBlogger857125tag:blogger.com,1999:blog-8890204.post-67596253646492896302021-11-08T09:45:00.000-05:002021-11-08T09:45:37.381-05:00Postdoc call for FODSI<p>As a member of FODSI (Foundations of Data Science Institute -- an NSF funded institute with the aim of advancing theoretical foundations for data science), I'm self-interestedly posting the call for postdocs for this year. Two of the areas are a) Sketching, Sampling, and Sublinear-Time Algorithms and b) Machine Learning for Algorithms (which includes what I call "Algorithms with Predictions.") I'd be happy to see postdoc applications in those areas from people who want to spend some time at Harvard, for example.... but of course there are lots of other exciting things going on with FODSI too and you should take a look.</p><p>The call is at <a href="https://academicjobsonline.org/ajo/jobs/20132">https://academicjobsonline.org/ajo/jobs/20132</a> </p><p>Call text below:</p><table align="center" border="1" class="ads" style="border-collapse: collapse; border: 1px solid rgb(204, 204, 204); color: #2a2a2a; font-family: Verdana, Arial, Helvetica, sans-serif; font-size: 13.3333px; padding: 5px; width: 95%px;"><tbody><tr><td style="border-collapse: collapse; border: 1px solid rgb(204, 204, 204); padding: 5px;">The Foundations of Data Science Institute (FODSI), funded by the National Science Foundation TRIPODS program, is announcing a competitive postdoctoral fellowship. FODSI is a collaboration between UC Berkeley and MIT, partnering with Boston University, Northeastern University, Harvard University, Howard University and Bryn Mawr College. 
It provides a structured environment for exploring interdisciplinary research in foundations of Data Science spanning Mathematics, Statistics, Theoretical Computer Science and other fields.<p>We are looking for multiple postdoctoral team members who will collaborate with <a href="https://fodsi.us/team.html" style="color: #005f6f; font-weight: 700; text-decoration-line: none;">FODSI researchers</a> at one or more of the participating institutions. These positions emphasize strong mentorship, flexibility, and breadth of collaboration opportunities with other team members -- senior and junior faculty, postdocs, and graduate students at various nodes around the country. Furthermore, postdoctoral fellows will be able to participate in workshops and other activities organized by FODSI.</p><p>The fellowship is a one-year full-time appointment, with the possibility of renewal for a second year (based upon mutual agreement) either at the same or at another FODSI institution. The start date is flexible, although most appointments are expected to start in summer 2022. Candidates are encouraged to apply to work with more than one faculty mentor <b>at one or more participating institutions</b> (in-person mentoring is preferred, but remote options will also be considered). Applicants should have an excellent theoretical background and a doctorate in a related field, including Mathematics, Statistics, Computer Science, Electrical Engineering or Economics. 
We particularly encourage applications from women and minority candidates.</p><p>The review process will start on November 15, 2021 and will continue until positions are filled.</p></td></tr></tbody></table>Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com0tag:blogger.com,1999:blog-8890204.post-33676733957101710152021-11-04T09:01:00.000-04:002021-11-04T09:01:10.035-04:00HotNets Presentation : Zero-CPU Collection with Direct Telemetry Access<p>HotNets has asked that we let people know that the 2021 presentations <a href="https://www.youtube.com/channel/UCZZ5nf4RNDIIe4nifI8grwQ/videos">are available here</a>. I'm using that as an excuse to highlight our paper on Zero-CPU Collection with Direct Telemetry Access (<a href="https://arxiv.org/abs/2110.05438">arxiv version here</a>), but really I want to highlight the talk by graduate student Jonatan Langlet (Queen Mary University of London) who, as is the nature of graduate students, did all of the real work, and who really did a great job on <a href="https://www.youtube.com/watch?v=_M8AbF_f8Kk&t=2s">the talk (direct link)</a>. If you guessed from my involvement that this involves hashing in some way, your maximum likelihood estimate turns out to be correct.</p><p>I think our work fits the HotNets call, which asks for new approaches and preliminary work. Specifically, the call for the HotNets workshop says this:</p><p></p><blockquote><p>We invite researchers and practitioners to submit short position papers. We encourage papers that identify fundamental open questions, advocate a new approach, offer a constructive critique of the state of networking research, re-frame or debunk existing work, report unexpected early results from a deployment, report on promising but unproven ideas, or propose new evaluation methods. 
Novel ideas need not be supported by full evaluations; well-reasoned arguments or preliminary evaluations can support the possibility of the paper’s claims.</p><p>We seek early-stage work, where the authors can benefit from community feedback. An ideal submission has the potential to open a line of inquiry for the community that results in multiple conference papers in related venues (SIGCOMM, NSDI, CoNEXT, SOSP, OSDI, MobiCom, MobiSys, etc.), rather than a single follow-on conference paper. The program committee will explicitly favor early work and papers likely to stimulate reflection and discussion over “conference papers in miniature”.</p></blockquote><p>There are other similar "Hot" workshops in other areas, and it was about <a href="http://mybiasedcoin.blogspot.com/2007/08/hottheory-workshop.html">14 years ago that I asked whether CS theory should have a HotTheory workshop</a>. There's been a proliferation of new conferences and workshops in theory since then, but none of them really seem to have this flavor. So maybe it's worth asking again whether a HotTheory workshop would make sense? Or do existing theory events meet the theory community's needs?</p><p></p><p><br /></p>Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com0tag:blogger.com,1999:blog-8890204.post-86286668194530993272021-10-25T11:00:00.000-04:002021-10-25T11:00:27.120-04:00How to send a real number using a single bit (and some shared randomness)<p>In this post, we'll look at the natural problem of how to communicate an estimate of a real value in [0,1], using just 1 bit. The post is based on <a href="https://arxiv.org/abs/2010.02331" target="_blank">this paper</a> (by Ran Ben-Basat of UCL and Shay Vargaftik of VMware Research and myself -- they also helped with the post) that appeared in ICALP last year. 
</p><p>This question is motivated by various aggregation problems; multiple sending devices may measure a value, and wish to send the value to an aggregator who will compute something from the received values, such as the average. In our problem, the senders have a real value x in [0,1] to send, but are constrained to send just a single bit. Variations of this problem have come up in recent work on distributed/federated learning, where clients compute a gradient vector and send it to a centralized parameter server to update a learning model; we may want to compress the vector to a small number of bits, or even 1 bit, per coordinate. (We'll have more to say on the federated learning problem in a future post.) Of course, it's also just a randomized algorithm problem that is interesting in its own right. </p><p>A natural way to look at the problem is as a variation on rounding. Given a value x in [0,1], one natural approach if limited to one bit is to deterministically round it to a bit X. But what should the receiver do when they receive the (rounded) bit value X? It depends on what one's optimization goal is, but to minimize the maximum possible error, the receiver should have their estimate x' take on the value 3/4 when X is 1, 1/4 otherwise. Note though that deterministic rounding this way is biased -- the expectation E[x'] does not equal x. Randomized rounding, where the sender sends 1 with probability x and 0 otherwise, and the receiver uses the received bit X as the estimate x', has the property that E[x'] = x. Unbiased estimators are arguably more natural for many estimation problems. Here the measure of performance would be the maximum variance for the estimate over all inputs x, so for randomized rounding the cost is 1/4 (when x = 1/2). </p><p>Can one do better than these schemes? It turns out that you can, if you have shared randomness available. 
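As a quick sanity check on the two baseline schemes, here is a minimal Monte Carlo sketch in Python (the function names are ours, purely for illustration):

```python
import random

def det_round_estimate(x):
    # Deterministic rounding: the sender sends the bit X = round(x);
    # to minimize the maximum error, the receiver decodes X = 1 as 3/4
    # and X = 0 as 1/4 (worst-case error 1/4, but the estimate is biased).
    X = 1 if x >= 0.5 else 0
    return 0.75 if X == 1 else 0.25

def rand_round_estimate(x, rng):
    # Randomized rounding: the sender sends 1 with probability x; the
    # received bit is itself the estimate.  Unbiased, with variance
    # x(1-x), which is maximized at 1/4 when x = 1/2.
    return 1.0 if rng.random() < x else 0.0

rng = random.Random(0)
x = 0.5  # worst case for randomized rounding
n = 200_000
est = [rand_round_estimate(x, rng) for _ in range(n)]
mean = sum(est) / n
var = sum((e - mean) ** 2 for e in est) / n
print(f"mean ~ {mean:.3f} (unbiased: should be near {x})")
print(f"variance ~ {var:.3f} (analytic maximum 1/4)")
```

Trying other values of x shows the variance x(1-x) drop away from x = 1/2, while the deterministic scheme's error stays bounded by 1/4.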
An approach that has been known in the engineering world (where it has been used in signal processing) is subtractive dithering: </p><p>We assume that the sender and receiver have access to shared randomness ℎ∼𝑈[−1/2,1/2]. Given a value x, the sender sends 1 if x+h≥1/2, 0 otherwise. The receiver estimates x' = X - h. We leave as an exercise that this is unbiased, which can be shown by deriving the stronger fact that x' is distributed as 𝑈[𝑥−1/2,𝑥+1/2], and thus Var[𝑥']=1/12.</p><p>Subtractive dithering ignores the fact that generating a shared real number may be more costly or problematic than generating a finite number of shared bits. So one of the results of our paper is developing a "finite shared random bits" unbiased estimator, which corresponds to randomized rounding with no shared bits and converges to subtractive dithering as the number of shared random bits goes to infinity. (The approach does allow for generating a private random real value.) </p><p>Also in our paper, we study biased schemes, aiming to minimize the worst-case expected mean-squared error (where the expectation is over randomness used in the algorithm). For example, it's very odd in the setting of subtractive dithering that one can obtain estimates smaller than 0 or greater than 1, when the input is restricted to [0,1], but that's a price we pay for having an unbiased estimator. For a biased estimator, you might naturally truncate the result from subtractive dithering; by truncating to [z,1-z] for an appropriate z > 0, you can indeed slightly improve over the worst-case mean-squared error of 1/16 for deterministic rounding.</p><p>We then studied various algorithmic improvements to obtain better biased schemes. We were able to progress by looking at limited shared randomness, namely finding the best algorithm with s shared bits. For example, consider the case of just 1 shared random bit h in {0,1}. 
The receiver receives 1 bit X from the sender, and thus can have four possible estimates x' depending on X and h. If the 4 possible estimate values are v0, v1, v2, v3 (all between 0 and 1), then it is possible to show the largest possible expected squared error occurs at one of the five inputs 0, 1, (v0+v1)/2, (v1+v2)/2, (v2+v3)/2. We can then write a quadratic program to find the values that minimize the worst-case expected squared error. The end result is the following rounding algorithm: given 1 shared random bit h in {0,1} and the value x, let X = 1 if x ≥ 0.4 + 0.2h, and 0 otherwise; then let the estimate x' = 0.1 + 0.2h + 0.6X. This has a worst-case expected mean-squared error of 1/20, beating deterministic rounding and truncated subtractive dithering. Using some additional arguments, we can handle more shared random bits; at 8 bits we improve the worst-case expected squared error to about 0.04599, which is quite close to our lower bound of about 0.0459, and this is better than anything we could come up with analytically. The optimal solution is still not known (an open question for future work!). </p><p>Overall there are many variants of the rounding problem, and few tight bounds currently. So even for simple-seeming problems like rounding, there are still interesting things to do. </p>Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com1tag:blogger.com,1999:blog-8890204.post-33948246205189113432021-10-20T14:13:00.000-04:002021-10-20T14:13:27.095-04:00Networking+Theory Postdoc at Harvard<p>Over the last couple of years, one aspect of research I've greatly enjoyed is getting back into networking, which is really due to my excellent (and patient) collaborators Ran Ben Basat (now at UCL, was a postdoc at Harvard) and Minlan Yu (at Harvard). 
Minlan and I are working to establish a larger-scale Networking+Theory (hopefully broadening to an even larger Systems+Theory) group at Harvard, working on algorithmic problems in the context of real (or at least very real-ish) systems. We have funding, and are looking for a postdoc; the basic description is below. Ideally we're looking for people comfortable with the theory side and the systems side. The website link for applying is <a href="https://academicpositions.harvard.edu/postings/10748">https://academicpositions.harvard.edu/postings/10748</a> . We have a preliminary website for the group at <a href="https://projects.iq.harvard.edu/theosys">https://projects.iq.harvard.edu/theosys</a> (it's just a start, Minlan and I are both on sabbatical, but you can see some of our publications). We look forward to finding another member of the team!</p><p><span style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;"><span style="box-sizing: border-box; font-family: Avenir-Heavy, Helvetica, Arial, sans-serif; font-weight: bolder;">Networking + Theory Postdoctoral Position</span></span><br style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;" /><br style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;" /><span style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;">The John A. Paulson School of Engineering and Applied Sciences at Harvard University (SEAS) seeks applicants for a postdoctoral position in networking and theory. The postdoc is intended for one year, but there will be funding to potentially extend it to a second year. 
The postdoc will receive a generous salary as well as an allocation for research and travel expenses.</span><br style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;" /><br style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;" /><span style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;">We are looking for junior scientists who are especially interested in working at the intersection of networking and algorithmic theory, in areas such as programmable network architectures, data center network management, cloud computing, and algorithms for the Internet. Example topics of interest include but are not limited to the design and analysis of sketches and filters for use in real systems, network security, network compression methods, and optimizing network performance for machine learning applications. The ideal candidate will be interested in both building real systems and either developing algorithms and data structures or using existing, underutilized results from the theoretical literature in system design. 
</span><br style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;" /><br style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;" /><span style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;">The postdoc is intended to work closely with Minlan Yu and Michael Mitzenmacher, and others involved in the group focused on Systems + Theory work that they are developing, as well as possibly other Harvard faculty. </span><br style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;" /><br style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;" /><span style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;">Candidates should have backgrounds in networking and/or theoretical computer science. Candidates should demonstrate experience in working at the intersection of these areas, or otherwise demonstrate how they will be able to contribute at the intersection. The candidate will be expected to publish scholarly papers, attend internal, domestic, and international conferences and meetings, and take on a mentorship role for undergraduate and graduate students. 
</span><br style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;" /><br style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;" /><span style="background-color: white; box-sizing: border-box; color: #212529; font-family: AvenirLTStd55Roman, Helvetica, Arial, sans-serif; font-size: 14px;">Harvard SEAS is dedicated to building a diverse community that is welcoming for everyone, regardless of disability, gender identity and expression, physical appearance, race, religion, or sexual orientation. We strongly encourage applications from members of underrepresented groups.</span></p><p><br /></p><div><br /></div>Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com0tag:blogger.com,1999:blog-8890204.post-25896149941591581512021-09-04T16:14:00.004-04:002021-09-04T16:55:04.167-04:00Harvard Shopping Period, Here We Go Again<p>I was looking at today's Harvard Crimson, and noted that <a href="https://www.thecrimson.com/article/2021/9/3/save-shopping-week-petition/" rel="nofollow" target="_blank">Harvard's shopping period looks ready to be vanished again</a>. Shopping period is that wonderful Harvard tradition where students don't preregister for classes, but instead they choose classes after the first week, after having a chance to go to a lecture or two and see how they like it. I encourage students -- and faculty -- to push back against efforts that restrict student flexibility in choosing their classes. While administrators hate it, I still think it's better for students to avoid strong forms of preregistration. 
</p><p>In fact, I realized I've been fighting this fight for quite a while -- here's <a href="http://mybiasedcoin.blogspot.com/2008/02/more-on-shopping-period.html" rel="nofollow" target="_blank">something I wrote about when the administration under Larry Summers tried to get rid of it</a>, from this blog back in 2008, and here's a <a href="https://www.thecrimson.com/article/2002/10/17/preregistration-plan-slated-for-next-fall/" rel="nofollow" target="_blank">Crimson article from 2002</a>, where I spoke out against the plan for moving to preregistration that the blog post refers to. </p><p>More recently, in 2018-2019, I found myself on the <a href="https://courseregistration.fas.harvard.edu/" rel="nofollow" target="_blank">"Course Registration Committee"</a>, a committee that initially seemed similarly designed to find a way to move Harvard from shopping period to preregistration. But a few of us on the committee made convincing arguments that shopping period had many benefits, and the disappearance of shopping period seemed at least somewhat further delayed, while a better solution was found. </p><p>Then the pandemic. Shopping period seemed impossible in the chaos, and things were more ad hoc (although there was, last year, a push for "class previews" of some sort before the beginning of classes). This seems to give an opening to remove shopping period again. </p><p>I'm not saying there aren't ways to change the system to make it better. I'm not blind to the issues of shopping period -- not having a determined class size before classes begin is problematic for some classes. I believe the committee people who have continued to look at the issue are trying to make things better going forward. But the push always seems to be to make a system which is easier for administrators, and somehow correspondingly that is worse for the students. 
Students should have the flexibility to see classes and teachers before committing, which to me means either a shopping period, or a structure that allows them to easily change classes for the first couple of weeks with negligible overhead. I suppose one could design an add/drop system with the flexibility I'd have in mind, but it never seems to work that way in practice -- students end up needing signatures and approvals of various flavors, because (I think) it's in the best interest of administrators to make it hard for students to change classes once classes begin. (Having 60 people sign up for a class but then having 30 people leave in the first week is possibly worse than the shopping period approach of not having sign-ups before the class at all, but it's a lot easier to disincentivize students from switching out (or in) with a preregistration system, so that problem disappears, at the cost of student flexibility.) </p><p>As an example of a "non-shopping" issue I've seen this semester, first-year students at Harvard are "limited" to four classes in their first semester. (There may be a way to get an exception to this, but I understand it would be a rare exception.) So this semester, with no shopping period, first years have to choose their 4 classes -- without seeing them -- and then manage a confusing add/drop process if they don't like them. The older students generally know how to play the game -- they sign up for 5 or even 6 classes (if they can) and drop the ones they don't like, because dropping is generally easier than adding. But first years don't have that flexibility because the 4-course rule is enforced at signup. (I advise some first year students, and this problem came up.) </p><p>I'm sure many non-Harvard people reading this think the shopping period system sounds crazy, and maybe a few think Harvard students are bizarrely spoiled. Perhaps they're right. 
But I found as a student it was wonderful, and shaped my future in powerful ways by encouraging me to explore. You walk into a class you thought you should take, and find the professor puts you to sleep; or you get dragged to a class by a friend, and find an inspiring subject you had no idea you would like. I believe my college experience would have been lessened significantly without shopping period. </p><p>As a faculty member, shopping period is clearly a pain, but I think a manageable one. Back in 2002-2003, the faculty pushed back against preregistration (<a href="https://www.thecrimson.com/article/2003/3/12/faculty-blasts-preregistration-several-professors-fiercely/" rel="nofollow" target="_blank">see this old Crimson article</a>), but it seems opinions have shifted over time; many faculty seemed to have moved to thinking it's not worth it, which is probably in their own self-interest. Having been on both sides, I'm still strongly in favor of shopping period. I suppose if I ever get into administration I may see things differently. </p><p>I hope there are (younger) defenders out there, in the students and faculty, to push to make sure any changes still favor student choice over administrative convenience, and lead to the best outcomes for students. </p>Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com1tag:blogger.com,1999:blog-8890204.post-4804554177399370862021-08-09T16:01:00.003-04:002021-08-09T17:00:59.845-04:00Queues with Small Advice<p>I have had papers rejected, with comments of the form that the results seem too easy, and are at the level of a homework assignment. Generally, I think these reviewers miss the point. The fact that the results seem easy may be because the point isn't the derivation but the conception and framing of the problem. 
I actually think that papers that can be (and are) turned into homework assignments form an interesting subclass of good papers.</p><p>A new-ish paper of mine, Queues with Small Advice, was recently accepted to the very new SIAM Conference on Applied and Computational Discrete Algorithms (<a href="https://www.siam.org/conferences/cm/conference/acda21">ACDA21</a>), which took place July 19-21. This conference focuses on algorithms with a close tie to applications. Some people unfamiliar with theory conferences might think that algorithms work would naturally be tied to applications, but I've generally found that algorithmic work tied to applications is more negatively reviewed in theory conferences. Indeed, that type of work is much more likely to receive comments of the form that the results seem too easy, and are at the level of a homework assignment. So perhaps this new conference will fill an important role and hole in the current slate of theory conferences. </p><p>In any case, I actually do think this paper is in some ways easy (in that the analysis is readily approachable with standard tools), and parts of it would, I believe, make a great homework assignment. The goal was to show the potential power of using even very simple advice, such as from machine-learning algorithms, in queueing systems. This seems to me to be a very understudied topic, and fits into the recently growing theme of <a href="https://arxiv.org/abs/2006.09123">Algorithms with Predictions</a>. (The paper was rejected previously from a conference, where the most negative review said "Very accessible and well written paper, which certainly provides motivation to consider problems of this type." but also said "The mathematical analysis in this paper is fairly standard, and in that sense not novel... the paper is interesting, but not advancing sufficiently the state of the art.") </p><p>The paper focuses on the case of 1 bit of advice -- essentially, is the job "short" or "long". 
I think this type of advice is a good approach to look at for queueing -- it corresponds naturally to putting a job at the front of the queue, or the back. And it may be easier for machine-learning algorithms to generate accurately. Simple is often good in practice. </p><p>Rather than describe the paper further, I'll go ahead and turn it directly into a collection of homework problems. Feel free to use them or variations you come up with; hopefully, the students won't find the paper for answers. I personally would be thrilled if one outcome of this paper was that prediction-based problems of this form made their way into problem sets. (Although, serious question: who still teaches queueing theory any more?) </p><p><b>Section 1: One-Bit Advice (Single Queue)</b></p><p>a) Consider the standard M/M/1 queue, with Poisson arrivals at rate λ, and exponentially distributed service times of mean 1; the expected time a job spends in the queue in equilibrium is 1/(1-λ). Now suppose each job comes with one bit of advice; if the job has service time greater than T, the bit is 1, and if it is smaller than T, the bit is 0. A "big" job goes to the end of the queue, a "small" job goes to the front. (Assume the queue is non-preemptive.) Find the expected time for a job in this queue in equilibrium, as a function of T and λ.</p><p>b) What is the optimal value for T (as a function of λ)? </p><p>c) Repeat parts a and b, but this time with a preemptive queue. Does preemption help or hurt performance?</p><p>Harder variation: The above questions, but with an M/G/1 queue (that is, for a general, given service distribution); derive a formula for the expected time in the system, where the formula may involve terms based on the service distribution.</p><p>Easier variation: Write a simulation, experimentally determine the best threshold, and the improvements from one bit of advice. Different service time distributions can be tried. 
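For the easier variation, a minimal (and unoptimized) discrete-event simulation sketch in Python might look as follows; the function name and defaults are ours, and the threshold T is left as a parameter to experiment with:

```python
import random
from collections import deque

def sim_advice_queue(lam, T, n_jobs=100_000, seed=1):
    """Non-preemptive M/M/1 with one bit of advice: a job whose service
    time is at most T joins the "small" queue, which is always served
    before the "big" queue.  Service times are exponential with mean 1.
    Returns the mean time in system over n_jobs jobs."""
    rng = random.Random(seed)
    small, big = deque(), deque()        # queued (arrival_time, service_time)
    next_arrival = rng.expovariate(lam)  # time of the next arrival
    in_service, busy_until = None, 0.0   # job at the server, its departure time
    total_time, done, arrived = 0.0, 0, 0
    while done < n_jobs:
        if in_service is not None and busy_until <= next_arrival:
            # Departure event: record the job's time in system.
            t = busy_until
            total_time += t - in_service[0]
            done += 1
            in_service = None
        else:
            # Arrival event: route by the advice bit (service time <= T?).
            t = next_arrival
            s = rng.expovariate(1.0)
            (small if s <= T else big).append((t, s))
            arrived += 1
            next_arrival = t + rng.expovariate(lam) if arrived < n_jobs else float("inf")
        if in_service is None:
            # Start the next job; "small" jobs have strict priority.
            q = small if small else big
            if q:
                in_service = q.popleft()
                busy_until = t + in_service[1]
    return total_time / done

mean_t = sim_advice_queue(lam=0.9, T=1.0)
print(f"with advice: {mean_t:.2f}  (plain FIFO M/M/1: {1 / (1 - 0.9):.2f})")
```

Setting T = ∞ sends every job to the "small" queue and recovers plain FIFO M/M/1 (mean time 1/(1-λ)), which is a handy correctness check before sweeping T to find the best threshold.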
</p><p><b>Section 2: One-Bit Advice with Predictions (Single Queue)</b></p><p>Where would one possibly get a bit of advice in real life? Perhaps from a machine learning predictor. But in that case, the advice might turn out to be wrong. What if our bit of advice is just right most of the time?</p><p>a) Consider the (non-preemptive) M/M/1 queue variation from Section 1 part a above, but now the advice is correct with some probability p. Find the expected time for a job in this queue in equilibrium, as a function of p, T, and λ.</p><p>b) Repeat part a with a preemptive queue.</p><p>Harder variations: The above questions, but have the probability the advice is correct depend on the size of the job. A particularly fun example is when the "predicted service time" for a job with true time x is exponentially distributed with mean x, and the prediction bit is 1 if the predicted time is larger than T, and 0 otherwise. Also, one can again consider general service times. </p><p>Easier variation: Again, write a simulation and derive experimental results/insights. </p><p><b>Section 3: One-Bit Advice with Prediction (Power of 2 Choices)</b> <i>[harder, grad student level; needs to know fluid limit models; I'd stick with sections 1 and 2!]</i></p><p>a) Derive fluid limit equations for a collection of N queues, where there are two types of jobs: "large" jobs arrive as a Poisson stream of rate λ₁N and have exponentially distributed service times with mean μ₁, and "small" jobs arrive as a Poisson stream of rate λ₂N and have exponentially distributed service times of mean μ₂. Each job comes with a bit of advice determining whether it is large or small, but large jobs are mislabelled with probability p₁ and small jobs are mislabelled with probability p₂. 
An incoming job selects a queue using "the power of two choices" -- it is up to you to describe how a job determines what is the better of the two choices (there are multiple possibilities) and how jobs are processed within a queue (non-preemptive is suggested). </p><p>[Hint: the queue state can be represented by the number of jobs that are labelled short that are waiting, the number of jobs that are labelled long that are waiting, and the type of the job currently being serviced.] </p><p>b) Compare fluid limit results to simulations for 1000 queues to see if your equations seem accurate. </p><p><br /></p>Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com2tag:blogger.com,1999:blog-8890204.post-75548712125914408422021-06-08T15:04:00.001-04:002021-06-08T15:04:52.377-04:00Machine Learning for Algorithms Workshop (July 13-14)<p>We're having an online workshop on "Machine Learning for Algorithms" on July 13-14, with a great group of speakers. Announcement below, link at <a href="https://fodsi.us/ml4a.html">https://fodsi.us/ml4a.html</a>, free registration (but please register in advance)!</p><div class="gmail_default" style="font-size: small;">In recent years there has been increasing interest in using machine learning to improve the performance of classical algorithms in computer science, by fine-tuning their behavior to adapt to the properties of the input distribution. This "data-driven" or "learning-based" approach to algorithm design has the potential to significantly improve the efficiency of some of the most widely used algorithms. For example, such methods have been used to design better data structures, online algorithms, streaming and sketching algorithms, market mechanisms and algorithms for combinatorial optimization, similarity search and inverse problems. 
This virtual workshop will feature talks from experts at the forefront of this exciting area.<br /><br />The workshop is organized by Foundations of Data Science Institute (FODSI), a project supported by the NSF TRIPODS program (see fodsi.us). To attend, please register at <br /> <br /><a href="https://fodsi.us/ml4a.html">https://fodsi.us/ml4a.html</a> </div>Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com0tag:blogger.com,1999:blog-8890204.post-56169479533236897512020-11-29T22:25:00.003-05:002020-11-30T01:16:22.731-05:00ADAPT: Designing Activity-Informed Viral Diagnostic Assays<p><span style="font-size: small;">I wanted to give a pointer to a <a href="https://www.biorxiv.org/content/10.1101/2020.11.28.401877v1" rel="nofollow" target="_blank">new preprint on bioRxiv</a> on developing diagnostic assays for viruses, by (first author) <a href="https://www.haydenmetsky.com/" rel="nofollow" target="_blank">Hayden Metsky</a> (and others!) out of the <a href="https://www.sabetilab.org/" rel="nofollow" target="_blank">Sabeti Lab</a> at the Broad Institute (that I've been a bit involved with). Hayden, who somehow is both a computer science PhD and an expert in virology, has devised a novel software pipeline for developing diagnostics that are designed from the start to deal with genomic diversity (a virus evolves to have many somewhat different variants) and the challenge of false matches (you don't want to get false positives from matching some other different virus) -- also known as sensitivity and specificity. Algorithmically, he uses machine learning to determine scores for possible tests for matches to small pieces of the genome, or probes, and utilizes locality-sensitive hashing, combinatorial optimization algorithms for submodular maximization, and sharding pattern matching across tries as substages in the overall design. 
</span></p><div class="gmail_default" style="font-size: small;">I am always excited to see algorithmic ideas being used to solve real-world problems, and this is a deep and difficult example of the "algorithmic lens" at work. I am optimistically hopeful that this type of technology will help drive the development of viral diagnostic and monitoring methods forward. </div><div class="yj6qo"></div><div class="gmail_default adL" style="font-size: small;"><br style="background-color: white; color: #222222; font-family: Arial, Helvetica, sans-serif;" /></div><div class="gmail_default adL" style="font-size: small;">Some additional pointers: <a href="https://github.com/broadinstitute/adapt" rel="nofollow" target="_blank">to code</a>, <a href="https://adapt.sabetilab.org/covid-19/" rel="nofollow" target="_blank">to some array designs for SARS-CoV-2</a>, and <a href="https://twitter.com/haydenmetsky/status/1333121522127548416" rel="nofollow" target="_blank">Hayden's twitter thread describing the work</a>. </div>Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com0tag:blogger.com,1999:blog-8890204.post-2673151467145747322020-11-26T10:46:00.003-05:002020-11-26T10:46:45.080-05:00TCS Connections Questionnaire<p>I wanted to link to a survey that is up entitled <a href="https://docs.google.com/forms/d/e/1FAIpQLSeubLuaICwNvHfHto7nBDw_gwkrqKdo-_Fjyz7XZONJ0tJRoA/viewform" rel="nofollow" target="_blank">Committee on TCS Connections Questionnaire</a>. They are examining modifying approaches to publishing in the theoretical computer science community, and they are focusing on FOCS/STOC.</p><p>I personally approve of the idea of the committee, though I admit I am concerned that it's too little, too late. For years, FOCS/STOC has been a culture concerned with some sense of "prestige" -- the number of accepted papers has to be kept low, because we want people outside of theory to take FOCS/STOC as an imprimatur for the top theory work. 
Because of this, FOCS/STOC has stayed essentially the same size, while the field (whether you view the field as TCS or computer science writ large) has expanded. This has led to a proliferation of additional conferences (ITCS, HALG, various theory workshops...) that reduce the importance of FOCS/STOC and their role in creating community cohesion. It has also led to other areas (most notably AI) becoming the home to work that should be quite at home in major TCS conferences. </p><p>I don't think FOCS/STOC is what it used to be (the central home for theory results, when theory was smaller) or what it has supposedly wanted to be (the home for the best theory results). I think it makes a lot of sense to stop and think about what they should be for the future. Hence the survey is important, and I encourage the theoretical computer science community to respond. I'm not sure, though, that there are great answers -- external forces, and the community's general aversion to change, may mean that there is not much to be done. </p>Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com0tag:blogger.com,1999:blog-8890204.post-41741133979672723122020-09-02T20:32:00.002-04:002020-09-02T20:32:41.655-04:00Broad Testing Thank You<p> I have a loose association with the Broad Institute, an institute created so that "complementary expertise of the genomic scientists and the chemical biologists across MIT and Harvard be brought together in one place to drive the transformation of medicine with molecular knowledge." (See <a href="https://www.broadinstitute.org/history">https://www.broadinstitute.org/history</a> )</p><p>They recently passed an amazing milestone, having performed over 1 million Covid tests. They weren't set up to be a Covid testing lab, but they converted their institute space to respond to the Covid crisis. 
(See <a href="https://covid19-testing.broadinstitute.org/">https://covid19-testing.broadinstitute.org/</a> ) </p><p>In short, they stepped up. They certainly didn't have to, but they did. Maybe it will help them do good science, now and in the future. But my understanding is that they saw a clear societal need (lots of <a href="https://www.amazon.com/gp/product/B07DFZ12KV/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=michaelmitzen-20&creative=9325&linkCode=as2&creativeASIN=B07DFZ12KV&linkId=5a5c250c33b2ba71012ff577c69e1a77">outbreak specialists there</a> ) and they realized they had the expertise and equipment to do good that went beyond science. I just wanted to give a shout out to the Broad for their good works, scientific and societal. </p>Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com0tag:blogger.com,1999:blog-8890204.post-21517902747539330952020-06-27T14:06:00.000-04:002020-06-27T14:06:03.221-04:00STOC Workshop on "Algorithms with Predictions" The STOC workshop on Algorithms with Predictions was on Friday, and I thought it went really well! I can't speak for my talk, but the other 3 talks (Tim Roughgarden, Edith Cohen, Ravi Kumar) were fantastic and inspiring, and I really recommend them for anyone with an interest in "Beyond Worst-Case Analysis". <br />
<br />
The talks are <a href="https://www.youtube.com/watch?v=byKYJwN6XKw&feature=youtu.be">all on Youtube</a>. And the <a href="https://www.mit.edu/~vakilian/stoc-workshop.html">workshop page</a> is full of useful links and information. Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com0tag:blogger.com,1999:blog-8890204.post-57328324783593893422020-06-25T13:35:00.001-04:002020-06-25T13:35:52.494-04:00Writing Code for a Paper : A Note to StudentsThis post both relates to some of the stuff I'll be presenting at Friday's STOC workshop on Algorithms with Predictions, but is also for future students in my classes, who sometimes wonder why I have them write code on theory material. (Somehow this might be a theme for a lecture in my grad class next semester, so maybe these are a first attempt at notes for the lecture. Comments and suggestions welcome.)<br />
<br />
Some of the work I'm doing is looking at how queueing systems perform with various sorts of predictions on the times the jobs take. This particular work I'm doing on my own. (While most of my research has been and is with collaborators, and it's one of the things I enjoy about computer science -- we're a very collaborative field, which seems to surprise many people! -- I still sometimes like to do research projects on my own. I've looked at queueing systems since my PhD thesis, and it's a bit outside the research interest of most of my collaborator pool, and it's "fun" sometimes to do my own thing. The reason why "fun" is in quotes is described below.) <br />
<br />
Often in my work in queueing I'm looking at mean-field limits (meant to model infinite systems of queues, which provides a good approximation for large finite systems under reasonable assumptions), where I can derive families of differential equations describing the system behavior. I can also simulate the large finite system directly, and make sure the results match. I generally do this for all of these types of papers.<br />
<br />
Now the numbers I get from simulating the system directly and from simulating the differential equations should match (say within 1% or so). If they don't, something is wrong. In an effort to avoid wrongness, I won't consider the paper ready for outside consumption until I get a match. Unfortunately, there are three ways things can go wrong.<br />
<br />
1. My simulation code for the queueing system might have bugs.<br />
2. My code to evaluate the differential equations might have bugs.<br />
3. My equations themselves might have bugs.<br />
<br />
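As an illustration of this kind of consistency check, here's a minimal sketch -- my own hypothetical example, not the code from any of the papers. It runs a discrete-event simulation of a plain M/M/1 queue and compares the time-average number of jobs against the known closed form E[N] = ρ/(1-ρ), which stands in here for the numbers one would get from the differential equations.

```python
import random

def simulate_mm1(lam, mu, horizon, seed=0):
    """Discrete-event simulation of an M/M/1 queue with arrival rate lam and
    service rate mu; returns the time-average number of jobs in the system."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0
    while t < horizon:
        # With n jobs present, the next event occurs at total rate lam + mu
        # (just lam if the system is empty, since nothing can depart).
        rate = lam + (mu if n > 0 else 0.0)
        dt = rng.expovariate(rate)
        area += n * dt
        t += dt
        # The event is an arrival with probability lam / rate, else a departure.
        if rng.random() < lam / rate:
            n += 1
        else:
            n -= 1
    return area / t

lam, mu = 0.5, 1.0
sim = simulate_mm1(lam, mu, horizon=200000.0)
exact = (lam / mu) / (1.0 - lam / mu)  # E[N] = rho / (1 - rho) for M/M/1
```

On a horizon this long the two numbers should agree to within about a percent; a persistent gap is exactly the signal to start hunting for one of the three kinds of bugs above.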
And I find there are two main categories of bugs. Sometimes the bugs are simple/standard coding mistakes -- I'm off by 1 on an index, or I cut and paste and forget to change an i++ to a j++ in one of my double loops, or I type an x instead of a y. Usually it's pretty easy to find these things, although I've had times where a hidden typo took hours to find. But sometimes the bug is a thinking mistake -- I've forgotten a subcase and so my equations aren't complete (and so my code evaluating the equations won't give the right answer), or I've not handled a subcase correctly in my simulation. That type usually takes longer. <br />
<br />
Usually, the first time through, most all of these types of bugs happen -- my math is off, I've typed some stuff wrong, it can all happen. And then, like coders everywhere, I go through and fix it. And it's painful. Sometimes everything goes right, a quick check or two and everything works. For more complicated stuff, it's more time figuring out what went wrong than setting up the code to begin with. And being the personality type to not let things sit, that can mean late nights figuring out what went wrong.<br />
<br />
For my talk this week, there was one last problem I wanted to include, which meant finally taking the model and writing the equations and code. I didn't even need it for the talk, but it's also the last bit before I put a paper draft on arxiv, so taking advantage of a deadline, I figured now was the time. Which means the last 2 days, I've spent many hours (and a late night) trying to resolve the disagreements between the two sets of numbers.<br />
<br />
On the plus side, when everything finally works, it's a wonderful feeling. And it always makes me feel better when I have worked to verify my math this way; this time, what kept me up well past midnight and took several hours to track down was actually a boundary case I had left out of the equations. (I had looked at the equations over and over again without noticing I had left out the subcase; I had to step through numbers from the differential equations one time step at a time to track down what was missing, and then the numbers told me what I had done wrong.)<br />
<br />
On the down side, it's work, and debugging is never particularly fun.<br />
<br />
For students out there, maybe I'm just explaining that I understand the pain that I am putting you through. You may wonder why I have you do simulations that take a few hours if you do them well, but days if you don't think through the best approach. But using programming and theory together can be powerful; it's helped me countless times in my research.<br />
<br />
(Related: on theory and experiments that <a href="https://cacm.acm.org/magazines/2015/9/191184-theory-without-experiments/fulltext">I've written on before</a>, along with <a href="https://cacm.acm.org/magazines/2015/9/191183-experiments-as-research-validation/fulltext">a viewpoint by Jeffrey Ullman</a>.)<br />
<br />
<br />Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com0tag:blogger.com,1999:blog-8890204.post-69523581976683956582020-06-17T13:52:00.003-04:002020-06-17T13:53:04.247-04:00Algorithms with Predictions: Survey and WorkshopThere's a whole new, interesting theory trend -- Algorithms with Predictions. The idea, spurred by advances in machine learning, is that you assume you have a predictor that tells you something about your input. For example, in caching, you might have a prediction of when the item you are currently accessing will be next accessed. Of course, machine learning predictions aren't perfect. Still, you'd like to use this prediction to improve your caching algorithm, but from the theory side, we'd like provable statements. For example, you could say, if my prediction is THIS good (e.g., the error is bounded under some metric), then my caching performance will correspondingly be at least THIS good (e.g., performance bounded in some way).<br />
<br />
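To make the caching setup concrete, here is a toy sketch of my own -- a hypothetical illustration, not the algorithm from any particular paper in this area. It evicts the cached item whose predicted next access is furthest in the future, i.e., Belady's rule applied to possibly erroneous predictions.

```python
import math

def run_cache(requests, predictions, k):
    """requests: the request sequence; predictions[t]: predicted time of the
    next request for requests[t] after time t (math.inf = never again);
    k: the cache size.  Returns the number of misses."""
    cache = {}   # item -> predicted time of its next access
    misses = 0
    for t, item in enumerate(requests):
        if item not in cache:
            misses += 1
            if len(cache) >= k:
                # Belady's rule on predictions: evict the item predicted
                # to be needed furthest in the future.
                victim = max(cache, key=cache.get)
                del cache[victim]
        cache[item] = predictions[t]
    return misses
```

With perfect predictions this is exactly Belady's optimal offline policy; the kind of theoretical question at stake is how gracefully the miss rate degrades as the predictions get noisier.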
If you haven't seen the burgeoning spread of this line of work and are interested, you're in luck. First, Sergei Vassilvitskii and I have written a <a href="https://arxiv.org/abs/2006.09123">brief survey that's now on the arxiv</a>. We had written it for a collection Tim Roughgarden is organizing on Beyond Worst-Case Analysis (that we thought would be out by now, and should be out from the publisher soon-ish), but we've gone ahead and put a version on the arxiv to make it available. The area is moving fast, so there are already many new results -- we hope to update the "survey" with new material as the area grows.<br />
<br />
Second, one of the <a href="https://www.mit.edu/~vakilian/stoc-workshop.html">STOC'20 Workshops will be on Algorithms with Predictions</a>. It will be on Friday from 1-4pm, with speakers Tim Roughgarden, Edith Cohen, Ravi Kumar, and me. I'll be talking about some of my recent work (in submission) on queues with predictions, and partitioned learned Bloom filters. (Arxiv papers are <a href="https://arxiv.org/abs/1902.00732">here</a>, <a href="https://arxiv.org/abs/1905.12155">here</a>, and <a href="https://arxiv.org/abs/2006.03176">here</a>, but maybe you want to see the talk first.) I'll also do a blog post on partitioned learned Bloom filters in the near future.Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com1tag:blogger.com,1999:blog-8890204.post-64215657338392821782020-06-06T18:40:00.000-04:002020-06-06T18:40:02.720-04:00CATCS Visioning Workshop<div style="color: #222222; font-family: "Lucida Sans Unicode", "Lucida Grande", sans-serif; letter-spacing: 0.1px; line-height: 22px; padding: 0px 0px 20px; text-align: justify; word-spacing: 1.4px;">
<span class="im">Reposting an important call -- these events have had a big impact in the past!</span></div>
<div style="color: #222222; font-family: "Lucida Sans Unicode", "Lucida Grande", sans-serif; letter-spacing: 0.1px; line-height: 22px; padding: 0px 0px 20px; text-align: justify; word-spacing: 1.4px;">
<span class="im">The <a href="https://thmatters.wordpress.com/catcs/" style="color: #3f9291; text-decoration-line: none;">CATCS</a> will be hosting a virtual <strong>“Visioning Workshop”</strong> the <strong>week of </strong></span><strong>July 20</strong> in order to identify broad research themes within theoretical computer science (TCS) that have potential for a major impact in the future. The goals are similar to the <a href="https://thmatters.wordpress.com/visioning-workshop/" style="color: #3f9291; text-decoration-line: none;">workshop of the same name</a> in 2008: to package these themes in a way that can be consumed <span class="im">by the general public, which we would deliver primarily to the Computing Community Consortium and others (e.g. funding agencies) to help them advocate for TCS.</span></div>
<div style="color: #222222; font-family: "Lucida Sans Unicode", "Lucida Grande", sans-serif; letter-spacing: 0.1px; line-height: 22px; padding: 0px 0px 20px; text-align: justify; word-spacing: 1.4px;">
While participation in the workshop is primarily through invitation, we have a few slots available for the broader community. If you are interested in participating, please see details of the application process below. The workshop will be organized according to area-wise breakout groups. Each breakout group will have 1-2 leads. Breakout groups will meet for 4-5 hours spread across several days and will be tasked with brainstorming ideas and preparing materials related to their topic. Leads are further expected to participate in plenary sessions held on Monday July 20 and Friday July 24 (4-5 hrs of additional time) where these materials will be discussed.</div>
<div style="color: #222222; font-family: "Lucida Sans Unicode", "Lucida Grande", sans-serif; letter-spacing: 0.1px; line-height: 22px; padding: 0px 0px 20px; text-align: justify; word-spacing: 1.4px;">
If you are interested in participating in the workshop, please fill out <a href="https://forms.gle/cdCTsLfUs56CDhKS9" style="color: #3f9291; text-decoration-line: none;">this Google form</a> by <strong>Monday June 15</strong>. On this form, applicants are asked to contribute one or two <span class="im">major results in the last 10 years whose significance can be explained in layperson terms, and one or two major challenges for theory whose significance can be explained in layperson terms. These descriptions </span>can be very brief. We will just use them to select participants and create breakout groups.</div>
Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com1tag:blogger.com,1999:blog-8890204.post-4745491493257715482020-02-02T22:03:00.001-05:002020-02-02T22:04:10.908-05:00Current CS 124 StatsThis is as much a personal record for me as anything (and perhaps of interest to Harvard people who read the blog). But I'm also putting the numbers here for others to use for comparison.<br />
<br />
I'm teaching the undergraduate algorithms and data structures class, CS 124, for the first time in a few years, so let's look at the initial enrollment numbers. Harvard has its strange and wonderful shopping period, so I'm just getting the numbers now after the first week. (Feel free to comment with comparative stats from your own institution!)<br />
<br />
Current numbers are a bit over 280 students, about 20 more than last year, about 60 more than the last time I taught it. Seems like the course has been growing about 20 people a year for the last few years. This number may go up or down a few (most likely down), but is probably close to where the class will end up. I keep thinking we've hit "peak 124", but it keeps going upwards. Part of this seems to be because a few more grad students (especially various master's students) are taking it. Something like 1/7 of the undergraduates take this course now, which to me always seems amazing. When I was an undergraduate at Harvard, CS courses were just not this big. My first year teaching at Harvard, CS 124 was about 90 students, and that was huge. I do realize, of course, that many places have even larger CS classes; I'm not meaning to complain.<br />
<br />
Surprising to me, about 1/4 of the course is first-years. This is more than I remember from my previous years teaching it; it's a hard course that requires both math and programming background, so first-years usually wait. On the other hand, the first-years taking it are self-selecting, and in past years the first-year grade average is notably higher than the class average.<br />
<br />
About 40% sophomores, not surprisingly the largest group. Then a mix of juniors, seniors, and grad students from various programs.<br />
<br />
The course is also offered through the Extension School, where the numbers change a lot. Right now it's about 45, but that will most likely drop further.<br />
<br />
If I've counted right, it's my 18th time teaching the class. I'm pretty sure I'll get to 20. I suppose it's possible I'll get to 30 or more, but it's likely someone else will take it over at some point.Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com2tag:blogger.com,1999:blog-8890204.post-50918428442979989452020-01-14T15:27:00.002-05:002020-01-14T15:27:55.437-05:00ITCS 2020, ReflectionsI've spent Sunday/Monday at <a href="http://itcs-conf.org/index.html">ITCS</a>, or Innovations in Theoretical Computer Science, where I am giving a talk on this paper on <a href="https://drops.dagstuhl.de/opus/volltexte/2020/11699/">Scheduling with Prediction and the Price of Misprediction</a> (LIPIcs page) (which is one of several recent works on <a href="http://www.eecs.harvard.edu/~michaelm/Talks/ALGO2109.pptx">Algorithms with Predictions</a> (powerpoint slides)).<br />
<br />
I'm told it's 10 years since ITCS started as a conference, and I was one of the skeptics that really did not think it was a good idea 10 years ago. So as I'm sitting in the sessions, what do I think now? What are the pros and cons of ITCS?<br />
<br />
On the negative side, ITCS is not very big. There are just over 100 people registered, so it's the size of a big workshop or small conference. (And considering that it's usually held in central places with lots of students, those numbers are buffered by locals.) Somehow, it's scheduled the 2nd week of January, right after SODA, which seems problematic, and certainly (if it's kept that way) may keep it from getting any larger. The number of "senior people" around at any one time seemed generally small, a problem for things like "Graduating Bits" (see below). As ITCS, at least 10 years ago, was supposed to build up to another "premier" TCS conference, focused on "innovative" work, the attendance seems a bit disappointing.<br />
<br />
On the neutral side, the conference is single-session, and to make that work, talks this year are limited to 12 minutes. Your mileage may vary on whether you think this is good or bad; it seems to work. (My take: I slightly prefer parallel sessions, because it means there's more times where there's something I want to see, and that's more important than the times where there are two talks at the same time I want to see. And 12 minutes is certainly workable but maybe a few minutes short. But again, that's just my opinion.) This year (and possibly going forward), some papers were accepted without a full talk -- instead they have a 3-minute talk and a poster in a poster session. Again, it's not clear to me if this is good or bad (though more paper acceptances makes me lean to the good side), but it seemed to work fine and people were happy with it. (Such papers are considered full publications and treated the same in the proceedings.)<br />
<br />
On the positive side, the conference seems well run. It's held in a university building, so no expensive hotel costs; instead they're generous with food and keep registration costs reasonably low. They use <a href="https://www.dagstuhl.de/en/publications/lipics">LIPIcs</a>, which provides a good system and approach for publishing papers at low cost. (Note, I was until recently part of the LIPIcs editorial board, so I'm biased there.) They seem to be covering their expenses from what I understand. The business meeting was blessedly short. They're recording all the talks. They're doing interesting things like "Graduating Bits" for people who are job-looking (where people graduating or coming out of a postdoc give short talks about their work).<br />
<br />
In terms of content, it seems really good. I've seen several good talks and interesting papers. While I'm not sure how to quantify whether ITCS work is more "innovative" than the work at other major TCS conferences, I do actually think they are noticeably more open at ITCS than other conferences about accepting papers based on the paper's underlying idea rather than on "technical mastery".<br />
<br />
My thoughts 10 years ago were that ITCS was not a great outcome for the community, and that instead the community should push for:<br />
<br />
1) Aiming to do better about opening up the criteria for paper acceptance, including weighing innovation/practical relevance in reviewing papers at FOCS/STOC/SODA.<br />
2) Increasing the number of papers accepted to these conferences, as too many good papers were being rejected.<br />
<br />
Viewed under this lens, ITCS could, I think, be viewed as a success. The theory community seems unwilling to expand conferences by accepting more papers. (I note that while the STOC theory-fest has changed and expanded STOC, it hasn't really seemed to increase attendance, and while the number of accepted papers has increased slightly, it hasn't kept pace with the growth in the field.) ITCS provides another venue for high-quality theory papers, thereby increasing the number of such papers published each year within the theory community, and I think it is generally viewed as a high-quality conference. And, as I mentioned, ITCS seems at least somewhat more flexible in its criteria for what is an acceptable paper. ITCS has, I think, in these regards been a benefit for the theory community.<br />
<br />
However, despite the success of ITCS, I think it's a band-aid on structural issues in the theory community. While these issues are complex and many-sided, just comparing with the growth and excitement in the AI community is a little depressing. Indeed, what I see is AI <i>absorbing</i> significant parts of the theory community; lots of theory papers now end up at NeurIPS, AAAI, ICML, or AISTATS, because the theory community doesn't seem to have room for them, or doesn't judge them as sufficiently significant. I view this as a problem for the theory community, the result of the problems I saw 10 years ago for which I didn't think ITCS was the right response. (Though perhaps it was/is not really viewed as a problem by others; all of CS, including theory, seems to continue growing; theory just seems to me to be growing less than it could or should.)<br />
<br />
To conclude, my thanks and congratulations to those many people that have organized and maintained ITCS over the years; I appreciate your work, as I think the whole community does. <br />
<br />
By the way, I thought it never snowed in Seattle, so I'm confused; what is all the cold white stuff outside?Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com3tag:blogger.com,1999:blog-8890204.post-41104860830976300452019-11-13T01:23:00.000-05:002019-11-13T01:23:47.003-05:00Allston : Students Should Speak UpI have had (as is pretty usual for me) some number of lunches with students this semester, and many of them asked me about the Computer Science move to Allston. A lot of times, these questions are concerns: what food will we have there, how frequently will buses run, what sort of space will there be for students to hang out/study etc., what all will be over there?<br />
<br />
These are good questions. I don't know the answers, I think a lot is still being worked out or decided. I realize that's not a great response. <br />
<br />
I've started encouraging students to ask these questions, more directly, of the powers that be. Specifically, I've suggested to the small number of undergrads I've been lunching with that they should start sending e-mails to the powers that be, asking whatever questions they have, and expressing their concerns. If you have questions, or wishes, regarding the food, safety, transportation, etc. at the new Allston building, you should ask or let your desires be known. And, I'll be frank here, you should not (just) let the faculty know -- we're at best an indirect channel to the powers, and they may listen to all of you more than they listen to us. Contact the powers directly.<br />
<br />
(Graduate students too.)<br />
<br />
It might be more impactful if students organize to make their questions and/or wishes known. Or not. That's up to you really. But if you want Allston to be successful from where you stand, now is a pretty good time to get involved, before we all start over there next fall. For Allston to be the academic home you want, you may have to speak up. <br />
<br />
I realize undergrads might not know who are the powers that be that they should contact. So I'll provide a starting list:<br />
<br />
Frank Doyle, Dean of the School of Engineering and Applied Sciences<br />
Claudine Gay, Dean of the Faculty of Arts and Sciences<br />
Alan Garber, Provost<br />
Rakesh Khurana, Dean of Harvard College<br />
Amanda Claybaugh, Dean of Undergraduate Education<br />
<br />
And finally, yes, of course it's a <a href="https://en.wikipedia.org/wiki/Powers_That_Be_(Angel)">Buffy (well, Angel) reference</a>. <br />
<br />Michael Mitzenmacherhttp://www.blogger.com/profile/06738274256402616703noreply@blogger.com0tag:blogger.com,1999:blog-8890204.post-36084084473202794862019-10-12T16:53:00.001-04:002019-10-12T16:53:04.999-04:00Crimson Editorial on Allston<a href="https://www.thecrimson.com/article/2019/10/7/editorial-campus-divided/">This was the first Crimson editorial</a> I can recall about the upcoming move computer science will be making to Allston next year. <br />
<br />
I'd encourage students (both undergraduate and graduate, but perhaps especially undergraduates -- there are more of you!) to find ways to make their concerns prior to the move, and eventually their issues with it, heard by the powers-that-be at Harvard. You could interpret this as cynicism that I think Harvard will mess this up significantly in various ways. Or you could interpret this as optimism that I think that, while of course problems will arise, Harvard will work to fix them when they are brought to their attention by enough people. <br />
<br />
Under either interpretation, I think the students making their own voice heard will be very important to helping make this move a success. Michael Mitzenmacherhttp://www.blogger.com/profile/06738274256402616703noreply@blogger.com0tag:blogger.com,1999:blog-8890204.post-44768741592489823332019-10-03T01:06:00.001-04:002019-10-03T01:06:58.980-04:00Harvard Admissions Lawsuit Decision OutAs someone who reads a significant number of court documents and decisions (I still do expert witness work), I can recommend for your reading pleasure the <a href="https://admissionscase.harvard.edu/files/adm-case/files/2019-10-30_dkt_672_findings_of_fact_and_conclusions_of_law.pdf">very recent decision on the Harvard admissions case</a>. For those who want a sense of how Harvard admissions works, you will get a good summary of the information that came out during the trial. For those who want to see a well-written court decision, in my opinion, this is a good example. (Whether you agree with the decision or not, you should find the decision well written; it lays out the issues and challenges in determining the decision clearly, and similarly explains the reasons for the ultimate conclusion clearly.) And for those who care about the actual underlying issues of discrimination and affirmative action, I think the document provides a lot of food for thought, with a depth beyond what you'll see in the news coverage. Michael Mitzenmacherhttp://www.blogger.com/profile/02161161032642563814noreply@blogger.com0tag:blogger.com,1999:blog-8890204.post-3001312910372515882019-09-20T16:25:00.001-04:002019-09-20T16:25:25.684-04:00Changing TimesAn old friend from college sent me an e-mail, and it got me thinking. When I was an undergraduate at Harvard some significant number of years ago, I took the graduate level algorithms course offered by Michael Rabin and the graduate level complexity course by Les Valiant. There were maybe a half dozen people in each of the classes. 
(They were great classes, of course. But CS at Harvard back then was really, really small.) <br />
<br />
This semester, I'm teaching the graduate-level course on randomized algorithms and probabilistic analysis. Right now, the enrollment is 74 students; well more than half are undergraduates. Somehow, that says something to me -- about how the field has grown, and in at least some regards how Harvard has changed. And about how much more prepared students are these days for these kinds of classes. (Knowledge of probability is much more prevalent.) Class sizes have been creeping up for so long that while it's noticeable year-to-year, it's much more stark and remarkable when I think back to my own time in college. <br />
<br />
Of course, it's also on my mind because it's a pain teaching a graduate class that large. But it's a pain I can live with -- if I didn't like teaching, I wouldn't have become a professor. And it's gratifying, if also a little bit shocking, that there's this kind of interest in the subject I really love, one I've been excited by for decades. <br />
<br />
<br />
Off to ALGO/ESA 2019<br />
September 7, 2019<br />
<br />
I'm shortly hopping on a plane to head to ALGO/ESA. I'll be giving a survey-ish talk on Learning Augmented Algorithms, covering my work so far in the area as well as some of the work by others. I think it's a highly promising direction fitting into the framework of Beyond Worst Case Analysis, so I'm excited to give the talk, and I'm hoping the area is still novel enough to be new to most of the audience. <br />
<br />
For those of you who are there, feel free to say hi -- I'm looking forward to talking to people. <br />
<br />
<br />
Happy New Academic Year: Teaching Randomized Algorithms<br />
September 4, 2019<br />
<br />
It seems I haven't written on this blog for a while.<br />
<br />
Today was the start of a new semester. I'll be teaching Randomized Algorithms and Probabilistic Analysis, using the new edition of my book with Eli Upfal as a base, and throwing in other material. (Everyone should buy the book! <a href="https://www.amazon.com/gp/product/110715488X/ref=as_li_tl?ie=UTF8&tag=michaelmitzen-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=110715488X&linkId=d598fb636637ebb656410099e7042a7c">Here's a link</a>.) <br />
<br />
It's a graduate-level class, but generally designed for first-year graduate students, and there were a lot of undergrads "shopping" it today. (We don't do pre-registration at Harvard, and students get the first week to choose classes, known as shopping.) So many that people were standing outside the doors of the room. But because we have a bit of a shortage of classes this semester, I'm guessing a good fraction of students were just checking it out. We'll see Thursday, but for now I'll predict we'll fit in the classroom, and wait to see if I'm wrong. (If I'm wrong, that's wonderful too.) <br />
<br />
It's been four years since I last taught the course, so this time I'm trying something new. When I've previously taught the course, I tried to make the class inviting and friendly by telling the class we'd begin without assuming they knew probability, and so the first couple of weeks would be spent reviewing basics (like, say, linearity of expectations and union bounds), albeit in a CS algorithms context. This time, I let the class know I'm assuming they know (or will pick up) basic probability, and so they should read chapters 1-4 on their own, and we'll start with Chapter 5, balls and bins models. Over the last decade, I've seen a huge shift in probability knowledge -- Stat 110, Harvard's probability course, has become one of Harvard's biggest classes. Many students have already taken AI or ML or even data science courses where they've done some (further) probability. It feels appropriate (and safe) to assume people entering the class know probability, or can review what they need on their own, and start the class further along.<br />
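For readers who haven't seen balls and bins models: a quick simulation (my own sketch, not from the book or the course) gives a feel for the classic result that throwing n balls into n bins uniformly at random yields a maximum load of roughly ln n / ln ln n -- surprisingly small, and growing very slowly with n.

```python
import random
from collections import Counter

def average_max_load(n: int, trials: int = 30) -> float:
    """Throw n balls into n bins uniformly at random, and return the
    maximum bin load averaged over the given number of trials."""
    total = 0
    for _ in range(trials):
        counts = Counter(random.randrange(n) for _ in range(n))
        total += max(counts.values())
    return total / trials

if __name__ == "__main__":
    # The maximum load grows very slowly as n increases.
    for n in (100, 10_000, 100_000):
        print(n, average_max_load(n, trials=10))
```

Running this for a few values of n makes the slow (roughly ln n / ln ln n) growth of the maximum load quite vivid.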
<br />
Now finally, a request. It's actually hard for me to teach when using this book, because I don't want to just read the book to the students. That's boring. On the other hand, if I thought something was important, I most likely already put it in the book. We have to mix up the standard lecturing format a bit. So two things we'll be doing are<br />
<br />
1) doing some "puzzle problems" at the beginning of most classes, so people can try to solve problems. (Kind of a flipped classroom approach, but not a full commitment.)<br />
2) reading papers, related to the class topics.<br />
<br />
So if you have any good suggestions of probability puzzle problems, or readable papers (particularly application papers) that use relatively basic probabilistic analysis in neat ways, send them over. I've got a semester to fill.<br />
<br />
For curious people, here's one of today's starting problems, which I first learned about in graduate school. (I'm pretty sure I owe thanks to Claire Kenyon for teaching it. I'll link to the corresponding Wikipedia page on the problem maybe later.)<br />
<br />
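Puzzles like the one below are easy to explore with a quick Monte Carlo simulation. Here's a minimal sketch (my own illustration, not part of the original post) that pits any two three-flip sequences against each other:

```python
import random

def play(seq_a: str, seq_b: str) -> str:
    """Flip a fair coin (H/T) until one of the two sequences appears
    as a run of consecutive flips; return which player won."""
    history = ""
    while True:
        history += random.choice("HT")
        if history.endswith(seq_a):
            return "A"
        if history.endswith(seq_b):
            return "B"

def win_rate(seq_a: str, seq_b: str, games: int = 20_000) -> float:
    """Fraction of games won by the player holding seq_a."""
    wins = sum(play(seq_a, seq_b) == "A" for _ in range(games))
    return wins / games

if __name__ == "__main__":
    # One sample matchup: the estimate should come out near 1/3,
    # i.e., the player holding HTH loses to HHT about 2/3 of the time.
    print(win_rate("HTH", "HHT"))
```

Trying different pairs of sequences is a good way to build intuition before working out the exact probabilities.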
After lunch one day, Bob suggests the following game to Alice to see who pays. Alice and Bob will each choose a different sequence of three flips. (So one could choose "Heads-Tails-Heads'', and the other "Tails-Tails-Tails'', for example.) After they choose, a fair coin will be tossed until one of their sequences appears as a consecutive subsequence of the coin tosses. The player whose sequence appears first wins. (Note that with the sequences above, if the flips come up Heads-Tails-Tails-Tails, the player who chose Tails-Tails-Tails wins as soon as that subsequence appears; the game is not played in batches of three flips starting over each time.) Bob politely says that Alice can choose first, and after she chooses and tells him her sequence, he'll choose a different sequence. What should Alice choose?<br />
<br />
ANALCO, SOSA, SODA post<br />
January 9, 2019<br />
<br />
I spent the last few days at SODA-ANALCO-ALENEX-SOSA in San Diego. (Nice location choice, I'd say!) Here's some news.<br />
<br />
This will be the last ANALCO (Analytic Algorithms and Combinatorics). Apparently submissions have been decreasing, so they've decided it will halt, and work on these topics will go into SODA and other conferences. I'm not sure how to think of it -- I believe we as a community have far too many conferences/workshops generally, but I think the SODA model of having ANALCO and ALENEX (and now SOSA, I imagine) folded cleanly into the main conference is an excellent one. I also like the ANALCO topics. But I can understand the time may have come to do something else. Thanks to everyone who worked to organize ANALCO and keep it going these many years.<br />
<br />
It looks like SOSA (Symposium on Simplicity in Algorithms) will be taking its place in the SODA lineup. I co-chaired the symposium with Jeremy Fineman this year, its second. I was surprised by the high quality of the submissions, and was then further surprised by the strong turnout at SODA. The room was quite full for the Tuesday afternoon sessions, and there were easily 75+ people at several of the talks. I do think there's a need for SOSA -- no other workshop or conference in our area hits the theme of simplicity, and it's a really nice fit with the rest of SODA. I'm hoping it will last, and in particular that it will continue to draw a good number of high-quality submissions, but that depends on all of you. Ideally, there will be a positive feedback loop here -- now that there's a good home for this type of work (besides notes on the arxiv), people will be more inclined to write up and submit things to SOSA. For Tuesday's talks, I'll call out Josh Alman's great presentation on "An Illuminating Algorithm for the Light Bulb Problem" as my favorite of the day.<br />
<br />
With ANALCO exiting, though, I think there's more room for additional satellite events at SODA, so hopefully some people will get creative.<br />
<br />
If I had thought of it, I would have live-blogged the business meeting. As highlights: first, Sandy Irani presented the report of the ad hoc committee to combat harassment and discrimination in the theory of computing community. (See <a href="https://www.ics.uci.edu/~irani/safetoc.html">here</a> for the report.) There was an overwhelming vote to adopt its recommendations going forward. It's good to see progress in addressing these community concerns. Second, Shuchi Chawla will be the next PC chair, and she brought forward a plan to allow SODA PC members to submit papers (with a higher bar), which was also voted on favorably.<br />
<br />
I suppose the last note is that Jon Kleinberg's invited talk was the conference highlight you expect a Jon Kleinberg talk to be, with interesting results and models related to fairness and implicit bias.<br />
<br />
Thanks to SIAM and all the organizers for their hard work.<br />
<br />
NeurIPS 2018 Post<br />
December 5, 2018<br />
<br />
Today was my first day at a NeurIPS conference. An advertising note, before my thoughts on the experience:<br />
Tomorrow I'll be stationed at a poster 10:45 AM -- 12:45 PM @ Room 517 AB #169<br />
A Model for Learned Bloom Filters and Optimizing by Sandwiching<br />
and at the same time Diana Cai will be stationed at a poster I'm involved with:<br />
Room 210 #26<br />
A Bayesian Nonparametric View on Count-Min Sketch<br />
Please stop by to talk to me if you're around -- Room 517 seems to be a bit off the beaten path and I'm concerned I'll be lonely with nobody to talk to. And of course stop by to see Diana too.<br />
<br />
NeurIPS is just huge. It's like an extremely large academic conference (of 1000-2000 people) glued to an extremely large industry conference (of more than that). Just the academic part (talks and poster sessions, 3 parallel sessions for talks) is a lot, and then there's a trade show with booths from 50+ companies there too.<br />
<br />
I think I'd like the academic part more if I were in the area -- I'm coming in from the algorithms perspective, and there's a bit of a language gap. It seems to me the conference suffers from some of the standard aspects of large conferences -- with scope and size that big, you have to look and find the things that are interesting and important to you, because a fair bit probably isn't. And while I can't tell entirely myself, I'm told by others that, given the size, there's not a lot of "junk" -- the work seems good on average. Also, given the conference size, it seems well organized -- staff manage the flow of people, they keep things on time, and there's plenty of room in the poster session (with appropriate food and drink). I can appreciate the work that must go into making something this big function.<br />
<br />
Because the conference is so big, I've run into a good number of "theory people" here. As a percentage of attendees, we're probably small, but it's a good number because it's so big. I kept running into people at the poster sessions, which was nice.<br />
<br />
The trade show part is impressive in its own way. If you didn't know AI was big, this would tell you. All the big players are there, but there are at least a dozen machine-learning focused companies I haven't heard of, and that's not including the dozen or more consulting/Wall Street/hedge fund firms that are big enough into AI that they want to have a presence here. I understand there's lots of networking and pre-interviewing and interviewing going on. On the positive side, I feel like if I wanted to leave academia, I could land a job with a variety of AI-based companies, beyond the big companies. <br />
<br />
Hopefully tomorrow will be interesting as well.<br />
<br />
<br />
Rabin Postdoctoral Fellowship Advertising Post<br />
October 25, 2018<br />
<br />
Michael O. Rabin Postdoctoral Fellowship in Theoretical Computer Science<br />
<br />
Deadline for full consideration: December 3, 2018. Applications can be submitted here:<br />
<br />
https://academicpositions.harvard.edu/postings/8512<br />
<br />
The Harvard John A. Paulson School of Engineering and Applied Sciences at Harvard University seeks applicants for the Michael O. Rabin Postdoctoral Fellowship in Theoretical Computer Science. The standard duration of the Rabin Fellowship is two years. Rabin Fellows will receive a generous salary as well as an annual allocation for research and travel expenses.<br />
<br />
Past fellows are Mika Göös and Aviad Rubinstein, and the current fellow is Alexander Golovnev.<br />
<br />
We are looking for exceptional junior scientists in theoretical computer science, broadly construed. Rabin Fellows will be provided with the opportunity to pursue their research agenda in an intellectually vibrant environment with ample mentorship. While interaction with Harvard faculty, students, and visitors is encouraged, Rabin Fellows are free to pursue their own interests. Candidates are required to have a doctorate or terminal degree in Computer Science or a related area by the expected start date.<br />
<br />
Required application documents include a cover letter, research statement, CV (including a list of publications), and names and contact information for three references. We recommend that papers mentioned in the CV be available online on your homepage or on electronic archives. Applicants will apply on-line at the above address. We encourage candidates to apply by December 3, 2018, but applications will be accepted until the position is filled.<br />
<br />
Harvard is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.