## Thursday, January 31, 2008

### Algorithms for Newbies: Static or Dynamic?

A talk I just went to (by Lynn Stein) raised the following question for me.

While I have some strange ideas about how an algorithms class should be taught (like students should do some programming in an algorithms class), in most ways my algorithms class is quite traditional. Roughly, my class looks quite similar in the path it takes to Algorithm Design by Kleinberg/Tardos (or the corresponding subsections of the standard Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein). And specifically, I teach primarily what we'd call "static" algorithms; there's an input, a desired output, and the algorithm is the recipe.

These days, many of us do research in and arguably most commercial products use "dynamic" algorithms; that is, they're processes that are always on, reacting to the environment around them. (Your cell phone, for example, might be considered a dynamic algorithm device; so would a robot vehicle. If you like, you can think of "dynamic algorithms" as roughly being closely tied to "distributed systems".) My course is not geared to teach or have students explore dynamic algorithms, although they get brief mention. Lynn seemed to be suggesting that dynamic algorithms and programs should be taught before static algorithms and programs. It is, after all, the paradigm students are most familiar with, and arguably more interesting to them.

This doesn't sit right with me. In my mind, it's partly a "walk before you can run" issue -- you could teach a lot of calculus without teaching trig, but it doesn't seem the right way to go. Although that's not quite the right analogy; I can see that the issues in dynamic algorithms/distributed systems -- like communication, and responding to asynchronous incoming signals -- are often quite different from those for static algorithms, so much so that in many cases you don't have to know your standard static algorithms to write a dynamic one. It's more that I think that to do anything I would consider technically interesting, you need to know static algorithms first. That is, sure, you could probably teach people to write an interactive Internet chat program pretty easily, and they'd learn a lot by doing it, but that's like the task 20 years ago of writing a checkbook program, just harder because of issues raised by the interactivity. To do really interesting things with dynamic algorithms, my bias is that you need good static algorithms behind them. Like Google mail's search features, or Amazon's recommendation feature, or other stuff. (Or at least, in many cases, you should understand the static version of an algorithm before trying to understand the dynamic version of it.)

So now I'm wondering if I'm just old-fashioned. What do you all think?

## Tuesday, January 29, 2008

### More SODA Stuff: Congratulations to Susanne Albers

Another fun SODA activity: getting to (belatedly) congratulate Susanne Albers on her Leibniz Prize in person. I had the great pleasure of working with Susanne early in our careers, and I'm thrilled to see her get this kind of recognition, which she greatly deserves. (A paper of hers I remember greatly enjoying was On Randomized Online Scheduling, if you're looking for a starting point to check out her work.) And 2.5 million euros is real money, particularly at the current exchange rate!

After my queries, Susanne politely explained to me that the money could not actually be used to buy a house, but was for research. I suggested that now was a very good time to ask for a raise. More banter followed. Certainly, one of the pleasures of conferences is catching up with colleagues in distant places. Congratulations Susanne!

## Sunday, January 27, 2008

### The CRA blog on NSF Funding

If you aren't regularly reading the Computing Research Policy blog at the CRA, I recommend stopping by once a month or so. This month: find out about the latest NSF funding disaster (or "how the NSF and science in general got shafted in the budget process yet again"), including whether CDI will actually get funded (after we sucker researchers sent in 1300 proposals for about 30 grants...)

## Friday, January 25, 2008

### Harvard's Financial Aid

Bill Gasarch asked me to comment further on Harvard's new financial aid policy, which I briefly mentioned here, a post which strangely has gotten a few comments in the last 24 hours. (Of course, some crazy people have the wacky idea Harvard should be free.) One of the comments points to a recent op-ed piece in the New York Times; another commenter calls the piece illogical, which I'd have to agree with. I'll offer my opinion, which has been colored by discussions with Harry Lewis. (By the way, have you bought his book yet?)

First and foremost, I'm glad Harvard did it.

One complaint is that Harvard only did it to stave off potential future legislation requiring it to spend more of its endowment. I just don't think this is true. Harvard has already been raising its financial aid quite substantially over the last few years. There are plenty of other motivations and pressures to do so outside some sort of potential future legislation. In particular, if there are any political motivations, I'd say a more likely one is that the new President of Harvard and Dean of FAS did it to show some leadership and put a positive spin on their new administration out in the news. But I don't even think that's such a big reason. It's the direction Harvard's been moving for some time.

Another complaint is that this will put pressure on other schools who can't afford to do the same thing, particularly public schools, who will lose some talented students. I don't see this as a huge problem. Harvard only takes about 1700 students a year. Even if the whole Ivy League tagged along (like Yale has), that leaves plenty of talented students -- including the ones who would have gotten into Harvard but now didn't because the financial aid allowed a similarly or higher qualified student with fewer financial means to attend. When Harvard expands its entering class by a factor of 2, then people can complain what a terrible thing Harvard is doing, stealing good students away from other institutions. (Their argument will still be weak and misguided, though.)

I like how the two major complaints are so contradictory in nature. The first says Harvard isn't doing enough with its big endowment; the other says Harvard is doing too much.

Me, I'm thrilled. Harvard did something good: it made attending Harvard more affordable. In doing so, it showed leadership, and has got people talking about the cost of college in places like the Opinion page of the New York Times. I hope it will lead to further positive changes, and improve the affordability of college more generally.

## Thursday, January 24, 2008

### The Job Search, Another Perspective

I am chairing the search committee for Harvard this year, motivating me to comment on the view from the other side of the search process. Everything here should be taken as general commentary, my own opinion, not the opinion of my employer or this search committee, and not necessarily specific to our search, which of course I can't discuss in any detail. (The following comments are not theory-specific, though I use examples from theory.)

A general search begins with getting several hundred applications for a small number of interviews and an even smaller number of eventual offers. Candidate quality is quite high, to the point where it's very difficult to whittle the folders down to a number that can be reasonably considered for interviews. [I'm personally doubtful that there will be enough academic/research lab jobs available this year (in, say, North American academic institutions) for all the highly qualified candidates. All the good people will get a job, I'm sure -- perhaps just not in academia, or in research labs. We'll lose people to non-research industry. That's not necessarily a bad thing, but I'm sure there are plenty of people who would like to stay in research that won't be able to. Of course, even if you get an academic job, it's not clear these days there's enough research funding to go around for everyone, anyway...]

Candidates need to find a way to stand out. More precisely, their research record needs to stand out. Somehow, unfortunately, it's not even enough to have a number of papers in top-tier conferences. In theory, for example, there's a number of people with multiple FOCS/STOC/SODA papers. Let's hypothetically say there's a dozen candidates with 3 or more papers in top-tier conferences in an area. What will make you stand out as one of the top two or three that gets an interview?

Letter-writers confirming the quality of your research (and your contributions on multi-author papers) are one obvious answer, and this partially explains the often observed hiring bias in favor of students from top-tier institutions; they have top-tier letter-writers. It is, generally, helpful to come from a top-tier institution, but it's less clear that it matters which particular one. Research quantity is another way to stand out. Those rare extreme cases of people who manage to publish 20 or so papers in top conferences do get noticed, certainly.

Quality, however, matters much more than quantity. Are you working on exciting problems that can have a significant impact in some way? Are you following the crowd, or getting ahead of it? In essence, the question is "What is candidate X famous for?" If there's a good answer to that question when you replace "candidate X" with your name, you're much more likely to get an interview. And the statement "Candidate X is famous for writing a lot of papers" isn't really sufficient in itself.

Finally, I think there are further intangibles that contribute to this notion of reputation, impact the hiring process, and need to be thought about early on in graduate school. Giving good conference talks gets you and your work more notice, increasing the fame factor. Working with many people, including people from outside your home institution, increases the visibility of your work (and improves your collection of letters).

In the end, of course, all this advice seems obvious. Then again, being on the search committee and writing this up clarified it in my mind, and will make me reflect on my own recent work practices.

## Wednesday, January 23, 2008

### The Job Search, One Perspective

Of course the most interesting part of the SODA conference (and generally all conferences) is the conversations one has out in the halls. At some point in talking to one of the many people job-hunting this year, my own employment history arose, and at the risk of self-indulgence I thought it might be instructional to repeat here.

When I was finishing graduate school, the job market wasn't great. Most people went into postdocs. Your Jon Kleinbergs and David Kargers got jobs, of course, but most top 36 schools weren't hiring theorists out of graduate school. I felt pretty fortunate; I had interned at Digital Systems Research Center (SRC -- which, sadly, is now before-many-people's time...), and it seemed reasonably likely I might be offered a job. I also applied for several postdocs just in case, and applied to a small number of universities. I avoided a wide search, figuring my chances were small, and that I'd rather do a postdoc than take a job somewhere I didn't really want to go.

I got a small number of faculty job interviews -- including an interview at Harvard. But, at the end of the day, no academic offers. I did, happily, get a job at SRC. I think from the perspective of many graduate students, this would be seen as a failure -- no faculty job! (In telling the story at SODA, the look in the student's eyes seemed to suggest that interpretation.) In hindsight, of course, it's easy to see that this was far and away the best possible thing for me. SRC was a great place to be, full of innovative people and a strong ethic of theory/systems collaboration, and I received the benefits of mentoring from many great researchers (particularly Andrei Broder), as well as time to develop my research abilities and profile.

I decided to throw my hat in the ring again two years later. Since I liked SRC, I again only applied to a small number of places that I might conceivably leave SRC for. I got 3 interviews. And just one job offer, at Harvard, which had rejected me last time around.

I suppose in this rambling anecdote I'm hoping there are some messages job-seekers current and future might take away. Patience really is a virtue; you needn't get to where you're going immediately. (It seems many CS theory folks have had long, winding, and quite pleasant careers.) For many, it might be beneficial to have some time and experience before facing the pressures of the tenure clock. You should never take it personally if you don't get an interview or a job offer. Don't be afraid to fail. Summer internships are a very good thing. Research labs are a very good thing.

I'm sure many job-seekers, having gone from undergrad right to grad school, feel the pressure to get to right to the next step, a tenure-track position. It might not work out that way. But there are many paths to a happy career; I hope you don't get so tied up in the process that you stop enjoying what you're doing.

## Tuesday, January 22, 2008

### More From SODA, Andrei Broder's talk

Andrei Broder gave an interesting invited talk on the new area of "computational advertising". As Andrei was my mentor for several years, and as I am, historically speaking, always thrilled by his work and his insights, consider this a gushingly positive review.

The talk started with a wonderfully complimentary introduction by Allan Borodin, who described accurately how Andrei was one of the initial leaders in the area of Web search algorithms, and his early success is a large part of the reason that so many people from our area have become involved in the field.

Andrei described computational advertising as a new scientific sub-discipline, at the intersection of search, information retrieval, machine learning, statistical modeling, etc. The main problem: find the best match between a given user in a given context and a suitable advertisement.

One way of understanding context, naturally, is to use a search engine. To classify a keyword phrase, one could look at the search engine results, and re-define or interpret the keyword according to these results (say by giving it a category, to match a collection of ads). This can be made fast by classifying pages offline, and then using the pages returned from the search query to "vote" on a category for the query (rather than text-analyzing the returned pages on the fly). Even a weak classifier of pages does well if you have a reasonable voting scheme! Similar ideas can be used in content ads, using a mix of category and semantic information to find ads to go with a page. Andrei also raised the issue of caching ads: when are two queries "close enough" that you can reuse the ad from a cached query-ad pair?
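To make the voting idea concrete, here's a minimal sketch of query classification by page voting. The `search` function and `page_category` lookup are hypothetical stand-ins for a real search backend and an offline page classifier (the names are mine, not from the talk):

```python
from collections import Counter

def classify_query(query, search, page_category):
    # Rather than text-analyzing a short, ambiguous query directly,
    # retrieve pages for it and let their precomputed (offline)
    # categories vote on a category for the query.
    votes = Counter(page_category[page]
                    for page in search(query)
                    if page in page_category)
    return votes.most_common(1)[0][0] if votes else None

# Toy usage: a fake search backend and a tiny offline page index.
search = lambda q: ["p1", "p2", "p3"]
page_category = {"p1": "travel", "p2": "travel", "p3": "finance"}
print(classify_query("cheap flights", search, page_category))  # travel
```

The point of the voting scheme is robustness: even if the offline classifier mislabels some individual pages, the majority vote over many returned pages can still assign the right category to the query.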

I left the talk with the feeling that there were a lot of challenging "algorithm engineering" problems here. It's often hard to prove things in real-world settings, but one often needs real algorithmic insight to design solid, well-founded approaches to these real-world problems. This is why Google and Yahoo and MSN are hiring a lot of theoretically minded people. For such companies, there is always the aspect that one may have to give up some intellectual purity when working in these areas. You can't prove everything, and you need to make things that work. But this is a tradeoff. Not just a tradeoff in terms of money (though that's certainly there), but in terms of having a large base of users who actually use and gain from your algorithmic ideas. It's a tradeoff typical when working at the theory/practice boundary, but here there's a pretty clear economic/users motivation that's really pushing this area forward. I think the talk did an excellent job of introducing the basic problems and motivation to a general (theory) audience.

## Monday, January 21, 2008

### Some Thoughts from SODA

There are many reports from SODA at the various theory blogs, but here's a quick report:

Persi Diaconis gave a great talk on interesting connections among carrying, shuffling, and Young tableaux. The starting point: consider adding m n-digit numbers, where the digits are chosen uniformly at random in base b. Then the possible carry values are between 0 and m - 1. For large n, what fraction of the time is the carry value 0, 1, 2, ...? As the carries process is a Markov chain, this can be determined. He then connected this to the theory of shuffles. Here's a nice model for a shuffle: each "card" corresponds to a point chosen uniformly on [0,1]. An a-shuffle maps x -> ax mod 1. The resulting re-ordering is a shuffling of the cards, which leads the way to analysis. From there he went on to tableaux.
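As a quick illustration of the carries process, here's a small simulation sketch (my own, not from the talk) that adds m uniformly random n-digit base-b numbers column by column and tallies how often each carry value occurs:

```python
import random
from collections import Counter

def carry_distribution(m, b, n, trials=100, seed=0):
    # Add m uniformly random n-digit base-b numbers column by column
    # (least significant digit first), recording the carry into each
    # next column. The carry always stays in {0, ..., m-1}.
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        carry = 0
        for _ in range(n):
            column = sum(rng.randrange(b) for _ in range(m)) + carry
            carry = column // b
            counts[carry] += 1
    total = sum(counts.values())
    return {c: counts[c] / total for c in range(m)}

# For m = 2 numbers in base 10, the carries Markov chain spends
# roughly half its time at carry 0 and half at carry 1.
print(carry_distribution(m=2, b=10, n=1000))
```

Running this for larger m gives an empirical look at the stationary distribution that the talk's Markov chain analysis (and Holte's "amazing matrix") pins down exactly.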

Rather than try to summarize the talk, I'll point to his major references:

John Holte, American Mathematical Monthly: Carries, Combinatorics, and an Amazing Matrix
Persi Diaconis, web page, papers from 2003: Mathematical Developments from the Analysis of Riffle Shuffles
R. Stanley: Enumerative Combinatorics, Volume II, Chapter 7

He left open the problem of trying to develop a clean, pretty correspondence between the carrying process and shuffles.

Muthu conversed in the morning about MapReduce; see his new blog entry. In particular, the perspective and counterperspective on MapReduce in this post from the Database Column is quite entertaining.

I enjoyed the talk for (and want to understand the paper for) Improved Algorithmic Versions of the Lovasz Local Lemma by Aravind Srinivasan (who didn't give the talk, but it was still well-given). The Lovasz local lemma is quite beautiful, although I don't think I've ever used it in a paper. I'm inspired to try to find a good use for it.

## Friday, January 18, 2008

### Hunting for Problems

1. Read a lot. Exposure to problems and ideas is where new problems come from.
2. Go to talks. For the same reason you should read a lot -- for exposure to problems and ideas.
3. Talk to people. Don't just talk to your advisor. Find other people with interesting problems and research, and see if you can help them, or if by talking to them you get a new idea for a research direction. I certainly don't come up with all of my own problems. Sharing ideas with others is key. In fact, I strongly recommend you talk with your other graduate students as much as possible, and try to start a paper-reading group/research group with them. Other students have more time than your advisor, so leverage that and work together to solve a problem. (Thank goodness we work in a field where cooperation is considered a virtue and collaboration is the norm.) At worst, it will make your work/social life at graduate school much better and less isolated.
4. Don't limit yourself to reading papers by/going to talks by/talking to theorists. If you're looking to come up with a new problem, odds are the motivation for that problem will come from outside the theory community itself. Find out what kinds of problems the systems people are having, and see if you can turn it into a theory problem. Even if your theory version is too toy to solve the real-world problem, you'll have a reasonable motivation for your introduction. And if your theory problem actually helps solve their real-world systems problem, bonus -- you now have contacts/letter-writers from outside theory that can help you in the future. And again, it will make your work/social life at graduate school better and less isolated if you talk to other people in the department besides theorists.

## Wednesday, January 16, 2008

I think one of the signs that a graduate student is on their way to graduating is when they start taking control of our meetings. With younger graduate students, I've found I have to set the agenda -- asking questions, pulling out information from them, and telling them things they need to do. Some time ago, I noticed that Adam was coming into our meetings with a written agenda. He asks me questions, gets information from me, and tells me what I need to do. And then, on a good day, we can collaborate on research.

I'm not saying that graduate students can walk in, day one, and take control like that. Indeed, most probably can't and shouldn't. But as a graduate student, the sooner you can take charge of your education and set your own agenda -- so your advisor is a resource, rather than your manager -- the better off you'll probably be.

## Tuesday, January 15, 2008

### Distributed Beam-forming: New Preprint

Besides NSF proposals, another item that took up time over my winter non-break was finishing a submission to ISIT 2008. To me, this paper was another example of how CS theory people and EE theory people are often tackling the same sort of problems, just in a different language.

The paper is titled Distributed Beamforming with Binary Signaling. Here's the English version of the problem. A bunch of weak transmitters are trying to transmit the same message to a receiver. Before sending the message, they are all out of phase, so their signals potentially cancel each other. They'd like to send so that they are all mostly in phase, so the signals reinforce each other and the message gets through. Initially, nobody knows their phase. The only communication possible is all the transmitters can send a signal, and the receiver can broadcast information back. How quickly can alignment occur?

To better understand the problem, we simplified it as follows. In each round, each transmitter can send a bit, a -1 or +1. Suppose the jth transmitter sends the bit b(i,j) in the ith round. The jth transmitter has, for all time, a fixed phase p(j), which is +1 or -1. The receiver's obtained signal in the ith round is |sum_j b(i,j)p(j)|. If there are n transmitters, we'd like to get the obtained signal to beta n for some constant 0 < beta < 1 as quickly as possible; this is what we'll mean by alignment. The system is fully distributed, so transmitters can't communicate directly, and the only feedback they get is 1 bit broadcast every round. In this simple model, we looked at lower bounds and algorithms. The lower and upper bounds are both linear in n, but trying to get those constant factors right is apparently pretty important.
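For intuition, here's a toy simulation of the simplified model, using a natural random-ascent heuristic (my own illustration, not necessarily the algorithm from the paper): each transmitter flips its bit with some small probability, and the receiver's one-bit broadcast tells everyone whether the flips improved the signal.

```python
import random

def rounds_to_align(n, beta=0.5, flip_prob=0.05, max_rounds=200000, seed=0):
    # n transmitters with hidden fixed phases p[j] in {-1, +1}.
    # Each round every transmitter sends a bit; the receiver observes
    # |sum_j b[j] * p[j]| and broadcasts one bit: "better than the
    # best so far" or not. Transmitters keep their flips on a yes,
    # revert on a no. Returns the round at which the obtained signal
    # first reaches beta * n, or None if it never does.
    rng = random.Random(seed)
    phases = [rng.choice((-1, 1)) for _ in range(n)]
    bits = [rng.choice((-1, 1)) for _ in range(n)]
    best = abs(sum(b * p for b, p in zip(bits, phases)))
    for rounds in range(1, max_rounds + 1):
        trial = [-b if rng.random() < flip_prob else b for b in bits]
        signal = abs(sum(b * p for b, p in zip(trial, phases)))
        if signal > best:        # the receiver's 1-bit broadcast
            bits, best = trial, signal
        if best >= beta * n:
            return rounds
    return None
```

Note that the heuristic respects the model's constraint of one broadcast bit per round; the interesting question is how small the constant in the linear-in-n number of rounds can be made, which is where the lower and upper bounds come in.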

While this is a natural EE theory-type problem, it feels close to very similar problems in communication complexity, at least when simplified in this way. It was also interesting working on a lower bound, where we developed a reduction from a standard rate distortion problem. It seems to me EE people don't generally think in terms of reductions, at least not the way CS people do, although it's a terminology and framework that I think is increasing in use at conferences like ISIT. On the other hand, CS people don't always do reductions that give specific constant factors (in this case, related to entropy). So all in all it was an interesting reduction to develop here.

The "full paper" (ISIT papers are limited to 5 pages) will cover a lot more variations and have more details, though I expect there will still be many open problems. More generally, the theme of unifying the "communication complexity" picture between EE and CS, much as the coding theory picture has been greatly unified of late, seems like a great place to look for research problems.

## Sunday, January 13, 2008

### NSF-related stuff

Although it might seem like it, I haven't really been on vacation -- just a vacation from blogging. One thing I was doing was sending in an NSF preliminary proposal for CDI (Cyber-Enabled Discovery and Innovation -- an unwieldy name I never remember without looking up), with that inspiring January 8 deadline. I was also doing minimal work on an Expeditions pre-proposal, thanks to enterprising colleagues who will probably seek payback if we get to submit a full proposal.

Does anyone have an opinion on this whole letters of intent/pre-proposal system for these new grants? Letters of intent I understand -- it's good to know how many people intend to apply for a new grant. But pre-proposals? Does this minimize the work for anyone? Sure, a 6 page pre-proposal is somewhat easier than a full proposal, but there's still a high overhead in just getting something off the ground (and getting it into the Fastlane system). And the tradeoff is that now you may have to do both a preliminary and a full proposal, and there have to be 2 panels (1 for pre-proposals). I'd be happy if I thought someone from NSF had done the math and found that this approach would really save time and effort on our (the researchers') side and the NSF side, but I somehow think this was inspired by some bureaucratic incentive I wouldn't want to imagine. I'd be happy to be proven wrong, if anyone has inside information.

So while at this point I'm sick of proposals, and would love to spend time actually doing research instead of writing to the NSF about research I'll happily do if they fund me, I see a slew of proposal deadlines coming up that I might end up applying to, and they are certainly worth mentioning to the blog audience. Rather than point to individual calls, here's the link to the CISE list:

1. Theoretical Foundations. March 19 deadline. Looks standard. SING is still there. Nice to see that this time PIs can apply on two grants -- last time I applied, it was just one.
2. CyberTrust. March 24 deadline. I've never applied to this, but I hear many crypto/complexity people have had success.
3. NeTS. March 25 deadline. That's networking, for you theory folk. It looked like they've really revamped this call, completely changing all of the subareas. I'd say the subareas look more theory-friendly, with areas like Network Ecosystems and Exploratory Networking. But theory-friendliness seems to be at the whim of the panel, in my experience. (Sometimes that's good, sometimes that's bad.)
4. Assembling the Tree of Life. March 14 deadline. Not my kind of stuff, but I do know some theorists have received funding for algorithmic work on Tree of Life problems.
5. Human and Social Dynamics. February 19 deadline. Still plenty of time.

Sigh. So many calls, thankfully, not enough time to do them all. I wonder what's with the tightly grouped deadlines.