Wednesday, November 26, 2008

The Funding Cycle

Most of the last week was spent working on an NSF grant proposal. (By the way, does anyone know why they kept the pdf and txt formats of the call, but removed the html? That was annoying as I was trying to figure out exactly how many colons my title needed to meet government standards.) I don't have any really entertaining stories about it; that's just how I was spending too much of my time. November 28 is a funny day to have a deadline, though. For us, that translated into a submission deadline of yesterday. (Is your Sponsored Research Office really open on the 28th?) The next deadline (for small grants) is December 17, and I plan to get a proposal in for that as well.

I try not to continuously send in NSF proposals, but that seems to be the way it's been going recently. I'd prefer to metaphorically let the tank get close to empty as far as funds go, then apply and refill it, so I could spend less effort on the grant process. But the competition keeps growing, and the amounts available keep shrinking. Given the long lag between calls and the high risk each year, I feel the need to be more proactive than even a few years back, especially with the economy tanking; I'm not sure corporate money will be available in the coming year.

Anyhow, since a proposal did go in yesterday, I can relax over the holiday weekend. I hope all of you can do the same.

Complexity Theory Humor

Ah, the joys of outsourcing. Read down to the bottom to find responses by Gödel and Cantor...

Update -- someone at the site noticed and took down the original page (although it's still in the Google cache here.)

Monday, November 24, 2008

Center for Computational Intractability

This popped into my mailbox, and I can't see any reason not to pass on the announcement here. Especially notice the bit at the end about postdocs (get your paperwork in NOW!!!).


Dear Colleague:

I am writing to tell you about a new Center for Computational Intractability that is funded by the National Science Foundation and is a collaborative effort between Princeton University, Institute for Advanced Study, Rutgers University, and New York University. The PIs are Allender, Arora, Barak, Charikar, Chazelle, Impagliazzo, Khot, Naor, Saks, Szegedy, Wigderson, and Tarjan.

The Center's mission views "intractability" fairly broadly, and covers both algorithms and complexity theory as well as other topics in theoretical CS. I would like to draw your attention specifically to the following: (a) Visiting possibilities for short-to-medium term (up to 1 year). We can pay travel costs and salary. (b) Several workshops a year, with possibilities to visit the center at the same time. (We can pay expenses.) (c) Postdoctoral positions both at the center and at the affiliated institutions. (d) Streaming videos of all center activities and seminars and other web-based resources.

I invite you to visit our website and to especially click on "Opportunities" from the sidebar.

I was wondering if you could forward this email to others in your department (including students and postdocs) who may be interested in the center's activities. Please note that the deadline for postdoc applications is December 15, and the deadline for postdoc applications to IAS is December 1.

With best regards,

Sanjeev Arora

Tuesday, November 18, 2008

STOC notes

It looks like we ended up with about 329 submissions. (Hopefully that won't change much at this point.) That's between 40 and 50 papers per PC member at 3 reviews per paper.
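For the curious, here's the back-of-the-envelope arithmetic behind that load estimate. The PC size is my assumption (the post doesn't state it); something in the low-to-mid 20s makes the numbers come out to 40-50 papers per member:

```python
# Review-load arithmetic: total reviews divided across a hypothetical PC.
submissions = 329
reviews_per_paper = 3
total_reviews = submissions * reviews_per_paper  # 987 reviews to assign

for pc_size in (20, 25):  # assumed PC sizes, not stated in the post
    per_member = total_reviews / pc_size
    print(f"{pc_size} PC members -> {per_member:.1f} papers each")
```

With 20 members the load lands near the top of the quoted range, with 25 near the bottom.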

Thanks to everyone who withdrew the papers they weren't submitting. Otherwise, I would have had to go through and withdraw them manually myself.

Thanks to Shai Halevi for continuing to help me with the reviewing system.

Yes, I did get mail from about 10 people who hadn't known they'd need to file an abstract the week before, and I accommodated them. I think this should become standard, and everyone should get used to it. Again, keep in mind there are 40-50 papers per PC member; anything that makes their work easier is a good thing. (And Shai's interface for choosing preferences lets you see a paper's abstract pop up with a mouse rollover, so it's really nice to have abstracts early!)

Assuming things continue to go well, expect e-mail from PC members asking for you to handle a subreview before you head off for Thanksgiving....

Saturday, November 15, 2008

Technical Depth vs. Novelty vs. Potential Impact

Let's say you're the PC chair for a major (theory) conference, about to give instructions to the committee. The standard way to judge a paper in theory is primarily based on its technical depth, but there's certainly a push in the community (and, arguably, from our funders at the NSF) to consider other aspects, including novelty (could this start a new research direction, or give us a new way to look at things?) and potential impact (might people actually use these ideas?). How, exactly, should you instruct the PC to weight these various factors?

Conceivably, we could set up the reviews to have a score for each factor. For example, I'm on the PC for NSDI, a systems conference, and we have to give scores for Overall Merit, Technical Merit, Longevity (= how important will this work be over time), Novelty, and Writing (as if that score matters :) ). Personally, I don't like this, and I'm not intending to do it for STOC. It's more pain for me as a reviewer without, I think, giving meaningful information to the authors (instead of spending time deciding whether a paper is a 2 or a 3 in terms of novelty, let's give another comment in the review text!), and when it comes time to make the decisions, I'm not really sure what I'm supposed to be (Pareto-)optimizing in this multidimensional space.

I'm a big believer, for conferences, in the "simple" method, as I've said before -- papers just get a score from 1-5, under the following scheme:
1: Bottom 1/2 of submissions.
2: Top 1/2 but not top 1/3 of submissions.
3: Top 1/3 but not top 1/5 of submissions.
4: Top 1/5 but not top 1/10 of submissions.
5: Top 1/10 of submissions.
but that doesn't mean that reviewers shouldn't be using factors such as Longevity and Novelty, and even Writing, in deciding their overall score. So, as you're all finishing your submissions, now is your chance to make a suggestion -- how do you think the PC should weight these various factors?
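For concreteness, the cutoffs above can be written as a tiny scoring routine. This is just a sketch of the scheme as stated; how to break ties exactly at a boundary is my own choice, not something the scheme specifies:

```python
def score(rank, n):
    """Map a paper's rank (1 = best) among n submissions to a 1-5 score,
    using the percentile cutoffs from the scheme above.

    Boundary handling (a paper exactly at a cutoff gets the higher score)
    is an arbitrary choice on my part.
    """
    frac = rank / n
    if frac <= 1 / 10:
        return 5  # top 1/10
    if frac <= 1 / 5:
        return 4  # top 1/5 but not top 1/10
    if frac <= 1 / 3:
        return 3  # top 1/3 but not top 1/5
    if frac <= 1 / 2:
        return 2  # top 1/2 but not top 1/3
    return 1      # bottom half

# E.g., with 100 submissions: rank 10 scores 5, rank 33 scores 3, rank 51 scores 1.
```

The virtue of the scheme is exactly that it is one-dimensional: all the multi-factor judgment happens inside the reviewer's head before the single number comes out.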

Monday, November 10, 2008

"I write today about the global economic crisis and its implications for us at Harvard."

In what I'm taking as a sign of continuing Impending Doom on the economic front, I received mail from the 3 levels in the hierarchy above me regarding the financial state of Harvard in these tough times. (First from Harvard President Drew Faust, which is where the title quote comes from, then from the Dean of the Faculty of Arts and Sciences, and then from the Dean of the School of Engineering and Applied Sciences.)

Little was given in way of specifics, but the general theme was clear. About 1/2 of Harvard's budget comes from the endowment payout each year. (The curse of a large endowment -- this number is probably higher than most institutions.) While nobody is giving a number for Harvard, the widely quoted statement from Moody's financial research is that endowments will lose about 30% this year. Given that Harvard has been outperforming the market and most other endowments over an extended time period, you can take your guess as to whether our losses will be above or below average. No matter how you do the math, it's not good.

So that means there will be some belt-tightening, and some delays in various plans. Again, very little in the way of specifics, but I'm sure (and Faust's letter suggested) that Harvard's new progressive financial aid program would not be touched. I imagine most everyone is getting similar messages at other institutions, but feel free to share your stories in the comments.

STOC deadline, reminder again

Remind your friends (and, if you're feeling generous, your enemies) that indeed the short abstract for your future STOC submission is due today (full conference submission deadline - Nov 17). Thanks to Shai Halevi's help, the server is even synched with the deadlines on the conference Web page.

I've seen over 150 submissions so far, and it's not even close to midnight...

Friday, November 07, 2008

Reminder, Abstract deadline for STOC

Remember that November 10 is the deadline for the "short abstract" for STOC.

More information at the web site, or head to the submissions server.

Tuesday, November 04, 2008

IPAM Networks of Networks (Day 2)

Day 2 of IPAM also had a bunch of interesting talks. My favorite for the day was by Ramesh Johari, who was considering the following problem: suppose that you only allow bilateral transactions, corresponding essentially to a barter system, or in his view roughly how BitTorrent currently works. (People upload to you if you upload to them -- not necessarily the same amount, but there nominally should be some equivalence in terms of utility.) How much do you lose compared to a system where you allow multilateral transactions -- which corresponds to the system we all know and love, where there's money behind exchanges, so you can always take money and use it to buy something else later? He set up an interesting model to look at these questions and say something meaningful about the differences between the equilibrium states in these two settings.

David Alderson also gave a very interesting talk on modeling the Internet topology using optimization methods, and using it to study the scale of the damage an adversary could do to the Internet. His point of view is very different from (and, I think, much more compelling in the Internet setting than) the scale-free analysis popularized by, for example, Barabasi. I recommend checking out his papers (most of them co-authored with Doyle and Willinger, among others) on the theme.

Finally, I ended the day nicely by getting to talk with UCLA faculty Rafail Ostrovsky and Amit Sahai. Rafail expressed his usual enthusiasm, insisting that we find a problem to all start working on -- so now I even have homework to keep me busy on the plane ride home!

IPAM Networks of Networks (Day 1)

I spent Monday at an IPAM meeting on Networks. (I'll be here today, too.) I gave a very "high-level" talk, covering my taxonomy of networking papers (just replace "power-law" papers with the more general "networking papers" in this editorial of mine) and discussing my thoughts on how we get closer to validation/control in the Internet with a universal hashing architecture. Slides are here; it's sort of a mix of some other longer talks here, here, and here.

The highlight for me for Monday was listening to Tim Roughgarden (who always gives excellent, crisp, clear talks) on comparing FIFO vs. FairShare queueing policies using the worst-case price of anarchy over all possible utility functions (and, unfortunately, only quadratic cost functions). This is work in progress, so I don't think Tim will have slides up, but it's a very nice analysis. [Spoiler: FairShare wins!]

David Clark gave a public lecture in the afternoon on the Internet -- what's wrong with it, and his ideas for fixing it. It drew a big audience, and he was certainly interesting, amusing, and provocative. I've got to admit I have a lot of skepticism so far about how we get to a better Internet, one with security and with the economics of the creature designed in from the start instead of developing by accident. Which reminds me: what's the state of the NSF GENI project? I've been hearing rumors that it's headed to an early rest, but I'd be curious if anyone more in the know can comment anonymously.

Monday, November 03, 2008


In a recent review of a journal paper, I got the following comment:
In explaining why you are presenting simulation results, you say, "First we wish to check our theoretical analysis..." I don't understand this motivation. Your theoretical analysis is substantiated by mathematical proofs. What more evidence do you need of its validity?
Please keep this statement in mind.

I've stated frequently that theorists should actually implement their algorithms. I have some further recent anecdotal evidence to back up this claim.

I have students working on a variety of projects, and recently, I've had a strange convergence: in several of the projects, the students have found what appear to be non-trivial errors in recent theory papers (2 in CS theory, 1 in EE theory). It's not clear that any of the results actually break -- indeed, I don't think any of them do. I don't want to exaggerate. In one of them, a constant factor seems to be messed up -- not disastrous, but it is an important constant factor in context. And perhaps in one of the papers we could chalk it up to really just a sequence of confusing typos rather than an actual error.

Now I'm not naive enough to expect conference papers (or even journal papers) to be free of errors. But the key here is that these errors were found, or easily confirmed to me, by the students implementing the algorithms described in the papers. Then the errors sort of jumped out.

Implementing your own algorithm is a good way of checking your work. If you aren't implementing your algorithm, arguably you're skipping a key step in checking your results. Why should a reviewer go to the trouble of checking your work carefully if you're not?
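As a toy illustration of this kind of sanity check (my own example, not one of the papers in question): the classic result that throwing n balls into n bins uniformly at random gives a maximum load of roughly ln n / ln ln n can be checked in a few lines. A simulation that disagreed wildly with the prediction would send you straight back to the proof:

```python
import math
import random

def max_load(n, seed=0):
    """Throw n balls into n bins uniformly at random; return the heaviest bin's load."""
    rng = random.Random(seed)  # fixed seed so the experiment is repeatable
    loads = [0] * n
    for _ in range(n):
        loads[rng.randrange(n)] += 1
    return max(loads)

n = 100_000
observed = max_load(n)
# Leading-order prediction; for moderate n the observed value sits a small
# constant factor above it -- exactly the kind of thing you only learn
# (or get suspicious about) by running the experiment.
predicted = math.log(n) / math.log(math.log(n))
print(f"observed max load: {observed}, ln n / ln ln n: {predicted:.2f}")
```

The same few lines, pointed at your own algorithm instead of balls and bins, will surface a wrong constant factor far faster than a third read of the proof will.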

Moreover, imagine not some student but an actual systems-builder who decides your algorithm is worth implementing for their system. Imagine how they feel when they find things don't quite work as you've stated. That doesn't exactly inspire confidence, and it's the sort of thing that discourages someone from using your algorithm. More globally, it gives systems people the feeling that theory (and theorists) aren't important. Why should they be debugging your proofs? You should give them evidence that your algorithm can be implemented and works as claimed, by actually implementing it when it's possible and reasonable to do so.

I'm not stating that you personally have to implement your algorithm. Hire a student to do it! It's good experience for them. You can apply for an REU, or other student research funding.

So, to answer the anonymous reviewer: I think there's always plenty of good reason to check our mathematical proofs by experiment and implementation, and it's suitable to put those results in the journal paper as an aid to the reader (and reviewer!).