Sampath Kannan gave his overview from the NSF. It was actually very positive (the last slide ended with "CCF is alive and well") -- there seems to be a wave of new funding, and funding rates have been relatively high (and improving) over the last few years, way ahead of the "low teen" years around 2004-2005. Sampath also gave some strong verbal support to applied theory, pushing for more theory interactions with the rest of computer science (and other areas) and for a theory of experimental algorithms. He suggested there may be discussions upcoming for a Theory+CS systems program. I admit I was very pleased to hear such a strong message in support of applied algorithms in particular.
Salil Vadhan presented the outcomes of the Theory Visions workshop from last year, with the goal of producing "nuggets" to help highlight exciting efforts going on in theory and communicate them to the NSF and other computer scientists. Example nugget titles include Algorithms for Understanding Data on a Massive Scale, P vs. NP, Making Economic Theory Tractable, Achieving Privacy and Utility, Efficient Computation in the Physical Universe, Computational Approaches to Modeling the Brain and Cells, Computing as a Commodity, etc. The nuggets will be up at www.theorymatters.org.
The Godel Prize was awarded. I gave my conference report (I'll get the slides up at some point). Local arrangements report. Various other reports (SIGACT budget, STOC 2010 Boston, STOC 2011 FCRC). New Business - Theory of Computing, an open access journal, at www.theoryofcomputing.org. Princeton workshop Aug 25-29 on Barriers in Computational Complexity. New conference -- Innovations in Computer Science (ICS) to take place in China (most/all expenses paid) in 2010-2011. "ICS aims to regain and support the interest in works that are focused on exploring novel directions in computation and communicating new conceptual messages (e.g., introduction of a novel model, problem, or technique)."
Sunday, May 31, 2009
STOC 2009, Day 1
Just a couple of quick highlights from Day 1.
Costis Daskalakis, when discussing oblivious PTAS's for Nash equilibria, presented the following probability lemma that seems like it should be very useful (note: it's not clear I'm stating it correctly, or with all appropriate conditions): if we have collections of random variables X_i and Y_i, and the sums sum_{i=1}^n X_i and sum_{i=1}^n Y_i agree on their first k moments, then the variation distance between the distributions of the sums is 2^{-\Omega(k)}. (I believe the X_i and Y_i have to all be independent.) The result is based on something called the Krawtchouk expansion. I'm looking forward to seeing this argument; it seems like a potentially useful lemma for other contexts.
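For my own notes, here's the shape I believe the lemma takes -- the precise hypotheses (in particular, that the X_i and Y_i are independent indicator variables) are my guess and should be checked against the paper:

```latex
% Moment-matching lemma, as I understood it from the talk (hypotheses are my guess):
% X_1,\dots,X_n and Y_1,\dots,Y_n are independent indicator random variables.
\text{If}\quad
\mathrm{E}\!\left[\Big(\textstyle\sum_{i=1}^n X_i\Big)^{j}\right]
  = \mathrm{E}\!\left[\Big(\textstyle\sum_{i=1}^n Y_i\Big)^{j}\right]
\quad\text{for } j = 1,\dots,k,
\quad\text{then}\quad
d_{\mathrm{TV}}\!\left(\textstyle\sum_{i=1}^n X_i,\ \textstyle\sum_{i=1}^n Y_i\right)
  \le 2^{-\Omega(k)}.
```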
Aaron Bernstein gave a nice talk on his result with David Karger: an oracle for All-Pairs Shortest Paths that can find the shortest path when a node or edge fails; that is, you can find and store shortest paths avoiding an individual node or edge, and you can do it in (up to polylogarithmic factors) the same time and space as a standard APSP structure. This goes on my list of algorithms I'd like to have a student implement, to see how it really works.
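(To fix ideas on what such an oracle query asks -- and this is emphatically not their construction -- here's a minimal brute-force sketch that answers a single query by rerunning Dijkstra while skipping the failed node or edge; the point of their result is to avoid exactly this kind of recomputation.)

```python
import heapq

def replacement_distance(graph, s, t, failed_node=None, failed_edge=None):
    """Brute-force 'oracle' query: shortest s-t distance avoiding a failed
    node or (undirected) edge, by rerunning Dijkstra from scratch.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        if u == t:
            return d
        for v, w in graph.get(u, []):
            if failed_node in (u, v):
                continue
            if failed_edge is not None and {u, v} == set(failed_edge):
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Toy example: with edge (a, b) failed, the path must detour through c.
graph = {"a": [("b", 1), ("c", 5)], "b": [("a", 1), ("c", 1)], "c": [("a", 5), ("b", 1)]}
print(replacement_distance(graph, "a", "b"))                          # 1
print(replacement_distance(graph, "a", "b", failed_edge=("a", "b")))  # 6
```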
STOC 2009, Day 0
The Saturday before STOC was devoted to a 60th birthday celebration for Les Valiant. The program is here. I'm afraid I missed most of the morning talks because I didn't fly in until the morning, but the afternoon talks were really very good. (I hope the speakers will all put their slides up on their web pages -- it's useful for everyone!) Some brief highlights, with apologies to those talks I don't describe:
Rocco Servedio started the afternoon with a short survey of learning theory, starting with the foundations laid by Valiant's PAC-learning paper and some of the early work in the area, and then connecting it to recent results and still-open problems in learning theory today.
Michael Kearns followed with a talk that highlighted the impact of this early work on the development of the machine learning community and how it is practiced today. One point he highlighted is that Les was not content to just publish his CACM article on the theory of the learnable, but was proactive in trying to ensure that the ideas reached other communities (in particular the machine learning community) so that they could really bloom. I think a hallmark of Les's work is his willingness to not only "focus the computational lens" on other areas but also to try to frame models and ideas in ways that those other areas can build on them further.
Vitaly Feldman provided a well-presented short survey of Les's more recent work on evolvability, and his own work in that area.
Avi Wigderson closed with a talk showing some of the threads and connections leading from Les's work, focused primarily on theoretical results related to the permanent. One of the really nice things about this talk was that Avi tried to close his quick discussions of each thread by listing some remaining open problems. I really enjoy surveys that are structured this way -- not just "here's what was done" but a clear sign of "and here's what's still left to do". I've encouraged Avi (and will do so again) to get the slides for this talk up on his web site as soon as possible; it should be a useful resource for people (especially graduate students) looking for problems to think about.
Perhaps the one disappointment of the day was that Les himself didn't give a talk. But that's not surprising to me; Les avoids self-promotion, and lets his work speak for itself. I think the program itself pleasantly overwhelmed him, and provided a remarkable display of all that he has given the theory community (and the broader scientific community) through his work.
Thursday, May 28, 2009
STOC schedule updates
I was asked to put up the following schedule changes for STOC on the blog. The information will also be available at the conference of course.
1. The business meeting will be held 8:30-10 PM Sunday, instead of 7-9 PM, with the venue being unchanged (the Haverford/Baccarat, which is one half of the Crystal Ballroom). This should allow people to have a more relaxed dinner rather than having to rush to get back to the business meeting.
2. The Athena Lecture on Sunday and the two prize talks on Monday will also be held in the Haverford/Baccarat instead of the whole Crystal Ballroom, since the latter is very large.
See you there.
Tuesday, May 26, 2009
SIGCOMM PC Report
The chairs of the SIGCOMM PC were asked to submit a note to CCR on the Committee Process, available here. It's an interesting read. (Overall, I've found the systems community a bit ahead of the theory community in thinking about and reviewing its conference processes.) They also make some anonymized data available -- which looks safe to me, but if anyone thinks that's a bad idea, I think it's worth speaking up about it.
Some highlights:
1: "We do see a problem with requesting only 14-page submissions, and want to use this document to convey a position we frequently heard from TPC members as well: short papers could be an interesting addition to future CFPs. In a conference with such a low acceptance rate as that of Sigcomm, “small contributions” naturally find themselves at a disadvantage. The issue is that the description and evaluation of an ingenious but small idea certainly takes less than 14 pages. The same is true for work that improves on prior work or presents tools with important practical use but small technical contribution."
Short papers are, in my mind, also a potential mechanism for mixed theory/practice papers to get into SIGCOMM. One issue that arises is that theory papers aren't written the way systems papers are. In a theory paper, if we're describing an algorithm or data structure, our goal is to make it as GENERAL as possible -- here's a neat idea, here are some places it might be useful, please find others. SIGCOMM papers are designed almost the opposite way -- to show an algorithm or data structure is useful requires looking at a very specific application and experimentally detailing exactly what performance gains arise in the context of that specific application. I will push for short papers to include interesting algorithms and data structures that might be GENERALLY useful to lots of people and applications but may not yet have been shown to offer groundbreaking performance for a specific application. (As pointed out in the PC meeting, my work is often in that space, so I have a bias in this argument...) Suggestions for making this workable are welcome.
2: "ACM SIGCOMM defines a conflict of interest between an author and a reviewer if the two have worked together in the past two years. Students and advisors are considered “conflicted for life” and of course any institutional or private relationship between the author and the reviewer instantly qualifies as a conflict."
I just had to point out the "of course" there, after some of the inane commentary regarding my introducing much weaker conflict of interest policies and goals for STOC. (See here and here.) [When I've talked about the theory community's standard conflict practices to systems people, most politely shake their heads and mumble, "Well, if that's the way it's done..." The more open people privately make clear their opinion that our approach opens the door for all sorts of systematic biases and seems remarkably backward. And I'm phrasing that politely.]
3: "For a few years Sigcomm has used a two-tier TPC: “light” members are asked to do 10-15 reviews and are not invited to the meeting; “heavy” members are enrolled for 20-25 reviews plus mandatory participation in the TPC meeting. The split is mostly motivated by keeping the TPC meeting of manageable size. This year we introduced a third tier, called “senior” for lack of a better name. The role we intended for senior TPC members was a mix of conflict solver, an additional voice during the TPC meeting, helping hands for last minute reviews, and possibly carriers of a different perspective that could put the submitted research under a more positive light."
Compare with the theory discussion taking place here on the roles of junior/senior PC members in theory conferences. Also, compare with the theory conference practice of using 100+ subreviewers with PC members responsible for 45+ reviews. I'm not claiming this approach is better; but it is food for thought.
Saturday, May 23, 2009
Grades Out, Complaints Begin...
Apparently, students who filled out their class reviews got access to their grades today. It's been a while since I've taught a class this large (80 students) -- how many e-mails do you think I'll get from students inquiring about why their grades were lower than expected?
I can't really blame them for asking. In their defense, so far those who have asked did poorly enough on the final that their grade did move down a level from before the final. However, the ones who suggest they want to go back over problem sets from earlier in the semester (that they've had back for some number of weeks) to get more points in an effort to improve their final grade could potentially annoy me. Even if a problem was misgraded, they've had plenty of time to bring it up before (the TAs, who graded the assignments, are generally not available now to ask if a question comes up), and it seems clear the goal is to find a way to push themselves over the grade boundary line. Doing this after grades are turned in is a major hassle. Apparently, I should add a note to the syllabus with a statute of limitations on grade changes for assignments. Luckily (and by design) almost all students are far enough away from the grade boundary that even a substantial change on the homework won't change their final grade.
So far, before noon this Saturday morning, I've had 3 e-mails from students about their grades...
Wednesday, May 20, 2009
Godel Prize to Salil Vadhan
Congratulations to Professor Salil Vadhan of Harvard for winning SIGACT's Godel Prize.
(Oh, right, also congratulations to his co-authors Professors Omer Reingold and Avi Wigderson.)
Computer Science 222 : A Discussion
I'm reflecting on my graduate course on (vaguely) network algorithms from this year, and particularly on the discussion I had with the students on the last day of class. I view the class as being geared toward first-year graduate students, with the explicit goals of introducing students to research and how it's done, and of building theory/systems connections.
1. The class really tries to mix theory and practice -- on good days, we discuss two papers on a topic, where I've tried to have one come from the theory side and one from the systems side. At the end of the day, not surprisingly, the theorists like the theory papers more and the systems people like the systems paper more. But it does seem that the course opens the lines of communication, and they have a greater understanding and appreciation for the other side. The payoff, if any, may only be seen in the long term -- but I'm pretty sure there's a payoff there.
2. The class can apparently be a lot of work at times -- the theory people have to pick up things (how does TCP work?), and systems people have to pick up things (what's that notation mean?). I hope that's good training, on both sides.
3. Students seem surprised that not all the papers we read are "blockbusters" (especially since we start with the papers for PageRank and HITS -- two clear blockbusters). That's on purpose. Beginning graduate students have exaggerated expectations -- what they've generally seen are the blockbuster results, and they wonder where their breakthrough will come from. (This matches my recollection of starting graduate school.) They're unaware that research is a skill to be learned, that it's perfectly OK to start out with a "mere" incremental result, especially to get one's feet wet. For typical graduate students, there's a learning curve in figuring out how to dissect and analyze problems, organize and present experiments, write clear and compelling text, etc.; but many don't really think of their first year as training in this sense. Also, beginning graduate students tend to underestimate the potential value of innovative follow-on work; lots of incremental results are actually really valuable. (Few papers actually start new fields.) Several students said they were relieved to realize, while taking the course, that their first papers didn't have to be top-conference breakthroughs.
4. The most popular topic by far (for both systems and theory people) was information theory (particularly compression and related algorithms). Many said it was their first real exposure to information theory -- why hadn't it come up before? Some asked whether there was a course on information theory algorithmics. (None other that I know of at Harvard -- is there a course to recommend at MIT?) And really, you can't go through the Burrows-Wheeler compression algorithm without being completely floored. If I ever stop teaching undergraduate algorithms and data structures, maybe I should put together an advanced undergraduate course merging (practical) algorithmics with information theory and queueing theory.
5. There's demand to expand these theory/systems connections; while I focused on networks, there was interest in covering related topics in databases, distributed systems, and other areas.
6. I'm hoping that several of the class projects -- with or without my assistance and input -- can be expanded into research papers. A number of them look promising. However, I usually teach this course in the fall, and can use the momentum to work with students in the spring to realize the results more fully. I'm not sure that will work this year (especially with some projects being done by graduating seniors...)
Sadly, I probably won't teach this class again until Fall 2010 or 2011. Something to look forward to. (If anyone from the class reads this, feel free to post comments anonymously or mail me directly...)
Tuesday, May 19, 2009
Grading Time
I have to get in my final grades in the next day or so. Some thoughts and questions.
1) What's the average grade you give? (Anonymous answers are fine.) Without getting too specific, my average for my undergraduate class is usually somewhere between a B and a B+ depending on the year. (I would happily give straight A's in my undergraduate class if I felt it was merited. Hasn't happened yet.)
2) My course seems to have recently become a "proving ground" for freshmen who consider themselves good at math/CS. I had more freshmen than ever this year... and on the whole, they did MUCH better than average. (I had about 10-15% of the class as freshmen this year -- and they got about 1/2 the A grades...)
I'd like to encourage this. I'm thrilled to get bright freshmen into algorithms and data structures early (and I hope to get them in my grad classes or to do a thesis later). I have to remind myself to send these students a note and see if they want to be teaching assistants next year. It's great when you get a TA who can do the course multiple years...
3) My grades for my graduate class will be much higher than for my undergraduate class on average -- even for the undergraduates who take the class. This isn't really surprising. The graduate class is very self-selecting. (One interpretation is that only the very top undergrads from my undergrad class choose to take the graduate class, so of course they do well; the other is that the undergrads who like me as a teacher choose to take the class, and I'm just therefore easier on my grad class. What's the Jaccard coefficient of these two sets of students...?) And grad classes generally seem to have a higher curve, since most institutions (Harvard included) require grad students to maintain something like a B average, so anything less than a B is like a failing grade. Is there a huge difference between grades for your grad classes and undergrad classes at your institution? Even if you get undergraduates in the grad classes?
Thursday, May 14, 2009
A Problem Archive?
Having just gone through the painful process of making up my final exam (again), I'm inspired to think that there must be a better way. We should have a problem (and solution) repository so professors/instructors can easily find a collection of problems to borrow from or to use as a creative inspiration. It would be nice to be able to say, "I need a dynamic programming problem, a linear programming problem, and an approximation algorithm problem..." and go to one place to start getting ideas. (Right now, all of you that keep your exams online, I'm probably cribbing from you, thanks to Google... I notice more of you are keeping such info password-protected than a few years ago...)
I (obviously) haven't thought much about how this would work in practice, so consider this an open discussion. Of course, there's the problem of keeping such a repository protected from student eyes. Ideally, actually, perhaps we'd have a student site (unprotected) and a faculty site (password-protected). Would you use such a site? Would you contribute problems to it? What sort of incentives would make such a site work? What other uses (besides making up exams and problem sets) could such a site be good for?
Wednesday, May 13, 2009
Life After Rejection (Guest Post)
Michael M. asked me to post about an academic journey of one of my papers, so here goes.
In November 2008, I submitted a paper, "Distributed Agreement in Tile Self-Assembly," to STOC. The paper was rejected, but the comments I got from the reviewers were superb. They were extensive and specific -- and I agreed with everything they said. I changed the paper to address each of the concerns raised, and resubmitted it to DNA 15 (the 15th Annual Meeting of DNA Computing and Molecular Programming). This time around, my paper was accepted, and I recently learned that it won the Best Student Paper Award.
Two points seem important to me.
First, I didn't (and don't) take rejection personally. I view paper submission not as an event, but as part of a process. If I get a quality rejection letter, and I improve the paper based on the comments in the letter, it's just a matter of time before an improved version of my paper will get published somewhere.
Second, the STOC PC reviewers played a role in advancing computer science, beyond just putting together a program for STOC. My experience may be unusual. One congratulatory email I received, from a well-known theoretical computer scientist, basically said, "Congrats on your award, and I'm shocked that you got useful feedback from STOC reviewers. So congrats on that too." Therefore, I'd like to emphasize to anyone reviewing something that even if a submission is not a good fit for your particular venue, providing useful feedback is scientifically important. I'm very grateful that I had reviewers who approached their role conscientiously.
I'll conclude by shifting gears into a soundbite of my paper's technical results. I was able to show connections between the geometry of self-assembling networks, and the theory of multiprocessor synchronization. For example, three-dimensional self-assemblies can simulate strictly stronger shared objects than two-dimensional self-assemblies. These connections seem to intrigue both the "nano people" and researchers in distributed computing -- and I'm now investigating synchronization problems in several subareas of natural computing. If nothing else, it looks as though it'll be a lot of fun.
A pre-proceedings version of the paper is available here.
Friday, May 08, 2009
Hoopes Prize Committee
I'm serving this year, as I have every year I've been at Harvard, on the Hoopes Prize Committee for Harvard science theses. This is one committee I don't mind serving on at all. I get to read (or at least skim) a number of senior theses in a variety of areas -- mostly computer science, math, applied math, and engineering, but usually I have a couple in fields more removed from my natural interest, like psychology, astronomy, evolutionary biology, and even geology. (I admit, I eagerly avoid the very large number of theses in biology and biochemistry; I'm sure my colleagues in related fields can better deal with the jargon there.) The worst of them is generally quite interesting; the best are high-quality publishable research. The committee sends in the reviews, and meets for lunch to work out who gets the prizes.
The committee exemplifies all the possible problems one can imagine in committees (that have arisen in the context of discussing PC meetings on this blog).
1. Only 2 reviews per work.
2. No guarantee the reviewer is an "expert" in the specific subarea.
3. Widely varying interpretations on what the scores mean. (In past years we've been told to aim for an average of 3. Most submissions are quite good; novice reviewers tend to end up with an average closer to 4.)
4. Widely varying interpretations on what the prize should be for. It's a writing prize, for the sciences. How should one judge math theses, which, arguably, if written entirely correctly, nobody in the room might be able to properly read in the time available? How much weight should be given to scientific novelty versus the writing itself?
5. Increasing numbers of submissions per reviewer. (When I started, I think we each had six theses to read; this year, it has hit ten. The barrier to entry is low, so perhaps there's an increasing tendency to submit...)
And so on... in some sense, it's great training for program committee work.
Despite the potential problems, it generally works out well. Some years there are many disagreements that have to be discussed; most years, surprisingly few. Overall, there's always the feeling that the student work is quite amazing, and that we at some point have to draw a line and just pick what we think is best. I wish all committees left me feeling as positive when I left the room.
Thursday, May 07, 2009
Communications Technologies and Universities : No More Office Phones?
Here's an amusing article tying together budget cuts at universities and new technologies; the communications department at UW is cutting costs by getting rid of phones. Students who want to meet professors will have to e-mail them or find them.
I admit I would be loath to give up my office phone. If only to save my fingers from potential RSI (I haven't been cursed yet), I think using a phone instead of e-mail is a good idea in many situations. And there's no way I'd give students (or even many staff people) my cell phone number. On the other hand, I must admit my office phone is not a highly used device, and students in particular almost never reach me by phone. Indeed, these days a number of the calls on the office phone are from solicitors -- this has only really started happening in the past year or two, and I wonder if Harvard's phones were somehow off the grid of phone solicitors way back when or if there's been some other change. So while there's irony in a communications department getting rid of phones, I can see where the cost-benefit analysis might suggest it's the right thing to do.
Tuesday, May 05, 2009
An AMS Paper on Power Laws and Network Modeling
There's a new article up at Notices of the AMS entitled Mathematics and the Internet: A Source of Enormous Confusion and Great Potential, by Willinger, Alderson, and Doyle.
It's an interesting read, although I've heard some of the main points in talking with them before. In terms of specific examples of how problems in the modeling of the Internet arose, my take on their take of the history is that the trouble started with the well-known Faloutsos^3 paper (On Power-Law Relationships of the Internet Topology), which first appeared in SIGCOMM 1999, and which claimed that the degree distribution of the Internet's router-level topology followed a power law. It has been argued that there are a number of flaws with this paper, most notably in the underlying data, as described in this article, making their conclusions on the power-law relationship unreliable. (Given my recent posts and corresponding discussions on the SIGCOMM PC meeting, it's interesting to reflect on the role this paper has played after being published in SIGCOMM.) The second part of this troublesome history is the subsequent work of Barabasi and Albert, who leveraged this result to help proclaim that the underlying mechanism of the Internet topology is preferential attachment, explaining this power-law behavior. The authors discuss how this claim was made without sufficient validation, and argue that there are other optimization-based models that much better explain the guiding mechanisms behind the Internet topology, based on actual Internet engineering.
To start, I should admit not-so-humbly that I think the authors could have mentioned my very-relevant survey and editorial on these issues (the editorial, in particular, discusses the validation issue in this context). But leaving that aside, it's an interesting read with many possible lessons to draw or questions to think about. Are we insufficiently critical at early stages (e.g., conference publications), so that once a result is accepted, it becomes difficult to point out where it may be flawed? How do we deal with "noisy" network data in analysis? Are we getting to the point where only teams from Microsoft, Google, Akamai or AT&T (or some other big company) can publish in certain areas of networking because they're the only ones that can get good enough data? What are the useful ways mathematicians can contribute to the study of networking topology (yet another power law model does not seem to be compelling)?
I have to admit, I have a side worry with this appearing in the AMS Notices. Given how it sometimes appears that mathematicians view computer scientists, I worry that this article will reinforce the belief in the minds of some mathematicians that computer science is "non-serious" or "non-rigorous", which clearly wasn't the intention of the authors. The risk in explaining a history of "enormous confusion" and "great potential" is that too many readers will focus on the former rather than the latter.
Monday, May 04, 2009
New Paper: On Compressing Social Networks
I'm posting online the "final version" of a paper that will be appearing at KDD, On Compressing Social Networks; the collaboration came out of a visit to Yahoo last summer.
(Co-authors: Chierichetti, Kumar, Lattanzi, Panconesi, and Raghavan.)
The paper definitely tries to look at both the "theory" and "practice" of compressing social network graphs. On the theoretical side, one contribution is that we formalize arrangement problems that are variations of the standard minimum linear arrangement problem and that naturally capture features of the underlying compression problem. In the linear arrangement problem, we are given a graph G, and we wish to lay the nodes out on the number line via a permutation mapping pi:V->[1,n] (that is, in positions 1 to n, where n is the number of nodes), in such a way as to minimize sum_{(i,j) in E} |pi(i) - pi(j)|.
But when we're trying to compress the edges of the graph, and we're given an ordering of the nodes, the "cost" of compressing an edge is the cost of the pointer from one vertex to another: since the gap between the positions of i and j in the node ordering is |pi(i) - pi(j)|, the cost in bits for representing an edge from i to j is (essentially) log_2 |pi(i) - pi(j)|. This suggests considering the minimum logarithmic arrangement problem, where the goal is to find an ordering to minimize sum_{(i,j) in E} log |pi(i) - pi(j)|. (More generally, one could consider other functions of the gap besides the logarithm -- which would naturally correspond to different encodings of the natural numbers in this context -- but this seems the most natural.) We show that (in multigraphs) this problem is NP-hard. I admit, having pondered over this a bit, I wish we had a somewhat "nicer" reduction. In particular, we don't have hardness of approximation results (or an approximation algorithm), which would be a step forward. We also consider a more complicated variation of the arrangement problem that corresponds naturally to compressing successive gaps instead of individual gaps.
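(To make the two objectives concrete, here's a tiny sketch -- my own illustration, not code from the paper -- that computes both costs for a given node ordering:)

```python
import math

def arrangement_costs(edges, pi):
    """Given edges as (i, j) pairs and a permutation pi mapping each node to a
    distinct position in 1..n, return the linear and logarithmic arrangement
    costs: sum of |pi(i)-pi(j)| and sum of log_2 |pi(i)-pi(j)| over the edges."""
    gaps = [abs(pi[i] - pi[j]) for i, j in edges]
    return sum(gaps), sum(math.log2(g) for g in gaps)

# Toy example: a 4-cycle laid out in the order a, b, c, d.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
pi = {"a": 1, "b": 2, "c": 3, "d": 4}
print(arrangement_costs(edges, pi))  # gaps 1, 1, 1, 3 -> (6, log2(3) ~= 1.58)
```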
On the practical side, we explore several algorithms for compressing social networks, and see how they do on real data. Our main innovation here is a quick and useful method for generating an ordering of the social network nodes so that nodes with lots of common neighbors are near each other in the ordering, a property which is useful for compression. For Web graphs, this has generally been done using the external information of the URLs; sorting by URL naturally gives an ordering where nodes with common neighbors are near each other. In social networks, there are possible pieces of external information we could use -- zip code, or order of entry into the social network -- but these are much less reliable. We find a "shingle ordering" using the ideas developed for document similarity based on min-wise independence; basically, we pick a (first) random ordering of the nodes, and then group nodes according to their first neighbor in this random ordering. Two nodes i and j with neighbor sets N(i) and N(j) are then in the same group with probability equal to the Jaccard coefficient |N(i) intersect N(j)| / |N(i) union N(j)|; orderings within the groups can be handled according to external information, a second random ordering, etc. This works quite well. (The best comparable technique we know of, which also works quite well, is to order nodes by Gray-code orderings of their neighborhoods; see this paper by Boldi, Santini, and Vigna.)
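(Here's a minimal sketch of the shingle-ordering idea as described above -- a simplified illustration, not the code we actually used:)

```python
import random

def shingle_order(adjacency, seed=0):
    """Order nodes by their 'shingle': the lowest-ranked neighbor under a random
    permutation of the nodes. Nodes sharing many neighbors are likely to get the
    same shingle, so they tend to land near each other in the resulting order.
    adjacency: dict mapping node -> set of neighbors."""
    rng = random.Random(seed)
    nodes = list(adjacency)
    rank = {v: r for r, v in enumerate(rng.sample(nodes, len(nodes)))}
    def shingle(v):
        return min((rank[u] for u in adjacency[v]), default=len(nodes))
    # Ties within a group could instead be broken by external information
    # (zip code, join order) or by a second random permutation.
    return sorted(nodes, key=lambda v: (shingle(v), rank[v]))

# Toy example: u and v share most of their neighbors, so they will often end up adjacent.
adjacency = {
    "u": {"a", "b", "c"}, "v": {"a", "b", "d"}, "w": {"e"},
    "a": {"u", "v"}, "b": {"u", "v"}, "c": {"u"}, "d": {"v"}, "e": {"w"},
}
print(shingle_order(adjacency))
```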
I enjoyed working on this paper quite a bit, for several reasons. Micah Adler and I wrote one of the early papers on Web graph compression, utilizing the idea of finding nodes with similar link structures; it was nice to return to this theme. I think the paper ended up being a good mix of theoretical framework and experiments based on implementation. It's also just pleasant when my regular trips out to California lead to concrete results -- it gives me a good excuse to continue taking regular trips to California!