Thursday, April 30, 2009
I was reminded by that bizarre NYT opinion piece that interdisciplinarity is all the rage, but people rarely talk about intradisciplinarity. How big is the discrepancy? Well, Google had about 27,500 documents for me when I searched for intradisciplinary, and over 17,000,000 for interdisciplinary -- over 600:1. (Naturally, Google suggested that I meant interdisciplinary when I searched for intradisciplinary.) On Google Scholar, it's still over 100:1: 4,050 to 697,000.
I'm a big fan of intradisciplinary work. I enjoy working with computer scientists (and EE people, who I'll add in the mix, as we're often working on the same problems) in a range of areas on different types of problems. I've designed my graduate class in networking algorithms to bring together systems and theory, with papers from both sides of the aisle represented. I find one of the benefits of being in a smaller department at Harvard is that it promotes intradisciplinary work, because you talk to a more varied mix of people.
I wish there were more attention given to intradisciplinary work, because from my standpoint, it's important and useful. It's great when computer science can influence the direction of biology, economics, and physics; but I think we also get amazing payoffs when networking people and theory people and machine learning people and architecture people work together too. While it does happen, and non-trivially frequently, thank goodness, on the whole I think the community could do a bit more to promote that kind of work.
Wednesday, April 29, 2009
The ACM Does NOT Support Open Access
UPDATED BELOW
A few hours ago I was cc'ed on some messages to some STOC authors -- well, really to Salil Vadhan -- with the opening statement:
"Dear Authors,
It has recently come to our attention that the "addendum" attached to your signed ACM copyright form is unacceptable (not recognized) with the ACM Copyright Office. Please let us know immediately if you can provide us with a new unaltered signed ACM copyright form..."
What happened? Well, Harvard's Faculty of Arts and Sciences (as of February 2008) has an open access policy; we're supposed to put our publications in a Harvard database that serves as an open-access repository for faculty work, and grant Harvard the right to distribute them. Our Office for Scholarly Communication -- the director of which is computer science professor Stuart Shieber -- has standard boilerplate to add to the standard copyright forms saying, essentially, that we work for Harvard and that Harvard reserves the right to distribute copies of our scholarly work in this open-access repository.
Apparently, the ACM only just noted that Salil had added these addenda (I should point out that the proceedings are going to press pretty much now), and refused them. Salil called to explain and work something out, and was told explicitly that the ACM does not support open access policies. So officially Salil is now directing Harvard to waive the policy for these articles (the policy provides a waiver mechanism) to keep the ACM happy and his papers in the proceedings. I'm sure it will work out, but because Salil is conscientiously trying to follow everyone's official rules, he has an unnecessary headache to deal with. (If there were time, I imagine he'd make more of an effort to find someone in the ACM to approve the Harvard addendum, but again, they're letting him know just as the proceedings have to be produced.)
It is often hard to tell in these situations whether this is REALLY ACM policy or whether the person in the office at the time just assumed the appropriate response was to say that only the original ACM form is acceptable. But this is not acceptable behavior from our professional society. I support the ACM and the many good people who work there, but I can't support such limitations on the distribution of scholarly work.
If you feel similarly, please pass your opinion along, politely, to any ACM higher-up you know. If we don't get their attention, things aren't going to change. Contact information for people on the ACM council and executive committee can be found here. Perhaps, if you feel strongly enough, mailing president@acm.org is a good start.
President: Wendy Hall, president@acm.org
Vice President: Alain Chesnais, chesnais@acm.org
Secretary-Treasurer: Barbara G. Ryder, ryder@cs.vt.edu
SIG Governing Board Chair: Alexander L Wolf, alw@acm.org
Past President: Stuart I Feldman, sif@acm.org
UPDATE
Salil later received correspondence from the ACM, which included the following statement:
ACM Copyright Policy allows authors to retain several rights, many of which are contained in the Addendum, including distribution from authors’ personal and institutional web sites and use in lectures, presentations, etc., however, the right to grant permission for reuse is not among them and several other stipulations run counter to ACM’s established policies and interests.
Apparently, discussions between the ACM and Harvard's office aimed at reaching an agreement are ongoing. Regarding the right to grant permission for reuse, there's some explanation on Harvard's side in the policy FAQ. Salil's response also reinforces that this was not what he was told when he called to discuss the issue:
Thank you for the clarification. It would have helped to hear some of these explanations during our conversation. Individual Harvard authors such as myself have no way of knowing what discussions are underway between ACM and the Office of Scholarly Communication, and do not appreciate being told to use the waiver option with no explanation beyond being told that our professional society does not support open access.
Tuesday, April 28, 2009
The Taylor Editorial
My students and colleagues today pointed me to this New York Times editorial by Mark C. Taylor, the chairman of the religion department at Columbia, that begins with "Graduate education is the Detroit of higher learning." As pointed out by my students and by my colleague Matt Welsh on his blog, a lot of his arguments seem to break down when viewed from the perspective of computer science and engineering departments -- where, generally, we do try to train students to possibly do something besides becoming an academic -- and it's fun to deconstruct his article with this in mind. In particular, let's look at his 6 steps to fixing American higher education.
1. Restructure the curriculum, to become more cross-disciplinary.
I think CS (and in particular theory) does a very good job with this already, thank you, what with algorithmic game theory, quantum computing, the study of social networks, etc. And outside of theory I can point to colleagues like Matt Welsh using sensors to monitor volcanoes or Radhika Nagpal studying cell behaviors as multi-agent systems as further examples. (Yes, I know David Parkes and Yiling Chen are also obvious choices of cross-disciplinarity in action...)
One thing I think people (or perhaps just chairs of religion departments) generally fail to understand when saying education should become more cross-disciplinary is that before trying to do cross-disciplinary work, it is, in my opinion, extremely beneficial to actually be an expert in (at least) one field, so that one has a base to work from. The corollary is that you can't just erase traditional structures and expect wonderful things to suddenly bloom.
2. Abolish permanent departments and create problem-focused programs.
Again, my opinion (and to be clear, it's an opinion) is that while problem-focused programs have their place, you need a solid base to work from. In his own example, to handle the problems with the future of water, he suggests bringing together people from the humanities, arts, social sciences, natural sciences, law, medicine, and business to deal with the problem in the large. That's great, but it implies you need experts in these areas in the first place, which means you still need a great deal of specialized training.
3. Have smaller, more focused institutions; use technology to offer best-of from all over.
This is certainly an area for exploration, though I know it's the subject of wide debate whether distance education methods are as good as "being there". Certainly, many universities are already doing this to various degrees. I don't think we know all the answers on how to best make use of technology in this way yet, although our understanding will keep getting better.
4. Transform the traditional dissertation.
I don't think he explains exactly what it should transform into, but he seems here to be speaking to humanities people whose dissertations are essentially books nobody ever reads. Computer science dissertations are rarely read, but generally they're a collected form of papers that hopefully some people have read. I think we're fine here.
5. Expand the range of professional options for graduate students.
Good idea. CS graduates can, as again my colleague Matt Welsh pointed out, go into work in academia, research labs, industry, and government -- never mind entrepreneurial opportunities. (As a whole, theorists are perhaps a bit behind in this regard -- we really should make sure our graduates obtain some practical CS skills as well -- but apparently we're much better off than our other university counterparts.)
6. Abolish tenure.
In CS, tenure has always seemed less about "academic freedom" (a common argument for it in the humanities) and more about providing a perk (commensurate with academic traditions) to make up for the generally lower pay scale versus industry. And it's a perk that demonstrates its value in periods like the current economy. What would abolishing tenure do to the university system? Who knows. It seems a situation ripe for the Law of Unintended Consequences, and I'm loath to predict whether it would be a good or bad thing. Mark C. Taylor, naturally, is more confident of the benefits.
For several other criticisms of this editorial, there are plenty of comments at the NY Times site. I know these anti-university diatribes come out from time to time, but it's sad to see such a poorly argued one. While it's beneficial for universities to be self-reflective, in this case the article just made me wish for some clearer, more rational, dare I say more SCIENTIFICALLY thought-out criticism.
Thursday, April 23, 2009
Will Network Coding Become Part of the Network? A Class Experiment
This week in my graduate class on networking algorithms, I asked my class to focus on the question of whether network coding will become a part of "the network" (whatever you choose that to mean). Given the wealth of papers that have appeared, it's clear the academic community has embraced the notion of network coding, and there's certainly lots of interesting theory. But I don't know of what I would call a significant real-world deployment -- will we see one in the next decade or so? This is, to me, an important question -- certainly I'd be more inclined to work on network coding problems if I thought there was potential for real-world impact. The class has read a few current papers, and I wondered what they would think.
Interestingly, the class pretty much split down the middle, with perhaps a slight bias toward no. The main reasons cited against the development of practical network coding were that there didn't seem to be an immediate killer application, and that the gains, while non-trivial, weren't enough to outweigh many of the inherent problems (network support, backward compatibility, etc.) in deploying network coding. A common example given was that there are many challenges in mesh/ad hoc networks, and network coding (while useful) wasn't solving the really important ones. On the more optimistic side, several students saw situations (wireless mesh networks, "closed" networks where control is centralized so network coding can be set up easily) and possible important issues (can network coding improve energy usage?) that could lead to more widespread network coding use in the years to come.
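For readers who haven't seen network coding in action, here's a minimal sketch of the canonical XOR example (packet contents and names are made up for illustration; this isn't from any of the papers we read). A relay combines two packets into a single broadcast, and each receiver recovers the packet it's missing using the one it already has:

# Two sources send packets pkt_a and pkt_b through a shared relay.
# The relay broadcasts pkt_a XOR pkt_b once instead of forwarding each
# packet separately; a receiver holding one packet XORs to get the other.

def xor_packets(p1: bytes, p2: bytes) -> bytes:
    """XOR two equal-length packets byte by byte."""
    return bytes(x ^ y for x, y in zip(p1, p2))

pkt_a = b"packet-A"
pkt_b = b"packet-B"

coded = xor_packets(pkt_a, pkt_b)   # the relay's single coded broadcast

assert xor_packets(coded, pkt_a) == pkt_b   # receiver 1 already has pkt_a
assert xor_packets(coded, pkt_b) == pkt_a   # receiver 2 already has pkt_b

In the classic butterfly topology, this one coded transmission replaces two plain forwards -- the kind of gain that's real but, as the class noted, perhaps not large enough by itself to justify changing the network.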
The discussion reminded me of a short survey I once wrote up on "Digital Fountains" (and hey, when Googling I see the powers-that-be even posted my slides), where, besides discussing their applications, I asked why they weren't already in widespread use. The issues sound familiar.
I'd be happy to have others offer their opinions to my class on this issue. Feel free to comment on whether you think we'll see real-world deployments of network coding systems in the future (or let me know of any in existence now!), and what you see the challenges being for network coding going from an interesting idea to a deployed solution.
Wednesday, April 22, 2009
Two Talks : Savage and Upfal
Today was a busy day with talks.
Stefan Savage of UCSD gave a talk at Harvard's Center for Research on Computation and Society on his work on Spamalytics, which is really about the economics of botnets. Essentially, they "infiltrated" a botnet in order to (mostly passively) monitor its behavior and learn what these networks actually do and what their economic potential is. It's fascinating both from an engineering perspective (how do you do it) and an economics perspective (what do you learn about the behavior of the participants), and should guide both anti-bot technical efforts and policy. While it's not clear to me there's much in the way of "science" in the pure sense of the word in this work, I liked Stefan's analogizing this work to anthropology: the goal here is to study what's going on and learn the relationships among the actors.
I look forward to cornering Stefan sometime and hearing more about the "issues" that arose with this work -- somehow, the FBI kept coming up at points in his talk, but he didn't really have time for the details. (He did seem to state they're more in touch with the FBI now than initially -- to help make sure the FBI doesn't mistake them for the botnet!) What's interesting in my mind is there must be more projects like this where the government-powers-that-be would both like and benefit from active research into the misuse of computer systems and related computer security problems. How can that cooperation be fostered, in a way that maintains the academic goals like publication and dissemination of the knowledge learned? I'm not sure it works by government agencies initiating the project; it seems it would have to start the other way around, as this project did. But I don't envy the time Stefan (or his team) must have spent with lawyers making sure they weren't breaking the law because they weren't working under government supervision.
[The question I didn't get to ask Stefan: what grant do you use to cover "legal expenses" for projects like this? Can that be an NSF line item, or did the corporate donations cover that part?]
The second talk of the day was Eli Upfal visiting MIT to talk about his work on multi-armed bandit problems (see the paper list here). His variations were all nicely motivated by related problems for search engines, specifically matching ads to web pages. (I recall hearing about these motivations when we were both visiting Yahoo! Research, so they resonated with me.) The variations include when the bandits are mapped to a metric space and their values satisfy a Lipschitz condition, when the bandits' values can change over time (specifically, the mean changes according to a Brownian motion process), and when the useful lifetime of a bandit is given by a stochastic distribution. The talk was at the opposite extreme from Stefan's -- very theoretical, with a focus on both upper and lower bounds and the techniques behind them. I had thought of multi-armed bandits as a fairly well-mined area of research, so it was interesting to see multiple novel, well-motivated examples -- suggesting there are plenty of interesting questions left in this area.
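For readers who want the baseline these variations build on, here's a minimal sketch of the standard UCB1 strategy for the basic stochastic bandit problem (the arm means are made up for illustration; these are not the algorithms from Eli's talk, just the classical starting point):

import math
import random

def ucb1(arm_means, rounds=10000):
    """Play a Bernoulli bandit with UCB1; return the total reward."""
    n = len(arm_means)
    counts = [0] * n      # times each arm has been played
    totals = [0.0] * n    # total reward collected per arm
    reward = 0.0
    for t in range(1, rounds + 1):
        if t <= n:
            arm = t - 1   # play each arm once to initialize
        else:
            # choose the arm maximizing empirical mean plus exploration bonus
            arm = max(range(n), key=lambda i:
                      totals[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = 1.0 if random.random() < arm_means[arm] else 0.0  # Bernoulli reward
        counts[arm] += 1
        totals[arm] += r
        reward += r
    return reward

print(ucb1([0.2, 0.5, 0.55]))  # plays should concentrate on the 0.55 arm

The variations in the talk change exactly the assumptions this sketch bakes in: that the arms are unstructured, that the means are fixed over time, and that arms live forever.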
Tuesday, April 21, 2009
Conference Sign Ups : STOC
Just a reminder that the early registration cutoff for STOC is approaching -- April 28. Go on and register! Having done local arrangements, I know all too well how important it is to have a solid count as early as possible; let them know you're coming. And you won't want to miss it (and ancillary activities such as Valiant day).
In other news, a reminder that NSDI is in town this week.
Saturday, April 18, 2009
SIGCOMM PC, Post-Analysis
1) Setting a new high bar for local arrangements, outside the conference room was a cappuccino/espresso machine, and (for most of the time) a barista just for the PC.
2) There seemed to be widespread agreement that the quality of submissions this year was not as high as in previous years. (Just my luck...) Whether this reflects reality or we were all just very grumpy is open to interpretation. (I do not think we were grumpy.)
3) Because of this feeling about paper quality overall, there will be, I believe, fewer accepted papers this year than in the last few years.
4) I was amused to see that when the papers were classified into groups, there was a whole category labelled "theory". Theoretical SIGCOMM papers generally present new and interesting algorithms or data structures for applications, but still generally require an implementation demonstrating the effectiveness of the idea in practice (or a suitable simulation). Overall, theory papers seem to do reasonably well at SIGCOMM. George Varghese is the master of writing such papers, if you are looking for an example to emulate.
5) Indeed, there seemed to be some enthusiasm on the committee for more openness to theoretical work, which seemed in line with my open complaint to networking/systems people. There may be a push (I'll be pushing!) to aim for a "cool algorithm/data structure implementation tricks and ideas" session next year. (The hard part of this is writing down the right criteria for such papers... clear practical utility in a network setting being what I'd aim for.)
6) The 5-point scale did seem to have its problems. There was the usual problem that people did not work with the scale appropriately/treat it consistently. The other problem was, because of the impression there were few strong submissions, there were very few 5's and comparatively fewer 4's than usual, effectively collapsing the scale. I'm not sure the 5-point scale itself is to blame for these problems.
7) The 5-point scale was very effective for initially dismissing a lot of papers quickly.
8) Probably because there are fewer papers at the PC meeting to deal with, there seems to be more intense discussion of papers overall and in particular of controversial papers at the SIGCOMM PC as compared to say the STOC PC. Also, more people on average are able to (and more than willing to) give an opinion on any given paper.
9) While there was plenty of discussion, there were no real fights -- at least while I was in the room. Indeed, the sharpest discussion -- all about what is expected of a SIGCOMM paper -- was probably instigated by me, regarding a more theoretical paper, leading to a discussion of what exactly is expected in terms of evaluation of an interesting idea for SIGCOMM. Again, I'm hoping this discussion might lead to a special session of possibly shorter papers with useful algorithmic tricks for networking people... though we'll have to see if enough such papers actually exist!
10) The PC dinner involved, among other food, an entire roast boar.
Overall, a very interesting PC experience.
Friday, April 17, 2009
Theory and Many Cores Reflection
Since I'm at a lull point at the PC meeting, I'll write another post.
There's a call for a Workshop on Theory and Many-Cores at the University of Maryland the Friday before STOC. (Remember the Saturday before is Valiant's birthday celebration.) There's been some discussion in various blogs (as I note below) about it, so before discussing it, I'll repeat part of the call:
The sudden shift from single-processor computer systems to many-processor parallel computing systems requires reinventing much of Computer Science (CS): how to actually build and program the new parallel systems. Indeed, the programs of many mainstream computer science conferences, such as ASPLOS, DAC, ISCA, PLDI and POPL are heavily populated with papers on parallel computing and in particular on many-core computing. In contrast, the recent programs of flagship theory conferences, such as FOCS, SODA and STOC, hardly have any such paper. This low level of activity should be a concern to the theory community, for it is not clear, for example, what validity the theory of algorithms will have if the main model of computation supported by the vendors is allowed to evolve away from any studied by the theory. The low level of current activity in the theory community is not compatible with past involvement of theorists in parallel computing, and to their representation in the technical discourse. For example, 19 out of 38 participants in a December 1988 NSF-IBM Workshop on Opportunities and Constraints of Parallel Computing in IBM Almaden had theory roots. The lack of involvement of theorists should also concern vendors that build many-core computers: theorists are often the instructors of courses on algorithms and data-structures, and without their cooperation it will be difficult to introduce parallelism into the curriculum.
The main objective of the workshop will be to explore opportunities for theoretical computer science research and education in the emerging era of many-core computing, and develop understanding of the role that theory should play in it.
I view this call as an attempt at strong advertising -- this area is important, lots of other areas are working on it, it could be core for the future -- but it seems to have brought some strong negative reactions.
Both Mihai Patrascu and Bill Gasarch have blogged about this. Mihai's post is very sarcastic and (although he clarifies his point in the comment) is not very clear, but his point seems to be that this is something theorists should but won't get involved with, having worked too much in this area in the past (PRAM algorithms) with little success. And there are what I take as negative comments on Bill's post, which seem to say "shouldn't the systems people be explaining to us why this is interesting" and even that theory should be independent of what's actually going on in CS.
I don't think the advertising is over the top at all. There is an unfortunately-too-small collection of theory-oriented people who like working on problems and algorithms that real people might use. Multi-core seems to be a rapidly growing area, with potential for interesting challenges. The fact that years ago people got over-excited about what in retrospect (or even at the time) turned out not to be a suitable-for-practice set of models doesn't mean there's no interesting work to be done now. It won't appeal to all theorists, and perhaps this sort of work doesn't meet some (Mihai's?) definition of "fundamental work theorists should be doing". But when there's a burst of activity in an area of CS, theory as a community should be there if it can contribute. Exploring the possibilities here is just a good idea. I don't get the negativity, but I do see it as another example of a subset of theorists pushing a separation of CS theory from the rest of CS that's ultimately unhealthy, in terms of our relevance (and related things like funding).
And let's face it -- what's going to happen in multi-core architectures and languages is likely going to happen with little input from theory (and could happen without theory input at all). Does that mean we can't make meaningful contributions or find interesting new problems? Of course not. But if we believe this is the direction computing is going to move (and I hear that an awful lot these days), it does mean that for the theory community to sit around waiting for systems people to come asking for input is not a good idea -- unless the community wants to highlight a lack of relevance and interest in actual computer science as it's being practiced.
Talk on Codes for Flash Memory
While in London for the SIGCOMM PC meeting, I gave a talk at University College London on some new results on codes for flash memory. Here are the slides. Although it was a "theory" talk to a "systems" audience, it seemed to go well, I think because people are generally interested in flash memory and how it works.
As a reminder (also discussed at this old post) flash memory is strange. As a simplified model, you have blocks of cells, where each cell can take on a value in the range 0 to q-1 (where q is currently pretty small -- say 4, but seems to be growing with technology), and a block can consist of thousands (or more) of cells. Reads of cells are fast. When writing, you can INCREASE the value of a cell, but in order to decrease any cell value, you first have to ERASE (set back to 0) the entire block. And erases are bad -- it wears out the memory, and the number of erases is the primary factor in the memory lifetime.
So, for example, if you have a block of 6 cells, and q = 4, it's "easy" to make the change
0 2 1 3 1 2 -> 2 2 1 3 1 2
but "hard" to make the change
0 2 1 3 1 2 -> 0 2 1 1 1 2
since you'd have to erase to change that 3 back to a 1. In this setting, when talking about a code, I was not talking about error-correction, but rather data representation: how can we use blocks of cells to represent data effectively while minimizing erasures.
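To make the model concrete, here's a toy simulation of a single block under these rules -- a sketch of the simplified model above, not of the codes themselves:

class FlashBlock:
    """Toy model: cells hold values in 0..q-1, writes may only increase
    a cell, and any decrease forces an erase-and-rewrite of the block."""
    def __init__(self, num_cells: int, q: int):
        self.q = q
        self.cells = [0] * num_cells
        self.erases = 0   # the quantity that determines memory lifetime

    def write(self, index: int, value: int):
        if not 0 <= value < self.q:
            raise ValueError("cell value out of range")
        if value < self.cells[index]:
            # Decreasing a cell means erasing the whole block (back to all
            # zeros) and rewriting the desired contents -- the costly step.
            desired = list(self.cells)
            desired[index] = value
            self.cells = [0] * len(self.cells)
            self.erases += 1
            for i, v in enumerate(desired):
                self.cells[i] = v
        else:
            self.cells[index] = value

block = FlashBlock(num_cells=6, q=4)
for i, v in enumerate([0, 2, 1, 3, 1, 2]):
    block.write(i, v)
block.write(0, 2)    # the "easy" change above: no erase needed
block.write(3, 1)    # the "hard" change: forces a block erase
print(block.erases)  # 1

A good code for flash is then one that arranges the data so that most updates look like the first write, not the second.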
The talk had an overarching question, and some specific results.
1) The big question: how does flash memory change things at the scientific level? Should we be designing algorithms and data structures specifically for flash? If so, what should they look like? Similarly, what about memory hierarchies that utilize flash at one or more levels? [This seemed to be the part that interested the systems people.]
2) Some deterministic, worst-case codes: this work was primarily the senior thesis work of Hilary Finucane, who is heading to Weizmann next year. We've made it available as a Technical Report which can be found here.
3) Average-case codes. We've actually extended our previous analysis from Allerton and submitted a journal version; at some point I'll put it up as a preprint (but mail me if you're interested).
I think flash memory is a potentially interesting area that seems to have slipped in under the radar of many people. After getting into it by way of codes, I'm currently trying to learn more and see if there really are worthwhile algorithmic questions there.
Thursday, April 16, 2009
Blogging from the SIGCOMM PC
I don't think I'm revealing any big secret by saying I'm currently at the SIGCOMM PC meeting. So far, it's what you'd expect -- a crowded room of 30+ people, hashing out what papers we actually want to accept. Lots of discussion, some of it contentious, but no big fights (yet).
I'm realizing that a possible downside of bigger PCs (which I think theory conferences could use) is that just logistically it's harder to have over 30 people in a room having these sorts of discussions. It's harder to hear and see everyone. When it's your turn to talk, talk loud.
An interesting logistic difference is that we're discussing papers in numerical order by topic, not ranked by score. I'm still absorbing how well this works. (I've found it nice in other PCs to start at the top and bottom and quickly accept and reject "easy papers". This is different.)
People are removed from the room for conflicts. There are papers where I can't see the reviews or anything beyond the paper title. Yet we soldier on. Seriously, this is not a big deal. Being outside is a good chance to talk to people. There are plenty of non-conflicted people to offer opinions, of course, so the issue brought up by theory people that conflicted people need to be around so you have enough knowledgeable people doesn't arise. (Of course, I think it comes up much less in theory than people argue.)
After having chaired STOC, it's very nice to sit back and participate in a PC meeting without being a chair. I even have time to blog about it.
Wednesday, April 15, 2009
The Town Hall Meeting
I went to part of the "Town Hall Meeting" about Harvard's gloomy financial situation yesterday; some news writeups are available with more details.
I don't often see Mike Smith since he took leave of the Computer Science faculty to become Dean of the Faculty. And when I do, it's not usually in his Dean capacity. I have to say, I think he did a very good job with a bad situation yesterday. Mike's demeanor is generally calm, collected, composed -- he comes off as reassuring in the face of crisis. (Yes, I know the same is said of our still-new President -- the style is somewhat similar on its face.) I must admit I'm also reassured knowing there's a technical, engineering-oriented person up there who can deal with the numbers and tradeoffs -- somebody who just knows what it means when told something like you have to cut 10% from the budget and 50% of the budget is fixed. Finally, I thought he answered questions well. I've heard some thought he was "evasive", but I think he simply avoids giving details when he's not sure what the details are yet, or doesn't have the facts at hand. And he answers questions thoughtfully and respectfully, which wasn't the case with some previous higher-ups in the administration....
The situation is still a bad situation. But I, at least, have some confidence in the people who are supposed to be managing it.
Tuesday, April 14, 2009
Harvard Happenings
After I get done with teaching today, there's a lot going on around Harvard.
First, we will be having a reception to welcome our newly incoming Dean of the School of Engineering and Applied Sciences, Cherry Murray. I don't know much about her, unfortunately, but if anyone does and wants to send me e-mail with information, feel free. I admit, my major concern with a new Dean coming in is the question of how we're going to grow Computer Science -- and related fields, including Applied Math and Electrical Engineering -- at Harvard, particularly in the face of the financial crisis. I'd like to think it's clear we have the need, and I'd like to hear when we think we're going to have the means.
After that, this afternoon there will be a "town hall meeting" (as opposed to a faculty meeting) to discuss the dire financial situation at Harvard. Endowment funding is expected to drop by 8% next year, much more than originally expected. Staff layoffs are looking imminent -- perhaps some will be announced at this meeting. In short, the budget situation is grim. As an example that's both in some sense absurd and appropriate, next year for our regular Computer Science faculty lunch meeting, we'll have to brown bag it. As a perhaps more compelling example, the average student-to-Teaching-Assistant ratio is going to have to go up substantially next year in computer science. (We've had an embarrassingly luxurious TA-to-student ratio, so I believe this change will only take us to something that most would consider normal, but it will still be a change for us.) The problem at a university is that so many costs are essentially fixed -- primarily staff costs -- that there's very little in the class of optional things that can be cut substantially. Until the economy turns around, I expect more grim news -- at town hall meetings or otherwise -- ahead.
Monday, April 13, 2009
New Preprint: An Analysis of Random-Walk Cuckoo Hashing
Some people have asked if being the chair of STOC destroyed my work life for several months. Not really. It certainly was work, and like other administrative duties, it took time away from research. But it didn't completely obliterate my research time. (Arguably, it had less effect than, say, having a new baby daughter...)
So here's an example of new research output: An Analysis of Random-Walk Cuckoo Hashing, with Alan Frieze and Pall Melsted. For background, you can read one of my earlier posts on cuckoo hashing. The question we were aiming to answer is what I consider to be one of the big ones left for cuckoo hashing: why does the random-walk approach -- where, when you have to kick out an element, you choose one randomly and keep going -- work so well? The natural intuition is that in such a setting an insertion of a new element should take at worst logarithmic time with high probability -- each time an element is kicked out to make room for another element, it should have an empty space available with constant probability. But nobody has been able to make that intuition stick.
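For concreteness, here's a minimal sketch of the insertion procedure in question. The table size, the number of choices d, and the salted-hash trick are illustrative stand-ins; the analysis assumes fully random hash functions.

import random

class RandomWalkCuckoo:
    """Cuckoo hashing with d choices and random-walk insertion."""
    def __init__(self, size=1024, d=3, max_steps=500):
        self.size, self.d, self.max_steps = size, d, max_steps
        self.table = [None] * size

    def buckets(self, item):
        # d candidate buckets per item (salted hashes as a stand-in)
        return [hash((item, i)) % self.size for i in range(self.d)]

    def insert(self, item):
        for _ in range(self.max_steps):
            choices = self.buckets(item)
            empties = [b for b in choices if self.table[b] is None]
            if empties:
                self.table[random.choice(empties)] = item
                return True
            # All d choices occupied: kick out a random occupant and
            # continue the walk with the evicted item.
            b = random.choice(choices)
            item, self.table[b] = self.table[b], item
        return False   # gave up; in practice one would rehash

table = RandomWalkCuckoo()
for x in range(700):   # roughly 0.68 load, comfortably below the d=3 threshold
    table.insert(x)

The question is exactly how many iterations of that eviction loop an insertion takes.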
We obtained a polylogarithmic bound on the time for insertion with random walk hashing. The main idea is to use a 2-step approach: show that the set of items "close" to an empty space (within an augmenting path of length O(log log n) of an empty space) is reasonably large, and then show that a random walk reaches one of those items quickly (polylogarithmic number of steps). Once we get to a close item, there is an inverse polylogarithmic probability of taking the augmenting path to get to the empty space, so breaking it up in this way gives us the desired bound.
It takes a fair bit of analysis of the cuckoo graph to get this result (the problem has been open for a surprisingly long time). There's still plenty of room to work in both directions -- a better upper bound on one side, and on the other the question of whether a super-logarithmic lower bound is possible for some number of choices per item -- but it was gratifying to make progress on what is turning out to be a much more challenging problem than people might originally have expected.
Wednesday, April 08, 2009
ICALP paper list
The ICALP paper list is posted. It looks like a great set of papers; I encourage the authors to make them available on-line as soon as possible so those of us who can't travel the distance can have a look.
Some interesting titles (showing my bias):
Applications of a Splitting Trick by Martin Dietzfelbinger and Michael Rink: I tend to use the word "trick" when describing the key idea of a proof, although I was consistently told in graduate school not to. (I was told that calling something a trick makes it sound less like a mathematically important idea. I admit I don't interpret "trick" that way -- to me, in this context, a trick is exactly a clever, useful, and mathematically important idea -- but I've generally avoided the term.) I want to know what the trick is.
A Better Algorithm for Random k-SAT by Amin Coja-Oghlan: I have a weakness for (random) SAT algorithms.
Sort Me If You Can: How to Sort Dynamic Data by Aris Anagnostopoulos, Ravi Kumar, Mohammad Mahdian and Eli Upfal: I don't know what that means, but the title grabbed me.
Multiple Random Walks and Interacting Particle Systems by Colin Cooper, Alan Frieze and Tomasz Radzik
AND
Derandomizing Random Walks in Undirected Graphs Using Locally Fair Exploration Strategies by Colin Cooper, David Ilcinkas, Ralf Klasing and Adrian Kosowski: I just gave my Algorithms + Data Structures lecture on the random-walk 2-SAT algorithm, so random walks are on my mind. And of course I have a weakness for random walk algorithms.
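(For anyone who hasn't seen it, here's a quick Python sketch of that random-walk 2-SAT algorithm -- the classic Papadimitriou algorithm. The code and its input convention are my own illustrative rendering.)

import random

def random_walk_2sat(clauses, n, max_steps=None):
    # Papadimitriou's random-walk algorithm for 2-SAT. Literals are
    # nonzero ints: +i means variable i is True, -i means it is False.
    # Returns a satisfying assignment (as a dict) or None if none was
    # found within the step budget.
    if max_steps is None:
        max_steps = 2 * n * n   # the walk hits a satisfying assignment
                                # in O(n^2) expected steps
    assign = {v: random.random() < 0.5 for v in range(1, n + 1)}

    def sat(lit):
        return assign[abs(lit)] == (lit > 0)

    for _ in range(max_steps):
        falsified = [c for c in clauses if not (sat(c[0]) or sat(c[1]))]
        if not falsified:
            return assign
        # The random-walk step: pick any falsified clause and flip one
        # of its two variables uniformly at random.
        lit = random.choice(random.choice(falsified))
        assign[abs(lit)] = not assign[abs(lit)]
    return None

For a satisfiable instance, a budget of 2n^2 steps succeeds with probability at least 1/2 (Markov's inequality applied to the hitting time), so repeating the whole procedure drives the failure probability down geometrically.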
Monday, April 06, 2009
Geography, and Where to Send
This week I'm finishing up a couple of papers for submission, though I'm having trouble figuring out where to send them. Both ESA and APPROX/RANDOM have deadlines on April 12. Then WADS has a deadline of April 24. (Update: my bad, I misread; that's the date when acceptances are announced, not the deadline!) Actually, it looks like WADS and APPROX/RANDOM are scheduled on the same dates, and even though APPROX/RANDOM is at Berkeley, Christos Papadimitriou and Richard Karp are apparently invited speakers at WADS, so I probably wouldn't get to see them. Arguably one of the papers is big enough that it could be worth waiting for SODA, but my co-authors seem disinclined. I'd also like to submit both to the same place to minimize possible travel, which again gives the edge to APPROX/RANDOM. (Berkeley's not close to me, but my co-authors may end up going, and if I do go, there are always enough things for me to do in the Bay Area to make the trip worthwhile.) I'm sure there are other deadlines I'm missing that someone can tell me about in the comments.
While it's nice to have lots of possibilities, all this makes me feel that our community has perhaps a few too many smaller conferences and workshops. When multiple venues end up scheduled on the same dates with some frequency in our relatively small community, it seems like we ought to consider whether this is the right setup.
Wednesday, April 01, 2009
STOC schedule
The Local Arrangements Team has put the preliminary schedule up at the STOC Web site. I hope it helps everyone make their travel plans.
Since it's (more-or-less) apparent from the schedule, I suppose I'll announce here that the Best Paper Prize will be shared by:
A Constructive Proof of the Lovasz Local Lemma
Robin A. Moser
Public-Key Cryptosystems from the Worst-Case Shortest Vector Problem
Chris Peikert
The former also was awarded the Best Student Paper. These talks will be given without a parallel talk. (Who wants to give a talk against an award paper?)
Please don't tell the authors, as I haven't let them know yet. Perhaps, if we all keep quiet about it, they'll be surprised at the conference.