Tuesday, November 26, 2013

There's been plenty of interesting stuff popping up in the news, so it's time to collect a bit.
This article describes a current patent case in cryptography, where apparently Ron Rivest testified (by deposition) and Whitfield Diffie took the stand. While I can't help but wonder if the article has dramatized the proceedings a bit, it does give some idea of what patent trials are like. I found it interesting both legally and technically. Anyone interested in the legal/expert witness side of technical work should find it worthwhile.
Update: The case was decided, as Suresh notes.
In other legal news worth noting, Google Books was found to be fair use. The decision is online (and available at the link) -- I'd definitely recommend reading it if you're interested in fair use and the legal framework for how it is currently (at least as of this decision) understood.
Harvard's CS50 course just made the Boston Globe business section. I'll quote the conclusion from the new boss, David Parkes: “There’s a new willingness among the student body to take risks, to not follow what has been the default path of going into medical school or going into finance,” said David Parkes, Harvard’s dean for computer science. “I think part of it is that students are seeing a new way to contribute to society through computer science.”
Michael Nielsen put up a chapter of the latest book he's working on, about neural networks.
I found myself more interested than I expected when reading the article How Academia Resembles a Drug Gang. The issue of PhD overproduction is already well known, but this brought in a new dimension for me with the discussion of "dualisation" -- employment systems with an "insider" class enjoying unusually high benefits (supposedly, from the article, tenured professors) and a large "outsider" class of people trying to get on the inside and supporting the insiders' lifestyle (from the article, untenured part-time faculty and PhD students). Probably worth thinking about in terms of trends in our field, but also more generally; I'm now curious about the economics. Dualisation doesn't seem like it would lead to long-term stable systems -- what's the model?
I'll have to pay more attention to the news myself these days. I was asked to serve on the Communications of the ACM editorial board, news division, and couldn't come up with a suitable reason to decline.
Finally, a question. 'Tis the season to purchase some new machinery. So is a retina display for my next laptop a must-have, not worth it, or somewhere in between? I'd appreciate opinions.
Monday, November 18, 2013
Easy Now
In the past year or so, I've gotten reviews back on multiple papers with the complaint that the result was too simple. My interpretation of the reviews: sure, you might have come up with an efficient data structure that solved a problem that at least seemed interesting and/or useful, but wasn't the underlying math and/or data structure approach "too easy"? (The reviews were either from theory conferences or -- my interpretation -- from theoretically minded reviewers in settings where there might have been a mix of reviewers.) I'll admit, it's a bit disheartening. And I'll also acknowledge that blogging about it probably seems like sour grapes. But here it goes.
From my standpoint, easy is a plus, not a minus. I've taken to describing it this way. In computer science we're used to measuring algorithms and data structures in terms of the two basic tradeoff costs -- time and space. The third basic cost -- error -- takes some people a while to get used to. That is, your algorithm can be "wrong" (either probabilistically, or in that it gives an approximate answer) in some hopefully well-defined way in order to trade off against time and space needs; many times you're willing to settle for a good answer instead of the optimal answer if it's much quicker. But there's also a fourth basic tradeoff cost -- programmer time -- that I find the theory community is happy to just ignore. Simple is good: more people will use simple things, even if they're not the most efficient possible, because often time/space efficiency isn't really the bottleneck. Coding up something that works is. This is why Bloom filters show up in most of my talks (Suresh, drink!); for me they provide an outstanding example of the issues around the third and fourth tradeoff costs.
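To make that concrete, here is a minimal Bloom filter sketch in Python -- my own illustrative code, not from any particular paper; the class name, default parameters, and use of salted SHA-256 are just choices for the example, and real implementations would use faster hashes:

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: a bit array plus k hash functions.

    Never reports a false negative; may report a false positive,
    at a rate controlled by the bit-array size and hash count.
    """

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        # (Fine for illustration; production code uses faster hashes.)
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        # True means "probably in the set"; False means "definitely not".
        return all(self.bits[pos] for pos in self._positions(item))


if __name__ == "__main__":
    bf = BloomFilter()
    for word in ("time", "space", "error"):
        bf.add(word)
    print("error" in bf)            # True
    print("programmer time" in bf)  # almost surely False
```

With m bits, k hash functions, and n inserted items, the false positive probability is roughly (1 - e^{-kn/m})^k, so you can dial error against space -- and the whole thing is simple enough to code correctly in one sitting, which is exactly the fourth-cost point.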
But I was inspired to go ahead and post something about this because of the following from Paul Krugman's blog today. (His title is "The Power of Two" -- if that's not a sign, what is?) I figured he says it (everything?) better than I could, though you'll have to substitute CS keywords for economics keywords below.
If this strikes you as too easy, and you think that real economics should involve harder math, well, I feel sorry for you — you just don’t get what it’s all about. (You’re what Rudi Dornbusch used to call a “fearful plumber”). And by the way, coming up with a really simple formulation of what seems at first like a very hard problem can take a lot of work. It sure did in the case of the MF lecture, where I actually did start with a really ugly saddle-path thingie until I realized that formulating the sudden stop the right way would make all of that go away.
Simple doesn’t mean stupid. Thinking that it does, does.