Stuck on a flight home, I passed the time reading a hotel copy of the Wall Street Journal, and found more interesting things in it than I would have expected.
What first caught my eye was an interesting piece by Jordan Ellenberg, a math professor who also writes a lot about mathematics for popular consumption, about The Wrong Way to Treat Child Geniuses. Especially for any readers who did any of those science/math programs as a child (notably Johns Hopkins/SMPY) -- but not exclusively -- it's definitely worth a read.
There were also an interesting article about Ray Kurzweil and an amusing article entitled Ticket to Dine: The Restaurant Reservation Revolution. The latter described a business model that is simultaneously disturbing and why-didn't-I-think-of-it -- using bots to snap up reservations at popular restaurants on OpenTable, and then selling/scalping the reservations. Evil, or genius? You decide....
Saturday, May 31, 2014
Wednesday, May 28, 2014
ITCS 2015
I was asked to announce the call for ITCS. Here is a link to the call for papers, with the most important info:
ITCS (previously known as ICS) seeks to promote research that carries a strong conceptual message (e.g., introducing a new concept or model, opening a new line of inquiry within traditional or cross-interdisciplinary areas, or introducing new techniques or new applications of known techniques). ITCS welcomes all submissions, whether aligned with current theory of computation research directions or deviating from them.

Important Dates
Paper Submission Deadline: Friday, August 8, 2014, 5 PM PDT
Apparently in some fit of weakness I agreed to be on the PC.
I think an ongoing question about ITCS is how well it lives up to its "mission statement". Part of the question is whether ITCS is necessary -- do FOCS/STOC/SODA not do a sufficiently good job of accepting "conceptual" papers? -- and whether it is sufficient -- is it really focusing on accepting conceptual papers, or is it just the same old? So I guess I'll be able to look under the hood to see how it's going.
My current/upcoming PCs have been SIGCOMM and CoNEXT (2014), and ITCS and ICALP part C (2015). I'm feeling happily diverse in what I get to read.
Wednesday, May 21, 2014
What's Important in Algorithms
I saw that this interesting article, 20 Questions with Don Knuth, is up; it's worth reading just for the fun of it. But the following question about work in algorithm design and analysis particularly interested me, and I found myself nodding especially hard at the statement:
Thus I think the present state of research in algorithm design misunderstands the true nature of efficiency. The literature exhibits a dangerous trend in contemporary views of what deserves to be published.
But here's the whole question/answer for context.
15. Robert Tarjan, Princeton: What do you see as the most promising directions for future work in algorithm design and analysis? What interesting and important open problems do you see?
Don Knuth: My current draft about satisfiability already mentions 25 research problems, most of which are not yet well known to the theory community. Hence many of them might well be answered before Volume 4B is ready. Open problems pop up everywhere and often. But your question is, of course, really intended to be much more general.
In general I'm looking for more focus on algorithms that work fast with respect to problems whose size, n, is feasible. Most of today's literature is devoted to algorithms that are asymptotically great, but they are helpful only when n exceeds the size of the universe.
In one sense such literature makes my life easier, because I don't have to discuss those methods in TAOCP. I'm emphatically not against pure research, which significantly sharpens our abilities to deal with practical problems and which is interesting in its own right. So I sometimes play asymptotic games. But I sure wouldn't mind seeing a lot more algorithms that I could also use.
For instance, I've been reading about algorithms that decide whether or not a given graph G belongs to a certain class. Is G, say, chordal? You and others discovered some great algorithms for the chordality and minimum fillin problems, early on, and an enormous number of extremely ingenious procedures have subsequently been developed for characterizing the graphs of other classes. But I've been surprised to discover that very few of these newer algorithms have actually been implemented. They exist only on paper, and often with details only sketched.
Two years ago I needed an algorithm to decide whether G is a so-called comparability graph, and was disappointed by what had been published. I believe that all of the supposedly "most efficient" algorithms for that problem are too complicated to be trustworthy, even if I had a year to implement one of them.
Thus I think the present state of research in algorithm design misunderstands the true nature of efficiency. The literature exhibits a dangerous trend in contemporary views of what deserves to be published.
Another issue, when we come down to earth, is the efficiency of algorithms on real computers. As part of the Stanford GraphBase project I implemented four algorithms to compute minimum spanning trees of graphs, one of which was the very pretty method that you developed with Cheriton and Karp. Although I was expecting your method to be the winner, because it examines much of the data only half as often as the others, it actually came out two to three times worse than Kruskal's venerable method. Part of the reason was poor cache interaction, but the main cause was a large constant factor hidden by O notation.
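Kruskal's "venerable method" is a nice illustration of Knuth's point about constant factors: sort the edges once, then merge components with a simple union-find, with almost no overhead per edge. Here is a minimal sketch in Python (my illustration, not Knuth's GraphBase implementation, and not tuned for performance):

```python
# Minimal sketch of Kruskal's minimum spanning tree algorithm with union-find.
# The per-edge work is tiny (a sort plus near-constant-time merges), which is
# part of why such a simple method can beat asymptotically fancier ones in practice.

def kruskal_mst(n, edges):
    """Minimum spanning forest of a graph on vertices 0..n-1.

    edges: iterable of (weight, u, v) tuples.
    Returns (list of chosen (u, v, weight) edges, total weight).
    """
    parent = list(range(n))
    rank = [0] * n

    def find(x):
        # Path halving keeps the union-find trees shallow.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx == ry:
            return False  # would create a cycle
        if rank[rx] < rank[ry]:
            rx, ry = ry, rx
        parent[ry] = rx
        if rank[rx] == rank[ry]:
            rank[rx] += 1
        return True

    chosen, total = [], 0
    for w, u, v in sorted(edges):      # consider edges in order of weight
        if union(u, v):
            chosen.append((u, v, w))
            total += w
    return chosen, total


if __name__ == "__main__":
    # Tiny example: 4 vertices, 5 weighted edges.
    edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
    tree, weight = kruskal_mst(4, edges)
    print(tree, weight)  # [(0, 1, 1), (1, 3, 2), (1, 2, 3)] 6
```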
Friday, May 02, 2014
Reviewing Question: What's Important
Nick Feamster on Google+ recently shared (and gave me permission to blog about) the following review comment:
This review comment is super enlightening:
"I think there is a general problem with the overall goal of this study and area of research. This seems to be making data usage a critical thing people should be paying attention to. People have real issues and I am not at all sure that this one deserves attention. They have real concerns like, are they going to lose their job, where should their children go to college, should they encourage their elderly parents to move into a retirement center."
It's so cringeworthy, it's funny. You could substitute "data usage" with pretty much any research topic you feel like, and you have a review that's almost certainly accurate and justifies rejection. Which is what makes it such a wonderful, terrible review.
Without wanting in any way to excuse this reviewer, I do want to say that this review highlights an ever-increasing problem: I believe review decisions are more and more becoming dominated by subjective decisions about what topics are "important". I realize some may say it has ever been thus, and I acknowledge that the importance of the underlying problem has always been a factor in judging a paper. But I think the subjective judgment has become more significant, both in systems and in theory, over the years for multiple reasons. As the field has expanded there's less of an underlying agreement and common understanding of what's important. Many times the reviewer may not know the area well enough to judge the importance, and there is ever-growing potential for area bias: the problems in my area are (more) important. Further, there are far too many papers for the available slots, so reasons to reject have to be found. As the above comment suggests, one can always call into question the importance of the problem the paper aims to solve.
But finally, I think, it fits in with an issue that keeps coming up for me: reviewers are too arrogant. If they don't see why the problem is important, then the issue must be with the research or the writing; it couldn't be with their reading or understanding of the problem space. Reviewers will have opinions regarding the importance of the works they read -- no getting around that -- and they should where possible give advice to authors on how to best present their results. But they could often be a bit more judicious in recognizing that they are expressing their opinion and offering advice; they are not, in the end, the final arbiter of a paper's eventual, actual importance.
I don't see subjective decisions in "importance" going away. But I think they could be given a bit more care, both in how they are used in the final decision-making, and in how reviewers express their opinions on "importance" to authors.
If you'll excuse me now, I don't have time to blog further, I have to go plan where my children should go to college and where I'll eventually ship off my aging parents. (Thank goodness I have a tenured position.)