Almost every computer science conference has (at least) two registration rates: one for students, and one for everyone else. Ostensibly, this is because we want to encourage student attendance at conferences. While it's not often put in these terms, it should be said that this is part of the reason registration fees seem so high: non-students are effectively subsidizing students. (Which, I think, is fine.)
What about post-docs? I've had this question in the past when I was running local arrangements for a conference and have unsurprisingly heard it come up again recently.
I've looked around and, as far as I can tell, separate registration rates for post-docs don't appear common. I don't think any of the conferences I regularly participate in have them. If you know of conferences that do, please comment.
Since CS seems increasingly to be moving to a state where postdocs will be common -- possibly the norm -- this seems to be a question worth considering. The problem is there is a big range in postdocs. We'd like to charge postdocs with money (such as, say, postdocs in research labs that pay for conference travel) the full rate to keep standard registration rates low, but we'd like to charge postdocs that don't have specific travel money less so as to encourage their attendance (and not burden them financially). While it would be nice to have two different postdoc rates -- postdocs with money and postdocs without -- I don't think that would go over well with conference organizers or the people who could pay for postdocs.
The default we seem to have wandered into is that postdocs are charged the full rate, but I'm unconvinced that the system ended up that way because of careful thought. Perhaps it is because until recently postdocs have been relatively rare. It seems a question worth revisiting. Any opinions?
Friday, April 30, 2010
Wednesday, April 28, 2010
Conference Registration Deadlines (and Fees)
I'm spending the morning registering for conferences. STOC early deadline is April 30, ISIT is May 1, and EC is May 6. I'm sure there are others coming up. Remember to sign up now! (Why pay the "late signup" fee of $100 or so?) And please say hello if I see you at any of these events.
In terms of pricing, early rates are $400 for EC, $500 for STOC, and $675 for ISIT. I'll let people argue the relative value one obtains for one's money at the different conferences. It's worth noting the STOC fee includes the Tutorial Day on Saturday; tutorials for EC and ISIT are extra.
Tuesday, April 27, 2010
Last Day of Class!
Sometimes, I think the students are so excited about the end of classes, they don't realize that it's almost equally exciting for the faculty. While I enjoy teaching, by the end of the semester, I'm looking forward to the break as much as they are.
I'm hoping the free time will lead to a small research burst (SODA deadline for inspiration -- plenty of time!) but sadly the end of classes doesn't quite match up to the end of academic responsibilities. Besides the final exam to prepare (and administer and grade), May appears to be the month for all the end-of-year items: oral qualifying exams, Ph.D. defenses, even a faculty retreat day. (Never mind the usual other business -- a PC meeting, a workshop...) Summer doesn't really seem to start until June. But still, the last class is a nice milestone, an achievement I can enjoy as readily as the students who have survived the semester with me.
Thursday, April 22, 2010
Random Links for the Day
FemaleScienceProfessor tells an Evil Reviewer story.
The CRA blog tells us that DARPA is back, in terms of funding university research. Ed Lazowska has related posts here and here at the CCC blog.
If you think deciding authorship is complicated normally, what about for Polymath projects?
For my last lecture in Algorithms and Data Structures, I try to show something that I, at least, think is amazing: maximal palindromes can be found in linear time. (pp. 197-198 of Gusfield's book Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology).
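Gusfield's book gets the linear-time bound via suffix trees and longest common extension queries; as a rough illustration of the same claim, here is a sketch of Manacher's algorithm, a different classic method that finds the longest palindromic substring in linear time (this is not Gusfield's presentation, just one standard way to see that linear time is achievable):

```python
def longest_palindrome(s):
    # Manacher's algorithm: insert separators so even- and odd-length
    # palindromes are handled uniformly.
    t = "#" + "#".join(s) + "#"
    n = len(t)
    radius = [0] * n      # radius[i] = palindrome radius centered at t[i]
    center = right = 0    # palindrome with the rightmost reach seen so far
    for i in range(n):
        if i < right:
            # Mirror a previously computed radius where it's still valid.
            radius[i] = min(right - i, radius[2 * center - i])
        # Expand around i; each expansion moves `right` forward overall,
        # which is what makes the total work linear.
        while (i - radius[i] - 1 >= 0 and i + radius[i] + 1 < n
               and t[i - radius[i] - 1] == t[i + radius[i] + 1]):
            radius[i] += 1
        if i + radius[i] > right:
            center, right = i, i + radius[i]
    # Map the best center back to a slice of the original string.
    i = max(range(n), key=lambda j: radius[j])
    start = (i - radius[i]) // 2
    return s[start:start + radius[i]]
```

The amortized argument is the nice part: the right boundary of the furthest-reaching palindrome only ever advances, so the total expansion work is O(n).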
But this year, I didn't miss enough lectures during the semester. So I still have one more lecture -- a good excuse to finally add a brief discussion of Bubblesearch (randomized greedy) and maybe also Human-Guided Tabu Search into the class.
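For readers unfamiliar with Bubblesearch, the idea is to rerun a greedy heuristic many times on randomized perturbations of its priority ordering, biased so that items near the front of the greedy order still tend to go early. The sketch below is my loose rendering of that flavor, applied to first-fit bin packing as a stand-in problem; the selection rule and the parameters `p` and `trials` are illustrative choices, not the exact procedure from the Bubblesearch paper:

```python
import random

def biased_order(items, p=0.25):
    # Walk down the remaining items, taking each with probability p,
    # so earlier (higher-priority) items tend to be chosen earlier.
    remaining = list(items)
    order = []
    while remaining:
        i = 0
        while i < len(remaining) - 1 and random.random() > p:
            i += 1
        order.append(remaining.pop(i))
    return order

def first_fit(sizes, capacity=1.0):
    # Greedy first-fit: place each item in the first bin where it fits.
    bins = []
    for s in sizes:
        for b in bins:
            if sum(b) + s <= capacity + 1e-9:
                b.append(s)
                break
        else:
            bins.append([s])
    return len(bins)

def bubblesearch(sizes, trials=200, p=0.25, capacity=1.0):
    # Start from the plain greedy (first-fit decreasing) solution, then
    # keep the best result over many randomly perturbed orderings.
    greedy = sorted(sizes, reverse=True)
    best = first_fit(greedy, capacity)
    for _ in range(trials):
        best = min(best, first_fit(biased_order(greedy, p), capacity))
    return best
```

The appeal is that it needs almost no extra machinery beyond the greedy heuristic itself, which makes it a natural five-minute add-on to a lecture on greedy algorithms.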
Tuesday, April 20, 2010
Advisors and Publications
There have been several interesting comments regarding the student/advisor relationship on my previous posts (here, here, and here) on co-authorship. It's clearly an issue that stirs up strong feelings in many, with several anonymous students (or ex-students) protesting do-nothing advisors who still put their names on the papers, and some anonymous advisors suggesting graduate students underestimate the value they provide.
I tried to do a little reading on the subject, and found some food for thought in my searches of the web. (References are very welcome in the comments; here's one and another and another and another, three articles and a guideline for students, that seem pretty good.) Most schools seem to have a basic official policy or guidelines in place. Some examples include this bit from the Harvard medical school, this from Duke, and this from Berkeley. The best I've found is this statement from Penn, which is amusing first because it seems to blatantly plagiarize directly from the (earlier dated) Berkeley one, but also because it links to the specific policies of individual departments. Indeed, one thing that is clear is that different fields have different interpretations of what constitutes authorship, as well as different protocols for author ordering. The Berkeley/Penn statement goes so far as to say, "In some fields, the Principal Investigator of the lab is first author of all publications." (Take that, graduate students!) So perhaps one issue is that computer science, as a relatively new field, hasn't set up its tradition for authorship and author ordering; as we've discussed, even within computer science, theorists default to alphabetical order while systems people default to listing students first, ordered by contribution.
As far as official policies go, though, in general the authorship bar seems to be set intentionally low -- clearly tilted in the direction of advisors. While there is often a statement that each author should have contributed intellectually to the work -- which would not include just funding the project -- conception and design of the project is considered sufficient for that. (This seems to match the "the PI rules the lab" mentality of some fields -- you may have done all the work, but the PI has set up the entire framework for what the lab works on, so that counts.) Similarly, while it's generally understood that all authors should be involved in the writing, that can be limited to reviewing (and revising) the work. (PIs are, after all, too busy writing grant proposals to spend too much time on papers.)
As for my own opinion, I must admit, I'm on the side of the advisor. That's not surprising; after all, I am currently one. As a graduate student, I may have had my complaints about my advisor, but when I switched sides, I became a lot more understanding and sympathetic. In my experience, many graduate students do undervalue the contributions of their advisors, and the work they put into the students in general (and specific papers in particular). I'm not saying there aren't bad advisors out there, and that there aren't cases where advisors put names on papers they shouldn't, but my benefit of the doubt will tend to fall to the advisor.
Also, independent of the underlying ethical questions, my personal take is that graduate students may not realize the "cost" of having their advisor on the paper is small. Whether using alphabetical order or ordering by contribution, I think the default assumption is that the student was the "primary" author on a work unless other information is available (even when this assumption is unwarranted, which I think is a non-trivial fraction of the time). (Others may certainly disagree.) Here, again, recommendation letters and direct word-of-mouth, as well as longer established histories, are extremely important, more so in my experience than who has their name on what papers.
We can, certainly, get back to arguing about what is the "right" answer, where the line should be drawn for an advisor to put their name on the paper. Or perhaps we should go to the extreme of having (as suggested in the HMS guidelines or the authors' guide for Nature) a small writeup for each paper in which each author's contributions are described. (Sarcasm note: as I'm sure I've previously stated, I would hate such a system.) More realistically, as these many guidelines all seem to state, authors need to talk about this and set expectations early in the process.
Where does this leave the unhappy graduate students? Ideally, entering graduate students should try to find out potential advisors' authorship policies before signing up -- ask the current students for an off-the-record honest appraisal. Failing that, graduate students should talk to their advisors as soon as an issue arises -- let them know early on if you think you're writing a solo paper. If you've waited until the paper is being written to tell your advisor you don't think they've done enough, I don't think you've set up the situation appropriately.
And what should you do when your advisor says, "No, I'm the PI, my name goes on the paper"? As a practical matter, keeping in mind the strong advisor-oriented tilt of the authorship policies I've seen, I'd suggest finding a way to live with that, or finding a different advisor whose expectations are more in line with your own.
Monday, April 19, 2010
Should Harvard's Intro CS Class Do Away with Grades?
There have been discussions of late about offering our intro CS class in two flavors -- a "graded" and a "SAT/UNSAT" version. (Same class, just two different grading options, formalized as two course numbers.)
I was toying with whether or not to do a post about the issue when fortunately my colleague Matt Welsh wrote a comprehensive post on it at his blog Volatile and Decentralized and saved me the trouble. So here's the post -- Should Harvard's Intro CS Class Do Away with Grades?
Guest Post: Justin Thaler / New Paper
I'm happy to introduce Justin Thaler, a first-year graduate student at Harvard whom I'm advising. You can find out more about Justin at his home page. Justin volunteered to write a post about a paper we (Cormode, Mitzenmacher, Thaler) have put on the arxiv. (I note, in line with current discussions, that the author list is in alphabetical order, but as Justin is the student, you can rightfully assume he did most of the work.)
------------------------------
I'm happy to write a guest post announcing a new paper, "Streaming Graph Computations with a Helpful Advisor" (arxiv link), by me, Michael, and Graham Cormode of AT&T Labs -- Research. In our paper, we consider a variation of the streaming paradigm in which a streaming algorithm is allowed access to a powerful advisor who may annotate the data stream. We're primarily motivated by the emergence of commercial cloud computing services, like Amazon EC2, but we also have in mind other settings in which outsourcing of computation is desirable, such as weak peripheral devices that need to delegate computation they cannot handle on their own.
In many of our motivating applications, the helper is not a trusted entity; the commercial stream processing service may have executed a buggy algorithm, experienced a hardware fault or communication error, or may even be deliberately deceptive. For example, since executing a computation is costly, a cloud computing service may have a financial incentive not to complete the computation they were hired to perform, as long as they can convince their client otherwise. As a result, we would like the helper to prove that she executed the computation correctly, especially if providing the proof is not too costly.
In our paper, we primarily consider problems on graph streams, which are of high interest given the recent explosion in the number and scale of real-world structured data sets including the web, social networks, and other relational data. Many results for graph streams have been negative; apparently most graph algorithms fundamentally require flexibility in the way they query edges, and therefore the combination of adversarial order and limited memory makes many problems intractable in the standard streaming model. Consequently, these problems are ripe for outsourcing.
We prove a host of positive results for many standard graph problems in our model, many of which are optimal or near-optimal. We also provide a protocol achieving optimal tradeoffs between proof-length and working memory for matrix-vector multiplication, which is my personal favorite.
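The annotation protocols in the paper are more involved than this, but the classic Freivalds check conveys the general flavor of probabilistically verifying outsourced linear algebra with far less work than redoing the computation; to be clear, this is a standard textbook technique, not the paper's protocol:

```python
import random

def freivalds_check(A, B, C, trials=10, prime=2**61 - 1):
    # Freivalds' check: verify a claimed matrix product C = A*B in
    # O(n^2) time per trial (vs. O(n^3) to recompute) by testing
    # A(Br) == Cr for a random vector r, working modulo a large prime.
    n = len(A)
    for _ in range(trials):
        r = [random.randrange(prime) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) % prime for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) % prime for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) % prime for i in range(n)]
        if ABr != Cr:
            return False    # caught an incorrect claimed product
    return True             # correct with high probability
```

Each trial catches an incorrect product with overwhelming probability over the random choice of r, so a lazy or faulty helper is exposed cheaply; the streaming annotation model pushes this idea much further by also constraining the verifier's memory.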
While we're introducing our paper to the blogosphere, it seems worthwhile to mention some other blog posts closely related to our work. Richard Lipton describes work by himself, Atish Das Sarma, and Danupon Nanongkai on the Best Order Streaming Model, which happens to be a special case of our own:
http://rjlipton.wordpress.com/2009/08/24/streaming-models-both-old-and-new/#more-3294
In a more recent post, Professor Lipton describes a different notion of "security" in Cloud Computing.
http://rjlipton.wordpress.com/2010/04/08/can-we-trust-cloud-computing/#more-4665
The concern there is keeping the data private, and there are no explicit streaming constraints, but it's good to see other work emphasizing trust in outsourced computations.
------------------------------
Saturday, April 17, 2010
Bursts -- a new Barabasi book
At the end of the month, Barabasi's new book, entitled Bursts: The Hidden Pattern Behind Everything We Do, will be released. Here's the pre-order Amazon link, and he's apparently put up a web page with information about the book. Barabasi is something of a controversial figure in the networking community, as mentioned previously on this blog here; for example, there are those who feel he overstates claims without evidence (the power laws on the Internet controversy). However, his earlier book, Linked: How Everything Is Connected to Everything Else and What It Means, was one of a series of books (including Watts' Six Degrees: The New Science of Networks) that popularized networks and power laws in the public consciousness a few years ago. So it will be interesting to see how this book plays out, both in how it does with general audiences and in what scientists think of the content.
I've also still been waiting for the Easley/Kleinberg book Networks, Crowds, and Markets: Reasoning About a Highly Connected World. It seems to be on pre-order for June. While this will be a more "academic" book -- it's written as a textbook for an introductory course -- I'm wondering if it will reach the public consciousness.
I've always wondered why more computer scientists don't try to write "popular science" books. It would seem that we work in an area that should be of more popular interest than physics, yet physicists seem to write a lot more for widespread public consumption. It's something I'd like to try -- I started playing with an idea a few years back (after tenure), but the project stalled. Writing a book like that seems to be a multi-year commitment, and perhaps it's hard in our "get the paper ready for the next deadline" culture. It certainly was hard for me to keep up momentum, and to find the right voice and message. But perhaps someday.
Tuesday, April 13, 2010
What's the Co-Author Line?
Continuing the past discussion on authorship issues, I've been considering the question: at what point has a person involved in a project done so little work that they should not be included as an author? For today's post, let's not consider the case of advisors, which is arguably a special case that should be considered separately (and which we will return to in a future post).
It's interesting that my preferred approach of alphabetical ordering of authors arguably makes this question thornier, since the tendency of alphabetical ordering to equalize the contribution among authors suggests that there should be a contribution threshold below which you should be removed as a co-author. This is probably still true under other ordering schemes as well, but it seems more striking under alphabetical ordering.
Having thought carefully about it, I've come to a firm conclusion: it seems like a challenging issue with no right answer. In spirit, I agree with David Eppstein's comment in the previous post:
"The rule I tend to follow is that there are no 1% coauthors. If you haven't contributed an idea that you can point to as part of the paper, or done some of the hard work of turning ideas into proofs, or done some of the implementation or data analysis, you don't get to be a co-author."
In practice, I've found it's a lot fuzzier. Sometimes the "idea" is the initial formulation of the problem, not the solution. Should that count as a contributed idea? Many papers, I find, boil down to one key idea, and the rest is just details. Four of you spend a bunch of time kicking around ideas that don't work, then one sees the idea that works, and the paper really writes itself from there. Should the other three not be co-authors? These aren't hypotheticals. I've been in situations where I've talked with someone about a problem, which I considered at that point "my" problem, with the idea that we would possibly work together on it, only to have them present me with a finished paper they were submitting on their own a few months later (apparently without realizing that I might be peeved by this). I've been in a situation where one co-author, after much dead-end work by everyone, came up with the key insight and then thought that the result should be a single-author paper. I'm sure I (and others) can come up with similarly unclear situations.
I recognize that, in both of these circumstances, one could argue that the other party was in the right. But it really doesn't matter who was right; in the end, these became people I would be very wary of working with in the future (and perhaps they felt the same of me). As we've discussed before, one of the great benefits of collaborating is establishing long-term collaborations over multiple problems. That didn't happen. What I've learned is that, in such situations, one should be clear ahead of time what the expectations are. Being clear about expectations when starting a collaboration might be a little awkward, but it seems better than awkwardness later. If you're telling someone a problem and consider it from that point on a collaboration, tell them. I personally think the default assumption should be that if you're in the room on multiple occasions when working on a problem, you should be considered a co-author even if you're not the one to make the breakthrough, but if there's a reason that shouldn't be the case, somebody should speak up sooner rather than later. Different people may use different rules or guidelines for what they consider sufficient for co-authorship on a paper, and setting expectations is really the key.
As another story, I should admit I have been involved in a paper where I was a 1% contributor. I started working with a group on a problem, but it was a group in a distant location, and while I was there -- for a short period -- no progress was made. The group, with much more work, eventually solved the problem. I was asked by the senior author whether I wanted to be on the paper, and I declined. I thought a potentially awkward situation was handled perfectly by this senior author. They were up front that, as it turned out, I hadn't contributed tangibly to the paper, but were willing to accept that, in this case, it was an accident of circumstance rather than intent. I was glad they respected me enough to discuss it openly and leave it as my decision, even though in my mind the proper course was clear. (Indeed, in this case, I thought it was so clear I would not have been surprised or offended if they hadn't offered.) I assume that if I had decided otherwise I would have been a co-author on the paper without argument, and they would then have to consider that history in future dealings with me. I also think that, if I had said I wanted to be a co-author on the paper, it would have been appropriate for them to add me, because perhaps we had had different perceptions and expectations regarding the work.
In any case, I think the issue is a difficult one, made even more so because it's tied quite directly to one's job performance, which tends to color self-perception. Perhaps others can add more clarity to the question in the comments.
It's interesting that my preferred approach of alphabetical ordering of authors arguably makes this question thornier, since the tendency of alphabetical ordering to equalize the contribution among authors suggest that there should be a contribution threshold below which you should be removed as a co-author. This is probably still true under other ordering schemes as well, but it seems more striking under alphabetical ordering.
Having thought carefully about it, I've come to a firm conclusion: it seems like a challenging issue with no right answer. In spirit, I agree with David Eppstein's comment in the previous post:
"The rule I tend to follow is that there are no 1% coauthors. If you haven't contributed an idea that you can point to as part of the paper, or done some of the hard work of turning ideas into proofs, or done some of the implementation or data analysis, you don't get to be a co-author."
In practice, I've found it's a lot fuzzier. Sometimes the "idea" is the initial formulation of the problem, not the solution. Should that count as a contributed idea? Many papers, I find, boil down to one key idea, and the rest is just details. Four of you spend a bunch of time kicking around ideas that don't work, then one sees the idea that works, and the paper really writes itself from there. Should the other three not be co-authors? These aren't hypotheticals. I've been in situations where I've talked with someone about a problem, which I considered at that point "my" problem, with the idea that we would possibly work together on it, only to have them present me with a finished paper they were submitting on their own a few months later (apparently without realizing that I might be peeved by this). I've been in a situation where one co-author, after much dead-end work by everyone, came up with the key insight and then thought that the result should be a single-author paper. I'm sure I (and others) can come up with similarly unclear situations.
I recognize that, in both of these circumstances, one could argue that the other party was in the right. But it really doesn't matter who was right; in the end, these became people I would be very wary of working with in the future (and perhaps they felt the same of me). As we've discussed before, one of the great benefits of collaborating is establishing long-term collaborations over multiple problems. That didn't happen. What I've learned is that, in such situations, one should be clear ahead of time what the expectations are. Raising expectations explicitly when starting a collaboration might be a little awkward, but it beats awkwardness later. If you're telling someone a problem and consider it from that point on a collaboration, tell them. I personally think the default assumption should be that if you're in the room on multiple occasions when working on a problem, you should be considered a co-author even if you're not the one to make the breakthrough, but if there's a reason that shouldn't be the case, somebody should speak up sooner rather than later. Different people may use different rules or guidelines for what they consider sufficient for co-authorship on a paper, and setting expectations is really the key.
As another story, I should admit I have been involved in a paper where I was a 1% contributor. I started working with a group on a problem, but it was a group in a distant location, and while I was there -- for a short period -- no progress was made. The group, with much more work, eventually solved the problem. I was asked by the senior author whether I wanted to be on the paper, and I declined. I thought a potentially awkward situation was handled perfectly by this senior author. They were up front that, as it turned out, I hadn't contributed tangibly to the paper, but were willing to accept that, in this case, it was an accident of circumstance rather than intent. I was glad they respected me enough to discuss it openly and leave it as my decision, even though in my mind the proper course was clear. (Indeed, in this case, I thought it was so clear I would not have been surprised or offended if they hadn't offered.) I assume that if I had decided otherwise I would have been a co-author on the paper without argument, and they would then have to consider that history in future dealings with me. I also think that, if I had said I wanted to be a co-author on the paper, it would have been appropriate for them to add me, because perhaps we had had different perceptions and expectations regarding the work.
In any case, I think the issue is a difficult one, made even more so because it's tied quite directly to one's job performance, which tends to color self-perception. Perhaps others can add more clarity to the question in the comments.
Sunday, April 11, 2010
Crimson Article on Women in Computer Science
From a few days ago:
Computer Science at Harvard Sees Large Gender Imbalance
It's listed as a news article, although I don't think this is news. It is something we are trying to figure out, and do better:
“It’s something that we talk about a lot,” said Associate Dean for Computer Science and Engineering J. Gregory Morrisett. “We are coordinating with a bunch of departments around the world and are trying a lot of different things in the hopes that we will uncover some of the issues and correct for them.”
Friday, April 09, 2010
More on Authorship
My last post, on what I called "60-40" papers, where one author does non-trivially more of the work than the other(s), seems to have generated some interesting comments, worth following up on. There seem to be multiple issues in the comments that, to me, appear essentially orthogonal:
1) How does the community properly assign credit for 60-40 papers? Should we use author ordering or some other mechanism to assign credit?
2) What about advisors who do minimal to zero work but put their name on the paper?
3) At what point has a person involved in the project done so little work that they should not be included as an author (either by withdrawing willingly, or possibly by being told "you're not an author")? (I think of this as separate from the "advisor" issue.)
Let me start with item 1, assigning credit. I promoted the approach used in theory (derived, apparently, from mathematics) of alphabetical order, claiming credit comes out through things like letters and who gives the talk, and is determined more clearly over the course of a career. Many question this; indeed, many other fields use entirely different systems. Many fields use author order to signal the level of contribution in some way, so that being "first author" has significant meaning. At the extreme, the journal Nature's guide to authors asks that the contributions of each co-author be spelled out in the article itself:
"Author Contributions: authors are required to include a statement to specify the contributions of each co-author. The statement can be up to several sentences long, describing the tasks of individual authors referred to by their initials."
Graduate students and postdocs, in particular, are more concerned with systems that clarify credit, and this is understandable. They have short career track records, and want a job; making sure that they get their proper credit often seems, to them, quite imperative.
I'd like to defend the alphabetical, no-explicit-credit-assigned system, and then provide a couple of stories. (If you find that indulgent, you can skip the stories.)
One philosophical approach is to try to start from a blank slate. Forget about your current situation, and how your field does things. Your starting point is that you're just starting a career in science. What sort of system do you want to use? I'd argue you'd want to use a system that would lead to long-lasting, productive collaborations; that would have minimal overhead; and that would still provide meaningful ways of calibrating people over appropriate time periods. I think pure alphabetical does that. It removes the need to fight over (or even discuss) who contributed exactly what, leading more easily to frequent and repeated collaboration. To be clear, I have a strong bias: collaborations, I think, are great for scientific production, and on the whole make research much more fun. Alphabetical order is clearly easy. And while it's weak at revealing how much each individual author contributed to a specific multi-author paper, over the course of several papers I think the calibration works, especially when augmented with additional information such as letters in job searches and promotion cases. Further, it's not clear that other systems are really stronger in terms of assigning credit. Authors can disagree on contributions (how does that get settled, and what does it do to future collaborations?); in multi-author situations where order ostensibly matters, many advisors game the system, for example by putting students first regardless of their contribution in order to prep them for the job market or out of professional courtesy; and it's not clear how to value different types of contributions, like ideas vs. data collection and analysis. My bias is that the blank slate scientist starting their career would pick the alphabetical order system.
I have at least one data point for this conclusion: myself. (Here's where the stories start.) In graduate school, a bunch of us students got together and wrote a paper. This was a case where I was definitely the 60 author, and I thought it would be best if I was first author. The other students didn't object, but since I knew it wasn't standard for theory, I asked my advisor. (He wasn't a co-author for this paper, so his view was not biased in that regard.) He told me it was my choice, but that I needed to recognize the following: I would possibly get more credit for this paper, but, from then on, I would have adopted a system where, for every paper, I'd have to face the possibility of constructing the author order with my co-authors. Did I want to have that discussion for every paper down the line? I went with alphabetical order and have never looked back. I always recommend alphabetical order, although when I work with people in other areas I do defer to whatever system they want to use, and tell them they can put me wherever they like in the ordering. (It is true that, with tenure, one can care much, much less about such things.)
On the other side, another story. When I applied for my CAREER grant, apparently I was on the borderline, and it took quite some time to get the final word. I asked the NSF officer for feedback -- especially in case I needed to resubmit. (Apparently, enough money came through in the end to fund me.) One thing he said was that a lot of my work had been co-authored with very talented people, and it wasn't clear what my contributions were. This was a case where, obviously, there were no recommendation letters to draw from. Still, I was offended then by the comment, and looking back I still find it ridiculous. At that point, I'd written multiple papers with these other authors (who were not my advisor) -- clearly they thought I was contributing something worthwhile. And why was the assumption that they were the 60 contributor, instead of me? It's not clear that using author ordering would have helped in this case, or that such cases are at all frequent. But it does help me understand alternative points of view on the underlying question.
Wednesday, April 07, 2010
60-40 papers
A recent paper I worked on was a 60-40 paper. That's what I call it when one of the authors does noticeably more of the work. Really, it could be a 70-30 paper, or some other division; or with multiple authors, it could be a 50-30-20 paper. But I use the phrase 60-40 paper to refer to all of these situations. In this case, I was the 40.
60-40 papers aren't at all abnormal, and I've done enough papers not to let it bother me. When I'm the "40" author, I usually try whenever possible to do what I can to help even things out, for example in the writing/editing/revising stages; when I'm the "60" author, I recognize that the other authors have contributed, and the paper wouldn't be what it is without them. I've had amusing discussions with one co-author where we ended up admitting we both thought we were the "40" author for the paper we were writing. That was a collaboration that lasted for several papers; apparently, we both thought we were getting a good deal. I don't think I've been in many collaborations where multiple authors thought they were the "60", but my guess is those could be problematic.
Fan Chung has a nice page up with advice for graduate students that I think puts the 60-40 issue in perspective. At the end, under research collaboration:
What about the division of credit?
-- In math, we use the Hardy-Littlewood rule. That is, authors are alphabetically ordered and everyone gets an equal share of credit.
-- The one who has worked the most has learned the most and is therefore in the best position to write more papers on the topic.
-- If you have any bad feeling about sharing the work or the credit, don't collaborate. In mathematics, it is quite okay to do your research independently. (Unlike other areas, you are not obliged to include the person who fund your research.) If the collaboration already has started, the Hardy-Littlewood rule says that it stays a joint work even if the contribution is not of the same proportion. You have a choice of not to collaborate the next time. (If you have many ideas, one paper doesn't matter. If you don't have many ideas, then it really doesn't matter.) You might miss the opportunity for collaboration which can enhance your research and enrich your life. Such opportunity is actually not so easy to cultivate but worth all the efforts involved.
I'd just add a bit to this. Usually the "60" author will, actually, get more credit in various ways: usually they're the one to give the talk on the paper, for example. (It can also come out in letters when really needed.) And it's not so clear that a string of 60-40 collaborations with one author repeatedly being the 60 is so bad; without the 40, the research or the paper might not ever get done! Good collaborations are indeed enriching. To some, particularly graduate students, this approach and attitude might seem strange, but I recommend considering Fan's suggested understanding of collaboration.
To all the co-authors out there who have been the 60 to my 40, I appreciate your putting up with me. And to all the co-authors who have been the 40 to my 60, as long as we had a good time working on the paper, no worries, and thanks!
Monday, April 05, 2010
Sexual Harassment Policies (Yale v. Harvard)
My brother, in what I assume is a blatant attempt to be mentioned in this blog (Hi Steve!!!), sent me a link to the following article about a new rule (or, as the article describes it, "A Sad Day") at Yale, banning professors from having sex with undergraduates in all circumstances (not just with students who, say, are in their classes). More details are at, for example, the Yale Alumni Magazine.
I was all ready to start looking down my nose at the competition for being slow to adopt what are in my mind obvious rules to have, but decided to check Harvard's policy first. (Always a good idea.) Harvard's policy, arguably, isn't even as strong as Yale's old policy. (Harry Lewis will, I imagine, correct me if I am mistaken in my interpretations or usage of documents.) The relevant information seems to be here. The policy description includes the following, under the heading UNPROFESSIONAL CONDUCT IN RELATIONSHIPS BETWEEN INDIVIDUALS OF DIFFERENT UNIVERSITY STATUS:
"Officers and other members of the teaching staff should be aware that any romantic involvement with their students makes them liable for formal action against them."
This seems to suggest that faculty can't have "romantic involvement" with their students, but an old letter to the Crimson suggests that the wording is much weaker than that (the article is here, the letter is here). Strictly speaking (according to the letter), the wording suggests that faculty members involved with students face the risk of the student filing a sexual harassment/unprofessional conduct complaint; but if the relationship is brought to light by a third party, there's no (apparent) cause for disciplinary action. IANAL, but this seems like a possible interpretation; I'm not sure what the current interpretation is here at Harvard.
Indeed, later on the policy states:
"Amorous relationships between members of the Faculty and students that occur outside the instructional context can also lead to difficulties."
The rest of the paragraph suggests potential problems if Faculty engage in "romantic involvement" with students whom they are not directly teaching, but seems to make clear (by my reading) that it's not forbidden in any sense.
I've certainly heard arguments in the past that such rules shouldn't exist. I can even see that there are potentially complicated lines -- should a professor in the Faculty of Arts and Sciences not be allowed to date a Harvard Law student? (Extra credit: why or why not?) But given the potential for abuse (both intentional and unintentional) of the power relationship, I'm unapologetically on the "no faculty - undergraduate romance" side. Or, as it says in the Yale Alumni Magazine:
'An imbalance of power forms the rationale for treating Yale College students differently from their older counterparts. Undergrads, the revised handbook says, “are particularly vulnerable to the unequal institutional power inherent in the teacher-student relationship and the potential for coercion, because of their age and relative lack of maturity.” '
Duh. Good for Yale.
NSF Review Issues
My understanding is that the turnaround time on NSF decisions should be approximately 6 months. (See, for instance, their own diagram of the review process.) So I admit to getting a bit edgy after month 7 has come and gone without hearing anything on a proposal I have in. I went back and looked at my NSF proposal history, and found a small bright side: proposals that were accepted seemed to take a longer time for the decision feedback to arrive. Sadly, this rule did not seem to be universal, and my personal sample size is too small for rigorous conclusions. Feel free to share your own anecdotal evidence. Meanwhile, I'll try my best to forget about it until we get to month 8.
Also, this year, I have been asked (more than once) to review a single proposal "off-panel" (that is, I didn't serve on the panel that the proposal was part of). I can't recall having been asked to do this before, and wonder if there's a policy change behind it or if it's business-as-usual and I'm only now noticing it. I certainly don't mind -- I'm more than happy to help the NSF, and even more happy if I can do so without having to travel to DC. On the other hand, I worry that this approach might cause the same sort of problems that can occur with subreviewers, such as consistency across reviews.
Friday, April 02, 2010
Energy Sustainability
I spent an entertaining hour this afternoon listening to David MacKay of Cambridge (UK) give a talk about the Future of Energy here at Harvard. I've mentioned David in this blog before. He did a lot of early work on low density parity check codes and deletion codes, so we've run in the same circles for quite some time. But now, besides his well-known book on information theory (Amazon link, free downloadable version), he's written a book on sustainable energy (Sustainable Energy - Without the Hot Air : Amazon Link, free downloadable version) that was the subject of his talk. David's also recently been named Chief Scientific Advisor to the Department of Energy and Climate Change (UK).
The talk was based on the book. David's starting point is the question, "What would we have to do to move to a world where we weren't using fossil fuels?" (The "we" he's talking about is usually the UK, but it applies elsewhere as well.) He then takes a truly scientific approach. He considers various possible renewable energy sources (wind, solar, biomass, tides), and estimates things like their energy output per unit area. Based on these calculations, he figures out how much land would be required. So, for instance, if you were willing to cover 1/2 of Britain with windmills, things might look OK, but that's not a very likely possibility. He also considers the demand side of the equation, and what might feasibly be done there.
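This style of estimate is easy to reproduce yourself. Here's a rough sketch of the wind calculation; the power density, consumption, population, and area figures below are ballpark assumptions in the spirit of the book, not numbers taken from the talk:

```python
# Back-of-envelope land-area estimate, MacKay-style.
# All constants are illustrative assumptions:
WIND_POWER_DENSITY_W_PER_M2 = 2.0           # assumed average for onshore wind farms
CONSUMPTION_KWH_PER_DAY_PER_PERSON = 125.0  # assumed total energy use per person
UK_POPULATION = 62e6                        # rough ~2010 estimate
UK_AREA_KM2 = 244_000                       # approximate UK land area

# Convert daily consumption to average power demand, in watts per person.
demand_w_per_person = CONSUMPTION_KWH_PER_DAY_PER_PERSON * 1000 / 24

# Wind-farm area needed per person, then scaled up to the whole country.
area_m2_per_person = demand_w_per_person / WIND_POWER_DENSITY_W_PER_M2
total_area_km2 = area_m2_per_person * UK_POPULATION / 1e6

fraction_of_uk = total_area_km2 / UK_AREA_KM2
print(f"Wind farm area needed: {total_area_km2:,.0f} km^2 "
      f"({fraction_of_uk:.0%} of the UK)")
```

With these assumptions the answer comes out to well over half the country, which is the point of the exercise: the arithmetic is trivial, but it immediately frames the debate in terms of what is physically plausible.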
The book (and the talk) are not overtly political. Whether you believe in global warming or not, the question of sustainable energy is important -- for national security concerns, you might not want to be dependent on getting your energy from, for example, oil-rich countries. His book is not about the politics; rather, he tackles these questions as a scientist, producing the numbers that are needed for intelligent, reasoned discussion and debate on the issues. That might sound dry and, possibly, boring, but not in David's hands. He's blessed with a fine wit and a charming style that comes out in the book and even more so when he's speaking. (His slide showing a collage of posters from places protesting the introduction of windmills into their community, for instance, received a lot of laughs.)
David has done a truly rare thing as a scientist, writing a book firmly about the science of an issue of current import that people are actually reading and that is raising the level of debate. I admire his courage in taking on a challenging assignment, and hope his work helps lead to the positive changes he is looking for.