Monday, May 31, 2010

Blogging from Bertinoro

I'm here at Bertinoro (a beautiful, out-of-the-way, small town in Italy) for the 2010 version of Random GRAALS (= graphs and algorithms), a workshop that we've been having every few years, usually with around 30 or so people, which provides an opportunity to hear some nice talks, and gives a bunch of us an excuse to go to Bertinoro.  Sadly, I'll have to leave a bit early, so I can have some "home days" before STOC/EC/ISIT.  Busy start of summer.

Leslie Goldberg and Mark Jerrum started off with a joint talk on the complexity of certain graph counting problems, focusing on the connections between the hardness of approximate counting of independent sets on bipartite graphs and the hardness of computing the Tutte polynomial at various points.  They (along with others) have been looking at variations on this theme for quite some time;  there was a very nice bit about developing gadgets for randomized reductions by making use of random graphs with "bistable" characteristics, specifically second-order phase transitions, that seemed like fun.  Aris Anagnostopoulos talked about Approximation Algorithms for Dynamic Data -- for example, suppose you're trying to keep a set of item counts sorted, but the counts are changing over time, so as you do comparisons to sort, the underlying ground truth can be changing.  The data changes slowly enough that it doesn't change arbitrarily between time steps, but instead in a limited fashion;  however, it changes fast enough -- or comparisons are expensive enough -- that you can't just re-sort everything each time step.  Uriel Feige talked about hat puzzles on graphs, with a nice collection of open problems on this generalization of the standard hat puzzle.  I talked about random stuff I've been doing, including the improved analysis of the lossy difference aggregator, and the Carousel paper on network logging that appeared at NSDI.  I then forced the audience to sit through a brief version of our talk on Swoopo -- which will be presented at EC next week.  Berthold Vocking talked about combinatorial auctions for secondary spectrum auctions, looking at models of conflict graphs and independent sets.

I enjoyed all the talks, but I have to say, as always I'm impressed with Uri's talk -- he somehow makes everything seem approachable and fun.  

Now we're going out for what, I assume, will be a wonderful Italian dinner.

Friday, May 28, 2010

Extension Results

At the start of the semester, at one point I had 48 people signed up for my extension Algorithms and Data Structures course, which I said from day one would not last.  (The extension class is for Distance Education students, offered by Harvard's Division of Continuing Education;  students see recorded versions of the Harvard class lectures made available online, and I run the class very much like the Harvard class.)  17 made it through to get final grades.  That's well within the normal range.

My reviews from the extension school have generally, over the years, been satisfactory.  As I usually have minimal personal interaction with the students, I've never expected great reviews.  Strangely, for the last two years, while my teaching review numbers for the regular Harvard class have significantly declined (due to a vocal minority that appears to not enjoy the class), the numbers for the extension class have significantly risen.  Perhaps I should be concluding that I'm better on camera than in person?

Really, I'm at a loss to explain.  Obviously, the extension reviews are from self-selected survivors, so they're more likely to appreciate the class.  In the regular class, I notice that the poor reviews seem to come almost exclusively from those who are taking the class because it is (almost, but not really) a requirement for being a CS major -- hence, possibly, from people who don't really want to be there in the first place.  I might think extension students have lower expectations, in terms of grades.  For administrative reasons they really want to get at least a B in my class, and I think they're prepped to understand coming in that that's a difficult goal.  I should point out that most -- indeed, nearly all -- of the 17 left at the end got a B or higher.  Most also come in quite aware of the gaps in their background and work very hard to make up for them.

However, none of these things seem to explain the delta over previous years, and in particular the difference in direction in the delta between these two groups of students.  Go figure.

In any case, the extension students from CS 124 this semester should be very proud of themselves.  I believe my class is one of the more challenging undergraduate Algorithms and Data Structures classes at any university -- extension or otherwise.  Many (most?) of these students have jobs, families, and other responsibilities that make taking any class extremely difficult.  They should know that I'm impressed by them, and I hope my class turns out to be a useful experience for them.

Wednesday, May 26, 2010

Giving in to Student Demands

I received my student reviews today, and while I could easily post days' worth of discussions on them, suffice to say there's a non-trivial minority of students who seem unhappy.  While it's entirely possible that these students simply do not like my personality (which is unlikely to change to suit their needs) and hence would not like the class regardless of what I do, I'm attempting to reflect more constructively and find things to change next year.  (So, in some ways, this is just a note to myself that I happen to be sharing.) 

One change I guess I'll have to make is to officially label one textbook Mandatory.  I suggest both the standard Introduction to Algorithms and Algorithm Design as useful optional textbooks in the syllabus, and instead provide students with my lecture notes.  But several complain they find the lecture notes insufficient, so I'm guessing they don't understand that optional does not actually mean "unnecessary for everyone", and the best solution I can come up with is to make a book mandatory.  I can see other uses for this, such as assigning problems from the book as good review problems or problems to cover in section, so perhaps it's really a win for everyone.  If anyone cares to offer any insight into which of these two is more useful (keeping in mind I'll still be using and making available my lecture notes), please advise in the comments.

(This subset of students also complains that the lecture notes haven't really changed since I first wrote them about ten years ago, which is true enough, although I don't believe that, for example, Prim's algorithm has changed substantially in that time either.  The latest edition of Intro to Algs seems to be from 2009, while Algorithm Design now goes all the way back to 2005, so maybe Intro to Algs should win based on newness.)

A number of students also complain about the midterm.  Not just that it's too hard -- which is fine for them to complain about, it is indeed quite hard, although I personally would label it "just right" -- but that it's the same day an assignment is due, just before spring break.  Somehow, having the midterm after spring break, or before they've done an assignment on the material, has just always seemed like a bad idea to me, but it's clearly not working for them.

I think I've found a creative solution, which is simply to abolish the midterm.  This has many advantages beyond dealing with the complaints above.  For example, the past couple of years another faculty member has complained about my choice of midterm day, because they wanted to schedule their midterm that day, and didn't think it would be appropriate for the students taking both classes to have to take two midterms.  So now I can yield this midterm day.

Further advantages come from my midterm replacement idea.  I still want to have in-class assessment, so instead of a midterm, I'll give several shorter in-class quizzes.  I'll get a more timely idea of what students aren't understanding; we can go over the quiz problems, giving the students more feedback (partially solving a different complaint, that because I don't hand out answers to problem sets, students don't get enough feedback);  and now class attendance will be essentially mandatory.  I'll let students drop their lowest one or two quizzes, so if they miss a class quiz, they can use a drop, but students will have to get to class more often.  (Besides the lecture notes, my lectures are taped, so currently some students often skip class and catch the video later.)  The biggest downside is that I'll have to drop some material to make time for this approach.  I have some fun lectures at the end about a Least Common Ancestor algorithm and suffix trees (maximal palindromes in linear time!) that I'm sure I'll have to drop at a minimum, but even I admit those are a bit esoteric for a first algorithms course.  I can always make the notes available for the interested.

I imagine these changes will fail to please everyone.  But I think they're worth a try.  As I reread this, I recognize this might all sound sarcastic, and while I've sprinkled in some sarcasm in the discussion, these do seem like things I'll plan to do next year. There are also other changes I'll try to implement with the TAs -- a common complaint is that they can be too picky on details in grading (true enough) and that more math review is needed (certainly possible, best done by splitting up sections into super-advanced/standard/math-phobic?).  Then we'll see if any of this effort pays off in next year's reviews.    

Tuesday, May 25, 2010

Links to Look At

Matt Welsh has what he tells me is his most-read post ever about The Secret Lives of Professors.  I'll help promote it here, but save my commentary for the comments there.

FemaleScienceProfessor has had two great posts this week.  Yesterday's post was about the Canada Excellence Research Chair program -- "to attract Canadian and international leading scientists" in critical areas of research -- that gave a lot of money to 19 chosen faculty -- all men.  Today's post raises a challenging question -- under what circumstances would you retract a letter of recommendation?

As many others have noted, Martin Gardner has passed away.  Here's his Wikipedia page.

As the 2nd Innovations in Computer Science page is up, it must also be time to remind people of SLOGN.

A blog post on the CCC at 3 years.

Sunday, May 23, 2010

Grade Inflation?

I know Harvard is supposed to be famous for its grade inflation, but that's not generally the case in my class (and, I think, for our CS classes in general).  Having recently turned in grades, and having already started getting concerned notes from students regarding grades, I'm curious:

For an undergraduate class ---

What's the median grade you give?

Does it vary significantly from year to year?

Does the class size matter?  (Do you tend to give higher average grades in smaller classes?)

Just curious.

Friday, May 21, 2010

ACM elections reminder

I was asked to remind everyone that if they want to vote on the ACM elections, the deadline is May 24th.  Here is the official ACM election page.  You might note that Salil Vadhan is running for Member at Large...

In particular, a blog reader asked me to point to this position page on the candidates' opinions on open access, which is a fairly important topic for the future.

Tuesday, May 18, 2010

Harvard News Items

Some highlights:

1) From the Crimson:  "An undergraduate has been dismissed from Harvard College after allegedly hacking into online accounts of the teaching staff in his courses." 

Makes me wish I was still on Faculty Council; I'm trying to figure out how the student "...designed a website to obtain the Harvard University IDs and the personal identification numbers (PIN) of his instructors."  (Designing the website is easy;  how did he get his instructors to log into it?)  I'm still trying to figure out if those antics are more or less impressive than the former Harvard student who sent in completely bogus data and recommendation letters in all sorts of various applications.

2)  Here are two articles (USA Today, Business Week) about the IRS looking into university spending, and in particular whether rich universities are hoarding endowment money.  The idea that universities that are spending under 5% should be encouraged (or forced) to do otherwise probably seemed a lot more compelling back in 2007-2008, when apparently they were surveying the universities about their spending;  with some schools (Harvard, Princeton) having to pay out 6% this year to deal with recent losses, I'd imagine it seems less compelling now...

I've heard most endowments have made up a lot of their losses this year.  Harvard, ever a black box, has not released any information I'm aware of... anyone out there know more than I do?

3)  Various articles are out (here's one from the Times Online) about the very negative portrayal of Facebook's Zuckerberg in the upcoming Facebook movie "The Social Network."  I'll probably have to go see it, if only to see the scenes with the Harvard Professor (check the IMDB credits) -- supposed to be, from what I hear, a representation of our own Matt Welsh.

Sunday, May 16, 2010

Robobees Redux

While I intentionally try to avoid the political on this blog, I did make an exception previously when I heard that the Harvard Robobees project had made #1 on Sean Hannity's "List of Government's Most Reckless Spending," because that's downright stupid.  I've actually noticed that they've got the transcript from his show about this online.  (The video is also available here.)  Let's see what they actually say:

Begin Transcript:

And finally, we are here, and No. 1 will shock you. For that, we go to Ainsley Earhardt, who is standing by in Cambridge, Massachusetts, tonight — Ainsley.
AINSLEY EARHARDT, FOX NEWS CORRESPONDENT: Sean, No. 1 brings us here to Harvard University. This school got $9.3 million in stimulus money to build flying robotic bees, which they hope will one day help monitor traffic and even pollinate crops. We were in the community today, and this town was buzzing.
(BEGIN VIDEOTAPE)
UNIDENTIFIED MALE: Probably isn't going to stimulate the economy in the short term, which is what the stimulus package was supposed to do.
UNIDENTIFIED MALE: I don't think Harvard is doing anything that's wrong or improper. And I work with this organization a lot, and they do a lot of great research.
UNIDENTIFIED MALE: It seems like a project like that, while certainly admirable, and could definitely contribute in some way in the future, would probably be best put on hold for a while.
(END VIDEOTAPE)
EARHARDT: Did this $9.3 million project create any jobs? Well, according to Recovery.gov's Web site it created 1.66 jobs.
Well, we called Harvard, and they did give us a statement, and we're reading that, in part: "The three percent of stimulus funding provided for research was not only intended to create jobs immediately, it was also intended to stimulate economic growth, which is precisely what science funding does.
Designing and developing miniaturized flying robotic instruments that will prove useful in any number of ways, including surveillance applications on the battlefield and in weather forecasting, is an extremely important project."
So Sean, no one says that this project is not important. Folks here are just wondering how badly it will sting.
Back to you.
HANNITY: And thanks, Ainsley.

End Transcript.

Gee, you'd think they'd be able to come up with more damning things to say about the project, given that it was #1 on their list.  I'm actually disappointed reading this.  The only "shocking" thing about it is how little negative they actually say.  It sounds like some interns made up the list, decided robotic bees sounded goofy so they should put it up at #1, and then, when they actually did even a little digging, realized they couldn't say all that much bad about it.  Even if one chooses to subscribe to the usual tropes that the fringe (or not-so-fringe) right wing doesn't understand or care about science and technology that might range from useful to vital for our food supply, and that the only "good research" is weapons research, there are so many obvious potential military applications for such technology that they can't even make that sort of argument against this research project.  The only thing they seem to say is, "Well, this was part of the stimulus, how many jobs did it create?"  First, I'm not clear that the Expeditions budget directly came from stimulus money -- perhaps someone with better knowledge can clarify that?  But even if it did, Harvard's response statement seems quite appropriate.

For a more reasonable take on the project, you could look, well, just about anywhere, including Engadget, Electronics Weekly, Network World, and several others.  Heck, even Robot Armageddon had a more reasoned take on the work (and they're worried about, well, the forthcoming robot armageddon). 

What's sad, though, is how often Hannity's list -- and the poor thinking behind it -- has been copied and passed around through the web.  A little looking around on Google shows it's been repeated probably hundreds of times in various places.  Clearly a lot of people have seen it, and probably many aren't inclined to get actual real information on their own.  While I haven't heard of any actual negative impacts on the Robobees research group, I think it's best for the community to be vigilant against this sort of ridiculous attack -- because, in the end, it could affect us all.  

Friday, May 14, 2010

The Value of Experiments

Several days ago Lance wrote about "Is Complexity Math or Science?", and in that context wrote "We don't do experiments...", which has caused me to overreact.

I'm not sure who he meant by "We" -- I guess complexity theorists -- but I found the statement very strange at the time.  I do experiments all the time.  (Note:  Here I am considering computer simulations of various sorts as "experiments".  Some people may quibble with this, but since it seems ALL THE REST OF SCIENCE is moving toward increasing use of computers, including and perhaps especially computer simulation, I think it would be odd not to count a computer scientist running a simulation as doing an experiment.)

For example, a couple of days ago, I thought I had proved something about random sequences for a project I'm working on.  I asked the graduate student on the project to code it up to check my work -- I often sanity check proofs with simulation code when I can -- and he shipped me some results that seemed surprising.  They were consistent with my proof, but effectively showed that things behaved even better than I had proven (or expected). 
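
To give a concrete flavor of the kind of sanity-check simulation I mean -- this is a toy stand-in, not the actual project -- suppose you believe that the probability a random permutation has no fixed point tends to 1/e.  A few lines of Python will tell you quickly whether that claim is even plausible:

import math
import random

def has_fixed_point(perm):
    # perm is a permutation of 0..n-1; a fixed point is a position mapped to itself
    return any(p == i for i, p in enumerate(perm))

def derangement_fraction(n, trials=100000):
    # fraction of uniformly random permutations of {0,...,n-1} with no fixed point
    count = 0
    for _ in range(trials):
        perm = list(range(n))
        random.shuffle(perm)
        if not has_fixed_point(perm):
            count += 1
    return count / trials

print(derangement_fraction(20), 1 / math.e)   # both should be around 0.368

Nothing fancy, but that's the flavor of quick check I have in mind.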

So on the car ride home, I thought about it, and came up with what I think is a nice proof that explains what the student found in the simulation experiments.  This improved proof will end up in the eventual paper, I'm sure. 

While I wouldn't call myself a complexity theorist, it seems to me the results I'm working on here are in the class of complexity results -- I'm trying to show that certain permutations have certain properties with high probability, and I'm showing it by developing an algorithm that allows me to prove what I want.  Perhaps that's sufficiently far removed from "complexity theory" that some people think it doesn't count, but then it seems you'd have to throw the whole probabilistic method out of complexity theory, which seems strange to me.

My point is that the actual use of computers -- for example to simulate processes in order to sanity check proofs or develop insights and conjectures -- is part of how I do my theoretical research.  While certainly that approach might not be for everyone, I have deep concerns when a major-blogging-theorist says something like "We don't do experiments..."  I worry that, increasingly, computer science graduate students in theory are loath to actually use computers, and indeed that this is part of a broader problem: theory students specialize so narrowly, so early, that they don't get exposure to, and an understanding of, the rest of computer science.

So I'd just like to say, I personally find experiments very helpful. 

Thursday, May 13, 2010

Another Book on Networks

Another book on networks, namely Networks: An Introduction, by Mark Newman, will be available in about a week or so.  Mark is known to many in the networking community;  though he comes from the physics side, he has worked with computer scientists such as Aaron Clauset and Cristopher Moore.

The Easley/Kleinberg book Networks, Crowds, and Markets: Reasoning About a Highly Connected World still seems to be due out at the end of July.

Tuesday, May 11, 2010

Complexity of Distributed Systems, Market Crash Style

Apparently, the stock market crash of 5/6/10 was not, as originally thought, due to a "standard" human/computer error where someone meant to sell a million shares but instead entered a billion.  (Although, heck, that's an interesting enough possibility to think about on its own.)  Instead, it seems more like it was the result of unforeseen interactions and feedback between the market protocols and the computerized trading systems, running at high computer speeds, without a human in the loop to act as a brake or failsafe.  See, for example, this CNN post and this WSJ blog post, and dozens of further articles in the weeks to come.

I can't help but think that this should inspire some computer science research.  Let us take as a given that these unintended interactions, causing some stocks to be sold for pennies, should be viewed as a mistake.  How do we design a system that avoids such mistakes, with the least impact on the users (who are, clearly, in this case self-interested agents)?  Can we do so without humans in the loop, or is there a role for a human in the computerized, algorithmic trading world?  Can we develop a system whereby we could check for potential problems of this sort by having the big traders provide information about their trading strategies to some centralized agency, but in a way that no actual (or minimal) information about their trading strategies is actually released in the analysis?

If this isn't a motivating example for distributed systems/algorithmic game theory computer science research, I don't know what is.

Friday, May 07, 2010

Aldous/Diaconis: Longest Increasing Subsequences

For some research I'm currently doing, I ran across a truly wonderful "old" paper,
Longest increasing subsequences: from patience sorting to the Baik-Deift-Johansson theorem
by Aldous and Diaconis, which appeared in the Bulletin of the American Mathematical Society back in 1999.  It's easily findable online.

The fun in this paper is that it highlights the probabilists' notion of reduction, which is similar but slightly different from the standard CS notion.  Generally, it involves taking one process or class of objects and mapping it (usually bijectively) to another process or class of objects that can be analyzed.  And longest increasing subsequences, as it turns out, can be mapped into lots of things.  For example, a standard connection is with greedy patience sorting.  Here's the example from the paper.  Take a shuffled deck of cards:

7 2 8 1 3 4 10 6 9 5

Greedy patience sorting puts the cards into piles using the following rules:
1)  A card can be placed on top of any higher card.  When being greedy, we place the card on the leftmost possible pile.
2)  If no higher card is showing, the card starts a new pile to the right of all other piles.

So for this sequence, the piles appear as follows (with the top card of each pile shown at the top of its column):

7
------
2
7
------
2
7  8
------
1
2
7  8
------
1
2  3
7  8
------
1
2  3
7  8  4
------
1
2  3
7  8  4  10
------
1
2  3      6
7  8  4  10
------
1
2  3      6
7  8  4  10  9
------
1         5
2  3      6
7  8  4  10  9
------

The length of the longest increasing subsequence is equal to the number of piles at the end of patience sorting.  (Exercise, left to reader.)
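
For the curious, here is a minimal sketch of greedy patience sorting in Python (my own illustration, not code from the paper).  It tracks only the top card of each pile; since the tops are always in increasing order from left to right, a binary search finds the correct pile, giving the length of the longest increasing subsequence in O(n log n) time:

import bisect

def patience_lis_length(deck):
    tops = []  # tops[i] is the top card of pile i; always in increasing order
    for card in deck:
        # leftmost pile whose top card is higher than this card (cards assumed distinct)
        i = bisect.bisect_left(tops, card)
        if i == len(tops):
            tops.append(card)   # no higher card showing: start a new pile on the right
        else:
            tops[i] = card      # place the card on top of that pile
    return len(tops)

print(patience_lis_length([7, 2, 8, 1, 3, 4, 10, 6, 9, 5]))   # prints 5

On the deck above it reports 5 piles, matching the longest increasing subsequence 2 3 4 6 9.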

The paper then goes on to show other objects that connect to the longest increasing subsequence, including Young tableaux, an interacting particle system on the real line, and determinants of certain matrices.  These connections allow various analyses of longest increasing subsequences on random permutations.

The longest increasing subsequence, by itself, certainly sounds like a nice problem, but when you see these mappings to a variety of different objects, you're overwhelmed by the feeling that there's something fundamental there -- worth learning more about.  And that's what makes the paper such a fun read.

Thursday, May 06, 2010

Poll : Conference Acceptances

Consider your favorite conference(s).  Do you think they're accepting too many papers, too few, or just right?  (With SIGCOMM decisions out, STOC/EC/CCC coming up, etc., it seems like a timely question to ask.)  Keep in mind if you think conferences should be accepting more papers, it means longer conferences or more parallel sessions.  When thinking about adding papers, there are different levels you could imagine -- 10% bigger, or 50% bigger.

I'm sure with the SIGCOMM acceptances/rejections out some people think that SIGCOMM could accept more papers.  (33 were accepted;  there was a nominal target of around 36.)  I'm sure some other people like the bar to be kept high.  Increasing paper acceptances by about 10% would get us to the nominal target, a minimal change.  Increasing paper acceptances by about 50% might change the nature of the conference non-trivially, but I'm sure there are arguments on both sides about whether that would be a good or bad thing.  45-50 papers is still not a huge conference. 

Since we're talking across multiple conferences, rather than do a blog poll, please comment in the comments.

Tuesday, May 04, 2010

SIGCOMM PC, Not Liveblogging

I am absolutely, positively, not liveblogging from the SIGCOMM PC, as that is, I am rightly told, a bad idea.  But these are my impressions after the fact.  (I'm told notifications have gone out.)

This was a big PC -- 50 people.  They did get a great space for the meeting that worked very well -- it didn't seem crowded, we could actually hear each other, etc.  It did seem like a very large PC -- at the PC meeting, if I've calculated right, each PC member would have on average 7-8 papers to discuss, which seems low for spending a day-plus at a PC meeting.  I think the chairs wanted 50 in order to get a wider range of expertise on the PC.  (External PC reviewers are very rare, as compared to theory conferences.)  On the plus side, it's an interesting meeting, and I enjoyed listening to the various discussions.  (And, of course, I got some other work done in the times my papers weren't being discussed.)  50 PC members, however, just seems too big.  I would suggest that 40 would have left the workload manageable, still left the committee with enough expertise, and made the PC more interesting for everyone there.

I'm afraid to say that, unsurprisingly, as a PC we were moving far too slowly, and were, I think, far too negative.  On the first day, by dinner time, we still had over a dozen papers to discuss -- and all decisions up to that point were supposed to be considered tentative, to be finalized on day 2, which meant plenty more discussion if there was time.  Also, at the end of the first day, we had fewer than 20 papers in the Accept/Accept If Room categories (albeit several were in the "Tabled" category, which generally meant more PC members were looking at them or the reviewers were discussing some finer points);  we could, ostensibly, accept about 36 papers.

Matt Welsh wrote an excellent and humorous post on the Psychology of Program Committees, which to some extent applies.  Overall, though, I'm not clear what the problem is.  I think the PC is quite negative;  on the other hand, as I've mentioned, I also think the papers I had to review were pretty bad, so I guess I'm part of the problem.  In terms of timing, though, I think I'm part of the solution -- as was Matt Welsh.  (Early on, asked about a paper that had already been summarized, Matt briefly said, "I liked it.  It's not perfect, but it's very good, everyone rated it high, let's accept."  And we quickly moved on...)

I wonder if there needs to be an attitude change for SIGCOMM (and related) PCs -- a thought that I know has been expressed before.  Papers on the borderline tend to get killed.  There were certainly a couple of papers I would have pushed for that I thought would be reasonable accepts, but I was conflicted out of engaging in the discussions.  (Perhaps that's why I liked them, and one can decide for oneself whether that means strong conflict rules are a good or bad idea.)  I'm not sure if starting a meeting by saying, "We're GOING to accept X papers, so find the best ones!" is a possible approach, or even a good idea.  But maybe it's worth trying.

Tuesday we started early and tried to speed up to get through everything.  Overall I think we did a better job Tuesday, generally moving faster with fewer slowdowns, and we were up against a deadline as people had planes to catch.  I've said before that, overall, I like the conflict rules used by the SIGCOMM PC.  That being said, with time running out, I think a more flexible approach that doesn't require people to physically leave the room should be acceptable (as I've also said in the past, for theory conferences).  I certainly think it's very desirable that conflicts leave the room, but it does take time, and that has to be traded off against other concerns.

In the end, after various reconsiderations and re-animations (some papers, indeed, seem to rise from the dead), we accepted 33 papers;  announcements should now be out to authors, even though the PC still gets some time to finalize reviews.  We finished right on schedule.  


Monday, May 03, 2010

The Computer Science Ecosystem

Inspired a bit by some inspirational, high-level talks I've seen the last few days (see Matt's take on Ed Lazowska's talk at Harvard here), I've been thinking about what I'll call the computer science ecosystem.  The inspirational talks often refer a lot to the grand sweeping visions -- science will all be done on the cloud, computerized automobile systems will make driving safer and more efficient, etc.  These generally sound amazing, although to some they can also sound a bit out there.  (Robobees and Sean Hannity, anyone?) On the other hand, there's a lot of basic work down in the trenches creating the building blocks that eventually make the sweeping visions possible.  While some of that work is explicitly done as part of these bigger projects, a lot of scientists are working on their own little pieces of the puzzle without specific regard to the big picture.  And a lot of good work gets done that way too.

So what is the right sort of "research ecosystem" we should have, that balances between BIG and small projects, between individual researchers pursuing their goals and larger groups, between researchers focused on "core areas and problems" and researchers focused on interdisciplinary problems?  How do we promote and build such an ecosystem that lets the various types of research flourish, hopefully in ways that the sum is greater than the parts?

I don't pretend that I have the answers to these important questions.  They're certainly thought about over at the NSF, which decides how much money to distribute among small, medium, and large proposals, and how much to give to different programs, some of which specifically aim to promote interdisciplinary research.  And discussions about these questions have been (and I assume still are) a significant issue for the ACM-SIGACT Committee for the Advancement of Theoretical Computer Science, which gives feedback to the NSF on such matters.

Certainly a concern that always springs to mind is that the funding model for CS will come to be dominated by big projects, leaving little room for the individual researcher or even small groups more common in theory.  But in terms of increasing the funding pie overall for CS, the big project ideas are often much more persuasive, which seems to be the push taken by the Computing Community Consortium.  I don't know currently what the dollar spend looks like from NSF or the other funding agencies;  right now it still feels to me that we have a suitably diverse research ecosystem, but perhaps my view is limited.