Apparently, the stock market crash of 5/6/10 was not, as originally thought, due to a "standard" human/computer error where someone meant to sell a million shares but instead entered a billion. (Although, heck, that's an interesting enough possibility to think about on its own.) Instead, it seems more likely that it was the result of unforeseen interactions and feedback between the market protocols and the computerized trading systems, running at high computer speeds, without a human in the loop to act as a brake or failsafe. See, for example, this CNN post and this WSJ blog post, and dozens of further articles in the weeks to come.
I can't help but think that this should inspire some computer science research. Let us take as a given that these unintended interactions, which caused some stocks to be sold for pennies, should be viewed as a mistake. How do we design a system that avoids such mistakes with the least impact on the users (who are, clearly, in this case self-interested agents)? Can we do so without humans in the loop, or is there a role for a human in the computerized, algorithmic trading world? Can we develop a system whereby the big traders provide information about their trading strategies to some centralized agency that checks for potential problems of this sort, but in a way that the analysis releases no (or at most minimal) actual information about those strategies?
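To make the feedback-loop intuition concrete, here is a deliberately toy sketch (not a model of any real market mechanism, and every parameter in it is made up for illustration): momentum-following sellers react to each price drop by selling harder, which drives the next drop; an optional halt threshold stands in for the brake/failsafe discussed above.

```python
def simulate(start_price, steps, halt_drop=None):
    """Toy feedback loop between price drops and algorithmic selling.

    If halt_drop is set, trading halts (a crude 'circuit breaker')
    once the price falls below that fraction of its starting value,
    standing in for a human or failsafe brake in the loop.
    All numbers here are invented for illustration only.
    """
    price = start_price
    history = [price]
    sell_pressure = 0.01  # fraction of value shaved off per tick
    for _ in range(steps):
        price *= (1 - sell_pressure)
        # each drop makes the algorithms sell harder on the next tick,
        # up to an arbitrary cap -- this is the feedback
        sell_pressure = min(0.5, sell_pressure * 1.5)
        history.append(price)
        if halt_drop is not None and price < start_price * halt_drop:
            break  # failsafe: stop trading so humans can step in
    return history

no_brake = simulate(40.0, 25)
with_brake = simulate(40.0, 25, halt_drop=0.9)
print(f"unchecked feedback: ${no_brake[-1]:.2f}")  # collapses toward pennies
print(f"with a 10% halt:    ${with_brake[-1]:.2f}")  # stops well above zero
```

The point of the sketch is only that a tiny, locally sensible rule ("sell a bit more when the price falls"), iterated at machine speed against other copies of itself, produces a collapse no single agent intended; the research questions above are about detecting or preventing exactly that.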
If this isn't a motivating example for distributed systems/algorithmic game theory computer science research, I don't know what is.