Apparently, the stock market crash of 5/6/10 was not, as originally thought, due to a "standard" human/computer error where someone meant to sell a million shares but instead entered a billion. (Although, heck, that's an interesting enough possibility to think about on its own.) Instead, it seems more like it was the result of unforeseen interactions and feedback between the market protocols and the computerized trading systems, running at high computer speeds, without a human in the loop to act as a brake or failsafe. See, for example, this CNN post and this WSJ blog post, and dozens of further articles in the weeks to come.
I can't help but think that this should inspire some computer science research. Let us take as a given that these unintended interactions, causing some stocks to be sold for pennies, should be viewed as a mistake. How do we design a system that avoids such mistakes, with the least impact on the users (who are, clearly, in this case self-interested agents)? Can we do so without humans in the loop, or is there a role for a human in the computerized, algorithmic trading world? Could we check for potential problems of this sort by having the big traders provide information about their trading strategies to some centralized agency, but in a way that little or no actual information about those strategies is revealed in the analysis?
If this isn't a motivating example for distributed systems/algorithmic game theory computer science research, I don't know what is.
Tuesday, May 11, 2010
4 comments:
According to the CNN article, at least, the problem was not really "unintended interactions" but that different rules were being applied in two different settings where the rules should have been the same.
The issue, as with most AGT research, is that the real problem is ill-defined. (How do you distinguish between the market going down for legitimate reasons and for illegitimate ones? Why is it OK for the market to increase by 10% without trading being slowed, but not OK for it to go down by 10%?) Once you have a clean problem amenable to theoretical analysis, it stops being relevant to the real world.
Perhaps more research is necessary. However, we definitely should not jump to prescriptions like the ones you suggest. The market recovered quite well, and probably will recover much faster next time since there is clearly money to be made.
This article lists more of the dynamics: http://bit.ly/dyogGT
This may not be a new problem at all but a variation on the problems associated with algorithmic trading. Some have blamed the 1987 crash on automatic trading schemes that were in place based on computer models. (I believe I first heard of this from Professor Robert Neugeboren in his Strategy, Conflict & Cooperation course, taught in Extension, but I am unsure) The cause of the 1987 crash is still somewhat debated, so I expect we may not see consensus on the 5/6/10 event for some time.
After reading the article linked above, I wonder if the present case is not somewhat different. If I understand correctly, the NYSE itself went into "slow mode," which is a built-in circuit breaker that gets humans involved. It seems this did protect prices somewhat on the NYSE itself, but other exchanges made their own decisions to wait or to continue trading. So while the problem of interaction between automatic trading strategies may not be new, the problem here may be differences in the automatic safeguards put in place by the exchanges themselves. It seems that NYSE may already have a somewhat working strategy--at least by switching into "slow mode" they have shown that their systems were capable of flagging the problem and indicating that humans should get involved. However, expecting humans to make the right decision in 30-60 seconds may have been too much to ask.
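For what it's worth, the kind of "slow mode" trigger described above can be sketched in a few lines. This is a minimal illustration, not any exchange's actual rule: the class name, the 10% threshold, and the tick window are all assumptions made up for the example.

```python
from collections import deque

class SlowModeTrigger:
    """Hypothetical circuit-breaker sketch: signal that automated trading
    should pause (and humans get involved) when the price falls more than
    `threshold` below the peak of the last `window` observed ticks.
    Parameters are illustrative, not taken from real exchange rules."""

    def __init__(self, threshold=0.10, window=5):
        self.threshold = threshold          # fractional drop that trips the breaker
        self.prices = deque(maxlen=window)  # recent price ticks

    def observe(self, price):
        """Record one price tick; return True if the breaker trips."""
        self.prices.append(price)
        peak = max(self.prices)
        drop = (peak - price) / peak
        return drop > self.threshold

# Example: the last tick is more than 10% below the recent peak (40.1),
# so only the final observation trips the breaker.
breaker = SlowModeTrigger(threshold=0.10, window=5)
ticks = [40.0, 40.1, 39.9, 38.0, 35.0]
signals = [breaker.observe(p) for p in ticks]
print(signals)  # [False, False, False, False, True]
```

Of course, the hard part the comment points to is not detecting the drop but deciding what happens next, and ensuring the exchanges' triggers are consistent with one another.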
Some of the people designing these algorithms are big boys in the field. Don't assume that they don't know this can happen. In fact, if it costs less to make this happen than you can gain by learning about your competitors' systems, maybe it's a cheap way to get a leg up on the rest of the market.
:)