In April 2006, I wrote a commentary on
the opportunities and risks of algorithmic trading, with the view that
"there are huge dangers and successful execution demands deep
technological know-how. However, the only way in which markets can
operate is to seek out and trade off risks against rewards, and the
greater the risks, the greater the rewards."
The greater the risks, the greater the rewards … rewards when it goes right, losses when it goes wrong.
To be honest, the concern back then was a growing sense of risk created by
systems built on assumed but untested risk models, combined with hedge funds leveraging markets to such a degree that spikes and bursts are exaggerated ten-fold.
So it was disappointing, but not surprising, to read in the Times this morning
that Goldman Sachs' systems screwed up and lost 30% of their flagship global
equity fund's value in a week, because those systems made incorrect
assumptions. On a call yesterday, David Viniar, Goldman's CFO,
said: "We are seeing things that were 25-standard deviation events,
several days in a row. There have been some issues in some of the
other quantitative spaces, but nothing like what we saw last week."
A "25-standard deviation event" should happen only once every 100,000 years
or more, according to the models built into these systems, yet several
occurred last week and the fund lost around $1.5 billion.
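For context, even "once every 100,000 years" dramatically understates how implausible a 25-sigma move is under the normal-distribution assumption baked into these models. A quick back-of-the-envelope sketch (my own illustration, assuming one independent daily return and roughly 252 trading days a year):

```python
import math

# One-tailed probability that a standard normal variable exceeds 25 sigma,
# via the complementary error function: P(Z > x) = erfc(x / sqrt(2)) / 2.
p = 0.5 * math.erfc(25 / math.sqrt(2))
print(f"P(move > 25 sigma) = {p:.2e}")  # on the order of 1e-138

# Expected waiting time in years before a single such daily move occurs.
years = 1 / (p * 252)
print(f"Expected once every {years:.2e} years")
```

Even granting the model its own assumptions, the implied frequency is not once per 100,000 years but once per roughly 10^135 years. Which is exactly the point: when such events arrive several days in a row, it is the normality assumption that has failed, not the market.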
I would add that events that should take place once every 100,000 years
also seem to be occurring with increasing frequency. For example,
Highbridge Capital (controlled by JPMorgan), AQR Capital, Renaissance
Technologies, DE Shaw, Lehman Brothers and Barclays Global Investors all made
similar announcements in the last week, with Lehman Brothers saying:
"models are behaving in the opposite way we would
predict and have seen and tested for over very long time periods."
Long Term Capital Management (LTCM) almost broke the markets a decade
ago because its risk models were proven wrong. Have we learned
our lessons, or are we just repeating the mistakes of the past?