
FPGA – the next trading revolution?

This week is a trading technology week, as I’m chairing panels at two trading technology events.

The first is focused upon trading architectures and I’m surrounded by engineers.

Forget strategists, technologists, programmers or developers.

Engineers.

The reason is that it’s all about FPGAs.

What the hell?

FPGAs – Field Programmable Gate Arrays.

This is basically a chip onto which you can place a program directly, so it processes at lightspeed.

It links with low-latency, high-frequency trading, except that the debate about low latency was all about speed of processing; FPGAs are now all about using massively parallel processing (MPP) to analyse what is being processed.

It’s a data flow analytic, rather than a process flow throughput service.
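To make that concrete, here’s a minimal sketch in HLS-style C++ (high-level synthesis, the C-to-hardware flow the tool vendors now support; the function and numbers are my own illustration, not from any vendor kit). The loop is written so a synthesis tool can pipeline it to accept one new price every clock cycle, which is the data flow idea in miniature:

```cpp
#include <cstdio>

// Sketch of a data flow analytic: a four-tick moving average computed
// as prices stream past, one result per input. In a vendor HLS flow the
// pragma asks the tool to pipeline the loop so a new tick enters every
// clock cycle; an ordinary C++ compiler simply ignores it.
void moving_average(const float *prices, float *out, int n) {
    float window[4] = {0, 0, 0, 0};  // shift register -> flip-flops on the chip
    for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1
        window[3] = window[2];
        window[2] = window[1];
        window[1] = window[0];
        window[0] = prices[i];
        out[i] = (window[0] + window[1] + window[2] + window[3]) * 0.25f;
    }
}

int main() {
    const float ticks[8] = {100, 101, 103, 102, 104, 105, 103, 106};
    float avg[8];
    moving_average(ticks, avg, 8);
    for (int i = 0; i < 8; ++i)
        std::printf("tick %6.2f -> avg %6.2f\n", ticks[i], avg[i]);
    return 0;
}
```

The point is that the “program” becomes a fixed pipeline of logic that the data streams through, rather than instructions fetched and executed one by one.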

And yes, it’s the next evolution of debate about trading architecture, after high frequency trading (HFT).

Strangely enough, I’d given up on these things years ago as I thought MPP was done and dusted.

Instead we moved onto discussions around cloud, grid, virtualisation and such like.

The big conversation, in fact, was about colocated data centres and getting massive numbers of processors to crunch through huge data volumes at low latency.

Now it’s moved from processing to analysis, which is why parallelism is back on the agenda, or concurrency as some call it.

The reason it’s become important is that if you can run an analytic on the chip, then it can be done far quicker than through the CPU.

This means you can run risk analytics on high-frequency data throughput in real-time, rather than after the event.
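As a hedged illustration of what per-tick risk analytics can mean at this level (everything here – names, numbers, the limit – is invented for the example), the sketch below tracks net position and exposure as fills stream in, and flags a limit breach on the very tick that causes it rather than in an end-of-day batch:

```cpp
#include <cmath>
#include <cstdio>

// Illustrative per-tick risk kernel (all names and limits invented):
// track net position and mark-to-market exposure as fills stream in,
// flagging a limit breach the moment it happens. Synthesised onto an
// FPGA, a check like this sits in the data path and runs on every tick.
struct RiskState {
    long  position = 0;      // signed: buys positive, sells negative
    float exposure = 0.0f;   // position marked at the latest price
    bool  breached = false;
};

void on_fill(RiskState &s, long qty, float price, float limit) {
    s.position += qty;
    s.exposure  = s.position * price;
    s.breached  = std::fabs(s.exposure) > limit;
}

int main() {
    RiskState s;
    const long  qtys[4]   = {500, 700, -200, 900};
    const float prices[4] = {50.0f, 50.5f, 50.4f, 51.0f};
    for (int i = 0; i < 4; ++i) {
        on_fill(s, qtys[i], prices[i], 75000.0f);  // hypothetical 75k limit
        std::printf("pos=%ld  exposure=%.0f  breached=%d\n",
                    s.position, s.exposure, s.breached);
    }
    return 0;
}
```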

You can also run complex simulations of massive amounts of data quickly, easily and cheaply.

FPGA is back on the agenda because it’s also far simpler than ten years ago.

Ten years ago, these systems needed hard coding and sat firmly in the telecommunications sector.

As an engineering technology it was incredibly complex, but the level of tool support from vendors is now far greater than ever before, which has made FPGAs much easier to use.

In other words, the design issues are no longer part of the problem.

These things can now just be programmed into the system.

This is important because, effectively, FPGAs allow a compute cycle of data to be programmed onto a chip or, in this case, a processing board.

The result is savings in heat and power usage, but also a massive increase in raw compute power.

For example, one bank was talking about Monte Carlo simulations that showed performance levels 30 times better than doing this through a CPU and 175 times better in efficiency terms.

Bear in mind that Monte Carlo simulations can involve fifty-year or longer scenarios, with roll back, querying, resets and roll forward all built into the modelling, and now in real-time.
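To see why Monte Carlo maps so naturally onto an FPGA, here’s a deliberately simplified sketch (geometric Brownian motion with made-up parameters): every path is independent of every other, so where a CPU grinds through them one at a time, a synthesis tool can unroll them into parallel pipelines in silicon:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Simplified Monte Carlo sketch: price paths under geometric Brownian
// motion. Each path is independent of every other, which is exactly the
// property an FPGA exploits: paths become parallel hardware pipelines
// instead of sequential loop iterations on a CPU. All parameters are
// invented for illustration.
int main() {
    const int    paths = 10000, steps = 252;   // one simulated year of days
    const double s0 = 100.0, mu = 0.05, sigma = 0.2, dt = 1.0 / steps;
    std::mt19937 rng(42);
    std::normal_distribution<double> z(0.0, 1.0);

    double sum = 0.0;
    for (int p = 0; p < paths; ++p) {          // independent -> parallelisable
        double s = s0;
        for (int t = 0; t < steps; ++t)
            s *= std::exp((mu - 0.5 * sigma * sigma) * dt
                          + sigma * std::sqrt(dt) * z(rng));
        sum += s;
    }
    std::printf("mean terminal price: %.2f\n", sum / paths);
    return 0;
}
```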

That’s complex and involves massive amounts of data analytics.

A little like taking petabytes of data and churning through it all in real-time.

Forget batch and overnight. It’s all real-time.

This is why FPGAs are being used extensively for scenario modelling of real-time risks, calculating positions in real-time for the world’s traders as they position against Greece, Italy and each other.

And all of this is achieved with far less power and far faster clock times than traditional CPU processing.

This is because FPGAs are actually SoCs.

WTF?

SoC – System on a Chip.

What this means is that you can put a board into the computer, and on that board sits everything the analytical system needs to run fast-cycle analytics.

So the software and hardware are married as one on the chip – a complete System on a Chip (SoC).

Imagine therefore that you have massive amounts of data flowing in high frequency and high volume across low latency engines throughout the world’s markets looking to trade.

Then you have lots of FPGAs deployed across those processes, looking at the data and tracking risk, opportunity, collateral, liquidity, credit and more.

Each FPGA is given a specific analytical function as a SoC, allowing you to run huge volumes of trading without the processing power limitations of old, or analytical delays from hitting CPU limits.
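Here’s a toy sketch of that division of labour (the units and figures are invented): one tick fans out to independent analytic functions, each standing in for an FPGA or a region of a chip, all of which can fire concurrently instead of queueing for one CPU:

```cpp
#include <cstdio>

// Invented example: one market tick fanned out to independent analytic
// units. In software these run one after another; laid out as separate
// blocks on an FPGA (or as separate FPGAs), they all fire on the same
// tick in the same clock cycle.
struct Tick { float price; long size; };

float risk_unit(const Tick &t)      { return t.price * t.size * 0.01f; }
float liquidity_unit(const Tick &t) { return t.size / 1000.0f; }
float credit_unit(const Tick &t)    { return t.price * 0.002f; }

int main() {
    const Tick tick{101.5f, 2500};
    // Conceptually concurrent: each unit sees the same tick independently.
    std::printf("risk=%.2f liquidity=%.2f credit=%.3f\n",
                risk_unit(tick), liquidity_unit(tick), credit_unit(tick));
    return 0;
}
```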

This is why FPGA is big news in trading: it lets traders take huge streaming data flows, analyse the data concurrently, and implement and deploy it all with ease.

So we’ve gone from the technical low latency discussions to working out how to analyse these streams of data – these massive volume, high speed systems – in real-time using FPGAs.

We’ve gone from process and processing to analysis and data flow.

That’s the architectural discussion of today.



  • Mark Piper

    Some people in our industry have been playing with FPGAs for many years. What amused me most was the firm that bought a bunch of PS3s – their fast graphics processing made them very efficient FPGA processors. I’m not sure they ever had the courage to run strategies on them, but they certainly would win first prize for outside-the-box thinking….

  • Björn Ekeblom – FPGA designer

    You make it sound so easy, but analytical processing is quite abstract, and abstract processes are difficult to implement in a way that is fast and efficient on an FPGA. If you can break the analytics into small repeatable processes, then put those into HW; everything else should be run as some sort of SW on a “Network-on-Chip” implementing a “Sea of Cores”. See the research by Johnny Öberg, PhD.
    Mark, you should distinguish between GPUs and FPGAs. If you want to fiddle with FPGAs you don’t buy PS3s. You buy ready-made development boards at first and thereafter make your own hardware. I’m quite sure they didn’t use an FPGA as a GPU in the PS3. (Even though you could implement a sort of GPU on one.) http://electronics.howstuffworks.com/playstation-three2.htm


  • Good summary. For a specific example and a few more technical details, check out http://softwaretrading.co.uk/2012/02/12/fpgas-the-software-in-the-hardware/.