
Is banking technology ‘biological suicide’?

It is noteworthy how many banks and exchanges have been facing computer failures recently.

There appear to be more and more of them. The best-known one over here was the RBS glitch, but there have been many more over the past three years:

The Flash Crash of 2010 

The issues faced by Aussie banks

The London Stock Exchange outages 

Santander’s systems consolidation issues 

In recent times, it seems to be getting more frequent:

NASDAQ’s failures during the Facebook IPO 

The $440 million Knight Capital glitch 

BATS going batty

The Madrid and Tokyo stock exchanges outages

RBS was followed by Nationwide  

And it is probably going to get worse

What’s going on?

It appears that the issues arise for one of three reasons:

  1. Old technology that cannot cope with the modern world;
  2. Upgrading legacy systems and screwing it up; and
  3. Running systems that are fit-for-purpose, but hide known risks.

The first category is the one that will occur more and more often, as banks have so many legacy systems across their core back office operations.

It is far easier to change and add new front office systems – new trading desks, new channels or new customer service operations – than to replace core back office platforms – deposit account processing, post-trade services and payment systems.

Why?

Because the core processing needs to be highly resilient; 99.9999999999999999999999% (and a few more 9s) fault tolerant; and running 24 by 7.

In other words, these systems are non-stop, and the bank would be highly exposed to failure if they stopped working.

It is these systems, however, that cause most of the challenges for a bank.

This is because, being a core system, they were often developed in the 1960s and 1970s.

[Picture: Univac2. Source: Web Software QA]

Back then, computing technologies were based upon lines of code fed into the machine through packs and packs of punched cards.

The cards would take years to program and days to update in batch.

Tens of thousands of lines of code inter-related across modules, meaning that a change to the smallest detail in any single line could ripple through the rest of the programme and potentially corrupt it.

That’s why banks would not change or touch these systems and is the reason why, once they were up and running and working, they would be left to run and work non-stop.

“If it ain’t broke, don’t touch it”, was the mantra.

The systems were then added to layer by layer, as new requirements came along.

ATMs were added, call centres and then internet banking.

And the core systems just about kept up.

This process is less true in the investment world – where many systems were replaced lock, stock and barrel for that old bugbear Y2K – but the retail banking world let its core systems become so ingrained and embedded that they became sacrosanct: changing, replacing or removing them was unthinkable.

Then the world moved on, and technology became a rapid-fire world of consumer-focused technologies. Add to this the regulatory regime change, which forced banks to respond more and more rapidly to new requirements, and the old technologies could not keep up.

Finally, the technology had to change.

[Picture: Data01. Source: NYU]

This is why banks have been working hard to consolidate and replace their old infrastructures, and why we are seeing more and more glitches and failures.

As soon as you upgrade an old, embedded, non-stop fault-tolerant machine, however, you are open to risk.

The 99.9999+% non-stop machine suddenly has to stop.

That's the issue.

A competent bank de-risks change by testing, testing and testing again, whilst an incompetent bank may test, but not enough.

Luckily, most banks and exchanges are competent enough to test these things properly by planning correctly through roll-forward and roll-back cycles.

The real issue with an upgrade or consolidation, though, is that it has to be done more and more frequently due to the combined forces of regulatory, technology and customer change.

The mobile internet world squeezes and exposes the legacy on the one hand – this is why many banks have struggled to incorporate mobile services with their internet banking services – whilst, on the other, the global, European and national regulatory requirements are placing further pressure on the core processes.

Just look at the erosion of processing fees thanks to the Payment Services Directive and the Durbin amendment to Dodd-Frank, or the intraday and soon real-time margin calls for collateralisation under EMIR and Dodd-Frank, if you want to see how that changes things (not to even mention Basel III).

Finally, assuming you managed a successful migration to the new world, there are still massive exposures to risk.

In this case, it is known risks that stay hidden, as the Knight Capital issues showed.

Take this summary from Bloomberg:

A decade ago, the firm suffered almost no consequences in a similar breakdown, when officials agreed to void trades after Knight unintentionally sold 1 million of its own shares.

The refusal to let Jersey City, New Jersey-based Knight out this time shows that brokers face increasing risks from technology errors after regulators toughened rules following the so-called flash crash two years ago. Knight, which mistakenly bought and sold exchange-traded funds and New York Stock Exchange (NYX) companies, was forced into a rescue that ceded most of the firm to six investors led by Jefferies Group Inc.

“This is really a wake-up call,” said David Whitcomb, founder of Automated Trading Desk. “They made one obviously terrible mistake in bringing online a new program that they evidently didn’t test properly and that evidently blew up in their face.”

We live in a world where tech drives our markets and yet the fear of changing tech is killing us.

 

For more on this, I recommend two must-reads.

The first, for investment market enthusiasts, is the Harvard Business Review’s analysis of high-speed trading as “biological suicide”.

The second, if you really want to get inside a bank’s head, is E M Forster’s The Machine Stops. It is definitely scary, and pretty much spot on in describing today’s banking world.

[Picture: Skynet. Source: Technorati]

 

About Chris M Skinner

Chris Skinner is best known as an independent commentator on the financial markets through his blog, the Finanser.com, as author of the bestselling book Digital Bank, and Chair of the European networking forum the Financial Services Club. He has been voted one of the most influential people in banking by The Financial Brand (as well as one of the best blogs), a FinTech Titan (Next Bank), one of the Fintech Leaders you need to follow (City AM, Deluxe and Jax Finance), as well as one of the Top 40 most influential people in financial technology by the Wall Street Journal’s Financial News.


2 comments

  1. Nice thought provoking one again Chris…
You give three reasons for the exposure but perhaps assumed the most important one – people. The reason the risk is increasing exponentially is that the people who understand these 30- and 40-year-old core bank systems and code module interdependencies are now retiring, being retired, or subject to lay-off programmes. So just like in ‘The Machine Stops’, ‘The Mending Apparatus’ is losing its capability. It is this factor which is ratcheting up the operational risk. Bank executives should note that hope and prayer are no substitute…

  2. Thought provoking piece – perhaps we’re facing a technological cliff as well as a fiscal one

