Chris Skinner's blog

Shaping the future of finance

It’s the AI’s fault … urmm, no it’s not

I was wondering today what would happen if my AI tool sent money to a third party and it was fraudulent. Who would be liable? So, I did a bit of research to find the answer and here it is.

The general rule is that it’s your fault, as it’s not the AI that made the mistake. It’s you.

If an AI engine trades on your behalf and gets it wrong, the law doesn’t chase the algorithm but looks for the accountable human or firm behind it. This means that liability sits with whoever had control, duty, and benefit from the trading activity.

The thing is that the duty could sit in one of three places: with you, with the company, or with the vendor who provided the AI. It’s rather nuanced.

If you’re trading your own money using an AI tool, then any losses are on you. The AI is just an execution mechanism, like a spreadsheet or a broker interface. Losing money because the model made a bad call is treated no differently from you making a bad call yourself. You might have recourse against the provider if the system was defective or mis-sold, but not simply because it made poor decisions.

If the AI is trading on behalf of clients – say you’re running a fund or advisory service – then the bar is much higher. Under UK regulation, the Financial Conduct Authority expects clear accountability for algorithmic decisions. You remain responsible for suitability, risk controls, and oversight. If the AI behaves badly and causes client losses, “the machine did it” won’t wash. You’re expected to have governance, monitoring, and kill switches in place. You are still responsible.

Then there is the case where a third-party AI vendor is involved, and things get more difficult. You might try to push liability onto them, but that depends on contract terms. Most providers explicitly limit liability and position their systems as “decision support,” not autonomous fiduciaries. To succeed in a claim, you’d have to show a defect, negligence, or misrepresentation, not just that the model performed poorly.

It doesn’t stop there, however, as there are also edge cases regulators care about. If the AI causes market disruption – think runaway trading or manipulation – you could face enforcement under market abuse rules, even if the behaviour was unintended. Again, responsibility follows the operator.

The bottom line is this: legally, AI doesn’t take the risk … you do. The machine can execute faster and at scale, but accountability doesn’t scale away with it. The accountability will always be with you, your company and your decisions.

The real protection isn’t legal, it’s operational – tight model validation, clear limits, human override, and an understanding of exactly what the system is allowed to do before you let it near real money.
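To make that concrete, here is a minimal sketch (hypothetical names, not any real trading API) of what those operational protections can look like in practice: a guard layer that sits between the AI and real money, enforcing hard limits, an approved instrument list, and a human-operated kill switch.

```python
# Hypothetical sketch of pre-trade operational controls: every order the
# AI proposes must pass these checks before it reaches an execution venue.
from dataclasses import dataclass


@dataclass
class Order:
    symbol: str
    notional: float  # order size in account currency


class TradingGuard:
    """Pre-trade checks for AI-generated orders: limits, allow-list, kill switch."""

    def __init__(self, allowed_symbols, max_order_notional, max_daily_notional):
        self.allowed_symbols = set(allowed_symbols)
        self.max_order_notional = max_order_notional
        self.max_daily_notional = max_daily_notional
        self.spent_today = 0.0
        self.kill_switch = False  # a human can flip this at any time

    def check(self, order):
        """Return (approved, reason). The AI never bypasses this gate."""
        if self.kill_switch:
            return False, "kill switch engaged: all trading halted"
        if order.symbol not in self.allowed_symbols:
            return False, f"{order.symbol} is outside the approved universe"
        if order.notional > self.max_order_notional:
            return False, "order exceeds per-order limit"
        if self.spent_today + order.notional > self.max_daily_notional:
            return False, "order would breach daily limit"
        self.spent_today += order.notional
        return True, "approved"


guard = TradingGuard(allowed_symbols=["GBPUSD"],
                     max_order_notional=10_000,
                     max_daily_notional=50_000)
print(guard.check(Order("GBPUSD", 5_000)))   # approved
print(guard.check(Order("EURUSD", 5_000)))   # rejected: not on the allow-list
guard.kill_switch = True                      # human override
print(guard.check(Order("GBPUSD", 1_000)))   # rejected: trading halted
```

The point of the design is the one the paragraph above makes: the limits and the override live outside the model, so accountability (and the ability to stop it) stays with the humans who set them.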

In other words, the machines do only what you tell them to do. You will always be accountable for the machine’s operations and how it works. If you ever try to get away with “oh, that’s the machine’s fault”, it won’t wash. Bear this in mind as you build your intelligent bank.


Chris M Skinner

Chris Skinner is best known as an independent commentator on the financial markets through his blog, TheFinanser.com, as author of the bestselling book Digital Bank, and Chair of the European networking forum the Financial Services Club. He has been voted one of the most influential people in banking by The Financial Brand (as well as one of the best blogs), a FinTech Titan (Next Bank), one of the Fintech Leaders you need to follow (City AM, Deluxe and Jax Finance), as well as one of the Top 40 most influential people in financial technology by the Wall Street Journal's Financial News.