
I started a theme two years ago, in preparation for my next book, about how we can trust people digitally. In a deepfake world, we cannot. A good example is the Hong Kong staffer who transferred $25m to an account after talking with her deepfake CFO on a video call.
The critical point is that scams using AI and deepfakes will persist and grow over the next decade. Deloitte’s Centre for Financial Services predicts that generative AI could drive fraud losses in the United States to US$40 billion by 2027, up from US$12.3 billion in 2023, a compound annual growth rate of 32%.
Earlier this year, we had the case of a French woman who sent almost $1m to a deepfake Brad Pitt. A recent Medius survey found that the majority (53%) of finance professionals have been targeted by attempted deepfake schemes. Even more concerning, more than 43% admitted to ultimately falling victim to such an attack.
As the US Department of Homeland Security notes: “The threat of Deepfakes and synthetic media comes not from the technology used to create it, but from people’s natural inclination to believe what they see”.
I see it, so I believe it. That is the nub. Put another way: whatever you see online, do not believe it.
It’s a little like the movies. I tell my children all the time that whatever you see in the movies is not real; it’s there for fun. The same is now true online, although that is harder to remember when you are on a Zoom call with your deepfake CFO.
In a report by Wired, David Maimon, a professor of criminology at Georgia State University, said that AI romance scams and other kinds of AI fraud have been rising fast recently.
“We’re seeing a dramatic increase in the volume of deepfakes, especially in comparison to 2023 and 2024. It wasn’t a whole lot. We’re talking about maybe four or five a month,” he says. “Now, we’re seeing hundreds of these on a monthly basis across the board, which is mind-boggling.”
You can even find ways to create your own deepfake persona quite easily on YouTube.
So, what’s the solution?
Well, surprisingly, it is the same technology. Deepfakes created by AI can be spotted by AI. For those who use ChatGPT, for example, OpenAI has built capabilities to spot deepfakes into its platform.
There are many other developments but, as David Maimon notes: “The major thing we have to understand is that the technology we have right now is not good enough to detect those deepfakes. We’re still very much behind.”
This is the core point.
For every technology innovation, the core drivers are sex and crime. I’ve blogged about this many times before: sex sells, and where there is something to sell, there is money; and where there is money, there is crime.
No wonder deepfake romance scams are rife. The message is: don’t believe what you see digitally.
Chris M Skinner
Chris Skinner is best known as an independent commentator on the financial markets through his blog, TheFinanser.com, as author of the bestselling book Digital Bank, and Chair of the European networking forum the Financial Services Club. He has been voted one of the most influential people in banking by The Financial Brand (as well as one of the best blogs), a FinTech Titan (Next Bank), one of the Fintech Leaders you need to follow (City AM, Deluxe and Jax Finance), as well as one of the Top 40 most influential people in financial technology by the Wall Street Journal's Financial News.

