In the age of AI, banks and fintechs must do more to protect their customers from social engineering fraud

Picture this: you get a call from a loved one’s real phone number. When you answer, you hear their voice for a moment, and then another person joins the call to tell you your loved one will be harmed if you don’t send money immediately. The call feels so real and terrifying that you have no reason to doubt the kidnapper, so you send the money via a peer-to-peer (P2P) payments service. The caller hangs up, and when you call your loved one back, they answer, completely unharmed. The original call used caller ID spoofing to make it appear to come from your loved one’s phone, and your loved one’s voice was generated with sophisticated deepfake technology. You’re out hundreds or even thousands of dollars.

This horrifying scenario is becoming increasingly common as artificial intelligence (AI) becomes accessible to anyone, including bad actors. Anytime a new technology emerges, its benefits for society come alongside increased risk. When it comes to AI/ML, there are new opportunities for fraud detection and prevention, but there are also new tactics fraudsters can use to steal people’s money. And in the AI arms race, it is clear that the fraudsters are winning at the moment. In a recent McAfee survey, 77% of victims of AI-enabled scam calls said they lost money, and more than a third of those victims said they lost more than $1,000.

AI is a powerful tool for fraudsters

Sophisticated phishing emails, fabricated documents, deepfakes, and voice-cloning scams are just a few examples of how fraudsters have weaponized AI in the past year to carry out social engineering fraud, in which a fraudster impersonates a legitimate party and reaches out to a target through an everyday social interaction. Fraudsters have used AI to impersonate bankers reaching out to customers and even customers reaching out to bankers. Some consumers have grown so wary that they establish a code word or safeword with friends and family to confirm whether a caller is really who they claim to be. A recent survey also found that 75% of Americans never answer calls from unknown numbers.

As a product director at Alloy, I've seen firsthand how sophisticated today’s financial fraud techniques can be. I scrutinize every external communication I receive for red flags, but I know even I am susceptible to an attack. I worry that my friends and family are too.

What financial services businesses need to do about AI-aided social engineering fraud

Fintechs and banks face an uphill battle when it comes to addressing social engineering scams because they can’t always adapt as quickly as fraudsters can. Financial services businesses operate within the law; fraudsters don’t. Fraudsters are unregulated, don’t have to contend with legacy technology, and are myopically focused on stealing money. Meanwhile, financial institutions are highly regulated and are responsible not only for protecting your money, but also for delivering returns and a great customer experience. As a result, while a bad actor can take advantage of the dark web’s equivalents of ChatGPT or steal personally identifiable information (PII), a bank has to work harder to combat the attack ethically and legally when using its own AI tools.

Fortunately, most financial institutions are filled with savvy people who are constantly evaluating new technology to stay ahead of new threats. For example, many banks and fintech companies now use behavioral analytics and biometric data as part of their fraud checks.
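
To make that concrete, here is a minimal sketch of what one behavioral-analytics check might look like: comparing a session’s typing cadence against a customer’s historical baseline. The data structure, sample values, and z-score cutoff are all illustrative assumptions, not any vendor’s actual implementation.

```python
# Minimal sketch (assumptions throughout): flag a session whose typing
# cadence deviates sharply from the customer's historical baseline.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Baseline:
    samples_ms: list[float]  # historical inter-keystroke intervals, in ms

def cadence_zscore(baseline: Baseline, session_ms: list[float]) -> float:
    """How many standard deviations this session's mean keystroke
    interval sits from the customer's historical mean."""
    mu, sigma = mean(baseline.samples_ms), stdev(baseline.samples_ms)
    return abs(mean(session_ms) - mu) / sigma

customer = Baseline(samples_ms=[180, 195, 170, 210, 185, 190, 175, 200])
session = [95, 90, 100, 85, 92]  # far faster than usual: possible bot or takeover

if cadence_zscore(customer, session) > 3.0:  # assumed cutoff
    print("flag session for step-up verification")
```

Real deployments combine dozens of such signals (mouse movement, device posture, navigation patterns), but the principle is the same: a fraudster may have the right credentials, yet they rarely behave like the customer.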

Banks and fintechs can turn the tide in the arms race and use AI to their advantage. Alloy and many other identity risk companies are building models that rely on customers' onboarding signals and ongoing behaviors to predict the likelihood of a fraud attack in real time. With these models, financial services businesses can identify fraudsters before they're able to commit a crime.
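
As an illustration only, and not Alloy's actual model, a toy version of such a risk scorer might combine a handful of onboarding and behavioral signals into a single fraud probability. Every feature name, synthetic data point, and threshold below is an assumption made for the example.

```python
# Toy fraud-risk scorer (illustrative assumptions throughout): a logistic
# regression over hypothetical onboarding and behavioral signals.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Hypothetical features per event:
#   device_trusted     (1 = recognized device, 0 = new/unknown device)
#   email_age_days     (age of the email address supplied at onboarding)
#   session_velocity   (actions per minute in the current session)
#   amount_zscore      (how unusual this transfer is for the account)
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.exponential(900.0, n),
    rng.exponential(3.0, n),
    rng.normal(0.0, 1.0, n),
])

# Synthetic labels: fraud is likelier with unknown devices, brand-new
# email addresses, frantic sessions, and unusually large transfers.
logit = -2.0 - 1.5 * X[:, 0] - 0.002 * X[:, 1] + 0.4 * X[:, 2] + 0.8 * X[:, 3]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new event in real time: unknown device, week-old email address,
# frantic session, and a transfer far outside the customer's normal range.
event = np.array([[0, 7.0, 12.0, 3.5]])
risk = model.predict_proba(event)[0, 1]
print(f"fraud risk: {risk:.1%}")
if risk > 0.5:  # assumed cutoff; real systems tune this against loss data
    print("hold the transfer and route to step-up verification")
```

The point of scoring at the event level, rather than only reviewing transactions after the fact, is that a suspicious transfer can be held for verification before the money leaves the account.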

Some banks are already incorporating AI directly into their fraud prevention programs. JPMorgan has begun using large language models (LLMs) to detect signs of compromise in phishing emails. Several British banks, including Lloyds Banking Group and Halifax, have implemented Mastercard’s consumer fraud risk model, which is trained on years of transaction data to help predict whether someone is trying to transfer money to an account associated with previous scams. When it comes to AI, there are many possibilities for fraud detection.
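
Those production systems are proprietary, but the underlying idea behind LLM-based phishing detection can be sketched with a far simpler stand-in: a text classifier that scores an email’s wording for phishing likelihood. The training emails below are invented for illustration.

```python
# Simplified stand-in for LLM-based phishing detection (not JPMorgan's
# system): TF-IDF features plus logistic regression over email text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked. Verify your password at this link immediately.",
    "Urgent: confirm your SSN today to avoid account suspension.",
    "Wire the funds now or your account access will be terminated.",
    "Here are the meeting notes from Tuesday's product review.",
    "Your March statement is now available in online banking.",
    "Thanks for lunch! See you at the conference next week.",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

suspect = ["Immediately verify your password or your account will be locked."]
print(f"phishing probability: {clf.predict_proba(suspect)[0, 1]:.1%}")
```

An LLM brings far more context to the same task, catching tone, urgency, and impersonation cues that a bag-of-words model misses, but the workflow is identical: score inbound messages and escalate the risky ones before a customer acts on them.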

A world where AI is everywhere

While the hype around AI’s societal benefits is justified, financial services businesses and their customers should also remember the risks. U.S. consumers reported losing $8.8 billion to scams in 2022, and that was before AI became fraudsters’ preferred social engineering tool. To truly prepare for the threat that AI presents in the coming years, banks should consider fighting AI with AI: getting ahead of the scammers and saving consumers potentially millions of dollars.

To stay ahead of fraudsters, banks and fintechs need to understand who their customers are, instead of only monitoring for suspicious transactions.
