
In the age of AI, financial institutions and fintechs must do more to protect their customers from social engineering fraud

A majority of Americans are worried that AI is making scams harder to detect.


Picture this: You get a call from a loved one’s real phone number. When you answer the phone, you hear their voice for a moment. Then, someone joins the call to tell you that your loved one will be harmed if you don’t send money immediately. The terrifying scenario creates an instant sense of urgency and an emotional response that overrides any logical reason to doubt the caller. So you send them money. Once they get it, they hang up on you — and when you call your loved one back, they answer completely unharmed.

You’ve just been scammed. A criminal used caller ID spoofing to make you think the call was coming from your loved one’s phone, using sophisticated deepfake technology to generate their voice. And now, your bank account is lighter by hundreds or even thousands of dollars.

This is known as an emergency scam, and it’s becoming increasingly common as artificial intelligence (AI) becomes more accessible to bad actors. For financial institutions working to keep their fraud checks current, AI offers new opportunities for detection and prevention. But alongside those opportunities come new tactics fraudsters can use to steal people’s money.

The truth is, in the AI arms race, financial organizations are still trying to catch up to fraudsters and cybercriminals. Alloy’s 2025 State of Scams Report found that 85% of Americans worry that advances in AI technology are making scams harder to detect, creating an opportunity for financial institutions and fintechs to step in as protectors. At the same time, over a quarter (27%) of Americans have personally experienced an AI-generated scam or know someone who has, revealing the extent of the problem at hand.

From phishing and vishing to deepfakes and other social engineering attacks, AI is a powerful tool for fraudsters

There are countless forms of social engineering fraud — schemes in which cybercriminals try to bait people into sharing sensitive information that can be used to commit fraud. These methods have only been enhanced and expanded with advances in AI.

There’s the phishing attack family, which includes phishing emails, vishing (voice phishing) phone calls, and even smishing (SMS phishing) text messages. AI has made spear phishing, an extremely targeted version of phishing, far easier for a fraudster to employ.

Then there are deepfakes, which use AI to clone the appearance or voice of a real person in a video or audio recording to solicit confidential information — like your “boss” asking for your company credit card details. This tactic is especially favored by AI-powered scammers, with one 2024 report finding a 3,000% increase in deepfakes.

Fraudsters have used AI to impersonate bankers reaching out to customers, often using a fake data breach, malware discovery, or other cybersecurity issue as cover. There are even instances of the reverse: fraudsters impersonating customers and reaching out to bankers.

And AI helps scammers run these schemes in extreme volumes. In August 2025 alone, Americans received an average of 63 spam texts each — that’s 19.2 billion spam texts total in just one month. The sheer scale, along with the increasing believability, makes it difficult for both consumers and financial organizations to stop hackers before they gain access to sensitive financial information.

How financial service providers can thwart AI-aided social engineering fraud

Financial institutions and fintechs face an uphill battle when it comes to addressing social engineering scams, primarily because they can’t always adapt as quickly as cyber threats do. Financial service businesses operate within the law, but fraudsters have free rein. They’re unregulated, they don’t have to contend with slow legacy technology, and they have only one priority: perfecting their social engineering tactics to steal larger and larger amounts of money. And it’s working. Approximately one-fourth of Gen Z and Millennial consumers have lost $5,000 or more to fraud.

Meanwhile, banks, credit unions, and fintechs are highly regulated. They’re not only responsible for protecting customers’ money, delivering returns, and providing a great customer experience; they must also comply with extensive legal requirements to prevent financial crime. That gives bad actors a distinct advantage. They can use the dark web’s version of ChatGPT, “FraudGPT,” to steal personally identifiable information (PII), such as Social Security numbers, and commit fraud, while a financial institution has legal guardrails on the policies and tools it uses to fight back. That means financial institutions and fintechs must work harder to combat cyberthreats while remaining ethically and legally compliant when using AI tools.

Fortunately, most financial institutions and fintechs are filled with savvy people who are constantly evaluating new technology to stay ahead of new threats. For example, many financial institutions and fintech companies now use behavioral analytics and biometric data as part of their fraud checks.
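To make that concrete, here’s a minimal sketch in Python of how a behavioral check might flag an anomalous session. Everything here is hypothetical and for illustration only: the Session signals, the z-score threshold, and the step-up response are assumptions, not Alloy’s or any vendor’s actual implementation.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Session:
    """Hypothetical behavioral signals captured during one login session."""
    keystroke_interval_ms: float  # average time between keystrokes
    mouse_speed_px_s: float       # average pointer speed

def z_score(value: float, history: list[float]) -> float:
    """How many standard deviations `value` sits from this user's own baseline."""
    if len(history) < 2:
        return 0.0  # not enough history to judge
    spread = stdev(history)
    return 0.0 if spread == 0 else abs(value - mean(history)) / spread

def is_anomalous(session: Session, past: list[Session], threshold: float = 3.0) -> bool:
    """Flag the session if any signal deviates sharply from the user's history."""
    signals = [
        (session.keystroke_interval_ms, [s.keystroke_interval_ms for s in past]),
        (session.mouse_speed_px_s, [s.mouse_speed_px_s for s in past]),
    ]
    return any(z_score(value, history) > threshold for value, history in signals)

# Example: a user who normally types quickly suddenly shows slow, scripted input.
history = [Session(110, 400), Session(105, 420), Session(115, 390), Session(108, 410)]
print(is_anomalous(Session(320, 60), history))  # True -> trigger step-up verification
```

The key design choice is that each user is compared against their own baseline, so the check adapts to individual habits rather than a single global average.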

Leveraging AI is a powerful way for financial institutions and fintechs to protect their customers. Alloy and other identity risk companies are building models that analyze customers' onboarding signals and ongoing behaviors to predict the likelihood of a fraud attack in real time. With these tools, financial service providers can identify fraudsters who may previously have flown under the radar.
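As a rough illustration of the idea, here’s how a handful of onboarding and behavioral signals might be combined into a single fraud likelihood. The feature names, weights, and bias below are invented for the example; a real model would learn them from labeled historical data rather than hand-tuning.

```python
import math

# Hypothetical features with hand-picked weights; a production model would
# learn these from labeled onboarding and transaction data.
WEIGHTS = {
    "email_age_days": -0.002,      # older email address -> lower risk
    "device_seen_before": -1.5,    # known device -> lower risk
    "velocity_last_hour": 0.8,     # many rapid actions -> higher risk
    "mismatched_geolocation": 1.2, # login far from usual region -> higher risk
}
BIAS = -2.0

def fraud_likelihood(features: dict[str, float]) -> float:
    """Logistic combination of risk signals into a probability-like score."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

score = fraud_likelihood({
    "email_age_days": 3,           # brand-new email address
    "device_seen_before": 0,
    "velocity_last_hour": 4,
    "mismatched_geolocation": 1,
})
print(f"{score:.2f}")  # high score -> route to manual review or step-up auth
```

In practice a provider would act on the score in tiers: allow low scores, step up authentication at mid scores, and route high scores to manual review.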

Some banks are also incorporating AI directly into their fraud prevention programs. McKinsey reports working with several financial institutions on agentic AI-powered KYC/AML processes. Several British banks, including Lloyds Banking Group and Halifax, have implemented Mastercard’s consumer fraud risk model, which is trained on years of transaction data to help predict whether someone is trying to transfer money to an account associated with previous scams. These are just a few of the ways AI can strengthen fraud detection and prevention.
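The core idea behind destination-side checks like Mastercard’s can be sketched in a few lines: score an outbound transfer against what’s known about the receiving account before the money moves. This is a drastic simplification of a model trained on years of transaction data; the flagged-account set, hashing scheme, and amount threshold below are illustrative assumptions, not the actual system.

```python
import hashlib

# Illustrative only: account identifiers previously linked to confirmed scams,
# stored as hashes so raw account numbers never sit in the lookup table.
FLAGGED_ACCOUNTS = {
    hashlib.sha256(acct.encode()).hexdigest()
    for acct in ("GB29NWBK60161331926819",)  # standard example IBAN, not a real case
}

def assess_transfer(destination_account: str, amount: float) -> str:
    """Return a decision for an outbound transfer before it settles."""
    digest = hashlib.sha256(destination_account.encode()).hexdigest()
    if digest in FLAGGED_ACCOUNTS:
        return "HOLD: destination linked to prior scam reports"
    if amount >= 5_000:
        return "REVIEW: high-value transfer to unknown account"
    return "ALLOW"

print(assess_transfer("GB29NWBK60161331926819", 250.0))  # HOLD
print(assess_transfer("DE89370400440532013000", 9_000))  # REVIEW
```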

A world where AI is everywhere

While the hype around AI’s capabilities is justified, financial institutions, fintechs, and their customers should also stay vigilant about the risks. The FTC counted 2.6 million fraud reports in 2024, the same year US consumers lost a record $12.5 billion to fraud.

To truly prepare for the threat that AI presents in the coming years, banks should consider using scammers’ tools against them and investing more heavily in AI-powered fraud-fighting tools. They’ll be better positioned to get ahead of the fraudsters, building long-lasting trust and saving consumers potentially millions of dollars.

Take the next step with Alloy

Stay two steps ahead of scammers by understanding who your customers are — instead of just monitoring for suspicious transactions. Read about why your fraud model might be broken from Alloy’s CEO Tommy Nicholas.

Or, get in touch with our team to schedule a demo, and see how hundreds of leading financial organizations are using Alloy’s fraud protection platform to take control.
