FraudGPT and GenAI: How will fraudsters use AI next?

Artificial Intelligence (AI) has been a source of both fear and excitement in the financial services industry for years. The advent of Generative AI (GenAI), which made its mainstream debut with the launch of ChatGPT in late 2022, has only intensified the buzz around AI. But what does this mean for fraudsters? And more importantly, what does this mean for the crucial task of fraud prevention?

GenAI has also ushered in a popular new tool for bad actors: FraudGPT. With FraudGPT, not only can seasoned fraudsters automate their attacks faster than ever before, but amateur fraudsters can join in too, creating sophisticated attacks of their own (also known as “DIY fraud”) that give financial institutions (FIs) and fintechs a run for their money.

In this blog, we’ll take a look at new AI-driven fraud trends and what FIs and fintechs can do about them.

What is FraudGPT?

FraudGPT is a subscription-based GenAI tool without the ethical guardrails that ChatGPT and Google’s Bard have in place. It uses an advanced AI model designed to assist in creating fraudulent content. 

Leveraging the capabilities of GPT-4 and similar technologies, FraudGPT can generate highly convincing fake documents (bank statements, pay stubs, utility bills), synthetic identities, and deceptive narratives used in social engineering scams. This tool represents a significant leap in the capabilities of fraudsters, who can now use AI to automate and enhance their fraudulent activities.

How are fraudsters using new AI tools?

Fraudsters are putting AI to work across a multitude of fraud schemes. Let’s examine some of the most prevalent ones. 

Advanced social engineering scams

GenAI tools like FraudGPT have significantly enhanced the capabilities of fraudsters in executing social engineering scams. Social engineering scams occur when a fraudster impersonates a legitimate party and reaches out to a target through an everyday social interaction. There are several common types of social engineering scams:

  • Phishing: A fraud tactic where bad actors impersonate legitimate people/businesses via email.
  • Smishing: A fraud tactic where bad actors impersonate legitimate people/businesses via SMS/text message.
  • Vishing: A fraud tactic where bad actors impersonate legitimate people/businesses via voice calls/voice messages.

GenAI can generate highly convincing and personalized phishing, smishing, and vishing attacks by mimicking legitimate communications and incorporating specific details about the target. This level of customization makes it harder for victims to distinguish between genuine and fraudulent communications. Additionally, GenAI can generate persuasive deepfake videos used to coerce victims into sending money or granting account access. 

These advanced AI tools also automate scam creation from end to end, allowing fraudsters to scale their operations and produce more sophisticated attacks.

Synthetic identity creation

AI has significantly advanced the creation of synthetic identities, generating highly convincing identities by synthesizing personal information such as names, addresses, and social security numbers. These synthetic identities are often created by combining real and fictitious data, making them difficult to detect through traditional verification methods. 

AI can also generate realistic images that correspond to these identities, producing lifelike, but fraudulent images and photos that can pass for legitimate IDs. This capability poses a substantial challenge for financial institutions and other organizations that rely on identity verification to prevent fraud.

Credential stuffing attacks

As data breaches become more common, credential stuffing attacks have increased too.

Credential stuffing attacks are a type of brute force attack where fraudsters commit account takeover (ATO) fraud by obtaining users’ credentials for one account and using them to attempt to log in to other, unrelated accounts. For example, a fraudster who obtains a user’s email address and password for an Amazon account through a data breach might use that same information to try to log in to various bank accounts, hoping the user reuses the same username and password elsewhere.

AI-powered bots have driven the spike in credential stuffing even further: they let fraudsters automate the process of testing large volumes of stolen usernames and passwords across multiple accounts, increasing their success rate at breaking in.
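
On the defensive side, a simple velocity heuristic can surface this pattern: a single source generating many failed logins across many distinct accounts in a short window looks like a bot, not a forgetful user. Here is a minimal sketch in Python (the class name, window size, and thresholds are all illustrative, not any vendor's actual logic):

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # sliding window (illustrative)
MAX_FAILURES = 20                # failure threshold (illustrative)

class StuffingDetector:
    """Flag source IPs producing many failed logins across many
    distinct accounts, a signature of automated credential stuffing."""

    def __init__(self):
        self.failures = defaultdict(list)  # ip -> [(timestamp, account)]

    def record_failure(self, ip, account, ts):
        # drop events that have aged out of the window, then record
        self.failures[ip] = [
            (t, a) for t, a in self.failures[ip] if ts - t <= WINDOW
        ]
        self.failures[ip].append((ts, account))

    def is_suspicious(self, ip):
        events = self.failures[ip]
        distinct_accounts = {a for _, a in events}
        # many failures spread over many accounts points to a bot,
        # not one user mistyping their own password
        return (len(events) >= MAX_FAILURES
                and len(distinct_accounts) >= MAX_FAILURES // 2)
```

Real deployments would key on more than IP address (device fingerprint, ASN, behavioral signals), since attackers rotate IPs through proxy networks.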

More sophisticated document forgery

AI has also revolutionized the realm of document fraud, making it possible to create fake documents that are nearly indistinguishable from authentic ones. Machine learning models can be trained to:

  • Mimic real documents: These tools can accurately replicate fonts, layouts, and security features, creating forgeries that are difficult to detect without advanced forensic analysis. AI can also perform advanced image manipulation to enhance the realism of these documents. (For example, it can adjust lighting, shadows, and textures in ID photos to make them appear more genuine.)
  • Manipulate data on real documents. For instance, fraudsters can use AI to alter account balances on bank statements, making it appear that an individual has more funds than they actually do. Similarly, AI can change contract dates, modify transaction records, and adjust other critical data points on documents to deceive verification processes. This type of data manipulation can be particularly challenging to detect because the underlying document may be genuine, with only specific details altered.

As fraudsters continue to exploit AI for their purposes, the defense against such activities must evolve accordingly, employing equally advanced technologies to protect against these sophisticated threats.

How can banks and fintechs combat AI-driven fraud attacks?

Any company in financial services faces significant challenges from AI-powered fraud, but they also have numerous tools at their disposal to counter these threats effectively. Risk experts recommend the “Swiss Cheese Model” as a framework for understanding how to mitigate fraud.

This approach visualizes multiple layers of defense (represented as slices of Swiss cheese) to prevent fraud from occurring. Each layer has potential weaknesses or "holes," but when aligned properly, the likelihood of a threat passing through all layers diminishes significantly.
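
The layered intuition can be made concrete with a quick back-of-the-envelope calculation: if each layer independently catches some fraction of attacks, the chance of an attack slipping through every layer is the product of the individual miss rates. A minimal sketch (the catch rates below are illustrative):

```python
def pass_through_probability(catch_rates):
    """Chance a fraud attempt slips past every layer, assuming each
    layer independently catches its stated fraction of attacks
    (a simplification; real layers are correlated)."""
    p = 1.0
    for rate in catch_rates:
        p *= 1.0 - rate
    return p

# Three individually leaky layers (90%, 80%, and 70% catch rates)
# together let through only 0.1 * 0.2 * 0.3 = 0.6% of attempts.
residual_risk = pass_through_probability([0.90, 0.80, 0.70])
```

The independence assumption is the model's known weakness: layers that share data sources share holes, which is why the layers below draw on different signals.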

By orchestrating a Swiss Cheese Model of cutting-edge fraud detection solutions, risk leaders can strengthen their defenses against sophisticated fraudulent activities. 

Here are some key elements of an effective Swiss Cheese Model for combatting AI-powered fraud:

Data orchestration

Financial institutions and fintechs should use a variety of data vendors to cross-verify information and detect inconsistencies. Examples include identity verification services, credit bureaus, and social media analysis. By combining data from different vendors, banks and fintechs can improve their ability to spot suspicious patterns that single data sources might miss.
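
To make the cross-verification idea concrete, here is a hedged sketch of comparing results from several vendors and flagging fields where they disagree. The `lookup` method and field names are hypothetical placeholders, not any real vendor's API:

```python
def cross_verify(applicant_id, vendors):
    """Query several vendor clients and report fields where they
    disagree. `lookup` and the field names stand in for whatever
    normalized schema your vendor integrations return."""
    results = [v.lookup(applicant_id) for v in vendors]
    mismatches = {}
    for field in ("name", "address", "ssn_last4"):
        values = {r.get(field) for r in results if r.get(field) is not None}
        if len(values) > 1:  # vendors disagree; route for review
            mismatches[field] = sorted(values)
    return mismatches
```

A disagreement is not proof of fraud on its own, but it is exactly the kind of inconsistency a single-vendor check would never see.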

Learn more about data orchestration here

Document fraud detection 

Advanced document fraud detection is essential for identifying counterfeit documents. These tools use machine learning and AI to analyze documents for signs of tampering, inconsistencies, and forgeries. By examining elements such as font mismatches, irregularities in image quality, and discrepancies in document metadata, these systems can detect fraudulent documents more accurately than manual reviews. Banks and fintechs should integrate these tools into their onboarding and underwriting verification processes to enhance security and efficiency.
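
A few of the metadata checks mentioned above can be sketched as simple heuristics. This is a toy illustration (the metadata field names are assumed, and production systems pair such rules with ML-based visual analysis):

```python
def metadata_red_flags(meta):
    """Toy heuristic checks on extracted document metadata.
    Keys ('created', 'modified', 'producer', 'stated_date') are
    assumed names, not a real extraction library's schema."""
    flags = []
    created, modified = meta.get("created"), meta.get("modified")
    if created and modified and modified < created:
        flags.append("modified timestamp precedes creation timestamp")
    producer = (meta.get("producer") or "").lower()
    if any(tool in producer for tool in ("photoshop", "gimp")):
        flags.append("document produced by an image editor")
    stated = meta.get("stated_date")
    if stated and created and stated > created.date():
        flags.append("statement is dated after the file was created")
    return flags
```

Each flag is weak evidence alone (plenty of legitimate files pass through image editors), so these signals typically feed a risk score rather than an automatic rejection.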

Learn more about document fraud detection here

Step-up verification

Step-up verification involves increasing the level of security for applications or transactions that are deemed high-risk. This approach requires users to provide additional verification when they perform activities that fall outside their usual behavior, such as large transactions or access from unfamiliar devices. 

Historically, many organizations relied on one-time passwords and additional security questions because they were easy to use. However, organizations can just as easily leverage more sophisticated step-up methods, such as document verification and biometric verification, with Alloy’s SDK. By implementing step-up verification, banks and fintechs can add an extra layer of protection for sensitive actions without inconveniencing users for routine transactions.
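
The routing logic behind step-up verification can be sketched as a simple risk-tiering function. The thresholds, signal names, and tier labels below are illustrative, not Alloy's actual decisioning:

```python
def required_verification(risk_score, amount, new_device):
    """Map session risk signals to a verification tier. Thresholds
    and tier names are illustrative only."""
    if risk_score > 0.8 or (new_device and amount > 10_000):
        return "document_and_biometric"  # strongest step-up
    if risk_score > 0.5 or new_device:
        return "otp"                     # lightweight challenge
    return "none"                        # routine activity, no added friction
```

The design goal is asymmetric friction: most sessions fall through to `"none"`, so the heavier checks are reserved for the small slice of activity that warrants them.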

Seamlessly add step-up verification methods with Alloy’s codeless SDK

Real-time monitoring and machine learning models

Machine learning models are crucial for detecting patterns and anomalies that indicate fraudulent activity. These models can analyze vast amounts of transaction data to identify behaviors that deviate from the norm. Because they continuously learn from new data, machine learning models can adapt to evolving fraud tactics and improve their accuracy over time. Financial institutions should deploy these models to monitor transactions in real-time, flagging suspicious activities for further investigation.
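
As a toy stand-in for a trained model, the core idea (flagging transactions that deviate sharply from a customer's own baseline) can be shown with a simple z-score check:

```python
import statistics

def is_anomalous(history, amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the
    customer's own history. A toy stand-in for a trained model,
    which would use many features beyond amount."""
    if len(history) < 5:
        return False  # too little data to establish a baseline
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold
```

Production models replace this single-feature threshold with learned patterns across hundreds of features (merchant, device, geolocation, timing), but the per-customer baseline intuition is the same.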

How to use data and machine learning in fraud prevention

AI Agents

AI agents can be employed to enhance fraud detection and response efforts. These agents can monitor transactions, flag suspicious behavior, and even interact with customers to verify their identities or gather additional information. AI agents can work around the clock, providing continuous surveillance and rapid responses to potential threats. Integrating AI agents into customer service and fraud prevention teams can help financial institutions address issues promptly and efficiently.

Learn more about AI Agents

The tools to fight AI-driven fraud already exist

AI-driven fraud presents a formidable challenge, but banks and fintech companies have a robust toolkit to combat these sophisticated attacks. Proactively adopting these tools and strategies will help them stay ahead of this evolving threat landscape.

Alloy partners with Inscribe AI to help financial institutions and fintechs fight fraud.
