
Cybersecurity Awareness Month is here again, and this year, there’s both good news and bad. The good news is that the average cost of a data breach has gone down for the first time in years, largely thanks to AI-powered early detection systems. The bad news is that it’s still $4.44 million.

While artificial intelligence is helping mitigate breach costs, it’s also empowering the bad guys, accelerating the never-ending cycle of higher walls and taller ladders. Specifically, generative AI and emerging agentic systems are enabling cybercriminals to perform more advanced attacks at scale, ramping up both the number and the complexity of the attacks that companies and consumers are facing.

To help you better understand the cybersecurity landscape as we move into 2026, let’s look at three of those growing trends: 

  • Smishing
  • Synthetic identities and deepfakes
  • Agentic cybercrime

The first uses the services provided by payment companies as cover for malicious activity. The latter two represent new threats that payments companies need to plan for today, since financial services remain a top target for cyberattacks.

 


As SMS Payments Become Mainstream, Smishing Threats Are Rising

Today’s consumers want payments to be convenient, contactless and, increasingly, on their mobile devices. SMS payment solutions like NMI TXT2PAY check all three boxes. But as more consumers adopt text payments, that adoption is opening up a new attack vector for bad actors using generative AI: smishing.

Smishing (SMS phishing) is a form of phishing in which attackers send fake payment requests to consumers’ phones. Because SMS messages have far higher open rates than emails (often cited at around 98%), smishing is often more effective than traditional phishing attacks. Attackers also exploit our familiarity with multi-factor authentication (MFA) and one-time codes to hijack digital wallets.

Here’s what an attack might look like:

Step 1: Cybercriminals send out millions of fake texts telling recipients they need to make a small but urgent payment for something plausible, like a toll road.

Step 2: When a victim clicks the link in the text, they land on a highly realistic, mobile-optimized business page where they’re prompted for their information and card details to complete the payment.

Step 3: Beyond stealing the initial payment, the attack then asks the victim to enter a one-time code to authenticate themselves. In reality, that code authorizes the attacker to add the victim’s card to a digital wallet on the attacker’s own device. Now, the attacker has ongoing access to the victim’s card in a way that bypasses future MFA.

Smishing has been on the rise since 2023, driven primarily by Chinese cybercrime groups. From 2024 through the start of 2025 alone, smishing attacks compromised 115 million payment cards in the United States, marking a massive ramp-up in scale. That acceleration will likely continue in the years ahead, as generative AI tools like ChatGPT make it possible to generate mass SMS content and realistic website copy in seconds, with none of the telltale grammar or spelling errors that once gave scams away.

While this is still a purely consumer-side attack, payments companies providing SMS payments and digital wallet integrations have a duty to educate both merchants and end customers on how these attacks work, what they look like and how to avoid falling for them in an era where mobile payments are becoming mainstream.
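To make that education concrete, the red flags described above (urgent payment language, shortened or suspicious links, and requests to relay a one-time code) can be expressed as simple heuristics. The sketch below is purely illustrative; the word lists, domains and function name are hypothetical, and real smishing detection combines sender reputation, URL intelligence and machine-learning scoring rather than string matching.

```python
import re

# Hypothetical red-flag lists for illustration only.
URGENCY_WORDS = {"urgent", "immediately", "final notice", "suspended", "toll"}
SHORTENER_DOMAINS = {"bit.ly", "tinyurl.com", "t.co"}

def smishing_signals(message: str) -> list[str]:
    """Return a list of smishing red flags found in an SMS message."""
    signals = []
    text = message.lower()
    # 1. Urgent-payment language designed to rush the victim.
    if any(word in text for word in URGENCY_WORDS):
        signals.append("urgent-payment language")
    # 2. Links hiding behind shorteners or odd-looking domains.
    for domain in re.findall(r"https?://([^/\s]+)", text):
        if domain in SHORTENER_DOMAINS:
            signals.append(f"shortened link ({domain})")
        elif re.search(r"\d", domain) or domain.count("-") >= 2:
            signals.append(f"suspicious domain ({domain})")
    # 3. Requests to relay a one-time code (the wallet-enrollment trick).
    if re.search(r"\b(one[- ]time|otp|verification) code\b", text):
        signals.append("asks to relay a one-time code")
    return signals

msg = ("URGENT: your toll balance is overdue. Pay now at "
       "https://bit.ly/x9 and reply with your one-time code.")
print(smishing_signals(msg))  # flags urgency, the shortened link and the code request
```

A message tripping several of these flags at once mirrors the three-step attack above: urgency to prompt the click, a fake payment page behind the link, and a one-time code request to enroll the card in the attacker’s wallet.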

Synthetic Identities and Deepfakes

In 2025, freely accessible generative AI is creating new opportunities for attackers to sneak past security measures. Traditionally, cybersecurity has been about building systems to detect bad actors and walls to keep them out. But today, in a world where infrastructure is stored in the cloud, employees often work from home, and cyberattackers are more creative than ever, vulnerabilities are emerging from inside company networks.

One way AI is fueling this shift is through the rise of synthetic identities. Cybercriminals are now using AI to create highly convincing fake identities, which can be used for everything from pig butchering scams to infiltrating companies by posing as legitimate employees.

Real World Case: Please Welcome Our New Team Members, North Korean Cyberattackers!

In 2024, the North Korean cybercrime group Famous Chollima was caught infiltrating companies by walking in the front door as official employees, hired using fake identities. CrowdStrike identified over 150 targeted companies and found that 50% of the fake job applications had resulted in successful data theft.

Famous Chollima used generative AI to carry out the attack at scale. Its applicants used AI-generated synthetic identities and fake LinkedIn profiles to create realistic employment histories and backstories. They also likely employed large language models (LLMs) like ChatGPT in real time to generate answers to interview questions, helping them pass face-to-face screenings.

In the past, this kind of operation would have been cost-prohibitive and labor-intensive. But in the age of LLMs, it’s both trivial for savvy attackers to carry out and highly scalable.

The Prominence of AI Deepfakes

AI is also enabling next-generation phishing attacks through the use of deepfakes. Deepfakes are highly realistic AI recreations of a real person’s voice or even video likeness. They allow attackers to pose as a trusted individual in order to gain access or directly steal data or money.

There have already been multiple high-profile deepfake attacks, including one in which an attacker cloned the voice of Ferrari CEO Benedetto Vigna. That attack was unsuccessful, but others have worked. In 2024, a UK engineering firm transferred $25 million to cyberattackers after an employee’s video call with a deepfaked senior manager. While big paydays are certainly a risk, the more insidious threat is deepfakes being used for more subtle attacks aimed at gaining access to internal networks.

The proliferation of internal threats is driving a shift toward zero-trust security architectures, which eliminate the assumption that internal credentials are safe in favor of tightly authenticating every access request, no matter who it comes from. For payments companies (which are high-value targets due to their access to card data), adopting zero-trust best practices will be a critical part of keeping merchants and consumers safe in the years ahead.
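The core of zero trust is that every access request is evaluated on its own merits (identity, device posture, resource sensitivity) rather than being waved through because it comes from "inside" the network. The sketch below illustrates that idea only; all names, roles and resources are hypothetical, and real deployments rely on an identity provider and a policy engine along the lines described in NIST SP 800-207.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    role: str
    mfa_verified: bool       # fresh MFA on this request, not a cached session
    device_compliant: bool   # device posture check passed
    resource: str

# Hypothetical per-resource role allowlists.
ALLOWED_ROLES = {
    "card-vault": {"payments-ops"},
    "reporting": {"payments-ops", "support"},
}

def authorize(req: AccessRequest) -> bool:
    """Never trust, always verify: no implicit allow for internal traffic."""
    if not req.mfa_verified or not req.device_compliant:
        return False
    return req.role in ALLOWED_ROLES.get(req.resource, set())

# A fully verified request from an allowed role is granted...
ok = authorize(AccessRequest("alice", "payments-ops", True, True, "card-vault"))
# ...while the same identity without fresh MFA is denied, even on the internal network.
denied = authorize(AccessRequest("alice", "payments-ops", False, True, "card-vault"))
print(ok, denied)
```

The key design choice is that there is no branch for "request came from the corporate LAN": a deepfaked insider with valid-looking credentials still has to clear every check on every request.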

Agentic Cybercrime Creates Attackers That Never Sleep

Today, most AI applications are LLM-based and require a human user to prompt an action before anything can happen. But, we’re slowly seeing the rollout of more advanced AI agents — systems that can, at least in theory, act proactively and autonomously, without a human user always in the loop. For cyberattackers, automation is nothing new, and large bot networks have long been a key tool. But AI agents mean that more complex, sophisticated tasks can now be automated, creating an army of attackers that can probe vulnerabilities 24/7.

In effect, agentic systems allow hackers to make previously dumb automations smart. Mastercard cites the example of a credential stuffing attack upgraded with smart targeting and self-refining password lists in place of a basic bot.

Despite widespread claims that 2025 would be the “year of the agent,” very few true AI agents currently exist, and of the ones that do, failure rates on basic office tasks exceed 70%. But for cybercrime purposes, basic agentic functions, even with astronomical failure rates, are still a force multiplier in a game that relies on pure volume anyway. That could mean significant increases in the number of cyberattacks that companies face in the years ahead.

Keep Payments Data at Arm’s Reach To Mitigate Cyberattack Damage

AI is helping companies clamp down on breaches faster, but it’s also making it faster, easier and more cost-effective for cybercriminals to launch complex attacks at scale. In the years to come, the big question will be which side ultimately benefits most from the proliferation of generative AI, next-gen agents and more traditional systems like machine-learning-driven fraud and attack detection.

In the age of AI threats, one of the best ways to mitigate exposure to cyberattacks is to limit your handling and storage of sensitive card data altogether. At NMI, we remain committed to maximizing security and minimizing compliance burden by providing modern security technologies and a payment acceptance platform that limit your risks.

Solutions like network tokenization and NMI Customer Vault offer all the convenience of recurring payments and one-click checkout while limiting the risks of storing or even touching card data yourself. Those tools, combined with our expert security team, full Payment Card Industry (PCI) Level 1 compliance and advanced fraud offerings like Kount, make NMI the most secure way to sell and serve payments to your merchants.

To find out more, reach out to a member of our team today.

Don’t just turn on payments, transform the way you do business

  • Generate New Revenue: By adding or expanding payment offerings to your solution, you can start earning higher monthly and transaction-based recurring revenue.
  • Offer the Power of Choice: Allow merchants to choose from 125+ shopping cart integrations and 200+ processor options to streamline their onboarding.
  • Seamless White Labeling: Make the platform an extension of your brand by adding your logo and colors and customizing your URL.

Talk to Our Team


  • 235K+ connected devices
  • 300+ EMV device certifications
  • $440+ billion annual payments volume
  • 5.8+ billion annual transactions