Adversarial AI: The Next Big Threat to Business Security

Rebekah Moody

8 May 2025

Why Adversarial AI Attacks Are the Next Great Business Blind Spot

AI is democratizing cybercrime, lowering the barrier for launching sophisticated adversarial AI attacks. What once required deep technical expertise or fraud-as-a-service infrastructure is now accessible to anyone with a browser and an LLM—turning opportunists into capable attackers almost overnight.

No longer limited to basic scripts and social engineering, fraudsters are now leveraging large language models (LLMs) and AI coding assistants like Copilot to build intelligent, adaptive attacks at scale.

At the MRC Barcelona conference last week, Ben Davey, Darwinium's VP Products, and Jon Ferrari, head of fraud prevention at Apollo.io, presented some chilling real-world examples of what this looks like in action. During a live account takeover hack, they demonstrated the construction of an AI-powered phishing proxy capable of intercepting credentials, generating fake content, and launching automated fraud - all in real time.

So how can businesses prepare for this new wave of adversarial AI attacks? Let's take a look at some of the key takeaways from the presentation - and what businesses can do to better protect customer accounts from this rapidly accelerating threat.

The Threat: AI-Powered Phishing Proxies in Action

In the hack, Ben and Jon showed how easy it was to build a forwarding proxy on a marketplace-style website using developer tools like GitHub Copilot and an LLM agent. The proxy intercepted authentication tokens (cookies) from real users, wrote them to a JSON file, and used this data to generate fraudulent listings that appeared legitimate. Within the prompts, Ben even showed how you could ask the LLM to scan existing listings, aggregate an average cost for a particular item, and then write a fraudulent listing advertising the same product for 10% less. It was a small example of how effectively AI tools can be used to make content look more legitimate or attractive to a genuine user.

This setup allowed the attackers to:

  • Steal user credentials without detection
  • Automate content creation with human-like language
  • Embed browser extensions and use tools like Luna Proxy to mask their activity
  • Scale the attack cheaply and efficiently

The scariest part? These tools are completely accessible to low-skilled attackers. With the help of AI, even non-technical users can now automate tasks that once required deep technical expertise.

Why Traditional Defenses Are Falling Short

Legacy fraud prevention methods - such as static rules, blacklists, or basic rate limiting - are not equipped to detect this new class of attacks. AI gives attackers capabilities such as:

  • Human-like interaction patterns, which can often bypass CAPTCHAs and bot detection
  • Geolocation spoofing to appear local, or to come from a consistent or trusted location
  • Behavioral matching, which attempts to make automated activity look like a real user through variations in request rates, typing styles, and speeds

Ben and Jon brought to life the fact that defending against this new threat is no longer about spotting typos in phishing emails. It's about fighting adaptive, intelligent adversaries.

Rethinking Fraud Defense for the Agentic AI Era

To defend against adversarial AI attacks like these, businesses must fundamentally rethink how they separate human from non-human traffic, and how they distinguish risky from trusted interactions. Ben introduced the themes of dynamic, behavior-based, and intent-aware systems. However, this is no longer about simply separating human from bot, given that many sites have legitimate automated traffic using their services or buying their products. It is about understanding the intent of the traffic hitting your site: what it is trying to do, and why. Here are some of the key building blocks Ben covered:

Using Behavior to Understand Intent

  • Behavioral Biometrics and Beyond

Ben covered the importance of monitoring how users interact with your site - mouse movements, keystroke timing, navigation flow - and creating behavioral baselines. Even if AI can mimic language, it struggles to perfectly mirror the organic ways a genuine user interacts with their devices. A minimal sketch of the idea follows.
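To make that concrete, here is a minimal sketch (not Darwinium's implementation) that compares a session's keystroke cadence against a stored per-user baseline; the timing values and thresholds are illustrative assumptions:

```python
from statistics import mean, stdev

def keystroke_features(timestamps_ms):
    """Inter-key interval mean and jitter from raw keydown timestamps."""
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return mean(intervals), stdev(intervals)

def looks_like_owner(session_ts, base_mean, base_std, z_max=3.0, min_jitter_ms=10):
    """Flag sessions whose cadence deviates from the user's baseline,
    or is unnaturally regular (near-zero jitter is itself a bot signal)."""
    s_mean, s_std = keystroke_features(session_ts)
    z = abs(s_mean - base_mean) / max(base_std, 1e-6)
    return z <= z_max and s_std >= min_jitter_ms

# A script replaying keys at a metronomic 50 ms cadence fails the jitter
# check against a human baseline of roughly 180 ms with natural variation.
bot_keys = [0, 50, 100, 150, 200, 250]
print(looks_like_owner(bot_keys, base_mean=180, base_std=60))  # False
```

Real deployments baseline many more signals (pointer dynamics, scroll behavior, touch pressure), but even crude timing checks raise the cost of convincing mimicry.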

  • Journey Behaviors

However, behavioral analytics goes beyond behavioral biometrics alone. It also covers the way traffic navigates through your site. Is it following a sequence of steps consistent with a trusted user, or jumping too quickly to a particular page or field? Is it skipping steps in the typical journey that trusted users follow? Conversely, is the behavior consistent with patterns previously identified as risky, such as the rapid navigation to, and completion of, a new product listing? One way to score this is sketched below.
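As a rough illustration (not any specific vendor's method), the sketch below learns first-order page-transition probabilities from trusted sessions and scores new paths against them; the page names and probability floor are assumptions:

```python
from collections import Counter, defaultdict

def fit_transitions(trusted_paths):
    """Learn P(next page | current page) from trusted session paths."""
    counts = defaultdict(Counter)
    for path in trusted_paths:
        for a, b in zip(path, path[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def journey_score(path, probs, floor=1e-3):
    """Geometric-mean likelihood of a path; unseen transitions get a floor."""
    steps = list(zip(path, path[1:]))
    p = 1.0
    for a, b in steps:
        p *= probs.get(a, {}).get(b, floor)
    return p ** (1 / max(len(steps), 1))

trusted = [["home", "search", "item", "cart", "checkout"]] * 50
probs = fit_transitions(trusted)
print(journey_score(["home", "search", "item", "cart"], probs))  # ~1.0
print(journey_score(["home", "new_listing", "submit"], probs))   # near floor
```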

  • Anomaly Detection of User Behaviors

Leverage machine learning models trained on live interaction data. Instead of just checking credentials, check the context: Is this device consistent with past logins? Is the timing pattern human? Is the payment volume and frequency consistent with what has been seen on trusted interactions?
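To illustrate the idea, the hypothetical sketch below trains scikit-learn's IsolationForest on contextual features of trusted logins and flags an out-of-pattern attempt; all feature names and values are invented for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: hour_of_day, device_age_days, payment_amount, logins_past_hour
rng = np.random.default_rng(0)
trusted_logins = np.column_stack([
    rng.normal(14, 4, 1000),      # daytime activity
    rng.normal(400, 120, 1000),   # long-lived, familiar devices
    rng.normal(60, 25, 1000),     # typical order value
    rng.poisson(1, 1000),         # low login frequency
])

model = IsolationForest(contamination=0.01, random_state=0).fit(trusted_logins)

# A 3 a.m. login from a day-old device, a large payment, and a burst of
# attempts in the past hour sits far outside the trusted distribution.
suspect = np.array([[3, 1, 900, 30]])
print(model.predict(suspect))  # [-1] -> anomalous
```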

  • Intent Modelling

Distinguish between legitimate and fraudulent automation by looking beyond just non-human signals, to what the user is doing, and the full context around the transaction. For example, many digital services are now using automated bots to buy products on behalf of users - but so might a fraudster. Understanding why an action is being taken is just as important as what the user is doing in a specific moment.
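A toy version of intent modelling might fold such contextual signals into a single risk score. Everything below - the fields, the weights, and the threshold logic - is a hypothetical illustration, not a production model:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    is_automated: bool
    declared_agent: bool         # identifies itself (published UA, API key)
    account_age_days: int
    touches_credentials: bool    # hitting login/reset endpoints
    matches_owner_history: bool  # consistent with the account's past activity

def intent_risk(x: Interaction) -> float:
    """Score why the traffic is here, not just whether it is automated."""
    risk = 0.0
    if x.is_automated and not x.declared_agent:
        risk += 0.4              # covert automation is the red flag, not automation itself
    if x.touches_credentials:
        risk += 0.3              # probing auth rather than shopping
    if x.account_age_days < 7:
        risk += 0.2
    if not x.matches_owner_history:
        risk += 0.1
    return risk

shopper_bot = Interaction(True, True, 800, False, True)    # legit buying agent
stuffing_bot = Interaction(True, False, 1, True, False)    # hostile automation
print(intent_risk(shopper_bot), intent_risk(stuffing_bot))  # 0.0 1.0
```

Note that both examples are bots; the score separates them by context and intent, which is exactly the distinction a human-versus-bot check cannot make.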

Tune Your Responses to Hinder Attackers, Not Customers

  • Rate Limiting with Friction

Rather than outright blocking, slow down suspicious activity. This increases operational costs for attackers without frustrating legitimate users. Even minor delays can disrupt automation at scale.
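As one possible shape for this, the sketch below (using Flask, with a placeholder risk_score() standing in for the behavioral signals above) adds an escalating delay to risky requests rather than rejecting them outright:

```python
import time
from flask import Flask, request

app = Flask(__name__)

def risk_score(req) -> float:
    # Placeholder: in practice, fed by behavioral and contextual signals.
    return 0.9 if req.headers.get("X-Demo-Risk") == "high" else 0.1

@app.before_request
def friction():
    score = risk_score(request)
    if score > 0.5:
        # Each risky request costs the attacker wall-clock time, which
        # compounds across thousands of automated attempts while leaving
        # the occasional false positive barely noticeable to a human.
        time.sleep(min(score * 4, 3.0))

@app.route("/listings", methods=["POST"])
def create_listing():
    return {"status": "ok"}
```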

Build Agentic AI Resilience Testing into Your Defense Strategy

  • Red Team AI Simulations

Use LLMs to simulate adversarial behavior. By stress-testing your systems with agentic AI tools, you’ll understand where your defenses are weakest. Mapping risks within user journeys - such as during a new account origination - can be challenging. Where are adversaries slipping through the cracks of fraud defenses?

Creating an automated system that can propose an attack scenario at a particular point in a customer journey, and then use adversarial AI simulations to stress-test responses and defense options, is the holy grail of safeguarding the business from such attacks.
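A hedged sketch of the first half of that loop - asking an LLM to propose scenarios for a given journey step - might look like the following, assuming the OpenAI Python SDK; the prompt, model choice, and downstream staging harness are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def propose_scenarios(journey_step: str, n: int = 3) -> str:
    """Ask an LLM to enumerate abuse scenarios for one journey step."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"List {n} abuse scenarios an automated attacker might "
                f"attempt at this step of our customer journey: {journey_step}. "
                "For each, note the signals a defender could monitor."
            ),
        }],
    )
    return resp.choices[0].message.content

print(propose_scenarios("new account origination"))
# Each proposed scenario would then be scripted and replayed against a
# staging environment to see which defenses fire and which stay silent.
```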

Using AI to Defeat AI Threats

The same tools that drive innovation are also enabling a new generation of scalable fraud. As the Barcelona demo revealed, adversarial AI isn’t a futuristic threat - it’s already here, cheap, and effective. But businesses that embrace proactive, adaptive, and AI-informed defense strategies will be far better positioned to protect their platforms and their users.

The question is no longer if AI will be used against you - it’s how prepared are you when it is?
