
Navigating Agentic Commerce
AI Survey Reveals Agentic Commerce is Reshaping Fraud Risk. Fast
Explore the findings from Darwinium’s latest research on agentic commerce, AI-enabled fraud, deepfakes, and the growing gap between what teams can detect and what they can confidently act on. Join the webinar to hear the results unpacked live and receive the full report.
New Survey Report • 500 Fraud, Risk, & Security Leaders
97%
of organizations have seen AI-facilitated attacks increase in the past 12 months
93%
have already encountered deepfake fraud attempts, 45% multiple times
95%
have agentic AI as a top-5 priority on their 2026 security roadmap

The Largest Study of Its Kind on AI Fraud & Agentic Commerce
Darwinium surveyed 500 fraud, risk, and security leaders across the United States (70%) and United Kingdom (30%) in early 2025. The majority operate at VP level or above (64%), with 73% serving as executive owners accountable for fraud prevention outcomes.
All responses were anonymized and used for research purposes only. The findings represent a cross-section of industries on the front lines of the AI fraud challenge.
- 40% Fintech & Banking
- 30% Gaming & Gambling
- 20% eCommerce & Marketplace
- 11% Travel & Hospitality
✔ Surveyed 500 leaders across the US and UK
✔ Fraud, risk, and security decision-makers
✔ Full report shared with webinar attendees

Navigating Agentic Commerce
How AI-Driven Commerce Is Reshaping Fraud Risk.
What Fraud and Risk Leaders Need to Know.
Based on Darwinium's latest research survey of 500 fraud, risk, and security executives across the US and UK.
Report
Download the report
AI fraud is not emerging. It is operational reality.
The survey paints a sharp picture. AI-assisted fraud is already a dominant attack vector, while legitimate agentic traffic is rising across browsing, account actions, and purchases. Most organizations can tell that automation exists. Far fewer can determine whether that automation is acting with legitimate intent.
AI-assisted fraud is already mainstream
Three quarters of respondents estimate that more than 25% of their current fraud attempts are AI-assisted. Half place the figure between 26% and 50%, and another quarter believe it is already above 50%.
Deepfakes have moved from edge case to common threat
Video deepfakes, AI-generated documents and images, voice cloning, and AI-written impersonation messages are now showing up across onboarding, login, payments, disputes, and post-purchase interactions.
Attackers are getting sharper, faster, and harder to catch
Respondents reported more fraud-as-a-service, stronger evasion techniques, more convincing narratives, faster iteration, and persistent attacks spread across multiple steps in the journey.
The Scale of the Problem Is Larger Than Most Realize
One of the most consistent patterns in this survey is a gap between what organizations know and what they can act on. AI-enabled fraud is vast, growing rapidly, and already deeply embedded in the threat landscape.
50% of organizations estimate that between 26% and 50% of their current fraud attempts are AI-assisted. A further 25% believe the figure exceeds 50%. Only 5% consider AI involvement negligible. This is not a niche attack method; it is the dominant one.
75% estimate 26%+ of current fraud attempts are AI-assisted
55% saw attacks increase significantly or dramatically

"AI has lowered the barrier to entry for sophisticated fraud. The tools are commoditizing, the targeting is sharpening, and the attacks are learning to look more human. The window to detect and stop them is narrowing."
Deepfakes Are Widespread
93% Have Already Encountered Deepfake Fraud
Only 1% report no exposure and no expectation of future exposure. Deepfakes are appearing across every touchpoint, from onboarding through login, payments, returns, refunds, and disputes.
How Attackers Are Getting Sharper
45%
Fraud-as-a-service & automation kits improving
43%
Targeting precision (personalization, contextual scams)
42%
Better evasion (rotating identity signals, human-like behavior)
41%
More convincing narratives (chat, email & voice)
40%
Faster iteration, rapid testing of defenses
36%
Multi-step persistent attacks over days or weeks
Most Organizations Cannot Stop Fraud Across the Full Journey
Despite the scale and sophistication of the threat, the majority of organizations are defending with tools designed for traditional fraud attacks — and their coverage reflects it.
64% Checkpoint-Only Coverage
64% of organizations can only stop fraud at a few checkpoints (51%) or at just one main checkpoint (12%). Only 36% believe they have effective end-to-end coverage.
50% Fragmented Vendor Stack
50% of respondents rely on 5–6 separate vendors across the customer journey, and 16% use 7 or more. Each tool creates a handoff, and every handoff is a gap that AI-powered attacks will exploit.
45% Readiness Gap
Only 45% say they are fully prepared with tested response playbooks. Half describe themselves as moderately prepared. When incidents occur, organizations are improvising rather than executing a proven response.

Teams rely on multiple vendors and handoffs, creating blind spots between onboarding, account activity, checkout, and post-purchase flows.
What the results mean for fraud and risk leaders
Legacy fraud tooling behaves like a castle gate guarding two doors while the rest of the wall stands unwatched. AI-driven attacks move across the full customer journey. So do legitimate AI shopping agents. That turns fragmented point controls into soft seams attackers can slip through.
62% estimate false-positive costs at $1M+ annually. The cost of blocking good customers and good agentic traffic is nearing parity with letting bad actors through.
The New Risk Landscape for Fraud Leaders
As agentic commerce grows, organizations face escalating false-positive costs, missing intent signals, and unanswered questions about liability.
Agentic commerce is both a risk shift and a revenue opportunity
Legitimate AI agents are already comparing prices, browsing products, building carts, taking account actions, and making purchases on behalf of consumers. The organizations that can identify trusted intent can unlock this channel. The ones that cannot are stuck with a painful coin toss: block too much and lose revenue, or allow too much and absorb fraud risk.
48% allow by default
Nearly half of organizations allow agentic traffic and rely on monitoring after the fact, accepting risk they cannot fully quantify.
31% block by default
Others shut the door unless traffic is explicitly allowlisted, trading security ambiguity for missed revenue and customer friction.
20% manage case by case
A smaller segment applies rules by endpoint or action, but this often adds complexity without resolving the central issue of intent.

The business case is bigger than fraud loss alone
The survey outlines a financial picture that stretches beyond direct fraud. Add up direct losses, blocked revenue, and the erosion of future customer lifetime value, and the economic exposure compounds quickly.
Chargebacks, support costs, disputes, and other direct impacts continue to grow as AI-enabled attackers become more effective and more scalable.
Five Priorities for Fraud, Risk and eCommerce Leaders
The survey data converges on a clear strategic conclusion: the current era demands fraud infrastructure built for the agentic age, not retrofitted from the human-traffic era.
With 64% of organizations defending only a fraction of the customer journey, the single highest-leverage investment is shifting from point-in-time controls to continuous, end-to-end visibility. AI-powered attackers and legitimate AI agents alike operate across the entire lifecycle.
