The Evolution Blog

Managing Trust and Risk in the Era of Agentic Commerce

5 November 2025
By Ben Davey

This blog explores some of the challenges and recommendations for eCommerce merchants and marketplaces as they move from deploying blunt bot controls that block rogue automated traffic, to adaptive ones that can identify trusted versus risky agentic interactions using intent and context.

What is agentic commerce and how is it changing the trust relationship that brands have with their customers?

Agentic commerce describes shopping powered by AI-driven agents acting on our behalf. This represents a seismic shift for eCommerce merchants and marketplaces and for those responsible for protecting customers and brands.

A key choice for your digital estate will be when and how to welcome traffic, and how to reliably separate trusted agents from rogue ones. We must adapt to the new paradigm of customers using agents to make purchases. And this raises a number of questions relating to trust, risk and product innovation: all of which are critical for merchants looking to protect their brand and unlock new revenue streams for sustained growth.

In recent months there have been significant advancements in agent-to-site interaction:

  • AI-infused browsers such as Perplexity’s Comet and OpenAI’s Atlas behave like a remote control on your device and, for all intents and purposes, transact as if a human were using it: details like IP address, screen resolution and browser are all in lock-step with those of a human user.
  • Cloud-based agents, such as OpenAI’s “agent mode” in chatgpt.com or Browser-use, will appear as if they are coming from data center networks such as AWS.

From a fraud fighter’s perspective, this can present a number of challenges:

  • Agents will come from a variety of locations that may or may not previously have been associated with a given customer.
  • Browser automation suites and scripts can masquerade as legitimate agentic traffic.
  • Good agents or agent-infused browsers can be commandeered by bad actors, preloaded with compromised credentials or piggybacked with automation software (using the Chrome Developer Tools protocol, for example).
  • Cloud-based agentic services can be particularly problematic: unauthenticated, a single cluster of devices on a particular subnet may be associated with potentially thousands of accounts.

This creates an opportunity for bad actors to exploit your digital estate, especially if your defenses are geared towards accepting agentic traffic.

The challenge merchants and marketplaces now face, as customers start to use agents to browse and make purchases, is that the old risk model of blocking non-human (i.e. automated bot) traffic no longer works: you’d end up blocking customers using agents to make legitimate purchases. But how do you accept purchases from legitimate agents, and verify that each agent is doing something legitimate, without loading the experience with friction?

The key here is understanding intent. Whilst it might be normal for an agent-infused browser to search or look through product listings or content to find things of interest, it is certainly not normal for it to request thousands of results. That behavior is crawling, and it might be your competitors collecting business intelligence or stealing content. Likewise, if an agent is buying up abnormally high volumes of discounted or on-sale stock, it may be a rogue agent you want to block.

For signups and logins you’re faced with a different scenario. Often the first interaction you see with a device is the most important. With AI-infused browsers there’s an element of combined human-bot interaction: an LLM navigates to a login form, a user manually performs the login, and subsequent interactions are driven by the LLM and agent interface. Interactions like these are prime candidates for establishing strong device binding.
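To illustrate the device-binding idea, the sketch below derives an HMAC-based token tying a device fingerprint to an account at the moment of a human-verified login, so later agent-driven requests from that device can be recognized. The function names, in-memory store, and the notion of a single string "fingerprint" are all simplifying assumptions:

```python
import hashlib
import hmac
import secrets

# In production this would be a managed, persistent secret.
SERVER_SECRET = secrets.token_bytes(32)

def device_token(account_id: str, fingerprint: str) -> str:
    """Derive a binding token tying this device fingerprint to this account."""
    msg = f"{account_id}:{fingerprint}".encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()

# account_id -> set of known device tokens (stand-in for a real datastore)
_bindings: dict[str, set[str]] = {}

def bind_device(account_id: str, fingerprint: str) -> None:
    """Call after a human-verified login, e.g. once credentials and any step-up pass."""
    _bindings.setdefault(account_id, set()).add(device_token(account_id, fingerprint))

def is_known_device(account_id: str, fingerprint: str) -> bool:
    """Subsequent agent-driven requests from a bound device can be extended more trust."""
    return device_token(account_id, fingerprint) in _bindings.get(account_id, set())
```

The token is keyed to both the account and the device, so a fingerprint replayed against a different account does not match.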

On payments you’re faced with yet another dilemma. If an agent automatically buys something on a user’s behalf that they don’t want or that isn’t fit for purpose, the risk of chargebacks increases dramatically. Consequently, it may be beneficial to add friction or force a human user to make the final call on a purchase.

At Darwinium, we’re focused on adaptive decisioning for bots and automated traffic: multiple policies with varying thresholds at particular customer-defined parts of the journey that may:

  1. Permit - Conditionally allow agentic traffic (within the constraints of velocity associated with a device, accounts associated with a device, etc) during discovery and low-risk exploration
  2. Verify - Escalate to human confirmation where financial or identity risk is introduced
  3. Prevent - Block malicious, high-velocity, or unauthorized automation outright
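The three tiers above can be sketched as a single decision function. The journey steps, signal names, and thresholds here are hypothetical placeholders for customer-defined policy, not Darwinium's actual implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    PERMIT = "permit"
    VERIFY = "verify"
    PREVENT = "prevent"

@dataclass
class Interaction:
    step: str                # e.g. "browse", "checkout", "signup"
    is_agent: bool           # classified as agentic traffic
    device_velocity: int     # requests from this device in the recent window
    accounts_on_device: int  # distinct accounts seen on this device
    known_device: bool       # device previously bound to this account

# Hypothetical thresholds; real policies would vary per journey step.
MAX_VELOCITY = 100
MAX_ACCOUNTS_PER_DEVICE = 3

def decide(ix: Interaction) -> Action:
    """Map an interaction to Permit / Verify / Prevent."""
    # 3. Prevent: malicious, high-velocity, or unauthorized automation outright.
    if ix.device_velocity > MAX_VELOCITY or ix.accounts_on_device > MAX_ACCOUNTS_PER_DEVICE:
        return Action.PREVENT
    # 2. Verify: escalate to a human where financial or identity risk is introduced.
    if ix.is_agent and ix.step in {"checkout", "signup", "password_reset"} and not ix.known_device:
        return Action.VERIFY
    # 1. Permit: conditionally allow agents during discovery and low-risk exploration.
    return Action.PERMIT
```

Ordering matters: hard blocks are evaluated before step-ups, so a high-velocity device never reaches the verification path.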

Accurate agent classification is critical. Just because a browser advertises itself as Atlas or Comet doesn’t make it such. Accurate device fingerprinting is as important as ever, but it must be coupled with behavioral and timing insights to determine whether there is a human behind the screen at the time of interaction or whether it is fully automated.
We believe this approach provides a strong compromise between facilitating this new economy and providing protection where it is needed.

As agentic commerce evolves, AI browser vendors will likely shift sensitive interactions toward cloud-based services equipped with keychains or other Know Your Agent (KYA) handshake mechanisms. These services will verify the human behind the agent and establish cryptographic trust between their gateway and your site.

But even with this infrastructure in place, adaptive decisioning will remain a necessity: the living intelligence layer that determines how much trust to extend, when to require human confirmation, and when to intervene. Static cryptographic handshakes can prove identity, but they can’t interpret context, velocity, or intent.

Summary

In conclusion, the three key recommendations for eCommerce merchants and marketplaces when it comes to managing trust and risk in the era of agentic commerce are:

  1. Build an agentic strategy that is able to adapt to interactions based on behavior and risk tolerance.
  2. Implement tailored interventions that conditionally allow traffic that looks customer-authorized, inserting a step-up at the point of payment or when identity risk is introduced.
  3. Ensure your bot strategy continues to robustly block malicious traffic performing high-risk actions such as scraping or credential stuffing.

Darwinium is building toward this future today by integrating adaptive decisioning into commerce workflows so that KYA doesn’t just establish trust, but continually earns it. This is how we make agentic-commerce–friendly experiences both possible and safe.