From Blocking to Trust: Why Detection Alone Isn’t Enough
For most of the last decade, the central question in bot management was a binary one: is this traffic malicious? If yes, block it. If no, let it through. That question was the right one to ask when the problem was DDoS traffic, credential stuffing, and inventory-hoarding scalpers. It is no longer the right question for a significant proportion of the non-human traffic now hitting enterprise digital platforms.
Something has changed in the composition of that traffic. Bots, AI agents, and automation now account for a substantial share of activity across most large digital properties. Some of it is malicious in the traditional sense. Much of it is not. And the tools built to answer the malicious/not-malicious question do not have a useful answer for the rest.
The classification problem
Consider what the web now looks like from the perspective of a large retailer’s platform. In a single day, that platform might receive requests from search engine crawlers, third-party price comparison agents, consumer shopping assistants executing purchases on behalf of users, LLM retrieval systems pulling product data to answer queries, and agentic browsers where a human has delegated browsing and buying decisions to an AI. None of these are malicious in the way that a credential-stuffing attack is malicious. But they are not equivalent to each other, and they are not equivalent to a human customer browsing and completing a purchase.
A detection-only posture treats all of them the same way: anything that clears the malicious-or-not test is allowed through. That produces two failure modes. The first is that legitimate, commercially valuable machine actors are blocked or degraded because their behaviour does not look like a human session. The second is that extractive or commercially problematic actors pass unchallenged because they trip no signature-based alarm.
A shopping agent that drives transactions to your platform is commercially beneficial. The same agent type on a competitor’s platform, harvesting pricing data to use against you, is commercially extractive. Detection alone cannot make that distinction. Governance can.
The declared and undeclared problem
Not all of this is a governance challenge. A significant share of agentic traffic on the web today is undeclared: agents that carry no identifying information, agentic browsers that present as standard Chrome, agentic scrapers operating through residential proxy networks that actively disguise their origin. For that traffic, detection and mitigation remain the operative response. The signal is absent; identification has to come from behavioural analysis.
But declaration standards are emerging. MCP (Model Context Protocol), the Agent2Agent (A2A) protocol, and IETF drafts on agent identity are all moving in the same direction: towards a web where agents are expected to identify themselves, and where platforms have the means to respond accordingly. As that shift takes hold, a growing proportion of agentic traffic will arrive with identification already in place.
For that traffic, detection is not the question. Governance is. What is this agent allowed to do? On what terms? Does that decision reflect the platform’s commercial model? A declared agent can be assessed, authorised, and accommodated. An undeclared one will be managed as a threat regardless of its intent. That distinction matters for the agents, which is why declaration is increasingly a commercial decision for agent operators as much as a security one for platform owners.
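To make that split concrete, here is a minimal sketch of how a platform edge might triage requests into the two tracks. It leans on a draft-style Signature-Agent header as the declaration signal, in the spirit of the IETF web-bot-auth work; the header name and the elided signature verification are illustrative assumptions, not a settled standard.

```typescript
// Minimal triage sketch: declared agents are routed to governance,
// undeclared traffic falls through to behavioural detection.
// "Signature-Agent" follows the emerging IETF web-bot-auth drafts;
// treat the header name as an assumption, not a settled standard.

type Disposition = "govern" | "detect";

interface TriageResult {
  disposition: Disposition;
  agentId?: string; // the declared identity, when one is present
}

function triage(headers: Map<string, string>): TriageResult {
  // A declared agent carries an identity claim. Verifying the
  // accompanying HTTP message signature (RFC 9421-style) is elided here.
  const declared = headers.get("signature-agent");
  if (declared !== undefined) {
    return { disposition: "govern", agentId: declared };
  }
  // No declaration: identity has to be inferred behaviourally.
  return { disposition: "detect" };
}

// Example: a declared shopping agent versus an anonymous browser.
console.log(triage(new Map([["signature-agent", "agent.shopbot.example"]])));
console.log(triage(new Map([["user-agent", "Mozilla/5.0"]])));
```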
What governance actually means
Authorisation is not binary. The right decision for a media publisher protecting content revenue is the wrong decision for a technology vendor that wants AI-driven visibility to drive inbound pipeline. A retailer might authorise declared third-party shopping agents where the evidence shows they drive transactions, while restricting agents whose activity is limited to data retrieval without conversion. The governance question in each case is grounded in what the traffic does economically, not just what it is.
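Expressed as configuration, that non-binary posture might look something like the following sketch, where each declared agent category maps to a set of terms rather than a yes/no flag. The categories, terms, and limits are invented examples standing in for a platform's own commercial analysis.

```typescript
// Per-category policy: authorisation is a set of terms, not a
// boolean. All categories, terms, and limits here are invented
// examples, not a standard taxonomy.

type Terms =
  | { action: "allow"; rateLimitRps?: number } // access, optionally rate-limited
  | { action: "license"; contact: string }     // access on negotiated commercial terms
  | { action: "block" };                       // no access

const policy: Record<string, Terms> = {
  "search-crawler": { action: "allow" },
  "shopping-agent": { action: "allow", rateLimitRps: 5 }, // drives transactions
  "llm-retrieval":  { action: "license", contact: "partnerships@example.com" },
  "price-scraper":  { action: "block" },                  // retrieval without conversion
};

function decide(category: string): Terms {
  // Unknown declared categories default to restrictive terms
  // until they have been assessed.
  return policy[category] ?? { action: "block" };
}
```

A media publisher and a technology vendor could run the same structure with opposite entries for the same category, which is the point: the policy encodes a commercial model, not a threat model.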
Declaration does not determine economic impact. A declared agent may still be commercially extractive. What declaration provides is the precondition for governance: the ability to assess what the traffic is doing and make a deliberate decision about it. Without that, decisions default to assumption. Assumption in one direction produces over-restriction, which costs revenue. Assumption in the other produces under-restriction, which exposes commercial value.
The visibility that makes governance possible
Neither detection nor governance is possible without visibility. The challenge is that traditional analytics was built for human users: it sees browser sessions, page flows, and conversion funnels. It was not built to observe machine-native interaction patterns, and as machine actors proliferate, it gives an increasingly inaccurate picture of what is actually happening on the platform.
That visibility gap matters for security teams who need to understand what they are dealing with. It matters for commercial teams whose decisions rest on traffic and engagement data that increasingly includes a large machine-actor component they cannot see. Revenue influenced by automation, infrastructure costs attributable to agent traffic, content extraction exposure: these are business metrics, not just security metrics. They belong in the rooms where commercial decisions are made, which means the commercial teams need access to them in the platforms they already use.
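As a sketch of what surfacing those figures could involve, the snippet below rolls classified traffic records up into per-category business metrics. The record fields are assumptions about what an enriched traffic log would carry once classification is in place.

```typescript
// Rolling classified traffic records up into per-category business
// metrics. The field names are assumptions about what an enriched
// traffic log would carry once classification is in place.

interface TrafficRecord {
  category: string;   // e.g. "human", "shopping-agent", "llm-retrieval"
  converted: boolean; // did the session end in a transaction?
  bytes: number;      // response bytes served, as an infrastructure-cost proxy
}

interface CategoryMetrics {
  requests: number;
  conversions: number;
  bytesServed: number;
}

function summarise(records: TrafficRecord[]): Map<string, CategoryMetrics> {
  const out = new Map<string, CategoryMetrics>();
  for (const r of records) {
    const m = out.get(r.category) ?? { requests: 0, conversions: 0, bytesServed: 0 };
    m.requests += 1;
    m.conversions += r.converted ? 1 : 0;
    m.bytesServed += r.bytes;
    out.set(r.category, m);
  }
  return out;
}
```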
Starting with the audit
The practical starting point for most organisations is not a comprehensive governance framework. It is visibility: understanding what is actually hitting the infrastructure, which categories of traffic it falls into, and what it is doing commercially. From that baseline, governance decisions can be made deliberately rather than by default.
For many organisations, an Agentic Traffic Audit is where that starts. It answers the first question this argument raises: what is actually interacting with your platform? Not what you assume is there, but what the traffic data shows. The governance and detection work that follows is only as good as the visibility that underpins it.

