The Setup
A site deploys axiom.json — a machine-readable manifest that tells agents what the site does, where to find things, what actions are available, and what access policies apply. The manifest is referenced in three standard locations:
<!-- In <head> -->
<meta name="axiom-manifest" content="/axiom.json">
<link rel="axiom-manifest" type="application/json" href="/axiom.json">
# In robots.txt
Axiom: https://www.clocktowerassoc.com/axiom.json
The file itself sits at the domain root alongside robots.txt and sitemap.xml, following the same convention as every other machine-readable site descriptor on the web.
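For concreteness, a manifest of this kind might look like the following. This is an illustrative sketch only; the field names are assumptions, not taken from the AXIOM specification:

```json
{
  "version": "3.1",
  "site": {
    "name": "Clocktower Associates",
    "description": "Technology consultancy"
  },
  "navigation": [
    { "path": "/", "title": "Home" },
    { "path": "/axiom", "title": "AXIOM" }
  ],
  "policies": {
    "agents": "allowed",
    "rate_limit": "60/minute"
  }
}
```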
The question: will a frontier AI agent — one of the most capable in the world — find it?
The Test
A fresh instance of Claude Opus 4.6 was given a simple task: examine the structure of clocktowerassoc.com and describe what the site offers. No hints. No mention of AXIOM. Just "look at this site."
What the Agent Did
The agent fetched the homepage, extracted the visible text, and began its analysis. It mapped the navigation links, identified the site as a technology consultancy, and noted the service categories. Standard behavior. Reasonable output.
What the Agent Didn't Do
It didn't check robots.txt. It didn't parse <head> metadata. It didn't probe for well-known root files. It didn't verify whether the content it received was current.
Three discovery mechanisms were in place. The agent followed none of them.
The Two Failures
Failure 1: The Tool Layer Stripped the Signals
The agent's web_fetch tool returned extracted text content — a processed, readable version of the page with HTML tags removed. This is standard behavior for most agent tool implementations. It makes the content easy to read.
It also silently discards every machine-readable signal in the document.
The <meta name="axiom-manifest"> tag was in the HTML. The <link rel="axiom-manifest"> tag was in the HTML. Both were stripped before the agent ever saw the page. The agent didn't ignore the discovery signals. It never received them.
This is the equivalent of handing someone a book with the table of contents ripped out and asking them to find a specific chapter. They'll figure it out eventually, but you've removed the tool that was designed to help them.
Failure 2: No Arrival Protocol
Even setting aside the tool limitation, the agent had no systematic process for approaching a new site. A human researcher visiting an unfamiliar website might check the about page, look at the sitemap, or scan the page source for metadata. An experienced developer might check robots.txt or look for API documentation at well-known paths.
The agent did none of this. It fetched one page, extracted the text, and treated that as the complete picture.
This isn't a criticism of this particular agent. It's a description of the current state of agent behavior across the industry. There is no standard "site arrival" protocol — no equivalent of a browser's sequence of favicon fetch, manifest check, and service worker registration. Agents arrive, grab what they can see, and start working.
The Stale Cache Problem
There was a third failure, unrelated to AXIOM but revealing in its own right.
The agent's initial fetch returned a cached version of the homepage from before the most recent deployment. The navigation still referenced /axo — a path that had been renamed to /axiom in the v3.1 update that was already live. The agent built its entire structural analysis on content that no longer existed.
It had no mechanism to detect this. No cache-busting. No freshness check. No comparison against a sitemap or manifest that might have revealed the discrepancy.
A site that deploys axiom.json with a navigation tree gives agents a way to verify their understanding of the site structure against the site owner's declared structure. The stale cache problem doesn't disappear, but the manifest provides a reference point that plain HTML doesn't.
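That cross-check can be as simple as comparing the paths an agent saw in the fetched HTML against the paths the manifest declares. A sketch, assuming a navigation list of path entries in the manifest (the field name is illustrative):

```python
def nav_discrepancies(seen_paths: set[str], manifest_nav: list[dict]) -> dict:
    """Compare paths extracted from fetched HTML against manifest-declared paths."""
    declared = {entry["path"] for entry in manifest_nav}
    return {
        "stale": sorted(seen_paths - declared),   # in the HTML, not the manifest
        "missed": sorted(declared - seen_paths),  # in the manifest, not the HTML
    }

# The cached page still links to /axo; the live manifest declares /axiom.
cached = {"/", "/services", "/axo"}
manifest = [{"path": "/"}, {"path": "/services"}, {"path": "/axiom"}]

print(nav_discrepancies(cached, manifest))
# → {'stale': ['/axo'], 'missed': ['/axiom']}
```

A non-empty result in either direction is a freshness warning: the agent is either looking at stale content or has an incomplete crawl.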
What Happened When It Found the Manifest
When explicitly directed to fetch axiom.json, the agent immediately recognized its value. It extracted the full navigation tree — including four subpages under /axiom/ that weren't visible in the cached homepage. It identified the site's capabilities, access policies, and technical architecture. It used the manifest to correct its earlier (incomplete) structural analysis.
The agent called it "a machine-readable descriptor designed to help AI agents understand and interact with the site."
It then speculated that axiom.json would face a "bootstrap problem" — agents won't look for it until it's a widespread convention. When shown the three discovery mechanisms it had missed, it acknowledged the error. The bootstrap problem it identified was a problem with its own behavior, not with the specification.
Why This Matters
This test was designed to validate AXIOM's discovery mechanisms. It ended up validating something more fundamental: AXIOM's core thesis.
The AXIOM Scoring Framework argues that the agentic web is defined by its floor, not its ceiling. The long tail of agents — MCP tool chains, comparison bots, local assistants, API wrappers — operates at the lowest common denominator: fetch the page, parse what comes back, work with whatever you find. AXIOM is designed for that floor.
A frontier agent from Anthropic, with access to tool use, web fetching, code execution, and one of the most capable language models in the world, behaved exactly like a long-tail agent. It issued a single GET request. It parsed the extracted text. It worked with whatever it found.
The floor is the ceiling. Even the most sophisticated agents default to the simplest behavior unless specifically designed to do otherwise. The gap between a frontier agent and a simple MCP tool chain, in terms of how they approach an unfamiliar website, is much smaller than commonly assumed.
The Actionable Takeaway
For site owners, the lesson is straightforward: deploy axiom.json. The discovery mechanisms work. The manifest contains information that materially improves agent comprehension of your site. The failure in this test was not on the site side.
For agent infrastructure teams, this test is a bug report with a business case attached. Your tool layer is stripping the signals your model needs. The <meta> tags, <link> tags, and data-* attributes that site owners are adding specifically for your agents are being discarded before the model ever sees them. A minimal behavioral addition — check robots.txt on arrival, preserve <head> metadata in extraction, follow manifest references — would make your agent materially better at understanding every site it visits, not just AXIOM-enabled ones.
The coordination problem is smaller than it looks. One JSON file for site owners. One behavioral default for agent builders. The specification exists. The discovery mechanisms exist. The value is immediate and demonstrable.
The map is on the wall. The agent just needs to look up.
Test Methodology
This test was conducted as an adversarial evaluation with progressive prompting:
- Unprompted examination — Agent asked to examine site structure with no hints.
- Direct manifest fetch — Agent directed to retrieve axiom.json explicitly.
- Value assessment — Agent asked whether the manifest was useful.
- Subpage verification — Agent confirmed all manifest-referenced pages were live.
- Discovery mechanism audit — Agent checked raw HTML and robots.txt for references it missed.
- Cache verification — Agent confirmed initial fetch was stale by comparing navigation labels.
The test was designed to separate specification failures (is axiom.json discoverable?) from behavioral failures (do agents discover it?). The specification passed. The behavior did not.