ASM · ACP · LEAN
Three specs for the agentic web. One file to deliver them.
Two questions every site owner needs to answer:
Will your site still work when there is no screen and no human?
Can you allow autonomous agents to transact while still blocking unwanted traffic?
robots.txt can't answer either question. It offers binary control — all or nothing. AI companies use the same user-agent for training crawlers and customer-facing agents. Block the crawler, block your customers' agents too.
One File. Two Blocks. Everything an Agent Needs.
Deploy /.well-known/agents.json and every agent on the internet can discover your site and call your APIs. No SDK. No local server. No context injection.
{
  "version": "1.0",
  "site": {                      // <-- ASM: site-level manifest
    "name": "...",
    "description": "...",
    "navigation": [ ... ],
    "access": { ... }
  },
  "services": {                  // <-- ACP: service-level protocol
    "name": "...",
    "tools": [ ... ],
    "auth": { ... }
  }
}
A website with no API uses the site block. A headless API uses the services block. A full product uses both.
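An agent's discovery step can be sketched in a few lines: fetch the well-known file, then branch on which blocks are present. The classification labels below are illustrative, not part of the spec.

```python
import json
import urllib.request

def classify_manifest(manifest: dict) -> str:
    """Decide which deployment shape a manifest advertises."""
    has_site = "site" in manifest          # ASM: site-level manifest
    has_services = "services" in manifest  # ACP: service-level protocol
    if has_site and has_services:
        return "full"      # full product: website plus API
    if has_services:
        return "headless"  # headless API only
    if has_site:
        return "site"      # website with no API
    return "none"

def discover(origin: str) -> str:
    """Fetch /.well-known/agents.json and classify it."""
    url = f"{origin}/.well-known/agents.json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return classify_manifest(json.load(resp))
```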
ASM
Agent Site Manifest
Site-level · The site block
ASM tells agents what your site is, what it offers, and how to behave. It replaces the guesswork agents currently do — fetching your homepage, parsing navigation, inferring capabilities — with a single structured declaration.
It also solves the traffic governance problem. ASM defines a three-tier model that goes beyond the binary allow/block of robots.txt:
Tier 1: Bulk content harvesters. Governed by robots.txt as before.
Tier 2: Agents acting on behalf of users. Allowed to browse, compare, transact.
Tier 3: Machine-to-machine integrations. Full API access with rate limits.
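A site's enforcement of this model might look like the following sketch. The `tier2_allowed`/`tier3_allowed` keys mirror the `agent_policy` block in the quick-start example; how a server identifies a caller's tier in the first place is an assumption here, not something this snippet solves.

```python
def allow_request(agent_policy: dict, tier: int) -> bool:
    """Gate a request under the three-tier model.

    Bulk harvesters (tier 1) are governed by robots.txt, so this
    policy never grants them. Tiers 2 and 3 are opt-in flags.
    """
    if tier == 1:
        return False  # defer to robots.txt, not agents.json
    return bool(agent_policy.get(f"tier{tier}_allowed", False))

policy = {"tier2_allowed": True, "tier3_allowed": False}
assert allow_request(policy, 2)      # user-delegated agents may transact
assert not allow_request(policy, 3)  # M2M integrations are refused
```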
Scoring Framework
ASM includes a measurement system that scores agent-readiness across six weighted dimensions. A site that fails on Content Survivability can't score well on anything downstream.
Does content exist without JavaScript?
Can agents parse the page layout from the DOM?
Can agents find and invoke your CTAs?
Is business data in semantic, parseable HTML?
Can agents explore via static links?
Does the text stream tell a coherent story?
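A rough self-check for the first question, whether content exists without JavaScript, can be sketched with the standard library: parse the HTML exactly as served, drop everything inside script and style tags, and see what text survives. This is a heuristic proxy for the scoring dimension, not the framework's actual scorer.

```python
from html.parser import HTMLParser

class StaticTextExtractor(HTMLParser):
    """Collect text present in the served HTML itself,
    skipping anything inside <script> or <style>."""
    def __init__(self):
        super().__init__()
        self._skip = 0
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def static_text(html: str) -> str:
    parser = StaticTextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# A client-rendered shell yields nothing; server-rendered HTML survives.
shell = "<html><body><div id='root'></div><script>render()</script></body></html>"
server = "<html><body><main><h1>Pricing</h1><p>$9/mo</p></main></body></html>"
```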
ACP
Agent Context Protocol
Service-level · The services block
ACP defines how services advertise their capabilities to AI agents. Instead of requiring agents to run local adapter servers, the service hosts a structured manifest. Agents fetch it at runtime and call the API directly over HTTPS.
- No local server process
- No SDK dependency
- No context injection at startup
- No resource overhead between tasks
The service bears the cost of describing itself, not the agent. One JSON file — every agent on the internet can consume it.
"services": {
"name": "Your API",
"base_url": "https://api.example.com/v1",
"auth": {
"type": "bearer",
"instructions": "API key as Bearer token."
},
"tools": [
{
"id": "do_thing",
"description": "Does the thing.",
"endpoint": "/thing",
"method": "POST",
"input": {
"type": "object",
"required": ["name"],
"properties": {
"name": { "type": "string" }
}
}
}
]
}

The Cost Inversion
The current model for agent-service integration requires agents to run a local server for every service they use. Each server injects tool definitions into the agent's context window, consumes local resources, and must be maintained by the consumer. ACP inverts this.
|  | MCP | ACP |
|---|---|---|
| Who runs the adapter? | The agent (consumer) | Nobody. It's a JSON file. |
| Tool definitions live... | Injected into context at startup | Fetched on demand from service |
| What runs locally? | A server process per service | Nothing |
| Who maintains it? | The consumer | The service provider |
| Scaling to 50 services? | Poorly | Same as 1 service |
ACP does not replace MCP for local tools (file system, databases, IDE integration), private integrations behind firewalls, or use cases requiring persistent bidirectional connections. For public services with REST APIs, ACP is simpler.
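Consuming a services block requires no adapter: resolve the tool by id, check the payload against its declared input schema, and prepare one direct HTTPS request. A minimal sketch, assuming the manifest shape shown above; `build_tool_request` is a hypothetical helper, and its return shape is illustrative.

```python
import json

def build_tool_request(services: dict, tool_id: str,
                       payload: dict, token: str) -> dict:
    """Resolve a tool from the manifest and prepare a direct call.
    No local adapter: everything needed is in the fetched JSON."""
    tool = next(t for t in services["tools"] if t["id"] == tool_id)
    # Minimal check against the declared input schema.
    for field in tool.get("input", {}).get("required", []):
        if field not in payload:
            raise ValueError(f"missing required field: {field}")
    return {
        "url": services["base_url"] + tool["endpoint"],
        "method": tool["method"],
        "headers": {
            "Content-Type": "application/json",
            # Per the manifest's auth block: API key as Bearer token.
            "Authorization": f"Bearer {token}",
        },
        "body": json.dumps(payload),
    }

# The manifest mirrors the services block above.
services = {
    "name": "Your API",
    "base_url": "https://api.example.com/v1",
    "auth": {"type": "bearer", "instructions": "API key as Bearer token."},
    "tools": [{
        "id": "do_thing",
        "description": "Does the thing.",
        "endpoint": "/thing",
        "method": "POST",
        "input": {"type": "object", "required": ["name"],
                  "properties": {"name": {"type": "string"}}},
    }],
}
request = build_tool_request(services, "do_thing", {"name": "widget"}, "sk-...")
```

Sending `request` is then a single `urllib.request.urlopen` call; nothing runs locally between tasks.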
LEAN
Layered Efficiency for Agentic Navigation
Design philosophy · Guides authoring of both ASM and ACP
Every token an agent carries costs money and displaces reasoning capacity. LEAN replaces “send everything, let the agent filter” with “send the minimum, let the agent expand.”
Five Principles
Default Minimal
Initial responses contain the smallest useful representation.
Depth on Demand
Full details remain one request away, never forced.
Reversible Depth
Agents can expand and contract their context.
Layer Independence
Detail at one layer doesn't force breadth at another.
State Consistency
Expanding or contracting never corrupts state elsewhere.
Four Layers
Tool Availability
What tools are loaded
Start with a minimal set, activate more at runtime.
Content Resolution
How much detail the agent sees
Orientation first, drill into full detail on demand.
Element Resolution
How the agent finds targets
Semantic search returns matches, not catalogs.
Response Resolution
What comes back after an action
Structural diffs, not full state re-sends.
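The first two principles and Layer 2 can be illustrated with a toy content store: the default response is a summary plus expansion handles, and one more request returns exactly one section. The `PAGES` data and the `get` signature are illustrative only, not part of either spec.

```python
# Toy content store: summaries are cheap, full sections are expensive.
PAGES = {
    "/pricing": {
        "summary": "Three plans: Free, Pro, Enterprise.",
        "sections": {
            "free": "Free plan: 1 user, community support.",
            "pro": "Pro plan: 10 users, priority email support.",
        },
    },
}

def get(path: str, expand=None) -> dict:
    """Layer 2 (Content Resolution): orientation first, detail on demand."""
    page = PAGES[path]
    if expand is None:
        # Default Minimal: a summary plus handles, never the full content.
        return {"summary": page["summary"],
                "expandable": sorted(page["sections"])}
    # Depth on Demand: one extra request returns a single section.
    return {"section": expand, "content": page["sections"][expand]}
```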
~48%
Tool definition overhead reduction (Layer 1 alone)
1.9M
Tokens saved across 100-page, 550-call session
5–10x
Fewer tokens vs traditional tool servers (all layers)
Quick Start: 5 Minutes to Level 1
Create /.well-known/agents.json with your site's basic information. That's it. You're agent-discoverable.
{
"version": "1.0",
"site": {
"name": "Your Site Name",
"description": "What your site does in one sentence.",
"primary_language": "en",
"contact": "https://yoursite.com/contact",
"actions": [
{
"id": "contact",
"description": "Send a message via contact form.",
"entry_point": "/contact",
"method": "form_submit"
}
],
"navigation": [
{ "name": "Home", "path": "/" },
{ "name": "About", "path": "/about" },
{ "name": "Services", "path": "/services" }
],
"access": {
"public_content": true
},
"agent_policy": {
"tier2_allowed": true,
"tier3_allowed": false,
"crawl_delay_seconds": 1,
"max_requests_per_minute": 60
}
}
}

1. Create the file
2. Fill in your details
3. Deploy to /.well-known/
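Before deploying, a quick sanity check can catch empty fields. A minimal sketch: the field names follow the quick-start example above, but the exact Level-1 requirements are an assumption here, not normative.

```python
def validate_level1(manifest: dict) -> list:
    """Return a list of problems; an empty list means Level-1 ready."""
    problems = []
    if manifest.get("version") != "1.0":
        problems.append("version must be '1.0'")
    site = manifest.get("site")
    if not isinstance(site, dict):
        return problems + ["missing 'site' block"]
    for key in ("name", "description"):
        if not site.get(key):
            problems.append(f"site.{key} is empty or missing")
    for item in site.get("navigation", []):
        if not str(item.get("path", "")).startswith("/"):
            problems.append(f"navigation path must start with '/': {item}")
    return problems
```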
Is Your Infrastructure Agent-Ready?
Most sites are invisible to the future of commerce. We audit yours against the ASM scoring framework and show you exactly what to fix.