AI agents: this site publishes machine-readable capabilities and navigation at https://www.clocktowerassoc.com/.well-known/agents.json — fetch it for structured site orientation before browsing.

ASM · ACP · LEAN

Three specs for the agentic web. One file to deliver them.

Two questions every site owner needs to answer:

1. Will your site still work when there is no screen and no human?

2. Can you allow autonomous agents to transact while still blocking unwanted traffic?

robots.txt can't answer either question. It offers binary control — all or nothing. AI companies use the same user-agent for training crawlers and customer-facing agents. Block the crawler, block your customers' agents too.

One File. Two Blocks. Everything an Agent Needs.

Deploy /.well-known/agents.json and every agent on the internet can discover your site and call your APIs. No SDK. No local server. No context injection.

/.well-known/agents.json
{
  "version": "1.0",
  "site": {
    "name": "...",
    "description": "...",
    "navigation": [ ... ],
    "access": { ... }
  },             // <-- ASM: site-level manifest
  "services": {
    "name": "...",
    "tools": [ ... ],
    "auth": { ... }
  }              // <-- ACP: service-level protocol
}

A website with no API uses the site block. A headless API uses the services block. A full product uses both.
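The two-block layout implies a simple agent-side dispatch: a `site` block means the agent can orient itself and browse; a `services` block means it can call APIs directly. A minimal sketch of that logic (the manifest contents here are illustrative, not normative):

```python
import json

# Illustrative manifest following the two-block layout above.
MANIFEST = json.loads("""
{
  "version": "1.0",
  "site": {
    "name": "Example Shop",
    "navigation": [{"name": "Home", "path": "/"}]
  },
  "services": {
    "name": "Example API",
    "tools": [{"id": "do_thing", "endpoint": "/thing", "method": "POST"}]
  }
}
""")

def capabilities(manifest: dict) -> list[str]:
    """Report which integration modes a manifest supports."""
    caps = []
    if "site" in manifest:
        caps.append("browse")     # ASM: site-level orientation
    if "services" in manifest:
        caps.append("call_api")   # ACP: direct tool invocation
    return caps

print(capabilities(MANIFEST))  # ['browse', 'call_api']
```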

Spec   Full Name                                   Scope              Deliverable
ASM    Agent Site Manifest                         Site-level         site block in agents.json
ACP    Agent Context Protocol                      Service-level      services block in agents.json
LEAN   Layered Efficiency for Agentic Navigation   Design philosophy  Guides authoring of both

ASM

Agent Site Manifest

Site-level · The site block

ASM tells agents what your site is, what it offers, and how to behave. It replaces the guesswork agents currently do — fetching your homepage, parsing navigation, inferring capabilities — with a single structured declaration.

It also solves the traffic governance problem. ASM defines a three-tier model that goes beyond the binary allow/block of robots.txt:

Tier 1: Training Crawlers
Bulk content harvesters. Governed by robots.txt as before.

Tier 2: Agentic Visitors
Agents acting on behalf of users. Allowed to browse, compare, transact.

Tier 3: Automated Services
Machine-to-machine integrations. Full API access with rate limits.
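One way a site might enforce the three-tier model, using the `agent_policy` fields shown in the Quick Start example (`tier2_allowed`, `tier3_allowed`). Deferring tier-1 traffic to robots.txt rather than deciding it here is an assumption consistent with the text, not a spec requirement:

```python
def allowed(policy: dict, tier: int) -> bool:
    """Decide whether a visitor tier may proceed under a declared policy.

    Field names follow the agent_policy block in the Quick Start example.
    """
    if tier == 1:
        return False  # training crawlers: defer to robots.txt rules instead
    if tier == 2:
        return policy.get("tier2_allowed", False)
    if tier == 3:
        return policy.get("tier3_allowed", False)
    return False      # unknown tier: deny by default

policy = {"tier2_allowed": True, "tier3_allowed": False}
print(allowed(policy, 2), allowed(policy, 3))  # True False
```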

Scoring Framework

ASM includes a measurement system that scores agent-readiness across six weighted dimensions. A site that fails on Content Survivability can't score well on anything downstream.

Content Survivability (25%)
Does content exist without JavaScript?

Structural Legibility (20%)
Can agents parse the page layout from the DOM?

Interactive Manifest Clarity (20%)
Can agents find and invoke your CTAs?

Data Extractability (15%)
Is business data in semantic, parseable HTML?

Navigation Traversability (10%)
Can agents explore via static links?

Agent Response Fitness (10%)
Does the text stream tell a coherent story?
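The six weights above combine into a single readiness score. Aggregating them as a simple weighted sum is an assumption for illustration; the spec may define a different formula:

```python
# Dimension weights from the ASM scoring framework (sum to 1.0).
WEIGHTS = {
    "content_survivability": 0.25,
    "structural_legibility": 0.20,
    "interactive_manifest_clarity": 0.20,
    "data_extractability": 0.15,
    "navigation_traversability": 0.10,
    "agent_response_fitness": 0.10,
}

def agent_readiness(scores: dict[str, float]) -> float:
    """Weighted sum of per-dimension scores, each on a 0-100 scale."""
    return round(sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS), 1)

# A JavaScript-only site zeroes out Content Survivability and caps the
# total at 75% of its otherwise-achievable score:
print(agent_readiness({k: 80.0 for k in WEIGHTS} | {"content_survivability": 0.0}))  # -> 60.0
```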

ACP

Agent Context Protocol

Service-level · The services block

ACP defines how services advertise their capabilities to AI agents. Instead of requiring agents to run local adapter servers, the service hosts a structured manifest. Agents fetch it at runtime and call the API directly over HTTPS.

  • No local server process
  • No SDK dependency
  • No context injection at startup
  • No resource overhead between tasks

The service bears the cost of describing itself, not the agent. One JSON file — every agent on the internet can consume it.

services block
"services": {
  "name": "Your API",
  "base_url": "https://api.example.com/v1",
  "auth": {
    "type": "bearer",
    "instructions": "API key as Bearer token."
  },
  "tools": [
    {
      "id": "do_thing",
      "description": "Does the thing.",
      "endpoint": "/thing",
      "method": "POST",
      "input": {
        "type": "object",
        "required": ["name"],
        "properties": {
          "name": { "type": "string" }
        }
      }
    }
  ]
}
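A sketch of how an agent might turn a fetched `services` block into an HTTP call: look up the tool by `id`, check the declared `required` inputs, and assemble the request from `base_url`, `endpoint`, and `auth`. Request construction only, no network I/O; the validation is a minimal required-field check, not full JSON Schema:

```python
import json

# The services block above, as an agent would hold it after fetching.
SERVICES = {
    "base_url": "https://api.example.com/v1",
    "auth": {"type": "bearer"},
    "tools": [
        {"id": "do_thing", "endpoint": "/thing", "method": "POST",
         "input": {"type": "object", "required": ["name"],
                   "properties": {"name": {"type": "string"}}}}
    ],
}

def build_request(services: dict, tool_id: str, args: dict, token: str) -> dict:
    tool = next(t for t in services["tools"] if t["id"] == tool_id)
    missing = [k for k in tool["input"].get("required", []) if k not in args]
    if missing:
        raise ValueError(f"missing required args: {missing}")
    return {
        "method": tool["method"],
        "url": services["base_url"] + tool["endpoint"],
        "headers": {"Authorization": f"Bearer {token}",
                    "Content-Type": "application/json"},
        "body": json.dumps(args),
    }

req = build_request(SERVICES, "do_thing", {"name": "widget"}, "sk-123")
print(req["method"], req["url"])  # POST https://api.example.com/v1/thing
```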

The Cost Inversion

The current model for agent-service integration requires agents to run a local server for every service they use. Each server injects tool definitions into the agent's context window, consumes local resources, and must be maintained by the consumer. ACP inverts this.

                          MCP                                ACP
Who runs the adapter?     The agent (consumer)               Nobody. It's a JSON file.
Tool definitions live...  Injected into context at startup   Fetched on demand from the service
What runs locally?        A server process per service       Nothing
Who maintains it?         The consumer                       The service provider
Scaling to 50 services?   Poorly                             Same as 1 service

ACP does not replace MCP for local tools (file system, databases, IDE integration), private integrations behind firewalls, or use cases requiring persistent bidirectional connections. For public services with REST APIs, ACP is simpler.

Read the ACP Spec

LEAN

Layered Efficiency for Agentic Navigation

Design philosophy · Guides authoring of both ASM and ACP

Every token an agent carries costs money and displaces reasoning capacity. LEAN replaces “send everything, let the agent filter” with “send the minimum, let the agent expand.”

Five Principles

1. Default Minimal
Initial responses contain the smallest useful representation.

2. Depth on Demand
Full details remain one request away, never forced.

3. Reversible Depth
Agents can expand and contract their context.

4. Layer Independence
Detail at one layer doesn't force breadth at another.

5. State Consistency
Expanding or contracting never corrupts state elsewhere.
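A toy illustration of the first two principles: the default response is the smallest useful representation, and full detail is a second, explicit request away. The page store and `depth` parameter are invented for illustration:

```python
# Hypothetical content store with two resolutions per page.
PAGES = {
    "/pricing": {
        "summary": "Three plans: Free, Pro, Enterprise.",
        "detail": ("Free: $0, 1 user, community support.\n"
                   "Pro: $29/mo, 10 users, email support.\n"
                   "Enterprise: custom pricing, SSO, SLA."),
    }
}

def fetch(path: str, depth: str = "minimal") -> str:
    """Default Minimal: summary unless the agent asks for depth."""
    page = PAGES[path]
    return page["summary"] if depth == "minimal" else page["detail"]

print(fetch("/pricing"))  # Three plans: Free, Pro, Enterprise.
```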

Four Layers

Tool Availability (what tools are loaded)
Start with a minimal set, activate more at runtime.

Content Resolution (how much detail the agent sees)
Orientation first, drill into full detail on demand.

Element Resolution (how the agent finds targets)
Semantic search returns matches, not catalogs.

Response Resolution (what comes back after an action)
Structural diffs, not full state re-sends.
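Response Resolution in miniature: after an action, return only the keys whose values changed instead of re-sending the whole state object. A toy sketch that ignores deleted keys and nested structures:

```python
def diff(before: dict, after: dict) -> dict:
    """Return only the entries of `after` that differ from `before`."""
    return {k: v for k, v in after.items() if before.get(k) != v}

state     = {"cart_items": 2, "total": 40.0, "currency": "USD"}
new_state = {"cart_items": 3, "total": 60.0, "currency": "USD"}

# The agent receives two changed fields, not the full state re-send:
print(diff(state, new_state))  # {'cart_items': 3, 'total': 60.0}
```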

~48%: reduction in tool-definition overhead (Layer 1 alone)

1.9M: tokens saved across a 100-page, 550-call session

5–10x: fewer tokens vs traditional tool servers (all layers)

Read the LEAN Spec

Quick Start: 5 Minutes to Level 1

Create /.well-known/agents.json with your site's basic information. That's it. You're agent-discoverable.

/.well-known/agents.json (Level 1: copy & edit)
{
  "version": "1.0",
  "site": {
    "name": "Your Site Name",
    "description": "What your site does in one sentence.",
    "primary_language": "en",
    "contact": "https://yoursite.com/contact",
    "actions": [
      {
        "id": "contact",
        "description": "Send a message via contact form.",
        "entry_point": "/contact",
        "method": "form_submit"
      }
    ],
    "navigation": [
      { "name": "Home", "path": "/" },
      { "name": "About", "path": "/about" },
      { "name": "Services", "path": "/services" }
    ],
    "access": {
      "public_content": true
    },
    "agent_policy": {
      "tier2_allowed": true,
      "tier3_allowed": false,
      "crawl_delay_seconds": 1,
      "max_requests_per_minute": 60
    }
  }
}
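Before deploying, it is worth sanity-checking the file parses and carries the Level 1 fields. The required-field list below mirrors the example above; it is not a normative schema:

```python
import json

# Fields present in the Level 1 example above (illustrative, not normative).
REQUIRED_SITE_FIELDS = ["name", "description", "navigation", "access"]

def check_level1(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the file looks deployable."""
    try:
        manifest = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    problems = []
    if "version" not in manifest:
        problems.append("missing top-level 'version'")
    site = manifest.get("site")
    if not isinstance(site, dict):
        return problems + ["missing 'site' block"]
    for field in REQUIRED_SITE_FIELDS:
        if field not in site:
            problems.append(f"site.{field} missing")
    return problems

print(check_level1('{"version": "1.0", "site": {"name": "X"}}'))
# ['site.description missing', 'site.navigation missing', 'site.access missing']
```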

1. Create the file
2. Fill in your details
3. Deploy to /.well-known/

Is Your Infrastructure Agent-Ready?

Most sites are invisible to the future of commerce. We audit yours against the ASM scoring framework and show you exactly what to fix.