Introducing AHOP: Agent × Human Onboarding Protocol

I build memory infrastructure for AI agents: an API product, the kind that needs documentation. Quick Start guides, API references, integration tutorials. I wrote them all carefully so a developer could follow along step by step.

Then one day I stopped and asked: who is actually reading these?

The shift nobody talks about

Watch how developers integrate a new service today.

  1. Sign up and pay.
  2. Copy the API key.
  3. Paste it into Claude Code and say, "Integrate this."

That is it. The human never opens the docs page. The agent reads the docs. The agent writes the integration code. The agent debugs the errors.

The real consumer of your documentation is no longer a person. It is an LLM. But documentation is still built for humans: HTML pages with navigation bars, sidebars, marketing copy, and JavaScript rendering. To an agent, all of that is noise.

Some people noticed this. Jeremy Howard proposed llms.txt, a markdown file at your website root that tells LLMs what your service is and where the important docs live. Think of it as robots.txt, but for language models rather than crawlers.

That is smart and useful. But once I started implementing it for my own product, I found a gap.

The missing layer

Here is the current agent infrastructure stack:

  - Discovery: llms.txt
  - Schema: OpenAPI
  - Tools: MCP
  - Memory: AMCP
  - Collaboration: A2A

Every layer is well-defined. But there is a moment none of them covers.

The human just finished checkout. They have an API key, a workspace ID, and a base URL. They want to hand all of that to their coding agent and say, "Go build." But what exactly do they hand over?

llms.txt tells the agent what the service is. It does not tell the agent what to do with these specific credentials. Which endpoint should it call first? What is the right anchor strategy? What do the error codes mean? How should memory be structured for this use case?

Without this guidance, the agent has to read the entire documentation or guess. If it guesses wrong, the human intervenes again. The whole point of the handoff is lost.

onboard.txt

The answer is simple.

AHOP, Agent × Human Onboarding Protocol, standardizes the handoff moment. When a human finishes checkout, the service provides an onboard.txt, a structured prompt they copy-paste into their coding agent. The agent reads it and starts building.

One line to position it:

llms.txt tells agents what your service is. onboard.txt tells agents how to start using it.

Here is what it looks like.

Integrate MaaS memory into a character chat application.

Docs: https://maas.nunchiai.com/llms.txt
Full docs: https://maas.nunchiai.com/llms-full.txt
Guide: https://maas.nunchiai.com/docs/guides/character-chat.md
API key: mk_maas_4f7c1d2a9b8e...
Base URL: https://maas.nunchiai.com/v1
Workspace ID: acme-studio-prod
Service source: enterprise-tutor

Integration pattern:
- Create one anchor per user-character pair: POST /v1/anchors
- Before each response: POST /v1/memory/recall
- After each response: POST /v1/atoms to store meaningful memory
- Only store meaningful memory — not trivial chatter
- Send feedback via POST /v1/feedback when recalled memory influenced the answer

Common errors:
- 401 Unauthorized → invalid or missing API key
- 402 Payment Required → billing issue
- 403 Forbidden → access policy denied
- 404 Not Found → anchor does not exist; create it first
- 429 Too Many Requests → rate limited; back off and retry

The human copies this from the success page, pastes it into Claude Code, and the agent takes it from there.
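To make the handoff concrete, here is a minimal sketch of what an agent might write after reading that onboard.txt. It follows the integration pattern and error guide above; the endpoint paths come from the example, but the payload field names (user_id, character_id, content), the meaningfulness heuristic, and the transport-injection style are illustrative assumptions, not part of the spec.

```python
import time

API_BASE = "https://maas.nunchiai.com/v1"  # base URL from the onboard.txt example

def with_backoff(call, retries=3, base_delay=1.0):
    """Retry on 429 Too Many Requests with exponential backoff, per the error guide."""
    status, body = call()
    for attempt in range(retries):
        if status != 429:
            break
        time.sleep(base_delay * (2 ** attempt))
        status, body = call()
    return status, body

def ensure_anchor(post, user_id, character_id):
    """One anchor per user-character pair (payload fields are illustrative)."""
    status, body = post(f"{API_BASE}/anchors",
                        {"user_id": user_id, "character_id": character_id})
    return body.get("anchor_id")

def chat_turn(post, anchor_id, user_message, generate_reply):
    """One chat turn: recall before responding, store after."""
    # Before each response: recall relevant memory for this anchor.
    _, recalled = post(f"{API_BASE}/memory/recall",
                       {"anchor_id": anchor_id, "query": user_message})
    reply = generate_reply(user_message, recalled)
    # After each response: store only meaningful memory, not trivial chatter.
    if is_meaningful(user_message):
        post(f"{API_BASE}/atoms",
             {"anchor_id": anchor_id, "content": user_message})
    return reply

def is_meaningful(text):
    # Placeholder heuristic; a real integration would use better signals.
    return len(text.split()) > 3
```

The `post` callable is injected so the same loop works with any HTTP client; the point is the shape of the loop, not the transport.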

Why not JSON

This is the question everyone asks. If you are proposing a standard, why plain text instead of JSON or YAML?

Because the consumer of this file is not a JSON parser. It is an LLM.

Traditional standards use JSON because programs parse them into structured data. robots.txt is parsed by crawlers. openapi.json is parsed by SDK generators. package.json is parsed by npm. The consumer is deterministic code, so a structured format makes sense.

But onboard.txt is consumed by an LLM. An LLM does not parse text the way a program does. It reads it. And when an LLM needs to follow instructions, natural language prompts work better than structured data.

Imagine onboard.txt as JSON.

{
  "directive": "Integrate MaaS memory into a character chat application",
  "api_key": "mk_maas_4f7c...",
  "integration_pattern": ["Create one anchor per user-character pair"]
}

You have given the agent data, not instructions. The agent's first thought becomes, "How should I process this JSON?" That is unnecessary indirection. Plain text is the instruction. The agent reads it and acts.

There is also a practical reason. A human copies this text and pastes it into Claude Code. Pasting JSON creates ambiguity. Should the agent parse it or read it? Pasting natural language is unambiguous. It is a prompt.

When the consumer is a parser, use JSON. When the consumer is an LLM, use natural language. That is the design principle.

We built it first

Specs without implementations are wishes. So we built it into our own product first.

Nunchi AI's MaaS is the first production implementation of AHOP. After Stripe checkout, the success page dynamically generates an onboard.txt based on the service source selected during signup. Character chat memory gets an anchor-per-pair pattern with memory atom guidance. Enterprise knowledge gets an ingest, recall, and hydrate pattern. The right prompt for the right use case.
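A success page doing this kind of generation could look roughly like the sketch below. The per-source pattern text for character chat mirrors the example earlier in the post; the enterprise-knowledge steps, the dictionary keys, and the function name are hypothetical, not the actual MaaS implementation.

```python
# Hypothetical sketch: assemble an onboard.txt from the service source
# chosen at signup. Pattern wording for "character-chat" follows the
# example above; everything else here is illustrative.
PATTERNS = {
    "character-chat": [
        "Create one anchor per user-character pair: POST /v1/anchors",
        "Before each response: POST /v1/memory/recall",
        "After each response: POST /v1/atoms to store meaningful memory",
    ],
    "enterprise-knowledge": [
        "Ingest source documents into the workspace",
        "Recall relevant knowledge on each query: POST /v1/memory/recall",
        "Hydrate the agent's context before answering",
    ],
}

def render_onboard(directive, api_key, base_url, workspace_id, source):
    """Render the copy-paste prompt shown on the checkout success page."""
    lines = [
        directive,
        "",
        f"API key: {api_key}",
        f"Base URL: {base_url}",
        f"Workspace ID: {workspace_id}",
        f"Service source: {source}",
        "",
        "Integration pattern:",
    ]
    lines += [f"- {step}" for step in PATTERNS[source]]
    return "\n".join(lines)
```

The right prompt for the right use case is just a lookup plus string assembly; the value is in curating the patterns, not the code.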

We also rebuilt our documentation around this reality.

One source, two consumers. The human gets a polished docs site. The agent gets clean markdown. Both come from the same content.

The stack is complete

AHOP does not exist in isolation. It fills the missing onboarding gap in the agent infrastructure stack.

| Layer | Standard | Question it answers |
| --- | --- | --- |
| Discovery | llms.txt | What is this service? |
| Onboarding | AHOP / onboard.txt | How do I start using it? |
| Schema | OpenAPI | What are the endpoints? |
| Tools | MCP | Execute this action |
| Memory | AMCP | Remember across sessions |
| Collaboration | A2A | Work with other agents |

We already open-sourced AMCP, the Agent Memory Continuity Protocol, under Apache 2.0 with Nexus and MaaS as the reference implementations. AHOP follows the same playbook. The spec is free, and the best implementation runs on our product. OAuth is free. Auth0 is a business. Same model.

It is not just for us

AHOP is not MaaS-specific. Any SaaS can provide an onboard.txt.

A payments service could define the integration pattern as create customer, attach payment method, create subscription, configure webhooks. An analytics platform could define it as initialize tracking, send events, identify users, query dashboards.

The format stays the same: opening directive, credentials, doc links, integration pattern, error guide. We published a blank template on GitHub. Fill it in for your service and you are AHOP-compliant.

What is next

AHOP spec v0.1.0 is published under Apache 2.0 with a reference implementation, hypothetical examples, and an adoption template.

GitHub: github.com/goldberg-aria/ahop

We are now rebuilding the MaaS site around this architecture: llms.txt, llms-full.txt, dual-layer documentation, and AHOP-powered success pages. The spec will sharpen as we learn from real usage.

Over time, this can go further. Agents could request onboard.txt directly after OAuth authorization, removing the copy-paste step entirely. Services could expose verification endpoints so agents confirm credentials before starting work. IDEs such as Claude Code and Cursor could eventually treat onboard.txt as a first-class input.
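The verification idea above could be sketched from the agent's side like this. The `/verify` endpoint, the header shape, and the status handling are pure assumptions for illustration; no such endpoint exists in the current spec.

```python
# Hypothetical sketch: an agent checks its credentials before starting
# work, so a bad key fails in seconds instead of mid-integration.
# The /verify endpoint is an assumption, not part of AHOP v0.1.0.
def verify_credentials(get, base_url, api_key):
    """Return (ok, reason) so the agent can fail fast before integrating."""
    status, _ = get(f"{base_url}/verify",
                    {"Authorization": f"Bearer {api_key}"})
    if status == 200:
        return True, "credentials valid"
    if status in (401, 403):
        return False, "key rejected; ask the human for a fresh onboard.txt"
    if status == 402:
        return False, "billing issue; resolve before integrating"
    return False, f"unexpected status {status}"
```

Failing fast here is the whole point of the handoff: the one thing an agent cannot fix on its own is a bad credential, so it should discover that before writing any code.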

If you think about it, this is a natural progression. Websites adapted to mobile, then to social sharing, then to voice search. Now they adapt to AI agents. And that adaptation starts with one simple question: who is reading your docs?

The answer has changed. Your onboarding should too.

Feedback, questions, and contributions are welcome on GitHub.

Nunchi AI: nunchiai.com
MaaS: maas.nunchiai.com
AHOP: github.com/goldberg-aria/ahop
AMCP: github.com/goldberg-aria/amcp