Hypermedia is the missing control surface for AI
Clear affordances deliver reliable agents.
Recently I found myself circling back to a familiar corner of web architecture, one we tend to visit only when something breaks. Hypermedia. Not as a style preference, not as an academic footnote, but as a practical mechanism we quietly removed and are now scrambling to reintroduce under different names.
The timing matters. We are no longer designing APIs primarily for humans writing code once and walking away. We are designing systems that machines will explore, retry, and adapt against continuously. And in that environment, the absence of a live control surface is not just inconvenient; it is destabilizing.
Schema
For a long time, we convinced ourselves that schemas were enough.
Schemas describe structure. They tell you what fields exist, what types they hold, and what shape a request or response should take. They are very good at this. But they are silent about something more important: what can I do next, right now, given the current state of the system.
Hypermedia answers that question.
Links, forms, actions, transitions; these are not decorations. They are expressions of possibility. They tell a client, human or machine, which moves are currently available, which ones are blocked, and which ones require preconditions to be satisfied. In other words, hypermedia encodes verbs, not just nouns.
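The distinction is easiest to see in a concrete response. Below is a minimal sketch using a hypothetical order resource with HAL-style `_links`; the resource names, paths, and states are illustrative, not from any real API. The schema is identical in both states. Only the advertised verbs differ.

```python
def order_representation(state: str) -> dict:
    """Render a hypothetical order as a hypermedia document whose
    links depend on the order's current state."""
    doc = {
        "state": state,
        "_links": {"self": {"href": "/orders/42"}},
    }
    if state == "draft":
        # A draft order can still be submitted or cancelled.
        doc["_links"]["submit"] = {"href": "/orders/42/submit", "method": "POST"}
        doc["_links"]["cancel"] = {"href": "/orders/42", "method": "DELETE"}
    elif state == "submitted":
        # Once submitted, only the refund transition remains available.
        doc["_links"]["refund"] = {"href": "/orders/42/refund", "method": "POST"}
    return doc
```

A schema would describe the shape of this document once, for all states. The links answer the question the schema cannot: what is possible from here, now.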
This distinction did not matter much when clients were carefully scripted and tightly coupled. Humans could read documentation, infer intent, and patch over the gaps with experience. But agents do not have that luxury. They operate at runtime, under uncertainty, and often without a predefined plan.
This is why removing hypermedia did not actually simplify systems. It just displaced decision making into client logic, assumptions, and folklore. The service stopped participating in the conversation and started issuing monologues.
When you look back at the original REST architectural constraints articulated by Roy Fielding, hypermedia was never optional. It was the mechanism that allowed independent evolution without coordination. Take it away, and you get brittle integrations that only appear stable until conditions change.
And conditions always change.
A useful way to say this out loud is simple: schemas explain how to talk; hypermedia explains what to do.
Design gets fragile when it forgets to expose its verbs.
Agents
This lands very close to the problems we are now seeing with agentic systems.
We keep asking how to make AI safer, more controllable, more aligned. The answers usually arrive dressed as policy, guardrails, or external oversight. All of those matter. But there is a more direct, operational layer underneath them.
Systems that clearly expose their affordances are not just easier to use; they are easier to observe and supervise.
An agent operating against hypermedia does not need to guess what an endpoint might allow. It does not need to hallucinate a workflow. It can attempt an action and receive a concrete response that either advances its goal or returns a constraint. Failure becomes information. Exploration becomes bounded.
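That bounded exploration can be sketched in a few lines. The shape of the documents and the `follow` callback here are assumptions for illustration: the point is only that the agent selects among advertised links rather than inventing endpoints, and treats an unavailable transition as a signal to replan.

```python
def step(goal: str, doc: dict, follow):
    """Attempt one advertised transition toward `goal`.

    `doc` is a hypermedia document with a `_links` map; `follow`
    performs the request for a link and returns the next document.
    If the goal is not currently affordable, return None: the agent
    replans instead of guessing at an endpoint.
    """
    links = doc.get("_links", {})
    if goal in links:
        return follow(links[goal])  # attempt the advertised action
    return None  # not offered in this state; failure is information
```

Nothing about this loop is intelligent. That is the point: the server's affordances, not the agent's imagination, define the space of moves.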
This looks suspiciously like how intelligent behavior has always worked in humans and machines.
Long before this latest wave of AI, Doug Engelbart argued that progress came not from automating intelligence, but from improving the surfaces through which humans and systems interacted. Hypermedia is one of those surfaces. We treated it as expendable, then wondered why our systems became opaque.
There is an irony here worth sitting with. In the rush to make APIs more machine friendly, we optimized for static contracts and removed the one thing machines are actually good at consuming. A language model can read documentation, but it cannot rely on it. It can parse schemas, but schemas do not negotiate.
Hypermedia negotiates.
It says: here is what you can do now, given where we are.
That is not an interface style. That is a control surface.
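What "negotiation" means in practice is that a refused transition comes back as a concrete answer, not a silent failure. A minimal sketch, with hypothetical action names and preconditions: the server checks the precondition at request time and replies with either the new state or the constraint that blocked it.

```python
def try_transition(resource: dict, action: str) -> tuple[int, dict]:
    """Apply a hypothetical transition if its precondition holds;
    otherwise answer with a status code and the constraint spelled out."""
    preconditions = {
        "submit": lambda r: r["state"] == "draft",
        "refund": lambda r: r["state"] == "submitted",
    }
    check = preconditions.get(action)
    if check is None:
        return 404, {"error": f"unknown action {action!r}"}
    if not check(resource):
        # The refusal itself is something a client can plan against.
        return 409, {"error": f"{action} not available",
                     "current_state": resource["state"]}
    resource["state"] = "submitted" if action == "submit" else "refunded"
    return 200, resource
```

The 409 body is the negotiation: here is where we are, and here is why that move is not on the table.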
Legibility
Hypermedia does not make systems intelligent; it makes them legible.
And legibility is the minimum requirement for any system we expect to share responsibility with.


