The Interface Is No Longer the Code
How statistical parrots are changing the way we design and build programs
For most of computing’s history, we’ve treated the interface as something tangible: a file, a protocol, a spec you could point to. That made sense when the audience was human. A developer could read the documentation, follow the examples, and wire one system to another.
But that world is fading.
Today, much of our software is read not by people, but by machines. By co-pilots, assistants, and autonomous agents that don’t understand our words so much as predict them. They operate statistically, not semantically. And that means the interface has shifted.
It’s no longer the code, or the OpenAPI document, or the test checklist. It’s the context that surrounds them: the semantic field that shapes what happens next.
That’s where API Stories come in. They carry intent, not just implementation. They describe what an API means, not just what it does. And in an age of generative systems, that difference changes everything.
The Starting Point
Most teams start their API work with structure. They define endpoints, sketch data models, and argue about status codes. It feels productive because it’s tangible. But really, it’s like starting a story halfway through the second act and hoping the first one writes itself later.
I used to draw API design as a straight line: idea → design → build → deploy → document → monitor. Neat, predictable, and, while a compelling teaching aid, not at all representative of the real world. The real process is circular. It loops back on itself. You learn, adjust, and circle again. It is not a pipeline; it is an orbit.
In The Design of Everyday Things, Donald Norman, director of The Design Lab at UC San Diego, describes human action as a continuous feedback loop: goal → plan → perform → perceive → interpret → compare. That model has always stuck with me because it perfectly captures the rhythm of good API design. Every iteration begins with intent, takes an action, and then pauses to interpret what just happened. We design, observe, and adjust again and again. It is not a march of progress; it is a dialogue between intention and experience.
And like any orbit, it needs a center of gravity: something that keeps all that motion from flying apart, the force that keeps every artifact and iteration connected to its purpose.
For some teams, that center is a codebase or a data table. For others, it is a prototype or an OpenAPI file. Those are fine, but at the design stage, they are results, not sources. The source is the story.
For me, the gravity lies deeper in the system. The true anchor is the why of the API, the reason it exists, the problem it is meant to solve. It is the API Story. The story is the central artifact that gives everything else form.
Everything else spins around that: the screen mockups, the service stubs, the OpenAPI documents, the JSON Schemas, the Mermaid diagrams. They are all expressions of the same intent, radiating from a single core.
When you start from that center, Norman’s action cycle comes alive. Each loop, from goal to evaluation, from perception to correction, makes sense. You can move forward or backward, from idea to implementation or from implementation back to idea, without losing your way or tripping over static details that slow your progress.
Once you recognize that the story is the true source, the next question is what that story looks like in practice. How do you express intent in a way that both humans and machines can understand and use? For APIs, the answer is not found in code or schema alone. It lives in the story itself, and that story has a shape worth studying.
The Shape of a Story
If the API Story is the source, its shape reveals how intent becomes design. Every team has its own vocabulary for describing that intent: user stories, use cases, scenarios, and so on. But an API Story gives it structure. It’s not just prose. It’s a design language in and of itself: a way of capturing meaning in a form that both people and machines can read, reason about, and eventually act upon.
The story carries intent forward, serving as both a design artifact and a shared understanding.
That’s the key shift. In traditional documentation, we describe what the system does. In a story, we describe what the system means. And that difference changes everything, especially now that generative systems are not just reading our specs but drawing inferences from them.
In the process I teach, each story expresses its intent through five simple parts: Purpose, Data Properties, Resources, Actions, and Rules. Together they form a kind of grammar for designing systems that can evolve.
Purpose
Every story begins with a purpose. It’s the simplest part to write and the easiest to skip. The purpose captures why the API exists: what change it brings about in the world. Without it, everything else becomes arbitrary.
Purpose is the anchor. It tells both the human designer and the machine interpreter what success looks like. When AI tools reason about APIs, they don’t care about syntax; they care about outcome. “What is this API trying to accomplish?” That single sentence of purpose becomes the context through which every action, rule, and data property can be understood.
Data Properties
If purpose defines why, data properties define what: the information the API must know or express to fulfill that purpose.
Each property represents a piece of shared understanding between systems. When we choose those properties carefully, we create an intentional vocabulary. It’s not just a schema; it’s a language for meaning. When a generative model reads that structure, it doesn’t just see field names. It sees hints about relationships, identities, and priorities. Well-designed data carries intent forward the way a sentence carries thought.
Resources
Resources are the nouns of the story: the people, places, and things that act or are acted upon.
In code, resources become endpoints or objects. In a story, they’re characters. Each one represents something with continuity and identity. When we frame them this way, we stop designing endpoints and start designing relationships. That shift is what lets both humans and AI systems reason about the system dynamically: not as a set of isolated paths, but as a world that can be explored.
Actions
Actions are the verbs: the things that can happen.
Every action expresses a potential change in state: something that can be done, requested, or triggered. When you write actions as part of a story, you’re not listing functions. You’re defining affordances — possibilities that exist under specific conditions.
That’s where intent meets behavior. A good action tells both the implementer and the consumer what’s possible, when, and why. It’s also what makes stories generative: an AI system can look at an action, infer its prerequisites and effects, and plan sequences of behavior without needing hard-coded scripts.
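Here is one way that might look on the page. It is a hypothetical sketch rather than a prescribed format: the action names its preconditions and effects explicitly, which is exactly the material a generative tool needs to plan with.

```typescript
// Hypothetical sketch: an action written as an affordance rather than a function.
// The field names are mine, chosen for illustration.
interface ActionAffordance {
  name: string;
  purpose: string;         // why the action exists, tied back to the story's purpose
  availableWhen: string[]; // preconditions, stated in the story's own vocabulary
  changes: string[];       // the state changes the action can produce
}

const approveOnboarding: ActionAffordance = {
  name: "approveOnboarding",
  purpose: "Move a customer from pending to active once the required checks pass",
  availableWhen: ["onboardingStatus is 'pending'", "consentGranted is true"],
  changes: ["onboardingStatus becomes 'active'", "a welcome notification is queued"],
};
```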
Rules
Rules describe the boundaries of the story: what must remain true even when everything else changes.
They encode ethics, safety, and domain logic. They’re not just constraints; they’re the shape of trust. A well-defined rule helps systems reason safely within shared limits. It’s what allows a generative agent to act autonomously without crossing the boundaries you’ve set.
Rules give the story structure. Without them, intent dissolves into possibility.
Each of these parts works together to form a design language for intentional systems. It’s not a format to translate but a language to interpret. Humans read it as narrative. Machines read it as structure. Both see the same meaning, and that shared understanding is what allows AI to reason alongside us, not just after us.
That’s what makes API Stories powerful. They don’t replace specifications; they precede them. They give every future artifact (OpenAPI, ALPS, JSON Schema, diagrams, etc.) something to align with. The story holds the intent. Everything else flows from it.
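To show how the five parts hang together, here is one possible rendering of a story as structured data. The shape below is a sketch I am using for illustration, not a required format; an API Story can just as easily live as plain prose, so long as the purpose, properties, resources, actions, and rules are all present.

```typescript
// One possible shape for an API Story; illustrative only.
interface ApiStory {
  purpose: string;
  dataProperties: Record<string, string>; // property name -> what it means
  resources: string[];                    // the nouns: who and what the story is about
  actions: string[];                      // the verbs: what can happen
  rules: string[];                        // what must remain true
}

const onboardingStory: ApiStory = {
  purpose:
    "Let a new customer start using the service without waiting on manual review.",
  dataProperties: {
    customerId: "stable identity for the customer across systems",
    onboardingStatus: "where the customer is in the journey: pending, active, rejected",
    consentGranted: "whether the customer has agreed to the terms",
  },
  resources: ["Customer", "OnboardingCase", "Notification"],
  actions: ["startOnboarding", "approveOnboarding", "rejectOnboarding", "notifyCustomer"],
  rules: [
    "A customer cannot become active unless consentGranted is true.",
    "Every rejection must record a reason.",
  ],
};
```

Everything a downstream artifact needs, from endpoint names to validation rules, can be traced back to a line in that story.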
Context as the Interface
As generative systems take on more of the work of interpreting and composing our designs, the interface has already shifted. What we’re designing now isn’t the code itself, but the context that surrounds it — the environment that shapes how statistical systems make their next move.
John Searle pointed this out decades ago in his piece “Minds, Brains, and Programs”: computers don’t understand symbols; they manipulate them.
That remains true today. Generative AI doesn’t reason; it predicts. Every output is a probability, not a thought. But what’s changed is that we can now shape the environment in which that manipulation happens. It is the context that biases those probabilities toward meaning instead of noise.
That’s where API Stories come in. An API Story expresses intent in a structured, human-readable form that also happens to be language, exactly what these systems consume. By describing why an API exists, not just what it does, we create text with semantic gravity. The story becomes a contextual field that guides prediction, pulling generative systems toward alignment with purpose.
We’re not teaching AI to reason. We’re teaching it what to attend to.
Biologist Deborah M. Gordon has shown that ant colonies coordinate not through intelligence but through environment. As she writes in “Ants at Work”, “There is no central control in an ant colony. Each ant decides what to do next by responding to local stimuli... The colony regulates its activity collectively through these local interactions.”
Generative systems work the same way. Their apparent intelligence depends entirely on the context they inhabit, on the signals and structures that shape their probabilistic choices.
API Stories let us design that context. They give us a way to embed purpose, relationships, and constraints directly into the language environment itself, influencing the statistical space where AI models operate. The result isn’t “understanding,” but alignment — output that better reflects what we meant.
That’s the quiet power of story-driven design. We’re not building sentient collaborators. We’re building contextual environments: spaces where stochastic systems generate semantically aligned responses.
In this view, the interface is no longer the code, or the OpenAPI document, or the checklist of QA tests. It’s the context that surrounds them all. It is the shared semantic atmosphere that influences what comes next: the next token, the next message, the next decision.
API Stories give us the means to author that atmosphere. They let us seed context with purpose so that both humans and machines operate within the same field of meaning. That’s how we design for the world we actually have: a world where intelligence is statistical, context is the medium, and intent is the force that gives it shape.
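In practice, authoring that atmosphere can be as plain as putting the story in front of the model before every request. Here is a minimal sketch, assuming a placeholder sendToModel function rather than any particular vendor’s API.

```typescript
// Minimal sketch: the story text becomes the context that precedes every request.
// `sendToModel` is a stand-in for whatever completion API you actually use;
// it is not a real library call.
declare function sendToModel(prompt: string): Promise<string>;

interface StoryContext {
  purpose: string;
  resources: string[];
  actions: string[];
  rules: string[];
}

function buildPrompt(story: StoryContext, task: string): string {
  return [
    `Purpose: ${story.purpose}`,
    `Resources: ${story.resources.join(", ")}`,
    `Actions: ${story.actions.join(", ")}`,
    `Rules:\n- ${story.rules.join("\n- ")}`,
    "",
    `Task: ${task}`,
  ].join("\n");
}

// Every generated artifact starts from the same semantic field.
const prompt = buildPrompt(
  {
    purpose: "Let a new customer start using the service without waiting on manual review.",
    resources: ["Customer", "OnboardingCase", "Notification"],
    actions: ["startOnboarding", "approveOnboarding", "rejectOnboarding"],
    rules: ["A customer cannot become active unless consent has been recorded."],
  },
  "Draft an OpenAPI document that fulfills this story."
);
// const draft = await sendToModel(prompt);
```

Nothing here is clever. The leverage is entirely in what the prompt leads with: purpose, resources, actions, and rules, before any task is named.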
Closing the Circle
In the end, it always comes back to language. The words we choose in our stories are not decoration; they are design. They are the signals that guide both people and machines toward meaning.
Generative AI doesn’t reason about architecture. It predicts language. But when that language carries intent, when it encodes purpose, relationships, and rules, those predictions start to align with what we actually meant to build.
That is the quiet leverage of an API Story. Its words influence the statistical choices a model makes when it assembles the satellites of design: the OpenAPI documents, the JSON Schemas, the diagrams, the test suites, the mock services. Each of those artifacts becomes an echo of the same original gravity: the story that gave it form.
Douglas Engelbart reminded us that the goal of computing is not automation but augmentation: the ability to extend human capability through systems that help us see, think, and act more effectively. API Stories work the same way. They don’t make AI systems smarter; they make our collaboration with them more meaningful.
We don’t need to make machines understand us. We need to give them better material to imitate.
That’s what API Stories do. They infuse the generative process with intention. They give our tools context worth predicting from. And they remind us that in every system, from the simplest endpoint to the most complex network of agents, meaning still begins, and ends, with the story we tell.
If this piece made you stop and think about where AI is taking design, you’ll love Signals. It’s where I explore how systems evolve, and how we can evolve with them. Subscribe to join thousands of designers and developers building for change.



