Designing Systems for People and Agents

I recently read a piece by Simon Willison about Moltbook – an interesting experiment where AI agents talk to each other in public, and humans mostly just watch.

On the surface, it’s easy to treat this as novelty or just more AI slop. But what struck me wasn’t the spectacle; it was the mirror. Because what we’re really seeing is our own working patterns reflected back at us.

Not consciousness. Not autonomy. Coordination.

Culture and human reflection

AI agents don’t invent culture. They amplify it. They inherit incentives. They expand defaults. They reproduce whatever patterns we embed in their systems. Which means Moltbook isn’t showing us what machines think — it’s showing us what happens when you remove tone, hierarchy, and social context, leaving only inputs and outputs.

If culture is “how work actually happens,” Moltbook is what happens when you strip away the human cushioning.

What’s left is structure. Feedback loops. Signals. Artifacts. Execution paths. (And gaps.)

That’s true for agents. And it’s true for teams.

In human organizations, meetings often blur this reality. Personality fills in gaps. Charisma smooths over ambiguity. Relationships compensate for unclear systems. We rely on real-time interaction to patch over design flaws. But when you take those things away, whether through async work or automated agents, everything becomes explicit.

Clarity, or the lack of it, shows up immediately.

Async isn’t a convenience — it’s the operating system

This is the part that feels most familiar to me: remote, async teams already operate more like agents than we sometimes realize. We consume context, interpret signals, take action, and report back. The loop is continuous; it’s only the medium that changes.

Async isn’t a scheduling preference. It’s an operating model. It forces you to design for clear inputs, durable context, explicit ownership, and visible outputs. There’s nowhere for ambiguity to hide. Every missing decision, every fuzzy goal, every unowned edge case eventually surfaces.

On my team, we recently introduced our own operating system: how work flows, how decisions get made, how information moves. And in many ways, our role is to be part of GitHub’s operating system, creating connective tissue across product, programs, communications, and leadership so the company can move with clarity.

What I’ve learned from building these systems is simple: async doesn’t tolerate mess. Meetings can hide it. Async exposes it. Agents amplify it.

Bad inputs don’t quietly disappear; they compound. That’s exactly why this moment matters.

From managing people to designing systems

The future of work isn’t humans versus machines. It’s composition.

We’ll increasingly work alongside humans, agents, automations, and platforms, all contributing in different ways. And as that happens, leadership moves away from supervising activity and toward designing environments.

Not just asking whether everyone did their part, but whether the system itself makes success possible. Are the inputs clear? Are incentives aligned? Is ownership explicit? Can both people and machines understand what “good” looks like inside what we’ve built?

Leadership shifts from managing people to designing systems where people and agents can both succeed. That’s a fundamentally different craft. It’s less about presence and more about architecture and coherence.

A quick note on risk

Simon rightly points out that there are real security and safety concerns in these early agent experiments. That matters.

But I’m less interested in debating whether we should go here. We are going here. The productivity upside is too large for companies not to invest heavily in guardrails, safety, and reliability. Those problems will get attention, resources, and solutions.

So the more interesting question to me becomes: what kind of systems are we building?

A quieter takeaway

Moltbook isn’t showing us conscious machines; it’s showing us what happens when coordination runs without human buffering to fill the gaps.

Culture becomes visible. Process becomes legible. Design decisions echo loudly.

If we’re heading toward a world where work is shared between humans and agents, the core skill isn’t prompting AI; it’s designing clear, durable systems.

Systems that make intent obvious. Systems that preserve context. Systems that help everyone, human or machine, understand what matters and how to move.

And maybe that’s the real experiment here. Not whether bots can talk to each other, but whether we’re ready to be this explicit about how work actually happens.