AI Agents · AI Strategy · Company Knowledge

The Shared Brain

May 12, 2026
6 min read
By Colin McDonnell

If you run a company with more than a few people, you've likely been blindsided by information: things are moving in other parts of the business, and important context has simply never reached you.

We've been building AI agents for companies for the past year, and I've slowly come to believe that the valuable unit is not the individual agent, but the shared one. A company brain (the sum of everything your team knows and has discussed and has decided) is only useful if it is active, and it can only be active if there is a shared agent sitting at the center of the team, ingesting context from everyone and surfacing the right information to the right person at the right time.

The brain is the knowledge. The agent is what makes the knowledge move.

Personal agents are not enough

The most sophisticated AI users today have something like a personal chief of staff that listens to their meetings, reads their messages, manages their tasks, and answers questions about their own work. You can ask your agent what your projected profit looks like next month, and it will assemble the answer from five different sources without requiring you to maintain a single structured spreadsheet. The data does not need to be legible to you so long as it is legible to the agent.

We use setups like this internally, but they do not solve the company's information problem, because a personal agent only knows what one person knows or, at best, the context that person thought to feed it. If you are in New York having conversations with potential clients and your cofounder is in San Francisco closing a deal, your respective agents each hold half the picture. Each agent is optimizing for its own user, and no one is optimizing for the company.

Information needs to move between people. A personal agent, by definition, cannot make this happen.

The brain needs an agent

There is a reason we keep saying "shared brain" and "shared agent" in the same breath. The brain is what the company knows collectively, from conversations to data to insights. The problem is that a brain without an agent is just a database, sitting there and waiting for someone to come query it.

A shared agent solves this because it does both sides of the job through the same interface. It ingests information by being present when people work (in meetings, in chats, etc.), and it distributes information by recognizing relevance across the whole team.

The shared interface is what makes the brain a brain.

Six months ago, we tried to build a much simpler version of this. We wanted an agent that could scrape our daily standup, parse what people said, and assign tasks accordingly. It could not keep track of clients. It could not maintain context across conversations. It hallucinated enough that we stopped trusting the output. We ended up building deterministic, logic-based pipelines (essentially Zapier workflows) and layering a thin AI judgment call on top.
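A minimal sketch of that pattern, with all names hypothetical: the pipeline steps are deterministic, and a single stubbed-out judgment call (standing in for the one model call we layered on top) decides which parsed items become tasks.

```python
# Hypothetical sketch of "deterministic pipeline + thin AI judgment on top".
# parse_standup and assign_tasks are deterministic; llm_judge is a stub
# standing in for the single model call a real system would make.

def parse_standup(transcript: str) -> list[str]:
    # Deterministic step: pull out bullet lines as candidate action items.
    return [line.strip("- ").strip()
            for line in transcript.splitlines()
            if line.strip().startswith("-")]

def llm_judge(item: str) -> bool:
    # Placeholder for the thin AI layer. A real pipeline would ask a model
    # "is this an actionable task?"; here a keyword heuristic stands in.
    return any(verb in item.lower() for verb in ("fix", "ship", "email", "review"))

def assign_tasks(transcript: str) -> list[str]:
    # Everything except the judgment call is plain, debuggable logic.
    return [item for item in parse_standup(transcript) if llm_judge(item)]

transcript = """
- fix the billing bug
- weather was nice
- email the client the proposal
"""
print(assign_tasks(transcript))  # ['fix the billing bug', 'email the client the proposal']
```

The appeal of this shape is that when something goes wrong, only one step is non-deterministic, so there is exactly one place to distrust.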

The models have changed since then. Andrej Karpathy described the shift at a recent Sequoia event: computing has historically been a deterministic, logical system with non-deterministic neural networks bolted on as a secondary layer. What is emerging now is the inversion. You do not pre-build the rules for every scenario. You tell the agent what you want, and it constructs the scaffolding to make it happen.

This matters for the shared brain because the hardest part of information routing is judgment. Consider the permissions problem: a founder shares sensitive financial numbers with one teammate in a private conversation, and then both of them are in a group chat with three other people and the same AI agent. The agent needs to understand that those numbers should not surface in the group context. You could build an elaborate permissions system with roles and access levels and rule sets, or you could trust a sufficiently capable model to exercise the same judgment a thoughtful colleague would, understanding from context what is confidential and what should be passed along.

We think the second approach is where this is heading. The model needs social intelligence more than it needs a permissions architecture.
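To make the contrast concrete, here is a hypothetical sketch of the first approach, the explicit permissions system. Every fact carries a hand-maintained access list, and a rule decides whether it may surface in a channel; the comment at the bottom marks where the second approach would swap that rule for a model's judgment. All names are invented for illustration.

```python
# Hypothetical sketch of approach 1: an explicit ACL on every fact.
from dataclasses import dataclass

@dataclass
class Fact:
    text: str
    shared_with: set[str]  # hand-maintained access list

def acl_can_surface(fact: Fact, channel_members: set[str]) -> bool:
    # Rule-based check: surface the fact only if every channel member
    # is already on its access list.
    return channel_members <= fact.shared_with

fact = Fact("Q3 revenue missed by 12%", shared_with={"founder", "cfo"})
print(acl_can_surface(fact, {"founder", "cfo"}))            # True
print(acl_can_surface(fact, {"founder", "cfo", "intern"}))  # False

# Approach 2 would replace acl_can_surface with a model call along the
# lines of: "Given where this fact came from and who is in this channel,
# would a thoughtful colleague repeat it here?" -- judgment from context,
# with no access list to maintain.
```

The rule-based version is auditable but brittle: someone has to tag every fact, and the tags go stale the moment the org changes.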

Even with personal agents, you still need a shared one

There is a credible counterargument here: give everyone their own personal agent and let those agents communicate with each other in the background.

Engineers call these gossip protocols: individual agents passing relevant information between themselves without being prompted by a human. In theory, a network of personal agents could achieve the same outcome as a single shared agent, with the added benefit of preserving individual autonomy and reducing the blast radius when something goes wrong.
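A toy sketch of that design, with all names hypothetical: each personal agent holds its own facts and pushes any new one to peers it guesses will care, without a human prompting the handoff.

```python
# Toy sketch of a gossip-style network of personal agents (names invented).
class PersonalAgent:
    def __init__(self, owner: str, interests: set[str]):
        self.owner = owner
        self.interests = interests   # topics this agent's user cares about
        self.known: list[str] = []

    def learn(self, fact: str, peers: list["PersonalAgent"]) -> None:
        self.known.append(fact)
        # Gossip step: forward the fact, unprompted, to any peer whose
        # interests it appears to match.
        for peer in peers:
            if any(topic in fact.lower() for topic in peer.interests):
                if fact not in peer.known:
                    peer.known.append(fact)

ny = PersonalAgent("founder-ny", {"clients"})
sf = PersonalAgent("cofounder-sf", {"deal"})
ny.learn("new deal terms agreed with Acme", peers=[sf])
print(sf.known)  # the SF agent received the fact unprompted
```

Note the catch hiding in the routing step: it reads `peer.interests` directly, meaning each agent must hold a model of what its peers care about, which is exactly the shared company context the convergence argument below turns on.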

In practice, this approach converges toward a shared agent anyway. The most prominent case study is Shopify, where CEO Tobi Lütke articulated a design philosophy of radical visibility. By restricting agent interactions to public channels, they have transformed private knowledge into organizational context, ensuring that what one person discovers is immediately available to the shared brain.

For your personal agent to decide that something you learned should be routed to a colleague, it needs to understand what that colleague is working on, what they already know, and what would be useful to them right now. Building that understanding requires access to the same information a shared agent would have. The personal agents either develop a shared model of the company (at which point they are functionally a distributed shared agent with extra overhead) or they make routing decisions with incomplete information (at which point they reproduce the original problem of knowledge trapped in silos, just mediated by software instead of memory).

The likely end state is both layers coexisting. A personal agent handles your individual workflow, preferences, and tasks. A shared agent handles the company's collective intelligence, routing, and context. But if you had to pick one to build first, the shared agent is where the harder and more valuable problem lives.

A shared brain with a shared agent at its center operates at a level that no individual human teammate could, because it sits in every conversation simultaneously and remembers everything with perfect fidelity. It is not another unit of labor; it is company infrastructure.

AI is shifting from a single-player tool to a team-level system.