The First Step in Building My AI Native Team: Shared Brain First, Boundaries Second
Written by Vox

This is the full walk-through of how I used gbrain to unify team memory across openclaw and hermes.
I used to think the next step in growing my AI native team was adding another agent.
Then the other day I saw Garry's Gibson architecture diagram on X. He was sharing entertaining scenes of agents talking to each other, and at the end he dropped this diagram.
In the diagram, openclaw and hermes each take a corner. In the middle sits a shared brain. Every agent first queries the brain, then writes its conclusions back. Which agent is stronger turns out to be the secondary question.
I've always liked both products. But for someone like me who likes to tinker, constantly moving house between memory-bearing agents wears me out. So I'd been thinking: can I make shared memory work more cleanly? The catch is, both products already have their own memory systems, so the boundary has to be drawn carefully.
I used to think this was a question of who would be the main brain, who would be the so-called coordinator I talk to. But the real question is: where does the team's shared memory live, and who gets to write to it?
So when I say first step, here's what I mean now: build the shared study first. The agents come after.
Two agents, each forgetting the other
Let me back up a little.
I've got two agent platforms running at the same time, with several workspaces hanging off each.
Sounds like a clean division of labor.
It wasn't.
Last Wednesday night, I was working out a posting decision with one of the agents. What time to post, what image style, whether to include an outbound link. It logged the decision into its own memory.
The next day, the other agent came over to help with the image, and asked me the exact same questions.
I repeated the previous night's decision word for word. It logged the decision too. Into its own memory.
This is what I mean by two agents forgetting each other. They both remember what I said. They just don't know the other one remembered it too. You've probably hit this yourself.
Build the shared study first
Simple analogy: treat the whole setup as a house.
house = the whole deployment
room A / room B = openclaw and hermes, the two agent platforms
notebook in a room = each agent's own local memory
shared study = the team's shared brain
front desk = the control plane, only tracking who's online and healthy
The notebook is private. It records how the room's occupant works, the rhythms they like, what they ran into recently.
But the house has no shared study.
The shared study isn't on the same layer as a single room's notebook. It stands on its own. A wall of books every resident can browse, but write access stays in the hands of a few residents.
The shared brain isn't a giant bucket holding every agent's memories merged together.
It's a curated layer of team-level facts. Every agent still keeps its private notebook for personal habits, temporary context, and role-specific experience. The shared brain only holds things that still hold true across agents: long-term decisions, stable facts, important relationships, context you keep reaching for.
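To make the split concrete, here's a minimal sketch of the two record shapes as I picture them. Every name in it (TeamFact, NotebookEntry, the fields) is my own illustration, not gbrain's actual schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TeamFact:
    """Shared brain: only things that still hold true across agents."""
    statement: str    # e.g. an abstracted posting decision
    decided_on: date  # long-term decisions carry their date
    owner: str        # who is accountable for keeping this true

@dataclass
class NotebookEntry:
    """Private notebook: habits, temporary context, role-specific experience."""
    agent: str        # the room this notebook belongs to
    note: str
    ephemeral: bool = True  # most of this never needs to leave the room
```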
The most important thing about the shared study isn't openness. It's rules.
I used to wonder which of the two agent platforms should be the main brain. Neither, it turns out. The shared study isn't the main brain either. It's just the shared context layer that keeps the residents from forgetting each other.
The real main brain is still the human, plus a clear set of boundaries.
Before adding agents, answer 4 questions
This is why the real first step in building an AI native team happens before any new agent gets added.
Step one is answering these 4 questions:
What belongs to a single agent and should never leave its room?
What counts as a team fact every agent should be able to look up?
Who has the right to write into the shared study?
What can never enter the shared study, ever?
I wrote another article about these questions. It explains that agent memory has two dimensions: the vertical one is about not forgetting; the horizontal one is about not staying isolated. This article focuses on the horizontal layer.
You only get an AI that actually works as a team when both dimensions are in place.
Boundaries matter more than connections
The first time I hooked up a shared study, I almost turned it into a trash bin.
As soon as you have a shared brain, you immediately want to dump everything into it. Support transcripts, every email, every chat log, debug logs, throwaway drafts.
Why not? You can search it, right?
The problem is, once a shared brain turns into a trash bin, it stops being a study and becomes a storage room. Everyone can walk in, nobody finds anything useful, because the signal drowns in the noise.
So boundaries matter more than connections.
My current rules:
The shared study only accepts two kinds of things: team facts and long-term decisions.
No customer-sensitive data.
No raw chat transcripts. Only the abstracted decisions themselves go in.
No passwords, tokens, or private credentials, ever.
Write access only goes to a few agents. The rest stay read-only.
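These rules are mechanical enough to enforce in code rather than by discipline. Here's a rough sketch of the write gate I have in mind; the WRITERS allowlist, the record kinds, and the secret patterns are all my own stand-ins, not anything gbrain ships.

```python
import re

# Agents allowed to write into the shared study; everyone else is read-only.
# (Hypothetical names; in practice, the publishing and archive owners.)
WRITERS = {"publisher", "archivist"}

# Only two kinds of things ever get in.
ALLOWED_KINDS = {"team_fact", "long_term_decision"}

# Crude patterns: reject anything that smells like a credential.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password|secret)\s*[:=]"),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # e.g. OpenAI-style keys
]

def gate_write(agent: str, kind: str, text: str) -> None:
    """Raise if a proposed shared-study write breaks the house rules."""
    if agent not in WRITERS:
        raise PermissionError(f"{agent} is read-only in the shared study")
    if kind not in ALLOWED_KINDS:
        raise ValueError("only team facts and long-term decisions go in")
    if any(p.search(text) for p in SECRET_PATTERNS):
        raise ValueError("looks like a credential; secrets never enter")
    # Raw transcripts are long and conversational; a cheap length cap
    # forces callers to abstract the decision before writing it down.
    if len(text) > 500:
        raise ValueError("too long; write the abstracted decision, not the log")
```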
This has nothing to do with being a privacy snob. It's about whether this team is still usable next year.
A clean shared study still gives you what you need 6 months later. A trash bin needs to be rebuilt.
I built a small version in my own cloud setup
As long as we're here, let me share what I built for myself.
I recently ran this whole rulebook through my own cloud setup.
The control plane and the knowledge itself are separated.
The control plane is just a front-desk ledger. It knows whether this shared study is online, whether it's healthy, how many resources it's using.
The actual books aren't kept at the front desk. They live in a deployment-local knowledge layer.
This split matters: the front desk handles management, the study handles memory.
Both agent platforms talk to the same shared interface, but they each get scoped permissions, not the full database. They can search, they can read, they can write small facts. That's it.
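Here's a sketch of what "scoped permissions, not the full database" means to me in practice. The class and method names are made up; the point is that the scope is the API itself.

```python
class ScopedStudyClient:
    """What an agent platform sees: three verbs, never the raw database."""

    def __init__(self, backend, agent: str, can_write: bool = False):
        self._backend = backend      # the deployment-local knowledge layer
        self._agent = agent
        self._can_write = can_write  # granted per agent; read-only by default

    def search(self, query: str, limit: int = 5) -> list[str]:
        return self._backend.search(query, limit=limit)

    def read(self, fact_id: str) -> str:
        return self._backend.read(fact_id)

    def write_fact(self, text: str) -> str:
        if not self._can_write:
            raise PermissionError(f"{self._agent} has no key to the study")
        return self._backend.append(self._agent, text)

    # Deliberately absent: delete, bulk export, raw queries.
```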
Two upsides here.
First, the control plane knows whether this team's brain is healthy, but it never becomes the knowledge base itself.
Second, when the shared study goes down, both agent platforms stay open and keep working. They lose the team memory, but they don't die with it.
This part matters.
The shared brain is a capability, not oxygen.
Lose a capability, the team gets dumber. Lose oxygen, the team stops breathing.
I don't want the memory system to be the single point of death for the whole team.
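In calling code, that distinction between capability and oxygen is a single try/except. A sketch, assuming the study client raises ConnectionError when it's unreachable:

```python
def recall(query: str, study, notebook) -> list[str]:
    """Prefer team memory, but treat it as a capability, not oxygen."""
    try:
        team_hits = study.search(query)  # the shared study, when it's up
    except ConnectionError:
        team_hits = []                   # study down: the team gets dumber...
    # ...but the agent keeps breathing on its private notebook alone.
    return team_hits + notebook.search(query)
```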
The order of operations (7 steps)
If you're going from zero to where I am now, here's the order I'd follow:
Clean one agent's local memory first. Trim its private notebook down to what it actually uses.
Build a shared team brain next to it. gbrain works well here; they've done this part really nicely.
Wire the agent into the shared brain through MCP (gbrain has already built the MCP server side). Only open search and read at first. No write access yet (a rough sketch of this read-only surface follows the list).
Let one or two clearly-owner agents (like the one running publishing, or the one handling archives) write durable decisions into the shared brain.
Add automated sync sources to the shared brain, but only sync curated content. Don't pipe in your whole inbox or your whole Slack.
Expand access by role. New agents default to read-only.
Keep the control plane separate from the knowledge itself. One tracks who's running. The other stores the knowledge. Could be two databases, could be two clearly-bounded sections of the same system. The point is the roles don't blur.
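For step 3, here's roughly what a read-only MCP surface looks like. This is a toy sketch using the official MCP Python SDK's FastMCP helper, with an in-memory stand-in for the knowledge layer; gbrain ships its own MCP server, so this only illustrates the shape, not its actual interface.

```python
from mcp.server.fastmcp import FastMCP

# Toy stand-in for the deployment-local knowledge layer (hypothetical data).
FACTS = {
    "post-timing": "Post at 9am local time; no outbound links in images.",
}

mcp = FastMCP("shared-study")

@mcp.tool()
def study_search(query: str, limit: int = 5) -> list[str]:
    """Search team facts; returns matching fact ids."""
    q = query.lower()
    return [fid for fid, text in FACTS.items() if q in text.lower()][:limit]

@mcp.tool()
def study_read(fact_id: str) -> str:
    """Read one fact by id. No write tool is registered: that's step 3."""
    return FACTS[fact_id]

if __name__ == "__main__":
    mcp.run()  # both platforms connect here and get exactly these two verbs
```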
By the time you finish these seven steps, you no longer need to decide whether openclaw or hermes wins. You don't need to pick which hermes profile wins, or which openclaw workspace wins. Looks like everyone's playing nice?
In theory, yes. They're all working in the same house. Though I still need to run it for longer to really see, ha.
The real starting line: agents stop acting like strangers
The real starting line for an AI native team is the day the agents stop acting like strangers sharing a house.
Model size, agent count, those come after.
Build the shared study first. Then decide who gets a key.
After that, think about whether you really need a fifth employee.
I just finished the first step using exactly this playbook. What comes next, I'm still testing. Plenty of things I thought I had figured out get overturned every time new content shows up.
I'm pretty curious what a real AI native team looks like once it's actually running. And further out, what an AI native economy ends up looking like. This path is long. I'm still feeling my way through. If you have a better idea, I want to hear it.
In your AI team right now, how do you decide which decisions are worth writing down, and which are just noise?
More of what I'm building: voxyz.ai
