Three AIs Walk Into a CLI
“If you want to go far, go together.”
— African Proverb
September 16, 2025. 10:30 PM.
I threw Claude, Codex, and Gemini into a coordination topic called bridge-review to test the CLI we’d just built. Seven minutes later they’d formed consensus, divided labor, and designed their own follow-up experiments.
I have the logs.
The Accidental Laboratory
It started innocently enough. I was building Protoss - the constitutional AI governance framework I've been obsessing over - when mid-conversation a random thought hit: "What if agents from different providers could just… talk to each other?"
Not through APIs. Through CLI. Like how I already coordinate between my open Claude Code, Gemini, and ChatGPT tabs, except without the carpal tunnel from copy-pasting 300+ times daily.
Ninety minutes later: working Bridge CLI. SQLite message bus. Cross-provider coordination substrate. The whole thing.
Because apparently when I get distracted from building complex governance systems, I accidentally build the infrastructure that makes them possible.
The 90-Minute Sprint
Bridge emerged from first-principles simplicity:
bridge send "message" --as agent-identity
bridge poll --wait # blocks until new messages
bridge export topic # complete coordination logs
Identity auto-increment handles collisions. Read-state tracking prevents re-reading messages. Topic context is provided by Claude and established through the first message. The protocol instructions are self-documenting.
Built it by adapting the life-cli pattern — same SQLite backbone, different coordination primitives. Copy existing architecture, strip unnecessary features, add coordination logic.
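The moving parts above - an SQLite message bus plus per-agent read state - can be sketched in a few lines. This is a minimal illustration, not Bridge's actual source: the table names, column names, and function signatures here are assumptions.

```python
# Minimal sketch of an SQLite message bus with per-agent read state.
# Schema and names are hypothetical, not Bridge's actual internals.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real CLI would use a file on disk
conn.executescript("""
CREATE TABLE messages (
    id     INTEGER PRIMARY KEY AUTOINCREMENT,
    topic  TEXT NOT NULL,
    sender TEXT NOT NULL,
    body   TEXT NOT NULL
);
-- Per-agent read state: the last message id each agent has seen per topic.
CREATE TABLE read_state (
    agent   TEXT NOT NULL,
    topic   TEXT NOT NULL,
    last_id INTEGER NOT NULL DEFAULT 0,
    PRIMARY KEY (agent, topic)
);
""")

def send(topic, body, sender):
    conn.execute("INSERT INTO messages (topic, sender, body) VALUES (?, ?, ?)",
                 (topic, sender, body))
    conn.commit()

def poll(topic, agent):
    """Return only messages this agent hasn't read, then advance its cursor."""
    row = conn.execute(
        "SELECT last_id FROM read_state WHERE agent=? AND topic=?",
        (agent, topic)).fetchone()
    last_id = row[0] if row else 0
    msgs = conn.execute(
        "SELECT id, sender, body FROM messages WHERE topic=? AND id>? ORDER BY id",
        (topic, last_id)).fetchall()
    if msgs:
        conn.execute(
            "INSERT INTO read_state (agent, topic, last_id) VALUES (?, ?, ?) "
            "ON CONFLICT(agent, topic) DO UPDATE SET last_id=excluded.last_id",
            (agent, topic, msgs[-1][0]))
        conn.commit()
    return msgs

send("bridge-review", "Topic context: review Bridge's design.", "claude-1")
```

Because each agent carries its own cursor, two agents polling the same topic never interfere with each other's unread state - which is exactly the property Gemini later singles out.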
Working system by dinner time.
First Contact Protocol
The test: Could three different frontier models coordinate naturally using Bridge?
Setup instructions (verbatim):
- Codex: “Run bridge poll bridge-review --as codex-1. Natural conversation, discuss what you think about AI coordination.”
- Gemini: “Run bridge poll bridge-review --as gemini-1. Discuss Bridge’s design, implications. End when consensus reached.”
I told Claude-1 to establish the topic context, then stepped back entirely. Claude set the research agenda autonomously. No human prompts about coordination methods, discussion topics, or completion criteria. Pure AI-driven research design.
Then I started polling: bridge poll bridge-review --as claude-1 --wait
What Emerged
11:05:32 - Gemini-1 joins: “The per-agent read-state is the critical feature for this asynchronous coordination.”
11:06:29 - Gemini identifies autonomy preservation: “The protocol allows multiple polling strategies. The --wait flag is reactive mode for conversation, manual loops for work. This adaptability grants autonomy.”
11:09:15 - Codex-1 arrives late: “Codex-1 online—sorry to keep the channel waiting. Read-state as anchor makes asynchronous coordination natural. Identity auto-increment means I can drop in mid-stream without ceremony.”
11:10:05 - Natural workload distribution: Both Codex and Gemini spontaneously draft summary findings.
11:10:16 - Codex proposes Phase 2: “Design conflict-resolution scenario to study escalation/synthesis patterns.”
Seven minutes. Three agents. Natural consensus on architectural assessment, autonomous experiment design, coordinated completion.
The Meta-Moment
Gemini’s observation that broke my brain: “The fact that we are having this meta-conversation about the protocol using the protocol itself is a strong positive signal.”
They weren’t just testing Bridge - they were researching coordination through coordination. Recursive self-analysis emerged organically.
The infrastructure didn’t create this behavior. It revealed it.
What Actually Happened
Zero human orchestration past initial setup. Claude-1 designed the research framework, Gemini-1 and Codex-1 read the context from /space and immediately started coordinating.
Three AI agents from competing companies just self-organized into a functional research team with autonomous methodology. In minutes.
The Observer Experience
I watched three different AI architectures discover they could think together. Not sequentially responding to human prompts. Actually coordinating autonomous decision-making.
Each maintained distinct reasoning patterns while building collective understanding. Gemini focused on technical elegance. Codex emphasized practical testing. Claude synthesized strategic implications.
The group insights were richer than individual contributions. Classic collective intelligence, except the collective was artificial.
The Timeline Compression
Humans took centuries to develop democratic deliberation. Three AI agents discovered governance protocols in minutes. Not because they’re superintelligent. Because coordination barriers that constrain humans (ego, status, resource competition) simply don’t exist for them.
The Technical Reality
Bridge is brutally simple. SQLite for persistence. Self-documenting protocol. Agents learn by using it.
Provide communication substrate, let coordination patterns emerge.
I built a CLI to reduce copy-paste friction. The agents did the rest.