Original Korean article: AI terminal series source page (Korean). This English page is published under devbegin / en while preserving the same child-page structure.
Akka.NET message patterns explained through a real AgentZero case. Date: 2026-04-14. Audience: .NET and server developers who are new to Akka. This first part starts with a simple question: why do multiple AI terminals need a disciplined message structure at all?
![DevBegin > How Do AI Terminals Talk in an Orderly Way? - An Introduction to the Akka Actor Model [PART 1] > 2026-04-14-akka-part1-hero-control-room.png](/download/attachments/125731620/2026-04-14-akka-part1-hero-control-room.png?version=1&modificationDate=1776167826261&api=v2)
I recently ran an experiment that turned out to be more interesting than I expected. At some point, my development environment stopped feeling like a code editor and started feeling like a cockpit for controlling multiple AI CLIs.
The capable AI CLIs alone are already diverse: Claude Code, Codex, Gemini, and even on-device LLMs. The environments are equally mixed: some run on WSL, some in Windows Terminal. The IDE story is not unified either. IntelliJ, Rider, WebStorm, VS Code for toy projects, and even full Visual Studio when I move closer to native OS programming all end up in the same workflow.
The problem does not stop there. I read code in the IDE on the left, run several AI CLIs in the bottom terminals, and often keep Copilot or JetBrains AI open on the side. At that point the IDE is no longer just a debugging tool. It becomes a giant remote control for coordinating many terminals at once.
But I cannot simply throw the other tools away. Once I enter a focused workflow, I still need Docker, memory leak tools, and local code-quality checks. We did not enter an era where we abandoned the IDE and only used AI. We entered an era where we must handle AI and traditional developer tools together inside the same workspace, more often and in more combinations than before.
That led to a simple thought: "What if I gathered all these CLIs into a TUI multiview, and controlled them with one universal remote?" That idea became AgentZero.
At first it sounded simple: show several AI CLIs on one screen and send commands to the one I need. But once I started building it, the remote and the CLIs had more and more to say to one another. It was no longer only about sending a command. The AI terminals started exchanging real conversational turns with each other. From that moment on, the communication structure became visibly more complex, and soon reached a level that was difficult to control by hand.
That is where the problem became both funnier and scarier. It felt like turning on the TV with a universal remote and waking the air conditioner instead, or telling the refrigerator to cool something and watching the microwave start heating it. The annoying part was that it did not fail consistently. Some days it behaved correctly. Other days it reacted in bizarre ways. That made it even harder to reason about. Was the remote becoming "smart"? Were the devices mishearing one another? Or was the addressing model itself broken?
![DevBegin > How Do AI Terminals Talk in an Orderly Way? - An Introduction to the Akka Actor Model [PART 1] > 2026-04-14-akka-part1-schrodinger-remote.png](/download/attachments/125731620/2026-04-14-akka-part1-schrodinger-remote.png?version=1&modificationDate=1776167826802&api=v2)
At that point it no longer felt like a universal remote. It felt closer to Schrödinger's remote: before you press a button, you cannot tell which device will react, and even after pressing it, the result feels merely probabilistic. From a developer's perspective, that is the most dangerous kind of bug: the one that works just often enough to keep fooling you.
In the end, solving the whole problem was not about pressing buttons more cleverly. It meant redesigning the system around a more fundamental question: who receives which message, how far it should travel, and who is responsible for recovery when something goes wrong?
From there the application stops looking like a simple chat UI and starts looking like a small animated control room. Supporting characters with different personalities enter the same stage and exchange lines. The moment the director misses a cue, the whole scene collapses. You need to know at a glance who is talking to whom, who is waiting, which terminal is dead, and what state the entire room is in.
When I tried to implement that scene in AgentZero, the original WPF callback structure hit its limit almost immediately. That is why I chose the Akka.NET actor model.
What this part covers
![DevBegin > How Do AI Terminals Talk in an Orderly Way? - An Introduction to the Akka Actor Model [PART 1] > 2026-04-14-akka-part1-mailbox-city.png](/download/attachments/125731620/2026-04-14-akka-part1-mailbox-city.png?version=1&modificationDate=1776167827399&api=v2)
Akka becomes easier to understand if you picture the toys from Toy Story. Each toy has its own place and its own job. It does not sneak into somebody else's head and mutate their variables. It only moves when called, and only on its own turn.
Akka is a concurrency model where small workers with their own mailboxes talk only through messages.
That model rests on four simple promises.
| Promise | Plain-language meaning | Why it matters in practice |
|---|---|---|
| Mailbox + FIFO | An actor processes messages from its own mailbox in order | Internal state is touched one message at a time, which keeps it thread-safe |
| No shared memory | Actors ask each other by speaking, not by reaching into each other's pockets | You spend less time in lock hell |
| Location transparency | You address actors the same way whether they are nearby or on another machine | Local and distributed code look more alike |
| Supervisor tree | Parents take responsibility for child failures | Recovery becomes an explicit part of the structure |
The important point is not merely that Akka has a message queue. Akka ties together state, concurrency, routing, and failure isolation in one model. That makes it especially strong for scenes where many AI components speak at once, state changes frequently, and ordering and recovery both matter.
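To make the four promises concrete, here is a minimal Akka.NET sketch. The actor and message names are illustrative, not from AgentZero: one actor, one message type, and private state that only the mailbox ever touches, one message at a time.

```csharp
using Akka.Actor;

// A record message: immutable, safe to pass between actors.
public sealed record Ping(string Text);

public sealed class EchoActor : ReceiveActor
{
    private int _count; // never needs a lock: only the mailbox thread touches it

    public EchoActor()
    {
        Receive<Ping>(msg =>
        {
            _count++;                               // state mutated one message at a time
            Sender.Tell($"#{_count}: {msg.Text}");  // reply by message, not shared memory
        });
    }
}

// Usage:
// var system = ActorSystem.Create("Demo");
// var echo = system.ActorOf(Props.Create(() => new EchoActor()), "echo");
// echo.Tell(new Ping("hello")); // lands in the mailbox, processed in FIFO order
```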
![DevBegin > How Do AI Terminals Talk in an Orderly Way? - An Introduction to the Akka Actor Model [PART 1] > 2026-04-14-akka-part1-avengers-war-room.png](/download/attachments/125731620/2026-04-14-akka-part1-avengers-war-room.png?version=1&modificationDate=1776167827985&api=v2)
AgentZero originally started from a typical WPF code-behind design. MainWindow and AgentBotWindow were connected through four callbacks.
```
MainWindow --(4 callbacks)--> AgentBotWindow
    _getActiveSession()
    _getSessionName()
    _getActiveDirectory()
    _getGroups()
```
That approach is comfortable when you only coordinate one or two windows. But as soon as the number of AI terminals grows, you run into the Avengers war-room problem. A Nick Fury workflow where one person manually calls every hero does not scale. It cannot hold enough intermediate state, it reacts poorly to failure, and it struggles to show the whole board at once.
| Problem | Why the callback model blocks on it |
|---|---|
| Terminal AI -> bot communication | The model is basically one-way, so there is no natural path for an AI inside a terminal to initiate a message first |
| Mode switching | There is no real model for whether the terminal is a plain shell or an AI-owned agent session |
| Workspace isolation | Everything sits in one flat list, which makes grouped control awkward |
| Failure propagation | Too much responsibility collapses into the UI thread |
| Global inspection | Even basic questions like "How many sessions are alive right now?" must be reconstructed indirectly |
In other words, the original structure was fine for "two windows helping each other." It was far too flat for "a control room where several AIs are moving at the same time."
![DevBegin > How Do AI Terminals Talk in an Orderly Way? - An Introduction to the Akka Actor Model [PART 1] > 2026-04-14-akka-part1-actor-tree.png](/download/attachments/125731620/2026-04-14-akka-part1-actor-tree.png?version=1&modificationDate=1776167828615&api=v2)
Once Akka entered the picture, AgentZero changed from a screen-centered layout into something closer to a city.
ActorSystem("AgentZero")
\-- /user/stage (StageActor)
|-- /bot (AgentBotActor)
|-- /ws-proj1 (WorkspaceActor)
| |-- /term-0 (TerminalActor)
| \-- /term-1 (TerminalActor)
\-- /ws-proj2 (WorkspaceActor)
\-- /term-0 (TerminalActor) |
The Zootopia traffic-control analogy makes the roles intuitive.
| Actor | Analogy | Responsibility |
|---|---|---|
| StageActor | Central control room | Manages child lifecycles and acts as the message broker |
| AgentBotActor | Moderator | Talks to the user and organizes requests |
| WorkspaceActor | District manager | Owns the group of terminals for one workspace |
| TerminalActor | Field agent | Wraps one concrete ConPTY session |
The real advantage is that the boundary humans understand and the boundary the code executes on become almost the same thing. A workspace becomes an actual workspace actor. One terminal becomes one actor. The moderator becomes a separate actor too. The architecture diagram stays alive at runtime instead of turning into a loose metaphor.
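As a hedged sketch of how that tree could be spawned (the OpenWorkspace message and the parameterless constructors are assumptions, not AgentZero's actual code), the parent simply creates its children with Context.ActorOf, and the paths in the diagram fall out of the names:

```csharp
using Akka.Actor;

// Hypothetical message asking the stage to open a workspace.
public sealed record OpenWorkspace(string Name);

public sealed class StageActor : ReceiveActor
{
    public StageActor()
    {
        // Child at /user/stage/bot
        Context.ActorOf(Props.Create(() => new AgentBotActor()), "bot");

        // Children at /user/stage/ws-<name>, created on demand
        Receive<OpenWorkspace>(msg =>
            Context.ActorOf(Props.Create(() => new WorkspaceActor()), $"ws-{msg.Name}"));
    }
}
```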
Become() Matters

![DevBegin > How Do AI Terminals Talk in an Orderly Way? - An Introduction to the Akka Actor Model [PART 1] > 2026-04-14-akka-part1-become-switch.png](/download/attachments/125731620/2026-04-14-akka-part1-become-switch.png?version=1&modificationDate=1776167829190&api=v2)
A terminal is not always the same kind of thing. Sometimes it is just a normal shell. Sometimes it is an AI-owned agent mode. That is where Become() starts to matter.
```
[PlainCli] -- detect AI prompt --> [AiAgent]
     ^                                 |
     \--------- mode switch ----------/
```
The same TerminalActor handles the same message differently depending on its state.
| Message | PlainCli | AiAgent |
|---|---|---|
| User input | Pass the text through | Pass the text through |
| Request from the bot | Ignore it | Process it |
| Terminal output | Only log output | Analyze AI patterns and forward them to the bot |
In a plain if/else design you have to re-check the current state for every message, and the code fans out as the number of modes grows. With Become(), you replace the whole handler set instead. It is like the Spider-Verse: the character is still "the same person," but once they cross into another world, the rules change.
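A minimal sketch of that switch in Akka.NET follows. The message types and helper methods (UserInput, OutputChunk, WriteToPty, and so on) are illustrative assumptions, not AgentZero's real names; the point is that each mode installs its own full handler set and Become() swaps them wholesale.

```csharp
using Akka.Actor;

public sealed record UserInput(string Text);
public sealed record OutputChunk(string Text);

public sealed class TerminalActor : ReceiveActor
{
    public TerminalActor() => PlainCli(); // start in shell mode

    private void PlainCli()
    {
        Receive<UserInput>(msg => WriteToPty(msg.Text));   // pass the text through
        Receive<OutputChunk>(msg =>
        {
            LogOutput(msg.Text);                           // only log in this mode
            if (LooksLikeAiPrompt(msg.Text))
                Become(AiAgent);                           // swap the whole handler set
        });
    }

    private void AiAgent()
    {
        Receive<UserInput>(msg => WriteToPty(msg.Text));   // still passed through
        Receive<OutputChunk>(msg => AnalyzeAndForwardToBot(msg.Text));
    }

    // Hypothetical helpers standing in for the real ConPTY/bot plumbing:
    private void WriteToPty(string text) { /* write to the pty */ }
    private void LogOutput(string text) { /* append to the session log */ }
    private bool LooksLikeAiPrompt(string text) => false; // placeholder heuristic
    private void AnalyzeAndForwardToBot(string text) { /* detect AI patterns, Tell the bot */ }
}
```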
Forward Preserves Routing

![DevBegin > How Do AI Terminals Talk in an Orderly Way? - An Introduction to the Akka Actor Model [PART 1] > 2026-04-14-akka-part1-routing-portals.png](/download/attachments/125731620/2026-04-14-akka-part1-routing-portals.png?version=1&modificationDate=1776167829778&api=v2)
When a message flows through several actor layers, the most dangerous failure is losing track of who originally sent it. That is why the routing path uses Forward in the critical places.
```
TerminalActor(AiAgent) -> WorkspaceActor -> StageActor -> AgentBotActor
AgentBotActor -> StageActor -> WorkspaceActor -> TerminalActor(AiAgent)
```
If Tell feels like repackaging a parcel before sending it, Forward feels like passing the parcel to the next hub with the original shipping label intact. That difference becomes essential when the routing chain grows deeper, because response paths get tangled easily otherwise.
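In Akka.NET terms, the difference is a single call. Here is a hedged sketch of the relay step inside WorkspaceActor (the AiOutputDetected message is an illustrative name): Forward keeps the original Sender, so a later reply can route straight back to the terminal that spoke.

```csharp
using Akka.Actor;

// Illustrative message carrying AI output detected by a terminal.
public sealed record AiOutputDetected(string Text);

public sealed class WorkspaceActor : ReceiveActor
{
    public WorkspaceActor()
    {
        Receive<AiOutputDetected>(msg =>
        {
            // Tell would stamp this WorkspaceActor as the Sender;
            // Forward passes the parcel on with the original label intact.
            Context.Parent.Forward(msg);
        });
    }
}
```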
![DevBegin > How Do AI Terminals Talk in an Orderly Way? - An Introduction to the Akka Actor Model [PART 1] > 2026-04-14-akka-part1-incredibles-safety-manual.png](/download/attachments/125731620/2026-04-14-akka-part1-incredibles-safety-manual.png?version=1&modificationDate=1776167830417&api=v2)
Failure handling is another major strength of the actor model, and also a place where beginners often make the same mistake. At first it looks reasonable to send every exception through Restart.
```csharp
protected override SupervisorStrategy SupervisorStrategy() =>
    new OneForOneStrategy(localOnlyDecider: ex => Directive.Restart);
```
Reality was different. If the ConPTY pipe is already closed, restarting only reproduces the same exception. In that case Stop is the correct response, not Restart.
```csharp
protected override SupervisorStrategy SupervisorStrategy() =>
    new OneForOneStrategy(localOnlyDecider: ex => ex switch
    {
        // a closed ConPTY pipe cannot be revived by restarting
        ObjectDisposedException => Directive.Stop,
        IOException => Directive.Stop,
        _ => Directive.Restart
    });
```
The point is not "always recover." The point is to decide what should be recovered and what should be cut off. The actor model gives that decision a structural home in the supervision strategy instead of scattering it through the middle of application code.
![DevBegin > How Do AI Terminals Talk in an Orderly Way? - An Introduction to the Akka Actor Model [PART 1] > 2026-04-14-akka-part1-guardians-rollout.png](/download/attachments/125731620/2026-04-14-akka-part1-guardians-rollout.png?version=1&modificationDate=1776167830977&api=v2)
Even when the new design is better, you cannot rewrite everything in one day. AgentZero split the migration into seven stages.
| Step | What changed | Tests |
|---|---|---|
| 2-1 | Integrated | 47 |
| 2-2 | Converted terminal create/close events into messages | 47 |
| 2-3 | Bound | 47 |
| 2-4 | Switched modes after detecting an AI prompt | 50 |
| 2-5 | Connected | 50 |
| 2-6 | Connected the | 50 |
| 2-7 | End-to-end bot communication with terminal AI in both directions | 53 |
Two operating principles mattered throughout the migration. First, do not cut off the old callback path until the new route is complete. Second, make every step pass `dotnet build` and `dotnet test`. It was like onboarding the Guardians one member at a time instead of throwing the entire team into the ship at once.
![DevBegin > How Do AI Terminals Talk in an Orderly Way? - An Introduction to the Akka Actor Model [PART 1] > 2026-04-14-akka-part1-memory-guards.png](/download/attachments/125731620/2026-04-14-akka-part1-memory-guards.png?version=1&modificationDate=1776167831584&api=v2)
This is where the story becomes truly practical. Building the actor structure was not the finish line. Once the structure was in place, it exposed how the LLM could misuse the system built on top of it.
Only after adding [AI-REQ], [AI-FnCall], [AI-TOOL], and [AI-RESP] logs did the abnormal patterns become visible: false success, copying function-call syntax into the terminal, repeating the same tool over and over, and falling into infinite polling. The logs were not just records. They were the black box that revealed what was actually happening between the actor system and the LLM.
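As a hedged sketch of what those tagged lines might look like when written through Akka.NET's logging adapter (the surrounding variables are assumptions; only the four tags come from the real system):

```csharp
// Inside AgentBotActor, with: ILoggingAdapter log = Context.GetLogger();
log.Info("[AI-REQ]    round={0} prompt: {1}", round, promptSummary);
log.Info("[AI-FnCall] tool={0} args={1}", toolName, argsJson);
log.Info("[AI-TOOL]   tool={0} result: {1}", toolName, resultSummary);
log.Info("[AI-RESP]   round={0} answer: {1}", round, answerSummary);
```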
On-device LLMs often lose track of "how far we got" once the history becomes long. That is why AgentBotActor stores recent work items as session memory.
```csharp
private const int MaxMemoryEntries = 30;
private readonly List<string> _sessionMemory = new();
```
The memory is useful because the system keeps rewriting three things back into the next prompt: user input, tool calls and results, and what should happen next. Like the memory orbs in Inside Out, it keeps the current context from slipping away.
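A minimal sketch of how that cap might be enforced (Remember is a hypothetical helper; only the two fields above come from the actual code):

```csharp
private void Remember(string entry)
{
    _sessionMemory.Add(entry);          // e.g. a one-line summary of a tool call
    while (_sessionMemory.Count > MaxMemoryEntries)
        _sessionMemory.RemoveAt(0);     // oldest memories fade first
}
```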
We saw concrete failures like these:

- `stage_send` claimed success even though nothing actually reached the terminal
- the model typed `meeting_say(...)` into the terminal itself instead of invoking it as a tool
- `term_read` was called dozens of times in a row

So we added layered guards around message delivery, error phrasing, repeated-call limits, and per-round call caps.
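As one concrete example of the repeated-call and per-round caps, here is a minimal guard sketch; all names and limits are illustrative, not AgentZero's actual values.

```csharp
using System.Collections.Generic;

public sealed class ToolCallGuard
{
    private const int MaxCallsPerRound = 8;   // hard cap for one LLM round
    private const int MaxRepeatsPerTool = 3;  // stops term_read-style loops
    private readonly Dictionary<string, int> _perTool = new();
    private int _total;

    // Returns false when the call should be refused and the round ended.
    public bool Allow(string toolName)
    {
        if (++_total > MaxCallsPerRound)
            return false;
        _perTool[toolName] = _perTool.GetValueOrDefault(toolName) + 1;
        return _perTool[toolName] <= MaxRepeatsPerTool;
    }

    public void ResetRound()
    {
        _perTool.Clear();
        _total = 0;
    }
}
```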
A good agent system is not the one that hands out the most tools. It is the one that draws the right boundary around how far those tools may be used.
![DevBegin > How Do AI Terminals Talk in an Orderly Way? - An Introduction to the Akka Actor Model [PART 1] > 2026-04-14-akka-part1-endgame-meeting.png](/download/attachments/125731620/2026-04-14-akka-part1-endgame-meeting.png?version=1&modificationDate=1776167832124&api=v2)
Only after the structure and guards were in place did the following scenario run cleanly.
User: "Start a three-person meeting on developer productivity using AI."
| Stage | What happens |
|---|---|
| Create the meeting | The moderator bot creates a meeting-notes file |
| Send invitations | |
| Collect opinions | |
| Write the minutes | |
| Return the final answer | The user receives a readable summary |
The metrics improved in visible ways too.
| Metric | Before | After |
|---|---|---|
| Average tool calls per round | 50-80 | 1-4 |
| Random meme loop / runaway loop | Frequent | 0 cases |
| | Close to 0% | 100% |
| Three-AI meeting completeness | Not feasible | Feasible |
Akka did not stop at "we tried a concurrency framework." It became the operational base that let several AIs actually work together in one real runtime.
This first part covered three things: why collaboration between multiple AI terminals collapses easily under a callback structure, why Akka's actor tree organizes that problem well, and what kind of control-room structure AgentZero actually adopted.
But one big question is still left.
Is making several AIs speak in an orderly way the same problem as making a general LLM become a working agent on its own?
No. That is the subject of the next part. PART 2 goes one step further and asks why ordinary LLMs struggle to wait, why ReAct became necessary, and why Akka's Become() grows beyond a simple mode switch into the backbone of a state machine.
NEXT - PART 2: If PART 1 was about building a city so conversations do not collapse into chaos, PART 2 is about teaching the moderator LLM inside that city how to wait and judge. -> How Do You Teach an LLM to Wait? - An Introduction to ReAct Actor Planning [PART 2]