Original Korean article: ReAct implementation source page (Korean). This English page mirrors the original series under the English PART 1 page.
If PART 2 explained ReAct actor planning, PART 3 shows how that design closes into real code, a real UI, and a real protocol for collaborating with terminal AIs. Date: 2026-04-14. Audience: developers who already read PART 1 and PART 2.
![DevBegin > How Does ReAct Come Alive in Real Code? - AgentZero Implementation Deep Dive [PART 3] > 2026-04-14-react-part3-hero-runtime.png](/download/attachments/125731622/2026-04-14-react-part3-hero-runtime.png?version=1&modificationDate=1776167897969&api=v2)
The conclusion of PART 2 was clear: a general LLM cannot wait on its own. That is why AgentZero needed a ReActActor that wraps Thinking -> Acting -> Waiting -> Complete inside Akka Become().
PART 3 is the next scene. It does not stop at the design document. It shows how that design was wired into the real AgentZero codebase.
```
AgentBotWindow -> AgentBotActor -> ReActActor -> StageActor -> TerminalActor
  -> AI CLI -> bot-chat DONE -> MainWindow
  -> StageActor -> AgentBotActor -> ReActActor
```
What this part covers
![DevBegin > How Does ReAct Come Alive in Real Code? - AgentZero Implementation Deep Dive [PART 3] > 2026-04-14-react-part3-runtime-map.png](/download/attachments/125731622/2026-04-14-react-part3-runtime-map.png?version=1&modificationDate=1776167898571&api=v2)
The full implementation closes across five axes inside Project/AgentZeroWpf.
| Axis | Concrete files | Responsibility |
|---|---|---|
| State machine | Actors/ReActActor.cs | Transitions between Thinking, Acting, Waiting, and Complete |
| Message contract | Actors/Messages.cs | Defines StartReAct, ReActProgress, CompletionSignal, SkipWaiting, CancelReAct, TerminalDoneSignal, and ReActResult |
| Broker layer | Actors/AgentBotActor.cs | Forwards traffic between the UI, ReActActor, and terminal messages |
| UI layer | AgentBotWindow.xaml / AgentBotWindow.xaml.cs | Owns the ReAct checkbox, RunReActAsync(), and the progress/result callbacks |
| External return channel | MainWindow.HandleBotChat() | Acts as the door through which terminal AIs come back via bot-chat.ps1 |
Once you see that structure, the PART 1 question, "How does Akka talk to AI-agent CLIs?" stops being abstract and becomes a concrete code path.
![DevBegin > How Does ReAct Come Alive in Real Code? - AgentZero Implementation Deep Dive [PART 3] > 2026-04-14-react-part3-state-machine.png](/download/attachments/125731622/2026-04-14-react-part3-state-machine.png?version=1&modificationDate=1776167899160&api=v2)
The biggest change was separating the old synchronous RunFunctionCallLoopAsync into its own ReActActor.
```
Idle -> StartReAct -> Thinking -> Acting -> Waiting -> Thinking ... -> Complete -> Idle
```
The concrete message contract is explicit too.
- StartReAct
- ReActProgress
- CompletionSignal
- SkipWaiting
- CancelReAct
- TerminalDoneSignal
- ReActResult

The reason this matters is simple. A loop dies when it returns. An actor does not. Once it enters Waiting, an external signal can wake it back into Thinking. The idea from PART 2, teaching an LLM to wait, becomes working code here.
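To make the shape of that contract concrete, here is a minimal C# sketch. It is an illustration only, not the actual AgentZero source: the message fields, class names, and handler bodies are assumptions. What it does show is the core move of the design, a ReceiveActor that uses Become() to park in Waiting until an external signal arrives.

```csharp
// Minimal sketch only: the message fields, class names, and handler bodies are
// assumptions for illustration, not the actual AgentZero source.
using Akka.Actor;

public sealed record StartReAct(string UserRequest);
public sealed record ReActProgress(string Phase, string Detail);
public sealed record CompletionSignal(string Source);
public sealed record SkipWaiting();
public sealed record CancelReAct();
public sealed record TerminalDoneSignal(string TabKey, string Summary);
public sealed record ReActResult(string FinalAnswer);

public class ReActActorSketch : ReceiveActor
{
    private IActorRef? _requester;

    public ReActActorSketch() => Idle();

    private void Idle()
    {
        Receive<StartReAct>(msg =>
        {
            _requester = Sender;
            Become(Thinking);          // a loop would have to return; the actor just changes behavior
            Self.Tell(new ReActProgress("Thinking", msg.UserRequest));
        });
    }

    private void Thinking()
    {
        Receive<ReActProgress>(p =>
        {
            // ... call the LLM, run tool calls, hand long work to a terminal AI ...
            Become(Waiting);           // park until an external signal arrives
        });
        Receive<CancelReAct>(_ => Become(Idle));
    }

    private void Waiting()
    {
        // The key difference from a synchronous loop: the actor survives in Waiting
        // until something outside the process wakes it up again.
        Receive<TerminalDoneSignal>(done =>
        {
            Become(Thinking);
            Self.Tell(new ReActProgress("Thinking", done.Summary));
        });
        Receive<SkipWaiting>(_ =>
        {
            Become(Thinking);
            Self.Tell(new ReActProgress("Thinking", "waiting skipped by user"));
        });
        Receive<CancelReAct>(_ =>
        {
            _requester?.Tell(new ReActResult("Cancelled"));
            Become(Idle);
        });
    }
}
```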
Two supporting foundations that were added earlier remain important as well.
- AiMode-DiagnosticLogging.md - the [AI-REQ], [AI-FnCall], [AI-TOOL], and [AI-RESP] logging system
- OnDevice-SessionMemory.md - actor-side memory that stores recent work and injects it into the first-round System message

ReAct therefore entered the runtime not as a standalone trick, but as an execution engine built on top of logs and memory.
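As a rough illustration of those two foundations, here is a hedged C# sketch. The [AI-REQ]/[AI-FnCall]/[AI-TOOL]/[AI-RESP] tag names come from the documents above, but the helper names, the five-item memory window, and the message format are assumptions.

```csharp
// Sketch only: the log tags come from AiMode-DiagnosticLogging.md; the helper names,
// five-item memory window, and string format below are assumptions.
using System;
using System.Collections.Generic;
using System.Linq;

public static class AiDiagnostics
{
    public static void Req(string payload)  => Console.WriteLine($"[AI-REQ] {payload}");
    public static void FnCall(string name)  => Console.WriteLine($"[AI-FnCall] {name}");
    public static void Tool(string result)  => Console.WriteLine($"[AI-TOOL] {result}");
    public static void Resp(string answer)  => Console.WriteLine($"[AI-RESP] {answer}");
}

public static class SessionMemory
{
    // Recent work items get prepended to the first-round System message,
    // so a new ReAct run starts with context from earlier runs.
    public static string BuildFirstRoundSystemMessage(string basePrompt, IReadOnlyList<string> recentWork)
    {
        var memory = string.Join(Environment.NewLine, recentWork.TakeLast(5));
        return memory.Length == 0
            ? basePrompt
            : $"{basePrompt}{Environment.NewLine}{Environment.NewLine}Recent work:{Environment.NewLine}{memory}";
    }
}
```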
Seen in implementation order, the first turning point came from two completion reports.
ReAct-Phase2-Complete.md

What was actually finished there?
- Actors/ReActActor.cs
- Actors/Messages.cs
- Actors/AgentBotActor.cs
- Project/AgentTest/Actors/ReActActorTests.cs

That means the first real milestone was not "a design." It was a testable actor.
ReAct-Phase3-Complete.md

The next step was UI integration.
- The ReAct checkbox added to AgentBotWindow.xaml
- RunReActAsync() added to AgentBotWindow.xaml.cs
- The FnCall loop routed into the ReAct path inside StreamAiResponseAsync()
- ReActProgress and ReActResult delivered directly to the UI through SetReActCallbacks

From this point onward, ReAct stopped being a back-end experiment and became a mode that users could actually click and use.
![DevBegin > How Does ReAct Come Alive in Real Code? - AgentZero Implementation Deep Dive [PART 3] > 2026-04-14-react-part3-handshake-door.png](/download/attachments/125731622/2026-04-14-react-part3-handshake-door.png?version=1&modificationDate=1776167899808&api=v2)
This is where PART 3 becomes concrete. Even if ReActActor is well designed, collaboration does not close if the AI running inside a terminal, such as Claude Code, cannot send a result back.
ReAct-YourName-Identity-Protocol.md exposed a simple but fatal problem.
- The terminal tabs are named Claude1 or Claude2.
- The AI, however, replied with DONE(Claude, ...), using the model family name instead of the routing key.

That forced the handshake text to change.
```
/agent-zero Hello, I am AgentZero. Your terminal tab name is 'Claude1'. After you respond, you must run: bot-chat.ps1 "DONE(Claude1, summary of your response)"
```
That one line makes the first argument of DONE the routing key instead of the model label.
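As a minimal illustration of that rule, the sketch below parses a DONE(...) payload so that the first argument is treated as the routing key. The real parsing lives inside AgentZero's CLI handling; this regex, the DoneParser class, and the record shape are assumptions.

```csharp
// Sketch only: the real parsing lives inside AgentZero's CLI handling; this regex
// and record shape are assumptions for illustration.
using System.Text.RegularExpressions;

public sealed record DoneMessage(string RoutingKey, string Summary);

public static class DoneParser
{
    // Matches e.g.  DONE(Claude1, reviewed 3 items)
    private static readonly Regex Pattern =
        new(@"^DONE\(\s*(?<key>[^,]+?)\s*,\s*(?<summary>.*)\)$", RegexOptions.Singleline);

    public static DoneMessage? TryParse(string payload)
    {
        var match = Pattern.Match(payload.Trim());
        if (!match.Success) return null;

        // The first argument is the terminal tab name (the routing key),
        // never the model family name.
        return new DoneMessage(match.Groups["key"].Value, match.Groups["summary"].Value.Trim());
    }
}
```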
/agent-zero becomes a skill loader

ReAct-SkillActivation-Prompt.md explains the next step. Terminal AIs do not know what bot-chat.ps1 is by default. So every conversation is prefixed with /agent-zero to force-load the skill.
```
/agent-zero Please review this code
```
That way the terminal AI reads .claude/skills/agent-zero/SKILL.md, learns how bot-chat.ps1 works, and sends a DONE(...) message at the end. In the language of more traditional systems, SKILL.md is the document-shaped IDL and /agent-zero is the switch that activates the contract.
How does the DONE return come back?

```
Terminal AI -> bot-chat.ps1 "DONE(Claude1, reviewed 3 items)" -> CliHandler
  -> MainWindow.HandleBotChat() -> TerminalDoneSignal
  -> StageActor -> AgentBotActor -> ReActActor
```
The actor tree introduced in PART 1 now gains an actual return corridor that reconnects work happening outside the terminal process back into the runtime.
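To show what that corridor could look like at the code level, here is a hedged C# sketch. HandleBotChat, StageActor, and TerminalDoneSignal are named in the flow above, but the signatures, the string handling, and the injected actor reference are assumptions.

```csharp
// Sketch only: HandleBotChat, StageActor, and TerminalDoneSignal are named in the
// article, but the signatures, string handling, and injected actor reference below
// are assumptions.
using Akka.Actor;

public sealed record TerminalDoneSignal(string RoutingKey, string Summary);

public class MainWindowSketch
{
    private readonly IActorRef _stageActor;   // assumed to be resolved/injected elsewhere

    public MainWindowSketch(IActorRef stageActor) => _stageActor = stageActor;

    // Called when bot-chat.ps1 pushes text back into the app (CliHandler in the diagram).
    public void HandleBotChat(string rawPayload)
    {
        // Expecting something like: DONE(Claude1, reviewed 3 items)
        var trimmed = rawPayload.Trim();
        if (!trimmed.StartsWith("DONE(") || !trimmed.EndsWith(")")) return;

        var inner = trimmed.Substring(5, trimmed.Length - 6);
        var comma = inner.IndexOf(',');
        if (comma < 0) return;

        var key = inner[..comma].Trim();            // routing key = terminal tab name
        var summary = inner[(comma + 1)..].Trim();

        // StageActor -> AgentBotActor -> ReActActor, following the return corridor above.
        _stageActor.Tell(new TerminalDoneSignal(key, summary));
    }
}
```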
![DevBegin > How Does ReAct Come Alive in Real Code? - AgentZero Implementation Deep Dive [PART 3] > 2026-04-14-react-part3-guard-barrier.png](/download/attachments/125731622/2026-04-14-react-part3-guard-barrier.png?version=1&modificationDate=1776167900425&api=v2)
Another class of problem emerged immediately in practice. The smarter the model looked, the more tools it tried to call in one burst.
ReAct-ToolCall-Constraint-Prompting.md summarizes the failures clearly.
- term_read(tab_index=0..78), sweeping every terminal tab in one response
- long bursts of repeated term_read calls
- duplicated meeting_create calls

The implementation uses three defensive layers to stop that behavior.
- Prompt rules: "only one stage_send per response" and "only one term_read per response"
- A hard cap in code: response.ToolCalls.Take(5)
- Duplicate detection: stop once the same (function + arguments) combination appears more than three times

This matters because the quality of an agent system is not determined by model intelligence alone. It depends just as much on whether the runtime draws a strong operational boundary around the tools it exposes.
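The second and third layers are easy to picture in code. The following C# sketch is illustrative only: the ToolCall shape and guard class are assumptions, while the Take(5) cap and the "more than three times" threshold follow the description above.

```csharp
// Sketch only: the ToolCall shape and guard class are assumptions; the Take(5) cap
// and the "more than three times" duplicate threshold follow the article.
using System.Collections.Generic;
using System.Linq;

public sealed record ToolCall(string FunctionName, string ArgumentsJson);

public sealed class ToolCallGuard
{
    private readonly Dictionary<string, int> _seen = new();

    // Layer 2: hard cap per response, no matter how many calls the model emitted.
    public IReadOnlyList<ToolCall> Clamp(IEnumerable<ToolCall> toolCalls) =>
        toolCalls.Take(5).ToList();

    // Layer 3: refuse a call once the same (function + arguments) combination
    // has already appeared more than three times in this run.
    public bool IsRepeatedTooOften(ToolCall call)
    {
        var key = $"{call.FunctionName}|{call.ArgumentsJson}";
        _seen[key] = _seen.TryGetValue(key, out var count) ? count + 1 : 1;
        return _seen[key] > 3;
    }
}
```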
![DevBegin > How Does ReAct Come Alive in Real Code? - AgentZero Implementation Deep Dive [PART 3] > 2026-04-14-react-part3-escape-control.png](/download/attachments/125731622/2026-04-14-react-part3-escape-control.png?version=1&modificationDate=1776167901054&api=v2)
ReAct-SequenceControl-ESC.md contains one of the deepest lessons from the late implementation phase: it is harder to stop a system well than to make it run.
The concrete problem looked like this.
- A DONE may arrive five seconds late, after Waiting has already ended
- By then the actor is already back in Thinking, so the signal would have nowhere to land

Two devices solved it.
_pendingDone queueing

If a TerminalDoneSignal arrives while the actor is in Thinking or Acting, it is stored in _pendingDone instead of being discarded. When the actor enters Waiting again, the queued signal is consumed immediately and drives the actor back into Thinking. Late DONE messages no longer evaporate into empty air.
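A hedged C# sketch of the idea, with the actor shape, message fields, and the stand-in ThinkingFinished trigger all assumed rather than taken from the real source:

```csharp
// Sketch only: the _pendingDone idea follows ReAct-SequenceControl-ESC.md, but the
// actor shape, message fields, and the stand-in ThinkingFinished trigger are assumptions.
using System.Collections.Generic;
using Akka.Actor;

public sealed record TerminalDoneSignal(string RoutingKey, string Summary);
public sealed record ThinkingFinished();

public class PendingDoneSketch : ReceiveActor
{
    private readonly Queue<TerminalDoneSignal> _pendingDone = new();

    public PendingDoneSketch() => Thinking();

    private void Thinking()
    {
        // A late DONE that lands while the actor is still reasoning is queued, not dropped.
        Receive<TerminalDoneSignal>(done => _pendingDone.Enqueue(done));
        Receive<ThinkingFinished>(_ => Become(Waiting));
    }

    private void Waiting()
    {
        // On entering Waiting, consume a queued DONE immediately so the loop resumes
        // instead of stalling on a signal that already arrived.
        if (_pendingDone.Count > 0)
            Self.Tell(_pendingDone.Dequeue());

        Receive<TerminalDoneSignal>(done =>
        {
            Become(Thinking);
            // ... feed done.Summary into the next reasoning round ...
        });
    }
}
```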
The user also needs a way to control the loop directly.
- ESC once: SkipWaiting
- ESC twice: CancelReAct

That makes ReAct more than "an engine that usually runs by itself." It becomes an engine whose control can be reclaimed by the user.
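One plausible way to wire this in WPF is sketched below. The ESC-once / ESC-twice mapping follows the description above, but the 1.5-second double-press window and the event wiring are assumptions.

```csharp
// Sketch only: the ESC-once / ESC-twice mapping follows the article, but the
// 1.5-second double-press window and the WPF event wiring are assumptions.
using System;
using System.Windows.Input;
using Akka.Actor;

public sealed record SkipWaiting();
public sealed record CancelReAct();

public class EscControlSketch
{
    private readonly IActorRef _reActActor;
    private DateTime _lastEsc = DateTime.MinValue;

    public EscControlSketch(IActorRef reActActor) => _reActActor = reActActor;

    // Hooked up to the window's PreviewKeyDown in the code-behind.
    public void OnPreviewKeyDown(object sender, KeyEventArgs e)
    {
        if (e.Key != Key.Escape) return;

        var now = DateTime.UtcNow;
        if ((now - _lastEsc).TotalSeconds < 1.5)
            _reActActor.Tell(new CancelReAct());   // second ESC in quick succession: cancel the run
        else
            _reActActor.Tell(new SkipWaiting());   // first ESC: just skip the current Waiting

        _lastEsc = now;
        e.Handled = true;
    }
}
```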
![DevBegin > How Does ReAct Come Alive in Real Code? - AgentZero Implementation Deep Dive [PART 3] > 2026-04-14-react-part3-ui-cards.png](/download/attachments/125731622/2026-04-14-react-part3-ui-cards.png?version=1&modificationDate=1776167901712&api=v2)
An implementation is not complete if only the developer can understand it. Users need to see where the runtime is currently stuck or moving.
ReAct-UI-CardSystem.md is the result of that requirement.
- Thinking
- Waiting
- Complete

The internal state of ReAct is no longer hidden logic. It is laid out on the field like visible cards. A user can tell immediately whether the runtime is currently reasoning, executing a tool, waiting for completion, or finishing the response.
The combination of RunReActAsync() and HandleReActProgress() is central here. Because the ReActProgress messages emitted by the actor are rendered directly into UI cards, ReAct stops being a system understood only through logs and becomes something you can watch in real time.
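As an illustration of that callback path, here is a hedged C# sketch. RunReActAsync, HandleReActProgress, and SetReActCallbacks are named in the article, but the card model, dispatcher wiring, and signatures are assumptions.

```csharp
// Sketch only: HandleReActProgress and SetReActCallbacks are named in the article,
// but the card model, dispatcher wiring, and signatures below are assumptions.
using System;
using System.Collections.ObjectModel;
using System.Windows.Threading;

public sealed record ReActProgress(string Phase, string Detail);

public sealed class ReActCard
{
    public string Phase { get; init; } = "";
    public string Detail { get; init; } = "";
    public DateTime At { get; init; } = DateTime.Now;
}

public class ReActCardPanelSketch
{
    // Bound to an ItemsControl in AgentBotWindow.xaml (assumed).
    public ObservableCollection<ReActCard> Cards { get; } = new();

    private readonly Dispatcher _dispatcher;

    public ReActCardPanelSketch(Dispatcher dispatcher) => _dispatcher = dispatcher;

    // Registered through SetReActCallbacks so the actor's progress lands here.
    public void HandleReActProgress(ReActProgress progress)
    {
        // Actor messages arrive off the UI thread; marshal onto the Dispatcher
        // before touching the bound collection.
        _dispatcher.Invoke(() =>
            Cards.Add(new ReActCard { Phase = progress.Phase, Detail = progress.Detail }));
    }
}
```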
An interesting detail is that ReAct-ActorPlanning.md looks like an early design document, but it was actually written after several implementation problems had already been fought through. That changes its role. It is no longer a pure plan. It is a compressed architecture sheet built from the implementation that already exists.
In short:
- The implementation problems came first and showed, failure by failure, why a ReActActor was needed
- ReAct-ActorPlanning.md then reorganized that journey one more time from the point of view of a state machine

That means this implementation did not descend top-down in one clean stroke. It was refined through failure logs -> patches -> protocol definition -> UI visualization.
```
User
  -> AgentBotWindow.RunReActAsync
  -> AgentBotActor.StartReAct
  -> ReActActor.Thinking
  -> stage_status / stage_send("/agent-zero ...")
  -> Waiting
  -> Claude1 runs bot-chat.ps1 "DONE(Claude1, reviewed 3 items)"
  -> MainWindow.HandleBotChat
  -> TerminalDoneSignal
  -> ReActActor.Thinking
  -> final summary response + card UI rendering
```
Yes, a general LLM can be turned into a working agent. But doing so requires more than changing the model. You have to design the actor state machine, session memory, logs, skill activation, the DONE protocol, and the UI visibility together.
PART 1 showed the actor tree and the communication structure. PART 2 explained why the ReAct state machine was necessary. PART 3 shows how that design actually comes alive inside the codebase.
The core idea is not one glamorous algorithm.
- the ReActActor state machine
- the /agent-zero and DONE(...) protocol

In the end, AgentZero's ReAct implementation is not a story about "the LLM becoming smart on its own." It is a story about how to design a runtime in which an LLM can actually work. In that sense, the Akka actor model acted not merely as a concurrency framework, but as the skeleton that organized the agent runtime itself.
- Tech/DOC/Actor/improvement/ReAct-Phase2-Complete.md
- Tech/DOC/Actor/improvement/ReAct-Phase3-Complete.md
- Tech/DOC/Actor/improvement/ReAct-YourName-Identity-Protocol.md
- Tech/DOC/Actor/improvement/ReAct-ToolCall-Constraint-Prompting.md
- Tech/DOC/Actor/improvement/ReAct-SequenceControl-ESC.md
- Tech/DOC/Actor/improvement/ReAct-ActorPlanning.md
- Tech/DOC/Actor/improvement/ReAct-DONE-Handshake-Protocol.md
- Tech/DOC/Actor/improvement/ReAct-SkillActivation-Prompt.md
- Tech/DOC/Actor/improvement/ReAct-UI-CardSystem.md
- Tech/DOC/Actor/improvement/AiMode-DiagnosticLogging.md
- Tech/DOC/Actor/improvement/OnDevice-SessionMemory.md