Self-Correcting Agents: Reflexion, CRITIC, and ReAct Loops Compared
Three self-correction patterns dominate 2026 agent design. Side-by-side analysis of where each one wins, where each one fails, and how to combine them.
Why Self-Correction Stopped Being Optional
The frontier-model accuracy gains between 2024 and 2026 came as much from inference-time correction as from raw pretraining. The same model with no correction loop and the same model with a tuned correction loop differ by 8 to 15 points on hard tasks. The 2026 question is not whether to add a correction loop, but which pattern to use.
Three patterns dominate: Reflexion, CRITIC, and the ReAct loop with explicit verifier. Each one has a different mental model of "what was wrong" and a different cost profile.
ReAct With Verifier
```mermaid
flowchart LR
T[Thought] --> A[Action]
A --> O[Observation]
O --> V{Verifier OK?}
V -->|Yes| T2[Next Thought]
V -->|No| Fix[Repair Thought]
Fix --> A
T2 --> A2[Next Action]
```
The original ReAct loop interleaves thoughts with actions. The 2026 upgrade adds an explicit verifier (often a smaller, fast LLM or a deterministic check) that gates each observation. The result is cheap and low-overhead, and it suits tool-using agents where each tool result has objective acceptance criteria.
Wins when: tool outputs are verifiable (compiles? passes lint? matches schema?). Fails when: errors are semantic and only visible at the trajectory level.
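The gated loop above can be sketched in a few lines. This is a minimal illustration, not any specific framework's API: `propose`, `execute`, and `verify` are hypothetical callables standing in for the LLM, the tool runtime, and the step verifier.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    thought: str
    action: str
    observation: str

def react_with_verifier(
    propose: Callable[[list[Step]], tuple[str, str]],  # history -> (thought, action)
    execute: Callable[[str], str],                     # action -> observation
    verify: Callable[[str, str], bool],                # (action, observation) -> accepted?
    max_steps: int = 8,
    max_repairs: int = 2,
) -> list[Step]:
    """Interleave thought/action/observation; a verifier gates every observation."""
    history: list[Step] = []
    for _ in range(max_steps):
        thought, action = propose(history)
        for _ in range(max_repairs + 1):
            obs = execute(action)
            if verify(action, obs):
                break
            # Verifier rejected the observation: record the failure and
            # ask the model to repair the thought/action before retrying.
            history.append(Step(thought, action, f"REJECTED: {obs}"))
            thought, action = propose(history)
        history.append(Step(thought, action, obs))
        if action == "finish":
            break
    return history
```

The repair budget (`max_repairs`) is what keeps a stuck verifier from looping forever; after it is exhausted the step is recorded as-is and the outer loop moves on.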
CRITIC
CRITIC adds an external knowledge-grounded critic step after each output. The critic compares the model's claims to a ground-truth source — often via web search, a database, or a code interpreter — and emits criticism that feeds back into the next attempt.
```mermaid
flowchart LR
P[Proposal] --> C[Critic: ground claims to evidence]
C -->|Issues found| R[Refine]
C -->|All grounded| Out[Output]
R --> P
```
Wins when: factual hallucination is the failure mode (Q&A, summarization, research agents). Fails when: the ground-truth source itself is wrong or unavailable, or when the critic is the same model as the proposer (self-grading is unreliable on hard problems).
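The propose-critique-refine cycle reduces to a short loop. A minimal sketch, assuming hypothetical `propose` and `critique` callables: `critique` wraps whatever external grounding tool you use (search, database, interpreter) and returns a list of ungrounded claims, empty when everything checks out.

```python
from typing import Callable

def critic_loop(
    propose: Callable[[str, list[str]], str],  # (task, accumulated criticisms) -> draft
    critique: Callable[[str], list[str]],      # draft -> grounding issues (empty = all grounded)
    task: str,
    max_rounds: int = 3,
) -> str:
    """Propose, ground-check against external evidence, refine until no issues remain."""
    criticisms: list[str] = []
    draft = propose(task, criticisms)
    for _ in range(max_rounds):
        issues = critique(draft)
        if not issues:
            return draft  # every claim grounded: emit
        criticisms.extend(issues)          # criticisms accumulate across rounds
        draft = propose(task, criticisms)  # refine with full criticism history
    return draft  # best effort once the round budget is spent
```

Note that the round budget matters here too: when the ground-truth source is unavailable, the critic may keep flagging issues forever, so the loop must eventually emit a best-effort draft.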
Reflexion
Reflexion sits at the trajectory level. After a complete run, the agent generates a verbal self-reflection on what went wrong, stores it in memory, and starts the next run with that reflection in context. It targets the case where individual steps look fine but the trajectory is wrong.
```mermaid
flowchart TB
Run1[Run 1: fail] --> Refl[Self-Reflection]
Refl --> Mem[(Reflection Memory)]
Mem --> Run2[Run 2: with reflection in context]
Run2 --> Eval{Pass?}
Eval -->|Yes| Done[Done]
Eval -->|No| Refl
```
Wins when: failure is structural ("I should have asked the user for X first") and a fresh attempt is cheap. Fails when: tasks are non-resettable (you cannot retry a sent email) or the reflection itself hallucinates.
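The trajectory-level retry loop is the simplest of the three to sketch. The callables here are placeholders, not the Reflexion paper's actual interfaces: `attempt` runs a whole task with prior reflections in context, `evaluate` is the pass/fail signal, and `reflect` generates the verbal lesson.

```python
from typing import Any, Callable

def reflexion(
    attempt: Callable[[str, list[str]], Any],  # (task, reflection memory) -> trajectory
    evaluate: Callable[[Any], bool],           # trajectory -> passed?
    reflect: Callable[[Any], str],             # failed trajectory -> verbal reflection
    task: str,
    max_trials: int = 3,
) -> tuple[Any, list[str]]:
    """Retry whole-task runs, carrying verbal self-reflections between them."""
    memory: list[str] = []
    for _ in range(max_trials):
        trajectory = attempt(task, memory)  # reflections ride along in context
        if evaluate(trajectory):
            return trajectory, memory
        memory.append(reflect(trajectory))  # store the lesson for the next run
    return None, memory
```

The `max_trials` cap is the cost control: each trial replays the full task, so token spend scales linearly with retries.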
Combining Them
The strongest 2026 production agents use all three at different layers:
- ReAct with verifier at the step level — cheap, fast, catches most errors
- CRITIC at sub-task boundaries — invoked when the agent is about to commit a side effect
- Reflexion between full task attempts — only on retry-safe tasks, not first attempts
Cost matters. Reflexion is the most expensive because it can multiply your token count by the number of retries. CRITIC adds a fixed overhead per checkpoint. ReAct verifiers are usually small models so the overhead is sub-10 percent.
A 2026 Reference Implementation
OpenHands, Devin reproductions, Anthropic Claude Code, and Cursor's Composer all implement variants. The common structure is:
- Each tool call has an attached verifier (compiler error? lint failure? schema mismatch?). Failures route back to the same step.
- Side-effect-bearing tools (file write, email, payment) require a CRITIC pass against the original goal.
- Whole-task failures emit a reflection that is stored in episodic memory and surfaced at the start of the next attempt.
Sources
- Reflexion paper (Shinn et al.) — https://arxiv.org/abs/2303.11366
- CRITIC paper (Gou et al.) — https://arxiv.org/abs/2305.11738
- ReAct paper (Yao et al.) — https://arxiv.org/abs/2210.03629
- OpenHands repository — https://github.com/All-Hands-AI/OpenHands
- "Automatically Correcting Large Language Models" survey (Pan et al.) — https://arxiv.org/abs/2308.03188