
LangGraph State Management: TypedDict, Reducers, and State Channels

Master LangGraph state management with TypedDict schemas, annotation reducers for message lists, custom state channels, and strategies for complex multi-step agent workflows.

State Is the Foundation of LangGraph

Every node in a LangGraph workflow reads from and writes to a shared state object. Understanding how state is defined, updated, and merged is the single most important concept for building reliable agent graphs. Get state management wrong and your agents will overwrite data, lose context, or produce unpredictable results.

Defining State with TypedDict

The diagram below shows a typical multi-node workflow; every node in it reads from and writes to the same shared state:

```mermaid
flowchart TD
    USER(["User input"])
    SUPER["Supervisor node<br/>routes by state"]
    A["Specialist node A<br/>research"]
    B["Specialist node B<br/>writing"]
    TOOL{"Tool call<br/>needed?"}
    EXEC["Tool executor<br/>ToolNode"]
    CHK[("Postgres<br/>checkpointer")]
    INT{"interrupt for<br/>human approval?"}
    HUMAN(["Human reviewer"])
    OUT(["Final response"])
    USER --> SUPER
    SUPER --> A
    SUPER --> B
    A --> TOOL
    B --> TOOL
    TOOL -->|Yes| EXEC --> SUPER
    TOOL -->|No| INT
    INT -->|Yes| HUMAN --> SUPER
    INT -->|No| OUT
    SUPER <--> CHK
    style SUPER fill:#4f46e5,stroke:#4338ca,color:#fff
    style CHK fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
    style HUMAN fill:#f59e0b,stroke:#d97706,color:#1f2937
```

State schemas are defined as Python TypedDict classes:
from typing import TypedDict

class ResearchState(TypedDict):
    query: str
    sources: list[str]
    summary: str
    iteration_count: int

Each field represents a channel of data flowing through the graph. When a node returns a dictionary, LangGraph merges those values into the current state. By default, returned values overwrite existing values for each key.

The Problem with Default Overwrite

Consider a node that adds a source URL:

def search_node(state: ResearchState) -> dict:
    new_source = "https://example.com/article"
    return {"sources": [new_source]}

Without a reducer, this overwrites the entire sources list on every call. If you ran two search nodes sequentially, the second would erase results from the first. This is where reducers become essential.


Annotation Reducers

Reducers define how state updates merge with existing values. You declare them using Annotated types:

from typing import Annotated, TypedDict
from operator import add

class ResearchState(TypedDict):
    query: str
    sources: Annotated[list[str], add]
    summary: str
    iteration_count: int

Now sources uses the add operator as its reducer. When a node returns {"sources": ["new_url"]}, LangGraph calls existing_sources + ["new_url"] instead of replacing the list.
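To make the merge semantics concrete, here is a stdlib-only sketch of what a reducer-aware merge does. The helper name merge_update and the reducers mapping are our own illustration, not LangGraph APIs:

```python
from operator import add

def merge_update(state: dict, update: dict, reducers: dict) -> dict:
    """Sketch of reducer-aware state merging (not the real LangGraph code)."""
    merged = dict(state)
    for key, value in update.items():
        if key in reducers:
            # Channel has a reducer: combine the old and new values.
            merged[key] = reducers[key](merged[key], value)
        else:
            # No reducer: last write wins.
            merged[key] = value
    return merged

state = {"query": "agentic AI", "sources": ["https://a.example"], "summary": ""}
update = {"sources": ["https://b.example"], "summary": "draft"}

result = merge_update(state, update, {"sources": add})
```

After the merge, sources holds both URLs because its reducer concatenates lists, while summary was simply overwritten because it has no reducer.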

The add_messages Reducer

For chat-based agents, LangGraph provides a specialized add_messages reducer that handles message deduplication by ID:

from typing import Annotated, TypedDict

from langgraph.graph.message import add_messages
from langchain_core.messages import HumanMessage, AIMessage

class ChatState(TypedDict):
    messages: Annotated[list, add_messages]
    context: str

The add_messages reducer appends new messages to the list. If a message with the same ID already exists, it updates that message in place rather than duplicating it. This is critical for tool-calling loops where the LLM might regenerate responses.
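A simplified model of that dedup-by-id behavior looks like this. This is our own sketch for clarity, not the real add_messages implementation, and it uses plain dicts with an "id" field instead of message objects:

```python
def add_messages_sketch(existing: list[dict], new: list[dict]) -> list[dict]:
    """Append new messages; replace in place when an ID already exists."""
    result = list(existing)
    index = {m["id"]: i for i, m in enumerate(result)}
    for msg in new:
        if msg["id"] in index:
            result[index[msg["id"]]] = msg   # same ID: update in place
        else:
            index[msg["id"]] = len(result)
            result.append(msg)               # new ID: append
    return result

history = [{"id": "1", "content": "What is LangGraph?"}]
update = [
    {"id": "1", "content": "What is LangGraph? (edited)"},   # replaces by ID
    {"id": "2", "content": "LangGraph is a graph framework."},
]
merged = add_messages_sketch(history, update)
```

The regenerated message with id "1" replaces the original instead of duplicating it, which is exactly the property that keeps tool-calling loops from bloating the history.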

Custom Reducers

You can write any function as a reducer. It takes the existing value and the new value, then returns the merged result:

from typing import Annotated, TypedDict
from langgraph.graph.message import add_messages

def max_reducer(existing: int, new: int) -> int:
    return max(existing, new)

def unique_list_reducer(existing: list, new: list) -> list:
    seen = set(existing)
    result = list(existing)
    for item in new:
        if item not in seen:
            result.append(item)
            seen.add(item)
    return result

class AnalysisState(TypedDict):
    messages: Annotated[list, add_messages]
    max_score: Annotated[int, max_reducer]
    unique_tags: Annotated[list, unique_list_reducer]

Custom reducers give you precise control over how concurrent or sequential node outputs combine.
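Calling the two reducers directly shows how outputs from separate branches would combine. The definitions are repeated here so the snippet runs on its own:

```python
def max_reducer(existing: int, new: int) -> int:
    return max(existing, new)

def unique_list_reducer(existing: list, new: list) -> list:
    seen = set(existing)
    result = list(existing)
    for item in new:
        if item not in seen:
            result.append(item)
            seen.add(item)
    return result

# Two branches report a score and some tags; the reducers merge them
# deterministically regardless of which branch finished first.
score = max_reducer(72, 88)
tags = unique_list_reducer(["python", "agents"], ["agents", "langgraph"])
```

The duplicate "agents" tag is dropped while order is preserved, and the higher score wins.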

State Channels and Defaults

You can provide default values by using a class-based approach or by passing initial state on invocation. The recommended pattern is to always pass a complete initial state:


initial_state = {
    "query": "agentic AI frameworks",
    "sources": [],
    "summary": "",
    "iteration_count": 0,
}

result = graph.invoke(initial_state)

This makes the starting condition explicit and avoids KeyError exceptions when nodes access state fields that were never initialized.
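A small factory function keeps that explicit-initialization discipline in one place. The helper name make_initial_state is our own convenience, not a LangGraph API:

```python
def make_initial_state(query: str) -> dict:
    """Build a complete initial state so no channel starts uninitialized."""
    return {
        "query": query,
        "sources": [],
        "summary": "",
        "iteration_count": 0,
    }

state = make_initial_state("agentic AI frameworks")
```

Every invocation then starts from the same known-good shape, and adding a new channel to the schema only requires updating one function.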

Nested and Complex State

State fields can hold any serializable Python type including dictionaries, Pydantic models, and dataclasses:

from typing import Annotated, TypedDict
from operator import add

from langgraph.graph.message import add_messages
from pydantic import BaseModel

class DocumentRef(BaseModel):
    url: str
    relevance: float
    snippet: str

class DeepResearchState(TypedDict):
    messages: Annotated[list, add_messages]
    documents: Annotated[list[DocumentRef], add]
    metadata: dict

Using Pydantic models inside state gives you validation and type safety for complex nested data structures.
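For instance, Pydantic will coerce compatible input and reject invalid input at construction time, so a node can never write a malformed DocumentRef into state. This sketch assumes Pydantic's default (lax) coercion behavior:

```python
from pydantic import BaseModel, ValidationError

class DocumentRef(BaseModel):
    url: str
    relevance: float
    snippet: str

# A numeric string is coerced to float on construction.
doc = DocumentRef(url="https://example.com", relevance="0.87", snippet="...")

# A non-numeric relevance fails validation before it can reach state.
try:
    DocumentRef(url="https://example.com", relevance="high", snippet="...")
    rejected = False
except ValidationError:
    rejected = True
```

Catching bad data at the model boundary is much cheaper than debugging a downstream node that received a string where it expected a float.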

FAQ

What happens if two nodes write to the same state key without a reducer?

The last write wins. If node A sets summary = "X" and node B sets summary = "Y", and B runs after A, the final value is "Y". Use a reducer if you need to combine values rather than overwrite.

Can I remove items from a list state channel?

Yes. Write a custom reducer that supports removal signals: for example, treat a special wrapper object in the update as an instruction to filter matching items out of the existing list. Alternatively, skip the reducer on that field entirely so each write replaces the whole list.
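One way to sketch that removal protocol: treat a ("remove", value) tuple in the update as a deletion signal. This convention is our own invention for illustration, though LangGraph's add_messages supports a comparable mechanism for message channels:

```python
def removable_list_reducer(existing: list, new: list) -> list:
    """Append plain items; a ("remove", value) tuple deletes matching entries."""
    result = list(existing)
    for item in new:
        if isinstance(item, tuple) and len(item) == 2 and item[0] == "remove":
            result = [x for x in result if x != item[1]]
        else:
            result.append(item)
    return result

tags = removable_list_reducer(
    ["draft", "urgent"],
    ["reviewed", ("remove", "draft")],
)
```

Here "reviewed" is appended and "draft" is filtered out, all within a single state update.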

Is there a size limit on LangGraph state?

There is no hard limit imposed by LangGraph itself, but state is serialized for checkpointing. Extremely large state objects — such as those containing full document texts — will slow down serialization and increase memory usage. Keep state lean and store large data externally with references.


#LangGraph #StateManagement #TypedDict #Reducers #Python #AgenticAI #LearnAI #AIEngineering

