
LangGraph: Building Stateful Multi-Agent Workflows with Graphs

Learn LangGraph's graph-based approach to building stateful, multi-step AI workflows — including nodes, edges, conditional routing, state management, and human-in-the-loop patterns.

Why LangGraph Over Plain Agents

LangChain agents follow a linear loop: reason, act, observe, repeat. This works for simple tool-using agents, but falls short for complex workflows that need branching logic, parallel execution, human approval steps, or multiple specialized agents collaborating.

LangGraph models workflows as directed graphs. Each node is a function that transforms state. Edges define the flow between nodes, and conditional edges enable dynamic routing. The state is a typed object that persists across the entire execution, and checkpointing lets you pause, resume, or replay workflows.
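The node/edge/state model can be made concrete with a minimal pure-Python sketch of the execution loop — an illustration of the idea only, not LangGraph's actual implementation:

```python
# Toy graph executor: nodes are functions that return partial state
# updates; edges map each node to the next node name, or to a router
# function for conditional routing. This mimics LangGraph's model in a
# few lines; it is NOT the real library.

def run_graph(nodes, edges, state, entry, end="END"):
    current = entry
    while current != end:
        state = {**state, **nodes[current](state)}  # merge node's update
        nxt = edges[current]
        current = nxt(state) if callable(nxt) else nxt  # conditional edge
    return state

nodes = {
    "generate": lambda s: {"draft": s["topic"].upper()},
    "evaluate": lambda s: {"ok": len(s["draft"]) > 3},
}
edges = {
    "generate": "evaluate",
    "evaluate": lambda s: "END" if s["ok"] else "generate",  # router
}

result = run_graph(nodes, edges, {"topic": "python"}, entry="generate")
print(result["ok"])  # True
```

The whole trick is that the loop is driven by data (the edge table) rather than by control flow hard-coded into the agent, which is what makes branching and retries cheap to express.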

Core Concepts

A LangGraph workflow has four elements:

1. State — a typed dictionary that flows through the graph
2. Nodes — functions that read and modify state
3. Edges — connections between nodes (static or conditional)
4. Graph — the compiled workflow that orchestrates execution

The Mermaid diagram below shows how these elements fit together in a supervisor-style workflow with tool calls, checkpointing, and human review:

flowchart TD
    USER(["User input"])
    SUPER["Supervisor node<br/>routes by state"]
    A["Specialist node A<br/>research"]
    B["Specialist node B<br/>writing"]
    TOOL{"Tool call<br/>needed?"}
    EXEC["Tool executor<br/>ToolNode"]
    CHK[("Postgres<br/>checkpointer")]
    INT{"interrupt for<br/>human approval?"}
    HUMAN(["Human reviewer"])
    OUT(["Final response"])
    USER --> SUPER
    SUPER --> A
    SUPER --> B
    A --> TOOL
    B --> TOOL
    TOOL -->|Yes| EXEC --> SUPER
    TOOL -->|No| INT
    INT -->|Yes| HUMAN --> SUPER
    INT -->|No| OUT
    SUPER <--> CHK
    style SUPER fill:#4f46e5,stroke:#4338ca,color:#fff
    style CHK fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
    style HUMAN fill:#f59e0b,stroke:#d97706,color:#1f2937

Defining State

State is a TypedDict that represents all the information your workflow needs.

from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    next_action: str
    retry_count: int

The Annotated type with add_messages tells LangGraph to append new messages to the list rather than replacing it. This is how conversation history accumulates across nodes.
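The reducer idea behind this can be sketched in plain Python: each state key is either replaced outright or merged by a per-key function. This toy mirrors the behavior of add_messages but is not LangGraph's code:

```python
# Toy state merge: keys with a registered reducer are combined with the
# old value; all other keys are simply replaced (last write wins).
# add_messages plays the role of the "append" reducer in LangGraph.

def merge_state(state, update, reducers):
    out = dict(state)
    for key, value in update.items():
        if key in reducers:
            out[key] = reducers[key](state.get(key), value)
        else:
            out[key] = value  # default: replace
    return out

append = lambda old, new: (old or []) + new

s = {"messages": ["hi"], "next_action": "route"}
s = merge_state(s, {"messages": ["hello!"], "next_action": "done"},
                reducers={"messages": append})
print(s["messages"])     # ['hi', 'hello!'] — appended, not replaced
print(s["next_action"])  # 'done' — replaced
```

This is why nodes can return small partial updates like `{"messages": [response]}` instead of rebuilding the whole state each time.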

Building a Simple Graph

Here is a basic two-node workflow: one node generates a response, another checks if the response is satisfactory.

from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI
from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]
    is_satisfactory: bool

llm = ChatOpenAI(model="gpt-4o-mini")

def generate(state: State) -> dict:
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def evaluate(state: State) -> dict:
    last_message = state["messages"][-1].content
    is_good = len(last_message) > 50  # Simple quality check
    return {"is_satisfactory": is_good}

# Build the graph
graph = StateGraph(State)
graph.add_node("generate", generate)
graph.add_node("evaluate", evaluate)

graph.add_edge(START, "generate")
graph.add_edge("generate", "evaluate")

# Conditional edge: retry or finish
def should_retry(state: State) -> str:
    if state["is_satisfactory"]:
        return "end"
    return "retry"

graph.add_conditional_edges(
    "evaluate",
    should_retry,
    {"end": END, "retry": "generate"},
)

# Compile and run
app = graph.compile()

result = app.invoke({
    "messages": [("human", "Write a haiku about Python programming")],
    "is_satisfactory": False,
})
print(result["messages"][-1].content)

The graph generates a response, evaluates it, and retries if the evaluation fails. This retry loop is trivial in a graph but awkward to implement in a linear agent.

Multi-Agent Collaboration

LangGraph excels at orchestrating multiple specialized agents. Each agent is a node, and a router decides which agent handles the next step.

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI

class MultiAgentState(TypedDict):
    messages: Annotated[list, add_messages]
    current_agent: str

# In practice each agent would use a different model or system prompt;
# they share one model here for brevity.
coding_llm = ChatOpenAI(model="gpt-4o-mini")
research_llm = ChatOpenAI(model="gpt-4o-mini")
general_llm = ChatOpenAI(model="gpt-4o-mini")

def router(state: MultiAgentState) -> dict:
    last_msg = state["messages"][-1].content.lower()
    if "code" in last_msg or "bug" in last_msg:
        return {"current_agent": "coder"}
    elif "research" in last_msg or "find" in last_msg:
        return {"current_agent": "researcher"}
    return {"current_agent": "generalist"}

def coder_agent(state: MultiAgentState) -> dict:
    response = coding_llm.invoke(state["messages"])
    return {"messages": [response]}

def researcher_agent(state: MultiAgentState) -> dict:
    response = research_llm.invoke(state["messages"])
    return {"messages": [response]}

def generalist_agent(state: MultiAgentState) -> dict:
    response = general_llm.invoke(state["messages"])
    return {"messages": [response]}

graph = StateGraph(MultiAgentState)
graph.add_node("router", router)
graph.add_node("coder", coder_agent)
graph.add_node("researcher", researcher_agent)
graph.add_node("generalist", generalist_agent)

graph.add_edge(START, "router")
graph.add_conditional_edges(
    "router",
    lambda s: s["current_agent"],
    {"coder": "coder", "researcher": "researcher", "generalist": "generalist"},
)
graph.add_edge("coder", END)
graph.add_edge("researcher", END)
graph.add_edge("generalist", END)

app = graph.compile()
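The routing rule itself is plain string matching, so it can be sanity-checked without any models. Here is the same keyword logic restated as a standalone function:

```python
# Standalone copy of the router's keyword logic for quick testing,
# separated from the graph so it runs without an LLM.
def pick_agent(text: str) -> str:
    text = text.lower()
    if "code" in text or "bug" in text:
        return "coder"
    if "research" in text or "find" in text:
        return "researcher"
    return "generalist"

print(pick_agent("Fix this bug in my parser"))          # coder
print(pick_agent("Research quantum error correction"))  # researcher
print(pick_agent("Tell me a joke"))                     # generalist
```

Keyword routing is brittle; a production router would more likely be an LLM call with structured output, but the graph wiring stays identical either way.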

Human-in-the-Loop with Checkpointing

LangGraph's checkpointer lets you pause execution for human review and resume later.

from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
app = graph.compile(
    checkpointer=checkpointer,
    interrupt_before=["execute_action"],  # Pause before this node (assumes your graph has an "execute_action" node)
)

config = {"configurable": {"thread_id": "user-123"}}

# Run until the interrupt point
result = app.invoke(
    {"messages": [("human", "Delete all inactive users")]},
    config=config,
)
# Execution pauses before "execute_action"

# Human reviews and approves
# Resume execution
result = app.invoke(None, config=config)

The interrupt_before parameter pauses the graph before the specified node executes. State is saved to the checkpointer, so you can resume from a different process or after a server restart.
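The pause/resume mechanics can be modeled in a few lines of plain Python: persist state keyed by thread_id before the interrupted node, then reload the snapshot and continue. This is a toy model of the idea, not LangGraph's checkpointer API:

```python
# Toy checkpointer: save state per thread_id to a JSON file before an
# "interrupt", then resume from the saved snapshot in a later call
# (or from a different process entirely).
import json, os, tempfile

class ToyCheckpointer:
    def __init__(self, path):
        self.path = path

    def save(self, thread_id, state):
        db = self._load_all()
        db[thread_id] = state
        with open(self.path, "w") as f:
            json.dump(db, f)

    def load(self, thread_id):
        return self._load_all().get(thread_id)

    def _load_all(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "checkpoints.json")
cp = ToyCheckpointer(path)

# First call: run until the "interrupt", persist state, and stop.
cp.save("user-123", {"pending_action": "delete_inactive_users"})

# Later call: reload the snapshot and continue where we left off.
resumed = cp.load("user-123")
print(resumed["pending_action"])  # delete_inactive_users
```

Because the snapshot lives outside the process, the "later call" can happen after a restart — the same property the Postgres-backed checkpointer gives you in production.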


Streaming Graph Execution

LangGraph supports streaming at multiple levels.

# Stream state updates from each node
for event in app.stream(
    {"messages": [("human", "Analyze this data")]},
    stream_mode="updates",
):
    print(event)

# Stream individual tokens from LLM nodes
async for event in app.astream_events(
    {"messages": [("human", "Write an essay")]},
    version="v2",
):
    if event["event"] == "on_chat_model_stream":
        print(event["data"]["chunk"].content, end="")
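With stream_mode="updates", each event carries only the delta produced by one node. A small generator makes that event shape easy to see — again a toy model, not LangGraph itself:

```python
# Toy "updates" stream: yield {node_name: partial_update} after each
# node runs, mirroring the event shape of stream_mode="updates".

def stream_updates(nodes, order, state):
    for name in order:
        update = nodes[name](state)
        state = {**state, **update}
        yield {name: update}  # one event per node, delta only

nodes = {
    "generate": lambda s: {"draft": "a haiku"},
    "evaluate": lambda s: {"ok": True},
}
events = list(stream_updates(nodes, ["generate", "evaluate"], {}))
print(events)
# [{'generate': {'draft': 'a haiku'}}, {'evaluate': {'ok': True}}]
```

Node-level updates are what you show in a progress UI; token-level events (via astream_events) are what you pipe into a chat window.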

FAQ

When should I use LangGraph instead of a simple LangChain agent?

Use LangGraph when your workflow needs branching logic, multiple agents, human approval steps, or persistent state across interactions. For a single agent with a few tools that operates in a straightforward loop, a prebuilt agent (LangGraph's create_react_agent, or the legacy LangChain AgentExecutor) is simpler and sufficient.

How does LangGraph handle state persistence in production?

LangGraph supports multiple checkpointer backends. MemorySaver is for development. For production, use SqliteSaver, PostgresSaver, or implement a custom checkpointer backed by Redis or your preferred database. State is serialized and restored automatically.

Can LangGraph nodes run in parallel?

Yes. When multiple edges lead from the same node to different nodes without dependencies between them, LangGraph can execute those nodes concurrently. Use the Send API for map-reduce patterns where you dynamically create parallel branches at runtime.
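The map-reduce shape behind that pattern can be sketched with a thread pool: map a worker over dynamically created branches, then reduce the results. This illustrates the pattern only; LangGraph's Send API expresses it inside the graph itself:

```python
# Toy map-reduce fan-out: run one "branch" per input item concurrently,
# then merge the per-branch results in a reduce step.
from concurrent.futures import ThreadPoolExecutor

def worker(item):
    return {"item": item, "length": len(item)}  # per-branch work

def fan_out(items):
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(worker, items))  # parallel branches
    return {"total": sum(r["length"] for r in results)}  # reduce step

print(fan_out(["alpha", "beta", "gamma"]))  # {'total': 14}
```

The advantage of doing this inside the graph rather than with a raw thread pool is that each branch's state update still flows through the reducers, and checkpointing covers the whole fan-out.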

