
Building a Multi-Agent System with LangGraph: Supervisor and Worker Patterns

Build multi-agent systems in LangGraph using subgraph composition, supervisor routing, and parallel worker execution to create specialized agent teams that collaborate on complex tasks.

When Single Agents Are Not Enough

A single agent with many tools quickly hits a ceiling. As you add more tools, the LLM becomes less reliable at selecting the right one. The system prompt grows unwieldy. Different tasks require different model configurations or temperature settings. Multi-agent systems solve this by decomposing complex workflows into specialized agents, each focused on a narrow domain, coordinated by a supervisor.

The Supervisor Pattern

In the supervisor pattern, one agent acts as a router that decides which specialized worker agent should handle each step:

flowchart TD
    USER(["User input"])
    SUPER["Supervisor node<br/>routes by state"]
    A["Specialist node A<br/>research"]
    B["Specialist node B<br/>writing"]
    TOOL{"Tool call<br/>needed?"}
    EXEC["Tool executor<br/>ToolNode"]
    CHK[("Postgres<br/>checkpointer")]
    INT{"interrupt for<br/>human approval?"}
    HUMAN(["Human reviewer"])
    OUT(["Final response"])
    USER --> SUPER
    SUPER --> A
    SUPER --> B
    A --> TOOL
    B --> TOOL
    TOOL -->|Yes| EXEC --> SUPER
    TOOL -->|No| INT
    INT -->|Yes| HUMAN --> SUPER
    INT -->|No| OUT
    SUPER <--> CHK
    style SUPER fill:#4f46e5,stroke:#4338ca,color:#fff
    style CHK fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
    style HUMAN fill:#f59e0b,stroke:#d97706,color:#1f2937
from typing import TypedDict, Annotated, Literal
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

class TeamState(TypedDict):
    messages: Annotated[list, add_messages]
    next_agent: str

supervisor_llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def supervisor(state: TeamState) -> dict:
    system = SystemMessage(content="""You are a supervisor routing tasks.
    Based on the user request, decide which worker to invoke:
    - 'researcher' for information gathering
    - 'writer' for content creation
    - 'coder' for code generation
    - 'FINISH' if the task is complete
    Respond with ONLY the worker name.""")

    response = supervisor_llm.invoke(
        [system] + state["messages"]
    )
    return {"next_agent": response.content.strip().lower()}
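The supervisor trusts the model to reply with exactly one worker name, which is optimistic in practice. A small normalization layer keeps a slightly off-format reply from crashing the router. This is a sketch; the fallback-to-finish policy is an assumption, not part of the original design:

```python
# Valid routing targets; "finish" ends the run.
VALID_WORKERS = {"researcher", "writer", "coder", "finish"}

def normalize_route(raw: str) -> str:
    """Map the LLM's free-text reply onto a known worker name.

    Strips whitespace and stray quotes, lowercases, and falls back to
    'finish' so an unexpected reply ends the run instead of crashing
    the conditional edge with an unknown node name.
    """
    name = raw.strip().strip("'\"").lower()
    return name if name in VALID_WORKERS else "finish"
```

Calling `normalize_route(response.content)` in the supervisor instead of the bare `.strip().lower()` makes the routing loop tolerant of replies like `"Researcher."` or a full sentence.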

Worker Agents

Each worker is a focused agent with its own system prompt and tools:

researcher_llm = ChatOpenAI(model="gpt-4o-mini")
writer_llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)
coder_llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def researcher(state: TeamState) -> dict:
    system = SystemMessage(
        content="You are a research assistant. Find and summarize information."
    )
    response = researcher_llm.invoke(
        [system] + state["messages"]
    )
    return {"messages": [response]}

def writer(state: TeamState) -> dict:
    system = SystemMessage(
        content="You are a content writer. Create polished, well-structured text."
    )
    response = writer_llm.invoke(
        [system] + state["messages"]
    )
    return {"messages": [response]}

def coder(state: TeamState) -> dict:
    system = SystemMessage(
        content="You are a Python developer. Write clean, tested code."
    )
    response = coder_llm.invoke(
        [system] + state["messages"]
    )
    return {"messages": [response]}

Assembling the Supervisor Graph

Connect the supervisor to workers with conditional routing:

def route_to_worker(state: TeamState) -> Literal[
    "researcher", "writer", "coder", "__end__"
]:
    next_agent = state["next_agent"]
    if next_agent == "finish":
        return "__end__"
    return next_agent

builder = StateGraph(TeamState)
builder.add_node("supervisor", supervisor)
builder.add_node("researcher", researcher)
builder.add_node("writer", writer)
builder.add_node("coder", coder)

builder.add_edge(START, "supervisor")
builder.add_conditional_edges("supervisor", route_to_worker)

# All workers route back to supervisor after completing
builder.add_edge("researcher", "supervisor")
builder.add_edge("writer", "supervisor")
builder.add_edge("coder", "supervisor")

graph = builder.compile()

The supervisor evaluates each response and decides whether to hand off to another worker or finish, creating a loop in which the supervisor orchestrates a multi-step collaboration. LangGraph bounds this loop with the recursion_limit config value (25 steps by default), so a supervisor that never routes to FINISH errors out rather than looping forever.

Subgraph Composition

For complex workers that are themselves multi-step graphs, use subgraph composition:

def build_research_subgraph() -> StateGraph:
    """Build a research agent with search and analysis steps."""

    class ResearchState(TypedDict):
        messages: Annotated[list, add_messages]

    def search(state: ResearchState) -> dict:
        # Perform web search
        return {"messages": [{"role": "assistant", "content": "Search results..."}]}

    def analyze(state: ResearchState) -> dict:
        # Analyze search results
        return {"messages": [{"role": "assistant", "content": "Analysis..."}]}

    sub = StateGraph(ResearchState)
    sub.add_node("search", search)
    sub.add_node("analyze", analyze)
    sub.add_edge(START, "search")
    sub.add_edge("search", "analyze")
    sub.add_edge("analyze", END)
    return sub.compile()

research_graph = build_research_subgraph()

# Use the compiled subgraph as a node in the parent graph. Register it in a
# fresh builder (or under a new name): add_node raises if "researcher" exists.
builder.add_node("researcher", research_graph)

The parent graph treats the subgraph as a single node. State flows in, the subgraph processes it through its own internal nodes, and the final state flows back to the parent.
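When the parent and subgraph schemas differ, the compiled subgraph cannot be dropped in as a node directly; it needs a wrapper function that translates state both ways. The sketch below stubs out the subgraph with a fake object so the translation logic is visible on its own. The stub and key names are illustrative:

```python
from typing import TypedDict

class TeamState(TypedDict):
    messages: list
    next_agent: str

class FakeSubgraph:
    """Stand-in for the compiled research subgraph from above."""
    def invoke(self, state: dict) -> dict:
        return {"messages": state["messages"] + ["analysis"]}

research_graph = FakeSubgraph()

def researcher_node(state: TeamState) -> dict:
    # Translate parent state into the subgraph's schema and back:
    # the subgraph only knows about `messages`, so `next_agent` stays behind.
    sub_out = research_graph.invoke({"messages": state["messages"]})
    return {"messages": sub_out["messages"]}
```

In the real graph, `research_graph` is the result of `build_research_subgraph()` and `researcher_node` is registered with `builder.add_node("researcher", researcher_node)`.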

Parallel Worker Execution

LangGraph supports sending work to multiple nodes simultaneously:

from langgraph.types import Send

def fan_out(state: TeamState) -> list[Send]:
    """Send the task to multiple workers in parallel."""
    return [
        Send("researcher", state),
        Send("writer", state),
    ]

# Alternative wiring: fan_out replaces the route_to_worker conditional edge
builder.add_conditional_edges("supervisor", fan_out)

The Send object directs execution to a specific node with a given state. Returning a list of Send objects causes those nodes to execute in parallel, and their partial state updates are merged by the state's reducers (here, add_messages).
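To make the merge step concrete, here is a toy illustration of what a list-concatenation reducer does with two parallel branches' outputs. This is not LangGraph's actual merge code, just the effective behavior of an `operator.add` reducer:

```python
import operator
from typing import Annotated, TypedDict

# Declaring the reducer: each update to `results` is appended, not overwritten.
class FanState(TypedDict):
    results: Annotated[list, operator.add]

def merge(current: list, branch_updates: list[list]) -> list:
    """What the runtime effectively does when a parallel fan-out joins:
    fold each branch's partial update into the channel via its reducer."""
    merged = current
    for update in branch_updates:
        merged = operator.add(merged, update)
    return merged
```

With `add_messages` the idea is the same, except the reducer also deduplicates by message ID instead of blindly concatenating.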


Putting It All Together

result = graph.invoke({
    "messages": [HumanMessage(
        content="Research the latest trends in AI agents, "
        "then write a blog post about the findings."
    )],
    "next_agent": "",
})

# The supervisor coordinates: researcher gathers info, writer creates content
for msg in result["messages"]:
    print(f"{msg.__class__.__name__}: {msg.content[:80]}...")

The supervisor first routes to the researcher, then after receiving the research results, routes to the writer to produce the final output.

FAQ

How many worker agents can a supervisor manage?

There is no hard limit, but LLM-based routers become less reliable with more than 8-10 options. For larger systems, use a hierarchical pattern with multiple supervisors, each managing a team of 3-5 specialists.
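The hierarchical idea reduces to nested routing tables: the top-level supervisor picks a team, and each team supervisor picks a specialist, so no single router ever sees more than a handful of options. A toy sketch with made-up team and worker names:

```python
# Hypothetical two-level routing table. In a real system each route_*
# function would be an LLM-backed supervisor node, not a lookup.
TEAMS = {
    "content": ["researcher", "writer", "editor"],
    "engineering": ["coder", "reviewer", "tester"],
}

def top_level_route(task_kind: str) -> str:
    """Top supervisor: picks a team (2 options, not 6 workers)."""
    return "content" if task_kind in ("research", "writing") else "engineering"

def team_route(team: str, specialty: str) -> str:
    """Team supervisor: picks within its 3 specialists, with a default."""
    workers = TEAMS[team]
    return specialty if specialty in workers else workers[0]
```

Each router now chooses among 2-3 options instead of 6, which is the whole point of the hierarchy.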

Can worker agents communicate directly with each other?

In the standard supervisor pattern, workers communicate through the shared state — they read each other's outputs from the message history. Direct agent-to-agent communication is possible by having workers write to specific state channels that other workers read from.
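A dedicated state channel looks like this in practice: the researcher writes structured notes to its own key, and the writer reads that key directly instead of scanning the whole message history. The channel names here are made up for the sketch:

```python
import operator
from typing import Annotated, TypedDict

# State with a dedicated channel alongside the shared message list.
class ChannelState(TypedDict):
    messages: Annotated[list, operator.add]
    research_notes: Annotated[list, operator.add]

def researcher(state: ChannelState) -> dict:
    # Writes only to its own channel; other workers' channels are untouched.
    return {"research_notes": ["Trend: tool-use agents"]}

def writer(state: ChannelState) -> dict:
    # Reads the researcher's channel directly -- no history scanning.
    notes = "; ".join(state["research_notes"])
    return {"messages": [f"Draft based on: {notes}"]}
```

This keeps inter-worker data typed and easy to find, at the cost of a wider state schema.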

How do I handle a worker that gets stuck in a loop?

Add loop counters to state and check them in the supervisor. If a worker has been called more than N times without progress, the supervisor should either try a different worker or terminate with a partial result.
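The counter check can live right in the routing decision. A minimal sketch, assuming a `call_counts` dict is carried in state and a cap of three calls per worker (both the key name and the cap are arbitrary choices):

```python
MAX_CALLS = 3

def guarded_route(next_agent: str, call_counts: dict[str, int]) -> str:
    """Route to the chosen worker unless it has already been called
    MAX_CALLS times, in which case terminate with whatever partial
    result is in state."""
    call_counts[next_agent] = call_counts.get(next_agent, 0) + 1
    if call_counts[next_agent] > MAX_CALLS:
        return "finish"
    return next_agent
```

In the graph, `call_counts` would be a state channel updated by this function, and `route_to_worker` would call `guarded_route` before returning.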


#LangGraph #MultiAgent #SupervisorPattern #Subgraphs #Python #AgenticAI #LearnAI #AIEngineering

