Learn Agentic AI

Conditional Routing in LangGraph: Building Decision Points in Agent Workflows

Build intelligent decision points in LangGraph using conditional edges, router functions, and multi-path branching to create agents that dynamically choose their execution path.

Beyond Linear Workflows

A linear chain of nodes — A then B then C — can only model the simplest workflows. Real agent systems need to make decisions: should the agent search the web or query a database? Should it ask for clarification or proceed with the answer? Should it loop back and try again or terminate? Conditional edges are how LangGraph implements this branching logic.

Adding Conditional Edges

A conditional edge evaluates the current state and returns the name of the next node to execute:

flowchart TD
    USER(["User input"])
    SUPER["Supervisor node<br/>routes by state"]
    A["Specialist node A<br/>research"]
    B["Specialist node B<br/>writing"]
    TOOL{"Tool call<br/>needed?"}
    EXEC["Tool executor<br/>ToolNode"]
    CHK[("Postgres<br/>checkpointer")]
    INT{"interrupt for<br/>human approval?"}
    HUMAN(["Human reviewer"])
    OUT(["Final response"])
    USER --> SUPER
    SUPER --> A
    SUPER --> B
    A --> TOOL
    B --> TOOL
    TOOL -->|Yes| EXEC --> SUPER
    TOOL -->|No| INT
    INT -->|Yes| HUMAN --> SUPER
    INT -->|No| OUT
    SUPER <--> CHK
    style SUPER fill:#4f46e5,stroke:#4338ca,color:#fff
    style CHK fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
    style HUMAN fill:#f59e0b,stroke:#d97706,color:#1f2937
from langgraph.graph import StateGraph, START, END
from typing import TypedDict, Annotated, Literal
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    needs_tool: bool

def router(state: AgentState) -> Literal["tool_node", "respond"]:
    if state["needs_tool"]:
        return "tool_node"
    return "respond"

builder = StateGraph(AgentState)
builder.add_node("agent", agent_node)      # agent_node, tool_node, and
builder.add_node("tool_node", tool_node)   # respond_node are your own node
builder.add_node("respond", respond_node)  # functions, defined elsewhere

builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", router)
builder.add_edge("tool_node", "agent")
builder.add_edge("respond", END)

graph = builder.compile()

The router function inspects state and returns a string matching one of the registered node names. LangGraph calls this function after the source node completes and routes execution accordingly.
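Because the router is just a function of state, you can unit-test it before wiring it into a graph. A minimal check, using plain dicts in place of the typed state:

```python
from typing import Literal

def router(state: dict) -> Literal["tool_node", "respond"]:
    # Same branching logic as above: follow the needs_tool flag.
    if state["needs_tool"]:
        return "tool_node"
    return "respond"

# Exercise both branches without building a graph.
print(router({"needs_tool": True}))   # tool_node
print(router({"needs_tool": False}))  # respond
```

Catching routing bugs at this level is much cheaper than debugging them through a full graph run.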

Router Functions with LLM Output

The most common pattern checks whether the LLM response contains tool calls:

from langchain_core.messages import AIMessage

def should_use_tools(state: AgentState) -> Literal["tools", "end"]:
    last_message = state["messages"][-1]
    if isinstance(last_message, AIMessage) and last_message.tool_calls:
        return "tools"
    return "end"

builder.add_conditional_edges("agent", should_use_tools, {
    "tools": "tool_node",
    "end": END,
})

The optional third argument to add_conditional_edges is a mapping from return values to node names. This decouples the router logic from the exact node names in the graph.
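One payoff of that decoupling: the router can return stable, abstract labels while node names change freely. A small sketch (the node name in the commented mapping is illustrative):

```python
from typing import Literal

def route(state: dict) -> Literal["continue", "stop"]:
    # Abstract decision labels, not node names.
    return "continue" if state.get("pending_work") else "stop"

# The mapping translates labels to whatever the nodes are called today:
# builder.add_conditional_edges("agent", route, {
#     "continue": "tool_executor_v2",  # rename nodes without touching route()
#     "stop": END,
# })
print(route({"pending_work": True}))   # continue
print(route({"pending_work": False}))  # stop
```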

Multi-Path Branching

Routers can return more than two destinations. Use this for classification-style routing:

def classify_query(state: AgentState) -> Literal[
    "search", "calculate", "database", "clarify"
]:
    last_msg = state["messages"][-1].content.lower()

    if "search" in last_msg or "find" in last_msg:
        return "search"
    elif "calculate" in last_msg or "math" in last_msg:
        return "calculate"
    elif "query" in last_msg or "database" in last_msg:
        return "database"
    else:
        return "clarify"

builder.add_conditional_edges("classifier", classify_query)

Each branch leads to a specialized node that handles that category of request. The keyword checks above are a simple stand-in; in practice the classifier node typically asks the LLM to categorize intent, and the router then directs execution to the appropriate handler.
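The keyword router above is easy to exercise with a lightweight stand-in for a message object (FakeMessage here is a test helper, not a LangChain class):

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class FakeMessage:
    content: str

def classify_query(state: dict) -> Literal["search", "calculate", "database", "clarify"]:
    # Same keyword logic as the router above.
    last_msg = state["messages"][-1].content.lower()
    if "search" in last_msg or "find" in last_msg:
        return "search"
    elif "calculate" in last_msg or "math" in last_msg:
        return "calculate"
    elif "query" in last_msg or "database" in last_msg:
        return "database"
    return "clarify"

print(classify_query({"messages": [FakeMessage("Find me recent papers")]}))  # search
print(classify_query({"messages": [FakeMessage("do the math on this")]}))    # calculate
print(classify_query({"messages": [FakeMessage("hello")]}))                  # clarify
```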

Implementing Cycles with Conditional Edges

Cycles are what make agents truly powerful. An agent loop typically looks like this: reason, optionally call tools, then decide whether to continue or stop:

def agent_loop_router(state: AgentState) -> Literal["tools", "finish"]:
    messages = state["messages"]
    last = messages[-1]

    if hasattr(last, "tool_calls") and last.tool_calls:
        return "tools"
    return "finish"

builder.add_node("agent", call_model)
builder.add_node("tools", execute_tools)
builder.add_node("finish", format_response)

builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", agent_loop_router)
builder.add_edge("tools", "agent")  # cycle back
builder.add_edge("finish", END)

The edge from tools back to agent creates a cycle. The agent keeps calling tools until the LLM decides it has enough information, at which point the router sends execution to the finish node.
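The control flow of that cycle can be simulated in plain Python to see why it terminates. Here fake_llm is a stand-in that requests a tool on its first two turns and then produces a final answer:

```python
from dataclasses import dataclass, field

@dataclass
class Msg:
    content: str
    tool_calls: list = field(default_factory=list)

def fake_llm(messages):
    # Request a tool on the first two turns, then give a final answer.
    turns = sum(1 for m in messages if m.tool_calls)
    if turns < 2:
        return Msg("thinking", tool_calls=[{"name": "search"}])
    return Msg("final answer")

messages = [Msg("user question")]
while True:
    response = fake_llm(messages)
    messages.append(response)
    if response.tool_calls:          # router returns "tools": execute, loop back
        messages.append(Msg("tool result"))
    else:                            # router returns "finish": exit the cycle
        break

print(messages[-1].content)  # final answer
```

The loop exits only when the model stops emitting tool calls, which is exactly the condition agent_loop_router checks.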

Guard Rails with State Counters

Prevent infinite loops by tracking iteration counts in state:


class SafeAgentState(TypedDict):
    messages: Annotated[list, add_messages]
    loop_count: int

def safe_router(state: SafeAgentState) -> Literal["tools", "finish"]:
    if state["loop_count"] >= 5:
        return "finish"
    last = state["messages"][-1]
    if hasattr(last, "tool_calls") and last.tool_calls:
        return "tools"
    return "finish"

def increment_and_call(state: SafeAgentState) -> dict:
    # llm is your chat model instance; initialize loop_count to 0
    # in the input state on the first invocation.
    response = llm.invoke(state["messages"])
    return {
        "messages": [response],
        "loop_count": state["loop_count"] + 1,
    }

This guarantees the agent terminates after at most 5 iterations, regardless of the LLM output.
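The cap is easy to verify in isolation: the router returns "finish" once the counter hits the limit, even when the last message still requests a tool (FakeMsg is a test stand-in for a chat message):

```python
from dataclasses import dataclass, field
from typing import Literal

@dataclass
class FakeMsg:
    content: str
    tool_calls: list = field(default_factory=list)

def safe_router(state: dict) -> Literal["tools", "finish"]:
    # Iteration cap takes precedence over the tool-call check.
    if state["loop_count"] >= 5:
        return "finish"
    last = state["messages"][-1]
    if getattr(last, "tool_calls", None):
        return "tools"
    return "finish"

wants_tool = FakeMsg("call search", tool_calls=[{"name": "search"}])
print(safe_router({"loop_count": 2, "messages": [wants_tool]}))  # tools
print(safe_router({"loop_count": 5, "messages": [wants_tool]}))  # finish
```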

FAQ

Can a conditional edge route to END directly?

Yes. You can return END from a router function or map a return value to END in the edge mapping. This is the standard way to terminate a workflow from a conditional branch.

What happens if the router returns a node name that does not exist?

LangGraph raises a ValueError at compile time if you use the mapping dictionary, or at runtime if the returned string does not match any registered node. Always use Literal type hints to catch mismatches early.

Can I have multiple conditional edges from the same node?

No. Each node can have only one outgoing edge definition — either a fixed edge or a conditional edge. If you need multiple branching decisions, chain them through intermediate nodes that each evaluate one condition.
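Chaining two decisions through an intermediate node can be sketched as two single-purpose routers (node and function names here are illustrative, not from the article's graph):

```python
from typing import Literal

def check_safety(state: dict) -> Literal["blocked", "triage"]:
    # First decision: refuse flagged requests outright.
    return "blocked" if state.get("flagged") else "triage"

def pick_path(state: dict) -> Literal["fast_path", "deep_research"]:
    # Second decision, evaluated by the intermediate "triage" node's edge.
    return "deep_research" if state.get("complex") else "fast_path"

# builder.add_conditional_edges("guard", check_safety)
# builder.add_conditional_edges("triage", pick_path)
print(check_safety({"flagged": False}))  # triage
print(pick_path({"complex": True}))      # deep_research
```

Each router stays trivially testable, and the graph topology makes the decision order explicit.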


#LangGraph #ConditionalRouting #AgentWorkflows #DecisionLogic #Python #AgenticAI #LearnAI #AIEngineering
