
Human-in-the-Loop with LangGraph: Approval Gates and Manual Intervention Points

Implement human approval gates in LangGraph using interrupt_before, interrupt_after, and resume patterns to build agent workflows that pause for human review before executing sensitive actions.

Why Agents Need Human Oversight

Fully autonomous agents are powerful but dangerous in production. An agent that can send emails, modify databases, or make API calls to external services should not do so without guardrails. Human-in-the-loop patterns let you build agents that pause at critical decision points, present their intended actions to a human reviewer, and only proceed after explicit approval.

LangGraph implements this through interrupts — points in the graph where execution pauses and waits for external input before continuing.

Setting Up Interrupts

Interrupts require a checkpointer because the graph state must be persisted while waiting for human input:

flowchart TD
    USER(["User input"])
    SUPER["Supervisor node<br/>routes by state"]
    A["Specialist node A<br/>research"]
    B["Specialist node B<br/>writing"]
    TOOL{"Tool call<br/>needed?"}
    EXEC["Tool executor<br/>ToolNode"]
    CHK[("Postgres<br/>checkpointer")]
    INT{"interrupt for<br/>human approval?"}
    HUMAN(["Human reviewer"])
    OUT(["Final response"])
    USER --> SUPER
    SUPER --> A
    SUPER --> B
    A --> TOOL
    B --> TOOL
    TOOL -->|Yes| EXEC --> SUPER
    TOOL -->|No| INT
    INT -->|Yes| HUMAN --> SUPER
    INT -->|No| OUT
    SUPER <--> CHK
    style SUPER fill:#4f46e5,stroke:#4338ca,color:#fff
    style CHK fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
    style HUMAN fill:#f59e0b,stroke:#d97706,color:#1f2937

from typing import TypedDict, Annotated, Literal
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to the specified recipient."""
    # Real implementation here
    return f"Email sent to {to}"

tools = [send_email]
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)
tool_node = ToolNode(tools)

class State(TypedDict):
    messages: Annotated[list, add_messages]

checkpointer = MemorySaver()

Using interrupt_before

The interrupt_before parameter on compile() pauses execution before a specified node runs:

def call_agent(state: State) -> dict:
    return {"messages": [llm.invoke(state["messages"])]}

def route(state: State) -> Literal["tools", "end"]:
    last = state["messages"][-1]
    if hasattr(last, "tool_calls") and last.tool_calls:
        return "tools"
    return "end"

builder = StateGraph(State)
builder.add_node("agent", call_agent)
builder.add_node("tools", tool_node)
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", route, {
    "tools": "tools",
    "end": END,
})
builder.add_edge("tools", "agent")

graph = builder.compile(
    checkpointer=checkpointer,
    interrupt_before=["tools"],
)

Now every time the agent wants to execute a tool, the graph pauses before the tools node runs. The caller can inspect the pending tool calls and decide whether to approve.


The Approval Loop

Here is the complete pattern for running the graph with human approval:

from langchain_core.messages import HumanMessage

config = {"configurable": {"thread_id": "approval-demo"}}

# Initial invocation — will pause before tools
result = graph.invoke(
    {"messages": [HumanMessage(content="Send an email to alice@example.com saying hello")]},
    config=config,
)

# Inspect what the agent wants to do
state = graph.get_state(config)
pending_calls = state.values["messages"][-1].tool_calls
print("Agent wants to execute:")
for call in pending_calls:
    print(f"  {call['name']}({call['args']})")

# Human approves — resume execution with None input
approved = input("Approve? (y/n): ")
if approved.lower() == "y":
    result = graph.invoke(None, config=config)
    print("Execution completed:", result["messages"][-1].content)
else:
    print("Execution rejected by human reviewer.")

Passing None to invoke() tells LangGraph to resume from the checkpoint without adding new input. Execution continues from exactly where it paused.

Using interrupt_after

Sometimes you want to pause after a node runs rather than before. This is useful for review-then-continue patterns:

graph = builder.compile(
    checkpointer=checkpointer,
    interrupt_after=["agent"],
)

With interrupt_after, the agent node completes and its output is saved to state, then execution pauses. The human can review the agent's reasoning or proposed tool calls, then resume or modify the state before continuing.

Modifying State Before Resuming

You can edit the graph state before resuming, which lets humans correct agent mistakes:

# After interrupt, modify the state
graph.update_state(
    config,
    {"messages": [HumanMessage(content="Actually, send it to bob@example.com instead")]},
)

# Resume with the modified state
result = graph.invoke(None, config=config)

This pattern is powerful for correction workflows where the human wants to adjust the agent's plan without starting over from scratch.


Selective Interrupts

Not every tool call needs approval. You can implement selective interruption by checking tool names in a custom node:

SENSITIVE_TOOLS = {"send_email", "delete_record", "make_payment"}

def check_approval(state: State) -> Literal["needs_approval", "safe"]:
    last = state["messages"][-1]
    # Guard against messages without tool_calls (e.g. a plain text reply)
    for call in getattr(last, "tool_calls", None) or []:
        if call["name"] in SENSITIVE_TOOLS:
            return "needs_approval"
    return "safe"

Route sensitive tool calls through an approval gate while letting safe tools execute automatically.

FAQ

Can I set a timeout for human approval?

LangGraph itself does not have a built-in timeout mechanism for interrupts. You implement timeouts in your application layer — for example, a web server that cancels the workflow if no approval arrives within a time window. The checkpointed state persists indefinitely until resumed or discarded.

What happens if I never resume an interrupted graph?

The state remains checkpointed and can be resumed at any time, even days later. The graph does not consume resources while paused. This makes interrupts suitable for asynchronous approval workflows where a human might review actions hours after the agent proposes them.

Can I combine interrupt_before and interrupt_after?

Yes. You can pass different node lists to each parameter. For example, interrupt before tool execution for approval and interrupt after the final response for quality review. Both can be active on the same compiled graph.


#LangGraph #HumanInTheLoop #ApprovalGates #AgentSafety #Python #AgenticAI #LearnAI #AIEngineering

