
Advanced Handoff Patterns: Conditional Handoffs, Handoff Chains, and Dynamic Agent Selection

Master complex agent routing with conditional handoff logic, multi-step handoff chains, runtime agent creation, and context transformation between agents in the OpenAI Agents SDK.

Beyond Simple Handoffs

A basic handoff passes control from one agent to another with a static list of targets. That works for demos, but production multi-agent systems need conditional routing, chained handoffs through multiple specialists, and agents created dynamically at runtime based on context.

The OpenAI Agents SDK provides the building blocks for all of these patterns. This guide shows you how to implement each one.

Conditional Handoffs with Filters

The simplest advanced pattern is a conditional handoff: an agent only offers a handoff target when certain criteria are met. You implement this with a predicate function that enables or disables the handoff at runtime based on the run context.

```mermaid
flowchart TD
    INPUT(["Task input"])
    SUPER["Supervisor agent<br/>plans plus monitors"]
    W1["Worker 1<br/>research"]
    W2["Worker 2<br/>code"]
    W3["Worker 3<br/>writing"]
    CRITIC{"Output meets<br/>rubric?"}
    REWORK["Rework or<br/>retry path"]
    SHARED[("Shared scratchpad<br/>and memory")]
    OUT(["Final result"])
    INPUT --> SUPER
    SUPER --> W1 --> CRITIC
    SUPER --> W2 --> CRITIC
    SUPER --> W3 --> CRITIC
    W1 --> SHARED
    W2 --> SHARED
    W3 --> SHARED
    SHARED --> SUPER
    CRITIC -->|Pass| OUT
    CRITIC -->|Fail| REWORK --> SUPER
    style SUPER fill:#4f46e5,stroke:#4338ca,color:#fff
    style CRITIC fill:#f59e0b,stroke:#d97706,color:#1f2937
    style OUT fill:#059669,stroke:#047857,color:#fff
    style SHARED fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
```

from agents import Agent, handoff

def requires_premium(ctx, agent) -> bool:
    """Enable the premium handoff only when the user has premium access."""
    user_tier = ctx.context.get("user_tier", "free")
    return user_tier == "premium"

premium_agent = Agent(
    name="premium_support",
    instructions="You provide detailed, priority support to premium customers.",
)

free_agent = Agent(
    name="free_support",
    instructions="You provide standard support with links to documentation.",
)

triage_agent = Agent(
    name="triage",
    instructions="""You handle incoming requests. Route premium users to
    premium_support. Route everyone else to free_support.""",
    handoffs=[
        handoff(premium_agent, is_enabled=requires_premium),
        free_agent,
    ],
)
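The predicate is plain Python, so you can sanity-check it before wiring it into an agent. A minimal sketch, using `types.SimpleNamespace` as a stand-in for the SDK's context wrapper (the `agent` argument is unused here, so it defaults to `None`):

```python
from types import SimpleNamespace

def requires_premium(ctx, agent=None) -> bool:
    """Enable the premium handoff only when the user has premium access."""
    return ctx.context.get("user_tier", "free") == "premium"

# Stub contexts standing in for the SDK's run context wrapper
premium_ctx = SimpleNamespace(context={"user_tier": "premium"})
free_ctx = SimpleNamespace(context={})

print(requires_premium(premium_ctx))  # True
print(requires_premium(free_ctx))     # False
```

Note the default: a user with no `user_tier` key is treated as free, so a missing field fails closed rather than granting premium routing.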

Handoff Chains: Multi-Step Processing Pipelines

Some workflows need an input to pass through multiple agents in sequence — each one enriching or transforming the data before the next step.

# Stage 1: Extract structured data from raw input
extractor = Agent(
    name="data_extractor",
    instructions="""Extract key entities from the user's message:
    names, dates, amounts, and categories. Pass to the validator.""",
    handoffs=[],  # Will be set after validator is defined
)

# Stage 2: Validate extracted data
validator = Agent(
    name="data_validator",
    instructions="""Validate the extracted data for consistency.
    Check date formats, verify amounts are positive, flag missing fields.
    Pass validated data to the processor.""",
    handoffs=[],
)

# Stage 3: Process and respond
processor = Agent(
    name="processor",
    instructions="""Take the validated data and execute the requested
    action. Confirm completion to the user.""",
)

# Wire the chain
validator.handoffs = [processor]
extractor.handoffs = [validator]

This creates a pipeline: extractor -> validator -> processor. Each agent focuses on one responsibility.
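If it helps to see the shape of the pipeline without the SDK, the same three stages can be sketched as plain function composition. The stage functions below are hypothetical stand-ins for the agents, not SDK code:

```python
def extract(raw: str) -> dict:
    # Stand-in for the extractor agent: pull structured fields from raw text
    return {"amount": 250.0, "date": "2026-03-01", "raw": raw}

def validate(data: dict) -> dict:
    # Stand-in for the validator: flag problems rather than silently fixing them
    data["valid"] = data.get("amount", 0) > 0 and "date" in data
    return data

def process(data: dict) -> str:
    # Stand-in for the processor: act only on validated data
    return "done" if data["valid"] else "rejected"

result = process(validate(extract("Pay $250 on March 1")))
print(result)  # done
```

Each stage takes the previous stage's output as its only input, which is exactly the discipline the agent chain enforces: one responsibility per step, no stage reaching around another.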

Dynamic Agent Selection at Runtime

Static handoff lists do not cover scenarios where the target agent depends on runtime data — like routing to a language-specific agent based on detected input language.

from agents import Agent, Runner

# Pre-built specialist agents
specialists = {
    "python": Agent(name="python_expert", instructions="You are a Python expert."),
    "javascript": Agent(name="js_expert", instructions="You are a JavaScript expert."),
    "rust": Agent(name="rust_expert", instructions="You are a Rust expert."),
    "go": Agent(name="go_expert", instructions="You are a Go expert."),
}

def build_router_agent(detected_language: str) -> Agent:
    """Create a router that hands off to the right specialist."""
    target = specialists.get(detected_language, specialists["python"])

    return Agent(
        name="language_router",
        instructions=f"""The user is asking about {detected_language}.
        Hand off to the appropriate specialist immediately.""",
        handoffs=[target],
    )

async def handle_question(question: str, language: str):
    router = build_router_agent(language)
    result = await Runner.run(router, input=question)
    return result.final_output
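The `language` argument has to come from somewhere. A naive keyword heuristic is enough to illustrate the idea; in production you would more likely use a classifier model or the user's own selection. Everything here is illustrative, not SDK API:

```python
def detect_language(question: str) -> str:
    """Very rough keyword-based language detection (illustration only)."""
    keywords = {
        "borrow checker": "rust",
        "goroutine": "go",
        "promise": "javascript",
        "decorator": "python",
    }
    lowered = question.lower()
    for phrase, lang in keywords.items():
        if phrase in lowered:
            return lang
    return "python"  # default specialist, mirroring the router's fallback

print(detect_language("Why does the borrow checker reject this?"))  # rust
```

The detected value then feeds straight into `handle_question(question, language)` above.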

Handoff with Context Transformation

Sometimes the receiving agent needs more than the raw conversation. You can attach an on_handoff callback that runs at the moment of handoff, which is a natural place to write a condensed summary into the shared context before the target agent starts.

from agents import Agent, handoff, RunContextWrapper

async def summarize_for_handoff(ctx: RunContextWrapper):
    """Compress recent conversation history into a summary for the next agent."""
    history = ctx.context.get("conversation_history", [])
    ctx.context["handoff_summary"] = " | ".join(
        f"{msg['role']}: {msg['content'][:100]}" for msg in history[-5:]
    )

escalation_agent = Agent(
    name="escalation",
    instructions="""You handle escalated issues. Check the handoff_summary
    in context to understand what has been tried so far.""",
)

frontline_agent = Agent(
    name="frontline",
    instructions="You handle initial customer requests. Escalate complex issues.",
    handoffs=[
        handoff(
            escalation_agent,
            on_handoff=summarize_for_handoff,
            tool_description_override="Escalate to senior support with conversation summary",
        ),
    ],
)
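The summarizer is easy to verify in isolation. Here is the same truncate-and-join logic rewritten as a plain function (minus the SDK wrapper) so you can see the summary it writes into context:

```python
def build_summary(history: list[dict]) -> str:
    # Same logic as summarize_for_handoff: last five messages,
    # each message body truncated to 100 characters
    return " | ".join(
        f"{msg['role']}: {msg['content'][:100]}" for msg in history[-5:]
    )

history = [
    {"role": "user", "content": "My invoice is wrong"},
    {"role": "assistant", "content": "I checked billing and see no error"},
    {"role": "user", "content": "Please escalate this"},
]
print(build_summary(history))
# user: My invoice is wrong | assistant: I checked billing and see no error | user: Please escalate this
```

Truncating each message and keeping only the last five keeps the summary bounded regardless of how long the frontline conversation ran.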

Circular Handoffs with Guard Rails

Agents can hand back to each other, but you need a guard to prevent infinite loops.


class HandoffCounter:
    def __init__(self, max_handoffs: int = 5):
        self.count = 0
        self.max = max_handoffs

    def increment(self):
        self.count += 1
        if self.count >= self.max:
            raise RuntimeError(f"Max handoffs ({self.max}) exceeded")

counter = HandoffCounter(max_handoffs=3)

def count_handoff(ctx):
    """Runs on every handoff; raises once the limit is hit."""
    counter.increment()

reviewer = Agent(
    name="reviewer",
    instructions="""Review the draft. If it needs revision, hand back to
    the writer with feedback. If it is good, respond with the final version.""",
)

writer = Agent(
    name="writer",
    instructions="Write or revise content based on feedback. Send to reviewer when done.",
    handoffs=[handoff(reviewer, on_handoff=count_handoff)],
)

# Circular reference -- the on_handoff counter aborts the loop after three hops
reviewer.handoffs = [handoff(writer, on_handoff=count_handoff)]
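The counter's behavior is worth checking on its own before trusting it to break a loop. Repeating the class here so the snippet runs standalone:

```python
class HandoffCounter:
    def __init__(self, max_handoffs: int = 5):
        self.count = 0
        self.max = max_handoffs

    def increment(self):
        self.count += 1
        if self.count >= self.max:
            raise RuntimeError(f"Max handoffs ({self.max}) exceeded")

counter = HandoffCounter(max_handoffs=3)
counter.increment()  # 1 -- ok
counter.increment()  # 2 -- ok
try:
    counter.increment()  # 3 -- hits the limit
except RuntimeError as e:
    print(e)  # Max handoffs (3) exceeded
```

Note the `>=` comparison: with `max_handoffs=3`, the third handoff itself raises, so the writer-reviewer pair gets at most two round trips before the run aborts.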

FAQ

How do I prevent infinite handoff loops?

Implement a counter in your shared context that tracks the number of handoffs. Before each handoff, check the counter and raise an exception or return a fallback response if it exceeds your threshold. The SDK does not enforce a limit automatically.
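A sketch of that context-based variant: the count lives in the shared context dict rather than in a Python object, so it travels with the run across agents. A plain dict stands in for the SDK context here:

```python
MAX_HANDOFFS = 3

def guard_handoff(context: dict) -> None:
    """Increment the shared handoff counter and abort past the threshold."""
    context["handoff_count"] = context.get("handoff_count", 0) + 1
    if context["handoff_count"] > MAX_HANDOFFS:
        raise RuntimeError("Handoff limit reached; return a fallback response")

shared = {}
for _ in range(3):
    guard_handoff(shared)   # first three handoffs pass
print(shared["handoff_count"])  # 3
```

Call `guard_handoff` from each handoff's on_handoff callback and the fourth handoff in a run will raise, whichever pair of agents triggers it.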

Can I pass data between agents during a handoff?

Yes. Use the shared RunContext to store data that persists across handoffs. Each agent reads from and writes to the same context dictionary, so the receiving agent can access anything the sender stored there.

What happens if a handoff target agent fails?

The error propagates up through the Runner. Wrap your Runner.run call in a try/except to catch failures and implement fallback logic — like routing to a general-purpose agent or returning a graceful error message to the user.
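A sketch of that fallback pattern. `run_agent` here is an injected stand-in for your actual `Runner.run` call, so the wrapper can be exercised without a live model:

```python
import asyncio

async def run_with_fallback(run_agent, fallback: str) -> str:
    """Run the primary agent; return a canned reply on any failure."""
    try:
        return await run_agent()
    except Exception:
        # In production, log the error and consider routing to a
        # general-purpose agent instead of a static string
        return fallback

async def flaky_agent():
    # Simulates a failing handoff target
    raise TimeoutError("model call timed out")

result = asyncio.run(run_with_fallback(flaky_agent, "Sorry, please try again later."))
print(result)  # Sorry, please try again later.
```

In real code you would pass something like `lambda: Runner.run(agent, input=question)` as `run_agent`, and likely narrow the `except` clause to the SDK's specific exception types rather than bare `Exception`.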


#OpenAIAgentsSDK #AgentHandoffs #MultiAgentSystems #Routing #Python #Orchestration #AgenticAI #LearnAI #AIEngineering
