
Migrating from LangChain to OpenAI Agents SDK: A Practical Guide

A hands-on guide to migrating AI agent code from LangChain to the OpenAI Agents SDK. Covers concept mapping, code translation, testing strategies, and gradual migration paths.

Why Teams Migrate from LangChain

LangChain was the first widely adopted framework for building LLM applications, and it earned that position by moving fast. But as production requirements matured, teams encountered pain points: deep abstraction layers that obscured what prompts actually reached the model, rapidly changing APIs with frequent breaking changes, and heavyweight dependency trees.

The OpenAI Agents SDK takes a different approach: minimal abstractions, explicit control flow, and built-in primitives for the patterns that matter most in production — tool calling, agent handoffs, guardrails, and tracing.

Concept Mapping: LangChain to Agents SDK

Understanding the conceptual mapping is the first step. Here is how the core primitives translate:

flowchart LR
    INPUT(["User input"])
    AGENT["Agent<br/>name plus instructions"]
    HAND{"Handoff to<br/>another agent?"}
    SUB["Sub-agent<br/>specialist"]
    GUARD{"Guardrail<br/>passed?"}
    TOOL["Tool call"]
    SDK[("Tracing<br/>OpenAI dashboard")]
    OUT(["Final output"])
    INPUT --> AGENT --> HAND
    HAND -->|Yes| SUB --> GUARD
    HAND -->|No| GUARD
    GUARD -->|Yes| TOOL --> AGENT
    GUARD -->|Block| OUT
    AGENT --> OUT
    AGENT --> SDK
    style AGENT fill:#4f46e5,stroke:#4338ca,color:#fff
    style GUARD fill:#f59e0b,stroke:#d97706,color:#1f2937
    style SDK fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
| LangChain | OpenAI Agents SDK | Notes |
|---|---|---|
| `ChatOpenAI` | `Agent(model="gpt-4o")` | Model config lives on the Agent |
| `Tool` / `@tool` | `@function_tool` | Decorator-based, type-safe |
| `AgentExecutor` | `Runner.run()` | Manages the agent loop |
| `ConversationBufferMemory` | Conversation history in input | Explicit message list |
| `Chain` | Agent handoffs | Compose via `handoffs=[]` |
| `OutputParser` | `output_type=MyModel` | Pydantic model on the Agent |
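
The last row deserves a concrete sketch. Instead of a LangChain OutputParser, you attach a Pydantic model to the Agent and the SDK validates the final output against it. `ProductInfo` below is a hypothetical model for illustration:

```python
# Sketch: replacing an OutputParser with a typed output on the Agent.
from pydantic import BaseModel

class ProductInfo(BaseModel):
    product_id: str
    name: str
    price_usd: float
    in_stock: bool

# In the Agents SDK, the model attaches directly to the Agent:
#
# agent = Agent(
#     name="Product Assistant",
#     instructions="Extract product details.",
#     model="gpt-4o",
#     output_type=ProductInfo,
# )
# result = Runner.run_sync(agent, "Tell me about product P-1234")
# result.final_output is then a validated ProductInfo instance.

# The model validates like any other Pydantic model:
info = ProductInfo(product_id="P-1234", name="Widget Pro",
                   price_usd=49.99, in_stock=True)
```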

Translating a LangChain Agent to Agents SDK

Here is a typical LangChain agent that looks up product information:

# ── LangChain version ──
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate

@tool
def lookup_product(product_id: str) -> str:
    """Look up product details by ID."""
    # database call here
    return f"Product {product_id}: Widget Pro, $49.99, in stock"

llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a product assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_openai_tools_agent(llm, [lookup_product], prompt)
executor = AgentExecutor(agent=agent, tools=[lookup_product])
result = executor.invoke({"input": "Tell me about product P-1234"})

And here is the equivalent in the OpenAI Agents SDK:

# ── OpenAI Agents SDK version ──
from agents import Agent, Runner, function_tool

@function_tool
def lookup_product(product_id: str) -> str:
    """Look up product details by ID."""
    return f"Product {product_id}: Widget Pro, $49.99, in stock"

agent = Agent(
    name="Product Assistant",
    instructions="You are a product assistant.",
    model="gpt-4o",
    tools=[lookup_product],
)

result = Runner.run_sync(agent, "Tell me about product P-1234")
print(result.final_output)

The SDK version is roughly half the code. The agent loop, tool execution, and response parsing are handled internally by Runner.

Migrating Chains to Handoffs

LangChain uses chains to compose multiple steps. The Agents SDK uses handoffs to delegate between specialized agents.

from agents import Agent, Runner

billing_agent = Agent(
    name="Billing Agent",
    instructions="Handle billing questions. Access account data.",
    model="gpt-4o",
)

shipping_agent = Agent(
    name="Shipping Agent",
    instructions="Handle shipping and delivery questions.",
    model="gpt-4o",
)

triage_agent = Agent(
    name="Triage Agent",
    instructions="Route the user to the right specialist agent.",
    model="gpt-4o",
    handoffs=[billing_agent, shipping_agent],
)

result = Runner.run_sync(triage_agent, "Where is my order?")
print(result.final_output)

Gradual Migration Strategy

Do not rewrite everything at once. Migrate one agent or chain at a time.


# Compatibility wrapper: run both and compare
async def migrate_with_comparison(user_input: str):
    langchain_result = await executor.ainvoke({"input": user_input})
    # Use the async Runner.run here; Runner.run_sync raises inside a running event loop
    sdk_result = await Runner.run(agent, user_input)

    match = langchain_result["output"] == sdk_result.final_output
    log_comparison(user_input, langchain_result, sdk_result, match)  # your logging hook

    # Return the SDK result once comparison logs show parity
    return sdk_result.final_output
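
Exact string equality between two LLM outputs will almost never hold, so a naive `==` comparison will log nearly everything as a mismatch. A normalized comparison is a more forgiving first pass; this is a sketch to tune for your own outputs:

```python
import re

def normalized_match(a: str, b: str) -> bool:
    """Compare two model outputs ignoring case, punctuation, and whitespace runs."""
    def squash(s: str) -> str:
        s = re.sub(r"[^a-z0-9\s]", "", s.lower())  # drop punctuation
        return " ".join(s.split())                 # collapse whitespace
    return squash(a) == squash(b)

# "Product P-1234: Widget Pro, $49.99" and "product p1234 widget pro 4999"
# normalize to the same string, so they count as a match.
```

For longer free-form answers, consider an LLM-as-judge or embedding-similarity check instead of string matching.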

FAQ

Can the Agents SDK work with non-OpenAI models like LangChain does?

Yes. The Agents SDK supports any model via the LiteLLM integration. Install openai-agents[litellm] and use model strings like litellm/anthropic/claude-sonnet-4-20250514. The tool calling and handoff mechanics work the same regardless of the model provider.
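
A minimal sketch of the swap, assuming the `litellm` extra is installed and the provider's API key is set in the environment:

```python
# Sketch, assuming `pip install "openai-agents[litellm]"` and
# ANTHROPIC_API_KEY in the environment:
#
# from agents import Agent
#
# claude_agent = Agent(
#     name="Claude Assistant",
#     instructions="You are a helpful assistant.",
#     model="litellm/anthropic/claude-sonnet-4-20250514",
# )

# The model string follows the "litellm/<provider>/<model>" convention:
model = "litellm/anthropic/claude-sonnet-4-20250514"
prefix, provider, model_name = model.split("/", 2)
```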

How do I migrate LangChain memory to the Agents SDK?

The Agents SDK does not have a built-in memory abstraction. Instead, you pass conversation history explicitly as a list of messages in the input parameter. Extract your existing conversation history from LangChain memory stores and format it as standard message dicts.
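
A minimal sketch of carrying history forward, assuming the standard role/content message-dict format:

```python
# History exported from a LangChain memory store, as plain message dicts.
history = [
    {"role": "user", "content": "Tell me about product P-1234"},
    {"role": "assistant", "content": "Product P-1234 is the Widget Pro, $49.99."},
]

# New turn: append the user message and pass the whole list to the Runner.
history.append({"role": "user", "content": "Is it in stock?"})

# result = Runner.run_sync(agent, history)
# After the run, result.to_input_list() returns the updated history
# to carry into the next turn.
```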

What about LangChain's document loaders and vector store integrations?

Those are data pipeline tools, not agent framework features. You can keep using LangChain's document loaders and vector stores alongside the Agents SDK. Wrap the retrieval logic in a @function_tool and the agent calls it like any other tool.
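
A sketch of that wrapping, where `vector_store` and `search_docs` are hypothetical names standing in for your existing LangChain retrieval setup:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    """Stand-in for a LangChain Document (real ones carry page_content too)."""
    page_content: str

def format_passages(docs) -> str:
    """Join retrieved passages into the single string a tool must return."""
    return "\n\n".join(d.page_content for d in docs)

# In practice, the tool delegates to your existing vector store:
#
# @function_tool
# def search_docs(query: str) -> str:
#     """Search the knowledge base and return the top matching passages."""
#     return format_passages(vector_store.similarity_search(query, k=3))
```

The agent then calls `search_docs` like any other tool, and your LangChain data pipeline stays untouched.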


#LangChain #OpenAIAgentsSDK #Migration #Python #FrameworkMigration #AgenticAI #LearnAI #AIEngineering
