LangChain Tool Creation: @tool Decorator, StructuredTool, and Custom Tools

Master LangChain tool creation patterns including the @tool decorator, StructuredTool class, Pydantic input schemas, async tools, and error handling for production-grade agent tools.

Tools Are How Agents Interact with the World

An LLM can reason and generate text, but it cannot query a database, call an API, or read a file on its own. Tools bridge that gap. When you give an agent tools, the LLM can decide to invoke a function, receive its result, and fold that information into its reasoning. The quality of your tool definitions (names, descriptions, and input schemas) directly determines how reliably the agent uses them.

The @tool Decorator

The simplest way to create a LangChain tool is the @tool decorator. It extracts the function name, docstring, and type annotations automatically.

Before writing any code, it helps to see where a tool sits in the agent loop: the model decides whether to call a tool, the runtime executes it, and the result is appended to the conversation for the next model call.

flowchart TD
    USER(["User message"])
    LLM["LLM call<br/>with tools schema"]
    DECIDE{"Model wants<br/>to call a tool?"}
    EXEC["Execute tool<br/>sandboxed runtime"]
    RESULT["Append tool_result<br/>to messages"]
    GUARD{"Output passes<br/>guardrails?"}
    DONE(["Final reply"])
    BLOCK(["Refuse and log"])
    USER --> LLM --> DECIDE
    DECIDE -->|Yes| EXEC --> RESULT --> LLM
    DECIDE -->|No| GUARD
    GUARD -->|Yes| DONE
    GUARD -->|No| BLOCK
    style LLM fill:#4f46e5,stroke:#4338ca,color:#fff
    style EXEC fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style GUARD fill:#f59e0b,stroke:#d97706,color:#1f2937
    style DONE fill:#059669,stroke:#047857,color:#fff
    style BLOCK fill:#dc2626,stroke:#b91c1c,color:#fff

A minimal tool looks like this:

from langchain_core.tools import tool

@tool
def search_database(query: str, limit: int = 10) -> str:
    """Search the product database for items matching the query.

    Args:
        query: The search terms to look for.
        limit: Maximum number of results to return.
    """
    # Implementation here
    results = db.search(query, limit=limit)
    return f"Found {len(results)} products: {results}"

The docstring is critical — the LLM reads it to decide when and how to use the tool. Include what the tool does and what each parameter means. Type annotations define the input schema that the LLM must follow.
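
Because the decorator builds the schema from the signature and docstring, it is worth inspecting the result before handing the tool to a model. A quick sketch using the attributes LangChain exposes on the tool object (the direct invoke assumes the db backend from the snippet above actually exists):

# Inspect what the decorator extracted
print(search_database.name)         # "search_database"
print(search_database.description)  # taken from the docstring
print(search_database.args)         # JSON-schema properties for query and limit

# Call the tool directly, bypassing the LLM; handy for quick manual checks
print(search_database.invoke({"query": "wireless keyboard", "limit": 3}))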

You can customize the name and control whether the result is returned directly to the user:

@tool("product_search", return_direct=True)
def search_database(query: str) -> str:
    """Search for products by name or category."""
    return do_search(query)

Setting return_direct=True means the tool's output is returned as the final answer without further LLM processing. This is useful for tools that produce user-facing output.
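
To see the customized name in play, here is a sketch of binding the tool to a chat model. It assumes langchain-openai is installed, and the model name is only a placeholder:

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")
model_with_tools = model.bind_tools([search_database])

response = model_with_tools.invoke("Find waterproof hiking boots")
print(response.tool_calls)
# e.g. [{"name": "product_search", "args": {"query": "waterproof hiking boots"}, ...}]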

Pydantic Input Schemas

For more complex inputs, define a Pydantic model as the input schema. This gives you validation, default values, and detailed field descriptions.

from langchain_core.tools import tool
from pydantic import BaseModel, Field

class FlightSearchInput(BaseModel):
    origin: str = Field(description="Airport code of departure city (e.g., SFO)")
    destination: str = Field(description="Airport code of arrival city (e.g., JFK)")
    date: str = Field(description="Travel date in YYYY-MM-DD format")
    max_stops: int = Field(default=1, description="Maximum number of stops allowed")

@tool("search_flights", args_schema=FlightSearchInput)
def search_flights(
    origin: str, destination: str, date: str, max_stops: int = 1
) -> str:
    """Search for available flights between two airports on a given date."""
    flights = flight_api.search(origin, destination, date, max_stops)
    return format_flight_results(flights)

The Field(description=...) values are included in the tool schema that the LLM sees, so write them to be informative.
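
If you want to see roughly what the model receives, langchain_core can convert the tool into the OpenAI function-calling format. A sketch; the exact payload varies by provider:

import json
from langchain_core.utils.function_calling import convert_to_openai_tool

print(json.dumps(convert_to_openai_tool(search_flights), indent=2))
# {"type": "function", "function": {"name": "search_flights",
#   "description": "Search for available flights between two airports on a given date.",
#   "parameters": {"properties": {"origin": {"description": "Airport code of departure city (e.g., SFO)", ...}, ...}}}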

StructuredTool: Programmatic Tool Creation

When you need to build tools dynamically or from configuration, use StructuredTool.from_function.

from langchain_core.tools import StructuredTool
from pydantic import BaseModel, Field

class CalculatorInput(BaseModel):
    expression: str = Field(description="A mathematical expression to evaluate")

def calculate(expression: str) -> str:
    # NOTE: eval on arbitrary strings is unsafe; restrict or sandbox it in production
    try:
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"

calculator_tool = StructuredTool.from_function(
    func=calculate,
    name="calculator",
    description="Evaluate mathematical expressions using Python syntax.",
    args_schema=CalculatorInput,
)

This approach is equivalent to the @tool decorator but gives you programmatic control over every attribute.
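
As a quick sanity check, you can invoke the tool directly; the input is validated against CalculatorInput first:

print(calculator_tool.invoke({"expression": "2 ** 10"}))  # "1024"
print(calculator_tool.name, calculator_tool.args)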

Async Tools

For tools that call external APIs, use async implementations to avoid blocking.

from langchain_core.tools import tool
import httpx

@tool
async def fetch_weather(city: str) -> str:
    """Get the current weather for a city."""
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"https://api.weather.example.com/current?city={city}"
        )
        data = response.json()
        return f"{city}: {data['temp']}F, {data['condition']}"

When an agent calls this tool during an async execution (via ainvoke), the async version is used automatically. You can also provide both sync and async implementations:

calculator_tool = StructuredTool.from_function(
    func=calculate_sync,
    coroutine=calculate_async,
    name="calculator",
    description="Evaluate math expressions.",
)
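
Either path can then be exercised directly. A sketch assuming calculate_sync and calculate_async both take an expression string:

import asyncio

print(calculator_tool.invoke({"expression": "3 * 7"}))  # runs calculate_sync

async def main():
    print(await calculator_tool.ainvoke({"expression": "3 * 7"}))  # runs calculate_async

asyncio.run(main())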

Error Handling in Tools

Agents are more robust when tools handle errors gracefully instead of throwing exceptions.

@tool
def query_database(sql: str) -> str:
    """Execute a read-only SQL query against the analytics database."""
    if not sql.strip().upper().startswith("SELECT"):
        return "Error: Only SELECT queries are allowed."
    try:
        results = db.execute(sql)
        return format_results(results)
    except Exception as e:
        return f"Query failed: {str(e)}. Please check the syntax."

Returning error messages as strings lets the agent see what went wrong and adjust its approach. If you raise an exception instead, the agent loop may terminate or retry blindly.

You can also set handle_tool_error=True when constructing a tool; any ToolException raised inside it is then caught and converted to an error message for the agent instead of propagating.
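
A sketch of that pattern, with a hypothetical orders lookup standing in for the real data source:

from langchain_core.tools import StructuredTool, ToolException

def lookup_order(order_id: str) -> str:
    """Look up an order by its ID."""
    order = orders.get(order_id)  # hypothetical data source
    if order is None:
        raise ToolException(f"No order found with ID {order_id}.")
    return str(order)

order_tool = StructuredTool.from_function(
    func=lookup_order,
    name="lookup_order",
    description="Look up an order by its ID.",
    handle_tool_error=True,  # ToolException text becomes the tool's output for the agent
)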

Building a Tool Registry

For agents with many tools, organize them into a registry pattern.

from langchain_core.tools import tool

def build_tools(config: dict) -> list:
    tools = []

    if config.get("enable_search"):
        @tool
        def web_search(query: str) -> str:
            """Search the web for information."""
            return search_api.query(query)
        tools.append(web_search)

    if config.get("enable_database"):
        @tool
        def sql_query(query: str) -> str:
            """Query the database."""
            return db.execute(query)
        tools.append(sql_query)

    return tools

# Feature-flag tools per deployment
tools = build_tools({"enable_search": True, "enable_database": False})
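
The resulting list plugs straight into an agent. A sketch using LangGraph's prebuilt ReAct agent, assuming langgraph and langchain-openai are installed and the model name is a placeholder:

from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools)
result = agent.invoke({"messages": [("user", "Find recent news about solar storage")]})
print(result["messages"][-1].content)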

FAQ

How many tools can I give an agent?

There is no hard limit, but more tools mean a larger system prompt and more decisions for the LLM. In practice, agents work best with 5-15 well-defined tools. If you have more, consider using a tool selector or organizing tools into groups that are loaded based on the conversation context.

Should tool descriptions be short or detailed?

Detailed but concise. The LLM uses the description to decide when a tool is appropriate and how to call it. Include what the tool does, what inputs it expects, and any constraints. Avoid vague descriptions like "A useful tool" — be specific about the use case.

How do I test LangChain tools in isolation?

Call the tool directly using tool.invoke({"param": "value"}) or await tool.ainvoke({"param": "value"}). This runs the underlying function with schema validation. Write unit tests that call tools directly before integrating them into an agent.
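
A minimal pytest-style sketch, assuming the calculator tool from earlier is importable in the test module:

def test_calculator_evaluates_expressions():
    assert calculator_tool.invoke({"expression": "6 * 7"}) == "42"

def test_calculator_returns_errors_as_text():
    assert calculator_tool.invoke({"expression": "1 / 0"}).startswith("Error")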


#LangChain #ToolCreation #AIAgents #Pydantic #Python #AgenticAI #LearnAI #AIEngineering
