
Function Tools: Turn Any Python Function into an Agent Tool

Learn how to use the @function_tool decorator to give OpenAI agents the ability to call Python functions. Covers type hints, docstrings, timeouts, and Pydantic validation.

Tools Are What Make Agents Useful

A language model without tools can only generate text. Tools give agents the ability to interact with the real world — query databases, call APIs, process files, execute calculations, and take actions. The OpenAI Agents SDK makes it trivial to turn any Python function into a tool that agents can call.

The primary mechanism is the @function_tool decorator, which automatically generates the JSON schema that the LLM needs to understand how to call your function.

The @function_tool Decorator

The diagram below shows where tool calls sit in the overall agent loop: the model decides whether to hand off, guardrails run, and tool results feed back into the agent.

flowchart LR
    INPUT(["User input"])
    AGENT["Agent<br/>name plus instructions"]
    HAND{"Handoff to<br/>another agent?"}
    SUB["Sub-agent<br/>specialist"]
    GUARD{"Guardrail<br/>passed?"}
    TOOL["Tool call"]
    SDK[("Tracing<br/>OpenAI dashboard")]
    OUT(["Final output"])
    INPUT --> AGENT --> HAND
    HAND -->|Yes| SUB --> GUARD
    HAND -->|No| GUARD
    GUARD -->|Yes| TOOL --> AGENT
    GUARD -->|Block| OUT
    AGENT --> OUT
    AGENT --> SDK
    style AGENT fill:#4f46e5,stroke:#4338ca,color:#fff
    style GUARD fill:#f59e0b,stroke:#d97706,color:#1f2937
    style SDK fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff

At its simplest, you decorate a function and add it to an agent's tool list:
from agents import Agent, Runner, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Get the current weather for a given city.

    Args:
        city: The name of the city to check weather for.
    """
    # In production, this would call a real weather API
    return f"The weather in {city} is 72F and sunny."

agent = Agent(
    name="Weather Bot",
    instructions="Help users check the weather. Use the get_weather tool.",
    tools=[get_weather],
)

result = Runner.run_sync(agent, "What is the weather in Tokyo?")
print(result.final_output)

When the agent receives "What is the weather in Tokyo?", the LLM recognizes it should call the get_weather tool with city="Tokyo", receives the result, and formulates a natural language response.

How Schema Generation Works

The @function_tool decorator inspects your function to automatically generate a JSON schema:

  1. Function name becomes the tool name
  2. Type hints become the parameter types in the schema
  3. Docstring becomes the tool description
  4. Parameter descriptions are extracted from the docstring
  5. Default values mark parameters as optional
@function_tool
def search_products(
    query: str,
    category: str = "all",
    max_results: int = 10,
    in_stock_only: bool = True,
) -> str:
    """Search the product catalog.

    Args:
        query: Search terms to find products.
        category: Product category to filter by. Defaults to "all".
        max_results: Maximum number of results to return.
        in_stock_only: Whether to only show in-stock items.
    """
    return f"Found products matching '{query}' in {category}"

This generates a schema where query is required (no default value) and category, max_results, and in_stock_only are optional with their defaults.
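To build intuition for what the decorator does under the hood, here is a minimal stdlib sketch of deriving a schema-like spec from a function signature. This is a simplification: the SDK's real implementation also parses docstrings, nested Pydantic models, and strict-mode rules, and `describe` and `type_map` are names invented for this sketch.

```python
import inspect
from typing import get_type_hints

def describe(fn):
    """Derive a minimal JSON-schema-like spec from a function signature."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    properties, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        properties[name] = {"type": type_map.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:
            # No default value means the parameter is required
            required.append(name)
    return {"type": "object", "properties": properties, "required": required}

def search_products(query: str, category: str = "all", max_results: int = 10) -> str:
    return ""

schema = describe(search_products)
# query is required; category and max_results are optional
```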

Supported Type Hints

The SDK supports all standard Python types for tool parameters:

from typing import Optional

@function_tool
def example_tool(
    name: str,                    # String parameter
    count: int,                   # Integer parameter
    ratio: float,                 # Float parameter
    enabled: bool,                # Boolean parameter
    tags: list[str],              # List of strings
    metadata: dict[str, str],     # Dictionary
    optional_note: Optional[str] = None,  # Optional parameter
) -> str:
    """An example showing all supported types."""
    return "OK"

For complex parameter structures, use Pydantic models:

from pydantic import BaseModel, Field
from agents import function_tool

class SearchFilters(BaseModel):
    min_price: float = Field(description="Minimum price in USD")
    max_price: float = Field(description="Maximum price in USD")
    brands: list[str] = Field(description="List of brand names to include")

@function_tool
def advanced_search(query: str, filters: SearchFilters) -> str:
    """Search products with advanced filters.

    Args:
        query: Search terms.
        filters: Advanced filtering options.
    """
    return f"Searching for '{query}' with price range ${filters.min_price}-${filters.max_price}"
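To see what the LLM receives for the nested model, you can dump the Pydantic schema yourself. `model_json_schema` is the Pydantic v2 method; the field descriptions it emits are what end up in the tool's parameter schema.

```python
import json
from pydantic import BaseModel, Field

class SearchFilters(BaseModel):
    min_price: float = Field(description="Minimum price in USD")
    max_price: float = Field(description="Maximum price in USD")
    brands: list[str] = Field(description="List of brand names to include")

# The nested model's schema, including field descriptions,
# becomes part of the tool's parameter schema
print(json.dumps(SearchFilters.model_json_schema(), indent=2))
```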

Docstring Parsing Styles

The SDK extracts parameter descriptions from docstrings. It supports three common formats: Google, Sphinx, and NumPy.

Google Style

@function_tool
def create_task(title: str, priority: int) -> str:
    """Create a new task in the project.

    Args:
        title: The title of the task.
        priority: Priority level from 1 (low) to 5 (critical).
    """
    return f"Created task: {title} (P{priority})"

Sphinx Style

@function_tool
def create_task(title: str, priority: int) -> str:
    """Create a new task in the project.

    :param title: The title of the task.
    :param priority: Priority level from 1 (low) to 5 (critical).
    """
    return f"Created task: {title} (P{priority})"

NumPy Style

@function_tool
def create_task(title: str, priority: int) -> str:
    """Create a new task in the project.

    Parameters
    ----------
    title : str
        The title of the task.
    priority : int
        Priority level from 1 (low) to 5 (critical).
    """
    return f"Created task: {title} (P{priority})"

All three produce equivalent tool schemas. Use whichever style matches your project's conventions.
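As a rough illustration of how the Google style can be parsed, a few lines of stdlib code pull out the `Args:` section. `parse_google_args` is a name invented for this sketch; the SDK's actual parser is far more robust and handles all three styles.

```python
def parse_google_args(docstring: str) -> dict[str, str]:
    """Extract `name: description` pairs from a Google-style Args: block."""
    args, in_args = {}, False
    for line in docstring.splitlines():
        stripped = line.strip()
        if stripped == "Args:":
            in_args = True
        elif in_args:
            if not stripped:
                break  # blank line ends the Args: section
            name, _, desc = stripped.partition(":")
            args[name.strip()] = desc.strip()
    return args

doc = """Create a new task in the project.

Args:
    title: The title of the task.
    priority: Priority level from 1 (low) to 5 (critical).
"""
print(parse_google_args(doc))
```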

Pydantic Field Constraints

For more precise parameter validation, define a Pydantic model whose fields use pydantic.Field constraints, then use that model as the tool's input:

from pydantic import BaseModel, Field
from agents import function_tool

class BookingRequest(BaseModel):
    guest_name: str = Field(
        description="Full name of the guest",
        min_length=2,
        max_length=100,
    )
    room_type: str = Field(
        description="Type of room to book",
        pattern="^(single|double|suite)$",
    )
    nights: int = Field(
        description="Number of nights to stay",
        ge=1,
        le=30,
    )
    special_requests: str = Field(
        default="",
        description="Any special requests or accommodations",
        max_length=500,
    )

@function_tool
def book_room(request: BookingRequest) -> str:
    """Book a hotel room for a guest.

    Args:
        request: The booking details.
    """
    return f"Booked {request.room_type} room for {request.guest_name} for {request.nights} nights."

The field constraints are included in the JSON schema sent to the LLM, helping the model generate valid arguments.
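The constraints are also enforced at runtime: when arguments are validated against the model, out-of-range values raise a ValidationError before your function body runs, and that failure can be reported back to the LLM. A quick way to see the behavior, using a trimmed-down BookingRequest:

```python
from pydantic import BaseModel, Field, ValidationError

class BookingRequest(BaseModel):
    room_type: str = Field(pattern="^(single|double|suite)$")
    nights: int = Field(ge=1, le=30)

try:
    BookingRequest(room_type="penthouse", nights=45)
except ValidationError as e:
    # Two failures: pattern mismatch on room_type, and nights > 30
    print(e.error_count(), "validation errors")
```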

Async Tools

Tools can be async functions, which is essential when they perform I/O operations:

import httpx
from agents import function_tool

@function_tool
async def fetch_url(url: str) -> str:
    """Fetch the content of a web page.

    Args:
        url: The URL to fetch.
    """
    async with httpx.AsyncClient() as client:
        response = await client.get(url, timeout=10)
        response.raise_for_status()
        return response.text[:2000]  # Truncate to avoid token limits

@function_tool
async def query_database(sql: str) -> str:
    """Execute a read-only SQL query.

    Args:
        sql: The SQL query to execute.
    """
    # get_db_connection() is a placeholder for your async database
    # driver's connection factory (e.g. an asyncpg pool)
    async with get_db_connection() as conn:
        rows = await conn.fetch(sql)
        return str(rows)

Async tools are executed concurrently when the model issues parallel tool calls.
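The concurrency here is plain asyncio. The sketch below shows why two overlapping awaits finish in roughly the time of one; `fetch` is a stand-in for a real async tool body.

```python
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for network I/O
    return f"{name} done"

async def main() -> tuple[list[str], float]:
    start = time.perf_counter()
    # Two 0.1s calls run concurrently, so total time is ~0.1s, not 0.2s
    results = await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results, f"{elapsed:.2f}s")
```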

Tool Timeouts

Long-running tools should have timeouts to prevent the agent loop from hanging:

import asyncio

@function_tool
async def slow_api_call(query: str) -> str:
    """Call a potentially slow external API.

    Args:
        query: The query to send to the API.
    """
    async with httpx.AsyncClient() as client:
        response = await asyncio.wait_for(
            client.get("https://slow-api.example.com/search", params={"q": query}),
            timeout=10,
        )
        return response.text

The @function_tool decorator does not take a timeout argument, so wrap slow I/O yourself with asyncio.wait_for (or use httpx's own timeout parameter). If the call exceeds the limit, a TimeoutError is raised inside the tool; by default the SDK catches exceptions raised by tools and reports them back to the LLM as an error message, so the agent can decide to retry or handle the failure gracefully.

Custom Tool Names

By default, the tool name is the function name. Override it with the name_override parameter (description_override works the same way for the tool description):

@function_tool(name_override="search_knowledge_base")
def kb_search(query: str) -> str:
    """Search the internal knowledge base.

    Args:
        query: Search query.
    """
    return "Results from knowledge base..."

This is useful when the function name is not descriptive enough for the LLM, or when you want to avoid exposing internal naming conventions.


Accessing Agent Context in Tools

Tools can access the run context by accepting a RunContextWrapper as their first parameter:

from agents import function_tool, RunContextWrapper
from dataclasses import dataclass

@dataclass
class UserSession:
    user_id: str
    tenant_id: str
    permissions: list[str]

@function_tool
async def get_user_orders(
    context: RunContextWrapper[UserSession],
    limit: int = 10,
) -> str:
    """Get recent orders for the current user.

    Args:
        limit: Maximum number of orders to return.
    """
    session = context.context
    # Use session.user_id to query the correct user's orders
    return f"Orders for user {session.user_id}: [...]"

The RunContextWrapper parameter is automatically detected and excluded from the tool's JSON schema, so the LLM never sees it. You supply the context object when starting the run, e.g. Runner.run(agent, "Show my orders", context=session).

A Complete Multi-Tool Agent

Here is a practical example combining multiple tools:

import asyncio
from agents import Agent, Runner, function_tool

@function_tool
def add_task(title: str, assignee: str, priority: str = "medium") -> str:
    """Add a new task to the project board.

    Args:
        title: Task title.
        assignee: Team member to assign the task to.
        priority: Priority level (low, medium, high, critical).
    """
    return f"Created task '{title}' assigned to {assignee} with {priority} priority."

@function_tool
def list_team_members() -> str:
    """Get a list of all team members and their roles."""
    return "Alice (Backend), Bob (Frontend), Carol (DevOps), Dave (QA)"

@function_tool
def get_sprint_status() -> str:
    """Get the current sprint's progress and remaining capacity."""
    return "Sprint 23: 15/20 story points completed. 5 points remaining. 3 days left."

project_manager = Agent(
    name="PM Assistant",
    instructions="""You are a project management assistant. Help users manage tasks,
check sprint status, and coordinate with team members.

When creating tasks:
- Always check the team roster first to validate assignees
- Check sprint capacity before adding new tasks
- Suggest appropriate priority levels based on context""",
    tools=[add_task, list_team_members, get_sprint_status],
)

async def main():
    result = await Runner.run(
        project_manager,
        "We need to fix the login bug urgently. Who on the team could handle it?",
    )
    print(result.final_output)

asyncio.run(main())

In this example, the agent will likely:

  1. Call list_team_members() to see who is available
  2. Call get_sprint_status() to check capacity
  3. Reason about who should handle a login bug (Backend or QA)
  4. Possibly call add_task() to create the task
  5. Provide a recommendation to the user

Best Practices

  1. Write clear docstrings. The LLM uses the tool description to decide when and how to call it. Vague descriptions lead to misuse.

  2. Use precise type hints. str is less helpful than a Pydantic model with field constraints. The more precise the schema, the more accurate the tool calls.

  3. Return strings, not objects. Tool return values are converted to strings and injected into the conversation. Return human-readable text that the LLM can reason about.

  4. Set timeouts on I/O tools. Any tool that calls an external service should have a timeout.

  5. Validate inputs inside tools. Even though the LLM sees the schema, it can still produce invalid arguments. Validate and return clear error messages.

  6. Keep tools stateless when possible. Stateless tools are easier to test, retry, and parallelize.
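A sketch of practice 5, with the @function_tool decorator omitted so the snippet stays self-contained: validate inside the tool and return a readable error string the LLM can act on, rather than raising.

```python
VALID_PRIORITIES = {"low", "medium", "high", "critical"}

def add_task(title: str, assignee: str, priority: str = "medium") -> str:
    """Add a task, returning a clear error message on invalid input."""
    if not title.strip():
        return "Error: title must not be empty."
    if priority not in VALID_PRIORITIES:
        return (f"Error: priority must be one of {sorted(VALID_PRIORITIES)}, "
                f"got '{priority}'. Please retry with a valid priority.")
    return f"Created task '{title}' assigned to {assignee} with {priority} priority."

print(add_task("Fix login bug", "Alice", priority="urgent"))
```

A clear, actionable error message gives the model enough information to correct its arguments on the next tool call.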


Source: OpenAI Agents SDK — Tools
