
OpenAI Function Calling: Letting LLMs Interact with Your Code

Master OpenAI's function calling feature to let language models invoke your Python functions, parse structured arguments, and build tool-augmented AI applications.

What Is Function Calling?

Function calling (also called tool use) lets an LLM decide when to invoke a function you define, generate the correct arguments as structured JSON, and then incorporate the function's result into its response. This bridges the gap between the model's language capabilities and your application's data and actions.

Use cases include fetching real-time data, querying databases, sending emails, creating records, calling external APIs — anything your code can do.

Defining Tools

The overall flow: the user message goes to the model along with the tools schema; if the model requests a tool call, your code executes the tool (ideally in a sandboxed runtime) and appends the tool result to the messages before calling the model again; the final reply either passes output guardrails and is returned, or is refused and logged.

You describe your functions using JSON Schema in the tools parameter:

from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a given city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city name, e.g., 'San Francisco'",
                    },
                    "units": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit",
                    },
                },
                "required": ["city"],
            },
        },
    },
]

The description fields are critical — the model reads them to decide when and how to call the function.


Making a Tool-Augmented Request

Pass the tools array along with your messages:

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful weather assistant."},
        {"role": "user", "content": "What is the weather like in Tokyo?"},
    ],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call a tool
)

message = response.choices[0].message

if message.tool_calls:
    for tool_call in message.tool_calls:
        print(f"Function: {tool_call.function.name}")
        print(f"Arguments: {tool_call.function.arguments}")
        print(f"Call ID: {tool_call.id}")

When the model decides a tool is needed, finish_reason is tool_calls and the message.tool_calls array contains one or more function calls with JSON string arguments.

The Complete Tool Call Loop

Function calling requires a multi-turn conversation. You send the request, execute the function, then send the result back:

import json

def get_weather(city: str, units: str = "celsius") -> dict:
    # In production, call a real weather API
    return {"city": city, "temperature": 22, "units": units, "condition": "partly cloudy"}

# Step 1: Send the user message with tools
messages = [
    {"role": "system", "content": "You are a helpful weather assistant."},
    {"role": "user", "content": "What is the weather in Tokyo and London?"},
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)

assistant_message = response.choices[0].message

# Step 2: Execute each tool call
if assistant_message.tool_calls:
    messages.append(assistant_message)  # add the assistant's tool call message

    for tool_call in assistant_message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        result = get_weather(**args)

        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(result),
        })

    # Step 3: Send results back to the model
    final_response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools,
    )

    print(final_response.choices[0].message.content)

The model sees the tool results and produces a natural language summary for the user.
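In practice, the model may chain several rounds of tool calls rather than stopping after one exchange, so it helps to wrap the pattern above in a reusable loop. A minimal sketch, assuming a `registry` dict mapping function names to Python callables (the helper name `run_tool_loop` is illustrative, not part of the SDK; it works against any client exposing `chat.completions.create`):

```python
import json

def run_tool_loop(client, model, messages, tools, registry, max_rounds=5):
    """Call the model repeatedly until it stops requesting tools.

    registry maps function names (as declared in the tools schema)
    to local Python callables. max_rounds guards against a model
    that keeps requesting tools indefinitely.
    """
    for _ in range(max_rounds):
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tools
        )
        message = response.choices[0].message

        # No tool calls means the model answered in natural language.
        if not message.tool_calls:
            return message.content

        messages.append(message)  # keep the assistant's tool-call turn
        for tool_call in message.tool_calls:
            fn = registry[tool_call.function.name]
            args = json.loads(tool_call.function.arguments)
            try:
                result = fn(**args)
            except Exception as exc:
                # Surface tool failures to the model instead of crashing.
                result = {"error": str(exc)}
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result),
            })
    raise RuntimeError("model kept requesting tools after max_rounds")
```

Capping iterations with max_rounds is a cheap safeguard against a model that loops on tool calls without ever producing a final answer.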

Controlling Tool Choice

The tool_choice parameter controls when tools are used:

# Let the model decide (default)
tool_choice = "auto"

# Force a specific function
tool_choice = {"type": "function", "function": {"name": "get_weather"}}

# Prevent tool use entirely
tool_choice = "none"

# Require the model to call at least one tool
tool_choice = "required"

Multiple Tools in One Application

Real applications expose several tools. The model picks the right one based on context:


tools = [
    {
        "type": "function",
        "function": {
            "name": "search_products",
            "description": "Search the product catalog by keyword.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "max_results": {"type": "integer", "default": 5},
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Check the status of an order by order ID.",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string"},
                },
                "required": ["order_id"],
            },
        },
    },
]

When the user says "Where is my order #12345?", the model calls get_order_status. When they say "Show me wireless headphones", it calls search_products.
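To route each tool call to the right Python function, a common pattern is a dispatch table keyed by the schema's function names. A sketch, with hypothetical local implementations of the two tools above (TOOL_REGISTRY and execute_tool_call are illustrative names, not SDK API):

```python
import json

# Hypothetical local implementations of the two declared tools.
def search_products(query: str, max_results: int = 5) -> list:
    return [{"name": "Wireless Headphones", "price": 79.99}][:max_results]

def get_order_status(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped"}

# One registry, keyed by the names used in the tools schema.
TOOL_REGISTRY = {
    "search_products": search_products,
    "get_order_status": get_order_status,
}

def execute_tool_call(tool_call) -> str:
    """Dispatch a tool call by name; always return a JSON string."""
    fn = TOOL_REGISTRY.get(tool_call.function.name)
    if fn is None:
        # The model asked for a tool you never declared; tell it so.
        return json.dumps({"error": f"Unknown tool: {tool_call.function.name}"})
    args = json.loads(tool_call.function.arguments)
    return json.dumps(fn(**args))
```

Keeping the registry next to the schema makes it easy to verify that every declared tool has an implementation, and vice versa.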

FAQ

Can the model call multiple functions in parallel?

Yes. The model can return multiple entries in the tool_calls array within a single response. You should execute them all and send back all results before making the next API call.
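Because parallel tool calls are independent by design, you can also execute them concurrently. A hedged sketch using a thread pool (execute_parallel is an illustrative helper; registry maps function names to callables):

```python
import json
from concurrent.futures import ThreadPoolExecutor

def execute_parallel(tool_calls, registry):
    """Run independent tool calls concurrently, preserving call-ID pairing."""
    def run(tc):
        args = json.loads(tc.function.arguments)
        return {
            "role": "tool",
            "tool_call_id": tc.id,  # pairs the result with its call
            "content": json.dumps(registry[tc.function.name](**args)),
        }
    with ThreadPoolExecutor() as pool:
        # pool.map preserves input order, so results align with tool_calls.
        return list(pool.map(run, tool_calls))
```

This only helps when the tools are I/O-bound (API calls, database queries); for cheap in-process functions, the thread overhead outweighs the gain.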

What happens if the function returns an error?

Return the error as the tool result content. The model will see the error and can communicate it to the user or try a different approach. For example: {"error": "Order not found"}.
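One way to make this systematic is a small wrapper that catches any exception a tool raises and serializes it as the tool result (safe_tool_result is an illustrative name, not SDK API):

```python
import json

def safe_tool_result(fn, raw_arguments: str) -> str:
    """Call fn with parsed args; return an error payload instead of raising."""
    try:
        args = json.loads(raw_arguments)
        return json.dumps(fn(**args))
    except Exception as exc:
        # The model sees the error text and can react to it.
        return json.dumps({"error": str(exc)})
```

Note this also catches malformed JSON in the arguments themselves, so one code path covers both bad input and tool failure.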

How do I prevent the model from hallucinating function arguments?

Write detailed descriptions for each parameter, use enum for constrained values, and mark fields as required when they must be provided. The more specific your schema, the more reliable the arguments.
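Even with a tight schema, it is prudent to validate the parsed arguments before executing anything, since the model's JSON is not guaranteed to conform. A minimal hand-rolled check (validate_args is illustrative; libraries such as jsonschema or Pydantic do this far more thoroughly):

```python
import json

def validate_args(schema: dict, raw_arguments: str) -> dict:
    """Check required fields and enum values against a JSON Schema fragment."""
    args = json.loads(raw_arguments)
    for field in schema.get("required", []):
        if field not in args:
            raise ValueError(f"Missing required argument: {field}")
    for name, spec in schema.get("properties", {}).items():
        if name in args and "enum" in spec and args[name] not in spec["enum"]:
            raise ValueError(f"Invalid value for {name}: {args[name]!r}")
    return args
```

On failure, you can return the ValueError message as the tool result so the model can retry with corrected arguments.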


#OpenAI #FunctionCalling #Tools #Python #AIAgents #AgenticAI #LearnAI #AIEngineering
