
Hosted MCP Tools: Server-Side Tool Execution with OpenAI

Use HostedMCPTool to run MCP tools server-side on OpenAI's infrastructure with zero callback overhead. This guide covers the tool_config structure, approval options, and GitMCP for repository access.

What Are Hosted MCP Tools?

With MCPServerStdio and MCPServerStreamableHTTP, tool execution happens on your infrastructure — your machine runs the subprocess or your server handles the HTTP request. Hosted MCP tools flip this: OpenAI runs the MCP server and executes tools on their side.

This eliminates the callback overhead entirely. When the LLM decides to use a tool, the tool is executed within OpenAI's infrastructure in the same request cycle. No round trip back to your client, no webhook to handle, no server to maintain.

Hosted MCP is ideal when:

  • You want to use third-party MCP servers without running them yourself
  • Latency matters and you want to eliminate client-server round trips
  • You are accessing public data sources like open source repositories
  • You want the simplest possible agent setup

HostedMCPTool Configuration

The HostedMCPTool class configures a tool that OpenAI executes server-side:

(Diagram: the general MCP architecture, in which a host application's MCP client connects over stdio or HTTP to filesystem, GitHub, Postgres, and custom tool servers that ground an LLM session. With hosted MCP, OpenAI takes over the client role and connects to the server directly.)

from agents import Agent, Runner
from agents.tool import HostedMCPTool

# Basic hosted MCP tool
tool = HostedMCPTool(
    tool_config={
        "type": "mcp",
        "server_label": "deepwiki",
        "server_url": "https://mcp.deepwiki.com/mcp",
        "require_approval": "never",
    }
)

agent = Agent(
    name="Research Agent",
    instructions="""You are a research assistant that can look up
    documentation and technical information using DeepWiki.
    Use the available tools to find accurate, up-to-date information.""",
    tools=[tool],
)

The tool_config Structure

Every HostedMCPTool requires a tool_config dictionary with these fields:

tool_config = {
    # Required: always "mcp" for MCP tools
    "type": "mcp",

    # Required: a unique label identifying this server
    # Used in tool filtering and approval policies
    "server_label": "my-server",

    # Required: the URL of the MCP server
    # Must be a publicly accessible HTTPS endpoint
    "server_url": "https://mcp.example.com/mcp",

    # Required: approval policy for tool execution
    # Options: "never", "always", or a per-tool map
    "require_approval": "never",

    # Optional: specific headers for authentication
    "headers": {
        "Authorization": "Bearer token123",
    },
}

server_label

The server_label is a human-readable identifier for the MCP server. It appears in traces, logs, and tool filtering contexts. Choose descriptive labels:

# Good labels
"server_label": "github-repos"
"server_label": "company-wiki"
"server_label": "weather-api"

# Bad labels
"server_label": "server1"
"server_label": "tools"

server_url

The server_url must point to a publicly accessible MCP endpoint. OpenAI's servers will connect to this URL to execute tools, so it cannot be a localhost or private network address.
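Since a bad URL only surfaces as a failure at request time, it can help to sanity-check the value up front. Here is a minimal sketch; is_valid_hosted_mcp_url is a hypothetical helper, not part of the SDK, and it only catches the obvious cases (non-HTTPS schemes, localhost, and literal private IPs):

```python
from urllib.parse import urlparse
import ipaddress

def is_valid_hosted_mcp_url(url: str) -> bool:
    """Return True if the URL looks reachable from OpenAI's side:
    HTTPS, with a hostname that is not localhost or a private IP."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    host = parsed.hostname
    if host in ("localhost", "127.0.0.1", "::1"):
        return False
    try:
        # Reject literal private/loopback IPs; hostnames fall through.
        if ipaddress.ip_address(host).is_private:
            return False
    except ValueError:
        pass  # not an IP literal, so a public hostname is fine
    return True

print(is_valid_hosted_mcp_url("https://mcp.deepwiki.com/mcp"))  # True
print(is_valid_hosted_mcp_url("http://localhost:8000/mcp"))     # False
```

A hostname that resolves to a private address would still pass this check, so treat it as a first-line guard rather than a guarantee.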

require_approval Options

The require_approval field controls whether tool executions need human approval:

# Never require approval — tools execute automatically
"require_approval": "never"

# Always require approval — every tool call needs human confirmation
"require_approval": "always"

# Per-tool approval map
"require_approval": {
    "never": {
        "tool_names": ["search", "read_file", "list_directory"]
    },
    "always": {
        "tool_names": ["write_file", "delete_file", "execute_command"]
    }
}

The per-tool map is the most practical option for production. Read operations run automatically while write operations require approval.
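If you maintain the read and write tool lists separately, the per-tool map can be generated rather than hand-written. A small sketch, where build_approval_policy is a hypothetical helper (the returned dict matches the require_approval shape shown above):

```python
def build_approval_policy(read_tools: list[str], write_tools: list[str]) -> dict:
    """Build a per-tool require_approval map: reads run automatically,
    writes need human confirmation."""
    return {
        "never": {"tool_names": sorted(read_tools)},
        "always": {"tool_names": sorted(write_tools)},
    }

policy = build_approval_policy(
    read_tools=["search", "read_file", "list_directory"],
    write_tools=["write_file", "delete_file", "execute_command"],
)
```

The resulting dict can be dropped straight into tool_config as the require_approval value.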

No Callback Overhead

The key advantage of hosted MCP is the execution flow. Compare the two approaches:

Client-side MCP (Stdio/HTTP):

  1. LLM generates tool call
  2. Response sent to your client
  3. Your client calls the MCP server
  4. MCP server executes the tool
  5. Your client sends the result back to OpenAI
  6. LLM continues with the result

Hosted MCP:


  1. LLM generates tool call
  2. OpenAI calls the MCP server directly
  3. LLM continues with the result

Hosted MCP eliminates steps 2, 3, and 5. For agents that make multiple tool calls in sequence, this can reduce total latency significantly.
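The savings can be put into rough numbers with a back-of-the-envelope model. All figures here are illustrative assumptions, not measurements; the point is that client-side MCP pays two extra network round trips per tool call:

```python
def estimated_latency_ms(n_tool_calls: int, llm_ms: float = 800,
                         tool_ms: float = 200, round_trip_ms: float = 150,
                         hosted: bool = False) -> float:
    """Rough latency model. Client-side execution adds two network round
    trips per tool call (OpenAI -> client, client -> OpenAI); hosted
    execution does not. A final LLM turn produces the answer."""
    per_call = llm_ms + tool_ms + (0 if hosted else 2 * round_trip_ms)
    return n_tool_calls * per_call + llm_ms

# Five sequential tool calls:
client_side = estimated_latency_ms(5)          # 5 * 1300 + 800 = 7300.0
hosted = estimated_latency_ms(5, hosted=True)  # 5 * 1000 + 800 = 5800.0
```

Under these assumptions, five sequential tool calls save about 1.5 seconds; the gap grows linearly with the number of calls.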

Using GitMCP for Repository Access

GitMCP is a hosted MCP service that provides access to any public GitHub repository. It exposes tools for reading files, searching code, and browsing repository structure:

from agents import Agent, Runner
from agents.tool import HostedMCPTool
import asyncio

async def main():
    # Access any public GitHub repo via GitMCP
    gitmcp_tool = HostedMCPTool(
        tool_config={
            "type": "mcp",
            "server_label": "gitmcp",
            "server_url": "https://gitmcp.io/openai/openai-agents-python",
            "require_approval": "never",
        }
    )

    agent = Agent(
        name="Code Research Agent",
        instructions="""You are a code research assistant.
        Use the GitMCP tools to explore repository contents,
        read source files, and understand codebases.
        When asked about a project, start by reading the README
        and then explore relevant source files.""",
        tools=[gitmcp_tool],
    )

    result = await Runner.run(
        agent,
        input="What are the main classes in the OpenAI Agents SDK? Show me the Agent class definition.",
    )
    print(result.final_output)

asyncio.run(main())

The server URL pattern for GitMCP is https://gitmcp.io/{owner}/{repo}. This works with any public repository.

Combining Hosted and Local MCP

You can mix hosted MCP tools with local MCP servers and native function tools:

import asyncio

from agents import Agent, Runner, function_tool
from agents.tool import HostedMCPTool
from agents.mcp import MCPServerStdio

@function_tool
def calculate_cost(hours: float, rate: float) -> str:
    """Calculate project cost from hours and hourly rate."""
    return f"Estimated cost: ${hours * rate:,.2f}"

async def main():
    # Hosted tool — runs on OpenAI's side
    wiki_tool = HostedMCPTool(
        tool_config={
            "type": "mcp",
            "server_label": "deepwiki",
            "server_url": "https://mcp.deepwiki.com/mcp",
            "require_approval": "never",
        }
    )

    # Local tool — runs as subprocess on your machine
    fs_server = MCPServerStdio(
        name="Filesystem",
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
        },
    )

    async with fs_server:
        agent = Agent(
            name="Hybrid Agent",
            instructions="""You have three types of tools:
            1. DeepWiki for researching technical documentation
            2. Filesystem for reading and writing local files
            3. A cost calculator for project estimation
            Use the appropriate tool for each task.""",
            tools=[wiki_tool, calculate_cost],
            mcp_servers=[fs_server],
        )

        result = await Runner.run(
            agent,
            input="Research FastAPI best practices from the docs, save a summary to /workspace/fastapi-notes.md, and estimate the cost for 40 hours at $150/hr",
        )
        print(result.final_output)

asyncio.run(main())

Multiple Hosted MCP Servers

An agent can use multiple hosted MCP servers simultaneously. Each gets its own server_label for identification:

deepwiki = HostedMCPTool(
    tool_config={
        "type": "mcp",
        "server_label": "deepwiki",
        "server_url": "https://mcp.deepwiki.com/mcp",
        "require_approval": "never",
    }
)

gitmcp_agents = HostedMCPTool(
    tool_config={
        "type": "mcp",
        "server_label": "agents-sdk-repo",
        "server_url": "https://gitmcp.io/openai/openai-agents-python",
        "require_approval": "never",
    }
)

gitmcp_cookbook = HostedMCPTool(
    tool_config={
        "type": "mcp",
        "server_label": "openai-cookbook",
        "server_url": "https://gitmcp.io/openai/openai-cookbook",
        "require_approval": "never",
    }
)

research_agent = Agent(
    name="Multi-Source Researcher",
    instructions="""You can search documentation via DeepWiki and
    explore two GitHub repositories: the Agents SDK and the OpenAI Cookbook.
    Cross-reference information across sources for comprehensive answers.""",
    tools=[deepwiki, gitmcp_agents, gitmcp_cookbook],
)
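Because labels identify servers in traces and approval policies, accidental reuse makes both ambiguous, so it can be worth guarding against duplicates when configs are assembled dynamically. A sketch; check_unique_labels is a hypothetical helper, not part of the SDK:

```python
def check_unique_labels(tool_configs: list[dict]) -> list[str]:
    """Return the server_labels in order, raising if any label is reused."""
    labels = [cfg["server_label"] for cfg in tool_configs]
    seen = set()
    for label in labels:
        if label in seen:
            raise ValueError(f"Duplicate server_label: {label!r}")
        seen.add(label)
    return labels

labels = check_unique_labels([
    {"type": "mcp", "server_label": "deepwiki",
     "server_url": "https://mcp.deepwiki.com/mcp"},
    {"type": "mcp", "server_label": "agents-sdk-repo",
     "server_url": "https://gitmcp.io/openai/openai-agents-python"},
    {"type": "mcp", "server_label": "openai-cookbook",
     "server_url": "https://gitmcp.io/openai/openai-cookbook"},
])
```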

Approval Handling

When using require_approval: "always" or a per-tool map, your application must surface each approval request to a human and then resume the run. The exact interruption API varies by SDK version (recent releases of the OpenAI Agents SDK also accept an on_approval_request callback directly on HostedMCPTool), so treat the attribute names below as an illustrative sketch of the loop rather than exact SDK surface:

from agents import Runner

async def run_with_approval(agent, user_input):
    result = await Runner.run(agent, input=user_input)

    # Check if the agent is waiting for tool approval
    while hasattr(result, "pending_approvals") and result.pending_approvals:
        for approval in result.pending_approvals:
            print(f"Tool '{approval.tool_name}' wants to execute:")
            print(f"  Args: {approval.arguments}")
            user_decision = input("Approve? (y/n): ").strip().lower()

            if user_decision == "y":
                approval.approve()
            else:
                approval.deny(reason="User declined")

        # Continue the run after approval decisions; to_input_list()
        # carries the conversation history forward
        result = await Runner.run(
            agent,
            input=result.to_input_list(),
        )

    return result.final_output

When to Choose Hosted MCP

| Factor         | Hosted MCP                     | Client-side MCP                    |
|----------------|--------------------------------|------------------------------------|
| Latency        | Lower (no round trips)         | Higher (client-server round trips) |
| Infrastructure | None (OpenAI manages)          | You run the server                 |
| Private data   | Not suitable                   | Full control                       |
| Public APIs    | Ideal                          | Unnecessary overhead               |
| Cost           | Included in API usage          | Your server costs                  |
| Customization  | Limited to server capabilities | Full control                       |

Use hosted MCP for public data sources and third-party integrations. Use client-side MCP for private data, custom business logic, and scenarios where you need full control over tool execution.
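That guidance can be condensed into a rough rule of thumb. recommend_mcp_mode is a hypothetical helper that simply encodes the table, not SDK functionality:

```python
def recommend_mcp_mode(private_data: bool, custom_logic: bool,
                       public_api: bool) -> str:
    """Pick an MCP execution mode from the deciding factors: private data
    or custom business logic forces client-side; public APIs favor hosted."""
    if private_data or custom_logic:
        return "client-side"
    if public_api:
        return "hosted"
    return "either"

print(recommend_mcp_mode(private_data=False, custom_logic=False, public_api=True))
# hosted
```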
