
MCP Prompts: Dynamic Agent Instructions from External Sources

Use MCP prompt resources to dynamically load and parameterize agent instructions from external servers, enabling centralized prompt management with list_prompts and get_prompt.

The Problem with Hardcoded Instructions

Most agent tutorials define instructions as static strings inside Python code:

agent = Agent(
    name="Support Agent",
    instructions="You are a customer support agent for Acme Corp...",
)

This works for demos, but it breaks down in production for several reasons:

  • Different teams own different prompts. The product team writes the tone and policy guidelines. The engineering team deploys the agent. Hardcoding instructions forces both teams to coordinate on every prompt change.
  • Prompts change faster than code. A/B tests, seasonal promotions, compliance updates — instructions need to change without redeploying the agent service.
  • Multi-tenant agents need different instructions per client. A white-label SaaS product might serve dozens of customers, each with different policies and brand voices.

MCP Prompts solve this by making instructions a first-class resource that agents can fetch from external servers at runtime.

What Are MCP Prompts?

The MCP specification defines three server primitives: tools, resources, and prompts. Tools let agents take actions and resources provide data; prompts provide parameterized instruction templates that agents can retrieve on demand.

flowchart LR
    HOST(["MCP host<br/>Claude Desktop or IDE"])
    CLIENT["MCP client"]
    subgraph SERVERS["MCP Servers"]
        S1["Filesystem server"]
        S2["GitHub server"]
        S3["Postgres server"]
        SX["Prompt server"]
    end
    LLM["LLM session"]
    OUT(["Grounded action"])
    HOST <--> CLIENT
    CLIENT <-->|stdio or HTTP+SSE| S1
    CLIENT <--> S2
    CLIENT <--> S3
    CLIENT <-->|list_prompts / get_prompt| SX
    CLIENT --> LLM --> OUT
    style HOST fill:#f1f5f9,stroke:#64748b,color:#0f172a
    style CLIENT fill:#4f46e5,stroke:#4338ca,color:#fff
    style OUT fill:#059669,stroke:#047857,color:#fff

An MCP server can expose a list of named prompts, each with optional parameters. The agent calls list_prompts() to discover available prompts and get_prompt() to fetch a specific one with parameter values filled in.


This is different from just making an HTTP request to fetch a string. MCP prompts have a schema, support parameters with descriptions and required flags, and return structured message arrays — not just raw text.
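To make that concrete, here is an illustrative sketch of the shape a prompts/get result takes. The field names follow the MCP specification's GetPromptResult; the values are hypothetical:

```python
# Illustrative shape of a prompts/get result (values are hypothetical;
# field names follow the MCP specification's GetPromptResult).
get_prompt_result = {
    "description": "Instructions for a customer support agent",
    "messages": [
        {
            "role": "user",
            "content": {
                "type": "text",
                "text": "You are a customer support agent for Acme Corp. Your tone should be friendly.",
            },
        }
    ],
}

# The agent consumes the text content of each message in order:
instruction_text = " ".join(
    m["content"]["text"]
    for m in get_prompt_result["messages"]
    if m["content"]["type"] == "text"
)
print(instruction_text)
```

Because the result is a message array rather than one string, a server can return multi-message prompts (for example, a system-style preamble plus a few-shot example) without the client needing any extra parsing conventions.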

Defining Prompts on the Server

Here is how to create an MCP server that exposes prompts:

# prompt_server.py
from mcp.server import Server
from mcp.types import (
    GetPromptResult,
    Prompt,
    PromptArgument,
    PromptMessage,
    TextContent,
)

server = Server("prompt-server")

PROMPTS = {
    "customer-support": {
        "description": "Instructions for a customer support agent",
        "arguments": [
            PromptArgument(
                name="company_name",
                description="The company the agent represents",
                required=True,
            ),
            PromptArgument(
                name="tone",
                description="Communication tone: friendly, professional, or casual",
                required=False,
            ),
            PromptArgument(
                name="language",
                description="Response language (default: English)",
                required=False,
            ),
        ],
        "template": (
            "You are a customer support agent for {company_name}. "
            "Your tone should be {tone}. Respond in {language}. "
            "Always verify the customer identity before discussing account details. "
            "Never share internal pricing or discount structures. "
            "If you cannot resolve an issue, escalate to a human agent."
        ),
        "defaults": {
            "tone": "professional",
            "language": "English",
        },
    },
    "sales-outreach": {
        "description": "Instructions for a sales outreach agent",
        "arguments": [
            PromptArgument(
                name="product_name",
                description="The product being sold",
                required=True,
            ),
            PromptArgument(
                name="target_industry",
                description="Industry vertical to target",
                required=True,
            ),
        ],
        "template": (
            "You are a sales development representative for {product_name}. "
            "You are targeting companies in the {target_industry} industry. "
            "Lead with value propositions relevant to their industry pain points. "
            "Ask qualifying questions before pitching features. "
            "Always aim to book a discovery call as the next step."
        ),
        "defaults": {},
    },
}

@server.list_prompts()
async def list_prompts() -> list[Prompt]:
    return [
        Prompt(
            name=name,
            description=data["description"],
            arguments=data["arguments"],
        )
        for name, data in PROMPTS.items()
    ]

@server.get_prompt()
async def get_prompt(name: str, arguments: dict | None = None) -> GetPromptResult:
    if name not in PROMPTS:
        raise ValueError(f"Unknown prompt: {name}")

    prompt_def = PROMPTS[name]
    args = {**prompt_def["defaults"], **(arguments or {})}

    # Validate required arguments before formatting
    for arg_def in prompt_def["arguments"]:
        if arg_def.required and arg_def.name not in args:
            raise ValueError(f"Missing required argument: {arg_def.name}")

    text = prompt_def["template"].format(**args)
    # The low-level server expects a GetPromptResult, not a bare message list
    return GetPromptResult(
        description=prompt_def["description"],
        messages=[
            PromptMessage(
                role="user",
                content=TextContent(type="text", text=text),
            )
        ],
    )
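The merge-then-validate pattern in get_prompt is worth seeing in isolation: caller-supplied arguments override defaults, and the explicit validation loop exists because str.format would otherwise surface a missing required key as a raw KeyError. A standalone sketch:

```python
# Standalone sketch of the merge-then-validate pattern used in get_prompt:
# caller-supplied arguments override defaults, then required keys are checked.
defaults = {"tone": "professional", "language": "English"}
supplied = {"company_name": "Acme Corp", "tone": "friendly"}

args = {**defaults, **supplied}
assert args["tone"] == "friendly"      # the caller-supplied value wins
assert args["language"] == "English"   # the default fills the unsupplied slot

template = "You are a customer support agent for {company_name}. Your tone should be {tone}."
print(template.format(**args))

# Without explicit validation, a missing required key surfaces as a raw
# KeyError from str.format; the explicit loop gives a clearer error message.
try:
    template.format(tone="friendly")
except KeyError as exc:
    print(f"missing: {exc}")  # missing: 'company_name'
```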

Fetching Prompts from the Agent Side

On the agent side, you connect to the prompt server and fetch instructions dynamically. The MCP client session exposes list_prompts() and get_prompt():

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def load_instructions(
    company: str, tone: str = "professional"
) -> str:
    server_params = StdioServerParameters(
        command="python", args=["prompt_server.py"]
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover available prompts
            listed = await session.list_prompts()
            for p in listed.prompts:
                print(f"Available: {p.name} - {p.description}")

            # Fetch a specific prompt with parameter values filled in
            result = await session.get_prompt(
                "customer-support",
                arguments={
                    "company_name": company,
                    "tone": tone,
                },
            )
            return result.messages[0].content.text

Using Dynamic Prompts with the Agents SDK

The real power comes from wiring MCP prompts into the OpenAI Agents SDK. You can use the instructions parameter as a callable that fetches prompts at runtime:

from agents import Agent, Runner
from agents.mcp import MCPServerStdio

prompt_server = MCPServerStdio(
    name="Prompts",
    params={"command": "python", "args": ["prompt_server.py"]},
)

tools_server = MCPServerStdio(
    name="Tools",
    params={"command": "python", "args": ["tools_server.py"]},
    cache_tools_list=True,
)

async def dynamic_instructions(run_context, agent):
    """Fetch instructions from the prompt server at runtime.

    run_context is the SDK's RunContextWrapper; its .context attribute
    holds whatever object you passed to Runner.run(..., context=...).
    Here we assume a dict carrying a connected MCP client session.
    """
    client = run_context.context.get("prompt_client")
    if client:
        result = await client.get_prompt(
            "customer-support",
            arguments={
                "company_name": run_context.context.get("company", "Acme"),
                "tone": run_context.context.get("tone", "professional"),
            },
        )
        return result.messages[0].content.text
    return "You are a helpful assistant."

agent = Agent(
    name="Support Agent",
    instructions=dynamic_instructions,
    mcp_servers=[tools_server],
)
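Because dynamic_instructions is just an async function, you can unit-test its fallback path without a live server or model call. A sketch using a minimal stand-in for the SDK's run context wrapper (StubRunContext is hypothetical):

```python
import asyncio

class StubRunContext:
    """Minimal hypothetical stand-in for the SDK's RunContextWrapper."""
    def __init__(self, context: dict):
        self.context = context

async def dynamic_instructions(run_context, agent):
    # Same fallback logic as above: no prompt client in the context
    # means we return a safe default instruction string.
    client = run_context.context.get("prompt_client")
    if client is None:
        return "You are a helpful assistant."
    result = await client.get_prompt("customer-support", arguments={})
    return result.messages[0].content.text

fallback = asyncio.run(dynamic_instructions(StubRunContext({}), agent=None))
print(fallback)  # You are a helpful assistant.
```

Keeping a hardcoded fallback means a prompt-server outage degrades the agent to generic behavior instead of failing the run outright.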

Storing Prompts in a Database

For production use, you will want prompts stored in a database rather than hardcoded in the server. This enables version control, A/B testing, and editing by non-engineers:

import json

import asyncpg
from mcp.server import Server
from mcp.types import (
    GetPromptResult,
    Prompt,
    PromptArgument,
    PromptMessage,
    TextContent,
)

server = Server("db-prompt-server")
db_pool = None

async def get_pool():
    global db_pool
    if db_pool is None:
        db_pool = await asyncpg.create_pool(
            "postgresql://localhost/prompts_db"
        )
    return db_pool

@server.list_prompts()
async def list_prompts() -> list[Prompt]:
    pool = await get_pool()
    rows = await pool.fetch(
        "SELECT name, description, arguments FROM prompts WHERE active = true"
    )
    return [
        Prompt(
            name=row["name"],
            description=row["description"],
            # asyncpg returns jsonb columns as text unless a codec is
            # registered, so decode before building the argument models
            arguments=[
                PromptArgument(**arg) for arg in json.loads(row["arguments"])
            ],
        )
        for row in rows
    ]

@server.get_prompt()
async def get_prompt(name: str, arguments: dict | None = None) -> GetPromptResult:
    pool = await get_pool()
    row = await pool.fetchrow(
        "SELECT description, template, defaults FROM prompts "
        "WHERE name = $1 AND active = true",
        name,
    )
    if not row:
        raise ValueError(f"Prompt not found: {name}")

    args = {**json.loads(row["defaults"]), **(arguments or {})}
    text = row["template"].format(**args)

    # Log prompt usage for analytics (encode the dict for the jsonb column)
    await pool.execute(
        "INSERT INTO prompt_usage_log (prompt_name, arguments) VALUES ($1, $2)",
        name,
        json.dumps(arguments or {}),
    )

    return GetPromptResult(
        description=row["description"],
        messages=[
            PromptMessage(
                role="user",
                content=TextContent(type="text", text=text),
            )
        ],
    )
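The queries above assume two tables. A hypothetical schema sketch with column names matching those queries (types and constraints are suggestions, not part of any spec):

```python
# Hypothetical schema for the tables queried above; column names match
# the server's SQL, the types and constraints are one reasonable choice.
PROMPTS_SCHEMA = """
CREATE TABLE prompts (
    name        text NOT NULL,
    version     text NOT NULL DEFAULT 'latest',
    description text NOT NULL DEFAULT '',
    template    text NOT NULL,
    arguments   jsonb NOT NULL DEFAULT '[]',
    defaults    jsonb NOT NULL DEFAULT '{}',
    active      boolean NOT NULL DEFAULT true,
    PRIMARY KEY (name, version)
);

CREATE TABLE prompt_usage_log (
    id          bigserial PRIMARY KEY,
    prompt_name text NOT NULL,
    arguments   jsonb,
    used_at     timestamptz NOT NULL DEFAULT now()
);
"""
print(PROMPTS_SCHEMA)
```

Keying the primary key on (name, version) is what makes the versioned-prompt pattern below possible without duplicating prompt names.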

Parameterized Template Patterns

Beyond simple string substitution, you can build sophisticated template patterns:

Conditional sections — Include blocks based on parameter presence:

def build_prompt(template_parts: list[dict], args: dict) -> str:
    sections = []
    for part in template_parts:
        condition = part.get("condition")
        if condition and condition not in args:
            continue
        text = part["text"].format(**args)
        sections.append(text)
    return " ".join(sections)
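A quick self-contained check of the conditional behavior (the helper is repeated here so the snippet runs on its own; the parts and arguments are made up):

```python
def build_prompt(template_parts: list[dict], args: dict) -> str:
    # Same logic as above: skip any part whose condition key is absent.
    sections = []
    for part in template_parts:
        condition = part.get("condition")
        if condition and condition not in args:
            continue
        sections.append(part["text"].format(**args))
    return " ".join(sections)

parts = [
    {"text": "You support customers of {company}."},
    {"condition": "vip_tier", "text": "This customer is {vip_tier} tier; prioritize them."},
]

print(build_prompt(parts, {"company": "Acme"}))
# You support customers of Acme.
print(build_prompt(parts, {"company": "Acme", "vip_tier": "gold"}))
# You support customers of Acme. This customer is gold tier; prioritize them.
```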

Versioned prompts — Serve different prompt versions for A/B testing:

@server.get_prompt()
async def get_prompt(name: str, arguments: dict | None = None):
    args = dict(arguments or {})  # copy so the caller's dict is not mutated
    version = args.pop("version", "latest")
    pool = await get_pool()
    row = await pool.fetchrow(
        "SELECT template FROM prompts WHERE name = $1 AND version = $2",
        name,
        version,
    )
    # ... format with the remaining args and return
When to Use MCP Prompts vs Static Instructions

Use static instructions when:

  • The agent has a single, stable purpose
  • Only engineers modify the instructions
  • The application is single-tenant

Use MCP Prompts when:

  • Non-engineers need to update agent behavior
  • You serve multiple tenants with different requirements
  • Instructions change frequently without code deploys
  • You want centralized prompt management across multiple agents
  • You need audit logging of prompt versions and usage

MCP Prompts turn agent instructions from a code artifact into a managed resource. They give product teams direct control over agent behavior while keeping the engineering team focused on capabilities and infrastructure.
