Learn Agentic AI · 14 min read

MCP Protocol Deep Dive: Understanding the JSON-RPC Foundation

Explore the Model Context Protocol specification from the ground up — JSON-RPC 2.0 message format, request/response lifecycle, transport layers, and how every MCP message is structured under the hood.

Why Protocol Internals Matter

When you connect an AI agent to an MCP server and call a tool, several layers of messaging happen beneath the surface. Understanding these layers is the difference between debugging MCP integrations in minutes versus hours. The Model Context Protocol is built on JSON-RPC 2.0 — a stateless, lightweight remote procedure call protocol that uses JSON as its data format.

Every interaction between an MCP client (the agent runtime) and an MCP server (the tool provider) follows a strict message contract. If you have ever worked with the Language Server Protocol (LSP) in code editors, the architecture will feel familiar. MCP borrows the same JSON-RPC foundation and adapts it for AI tool calling.

JSON-RPC 2.0 Message Format

JSON-RPC 2.0 defines three message types: requests, responses, and notifications. MCP uses all three.

flowchart LR
    HOST(["MCP host<br/>Claude Desktop or IDE"])
    CLIENT["MCP client"]
    subgraph SERVERS["MCP Servers"]
        S1["Filesystem server"]
        S2["GitHub server"]
        S3["Postgres server"]
        SX["Custom tool server"]
    end
    LLM["LLM session"]
    OUT(["Grounded action"])
    HOST <--> CLIENT
    CLIENT <-->|stdio or HTTP+SSE| S1
    CLIENT <--> S2
    CLIENT <--> S3
    CLIENT <--> SX
    CLIENT --> LLM --> OUT
    style HOST fill:#f1f5f9,stroke:#64748b,color:#0f172a
    style CLIENT fill:#4f46e5,stroke:#4338ca,color:#fff
    style OUT fill:#059669,stroke:#047857,color:#fff

A request is a message from client to server (or server to client) that expects a response:

# MCP request message structure
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {
            "sql": "SELECT * FROM users LIMIT 10"
        }
    }
}

The id field is critical — it correlates the request with its response. The method field identifies the MCP operation. The params field carries the operation-specific payload.

A response mirrors the request by its id:

# Successful response
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {
                "type": "text",
                "text": "Found 10 users in the database."
            }
        ]
    }
}

# Error response
error_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "error": {
        "code": -32602,
        "message": "Invalid params: unknown tool 'query_database'"
    }
}

A notification is a message that does not expect a response. It has no id field:

# Notification — no id, no response expected
notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
    "params": {}
}

MCP uses notifications for events like progress updates, log messages, and lifecycle signals.
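As a concrete sketch, here is what a progress notification might look like mid-way through a long-running tool call (the token value is made up for illustration; the progressToken echoes whatever token the client attached to the original request):

```python
# A progress notification a server might emit during a long-running tool call.
# Like all notifications, it carries no "id" and expects no response.
progress_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/progress",
    "params": {
        "progressToken": "abc-123",  # echoes the client's token (illustrative value)
        "progress": 50,
        "total": 100,
    },
}

assert "id" not in progress_notification
```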

The MCP Request/Response Lifecycle

Every MCP connection begins with a handshake. The client sends an initialize request, the server responds with its capabilities, and the client confirms with an initialized notification:

import json

# Step 1: Client sends initialize request
initialize_request = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {
            "roots": {"listChanged": True}
        },
        "clientInfo": {
            "name": "my-agent",
            "version": "1.0.0"
        }
    }
}

# Step 2: Server responds with its capabilities
initialize_response = {
    "jsonrpc": "2.0",
    "id": 0,
    "result": {
        "protocolVersion": "2025-03-26",
        "capabilities": {
            "tools": {"listChanged": True},
            "resources": {"subscribe": True},
            "prompts": {"listChanged": True}
        },
        "serverInfo": {
            "name": "database-server",
            "version": "2.1.0"
        }
    }
}

# Step 3: Client sends initialized notification
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
    "params": {}
}

After this handshake, the client can call tools/list to discover available tools, resources/list to enumerate data sources, and tools/call to execute a specific tool. Each call follows the same request-response pattern.
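A discovery exchange might look like the following sketch. The tool name reuses the query_database example from earlier; the response shape (name, description, and a JSON Schema under inputSchema) follows the MCP tool listing format:

```python
# Client asks the server to enumerate its tools
tools_list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {}
}

# Server describes each tool, including a JSON Schema for its arguments —
# this schema is what the LLM uses to construct valid tool calls
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_database",
                "description": "Run a read-only SQL query.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
}
```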

Transport Layers

MCP supports multiple transport mechanisms, each suited to different deployment patterns.

stdio transport communicates over standard input and output. The client spawns the server as a subprocess and writes JSON-RPC messages to its stdin, reading responses from stdout:

import subprocess
import json

# Spawn MCP server as a subprocess
process = subprocess.Popen(
    ["python", "my_mcp_server.py"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)

def send_message(msg: dict) -> dict:
    """Send a JSON-RPC message and read the response."""
    line = json.dumps(msg) + "\n"
    process.stdin.write(line)
    process.stdin.flush()
    response_line = process.stdout.readline()
    return json.loads(response_line)

# Reuses initialize_request from the handshake example above
result = send_message(initialize_request)
print(result)

Streamable HTTP transport uses HTTP POST for client-to-server messages and Server-Sent Events (SSE) for server-to-client streaming. This is the transport you use for remote MCP servers accessible over the network.
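A minimal sketch of the client-side framing, assuming the standard headers (the spec has the client accept both a direct JSON response and an SSE stream) — the helper names are hypothetical and this is not a full HTTP client:

```python
import json

def encode_post(msg: dict) -> tuple[dict, bytes]:
    """Build headers and body for a Streamable HTTP POST (sketch)."""
    headers = {
        "Content-Type": "application/json",
        # The server may answer with plain JSON or open an SSE stream
        "Accept": "application/json, text/event-stream",
    }
    return headers, json.dumps(msg).encode("utf-8")

def parse_sse_data_lines(stream_text: str) -> list[dict]:
    """Extract JSON-RPC messages from the 'data:' lines of an SSE stream."""
    messages = []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            messages.append(json.loads(line[len("data:"):].strip()))
    return messages

# Example: a server streams a progress notification, then the final response
sse_body = (
    'data: {"jsonrpc": "2.0", "method": "notifications/progress", "params": {"progress": 1}}\n\n'
    'data: {"jsonrpc": "2.0", "id": 1, "result": {"content": []}}\n\n'
)
messages = parse_sse_data_lines(sse_body)
```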

Custom transports are also possible. Any bidirectional byte stream that can carry newline-delimited JSON works as an MCP transport.
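The framing requirement is small enough to demonstrate without a real transport. This sketch shows newline-delimited JSON over any object with write/readline — an in-memory buffer stands in for a socket or pipe (ping is a real MCP method; the helper names are made up):

```python
import io
import json

def write_message(stream, msg: dict) -> None:
    """Newline-delimited JSON framing: exactly one message per line."""
    stream.write(json.dumps(msg) + "\n")

def read_message(stream) -> dict:
    """Read one framed message back off the stream."""
    return json.loads(stream.readline())

# Any bidirectional byte stream works — here StringIO stands in for one
buf = io.StringIO()
write_message(buf, {"jsonrpc": "2.0", "id": 7, "method": "ping"})
buf.seek(0)
msg = read_message(buf)
```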

Standard Error Codes

MCP inherits the standard JSON-RPC 2.0 error codes; servers can also define application-specific codes outside the reserved -32768 to -32000 range:

# Standard JSON-RPC error codes
PARSE_ERROR = -32700       # Invalid JSON
INVALID_REQUEST = -32600   # Not a valid request object
METHOD_NOT_FOUND = -32601  # Method does not exist
INVALID_PARAMS = -32602    # Invalid method parameters
INTERNAL_ERROR = -32603    # Internal server error

When building MCP clients, always handle these error codes gracefully. A robust client retries transient errors and surfaces permanent errors to the agent with enough context for the LLM to adjust its approach.
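A sketch of that policy might look like the following. Which codes count as transient is a judgment call (here only internal errors are retried), and classify_error is a hypothetical helper, not part of any MCP SDK:

```python
# Internal errors may succeed on retry; the rest indicate a bad request
# that retrying will not fix (assumption — tune for your servers)
TRANSIENT_CODES = {-32603}
PERMANENT_CODES = {-32700, -32600, -32601, -32602}

def classify_error(response: dict) -> str:
    """Decide how a client should react to a JSON-RPC error response."""
    code = response.get("error", {}).get("code")
    if code in TRANSIENT_CODES:
        return "retry"
    if code in PERMANENT_CODES:
        return "surface_to_llm"  # let the model adjust its tool call
    return "unknown"

decision = classify_error({
    "jsonrpc": "2.0",
    "id": 1,
    "error": {"code": -32602, "message": "Invalid params"},
})
```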

FAQ

How does MCP differ from a plain REST API?

MCP is a bidirectional protocol built on JSON-RPC 2.0, meaning both client and server can send messages. Unlike REST, MCP includes a capability negotiation handshake, standardized tool schemas, and supports streaming via notifications. REST is request-response only and lacks the built-in discovery and schema mechanisms that MCP provides.

Can I mix different transport types in one agent?

Yes. An agent can connect to one MCP server over stdio (a local tool) and another over Streamable HTTP (a remote service). The MCP client handles transport abstraction internally — your agent code sees the same tool interface regardless of how the underlying messages are carried.

What happens if the MCP server crashes mid-request?

The client will receive an I/O error (broken pipe for stdio, connection reset for HTTP). Well-designed MCP clients implement timeouts and reconnection logic. The JSON-RPC id field ensures that even after reconnection, stale responses from a previous session are ignored because the ids will not match pending requests.
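The correlation logic behind that last point fits in a few lines. This is an illustrative sketch, not SDK code — a client keeps a map of pending request ids and simply drops any response whose id is not in it:

```python
# Map of in-flight request ids to their method names (illustrative)
pending: dict[int, str] = {3: "tools/call"}

def handle_response(response: dict) -> bool:
    """Return True if the response matched a pending request."""
    rid = response.get("id")
    if rid in pending:
        pending.pop(rid)
        return True
    return False  # stale response from a previous session — ignore it

assert handle_response({"jsonrpc": "2.0", "id": 3, "result": {}}) is True
assert handle_response({"jsonrpc": "2.0", "id": 99, "result": {}}) is False
```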


#MCP #JSONRPC #ProtocolDesign #AIAgents #AgenticAI #LearnAI #AIEngineering
