
How to Build an AI Coding Assistant with Claude and MCP: Step-by-Step Guide

Build a powerful AI coding assistant that reads files, runs tests, and fixes bugs using the Claude API and Model Context Protocol servers in TypeScript.

Why Build a Coding Assistant with MCP?

The Model Context Protocol (MCP) is an open standard that gives AI models structured access to external tools and data sources. Unlike traditional function calling where you hardcode tool definitions into your application, MCP provides a standardized client-server architecture where tool servers can be reused across different AI applications.

For a coding assistant, MCP is particularly powerful because it lets you expose filesystem operations, terminal commands, Git operations, and language server features as MCP tools that Claude can call. The result is a coding assistant that can genuinely read your codebase, understand project structure, run tests, and fix bugs — not just generate code in isolation.

In this tutorial, you will build a fully functional coding assistant in TypeScript that connects to MCP servers for filesystem access and command execution.

Architecture

┌─────────────────────────────────────────────────┐
│                 Coding Assistant                │
│                                                 │
│  ┌───────────┐   ┌──────────┐   ┌────────────┐  │
│  │  Claude   │──▶│  MCP     │──▶│  MCP       │  │
│  │  API      │◀──│  Client  │◀──│  Servers   │  │
│  └───────────┘   └──────────┘   └────────────┘  │
│                                       │         │
│                        ┌──────────────┼────┐    │
│                        ▼              ▼    ▼    │
│                   Filesystem     Terminal  Git  │
└─────────────────────────────────────────────────┘

Prerequisites

  • Node.js 20+ and npm
  • Claude API key from Anthropic
  • Basic TypeScript knowledge

Step 1: Project Setup

mkdir coding-assistant && cd coding-assistant
npm init -y
npm install @anthropic-ai/sdk @modelcontextprotocol/sdk zod dotenv
npm install -D typescript @types/node tsx

npx tsc --init --target ES2022 --module NodeNext --moduleResolution NodeNext --outDir dist --strict true

Create the project structure:

mkdir -p src/{mcp-servers,tools,core}
touch src/index.ts src/assistant.ts src/core/claude-client.ts
touch src/mcp-servers/filesystem.ts src/mcp-servers/terminal.ts
touch .env
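
Add your API key to `.env` (the values below are placeholders, substitute your own):

```
# .env — placeholder values, substitute your own
ANTHROPIC_API_KEY=your-anthropic-api-key
# Optional: the servers fall back to the current working directory
PROJECT_ROOT=/absolute/path/to/your/project
```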

Step 2: Build the Filesystem MCP Server

The filesystem server exposes tools for reading, writing, and searching files:

// src/mcp-servers/filesystem.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import * as fs from "fs/promises";
import * as path from "path";

const server = new McpServer({
  name: "filesystem-server",
  version: "1.0.0",
});

const ALLOWED_ROOT = process.env.PROJECT_ROOT || process.cwd();

function validatePath(filePath: string): string {
  const resolved = path.resolve(ALLOWED_ROOT, filePath);
  // Compare against the root with a trailing separator so a sibling
  // directory such as `${ALLOWED_ROOT}-evil` does not pass the check.
  if (resolved !== ALLOWED_ROOT && !resolved.startsWith(ALLOWED_ROOT + path.sep)) {
    throw new Error("Path traversal detected: access denied");
  }
  return resolved;
}

server.tool(
  "read_file",
  "Read the contents of a file at the given path",
  { path: z.string().describe("Relative path to the file") },
  async ({ path: filePath }) => {
    const resolved = validatePath(filePath);
    const content = await fs.readFile(resolved, "utf-8");
    return { content: [{ type: "text", text: content }] };
  }
);

server.tool(
  "write_file",
  "Write content to a file, creating it if it does not exist",
  {
    path: z.string().describe("Relative path to the file"),
    content: z.string().describe("Content to write"),
  },
  async ({ path: filePath, content }) => {
    const resolved = validatePath(filePath);
    await fs.mkdir(path.dirname(resolved), { recursive: true });
    await fs.writeFile(resolved, content, "utf-8");
    return { content: [{ type: "text", text: `Wrote ${content.length} characters to ${filePath}` }] };
  }
);

server.tool(
  "list_directory",
  "List files and directories at the given path",
  { path: z.string().describe("Relative directory path").default(".") },
  async ({ path: dirPath }) => {
    const resolved = validatePath(dirPath);
    const entries = await fs.readdir(resolved, { withFileTypes: true });
    const listing = entries.map(
      (e) => `${e.isDirectory() ? "[DIR]" : "[FILE]"} ${e.name}`
    );
    return { content: [{ type: "text", text: listing.join("\n") }] };
  }
);

server.tool(
  "search_files",
  "Search for files matching a glob pattern in the project",
  {
    pattern: z.string().describe("Search pattern (e.g., '*.ts', 'test')"),
    directory: z.string().default("."),
  },
  async ({ pattern, directory }) => {
    const resolved = validatePath(directory);
    const results: string[] = [];

    // Build one regex up front: escape regex metacharacters, then turn
    // every `*` into `.*` so simple globs like `*.ts` behave as expected.
    const regex = new RegExp(
      pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&").replace(/\*/g, ".*")
    );

    async function walk(dir: string) {
      const entries = await fs.readdir(dir, { withFileTypes: true });
      for (const entry of entries) {
        const fullPath = path.join(dir, entry.name);
        if (entry.isDirectory()) {
          // Skip hidden directories and node_modules entirely.
          if (!entry.name.startsWith(".") && entry.name !== "node_modules") {
            await walk(fullPath);
          }
        } else if (entry.name.includes(pattern) || regex.test(entry.name)) {
          results.push(path.relative(ALLOWED_ROOT, fullPath));
        }
      }
    }

    await walk(resolved);
    return { content: [{ type: "text", text: results.join("\n") || "No matches found" }] };
  }
);

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch(console.error);
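
Before wiring the server into the assistant, you can sanity-check the traversal guard in isolation. A standalone sketch (it mirrors the guard's intent rather than importing the server file; `ROOT` is a hypothetical path, and the trailing-separator comparison also rejects sibling directories like `project-evil`):

```typescript
import * as path from "path";

const ROOT = "/home/user/project"; // hypothetical project root

// Standalone version of the traversal guard: resolve the path, then
// require the result to sit at or under ROOT. Comparing against
// ROOT + path.sep rejects siblings such as /home/user/project-evil.
function validatePath(filePath: string): string {
  const resolved = path.resolve(ROOT, filePath);
  if (resolved !== ROOT && !resolved.startsWith(ROOT + path.sep)) {
    throw new Error("Path traversal detected: access denied");
  }
  return resolved;
}

console.log(validatePath("src/index.ts"));
try {
  validatePath("../../etc/passwd"); // escapes ROOT, so this throws
} catch (e) {
  console.log((e as Error).message);
}
```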

Step 3: Build the Terminal MCP Server

The terminal server lets Claude run commands like test suites and linters:

// src/mcp-servers/terminal.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { exec } from "child_process";
import { promisify } from "util";

const execAsync = promisify(exec);
const server = new McpServer({ name: "terminal-server", version: "1.0.0" });

const ALLOWED_COMMANDS = [
  "npm test", "npm run lint", "npm run build", "npx tsc --noEmit",
  "npx jest", "npx vitest", "git status", "git diff", "git log",
  "cat", "head", "tail", "wc", "grep",
];

function isAllowed(command: string): boolean {
  // Reject shell metacharacters so an allowed prefix cannot be chained
  // into an arbitrary command (e.g. "cat x; rm -rf /").
  if (/[;&|`$<>]/.test(command)) return false;
  return ALLOWED_COMMANDS.some(
    (allowed) => command === allowed || command.startsWith(allowed + " ")
  );
}

server.tool(
  "run_command",
  "Execute a shell command in the project directory. Only safe commands are allowed.",
  {
    command: z.string().describe("The shell command to execute"),
    timeout: z.number().default(30000).describe("Timeout in milliseconds"),
  },
  async ({ command, timeout }) => {
    if (!isAllowed(command)) {
      return {
        content: [{
          type: "text",
          text: `Command not allowed: ${command}. Allowed prefixes: ${ALLOWED_COMMANDS.join(", ")}`,
        }],
      };
    }

    try {
      const { stdout, stderr } = await execAsync(command, {
        cwd: process.env.PROJECT_ROOT || process.cwd(),
        timeout,
        maxBuffer: 1024 * 1024,
      });
      const output = [stdout, stderr].filter(Boolean).join("\n--- stderr ---\n");
      return { content: [{ type: "text", text: output || "(no output)" }] };
    } catch (error: any) {
      return {
        content: [{
          type: "text",
          text: `Command failed (exit ${error.code}):\n${error.stdout || ""}\n${error.stderr || ""}`,
        }],
      };
    }
  }
);

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch(console.error);
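
As with the filesystem guard, the allowlist deserves a standalone sanity check. A minimal sketch with a hypothetical trimmed-down command list — note that a bare prefix check would let `cat x; rm -rf /` through, so this version also filters shell metacharacters (an assumption layered on top of the article's list-based check):

```typescript
// Hypothetical trimmed-down allowlist for the demo.
const ALLOWED_COMMANDS = ["npm test", "git status", "cat"];

// Prefix allowlist plus a metacharacter filter, so an allowed prefix
// cannot be chained into something destructive (e.g. "cat x; rm -rf /").
function isAllowed(command: string): boolean {
  if (/[;&|`$<>]/.test(command)) return false;
  return ALLOWED_COMMANDS.some(
    (allowed) => command === allowed || command.startsWith(allowed + " ")
  );
}

console.log(isAllowed("npm test"));         // allowed verbatim
console.log(isAllowed("cat package.json")); // allowed prefix + argument
console.log(isAllowed("cat x; rm -rf /"));  // rejected: chained command
console.log(isAllowed("rm -rf /"));         // rejected: not on the list
```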

Step 4: Build the Claude Client with MCP Integration

This is the core of the assistant — it connects to Claude and routes tool calls to MCP servers:

// src/core/claude-client.ts
import Anthropic from "@anthropic-ai/sdk";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

interface MCPServerConfig {
  name: string;
  command: string;
  args: string[];
  env?: Record<string, string>;
}

export class CodingAssistant {
  private anthropic: Anthropic;
  private mcpClients: Map<string, Client> = new Map();
  private tools: Anthropic.Tool[] = [];
  private toolToServer: Map<string, string> = new Map();
  private conversationHistory: Anthropic.MessageParam[] = [];

  constructor() {
    this.anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
  }

  async connectMCPServer(config: MCPServerConfig): Promise<void> {
    const transport = new StdioClientTransport({
      command: config.command,
      args: config.args,
      env: { ...process.env, ...config.env } as Record<string, string>,
    });

    const client = new Client({ name: "coding-assistant", version: "1.0.0" }, {});
    await client.connect(transport);

    // Discover tools from this server
    const { tools } = await client.listTools();
    for (const tool of tools) {
      this.tools.push({
        name: tool.name,
        description: tool.description || "",
        input_schema: tool.inputSchema as Anthropic.Tool.InputSchema,
      });
      this.toolToServer.set(tool.name, config.name);
    }

    this.mcpClients.set(config.name, client);
    console.log(`Connected to ${config.name} with ${tools.length} tools`);
  }

  async callTool(toolName: string, args: Record<string, unknown>): Promise<string> {
    const serverName = this.toolToServer.get(toolName);
    if (!serverName) throw new Error(`Unknown tool: ${toolName}`);

    const client = this.mcpClients.get(serverName);
    if (!client) throw new Error(`Server not connected: ${serverName}`);

    const result = await client.callTool({ name: toolName, arguments: args });
    // Tool results can contain non-text blocks; keep only the text ones.
    const blocks = result.content as Array<{ type: string; text?: string }>;
    return blocks
      .filter((c) => c.type === "text" && typeof c.text === "string")
      .map((c) => c.text)
      .join("\n");
  }

  async chat(userMessage: string): Promise<string> {
    this.conversationHistory.push({ role: "user", content: userMessage });

    const systemPrompt = `You are an expert coding assistant. You have access to the
user's project through filesystem and terminal tools.

WORKFLOW:
1. When asked to fix a bug: read the relevant files, understand the context,
   run tests to reproduce, make the fix, run tests again to verify.
2. When asked to add a feature: understand the codebase structure first,
   then implement following existing patterns.
3. Always run tests after making changes.
4. Explain what you found and what you changed.`;

    let response = await this.anthropic.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 8192,
      system: systemPrompt,
      tools: this.tools,
      messages: this.conversationHistory,
    });

    // Agentic loop: keep processing until no more tool calls
    while (response.stop_reason === "tool_use") {
      const assistantContent = response.content;
      this.conversationHistory.push({ role: "assistant", content: assistantContent });

      const toolResults: Anthropic.ToolResultBlockParam[] = [];

      for (const block of assistantContent) {
        if (block.type === "tool_use") {
          console.log(`  Calling tool: ${block.name}`);
          try {
            const result = await this.callTool(
              block.name,
              block.input as Record<string, unknown>
            );
            toolResults.push({
              type: "tool_result",
              tool_use_id: block.id,
              content: result,
            });
          } catch (error: any) {
            toolResults.push({
              type: "tool_result",
              tool_use_id: block.id,
              content: `Error: ${error.message}`,
              is_error: true,
            });
          }
        }
      }

      this.conversationHistory.push({ role: "user", content: toolResults });

      response = await this.anthropic.messages.create({
        model: "claude-sonnet-4-20250514",
        max_tokens: 8192,
        system: systemPrompt,
        tools: this.tools,
        messages: this.conversationHistory,
      });
    }

    const finalText = response.content
      .filter((b): b is Anthropic.TextBlock => b.type === "text")
      .map((b) => b.text)
      .join("\n");

    this.conversationHistory.push({ role: "assistant", content: response.content });
    return finalText;
  }

  async disconnect(): Promise<void> {
    for (const [name, client] of this.mcpClients) {
      await client.close();
      console.log(`Disconnected from ${name}`);
    }
  }
}

Step 5: Build the Interactive CLI

// src/index.ts
import { CodingAssistant } from "./core/claude-client.js";
import * as readline from "readline";
import { config } from "dotenv";

config();

async function main() {
  const assistant = new CodingAssistant();

  // Connect MCP servers
  await assistant.connectMCPServer({
    name: "filesystem",
    command: "npx",
    args: ["tsx", "src/mcp-servers/filesystem.ts"],
    env: { PROJECT_ROOT: process.cwd() },
  });

  await assistant.connectMCPServer({
    name: "terminal",
    command: "npx",
    args: ["tsx", "src/mcp-servers/terminal.ts"],
    env: { PROJECT_ROOT: process.cwd() },
  });

  console.log("Coding assistant ready. Type your request or 'exit' to quit.\n");

  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });

  const askQuestion = () => {
    rl.question("You: ", async (input) => {
      const trimmed = input.trim();
      if (trimmed.toLowerCase() === "exit") {
        await assistant.disconnect();
        rl.close();
        return;
      }

      try {
        const response = await assistant.chat(trimmed);
        console.log(`\nAssistant: ${response}\n`);
      } catch (error: any) {
        console.error(`Error: ${error.message}\n`);
      }

      askQuestion();
    });
  };

  askQuestion();
}

main().catch(console.error);

Step 6: Test the Assistant

Run the assistant and test it against a real project:

npx tsx src/index.ts

Try these prompts:


  • "List all TypeScript files in the project"
  • "Read the package.json and tell me what dependencies we have"
  • "Run the test suite and show me any failures"
  • "Find and fix the bug in src/utils.ts — the sort function is returning wrong results"

Security Considerations

The coding assistant has access to your filesystem and can run commands. Implement these safeguards:

  1. Path sandboxing — The filesystem server validates that all paths stay within the project root
  2. Command allowlisting — The terminal server only permits specific, safe commands
  3. No secret exposure — Never include .env files or credentials in files that Claude reads
  4. Timeout limits — All commands have timeout limits to prevent runaway processes
  5. Audit logging — Log every tool call for review
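
The last safeguard can be as simple as one JSON line per tool call. A minimal sketch (the `auditLog` helper and the `audit.log` filename are hypothetical — in the assistant you would call it at the top of `callTool`):

```typescript
import * as fs from "fs";

// Hypothetical audit helper: append one JSON line per tool call so every
// file read/write and command execution can be reviewed after the fact.
function auditLog(toolName: string, args: Record<string, unknown>): void {
  const entry = JSON.stringify({
    ts: new Date().toISOString(),
    tool: toolName,
    args,
  });
  fs.appendFileSync("audit.log", entry + "\n");
}

auditLog("read_file", { path: "src/index.ts" });
auditLog("run_command", { command: "npm test" });
console.log(fs.readFileSync("audit.log", "utf-8").trim());
```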

FAQ

Can I use this with models other than Claude?

The MCP servers are model-agnostic — they communicate via the standard MCP protocol. You can connect them to any model that supports tool calling. Replace the Claude-specific code in claude-client.ts with your preferred model's API. The MCP client and server code remains unchanged.

How do I add support for additional languages beyond TypeScript?

Add language-specific MCP servers. For Python projects, create a server that exposes tools for running pytest, checking types with mypy, and formatting with black. The modular architecture means you can compose any combination of MCP servers for your stack.

What is the token cost per interaction?

A typical coding interaction where Claude reads 2-3 files, runs tests, and makes a fix uses approximately 5,000-15,000 input tokens and 1,000-3,000 output tokens. At current Claude pricing, this costs roughly $0.02-0.08 per interaction. Complex multi-file changes may cost more.
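
Those figures are easy to sanity-check. A sketch using assumed per-million-token prices (illustrative only — verify against Anthropic's current pricing page before relying on them):

```typescript
// Assumed prices in USD per million tokens — check Anthropic's pricing
// page for current rates; these numbers are illustrative only.
const INPUT_PRICE_PER_MTOK = 3.0;
const OUTPUT_PRICE_PER_MTOK = 15.0;

function estimateCost(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_PRICE_PER_MTOK +
    (outputTokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK
  );
}

// A mid-range interaction from the estimate above: ~10k in, ~2k out.
console.log(estimateCost(10_000, 2_000).toFixed(3));
```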

How do I handle large codebases that exceed the context window?

Use selective file reading rather than loading entire directories. The search_files tool helps Claude find relevant files without reading everything. You can also add a code indexing MCP server that uses embeddings to find semantically relevant code sections for a given query.
