AI Infrastructure · 12 min read

mcp-aws in 2026: Bedrock AgentCore, S3, Lambda — the Official AWS MCP Servers

AWS took its MCP servers to general availability in May 2026. We unpack the awslabs/mcp suite: AgentCore, S3 access, Lambda hosting, ECS deployment, and how to ship a streamable-HTTP MCP on AWS.

TL;DR — AWS MCP went GA in May 2026. awslabs/mcp ships dozens of official servers (S3, Bedrock AgentCore, CloudWatch, Cost Explorer). The deployment story: Lambda for stateless tools, ECS for long-lived MCP services, AgentCore Runtime for fully managed hosting.

What the MCP server does

The AWS MCP suite (awslabs/mcp) is a collection of official servers, each exposing one AWS service:

  • Bedrock AgentCore MCP — orchestrate Bedrock agents, manage identity/memory/gateways
  • S3 MCP — list buckets, get/put objects, signed URLs
  • CloudWatch MCP — query logs and metrics
  • Cost Explorer MCP — let agents reason about your AWS spend
  • CDK / Terraform MCPs — generate and validate infra-as-code

Plus a sandbox: AgentCore lets agents run Python against AWS services in a sandboxed runtime with no local filesystem or shell access.
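To make the "no local filesystem or shell access" guarantee concrete, here is a toy sketch of the kind of restriction such a sandbox implies. This is purely illustrative: AgentCore enforces isolation at the runtime level, not by scanning source code, and the denylist below is a hypothetical policy, not AWS's.

```python
import ast

# Hypothetical denylist illustrating what "no filesystem or shell access"
# rules out. AgentCore's real enforcement is service-side, not AST-based.
DENIED_MODULES = {"os", "subprocess", "shutil", "pathlib", "socket"}

def violates_sandbox(source: str) -> set:
    """Return the set of denied modules the agent code tries to import."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found |= {alias.name.split(".")[0] for alias in node.names}
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found & DENIED_MODULES
```

Agent code that only touches AWS SDK calls and pure computation passes; anything reaching for the shell or local disk trips the check.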

Hear it before you finish reading

Talk to a live CallSphere AI voice agent in your browser — 60 seconds, no signup.

Try Live Demo →
```mermaid
flowchart LR
  A[Agent] -->|MCP| B[AWS MCP Suite]
  B -->|Streamable HTTP| C[ECS / Fargate]
  B -->|stateless| D[Lambda]
  B -->|managed| E[AgentCore Runtime]
  E -->|tools| F[Bedrock]
  E -->|tools| G[S3]
  E -->|tools| H[CloudWatch]
```

Auth + transport (SSE/stdio/HTTP)

AWS MCP servers run Streamable HTTP in production. Auth is OAuth 2.1 + IAM — the MCP client gets an OAuth token; the MCP server exchanges it (or uses STS AssumeRole) for AWS credentials with scoped permissions. AgentCore Runtime handles this end-to-end. Lambda + API Gateway + Cognito is the DIY pattern.
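The scoping half of that exchange can be sketched as a mapping from OAuth scopes on the client's token to an IAM session policy attached to the STS AssumeRole call. The scope names and the mapping below are hypothetical, chosen only to show the shape of the idea:

```python
# Hypothetical scope-to-action mapping for an MCP server doing token
# exchange: the OAuth scopes on the inbound token decide which session
# policy gets attached when the server assumes its AWS role.
SCOPE_ACTIONS = {
    "s3.read":   ["s3:GetObject", "s3:ListBucket"],
    "s3.write":  ["s3:PutObject"],
    "logs.read": ["logs:FilterLogEvents", "logs:GetLogEvents"],
}

def session_policy(scopes: list) -> dict:
    """Build an IAM session policy limited to the granted OAuth scopes."""
    actions = sorted(a for s in scopes for a in SCOPE_ACTIONS.get(s, []))
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": actions, "Resource": "*"}
        ],
    }
```

A session policy can only narrow the role's permissions, never widen them, which is why it pairs well with a broad service role: the per-request token decides how much of that role the tool call actually gets.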

How CallSphere uses it

CallSphere runs on a Postgres + k3s + AWS hybrid. We use AWS MCP for:

  • CloudWatch MCP — our SRE agent reads ALB / RDS metrics and correlates with deploy events.
  • Cost Explorer MCP — finance agents query monthly spend by tag, broken down by our 6 verticals.
  • S3 MCP — analytics agents pull rendered call transcripts (we store them encrypted in S3 with per-tenant prefixes).
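A per-tenant prefix scheme like the one mentioned above is easy to sketch. The exact layout here is illustrative (CallSphere's real scheme isn't published beyond "per-tenant prefixes"), but the invariant it enforces is the point: every object key starts with the tenant, so IAM prefix conditions can fence each tenant off.

```python
from datetime import date

def transcript_key(tenant_id: str, call_id: str, day: date) -> str:
    """Build a per-tenant S3 key (illustrative layout, not CallSphere's
    actual scheme). Rejecting '/' in ids keeps prefixes unambiguous."""
    if "/" in tenant_id or "/" in call_id:
        raise ValueError("ids must not contain '/'")
    return f"tenants/{tenant_id}/transcripts/{day.isoformat()}/{call_id}.json"
```

With this layout, an IAM policy condition on `s3:prefix` = `tenants/acme/*` is all it takes to scope an agent's S3 MCP access to one tenant.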

For deployment, we'd put a custom MCP behind ECS Fargate with an ALB if it needs persistent connections (long streaming responses, multi-step tools), and Lambda + API Gateway if it's stateless and bursty. AgentCore Runtime is the right managed answer if you don't want to run any infra yourself.
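That heuristic fits in a few lines. This is a rule of thumb encoded from the paragraph above, not an official AWS decision tree:

```python
def pick_runtime(stateless: bool, bursty: bool,
                 needs_streaming: bool, manage_infra: bool) -> str:
    """Rule-of-thumb runtime picker for a custom MCP server on AWS."""
    if not manage_infra:
        # Don't want to run any infra: let AWS manage the runtime.
        return "AgentCore Runtime"
    if needs_streaming or not stateless:
        # Persistent connections, long streams, warm state: containers.
        return "ECS Fargate + ALB"
    if bursty:
        # Stateless and spiky: pay per invocation.
        return "Lambda + API Gateway"
    return "ECS Fargate + ALB"
```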

Build / install

  1. Browse the suite: npx -y @awslabs/mcp-server-list. Pick the services you need.
  2. Each server installs separately: npx -y @awslabs/mcp-server-s3 etc.
  3. Local dev: AWS_PROFILE / AWS_ACCESS_KEY_ID env. Production: IAM role attached to the runtime.
  4. For Lambda hosting, use awslabs/run-model-context-protocol-servers-with-aws-lambda — it wraps a stdio MCP as a Lambda streamable-HTTP handler.
  5. For ECS, use the official Anthropic + AWS guide on ECS deployment with a Fargate task definition + ALB.
  6. For managed, deploy via bedrock-agentcore-cli runtime mcp deploy.
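For a feel of what the Lambda path in step 4 ends up serving, here is a deliberately simplified handler answering an MCP `tools/list` request as JSON-RPC over HTTP. The real awslabs adapter handles sessions, streaming, and the full protocol; the tool name below is a made-up example.

```python
import json

def handler(event: dict, context=None) -> dict:
    """Toy Lambda-style handler for an MCP tools/list request
    (simplified: no sessions, no streaming, one hard-coded tool)."""
    req = json.loads(event["body"])
    if req.get("method") == "tools/list":
        result = {"tools": [
            {"name": "get_object", "description": "Fetch an S3 object"},
        ]}
    else:
        result = {}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(
            {"jsonrpc": "2.0", "id": req.get("id"), "result": result}
        ),
    }
```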

FAQ

Lambda vs ECS? Lambda for stateless tools that can tolerate cold starts. ECS when you need warm caches, persistent streaming connections, or sidecars.


AgentCore Runtime cost? You pay AWS's managed-runtime pricing on top of the underlying service usage. Run the numbers in the AWS pricing calculator before committing.

Can I expose my Postgres MCP via AWS? Yes — wrap it with run-model-context-protocol-servers-with-aws-lambda.

S3 Files for filesystem-like access? New as of April 2026 — Lambda can mount S3 as a filesystem. Pair this with Filesystem MCP for cheap, durable agent storage.

Pricing tier for this? AWS bills you separately. CallSphere's $1499 Enterprise tier includes our AWS-MCP-fronted analytics.

