AI Engineering

Deepgram + LLM Hybrid vs End-to-End Realtime: Cost in 2026

A cascaded Deepgram STT + LLM + TTS stack lands at $0.05–$0.15 per minute. End-to-end Realtime APIs run $0.10–$0.30. The honest tradeoff is latency, not just cost.


The cost problem

```mermaid
flowchart LR
  Browser["Browser / Phone"] -- "WebSocket /ws" --> LB["Load Balancer<br/>sticky session"]
  LB --> Pod1["Node A · Socket.IO"]
  LB --> Pod2["Node B · Socket.IO"]
  Pod1 -- "pub/sub" --> Redis[("Redis cluster")]
  Pod2 -- "pub/sub" --> Redis
  Pod1 --> AI["AI Worker · OpenAI Realtime"]
  Pod2 --> AI
```

CallSphere reference architecture

There are two broad architectures for voice agents in 2026: end-to-end speech-to-speech (gpt-realtime, ElevenAgents Premium), and cascaded pipelines (STT → LLM → TTS, often Deepgram + GPT-4o-mini + Aura-2). The end-to-end stacks are simpler and lower-latency for short turns; the cascaded stacks are usually cheaper, especially with a small/cheap LLM in the middle.

The cost gap can be 2–4× depending on how you wire it. But cost is not the only axis — latency, voice quality, and barge-in behavior all change with architecture.
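As a shape, the cascaded pipeline is just three awaited hops. A minimal sketch in TypeScript, with the stage functions standing in for whatever vendor SDKs you wire up (the names here are placeholders, not real SDK calls):

```typescript
// One turn of a cascaded pipeline: caller audio -> text -> text -> agent audio.
// Each Stage is a placeholder for a vendor SDK call (Deepgram STT, an LLM
// chat completion, Aura-2 TTS); only the wiring is the point.
type Stage<In, Out> = (input: In) => Promise<Out>;

function cascade(
  stt: Stage<Uint8Array, string>,   // caller audio -> transcript
  llm: Stage<string, string>,       // transcript -> agent reply text
  tts: Stage<string, Uint8Array>,   // reply text -> agent audio
): Stage<Uint8Array, Uint8Array> {
  return async (audioIn) => tts(await llm(await stt(audioIn)));
}
```

Each hop adds its own time-to-first-byte, which is where the cascaded latency penalty comes from.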

How Deepgram prices it

Deepgram's pricing page (May 2026) lists:

  • Nova-3 monolingual streaming STT: $0.0048/min (Pay-As-You-Go) or $0.0042/min (Growth tier with $4k annual commit)
  • Nova-3 multilingual streaming: $0.0058/min PAYG
  • Flux English streaming: $0.0065/min PAYG
  • Aura-2 TTS: $0.030 per 1,000 characters PAYG, $0.027 at Growth
  • Voice Agent API Standard tier: $0.075/min PAYG (bundles STT + LLM + TTS)
  • Voice Agent BYO TTS: $0.065/min
  • Voice Agent Custom (BYO LLM + TTS): $0.050/min
  • Voice Agent Advanced: $0.163/min

Honest math

Assume a typical 5-minute support call with a 60/40 caller-agent talk split.

Cascaded DIY pipeline (Deepgram Nova-3 + GPT-4o-mini + Aura-2):

  • STT: 5 min × $0.0048 = $0.024
  • LLM: ~12k input tokens (with 80% cache) + 2k output on gpt-4o-mini = $0.024
  • TTS: 2 min agent speech × ~150 wpm × ~5 chars/word ÷ 1k × $0.030 = $0.045
  • Total: ~$0.093/call → $0.0186/min
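The bullet math above folds into a small helper. A sketch, using the PAYG rates quoted in this post; the 150 wpm and 5 chars-per-word figures are assumptions, and the LLM cost is passed in as-is because it depends on turn count and cache hit rate:

```typescript
// Per-call cost of the DIY cascaded pipeline (Nova-3 + any LLM + Aura-2).
const STT_PER_MIN = 0.0048;     // Nova-3 monolingual streaming, PAYG
const TTS_PER_1K_CHARS = 0.030; // Aura-2, PAYG
const WPM = 150;                // assumed agent speaking rate
const CHARS_PER_WORD = 5;       // assumed average word length

function cascadedCallCost(callMin: number, agentSpeechMin: number, llmCost: number) {
  const stt = callMin * STT_PER_MIN;  // STT bills the whole call, both speakers
  const tts = (agentSpeechMin * WPM * CHARS_PER_WORD / 1000) * TTS_PER_1K_CHARS;
  const total = stt + llmCost + tts;
  return { stt, tts, total, perMin: total / callMin };
}

// The 5-minute call above: cascadedCallCost(5, 2, 0.024)
// -> stt $0.024, tts $0.045, total ~$0.093, perMin ~$0.0186
```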

Deepgram Voice Agent Standard tier (bundled):

  • 5 min × $0.075 = $0.375 → $0.075/min

OpenAI gpt-realtime cached:

  • ~$0.28 per 5-min call → $0.056/min

ElevenAgents Turbo:

  • 5 min × $0.10 = $0.50 → $0.10/min

The cascaded DIY pipeline is the cheapest at $0.019/min, about 3× cheaper than gpt-realtime cached and 5× cheaper than ElevenAgents Turbo. But you give up something: latency adds up across hops (STT TTFT + LLM TTFT + TTS TTFB), and you have to build your own VAD, barge-in, and turn-taking logic.

The real tradeoff: latency

In our internal benchmarks, voice-to-voice latency by architecture:

  • gpt-realtime end-to-end: ~430ms median
  • ElevenAgents Turbo: ~400ms median
  • Deepgram Voice Agent Standard: ~480ms median
  • DIY Deepgram + GPT-4o-mini + Aura-2 (well-engineered): ~520ms median
  • DIY Deepgram + GPT-4o + Aura-2 (cheap but slow): ~720ms median

So the well-engineered DIY stack saves roughly two-thirds on cost versus gpt-realtime cached (75% versus the bundled Voice Agent) but adds about 90ms of median latency, rising to 290ms with a slower LLM in the middle. For short FAQ flows, that is fine. For empathetic healthcare intake, it is not.
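The hop arithmetic is worth making explicit. A sketch; the per-hop numbers in the example are illustrative, not vendor benchmarks:

```typescript
// Voice-to-voice floor of a cascaded pipeline: the first-byte latencies of the
// three hops stack, plus a VAD/endpointing and network budget on top.
function cascadedLatencyMs(
  sttTtftMs: number,  // STT time to first transcript token
  llmTtftMs: number,  // LLM time to first output token
  ttsTtfbMs: number,  // TTS time to first audio byte
  overheadMs = 100,   // assumed VAD/endpointing + network overhead
): number {
  return sttTtftMs + llmTtftMs + ttsTtfbMs + overheadMs;
}

// e.g. cascadedLatencyMs(120, 200, 100) -> 520, consistent with the
// well-engineered DIY median above
```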

How CallSphere optimizes

CallSphere uses the cascaded approach for the Sales agent's outbound discovery flow because the prompt is small (3.5k tokens, mostly objection handling) and most calls last under 4 minutes — the end-to-end TTFT advantage is wasted on short turns. We use Deepgram Nova-3 for STT, GPT-4o-mini with 90% prompt caching for the brain, and Aura-2 for TTS. Net: $0.024/min on Sales, a substantial saving versus Realtime.


For Healthcare we go end-to-end on OpenAI Realtime PCM16 24kHz because the 22k-token clinical prompt and emotional barge-in tolerance demand it. The cost is higher per minute but the post-call NPS gap (8.4 vs 7.1 in our internal A/B last quarter) justified it.

Across the 6 verticals — 37 agents, 90+ tools, 115+ DB tables — about 60% of agent-minutes run cascaded and 40% end-to-end. The pricing tiers ($149 / $499 / $1499) are designed so even the cheap-tier customers can run the end-to-end stack on the calls that matter. Try it on the 14-day no-card trial.

Optimization checklist

  1. Profile your call duration distribution before picking architecture.
  2. For calls under 3 minutes with small prompts, cascaded almost always wins on cost.
  3. For calls over 8 minutes with big prompts and tool calls, end-to-end with caching wins.
  4. Use Deepgram Nova-3 for English-only STT (cheapest at $0.0048/min).
  5. Pair with GPT-4o-mini and prompt caching for the LLM hop — best price/quality.
  6. Pre-warm Aura-2 with the first sentence so TTFB stays under 200ms.
  7. For multi-language flows, Nova-3 multilingual is worth the $0.001/min upcharge over monolingual (Flux is English-only).
  8. Measure your end-to-end latency, not the marketing claim from each vendor.
  9. Run an A/B with your real users — NPS gap matters more than $0.05/min.
  10. Re-evaluate when Deepgram or OpenAI announce a price drop (frequent in 2026).
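Items 1–3 reduce to a crude first-pass rule. A sketch; the 3- and 8-minute thresholds come from the checklist above, while the 5k/15k prompt-token cutoffs are assumptions of ours, not measured break-evens:

```typescript
type Architecture = "cascaded" | "end-to-end" | "measure-both";

// First-pass architecture pick from call length and prompt size. Anything in
// the gray zone deserves a real cost/latency A/B, not a rule of thumb.
function pickArchitecture(medianCallMin: number, promptTokens: number): Architecture {
  if (medianCallMin < 3 && promptTokens < 5_000) return "cascaded";
  if (medianCallMin > 8 && promptTokens > 15_000) return "end-to-end";
  return "measure-both";
}
```

By this rule the Sales agent (under 4 minutes, 3.5k-token prompt) lands near the cascaded side and the Healthcare agent (22k-token prompt) on the end-to-end side, matching the choices described earlier.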

FAQ

Is cascaded always cheaper than end-to-end? At small prompt sizes, yes. At very large prompts, end-to-end with prompt caching can match or beat it because the cached-input discount is so steep ($0.40/M vs $4/M).

Why is Deepgram's bundled Voice Agent more expensive than DIY? You pay for orchestration, hosted VAD, barge-in handling, and the support contract.

Can I mix providers — Deepgram STT + OpenAI text LLM + ElevenLabs TTS? Yes, this is a very common production stack. CallSphere does it on Sales.

Does latency really matter that much? For empathetic flows yes — every 100ms over 600ms reduces "felt naturalness" measurably. For order-status FAQs, less so.

What about Deepgram Aura-2 vs ElevenLabs v3? Aura-2 is faster (sub-100ms TTFB) and cheaper per char. v3 is more expressive. Pick by use case.
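The cache-discount point in the first answer is easy to sanity-check. A sketch, using the $4/M uncached and $0.40/M cached rates quoted above:

```typescript
// Effective input cost per million tokens at a given cache hit rate.
function effectiveInputCostPerM(hitRate: number, uncached = 4.0, cached = 0.40): number {
  return hitRate * cached + (1 - hitRate) * uncached;
}

// At a 90% hit rate the effective rate is ~$0.76/M, an 81% discount, which is
// how a huge prompt on an end-to-end stack can still come out cheap.
```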

