
Model Latency Profiles by Provider: TTFT, TPS, and p99 in 2026

Headline tokens-per-second numbers hide what matters. Here are the 2026 latency profiles by provider (TTFT, TPS, and p99) for production planning.

What Latency Numbers Actually Matter

Three numbers matter per LLM provider:

  • TTFT (Time to First Token): how long until generation starts
  • TPS (Tokens Per Second): throughput once generation starts
  • p99 latency: tail latency under load

Headline benchmarks usually publish only TPS at low concurrency. Production planning needs all three at realistic load.

The Three Metrics

```mermaid
flowchart LR
    Req[Request] --> TTFT[TTFT: ms to first token]
    TTFT --> Gen[Generation at TPS]
    Gen --> Done[Done]
    Spike[Tail load] --> P99[p99: how slow does it get under stress]
```

For UX, TTFT often matters more than TPS. A user who sees their first word in 200ms and the rest streamed will feel served; a user who waits 2 seconds for nothing then gets the full reply will not.
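To make the split concrete, here is a minimal measurement sketch in Python. It assumes the official openai SDK and uses the GPT-5-mini model name from the table below; treating one stream chunk as roughly one token is an approximation, since chunking behavior varies by provider.

```python
import time
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def measure_stream(model: str, prompt: str) -> dict:
    """Time one streaming completion: TTFT, decode-phase TPS, total wall time."""
    start = time.perf_counter()
    first_token_at = None
    chunks = 0
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if first_token_at is None:
                first_token_at = time.perf_counter()  # first visible token
            chunks += 1  # ~1 token per chunk for most providers (approximate)
    end = time.perf_counter()
    ttft = (first_token_at or end) - start
    decode_time = end - (first_token_at or end)
    return {
        "ttft_s": ttft,
        # TPS is measured over the decode phase only, after the first token
        "tps": (chunks - 1) / decode_time if decode_time > 0 else 0.0,
        "total_s": end - start,
    }

print(measure_stream("gpt-5-mini", "Explain TTFT in two sentences."))
```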

April 2026 Approximate Numbers

For typical mid-tier models under moderate load:

| Provider | TTFT (ms) | TPS (tokens/s) | p99 latency (500-token response) |
|---|---|---|---|
| OpenAI GPT-5 | 200-400 | 60-100 | 8-12s |
| OpenAI GPT-5-mini | 150-300 | 100-150 | 5-8s |
| Anthropic Sonnet 4.6 | 200-500 | 50-100 | 7-12s |
| Anthropic Haiku 4.5 | 100-250 | 100-180 | 4-7s |
| Gemini 2.5 Flash | 100-250 | 100-200 | 4-7s |
| Open-weights via Together | 150-400 | 80-150 | 5-10s |

These shift with load and region. Run your own benchmarks.
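A sketch of what "run your own benchmarks" can look like, reusing measure_stream from the earlier sketch; the run count and prompt are illustrative, and a real harness should spread runs across times of day so the tail numbers capture peak-hour load.

```python
import statistics

def percentile(values: list[float], p: float) -> float:
    """Nearest-rank percentile; adequate for benchmark summaries."""
    ranked = sorted(values)
    k = min(len(ranked) - 1, max(0, round(p / 100 * len(ranked)) - 1))
    return ranked[k]

def latency_profile(model: str, prompt: str, n: int = 200) -> dict:
    # Reuses measure_stream from the sketch above; p99 needs enough runs
    # (hundreds, spread over time) before the tail number means anything.
    runs = [measure_stream(model, prompt) for _ in range(n)]
    return {
        "ttft_p50_ms": percentile([r["ttft_s"] for r in runs], 50) * 1000,
        "ttft_p99_ms": percentile([r["ttft_s"] for r in runs], 99) * 1000,
        "tps_median": statistics.median(r["tps"] for r in runs),
        "total_p99_s": percentile([r["total_s"] for r in runs], 99),
    }
```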

What Affects TTFT

  • Region routing
  • Cold-start vs warm
  • Prompt length (prefill is part of TTFT)
  • Cache hit (cached prefix has lower TTFT; see the caching sketch after this list)
  • Model size
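Prefix caching is usually the cheapest TTFT win on long prompts, since prefill dominates TTFT there. A hedged sketch using Anthropic's prompt-caching cache_control field; the model name is illustrative and LONG_SYSTEM_PROMPT is a placeholder for your real, stable prefix.

```python
import anthropic  # assumes the official anthropic Python SDK

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

LONG_SYSTEM_PROMPT = "..."  # placeholder: your long, stable system prompt

# Marking the stable prefix cacheable lets repeat requests skip most of the
# prefill work, which is the bulk of TTFT on long prompts.
response = client.messages.create(
    model="claude-haiku-4-5",  # illustrative name, per the table above
    max_tokens=256,
    system=[
        {
            "type": "text",
            "text": LONG_SYSTEM_PROMPT,
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Book me for 3pm Tuesday."}],
)
print(response.content[0].text)
```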

What Affects TPS

  • Model size
  • Inference hardware
  • Batch composition (your request shares inference hardware with other traffic)
  • Speculative decoding (improves TPS substantially when enabled)

What Affects p99

  • Provider's load shedding policies
  • Your account's rate limit headroom
  • Time of day (peak vs off-peak)
  • Specific feature flags (reasoning mode is much slower)

Optimizing for TTFT

Since TTFT often dominates UX:

  • Region-pin requests
  • Use caching aggressively
  • Pre-warm connections (see the sketch after this list)
  • Pick models with lower TTFT for latency-sensitive paths
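Connection pre-warming is the simplest of these to implement. A minimal sketch with httpx against an OpenAI-style endpoint; the same pattern works with a requests.Session.

```python
import os
import httpx

API_KEY = os.environ["OPENAI_API_KEY"]

# A persistent client keeps TCP + TLS connections in its pool, so the
# handshake cost (often 100ms+) is paid at startup, not on the user's request.
client = httpx.Client(base_url="https://api.openai.com", timeout=10.0)

def prewarm() -> None:
    # Any cheap authenticated call works; listing models is a common choice.
    client.get("/v1/models", headers={"Authorization": f"Bearer {API_KEY}"})

prewarm()  # call at process start and again on a keep-alive timer
```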

Optimizing for TPS

When generating long responses:

  • Pick models with native fast generation (smaller, optimized)
  • Stream output to UI
  • Truncate output length where possible (see the sketch after this list)
  • Use speculative decoding if available
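Capping output length bounds total time directly, since total time is roughly TTFT plus tokens divided by TPS. A sketch assuming the openai SDK; note that the cap parameter's name varies by provider (max_tokens, max_completion_tokens, or max_output_tokens).

```python
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-5-mini",  # illustrative, per the table above
    messages=[{"role": "user", "content": "Summarize this ticket in 3 bullets."}],
    max_tokens=150,      # hard cap: worst-case total ~= TTFT + 150 / TPS
    stream=True,
)
for chunk in stream:
    # Flush tokens to the UI as they arrive so perceived latency tracks TTFT
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```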

Optimizing for p99

The hardest of the three to control. Approaches:


  • Reserved capacity / committed throughput tiers
  • Cross-provider failover for critical paths
  • Backoff and retry with intelligent budget
  • Dedicated capacity for premium customers

For p99 to be reliable, you typically need to pay for it (reserved capacity).
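A sketch of backoff plus cross-provider failover under a hard latency budget; call_provider here is a hypothetical wrapper over each vendor SDK that raises TimeoutError when no response lands within its deadline.

```python
import random
import time

PROVIDERS = ["primary", "fallback"]  # e.g. your main vendor, then a backup

def call_provider(name: str, prompt: str, timeout: float) -> str:
    """Hypothetical wrapper around a vendor SDK; raises TimeoutError on miss."""
    raise NotImplementedError

def generate_with_budget(prompt: str, budget_s: float = 10.0) -> str:
    # The p99 target becomes an explicit deadline shared across all attempts.
    deadline = time.monotonic() + budget_s
    for attempt, name in enumerate(PROVIDERS):
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        try:
            return call_provider(name, prompt, timeout=min(remaining, 6.0))
        except TimeoutError:
            # Jittered backoff that never sleeps past the overall deadline
            nap = min(0.2 * (2 ** attempt) * random.random(),
                      max(0.0, deadline - time.monotonic()))
            time.sleep(nap)
    raise TimeoutError("latency budget exhausted across all providers")
```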

Latency Across Modalities

Voice has tight budgets:

  • Realtime API TTFB: 200-400ms typical
  • TTS streaming: 30-100ms first audio
  • Combined end-to-end: 300-500ms perceived latency

Voice latency engineering is its own discipline.

What's Hidden

  • Token counting differs by provider (it affects both your bill and your measured response length)
  • Streaming behaviors differ (chunk size, frequency)
  • Reasoning models show progress slowly
  • Some providers add invisible "preamble" thinking

Practical Budget Setting

For a chat UI:

  • TTFT < 500ms (perceived as snappy)
  • Total response < 5s for typical requests
  • p99 < 10s

For a voice agent:

  • TTFT < 300ms
  • TTS streaming starts < 200ms
  • Total interaction loop < 1s
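These budgets are easy to encode so dashboards and alerts reference one source of truth. A minimal sketch, with the numbers taken from the lists above; the voice surface leaves p99 unset since the text above doesn't specify one.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LatencyBudget:
    ttft_ms: int
    total_s: float
    p99_s: float | None = None  # not every surface sets an explicit p99

CHAT_UI = LatencyBudget(ttft_ms=500, total_s=5.0, p99_s=10.0)
VOICE_AGENT = LatencyBudget(ttft_ms=300, total_s=1.0)

def within_budget(budget: LatencyBudget, ttft_ms: float, total_s: float) -> bool:
    """True when a measured request meets its surface's budget."""
    return ttft_ms <= budget.ttft_ms and total_s <= budget.total_s
```

Wire these constants into your benchmark harness and alerting, and the three numbers stop being benchmark trivia and start being SLOs.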
