
Realtime Intent Detection and Routing for Voice AI With Sub-500 ms Latency in 2026

An STT-LLM-routing pipeline that detects intent on every utterance and routes to the right specialist agent in under 500 ms. We cover the intent schema, debouncing, and how CallSphere routes 6 verticals' worth of intents.

TL;DR — Run an STT → LLM intent classifier → router pipeline in under 500 ms. Use a 12-intent enum for stability, debounce flips, and let the router publish a NATS subject like call.intent.book so the right agent picks it up. CallSphere does this across 6 verticals with 18 distinct intents.

Why this pipeline

Modern voice agents don't ask "Press 1 for billing." They listen, classify intent in real time, and hand off. The bar in 2026 is < 500 ms end-to-end (STT + LLM + route) — the rhythm of natural conversation. Anything slower and the caller starts repeating themselves.

The architecture has three moving parts: an always-on STT, a small LLM (or fine-tuned classifier) that emits a structured intent, and a router that maps intent → agent.

Hear it before you finish reading

Talk to a live CallSphere AI voice agent in your browser — 60 seconds, no signup.

Try Live Demo →

Architecture

flowchart LR
  Audio[Caller audio] --> STT[Streaming STT<br/>Deepgram / Whisper]
  STT -->|partial transcript| Cls[Intent classifier<br/>gpt-4o-mini, structured]
  Cls -->|{intent, confidence}| Deb[Debouncer<br/>3-utterance window]
  Deb --> Router{Router}
  Router -->|book| BookA[Booking agent]
  Router -->|cancel| CanA[Cancellation agent]
  Router -->|escalate| EscA[Escalation agent]
  Router -->|info| FAQ[FAQ agent]
  Router -->|unknown| Gen[Generalist agent]

The debouncer prevents flapping when the caller starts a sentence one way and reframes mid-stream.
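Here is a minimal debouncer sketch in TypeScript (names are illustrative; the 2-consecutive threshold comes from step 4 below): it reports a route change only once the same intent has been seen on enough consecutive classifications.

// Standalone sketch: suppress intent "flaps" by requiring the same intent
// on `stableCount` consecutive classifications before re-routing.
class IntentDebouncer {
  private lastIntent: string | null = null;
  private streak = 0;
  private routedIntent: string | null = null;

  constructor(private stableCount = 2) {}

  // Returns the intent when a re-route should fire, otherwise null.
  push(intent: string): string | null {
    if (intent === this.lastIntent) {
      this.streak += 1;
    } else {
      this.lastIntent = intent;
      this.streak = 1;
    }
    if (this.streak >= this.stableCount && intent !== this.routedIntent) {
      this.routedIntent = intent;
      return intent; // stable, and different from the current route
    }
    return null; // still flapping, or already routed here
  }
}

// A caller who flaps to "cancel" mid-sentence never triggers a re-route:
const deb = new IntentDebouncer(2);
["book", "book", "cancel", "book", "book"].forEach((i) => {
  console.log(i, "->", deb.push(i)); // fires once, on the second consecutive "book"
});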

CallSphere implementation

CallSphere runs 37 specialist agents · 90+ tools · 115+ DB tables · 6 verticals, priced $149 / $499 / $1499 at /pricing, with a 14-day trial and a 22% affiliate program. The Healthcare orchestrator at /industries/healthcare detects 18 intents (book-appt, reschedule, refill, billing-question, complaint, hipaa-disclosure, ...) and routes them via NATS subjects. Watch it live at /demo.

Build steps with code

  1. Lock your intent enum to 12–20 values; every value gets a routing rule.
  2. Use Structured Outputs with enum so the model can never hallucinate a new intent.
  3. Stream partials from STT every 200 ms; classify only when ≥ 12 tokens or end-of-turn.
  4. Debounce flips: only re-route if the new intent is stable across 2 consecutive classifications (see the debouncer sketch above).
  5. Publish to NATS with subject call.intent.{intent} and a correlation ID (see the publish sketch after the classifier code).
  6. Maintain a fallback: unknown routes to the generalist agent.
  7. Log to ClickHouse for retrospective routing-accuracy analysis.
import OpenAI from "openai";

const ai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const INTENTS = ["book","reschedule","cancel","refill","billing","complaint","escalate","faq","unknown"] as const;
type Intent = typeof INTENTS[number];

async function classify(text: string): Promise<{ intent: Intent; confidence: number }> {
  const r = await ai.chat.completions.create({
    model: "gpt-4o-mini",
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "intent",
        strict: true, // enforce the schema: the model cannot emit an intent outside the enum
        schema: {
          type: "object",
          properties: {
            intent: { type: "string", enum: [...INTENTS] },
            confidence: { type: "number" }, // prompted to stay in [0, 1]
          },
          required: ["intent", "confidence"],
          additionalProperties: false, // required by strict mode
        },
      },
    },
    messages: [
      { role: "system", content: "Classify the caller's intent. Report confidence as a number between 0 and 1." },
      { role: "user", content: text },
    ],
  });
  return JSON.parse(r.choices[0].message.content!) as { intent: Intent; confidence: number };
}
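And a sketch of the NATS hand-off from steps 5 and 6, using the nats.js client. The call.intent.{intent} subject shape and the 0.6 confidence floor come from the article; the server URL, header name, and payload shape are assumptions.

import { connect, JSONCodec, headers } from "nats";

const jc = JSONCodec<{ callId: string; intent: string; confidence: number }>();

// Publish a routed intent so the matching specialist agent, subscribed to
// call.intent.book / call.intent.cancel / ..., picks it up.
async function route(callId: string, intent: string, confidence: number) {
  const nc = await connect({ servers: "nats://localhost:4222" }); // assumed URL; reuse one connection in production
  // Step 6: low-confidence classifications fall through to the generalist.
  const subject = confidence >= 0.6 ? `call.intent.${intent}` : "call.intent.unknown";
  const h = headers();
  h.set("X-Correlation-Id", callId); // assumed header name for the correlation ID
  nc.publish(subject, jc.encode({ callId, intent, confidence }), { headers: h });
  await nc.drain(); // flush before exit
}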

Pitfalls

  • Open-vocabulary intent — never let the model invent intents; route by enum.
  • Routing on a single utterance — debounce or you'll flap mid-sentence.
  • Skipping confidence — < 0.6 should fall through to a generalist.
  • Per-utterance LLM calls — classify only on end-of-turn or a 5 s rolling window, otherwise costs explode (see the gating sketch below).
  • No unknown route — every intent eventually misses; have a fallback.
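A sketch of that gate, combining step 3's thresholds with the cost pitfall above. The whitespace tokenizer and the onPartial handler are stand-ins for your STT stream's own token counts and callback.

// Gate LLM calls: classify only on end-of-turn, or once the partial
// transcript is long enough to be worth a call.
const MIN_TOKENS = 12;

function shouldClassify(partial: string, endOfTurn: boolean): boolean {
  if (endOfTurn) return true; // a completed turn is always classified
  // Crude whitespace tokenizer; use your STT's token counts if it exposes them.
  const tokens = partial.trim().split(/\s+/).filter(Boolean).length;
  return tokens >= MIN_TOKENS;
}

// Wiring into a streaming STT callback. onPartial stands in for your
// Deepgram / Whisper partial-transcript handler firing every ~200 ms.
let buffer = "";
let inFlight = false;

async function onPartial(text: string, endOfTurn: boolean) {
  buffer = text;
  if (!inFlight && shouldClassify(buffer, endOfTurn)) {
    inFlight = true; // throttle: at most one classification in flight
    try {
      await classify(buffer); // classify() from the build-steps snippet
    } finally {
      inFlight = false;
      if (endOfTurn) buffer = "";
    }
  }
}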

FAQ

Latency budget? STT 150 ms + LLM 200 ms + route 30 ms = ~380 ms p95.

Can we use a fine-tuned BERT? Yes for 12 fixed intents; cheaper and 50 ms faster, but needs 500+ labeled examples per intent.

Still reading? Stop comparing — try CallSphere live.

CallSphere ships complete AI voice agents per industry — 14 tools for healthcare, 10 agents for real estate, 4 specialists for salons. See how it actually handles a call before you book a demo.

How does this interact with topic classification (post #3)? Topic is "what they're talking about"; intent is "what they want done." We run both in parallel.

Multi-intent in one call? Re-classify on every turn; intent changes mid-call are normal.

HIPAA? Use Azure OpenAI for healthcare; intent text never leaves the BAA boundary.
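For that HIPAA answer, a minimal sketch of pointing the same classifier at Azure OpenAI via the openai npm package's AzureOpenAI client. The endpoint, deployment name, and API version below are placeholders.

import { AzureOpenAI } from "openai";

// Placeholders: substitute your Azure resource endpoint, deployment, and API version.
const ai = new AzureOpenAI({
  endpoint: "https://YOUR-RESOURCE.openai.azure.com",
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-10-21",
  deployment: "gpt-4o-mini", // the name you gave the deployment, not the raw model ID
});
// classify() from the build steps works unchanged against this client.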

Production view

Sub-500 ms intent routing forces a tension most teams underestimate: agent handoff state. A single LLM call is easy. A booking agent that hands a confirmed slot to a billing agent, which hands a follow-up to an escalation agent: that is where context loss, hallucinated IDs, and double-bookings live. Solving it well means treating the conversation as a stateful workflow, not a chat.

Shipping the agent to production

Production AI agents live or die on three loops: evals, retries, and handoff state. CallSphere runs 37 agents across 6 verticals, each with its own eval suite of synthetic call transcripts replayed nightly with assertion checks on extracted entities (date, time, party size, insurance, address). Without that loop, prompt regressions ship silently and you only find out when bookings drop.

Structured tools beat free-form text every time. Our 90+ function tools all enforce JSON schemas validated server-side; if the model hallucinates an integer where a string is required, we retry with a corrective system message before falling back to a deterministic path. For long-running flows, we treat agent handoffs as a state machine (booking → confirmation → SMS) so context survives turn boundaries.

The Realtime-API-vs-async decision usually comes down to "is the user holding the phone right now?" If yes, Realtime; if no (callback queue, after-hours voicemail), async wins on cost per conversation, which we track per agent in 115+ database tables spanning all 6 verticals.

Pilot FAQ

How does this apply to a CallSphere pilot specifically? Real Estate runs as a 6-container pod (frontend, gateway, ai-worker, voice-server, NATS event bus, Redis) backed by a Postgres realestate_voice database with row-level security, so multi-tenant data never crosses tenants. For sub-500 ms intent routing, that means you are not starting from scratch: you are configuring an agent template that has already been hardened across thousands of conversations.

What does the typical first-week implementation look like? Day one is integration mapping (scheduler, CRM, messaging) and prompt tuning against your top 20 real call transcripts. Days two through five are shadow mode, where the agent transcribes and recommends while a human still answers, so you can compare side by side. Go-live is the moment your eval pass rate clears your internal bar.

Where does this break down at scale? The honest answer: it scales until your tool catalog goes stale. The agent is only as good as the integrations it can actually call, so the operational discipline is keeping schemas, webhooks, and fallback paths green. The platform handles the rest (observability, retries, multi-region routing) without your team owning the GPU layer.

Talk to us

Want to see how this maps to your stack? Book a live walkthrough at calendly.com/sagar-callsphere/new-meeting, or try the vertical-specific demo at salon.callsphere.tech. 14-day trial, no credit card, pilot live in 3–5 business days.