Voice Agent Barge-In Handling: Server VAD, Client VAD, and the Hybrid Approach
Cleanly handling user interruptions is what separates a robotic voice agent from one that sounds human. The 2026 patterns and where they fail.
What Barge-In Actually Is
Barge-in is the user interrupting the agent mid-sentence. Done right, the agent stops talking, listens, and responds to the new utterance. Done wrong (and most 2024 voice agents got it wrong), the agent talks over the user, ignores the interruption, or hallucinates a response that mixes its own half-finished output with the user's input.
Three patterns ship in production in 2026: server VAD, client VAD, and hybrid. Each has tradeoffs. Most teams pick one and learn the hard way that they should have picked the other.
The Three Patterns
flowchart TB
subgraph Server[Server VAD]
S1[Caller audio] --> S2[Stream to server]
S2 --> S3[Server detects speech]
S3 --> S4[Server cancels TTS]
end
subgraph Client[Client VAD]
C1[Caller audio] --> C2[Local VAD]
C2 --> C3[Send interrupt signal]
C3 --> C4[Server cancels TTS]
end
subgraph Hybrid[Hybrid]
H1[Local fast VAD] --> H2[Local interrupt + send signal]
H2 --> H3[Server semantic VAD<br/>confirms]
H3 --> H4[Commit cancel<br/>or resume]
end
Server VAD
Audio streams to the server. The server detects speech, decides if it is an interruption, and cancels in-flight TTS. This is what OpenAI's Realtime API and most cloud voice services default to.
- Pro: simpler client; server has more compute and better models
- Con: round-trip means 100-300ms of TTS keeps playing after the user starts talking
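In practice, server VAD is mostly a session setting. A minimal sketch, assuming the OpenAI Realtime API over an already-open WebSocket; the threshold and silence values are illustrative, not recommendations:

```python
import json

def enable_server_vad(ws):
    # `ws` is an already-open WebSocket connection to the Realtime API.
    # session.update with turn_detection of type "server_vad" tells the server
    # to detect speech starts/stops and cancel the in-flight response itself.
    event = {
        "type": "session.update",
        "session": {
            "turn_detection": {
                "type": "server_vad",
                "threshold": 0.5,            # speech-probability cutoff (illustrative)
                "prefix_padding_ms": 300,    # audio kept before the detected speech start
                "silence_duration_ms": 500,  # silence required to end the turn
            }
        },
    }
    ws.send(json.dumps(event))
```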
Client VAD
A small VAD model (Silero, WebRTC VAD, or a lightweight transformer) runs on the device. When it detects speech, it sends an interrupt signal to the server.
- Pro: lowest latency to interruption — typically 50-100ms
- Con: false-positive interrupts from coughs, ambient noise, side conversations
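As a concrete client-side example, here is a rough sketch of streaming audio frames through Silero VAD and emitting an interrupt signal. The chunk size and sample rate follow Silero's documented streaming interface; `send_interrupt` is a placeholder for whatever transport carries the signal to your server.

```python
import torch

# Load Silero VAD via torch.hub, per the project's README.
model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")

SAMPLE_RATE = 16000
CHUNK = 512  # Silero's streaming model expects 512-sample chunks at 16 kHz

def run_local_vad(frames, send_interrupt, threshold=0.6):
    """frames: iterable of float32 torch tensors, CHUNK samples each."""
    speaking = False
    for frame in frames:
        prob = model(frame, SAMPLE_RATE).item()  # speech probability for this chunk
        if prob >= threshold and not speaking:
            speaking = True
            send_interrupt()   # placeholder: tell the server to cancel TTS
        elif prob < threshold:
            speaking = False
```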
Hybrid
Local VAD fires immediately and pauses TTS playback locally. The server's semantic VAD evaluates the audio and either confirms the interrupt (TTS stays cancelled) or resumes (TTS continues from where it paused).
- Pro: best of both — fast response, low false-positive rate
- Con: more complex; requires resumable TTS, which not all S2S models support
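The "resumable" part usually lives on the client as a playback buffer that can pause without discarding queued audio. A minimal sketch, not tied to any particular SDK:

```python
from collections import deque

class PausablePlayback:
    """Buffers TTS audio chunks: pause() holds them, resume() releases them,
    cancel() drops whatever has not been played yet."""

    def __init__(self, play_chunk):
        self.play_chunk = play_chunk   # callback that writes PCM to the speaker
        self.queue = deque()
        self.paused = False

    def feed(self, chunk):
        self.queue.append(chunk)
        self._drain()

    def pause(self):
        self.paused = True             # local VAD fired: stop output immediately

    def resume(self):
        self.paused = False            # server ruled it a false positive: keep going
        self._drain()

    def cancel(self):
        self.queue.clear()             # server confirmed the interrupt

    def _drain(self):
        while self.queue and not self.paused:
            self.play_chunk(self.queue.popleft())
```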
What Goes Wrong in Each
The failure modes that hit production:
- Server VAD over PSTN: cellular jitter and packet loss make end-of-utterance detection unreliable; the system either cuts the user off or waits too long
- Client VAD in noisy environments: a TV in the background triggers an interrupt mid-sentence
- Hybrid with non-resumable TTS: a cancel-and-resume race where the server decides to resume after the TTS stream has already finished, producing stuttered or doubled audio (a guard for this is sketched below)
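One way to defuse that last race is to tag each agent turn with a generation counter and ignore server decisions that arrive for a turn that has already finished playing. A sketch, assuming a pausable playback buffer like the one above:

```python
class InterruptGuard:
    """Drops stale resume/cancel decisions from the server."""

    def __init__(self):
        self.generation = 0      # bumped every time a new agent turn starts
        self.finished = set()    # generations whose audio fully played out

    def new_turn(self):
        self.generation += 1
        return self.generation

    def mark_finished(self, gen):
        self.finished.add(gen)

    def on_server_decision(self, gen, decision, playback):
        if gen != self.generation or gen in self.finished:
            return                    # stale: that turn is already over, do nothing
        if decision == "resume":
            playback.resume()
        else:
            playback.cancel()
```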
Tuning the Endpoint Threshold
flowchart LR
Speech[Caller speaks] --> Detect{Above threshold?}
Detect -->|Yes| Hold[Hold for X ms]
Hold --> Confirm{Still speaking?}
Confirm -->|Yes| Interrupt[Trigger interrupt]
Confirm -->|No| Ignore[Ignore]
The hold-duration X is the most-tuned parameter in production voice agents. Too short (50ms) gives false positives from breaths. Too long (300ms) gives sluggish interrupts. The sweet spot for typical telephony agents is 80-150ms with semantic VAD, or 150-250ms with energy-based VAD.
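In code, the hold is just a debounce over per-frame speech probabilities. A sketch with illustrative numbers; the frame size and threshold would come from whatever VAD you run:

```python
FRAME_MS = 20      # duration of one VAD frame (illustrative)
THRESHOLD = 0.6    # speech-probability cutoff
HOLD_MS = 120      # X: how long speech must persist before we interrupt

class EndpointDebouncer:
    def __init__(self):
        self.speech_ms = 0

    def update(self, speech_prob):
        """Call once per frame; returns True when the interrupt should fire."""
        if speech_prob >= THRESHOLD:
            self.speech_ms += FRAME_MS
            return self.speech_ms >= HOLD_MS
        self.speech_ms = 0   # below threshold: reset, it was a breath or noise blip
        return False
```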
What Native S2S Models Do Differently
GPT-4o-realtime, Gemini Live, and Sesame Maya all default to server VAD and expose explicit cancellation events (OpenAI's response.cancel, for example) for client-driven interrupts. The 2026 best practice with these models is to use server VAD as the floor and add client-side input-buffer cancellation for the cases where the round-trip cost is too high (long-form TTS, multi-sentence responses).
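With the OpenAI Realtime API, for instance, a client-driven interrupt is a response.cancel event, optionally followed by conversation.item.truncate so the conversation history reflects only the audio the caller actually heard. A hedged sketch; verify the current event schema before relying on these field names:

```python
import json

def cancel_active_response(ws, item_id, audio_end_ms):
    # Stop the in-flight response generation on the server.
    ws.send(json.dumps({"type": "response.cancel"}))

    # Truncate the assistant item to what was actually played locally, so the
    # model does not assume the caller heard the whole utterance.
    ws.send(json.dumps({
        "type": "conversation.item.truncate",
        "item_id": item_id,            # assistant audio item being played
        "content_index": 0,
        "audio_end_ms": audio_end_ms,  # milliseconds of audio already played
    }))
```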
A Concrete Hybrid Implementation
sequenceDiagram
participant Caller
participant Client
participant Server
participant TTS
Caller->>Client: starts speaking
Client->>Client: local VAD fires (60ms)
Client->>Server: interrupt signal
Client->>Client: pause local TTS playback
Server->>Server: semantic VAD evaluates 200ms
alt confirmed interrupt
Server->>TTS: cancel
Server->>Client: confirm cancel
else false positive
Server->>Client: resume
Client->>Client: resume playback
end
This is the design we use on CallSphere's voice agents. False-positive rate dropped from 11 percent (server-only VAD) to 2.4 percent (hybrid) with no measurable increase in interrupt latency.
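Glued together, the client side of that sequence is small. A sketch of the coordinator, assuming the local VAD, pausable playback buffer, and signaling pieces sketched earlier; `local_vad_events`, `events_from_server`, and `send_to_server` stand in for your transport:

```python
import asyncio

async def hybrid_barge_in(local_vad_events, events_from_server, send_to_server, playback):
    """local_vad_events yields speech-start events; events_from_server yields the
    semantic VAD verdict ('confirm_cancel' or 'resume')."""

    async def watch_vad():
        async for _ in local_vad_events:           # local VAD fired (~60 ms)
            playback.pause()                       # stop audio immediately
            await send_to_server({"type": "interrupt"})

    async def watch_server():
        async for event in events_from_server:     # semantic VAD verdict (~200 ms later)
            if event["type"] == "confirm_cancel":
                playback.cancel()                  # interrupt confirmed: drop queued TTS
            elif event["type"] == "resume":
                playback.resume()                  # false positive: keep playing

    await asyncio.gather(watch_vad(), watch_server())
```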
Sources
- Silero VAD — https://github.com/snakers4/silero-vad
- WebRTC VAD specification — https://webrtc.org
- OpenAI Realtime API VAD docs — https://platform.openai.com/docs/guides/realtime
- LiveKit barge-in patterns — https://docs.livekit.io
- "Endpoint detection in conversational AI" Deepgram — https://deepgram.com/learn