Realtime Call Recording → Whisper Batch Transcription: An Event-Driven Pipeline (2026)
Live calls record to S3, then EventBridge triggers AWS Batch + Whisper-large-v3 (or Parakeet) for high-quality transcription. We show the full event-driven pipeline with diarization and PII redaction stitched in.
TL;DR — Stream live audio to a real-time STT for the in-call experience, AND record to S3 for batch Whisper-large-v3 to produce a higher-quality canonical transcript. Trigger via EventBridge → AWS Batch on Inferentia/L4. CallSphere uses both: realtime for in-call, batch for analytics ground truth.
Why this pipeline
Realtime STT is fast but error-prone on accents, technical terms, and overlapping speech. Batch Whisper-large-v3 (or Parakeet) is roughly 5–10% more accurate but slower. The 2026 best practice: do both. Realtime drives the conversation; batch overwrites it with a canonical transcript once the call is done.
This is an event-driven pipeline: s3://recordings/... upload → EventBridge → AWS Batch job submission → containerized Whisper → write transcript to S3 + ClickHouse.
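As a concrete sketch of the trigger (names here are illustrative, and the bucket must have EventBridge notifications enabled), the rule pattern and its creation via boto3 look roughly like this:

```python
# Sketch: create the EventBridge rule that fires on recording uploads.
# Bucket and rule names are assumptions for illustration.
import json
import boto3

events = boto3.client("events")

pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": ["recordings"]}},
}

events.put_rule(
    Name="recording-uploaded",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)
```

The rule's target, the Batch job submission, is sketched in the build steps below.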
Hear it before you finish reading
Talk to a live CallSphere AI voice agent in your browser — 60 seconds, no signup.
Architecture
```mermaid
flowchart LR
  Live[Live call] -->|stream| RT[Realtime STT<br/>in-call only]
  Live -->|record| S3[(S3<br/>recordings)]
  S3 -->|ObjectCreated| EB[EventBridge rule]
  EB -->|submit job| Batch[AWS Batch<br/>Whisper-large-v3 on Inferentia]
  Batch -->|transcript JSON| S3T[(S3<br/>transcripts)]
  S3T -->|trigger| Diar[Pyannote diarization]
  Diar --> Red[PII redaction]
  Red --> CH[(ClickHouse<br/>canonical transcripts)]
```
Diarization (post #5) and redaction (post #6) chain after batch ASR.
CallSphere implementation
CallSphere — 37 agents · 90+ tools · 115+ DB tables · 6 verticals, at $149 / $499 / $1499 (/pricing), with a 14-day trial and 22% affiliate program. The healthcare vertical (/industries/healthcare) records every call to S3 under a per-tenant prefix; EventBridge fires a Whisper-large-v3 batch job that produces canonical transcripts with sentiment (-1.0..1.0) and lead score (0..100). The realtime transcript stays in the agent loop. See /demo.
Build steps with code
- Configure recording — the voice agent writes mono WAV to `s3://recordings/{tenant}/{call_id}.wav`.
- Set up an EventBridge rule on `ObjectCreated` matching the prefix.
- Build a Whisper container — `whisper-large-v3-turbo` on `g6.xlarge` (L4) or `inf2` (Inferentia).
- Point an AWS Batch job definition at the ECR image and a job queue (see the submission sketch after this list).
- Write transcript JSON with timestamps + segments.
- Chain diarization + redaction as separate Lambda or Step Functions tasks.
- Sink the final transcript to ClickHouse with `is_canonical=1`.
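EventBridge can target Batch directly, but a thin Lambda between them makes the env-var plumbing explicit. A minimal sketch, assuming the queue and job definition names below (both illustrative):

```python
# Sketch: Lambda target that turns the S3 ObjectCreated event into a Batch job.
import boto3

batch = boto3.client("batch")

def handler(event, context):
    detail = event["detail"]
    bucket = detail["bucket"]["name"]
    key = detail["object"]["key"]          # recordings/{tenant}/{call_id}.wav

    # A stable job name keyed on call_id keeps retries traceable.
    call_id = key.rsplit("/", 1)[-1].removesuffix(".wav")

    batch.submit_job(
        jobName=f"whisper-{call_id}",
        jobQueue="asr-gpu-queue",                # assumption
        jobDefinition="whisper-large-v3-turbo",  # assumption
        containerOverrides={
            "environment": [
                {"name": "S3_BUCKET", "value": bucket},
                {"name": "S3_KEY", "value": key},
            ]
        },
    )
```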
```python
# whisper_job.py - runs in AWS Batch
import json
import os

import boto3
import whisper

s3 = boto3.client("s3")

# Load once per container; reused when batching multiple short calls per job.
model = whisper.load_model("large-v3-turbo")


def main():
    # Bucket/key are injected at job submission via containerOverrides.
    bucket = os.environ["S3_BUCKET"]
    key = os.environ["S3_KEY"]

    s3.download_file(bucket, key, "/tmp/audio.wav")
    result = model.transcribe("/tmp/audio.wav", word_timestamps=True)

    # Mirror the recording path: recordings/{tenant}/{call_id}.wav
    # becomes transcripts/{tenant}/{call_id}.json
    out = key.replace("recordings/", "transcripts/").replace(".wav", ".json")
    s3.put_object(
        Bucket=bucket,
        Key=out,
        Body=json.dumps(result),
        ContentType="application/json",
    )


if __name__ == "__main__":
    main()
```
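The final step after diarization and redaction is the ClickHouse sink. A minimal sketch using the clickhouse-connect client; the host, table name, and columns are assumptions about the analytics schema:

```python
# Sketch: write the canonical transcript row (schema is an assumption).
import clickhouse_connect

client = clickhouse_connect.get_client(host="clickhouse.internal")  # illustrative host

def sink_transcript(tenant: str, call_id: str, text: str) -> None:
    client.insert(
        "transcripts",
        [[tenant, call_id, text, 1]],  # is_canonical=1 marks the batch transcript
        column_names=["tenant", "call_id", "text", "is_canonical"],
    )
```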
Pitfalls
- Re-running Whisper on every retry — make jobs idempotent by keying on `call_id` (see the guard sketch after this list).
- GPU underutilized — batch multiple short calls per container with `whisper.transcribe` in a loop.
- Skipping VAD — Whisper hallucinates on silence; gate audio with VAD first.
- Mono vs. stereo — preserve the channel layout; you'll regret losing it for diarization later.
- Forgetting hold music — voicemail trees often play music; suppress it before ASR.
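For the idempotency pitfall, a cheap guard is to probe for the output object before transcribing: if a transcript already exists at the `call_id`-derived key, the retry can no-op. A sketch:

```python
# Sketch: idempotency guard keyed on the output transcript path.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def already_transcribed(bucket: str, out_key: str) -> bool:
    """True if the transcript object already exists, so a retry can skip work."""
    try:
        s3.head_object(Bucket=bucket, Key=out_key)
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "404":
            return False
        raise
```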
FAQ
**Whisper-large-v3 vs. Parakeet?** Parakeet (NVIDIA NeMo) is faster and cheaper on GPU; Whisper is more multilingual.
**GPT-4o-transcribe?** API-only, includes diarization, but more expensive. Use it when you don't want to host GPUs.
**Latency?** Batch end-to-end runs at roughly 1–3× audio duration; for a 5-min call, expect ~5–15 min on a single L4.
**HIPAA?** Self-host on AWS with a BAA; never send raw audio to public APIs without one.
**Cost?** ~$0.006 per audio minute on Inferentia for Whisper-large-v3-turbo (about $0.03 for a 5-minute call).
## Production view

This pipeline usually starts as an architecture diagram, then collides with reality in the first week of a pilot. You discover that vector store choice (ChromaDB vs. Postgres pgvector vs. managed) is not really a vector store choice — it's a latency, freshness, and ops choice. Picking wrong forces a re-platform six months in, exactly when you have customers depending on it.

## Serving stack tradeoffs

The big fork is managed (OpenAI Realtime, ElevenLabs Conversational AI) versus self-hosted on GPUs you operate. Managed wins on cold-start, model freshness, and zero-ops; self-hosted wins on unit economics past a certain conversation volume and on data residency for regulated verticals. CallSphere runs hybrid: Realtime for live calls, self-hosted Whisper + a hosted LLM for async, both routed through a Go gateway that enforces per-tenant rate limits.

Latency budgets are non-negotiable on voice. The end-to-end target is sub-800 ms ASR-to-first-token and sub-1.4 s first-audio-out; anything beyond that and turn-taking feels stilted. GPU residency in the same region as your TURN servers matters more than choosing a slightly bigger model.

Observability is the unglamorous backbone — every conversation produces logs, traces, sentiment scoring, and cost attribution piped to a per-tenant dashboard. **HIPAA + SOC 2 aligned** isolation keeps healthcare traffic separated from salon traffic at the storage layer, not just the API.

## FAQ

**Is this realistic for a small business, or is it enterprise-only?** The healthcare stack is a concrete example: FastAPI + OpenAI Realtime API + NestJS + Prisma + Postgres `healthcare_voice` schema + Twilio voice + AWS SES + JWT auth, all SOC 2 / HIPAA aligned. For this pipeline, that means you're not starting from scratch — you're configuring an agent template that has already been hardened across thousands of conversations.

**Which integrations have to be in place before launch?** Day one is integration mapping (scheduler, CRM, messaging) and prompt tuning against your top 20 real call transcripts. Days two through five are shadow-mode running, where the agent transcribes and recommends but a human still answers, so you can compare side by side. Go-live is the moment your eval pass rate clears your internal bar.

**How do we measure whether it's actually working?** The honest answer: it scales until your tool catalog gets stale. The agent is only as good as the integrations it can actually call, so the operational discipline is keeping schemas, webhooks, and fallback paths green. The platform handles the rest — observability, retries, multi-region routing — without your team owning the GPU layer.

## Talk to us

Want to see how this maps to your stack? Book a live walkthrough at [calendly.com/sagar-callsphere/new-meeting](https://calendly.com/sagar-callsphere/new-meeting), or try the vertical-specific demo at [realestate.callsphere.tech](https://realestate.callsphere.tech). 14-day trial, no credit card, pilot live in 3–5 business days.

Try CallSphere AI Voice Agents
See how AI voice agents work for your industry. Live demo available -- no signup required.