AI Engineering

Build a Realtime AI Agent with Convex (Persistent Threads, 2026)

Convex's Agent component gives you persistent threads, live-updating message history, and reactive tool-call progress — all on top of websocket queries.

TL;DR — Convex's @convex-dev/agent component handles thread state, streaming text deltas over WebSocket, and reactive tool-call subscriptions out of the box. Every connected client sees the same agent in real time without manual SSE plumbing.

What you'll build

A multi-user chat where Alice asks an agent something, Bob (on another device) sees the answer stream in live, and a side panel shows tool-call progress as it happens — all powered by Convex's reactive queries.

Prerequisites

  1. `convex@^1.18`, `@convex-dev/agent@^0.4`, `@ai-sdk/openai@^1`.
  2. pnpm dlx convex dev linked to a Convex deployment.
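Assuming pnpm (swap in npm or yarn as needed), setup looks like:

```shell
# Install the runtime dependencies pinned above
pnpm add convex @convex-dev/agent @ai-sdk/openai

# Start the dev deployment; this also runs codegen for _generated/
pnpm dlx convex dev
```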

Architecture

```mermaid
flowchart LR
  C1[Alice] -- mutation --> CV[Convex]
  CV -- live query --> C2[Bob]
  CV --> AG[agent.streamText]
  AG --> OA[OpenAI]
  AG -- delta + tool_call --> CV
  CV -- reactive --> C1 & C2
```

Step 1 — Wire the Agent component

```ts
// convex/convex.config.ts
import { defineApp } from "convex/server";
import agent from "@convex-dev/agent/convex.config";

const app = defineApp();
app.use(agent);
export default app;
```

Step 2 — Define the agent

```ts
// convex/myAgent.ts
import { Agent } from "@convex-dev/agent";
import { openai } from "@ai-sdk/openai";
import { components } from "./_generated/api";

export const support = new Agent(components.agent, {
  name: "support",
  chat: openai.chat("gpt-4o-mini"),
  textEmbedding: openai.embedding("text-embedding-3-small"),
  instructions: "You are a friendly support agent.",
  tools: { lookupOrder: { /* tool def */ } },
});
```


Step 3 — Mutation to start a thread

Convex mutations are deterministic and cannot reach the network, so the mutation only creates (or continues) the thread and schedules an internal action to run the LLM stream:

```ts
// convex/myAgent.ts (continued)
import { mutation, internalAction } from "./_generated/server";
import { internal } from "./_generated/api";
import { v } from "convex/values";
import { support } from "./myAgent";

export const ask = mutation({
  args: { threadId: v.optional(v.string()), text: v.string() },
  handler: async (ctx, { threadId, text }) => {
    const id = threadId ?? (await support.createThread(ctx)).threadId;
    // Hand off the network-bound LLM call to an action.
    await ctx.scheduler.runAfter(0, internal.myAgent.stream, { threadId: id, text });
    return id;
  },
});

export const stream = internalAction({
  args: { threadId: v.string(), text: v.string() },
  handler: async (ctx, { threadId, text }) => {
    const { thread } = await support.continueThread(ctx, { threadId });
    await thread.streamText({ prompt: text }, { saveStreamDeltas: true });
  },
});
```

Step 4 — Reactive query

```ts
// convex/myAgent.ts (continued)
import { query } from "./_generated/server";
import { v } from "convex/values";
import { components } from "./_generated/api";

export const messages = query({
  args: { threadId: v.string() },
  handler: (ctx, { threadId }) =>
    ctx.runQuery(components.agent.messages.list, { threadId }),
});
```

Step 5 — React UI

```tsx
"use client";
import { useQuery, useMutation } from "convex/react";
import { api } from "@/convex/_generated/api";
import { useState } from "react";

export function Chat({ threadId }: { threadId: string }) {
  const msgs = useQuery(api.myAgent.messages, { threadId }) ?? [];
  const ask = useMutation(api.myAgent.ask);
  const [text, setText] = useState("");
  return (
    <>
      {msgs.map((m) => (
        <div key={m._id}>
          {m.role}: {m.text}
        </div>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          ask({ threadId, text });
          setText("");
        }}
      >
        <input value={text} onChange={(e) => setText(e.target.value)} />
      </form>
    </>
  );
}
```

Step 6 — Stream tool progress

Tool calls write reactive rows; subscribe to components.agent.tools.list({ threadId }) and render a live progress feed.
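As a framework-free sketch, deriving a progress feed from those reactive rows is a pure fold over the latest query snapshot. The row shape here (`toolName`, `status`) is an assumption for illustration, not the component's documented schema:

```typescript
// Hypothetical shape of a reactive tool-call row; the real
// @convex-dev/agent schema may differ.
type ToolRow = {
  toolName: string;
  status: "pending" | "running" | "done" | "error";
};

const labels: Record<ToolRow["status"], string> = {
  pending: "queued",
  running: "running",
  done: "finished",
  error: "failed",
};

// Derive a human-readable feed from the latest query snapshot;
// in the UI this re-runs on every reactive update.
export function progressFeed(rows: ToolRow[]): string[] {
  return rows.map((r) => `${r.toolName}: ${labels[r.status]}`);
}
```

Because the query re-runs reactively, the feed component never polls — it simply re-renders with the new snapshot.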

Pitfalls

  • Function timeouts: Convex actions cap at 10 minutes; long agent loops should chunk via scheduler.runAfter.
  • Vector index dimension: Set textEmbedding to match your embedding dim or RAG breaks silently.
  • Reactivity cost: Every delta is a write — for high-volume voice transcripts, batch deltas in 250ms windows.
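The 250 ms batching idea from the last pitfall can be sketched as a small accumulator. The class and callback names are illustrative, not part of the agent API:

```typescript
// Accumulates streaming text deltas and flushes them as one
// write per window, instead of one write per delta.
export class DeltaBatcher {
  private buf: string[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private flushFn: (chunk: string) => void, // e.g. a Convex mutation call
    private windowMs = 250,
  ) {}

  push(delta: string) {
    this.buf.push(delta);
    // Arm the timer on the first delta of a window only.
    if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.windowMs);
    }
  }

  flush() {
    if (this.timer) clearTimeout(this.timer);
    this.timer = null;
    if (this.buf.length === 0) return;
    const chunk = this.buf.join("");
    this.buf = [];
    this.flushFn(chunk);
  }
}
```

Call `flush()` once more when the stream ends so the tail of the transcript isn't left sitting in the buffer.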


FAQ

Convex pricing? Free tier ~1M function calls/month; paid starts at $25/mo + usage.

Self-hostable? Yes — the Convex backend was open-sourced in 2024 under an FSL license that converts to Apache 2.0.

WebRTC voice support? Convex doesn't ship media transport; pair with LiveKit/OpenAI Realtime for audio.

Is the Agent component stable? It's under active development; these examples pin @convex-dev/agent@^0.4 (pre-1.0), so expect API changes between minor versions.
