OpenTelemetry for AI Agents: Distributed Tracing Across Agent Workflows
Learn how to instrument AI agent systems with OpenTelemetry for end-to-end distributed tracing, including span creation, custom attributes for LLM calls, and trace context propagation across multi-agent pipelines.
Why Distributed Tracing Matters for AI Agents
When an AI agent processes a user request, the work fans out across multiple steps: prompt assembly, LLM inference, tool calls, memory retrieval, and response formatting. In a multi-agent system, a single user query might trigger a triage agent, a specialist agent, and a summarizer — each making its own LLM calls and tool invocations. Without tracing, debugging a slow or incorrect response means reading logs line by line and guessing which step caused the problem.
OpenTelemetry (OTEL) solves this by assigning a unique trace ID to each request and creating hierarchical spans for every operation. You can see exactly how long each step took, what data flowed between agents, and where failures occurred — all in a single trace view.
Setting Up the OTEL SDK
Install the core packages and an exporter. Jaeger and Zipkin are popular choices for local development, while cloud providers offer managed backends like AWS X-Ray, Google Cloud Trace, or Grafana Tempo.
[Architecture diagram: the agent or API emits telemetry through the OTel SDK (using the GenAI semantic conventions) into an OTel Collector, which routes traces to a backend like Tempo or Honeycomb, metrics to Prometheus, and logs to Loki or the ELK stack; Grafana dashboards and alerts sit on top of all three and page the on-call engineer.]
# pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
resource = Resource.create({
    "service.name": "agent-service",
    "service.version": "1.2.0",
    "deployment.environment": "production",
})
provider = TracerProvider(resource=resource)
exporter = OTLPSpanExporter(endpoint="http://otel-collector:4317")
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agent.core")
The Resource tags every span with service metadata so you can filter traces by service name or environment in your backend.
Instrumenting Agent Workflow Steps
Wrap each logical step of your agent pipeline in a span. Use attributes to capture LLM-specific metadata like model name, token counts, and prompt length.
import time

async def run_agent(user_message: str, context: dict):
    with tracer.start_as_current_span("agent.run") as root_span:
        root_span.set_attribute("user.message_length", len(user_message))
        root_span.set_attribute("agent.name", "support-agent")

        # Step 1: Retrieve context from memory
        with tracer.start_as_current_span("agent.memory_retrieval") as mem_span:
            memories = await retrieve_memories(user_message)
            mem_span.set_attribute("memory.results_count", len(memories))

        # Step 2: Call the LLM
        with tracer.start_as_current_span("agent.llm_call") as llm_span:
            llm_span.set_attribute("llm.model", "gpt-4o")
            llm_span.set_attribute("llm.prompt_tokens", count_tokens(user_message))
            start = time.perf_counter()
            response = await call_llm(user_message, memories)
            latency_ms = (time.perf_counter() - start) * 1000
            llm_span.set_attribute("llm.completion_tokens", response.usage.completion_tokens)
            llm_span.set_attribute("llm.latency_ms", latency_ms)

        # Step 3: Execute tool calls, if any
        if response.tool_calls:
            with tracer.start_as_current_span("agent.tool_execution") as tool_span:
                tool_span.set_attribute("tools.count", len(response.tool_calls))
                # In a full agent loop, these results feed a follow-up LLM call
                results = await execute_tools(response.tool_calls)

        return response.content
Propagating Trace Context Across Services
When your triage agent hands off to a specialist agent running in a separate service, the trace context must travel with the request. OTEL uses the W3C Trace Context standard — inject the context into HTTP headers on the sender side and extract it on the receiver side.
from opentelemetry.propagate import inject, extract
from opentelemetry.context import attach, detach
import httpx

async def call_specialist_agent(payload: dict):
    headers = {}
    inject(headers)  # Injects the W3C traceparent header
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            "http://specialist-agent:8001/run",
            json=payload,
            headers=headers,
        )
        return resp.json()

# On the specialist agent's side (FastAPI)
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/run")
async def handle_run(request: Request, body: dict):
    ctx = extract(dict(request.headers))
    token = attach(ctx)
    try:
        with tracer.start_as_current_span("specialist.run"):
            result = await process(body)
            return result
    finally:
        detach(token)
This links the specialist agent's spans as children of the triage agent's span, giving you a complete trace across both services.
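Under the hood, `inject()` writes a single `traceparent` header in the W3C format `version-traceid-spanid-flags`. A plain-Python sketch of what the receiving side sees (the helper name is ours, purely illustrative; OTEL's `extract()` does this and more for you):

```python
def parse_traceparent(header: str) -> dict:
    """Split a W3C traceparent header: version-traceid-spanid-flags."""
    version, trace_id, span_id, flags = header.split("-")
    assert len(trace_id) == 32 and len(span_id) == 16, "malformed traceparent"
    return {
        "version": version,
        "trace_id": trace_id,        # shared by every span in the request
        "parent_span_id": span_id,   # the caller's span becomes the parent
        "sampled": int(flags, 16) & 0x01 == 1,
    }

ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
# ctx["sampled"] is True; both services report the same ctx["trace_id"]
```

Because both services emit spans carrying the same trace ID, the backend can stitch them into one tree.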
Adding Semantic Conventions for LLM Spans
The OpenTelemetry community is developing semantic conventions for generative AI (the gen_ai.* attribute namespace). These conventions are still experimental, so attribute names may shift between releases, but adopting them early makes your traces compatible with tooling that understands LLM workloads.
LLM_ATTRIBUTES = {
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.request.temperature": 0.3,
    "gen_ai.request.max_tokens": 2048,
    "gen_ai.response.finish_reason": "stop",
    "gen_ai.usage.prompt_tokens": 1250,
    "gen_ai.usage.completion_tokens": 340,
}

with tracer.start_as_current_span("gen_ai.chat") as span:
    for key, value in LLM_ATTRIBUTES.items():
        span.set_attribute(key, value)
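In practice you would populate these from the live response rather than a static dict. A small helper sketch; the `response.choices` / `response.usage` shape assumes an OpenAI-style client object and is not part of the conventions themselves:

```python
def set_genai_attributes(span, model: str, response) -> None:
    """Copy gen_ai.* fields from an LLM response onto an open span.

    The response accessors below assume an OpenAI-style client object;
    adapt them for other providers.
    """
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", model)
    span.set_attribute("gen_ai.response.finish_reason", response.choices[0].finish_reason)
    span.set_attribute("gen_ai.usage.prompt_tokens", response.usage.prompt_tokens)
    span.set_attribute("gen_ai.usage.completion_tokens", response.usage.completion_tokens)
```

Centralizing the mapping in one helper keeps attribute names consistent across every LLM span in the service.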
FAQ
How much overhead does OpenTelemetry add to agent latency?
The BatchSpanProcessor buffers spans in memory and exports them in batches from a background thread, so the hot path only pays for span creation and attribute writes, typically well under a millisecond per span. That overhead is negligible compared to LLM inference times, which typically range from 200 milliseconds to several seconds.
Should I create a span for every LLM token streamed?
No. Creating a span per token would generate thousands of spans per request and overwhelm your tracing backend. Instead, create one span per LLM call and record token counts as attributes. If you need token-level timing, use events within the span rather than child spans.
Can I use OTEL with agent frameworks like LangChain or CrewAI?
Yes. LangChain has a built-in OTEL callback handler, and CrewAI can be instrumented by wrapping task execution methods. For frameworks without native support, wrap the key entry points — agent run, tool call, and LLM call — with manual spans as shown above.