
Shared State in Multi-Agent Systems: Coordinating Data Between Agents

Master shared state management in multi-agent systems using the OpenAI Agents SDK's RunContext, including shared context objects, state mutation patterns, race conditions, and consistency strategies.

The State Problem in Multi-Agent Systems

When a single agent handles a conversation, state management is straightforward — everything lives in the conversation history. But when multiple agents collaborate, they often need to share data that does not belong in the conversation. A customer ID looked up by the triage agent, a shopping cart being built by the product agent, authentication status verified by the auth agent — this operational state must flow between agents without being lost during handoffs.

The conversation history carries the dialogue, but structured data like user profiles, accumulated results, and workflow progress needs a different mechanism. This is where shared state comes in.

RunContext: The SDK's Shared State Mechanism

The OpenAI Agents SDK provides RunContext — a typed context object that is available to all agents and tools within a single run. You define a context class, pass an instance to the Runner, and every tool function can access and modify it:

```mermaid
flowchart TD
    INPUT(["Task input"])
    SUPER["Supervisor agent<br/>plans plus monitors"]
    W1["Worker 1<br/>research"]
    W2["Worker 2<br/>code"]
    W3["Worker 3<br/>writing"]
    CRITIC{"Output meets<br/>rubric?"}
    REWORK["Rework or<br/>retry path"]
    SHARED[("Shared scratchpad<br/>and memory")]
    OUT(["Final result"])
    INPUT --> SUPER
    SUPER --> W1 --> CRITIC
    SUPER --> W2 --> CRITIC
    SUPER --> W3 --> CRITIC
    W1 --> SHARED
    W2 --> SHARED
    W3 --> SHARED
    SHARED --> SUPER
    CRITIC -->|Pass| OUT
    CRITIC -->|Fail| REWORK --> SUPER
    style SUPER fill:#4f46e5,stroke:#4338ca,color:#fff
    style CRITIC fill:#f59e0b,stroke:#d97706,color:#1f2937
    style OUT fill:#059669,stroke:#047857,color:#fff
    style SHARED fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
```

from dataclasses import dataclass, field
from agents import Agent, Runner, RunContextWrapper, function_tool

@dataclass
class CustomerContext:
    customer_id: str = ""
    customer_name: str = ""
    subscription_tier: str = ""
    interaction_notes: list[str] = field(default_factory=list)

@function_tool
def lookup_customer(
    ctx: RunContextWrapper[CustomerContext],
    email: str,
) -> str:
    """Look up a customer by email and store their info in context."""
    # Simulate database lookup
    ctx.context.customer_id = "cust_12345"
    ctx.context.customer_name = "Alice Johnson"
    ctx.context.subscription_tier = "enterprise"
    return f"Found customer: {ctx.context.customer_name} ({ctx.context.subscription_tier})"

@function_tool
def add_interaction_note(
    ctx: RunContextWrapper[CustomerContext],
    note: str,
) -> str:
    """Add a note about the current interaction."""
    ctx.context.interaction_notes.append(note)
    return f"Note added. Total notes: {len(ctx.context.interaction_notes)}"

@function_tool
def get_customer_summary(
    ctx: RunContextWrapper[CustomerContext],
) -> str:
    """Return a summary of the current customer context."""
    c = ctx.context
    notes = "; ".join(c.interaction_notes) if c.interaction_notes else "None"
    return f"Customer: {c.customer_name} | Tier: {c.subscription_tier} | Notes: {notes}"

Now multiple agents can share this context:

support_agent = Agent(
    name="Support Agent",
    instructions="""Help the customer with their issue. Use
    get_customer_summary to understand who you are helping.
    Add interaction notes as you work.""",
    tools=[get_customer_summary, add_interaction_note],
)

auth_agent = Agent(
    name="Auth Agent",
    instructions="""Look up the customer by email, then hand off
    to the Support Agent.""",
    tools=[lookup_customer],
    handoffs=[support_agent],
)

When the auth agent calls lookup_customer, it populates the shared context. When the support agent later calls get_customer_summary, it reads the same context object and sees the data the auth agent stored.

Running with Context

Pass the context instance when starting the run:

from agents import Runner

context = CustomerContext()

result = Runner.run_sync(
    auth_agent,
    "My email is [email protected] and my login is broken",
    context=context,
)

# After the run, context has been populated
print(context.customer_id)       # "cust_12345"
print(context.interaction_notes)  # ["...notes from the support agent..."]

The context is mutable and persists throughout the entire run, across all agent handoffs. This means data set by the first agent is available to the fifth agent without any explicit passing.

Designing Your Context Object

A well-designed context object serves as the "shared memory" for the agent team. Here are principles for structuring it:

Group by domain, not by agent. Do not create auth_agent_data and support_agent_data fields. Instead, model the domain: customer, order, interaction. Any agent that needs customer data reads from the same customer field.

Use typed fields, not dictionaries. A dataclass with explicit fields is self-documenting and catches errors at development time. Avoid metadata: dict catch-all fields.
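
Combining these two principles, a domain-grouped, fully typed context might look like the following sketch (the `Customer` and `Order` field names are illustrative, not from the SDK):

```python
from dataclasses import dataclass, field

@dataclass
class Customer:
    customer_id: str = ""
    name: str = ""
    tier: str = ""

@dataclass
class Order:
    order_id: str = ""
    items: list[str] = field(default_factory=list)

@dataclass
class SupportContext:
    # Grouped by domain: any agent that needs customer data
    # reads the same `customer` field, regardless of which
    # agent populated it.
    customer: Customer = field(default_factory=Customer)
    order: Order = field(default_factory=Order)
    interaction_notes: list[str] = field(default_factory=list)
```

Nested dataclasses keep each domain's fields together and let type checkers catch a misspelled field name that a `dict` would silently accept.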

Track workflow state explicitly. If your multi-agent workflow has phases, track the current phase in the context:

@dataclass
class WorkflowContext:
    phase: str = "intake"  # intake -> research -> resolution -> closure
    customer_id: str = ""
    issue_category: str = ""
    research_findings: list[str] = field(default_factory=list)
    resolution_applied: str = ""
    satisfaction_score: int = 0

Agents can check the phase before acting. The research agent verifies that phase == "research" before proceeding. This prevents agents from acting out of order.
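
A minimal sketch of that guard, using a plain function in place of a `@function_tool` so the pattern is visible without the SDK (the refusal message wording is an assumption):

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowContext:
    phase: str = "intake"  # intake -> research -> resolution -> closure
    research_findings: list[str] = field(default_factory=list)

def record_research_finding(ctx: WorkflowContext, finding: str) -> str:
    """Record a finding, but only during the research phase."""
    # Guard: refuse to act if the workflow is in the wrong phase.
    if ctx.phase != "research":
        return f"Cannot record findings during the '{ctx.phase}' phase."
    ctx.research_findings.append(finding)
    return f"Recorded. Total findings: {len(ctx.research_findings)}"
```

Returning a refusal string (rather than raising) lets the model see why the call failed and adjust, instead of crashing the run.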

Handling Concurrent Access

In a synchronous single-run scenario, race conditions are not a concern because only one agent is active at a time. But if you build a system where multiple agents process different parts of a request concurrently (using asyncio or parallel tool calls), concurrent writes to the shared context can cause problems.

The safest pattern is to give each concurrent agent its own section of the context:

@dataclass
class ParallelResearchContext:
    # Each researcher writes to its own field
    web_findings: str = ""
    database_findings: str = ""
    api_findings: str = ""

    # Only the orchestrator writes to the final report
    final_report: str = ""

This eliminates write conflicts because no two agents write to the same field. The orchestrator reads all fields after the parallel phase completes and writes the final report.
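
A runnable sketch of this fan-out/fan-in shape, with simulated researcher coroutines standing in for concurrent agent runs (the findings strings are placeholder data):

```python
import asyncio
from dataclasses import dataclass

@dataclass
class ParallelResearchContext:
    web_findings: str = ""
    database_findings: str = ""
    api_findings: str = ""
    final_report: str = ""

# Each coroutine writes only to its own field.
async def web_researcher(ctx: ParallelResearchContext) -> None:
    ctx.web_findings = "web: 3 relevant articles"

async def db_researcher(ctx: ParallelResearchContext) -> None:
    ctx.database_findings = "db: 2 matching records"

async def api_researcher(ctx: ParallelResearchContext) -> None:
    ctx.api_findings = "api: status data fetched"

async def run_parallel_research(ctx: ParallelResearchContext) -> None:
    # Fan out: no two tasks touch the same field, so no write conflicts.
    await asyncio.gather(
        web_researcher(ctx), db_researcher(ctx), api_researcher(ctx)
    )
    # Fan in: only the orchestrator writes the final report,
    # after the parallel phase has fully completed.
    ctx.final_report = " | ".join(
        [ctx.web_findings, ctx.database_findings, ctx.api_findings]
    )

ctx = ParallelResearchContext()
asyncio.run(run_parallel_research(ctx))
```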

For scenarios where concurrent writes to the same field are unavoidable, use a thread-safe structure:

import threading
from dataclasses import dataclass, field

@dataclass
class ThreadSafeContext:
    _lock: threading.Lock = field(default_factory=threading.Lock)
    _findings: list[str] = field(default_factory=list)

    def add_finding(self, finding: str):
        with self._lock:
            self._findings.append(finding)

    def get_findings(self) -> list[str]:
        with self._lock:
            return list(self._findings)

Context vs. Conversation History

A common mistake is to store everything in the conversation history by having agents emit verbose messages. This wastes context window tokens and creates noise. Use context for structured operational data and the conversation history for the dialogue:

| Data type | Store in |
| --- | --- |
| Customer ID, name, tier | RunContext |
| Shopping cart items | RunContext |
| Workflow phase | RunContext |
| What the user said | Conversation history |
| Agent explanations to the user | Conversation history |
| Tool call results (visible) | Conversation history |

Persisting Context Beyond a Single Run

RunContext lives for the duration of a single Runner.run() call. If your application spans multiple runs (for example, a chat session with multiple user messages), you need to persist the context between runs:

import json

def save_context(context: CustomerContext) -> str:
    return json.dumps({
        "customer_id": context.customer_id,
        "customer_name": context.customer_name,
        "subscription_tier": context.subscription_tier,
        "interaction_notes": context.interaction_notes,
    })

def load_context(data: str) -> CustomerContext:
    d = json.loads(data)
    return CustomerContext(**d)

Store the serialized context in your session store (Redis, database, or in-memory cache) and reload it for each subsequent run.
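
Putting it together, a multi-run session loop might look like this sketch. It restates the `CustomerContext` dataclass for self-containment and uses `dataclasses.asdict` to avoid listing fields by hand; `SESSION_STORE` is an in-memory stand-in for Redis or a database, and the `Runner.run_sync` call would go where the comment indicates:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class CustomerContext:
    customer_id: str = ""
    customer_name: str = ""
    subscription_tier: str = ""
    interaction_notes: list[str] = field(default_factory=list)

SESSION_STORE: dict[str, str] = {}  # stand-in for Redis / a database

def save_session(session_id: str, context: CustomerContext) -> None:
    # asdict serializes every field, so new fields need no code changes
    SESSION_STORE[session_id] = json.dumps(asdict(context))

def load_session(session_id: str) -> CustomerContext:
    data = SESSION_STORE.get(session_id)
    return CustomerContext(**json.loads(data)) if data else CustomerContext()

# First user message: run populates the context
ctx = load_session("sess_1")
# ... Runner.run_sync(auth_agent, user_message, context=ctx) ...
ctx.customer_id = "cust_12345"
ctx.interaction_notes.append("login issue reported")
save_session("sess_1", ctx)

# Later user message in the same session: earlier data is still there
ctx2 = load_session("sess_1")
```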

FAQ

Can different agents see different parts of the context?

The SDK gives all agents and tools access to the full RunContext object. If you need to restrict access, implement it at the tool level — only provide certain tools to certain agents, and only those tools read/write specific context fields.
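
One way to sketch that tool-level restriction, again with plain functions standing in for `@function_tool` and a trimmed restatement of the context class:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerContext:
    customer_name: str = ""
    interaction_notes: list[str] = field(default_factory=list)

# Read-only tool: safe to give to any agent.
def get_customer_name(ctx: CustomerContext) -> str:
    return ctx.customer_name or "unknown"

# Write tool: only handed to agents allowed to record notes.
def add_note(ctx: CustomerContext, note: str) -> str:
    ctx.interaction_notes.append(note)
    return f"Notes: {len(ctx.interaction_notes)}"

# Per-agent tool lists are the access-control boundary:
# an agent that never receives add_note can never write notes.
READONLY_TOOLS = [get_customer_name]
TRUSTED_TOOLS = [get_customer_name, add_note]
```

The context itself is fully visible, but an agent can only touch the fields its tools touch.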

What is the maximum size for a RunContext object?

There is no hard limit imposed by the SDK. The context is a Python object in memory, so the limit is your server's RAM. However, keep the context lean. If you are storing megabytes of data in the context, you should be storing it in a database and keeping only references in the context.

Should I pass context through tool outputs or through RunContext?

Use RunContext for structured data that multiple agents need across the workflow. Use tool outputs for data that only the current agent needs to see in the conversation. If in doubt, ask: "Will another agent need this data later?" If yes, put it in RunContext.


#SharedState #MultiAgentSystems #OpenAIAgentsSDK #RunContext #StateManagement #AgenticAI #LearnAI #AIEngineering
