Learn Agentic AI

Agent Memory Sharing Strategies: Blackboard, Message Passing, and Shared Vector Stores

Compare three fundamental memory sharing architectures for multi-agent systems — blackboard, message passing, and shared vector stores — with implementation patterns, consistency considerations, and performance tradeoffs.

The Memory Sharing Problem

When multiple agents work together, they need to share information — intermediate results, discovered facts, decisions made, and context about the current task. How you architect this shared memory determines your system's consistency, performance, and scalability.

Three dominant patterns have emerged: the blackboard architecture (shared mutable state), message passing (explicit communication), and shared vector stores (semantic memory). Each makes different tradeoffs, and understanding when to use which pattern is critical for building reliable multi-agent systems.

Pattern 1: Blackboard Architecture

The blackboard is a shared workspace where agents read and write structured data. It originates from the Hearsay-II speech understanding system in the 1970s and remains one of the most practical patterns for collaborative problem-solving.

flowchart TD
    INPUT(["Task input"])
    SUPER["Supervisor agent<br/>plans plus monitors"]
    W1["Worker 1<br/>research"]
    W2["Worker 2<br/>code"]
    W3["Worker 3<br/>writing"]
    CRITIC{"Output meets<br/>rubric?"}
    REWORK["Rework or<br/>retry path"]
    SHARED[("Shared scratchpad<br/>and memory")]
    OUT(["Final result"])
    INPUT --> SUPER
    SUPER --> W1 --> CRITIC
    SUPER --> W2 --> CRITIC
    SUPER --> W3 --> CRITIC
    W1 --> SHARED
    W2 --> SHARED
    W3 --> SHARED
    SHARED --> SUPER
    CRITIC -->|Pass| OUT
    CRITIC -->|Fail| REWORK --> SUPER
    style SUPER fill:#4f46e5,stroke:#4338ca,color:#fff
    style CRITIC fill:#f59e0b,stroke:#d97706,color:#1f2937
    style OUT fill:#059669,stroke:#047857,color:#fff
    style SHARED fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
import asyncio
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Optional
import time

@dataclass
class BlackboardEntry:
    key: str
    value: Any
    written_by: str
    timestamp: float = field(default_factory=time.time)
    confidence: float = 1.0

class Blackboard:
    def __init__(self):
        self._data: Dict[str, BlackboardEntry] = {}
        self._subscribers: Dict[str, List[Callable]] = {}
        self._lock = asyncio.Lock()
        self._history: List[BlackboardEntry] = []

    async def write(
        self,
        key: str,
        value: Any,
        agent_id: str,
        confidence: float = 1.0,
    ):
        async with self._lock:
            entry = BlackboardEntry(
                key=key,
                value=value,
                written_by=agent_id,
                confidence=confidence,
            )
            self._data[key] = entry
            self._history.append(entry)

        # Notify subscribers outside the lock
        for callback in self._subscribers.get(key, []):
            await callback(entry)

    async def read(self, key: str) -> Optional[BlackboardEntry]:
        async with self._lock:
            return self._data.get(key)

    async def query(self, prefix: str) -> List[BlackboardEntry]:
        async with self._lock:
            return [
                entry for key, entry in self._data.items()
                if key.startswith(prefix)
            ]

    def subscribe(self, key: str, callback: Callable):
        if key not in self._subscribers:
            self._subscribers[key] = []
        self._subscribers[key].append(callback)

When to use: When agents need real-time access to a shared problem state, when the number of agents is moderate (under 20), and when you want simple read/write semantics.

Tradeoff: Easy to implement but creates tight coupling. All agents must agree on key naming conventions and data formats. Concurrent writes to the same key require careful conflict resolution.
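That conflict resolution can be made explicit as a policy function. Below is a minimal sketch, not part of the Blackboard class above: a hypothetical `resolve_write` helper that keeps whichever entry carries the higher confidence, with ties going to the newer write.

```python
import time
from typing import Optional

def resolve_write(current: Optional[dict], proposed: dict) -> dict:
    """Keep the higher-confidence entry; on a tie, the newer write wins."""
    if current is None:
        return proposed
    if current["confidence"] > proposed["confidence"]:
        return current
    return proposed

# Two agents write conflicting values for the same key.
a = {"value": "Q3 revenue up 4%", "written_by": "researcher",
     "confidence": 0.9, "timestamp": time.time()}
b = {"value": "Q3 revenue up 2%", "written_by": "summarizer",
     "confidence": 0.6, "timestamp": time.time()}

winner = resolve_write(a, b)
assert winner["written_by"] == "researcher"  # higher confidence wins
```

Wiring this into `Blackboard.write` means the lock-protected section compares the proposed entry against `self._data.get(key)` before storing, instead of overwriting unconditionally.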


Pattern 2: Message Passing

In message passing, agents communicate exclusively through explicit messages. There is no shared state — each agent maintains its own local memory and shares information by sending messages to specific agents or broadcasting to channels.

import uuid
from collections import defaultdict
from typing import Set

@dataclass
class Message:
    sender: str
    content: Any
    channel: str = "default"
    msg_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)

class MessageBroker:
    def __init__(self):
        self._queues: Dict[str, asyncio.Queue] = {}
        self._channels: Dict[str, Set[str]] = defaultdict(set)

    def register(self, agent_id: str):
        self._queues[agent_id] = asyncio.Queue()

    def subscribe_channel(self, agent_id: str, channel: str):
        self._channels[channel].add(agent_id)

    async def send_direct(self, message: Message, recipient: str):
        queue = self._queues.get(recipient)
        if queue:
            await queue.put(message)

    async def broadcast(self, message: Message):
        subscribers = self._channels.get(message.channel, set())
        for agent_id in subscribers:
            queue = self._queues.get(agent_id)
            if queue:
                await queue.put(message)

    async def receive(
        self, agent_id: str, timeout: float = 5.0
    ) -> Optional[Message]:
        queue = self._queues.get(agent_id)
        if not queue:
            return None
        try:
            return await asyncio.wait_for(queue.get(), timeout)
        except asyncio.TimeoutError:
            return None

When to use: When agents are loosely coupled, when you need audit trails of all communication, or when agents may run on different machines.

Tradeoff: No shared state means no consistency issues, but agents must explicitly request information they need. This increases message volume and latency for queries that would be instant on a blackboard.
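To make that request/response cost concrete, here is a minimal sketch, independent of the MessageBroker class above, of one agent querying another's private state over plain asyncio queues. A lookup that would be a single dict read on a blackboard becomes a full round trip through two queues.

```python
import asyncio

async def responder(inbox: asyncio.Queue, facts: dict):
    """Owns its local state; answers queries one message at a time."""
    while True:
        query_key, reply_queue = await inbox.get()
        if query_key is None:          # shutdown signal
            return
        await reply_queue.put(facts.get(query_key))

async def main():
    inbox = asyncio.Queue()
    facts = {"status": "research complete"}   # responder's private memory
    task = asyncio.create_task(responder(inbox, facts))

    # The requester cannot read `facts` directly: it sends a query
    # message carrying a reply queue and waits for the answer.
    reply_queue = asyncio.Queue()
    await inbox.put(("status", reply_queue))
    answer = await reply_queue.get()

    await inbox.put((None, None))      # stop the responder
    await task
    return answer

print(asyncio.run(main()))  # research complete
```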

Pattern 3: Shared Vector Store

A shared vector store gives agents semantic memory — they can store and retrieve information based on meaning rather than exact keys. This is especially powerful when agents produce unstructured knowledge (research findings, conversation summaries, analysis results).

from typing import Tuple
import numpy as np

class SharedVectorMemory:
    def __init__(self, embedding_dim: int = 1536):
        self.embedding_dim = embedding_dim
        self._entries: List[Dict] = []
        self._embeddings: List[np.ndarray] = []
        self._lock = asyncio.Lock()

    async def store(
        self,
        text: str,
        embedding: np.ndarray,
        agent_id: str,
        metadata: Optional[Dict] = None,
    ):
        async with self._lock:
            self._entries.append({
                "text": text,
                "agent_id": agent_id,
                "metadata": metadata or {},
                "timestamp": time.time(),
            })
            self._embeddings.append(embedding)

    async def search(
        self,
        query_embedding: np.ndarray,
        top_k: int = 5,
        agent_filter: Optional[str] = None,
    ) -> List[Tuple[Dict, float]]:
        async with self._lock:
            if not self._embeddings:
                return []

            matrix = np.array(self._embeddings)
            # Cosine similarity; the small floor guards against zero-norm vectors
            norms = np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_embedding)
            similarities = np.dot(matrix, query_embedding) / np.maximum(norms, 1e-10)

            results = []
            for idx in np.argsort(similarities)[::-1]:
                entry = self._entries[idx]
                if agent_filter and entry["agent_id"] != agent_filter:
                    continue
                results.append((entry, float(similarities[idx])))
                if len(results) >= top_k:
                    break

            return results

When to use: When agents produce unstructured knowledge, when you need fuzzy retrieval (finding related information rather than exact lookups), or when building research and analysis systems.


Tradeoff: Higher latency than blackboard reads due to embedding computation and similarity search. Requires an embedding model. Results are approximate — you may miss relevant entries or surface irrelevant ones.
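The ranking math inside search() can be exercised standalone with toy vectors. In a real system the embeddings would come from a model; the 4-dimensional vectors and entry texts below are made up for illustration.

```python
import numpy as np

# Toy 4-dim "embeddings" standing in for real model output.
entries = ["pricing research notes", "deployment checklist",
           "customer interview summary"]
embeddings = np.array([
    [0.9, 0.1, 0.0, 0.1],
    [0.0, 0.8, 0.5, 0.1],
    [0.7, 0.2, 0.1, 0.6],
])

def cosine_top_k(query: np.ndarray, matrix: np.ndarray, k: int):
    """Rank stored entries by cosine similarity to the query vector."""
    sims = matrix @ query / (np.linalg.norm(matrix, axis=1)
                             * np.linalg.norm(query))
    order = np.argsort(sims)[::-1][:k]
    return [(entries[i], float(sims[i])) for i in order]

query = np.array([1.0, 0.1, 0.0, 0.2])   # e.g. "what did pricing research find?"
top = cosine_top_k(query, embeddings, k=2)
assert top[0][0] == "pricing research notes"
```

The approximate nature of the pattern is visible here: "customer interview summary" also scores moderately against the pricing query, which is exactly the kind of near-miss a key-based blackboard lookup would never surface.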

Choosing the Right Pattern

| Criteria | Blackboard | Message Passing | Vector Store |
|---|---|---|---|
| Latency | Low (direct read) | Medium (async) | Higher (similarity search) |
| Consistency | Needs locking | No shared state | Eventually consistent |
| Scalability | Moderate | High | High |
| Query type | Exact key | Direct/broadcast | Semantic similarity |
| Best for | Structured collaboration | Decoupled agents | Knowledge retrieval |

In practice, production systems often combine patterns. Use a blackboard for structured task state, message passing for coordination signals, and a vector store for accumulated knowledge.

FAQ

Can I use Redis as a blackboard?

Yes, Redis is an excellent backing store for a blackboard. Use Redis hashes for structured entries, pub/sub for subscriber notifications, and sorted sets for time-ordered history. Redis also gives you atomic operations (SETNX, WATCH/MULTI) for conflict resolution on concurrent writes.

How do I handle stale data in shared memory?

Add TTL (time-to-live) to every entry. For blackboards, agents should check the timestamp before trusting a value. For vector stores, include a recency bias in your similarity scoring — multiply the cosine similarity by a time-decay factor. For message passing, staleness is not an issue since each message is consumed once.
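One concrete form of that time-decay factor is exponential decay with a configurable half-life. The `half_life_s` parameter below is an assumed tuning knob, not something prescribed by the patterns above.

```python
import time

def recency_weighted(similarity: float, stored_at: float,
                     half_life_s: float = 3600.0) -> float:
    """Decay cosine similarity by entry age: after one half-life the
    score is halved, after two it is quartered, and so on."""
    age = time.time() - stored_at
    return similarity * 0.5 ** (age / half_life_s)

now = time.time()
fresh = recency_weighted(0.80, now)                 # just written
stale = recency_weighted(0.95, now - 2 * 3600)      # two hours old, ~0.24
assert fresh > stale   # recency outweighs the higher raw similarity
```

With a one-hour half-life, a two-hour-old entry needs roughly four times the raw similarity of a fresh one to rank equally; shrink or grow `half_life_s` to match how quickly your agents' facts go stale.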

Should agents have private memory in addition to shared memory?

Always. Agents should maintain a private working memory for in-progress reasoning, intermediate calculations, and agent-specific context. Only publish to shared memory when you have a result, decision, or fact that other agents need. This reduces noise and contention in the shared space.
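The split looks like this in miniature. A plain dict stands in for the shared blackboard here, and `ResearchAgent` is a hypothetical illustration, not a class from the patterns above.

```python
class ResearchAgent:
    def __init__(self, agent_id: str, shared: dict):
        self.agent_id = agent_id
        self.shared = shared      # stand-in for the shared blackboard
        self._scratch = []        # private working memory, never published

    def work(self, sources: list):
        for s in sources:
            # Intermediate reasoning stays in private memory.
            self._scratch.append(f"note on {s}")
        # Only the finished result reaches the shared space.
        self.shared[f"findings/{self.agent_id}"] = (
            f"{len(self._scratch)} sources reviewed"
        )

shared = {}
agent = ResearchAgent("researcher-1", shared)
agent.work(["paper A", "paper B"])
assert list(shared) == ["findings/researcher-1"]  # scratch notes never leak
```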


#AgentMemory #BlackboardArchitecture #VectorStores #MessagePassing #MultiAgentSystems #AgenticAI #PythonAI #SharedMemory
