Learn Agentic AI

Building a Content Publishing Agent: Draft, Review, Edit, and Publish Pipeline

Create a multi-stage content publishing agent that drafts articles, routes them through AI reviewer agents, tracks versions, manages edits, and publishes to a CMS via API.

The Content Publishing Challenge

Publishing content involves multiple stages: drafting, review, editing, and final publication. In traditional workflows, each stage involves different people and tools, with content getting lost in email threads and shared documents. An AI-powered publishing agent automates the pipeline while maintaining quality through multi-agent review.

The architecture uses specialized agents for each stage — a drafter that generates content, reviewers that check quality from different angles, an editor that incorporates feedback, and a publisher that pushes to the CMS.

Data Model for the Pipeline

First, define the content artifact as it flows through stages. The overall flow looks like this:

flowchart LR
    BRIEF(["Article brief"])
    DRAFT["Drafter<br/>agent"]
    REVIEW["Reviewer<br/>agents"]
    GATE{"All<br/>approved?"}
    EDIT["Editor<br/>agent"]
    CMS["Publish<br/>to CMS"]
    OUT(["Published article"])
    BRIEF --> DRAFT --> REVIEW --> GATE
    GATE -->|Yes| CMS --> OUT
    GATE -->|No| EDIT --> REVIEW
    style DRAFT fill:#4f46e5,stroke:#4338ca,color:#fff
    style GATE fill:#f59e0b,stroke:#d97706,color:#1f2937
    style CMS fill:#059669,stroke:#047857,color:#fff
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Any
import uuid

class ContentStatus(Enum):
    DRAFT = "draft"
    IN_REVIEW = "in_review"
    REVISION_NEEDED = "revision_needed"
    APPROVED = "approved"
    PUBLISHED = "published"

@dataclass
class ContentVersion:
    version: int
    content: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    created_by: str = ""
    changes_summary: str = ""

@dataclass
class ReviewFeedback:
    reviewer: str
    approved: bool
    comments: list[str] = field(default_factory=list)
    suggestions: list[str] = field(default_factory=list)
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ContentArticle:
    article_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    title: str = ""
    topic: str = ""
    target_audience: str = ""
    status: ContentStatus = ContentStatus.DRAFT
    versions: list[ContentVersion] = field(default_factory=list)
    reviews: list[ReviewFeedback] = field(default_factory=list)
    metadata: dict[str, Any] = field(default_factory=dict)

    @property
    def current_version(self) -> ContentVersion | None:
        return self.versions[-1] if self.versions else None

    def add_version(self, content: str, author: str, summary: str):
        v = ContentVersion(
            version=len(self.versions) + 1,
            content=content,
            created_by=author,
            changes_summary=summary,
        )
        self.versions.append(v)
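
To make the versioning behavior concrete, here is a trimmed-down, standalone sketch of the dataclasses above (timestamps and review fields omitted) showing that each edit appends a new version and `current_version` always points at the latest one:

```python
from dataclasses import dataclass, field

@dataclass
class ContentVersion:
    version: int
    content: str
    created_by: str = ""
    changes_summary: str = ""

@dataclass
class ContentArticle:
    title: str = ""
    versions: list = field(default_factory=list)

    @property
    def current_version(self):
        # Latest version, or None before the first draft exists
        return self.versions[-1] if self.versions else None

    def add_version(self, content, author, summary):
        self.versions.append(
            ContentVersion(len(self.versions) + 1, content, author, summary)
        )

article = ContentArticle(title="Intro to Agents")
article.add_version("First draft.", "drafter_agent", "Initial draft")
article.add_version("Revised draft.", "editor_agent", "Tightened the intro")
print(article.current_version.version)  # → 2
```

Because versions are only ever appended, the full edit history survives the pipeline and can be diffed or rolled back later.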

Stage 1: The Drafter Agent

The drafter takes a brief and produces the first version:

class DrafterAgent:
    def __init__(self, llm_client):
        self.llm = llm_client

    async def draft(self, article: ContentArticle) -> ContentArticle:
        prompt = f"""Write an article on the following topic.

Topic: {article.topic}
Target Audience: {article.target_audience}
Title: {article.title}

Requirements:
- 800 to 1200 words
- Clear structure with headings
- Include practical examples
- Professional tone appropriate for the audience
"""
        response = await self.llm.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "You are a professional content writer."},
                {"role": "user", "content": prompt},
            ],
        )
        content = response.choices[0].message.content
        article.add_version(content, "drafter_agent", "Initial draft")
        article.status = ContentStatus.IN_REVIEW
        return article

Stage 2: Reviewer Agents

Multiple reviewers check the content from different perspectives. Each reviewer is a specialized agent:

import json

class ReviewerAgent:
    def __init__(self, llm_client, reviewer_name: str, focus_area: str):
        self.llm = llm_client
        self.name = reviewer_name
        self.focus = focus_area

    async def review(self, article: ContentArticle) -> ReviewFeedback:
        content = article.current_version.content
        prompt = f"""Review this article from the perspective of {self.focus}.

Article Title: {article.title}
Content:
{content}

Provide your review as JSON:
{{
    "approved": true/false,
    "comments": ["comment1", "comment2"],
    "suggestions": ["suggestion1", "suggestion2"]
}}"""

        response = await self.llm.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": f"You are a {self.focus} reviewer."},
                {"role": "user", "content": prompt},
            ],
            response_format={"type": "json_object"},
        )
        result = json.loads(response.choices[0].message.content)
        return ReviewFeedback(
            reviewer=self.name,
            approved=result["approved"],
            comments=result.get("comments", []),
            suggestions=result.get("suggestions", []),
        )

# Create specialized reviewers (llm is your async LLM client instance)
reviewers = [
    ReviewerAgent(llm, "technical_reviewer", "technical accuracy and code quality"),
    ReviewerAgent(llm, "seo_reviewer", "SEO optimization and keyword usage"),
    ReviewerAgent(llm, "style_reviewer", "writing style, grammar, and readability"),
]
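
The orchestrator later gates on unanimous approval. That gate can be sketched standalone with stubbed feedback — the reviewer names and verdicts below are made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewFeedback:
    reviewer: str
    approved: bool
    comments: list = field(default_factory=list)
    suggestions: list = field(default_factory=list)

# Hypothetical round of feedback from the three reviewers
feedbacks = [
    ReviewFeedback("technical_reviewer", approved=True),
    ReviewFeedback("seo_reviewer", approved=False,
                   suggestions=["Work the primary keyword into the H1"]),
    ReviewFeedback("style_reviewer", approved=True),
]

# A single rejection is enough to send the article back for edits
all_approved = all(fb.approved for fb in feedbacks)
print(all_approved)  # → False
```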

Stage 3: The Editor Agent

The editor incorporates reviewer feedback into the next version:

class EditorAgent:
    def __init__(self, llm_client):
        self.llm = llm_client

    async def edit(
        self, article: ContentArticle, feedbacks: list[ReviewFeedback]
    ) -> ContentArticle:
        all_suggestions = []
        for fb in feedbacks:
            all_suggestions.extend(
                [f"[{fb.reviewer}] {s}" for s in fb.suggestions]
            )
            all_suggestions.extend(
                [f"[{fb.reviewer}] {c}" for c in fb.comments]
            )

        prompt = f"""Revise this article based on reviewer feedback.

Current Content:
{article.current_version.content}

Reviewer Feedback:
{chr(10).join(f"- {s}" for s in all_suggestions)}

Incorporate the feedback while maintaining the article's voice and structure.
Return only the revised article text."""

        response = await self.llm.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "You are a professional editor."},
                {"role": "user", "content": prompt},
            ],
        )
        revised = response.choices[0].message.content
        article.add_version(revised, "editor_agent", "Incorporated reviewer feedback")
        return article

The Pipeline Orchestrator

The orchestrator runs the full pipeline with configurable review rounds:

class PublishingPipeline:
    def __init__(self, drafter, reviewers, editor, publisher, max_rounds=3):
        self.drafter = drafter
        self.reviewers = reviewers
        self.editor = editor
        self.publisher = publisher
        self.max_rounds = max_rounds

    async def run(self, article: ContentArticle) -> ContentArticle:
        # Stage 1: Draft
        article = await self.drafter.draft(article)

        # Stage 2-3: Review and edit loop
        for round_num in range(1, self.max_rounds + 1):
            feedbacks = []
            for reviewer in self.reviewers:
                fb = await reviewer.review(article)
                feedbacks.append(fb)
                article.reviews.append(fb)

            all_approved = all(fb.approved for fb in feedbacks)
            if all_approved:
                article.status = ContentStatus.APPROVED
                break

            article.status = ContentStatus.REVISION_NEEDED
            article = await self.editor.edit(article, feedbacks)
            article.status = ContentStatus.IN_REVIEW

        # Stage 4: Publish
        if article.status == ContentStatus.APPROVED:
            await self.publisher.publish(article)
            article.status = ContentStatus.PUBLISHED

        return article
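
The run loop can be exercised end to end without any LLM calls by swapping in stub agents. Everything below — the stub classes and the dict-based article — is hypothetical scaffolding that mirrors the control flow of `PublishingPipeline.run`:

```python
import asyncio

class StubDrafter:
    async def draft(self, article):
        article["versions"].append("draft v1")
        return article

class StubReviewer:
    def __init__(self, approve_on_round):
        # Rejects until review round approve_on_round is reached
        self.approve_on_round = approve_on_round
        self.calls = 0

    async def review(self, article):
        self.calls += 1
        return {"approved": self.calls >= self.approve_on_round}

class StubEditor:
    async def edit(self, article, feedbacks):
        article["versions"].append(f"draft v{len(article['versions']) + 1}")
        return article

class StubPublisher:
    async def publish(self, article):
        article["published"] = True

async def run_pipeline(article, drafter, reviewers, editor, publisher, max_rounds=3):
    # Mirrors PublishingPipeline.run: draft, then review/edit until approval
    article = await drafter.draft(article)
    approved = False
    for _ in range(max_rounds):
        feedbacks = [await r.review(article) for r in reviewers]
        if all(fb["approved"] for fb in feedbacks):
            approved = True
            break
        article = await editor.edit(article, feedbacks)
    if approved:
        await publisher.publish(article)
    return article

article = {"versions": [], "published": False}
result = asyncio.run(run_pipeline(
    article, StubDrafter(), [StubReviewer(approve_on_round=2)],
    StubEditor(), StubPublisher(),
))
print(result["published"], len(result["versions"]))  # → True 2
```

Stubs like these make the orchestration logic testable in CI: one revision round happens before approval, so the article ends with two versions and a published flag.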

Stage 4: Publishing to a CMS

The publisher pushes the final content to your CMS API:


import httpx

class CMSPublisher:
    def __init__(self, api_base: str, api_key: str):
        self.api_base = api_base
        self.api_key = api_key

    async def publish(self, article: ContentArticle):
        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{self.api_base}/articles",
                headers={"Authorization": f"Bearer {self.api_key}"},
                json={
                    "title": article.title,
                    "content": article.current_version.content,
                    "status": "published",
                    "metadata": article.metadata,
                },
            )
            response.raise_for_status()

FAQ

How many review rounds should the pipeline allow before force-publishing?

Set a maximum of two to three rounds. If reviewers keep requesting changes after three rounds, the content likely needs a human editor. Escalate to a human rather than running an infinite review loop. Track the approval rate across rounds — if round-three approval is below 50 percent, your drafting prompt needs improvement.
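
One way to track that approval rate, assuming you log each round's verdicts as a list of booleans (the numbers below are made up):

```python
# Hypothetical log: review round -> each reviewer's approved flag
verdicts_by_round = {
    1: [False, False, True],
    2: [True, False, True],
    3: [True, True, True],
}

# Fraction of reviewers approving in each round
approval_rate = {
    rnd: sum(flags) / len(flags) for rnd, flags in verdicts_by_round.items()
}
print(approval_rate[3])  # → 1.0
```

If the round-three rate stays low across many articles, fix the drafting prompt rather than adding more review rounds.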

How do I prevent reviewers from contradicting each other?

Give each reviewer a clearly scoped focus area and instruct them to only comment within their domain. The technical reviewer should not suggest style changes, and the SEO reviewer should not comment on code correctness. In the editor prompt, explicitly note which feedback came from which reviewer so the editor can weigh domain-specific suggestions appropriately.

Should I use the same LLM for all agents or different models?

Use your strongest model (GPT-4o or equivalent) for the drafter and editor, as they need the most creative and analytical capability. For reviewers, a smaller and faster model can work well since they are checking specific criteria rather than generating content. This reduces cost and latency. Run benchmarks with your actual content to find the quality threshold for each role.
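
A simple way to express that split is a per-role model map. The model names below are illustrative, not a recommendation:

```python
# Hypothetical role-to-model assignment: a strong model where generation
# quality matters most, a cheaper model for constrained review tasks
MODELS = {
    "drafter": "gpt-4o",
    "editor": "gpt-4o",
    "technical_reviewer": "gpt-4o-mini",
    "seo_reviewer": "gpt-4o-mini",
    "style_reviewer": "gpt-4o-mini",
}

# Fall back to the strong model for any role not listed
reviewer_model = MODELS.get("seo_reviewer", "gpt-4o")
print(reviewer_model)  # → gpt-4o-mini
```

Each agent's constructor can then take its model name from this map instead of hard-coding "gpt-4o", which makes per-role benchmarking a one-line config change.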


#ContentPipeline #MultiAgent #Workflow #Publishing #Python #AgenticAI #LearnAI #AIEngineering
