
Building a GitHub Event Agent: Auto-Responding to Issues, PRs, and Deployments

Build a GitHub webhook-powered AI agent that automatically triages issues, reviews pull requests, and monitors deployment status using FastAPI and the GitHub API.

Why GitHub Needs an AI Agent

Large repositories generate a constant stream of events — new issues, pull requests, comments, deployments, and security alerts. Manually triaging every issue, reviewing every PR, and monitoring every deployment does not scale. A GitHub event agent can handle the repetitive work: labeling and prioritizing issues, providing initial code review feedback, and alerting the team when deployments fail.

This is not about replacing human reviewers. It is about giving them a head start. When a developer opens a PR, the agent can summarize the changes, flag potential issues, and check for common anti-patterns before a human reviewer even looks at it.

Setting Up the Webhook Receiver

First, configure your GitHub repository to send webhooks to your FastAPI server. In your repository settings, add a webhook URL and select the events you want to receive.
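You can also create the webhook programmatically via GitHub's create-a-repository-webhook endpoint (`POST /repos/{owner}/{repo}/hooks`) instead of clicking through the UI. A minimal sketch of the payload shape; the repo name, URL, and secret below are placeholders:

```python
def build_webhook_config(url: str, secret: str, events: list[str]) -> dict:
    # Payload shape for POST /repos/{owner}/{repo}/hooks.
    return {
        "name": "web",  # always "web" for webhook-type hooks
        "active": True,
        "events": events,  # e.g. ["issues", "pull_request", "deployment_status"]
        "config": {
            "url": url,
            "content_type": "json",  # deliver JSON rather than form-encoded payloads
            "secret": secret,        # used by GitHub to compute X-Hub-Signature-256
        },
    }

# Then POST it with the article's httpx client, e.g.:
# httpx.post(f"https://api.github.com/repos/{repo}/hooks",
#            headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
#            json=build_webhook_config(...))
```

Setting `content_type` to `json` matters: the HMAC signature below is computed over the raw body, and a JSON body is what the receiver expects.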

flowchart LR
    GH(["GitHub"])
    HOOK["POST /github/webhook<br/>FastAPI endpoint"]
    SIG["HMAC signature<br/>verification"]
    ROUTE["Event dispatcher"]
    ISSUE["Issue handler"]
    PR["PR handler"]
    DEPLOY["Deployment handler"]
    API[(GitHub API)]
    GH --> HOOK --> SIG --> ROUTE
    ROUTE --> ISSUE
    ROUTE --> PR
    ROUTE --> DEPLOY
    ISSUE --> API
    PR --> API
    style HOOK fill:#4f46e5,stroke:#4338ca,color:#fff
    style ROUTE fill:#f59e0b,stroke:#d97706,color:#1f2937
    style API fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
import os
import hmac
import hashlib
import httpx
from fastapi import FastAPI, Request, HTTPException, BackgroundTasks

app = FastAPI()

GITHUB_WEBHOOK_SECRET = os.environ["GITHUB_WEBHOOK_SECRET"]
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]

def verify_github_signature(payload: bytes, signature: str) -> bool:
    expected = hmac.new(
        GITHUB_WEBHOOK_SECRET.encode(), payload, hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature)

@app.post("/github/webhook")
async def github_webhook(request: Request, background_tasks: BackgroundTasks):
    body = await request.body()
    signature = request.headers.get("X-Hub-Signature-256", "")

    if not verify_github_signature(body, signature):
        raise HTTPException(status_code=401, detail="Invalid signature")

    event_type = request.headers.get("X-GitHub-Event", "")
    payload = await request.json()

    background_tasks.add_task(route_github_event, event_type, payload)
    return {"status": "accepted"}

GitHub sends the event type in the X-GitHub-Event header, which tells you whether the payload is an issue, pull request, deployment, or something else.


Routing Events to Handlers

Build a dispatcher that routes each event type to its specialized handler.

from openai import AsyncOpenAI

llm = AsyncOpenAI()

async def route_github_event(event_type: str, payload: dict):
    handlers = {
        "issues": handle_issue_event,
        "pull_request": handle_pr_event,
        "deployment_status": handle_deployment_event,
    }
    handler = handlers.get(event_type)
    if handler:
        await handler(payload)

async def handle_issue_event(payload: dict):
    if payload["action"] != "opened":
        return

    issue = payload["issue"]
    title = issue["title"]
    body = issue["body"] or ""
    repo = payload["repository"]["full_name"]

    prompt = f"""Triage this GitHub issue. Respond with:
1. A severity label (bug, feature-request, question, documentation)
2. A priority (P0-critical, P1-high, P2-medium, P3-low)
3. A brief helpful response to the issue author.

Title: {title}
Body: {body}"""

    response = await llm.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    analysis = response.choices[0].message.content

    await add_issue_comment(repo, issue["number"], analysis)
    await add_issue_labels(repo, issue["number"], extract_labels(analysis))
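The `extract_labels` helper used above is not defined in this article. A minimal sketch, assuming the model echoes labels from the fixed vocabulary given in the prompt (the allow-list and matching strategy are illustrative):

```python
# Labels the triage prompt asks the model to choose from.
KNOWN_LABELS = [
    "bug", "feature-request", "question", "documentation",
    "P0-critical", "P1-high", "P2-medium", "P3-low",
]

def extract_labels(analysis: str) -> list[str]:
    # Naive case-insensitive substring match against the allow-list.
    # Good enough for a fixed vocabulary; a structured-output response
    # format would be more robust in production.
    text = analysis.lower()
    return [label for label in KNOWN_LABELS if label.lower() in text]
```

Restricting labels to an allow-list also protects the repository from the model inventing labels that do not exist.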

Handling Pull Request Events

PR review is where the agent provides the most value. It can summarize changes, check for common issues, and leave inline comments.

async def handle_pr_event(payload: dict):
    if payload["action"] != "opened":
        return

    pr = payload["pull_request"]
    repo = payload["repository"]["full_name"]

    diff = await fetch_pr_diff(repo, pr["number"])

    prompt = f"""Review this pull request diff. Provide:
1. A summary of what this PR does (2-3 sentences)
2. Any potential bugs, security issues, or performance concerns
3. Suggestions for improvement

PR Title: {pr['title']}
PR Description: {pr['body'] or 'No description provided'}

Diff:
{diff[:8000]}"""

    response = await llm.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    review = response.choices[0].message.content
    await add_pr_comment(repo, pr["number"], f"## AI Review Summary\n\n{review}")

async def fetch_pr_diff(repo: str, pr_number: int) -> str:
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"https://api.github.com/repos/{repo}/pulls/{pr_number}",
            headers={
                "Authorization": f"Bearer {GITHUB_TOKEN}",
                # This Accept header asks GitHub to return the raw unified diff
                # instead of the JSON representation of the pull request.
                "Accept": "application/vnd.github.diff",
            },
        )
        resp.raise_for_status()
        return resp.text

Deployment Status Monitoring

When a deployment fails, the agent can analyze logs and notify the team with context.

async def handle_deployment_event(payload: dict):
    status = payload["deployment_status"]
    if status["state"] != "failure":
        return

    repo = payload["repository"]["full_name"]
    description = status.get("description", "No description")
    environment = status.get("environment", "unknown")

    prompt = f"""A deployment to {environment} failed in {repo}.
Status description: {description}
Suggest possible causes and immediate remediation steps."""

    response = await llm.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    analysis = response.choices[0].message.content
    await notify_team(repo, environment, analysis)

GitHub API Helper Functions

These utility functions interact with the GitHub API to post comments and labels.


async def add_issue_comment(repo: str, issue_number: int, body: str):
    async with httpx.AsyncClient() as client:
        await client.post(
            f"https://api.github.com/repos/{repo}/issues/{issue_number}/comments",
            headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
            json={"body": body},
        )

async def add_issue_labels(repo: str, issue_number: int, labels: list[str]):
    async with httpx.AsyncClient() as client:
        await client.post(
            f"https://api.github.com/repos/{repo}/issues/{issue_number}/labels",
            headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
            json={"labels": labels},
        )
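Two more helpers referenced earlier, `add_pr_comment` and `notify_team`, are not shown in the article. GitHub treats top-level PR conversation comments as issue comments, so `add_pr_comment` can reuse the issues endpoint. The `notify_team` sketch below posts to a hypothetical incoming-webhook URL (`TEAM_WEBHOOK_URL` is an assumption, not part of the original):

```python
import os

def format_deploy_alert(repo: str, environment: str, analysis: str) -> str:
    # Kept as a pure function so the message shape is easy to test.
    return f"Deployment to {environment} failed in {repo}\n\n{analysis}"

async def add_pr_comment(repo: str, pr_number: int, body: str):
    import httpx  # deferred so the sketch stays importable without the dependency
    async with httpx.AsyncClient() as client:
        # Top-level PR comments go through the *issues* comments endpoint.
        await client.post(
            f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
            headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
            json={"body": body},
        )

async def notify_team(repo: str, environment: str, analysis: str):
    import httpx
    webhook_url = os.environ.get("TEAM_WEBHOOK_URL", "")
    if not webhook_url:
        return  # no alerting channel configured
    async with httpx.AsyncClient() as client:
        await client.post(
            webhook_url,
            json={"text": format_deploy_alert(repo, environment, analysis)},
        )
```

Swap the `notify_team` body for whatever alerting channel your team uses (Slack, PagerDuty, email); only the message formatting is the interesting part.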

FAQ

How do I prevent the agent from being too noisy on every PR?

Add filters based on PR size, author, or file paths. For example, skip PRs that only change markdown files or that come from dependabot. You can also set a minimum diff size threshold before the agent activates.
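A minimal filter along those lines; the bot list, extensions, and threshold are illustrative, not prescriptive:

```python
SKIP_AUTHORS = {"dependabot[bot]", "renovate[bot]"}  # bots whose PRs we skip
DOC_EXTENSIONS = (".md", ".rst", ".txt")             # docs-only changes we skip
MIN_DIFF_LINES = 10                                  # ignore trivially small diffs

def should_review(author: str, changed_files: list[str], diff_line_count: int) -> bool:
    # Skip bot-authored PRs.
    if author in SKIP_AUTHORS:
        return False
    # Skip PRs that only touch documentation files.
    if all(f.endswith(DOC_EXTENSIONS) for f in changed_files):
        return False
    # Require a minimum diff size before the agent weighs in.
    return diff_line_count >= MIN_DIFF_LINES
```

Call this at the top of `handle_pr_event`, before fetching the diff, so filtered PRs cost neither a GitHub API call nor an LLM call.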

Can the agent leave inline comments on specific lines?

Yes. Use the GitHub Pull Request Review API to submit line-level comments. You need to map the LLM output to specific file paths and line numbers from the diff, which requires parsing the unified diff format.
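A sketch of the review payload for `POST /repos/{owner}/{repo}/pulls/{pull_number}/reviews`, assuming you have already mapped model findings to file paths and line numbers from the diff:

```python
def build_review_payload(summary: str, findings: list[dict]) -> dict:
    # Each finding must carry the file path, the line number in the new
    # version of the file, and the comment body.
    return {
        "body": summary,
        "event": "COMMENT",  # or APPROVE / REQUEST_CHANGES
        "comments": [
            {
                "path": f["path"],
                "line": f["line"],
                "side": "RIGHT",  # RIGHT = new version, LEFT = old version
                "body": f["body"],
            }
            for f in findings
        ],
    }

async def submit_review(repo: str, pr_number: int, token: str, payload: dict):
    import httpx  # deferred so the sketch stays importable without the dependency
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            f"https://api.github.com/repos/{repo}/pulls/{pr_number}/reviews",
            headers={"Authorization": f"Bearer {token}"},
            json=payload,
        )
        resp.raise_for_status()
```

GitHub rejects comments whose `path`/`line` pair does not appear in the diff, so validate the model's line references against the parsed diff before submitting.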

How do I handle rate limits from the GitHub API?

GitHub allows 5,000 authenticated requests per hour. For high-volume repositories, cache API responses and batch operations. Use the X-RateLimit-Remaining response header to implement backoff before you hit the limit.


#GitHub #Webhooks #AIAgents #DevOpsAutomation #FastAPI #AgenticAI #LearnAI #AIEngineering
