
Slack Bot Agent: Building an AI Assistant for Team Communication

Build a production-ready Slack bot agent using the Slack SDK that listens to events, handles slash commands, responds with interactive messages, and integrates LLM-powered reasoning for team support.

Why Slack Is the Perfect Home for AI Agents

Slack is where teams coordinate work. When an AI agent lives inside Slack, it eliminates context-switching. Instead of opening a separate tool, team members ask the agent directly in the channel where the conversation is already happening. The agent can answer questions about internal docs, summarize threads, create tickets, and route requests to the right people.

In this guide, we will build a Slack bot agent using Python's slack_bolt framework. The agent handles events, slash commands, and interactive messages while delegating complex reasoning to an LLM.

Setting Up the Slack App

Before writing code, create a Slack app at api.slack.com/apps. Enable Socket Mode for local development, add the chat:write, app_mentions:read, commands, and im:history scopes, and subscribe to the app_mention and message.im events.

flowchart LR
    INPUT(["User intent"])
    PARSE["Parse plus<br/>classify"]
    PLAN["Plan and tool<br/>selection"]
    AGENT["Agent loop<br/>LLM plus tools"]
    GUARD{"Guardrails<br/>and policy"}
    EXEC["Execute and<br/>verify result"]
    OBS[("Trace and metrics")]
    OUT(["Outcome plus<br/>next action"])
    INPUT --> PARSE --> PLAN --> AGENT --> GUARD
    GUARD -->|Pass| EXEC --> OUT
    GUARD -->|Fail| AGENT
    AGENT --> OBS
    style AGENT fill:#4f46e5,stroke:#4338ca,color:#fff
    style GUARD fill:#f59e0b,stroke:#d97706,color:#1f2937
    style OBS fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
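The flow in the diagram can be sketched as a small control loop. Everything here is hypothetical scaffolding: `agent_step` stands in for the LLM call and `guardrail` for your policy check.

```python
def run_pipeline(intent, agent_step, guardrail, max_retries=3):
    """Run the agent step until the guardrail passes or retries run out.

    agent_step: callable(str) -> str, produces a draft outcome.
    guardrail:  callable(str) -> (bool, str), pass/fail plus a reason.
    """
    draft = agent_step(intent)
    for _ in range(max_retries):
        ok, reason = guardrail(draft)
        if ok:
            return draft  # Pass: hand off to execute/verify
        # Fail: loop back to the agent with the guardrail's feedback
        draft = agent_step(f"{intent}\n[guardrail feedback: {reason}]")
    raise RuntimeError("Guardrail rejected all attempts")
```

The retry-with-feedback edge mirrors the `Fail` arrow back into the agent loop; tracing and metrics would hang off each iteration.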

Install dependencies:

# pip install slack-bolt openai

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
import os

app = App(token=os.environ["SLACK_BOT_TOKEN"])
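The app reads `SLACK_BOT_TOKEN` (and later `SLACK_APP_TOKEN`) from the environment, and a missing token surfaces as a confusing auth error at runtime. A small guard, sketched here as a hypothetical `require_env` helper, fails fast with a clear message instead:

```python
import os

def require_env(*names):
    """Return the values of required env vars, raising if any are unset."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return [os.environ[n] for n in names]
```

Call it once at startup, e.g. `bot_token, app_token = require_env("SLACK_BOT_TOKEN", "SLACK_APP_TOKEN")`, before constructing `App`.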

Handling App Mentions

When someone mentions the bot in a channel, the agent processes the message and responds with LLM-generated content:

from openai import OpenAI

llm = OpenAI()

SYSTEM_PROMPT = (
    "You are a helpful team assistant in Slack. "
    "Answer questions concisely. Use bullet points for lists. "
    "If you do not know the answer, say so clearly."
)

def ask_llm(question: str, context: str = "") -> str:
    """Send a question to the LLM with optional context."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if context:
        messages.append({"role": "user", "content": f"Context:\n{context}"})
    messages.append({"role": "user", "content": question})

    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        temperature=0.3,
        max_tokens=500,
    )
    return response.choices[0].message.content

import re

@app.event("app_mention")
def handle_mention(event, say, client):
    """Respond to @bot mentions in channels."""
    # Strip the leading <@U...> mention so the LLM sees only the question
    user_text = re.sub(r"<@\w+>\s*", "", event["text"]).strip()
    thread_ts = event.get("thread_ts", event["ts"])

    # Fetch thread context if replying in a thread
    context = ""
    if event.get("thread_ts"):
        result = client.conversations_replies(
            channel=event["channel"],
            ts=event["thread_ts"],
            limit=10,
        )
        context = "\n".join(
            m["text"] for m in result["messages"][:-1]  # Exclude current message
        )

    response = ask_llm(user_text, context)
    say(text=response, thread_ts=thread_ts)

The agent fetches thread history for context when mentioned inside a thread. This gives the LLM the full conversation to work with rather than just the latest message.
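Long threads can blow past the model's context window. A trimming helper, sketched here with a rough character budget standing in for real token counting (names hypothetical), keeps only the most recent messages:

```python
def build_context(messages, max_chars=4000):
    """Join thread messages, dropping the oldest until the result
    fits a rough character budget. Input is in chronological order."""
    kept, total = [], 0
    for text in reversed(messages):  # walk newest to oldest
        if total + len(text) > max_chars:
            break
        kept.append(text)
        total += len(text) + 1  # +1 for the joining newline
    return "\n".join(reversed(kept))  # restore chronological order
```

In `handle_mention`, replace the plain `"\n".join(...)` with `build_context([m["text"] for m in result["messages"][:-1]])`.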

Implementing Slash Commands

Slash commands provide structured entry points for specific actions. Here we create a /ask command that accepts a question and a /summarize command that summarizes the current channel's recent messages:

@app.command("/ask")
def handle_ask(ack, respond, command, say):
    """Handle the /ask slash command."""
    ack()  # Acknowledge within 3 seconds
    question = command["text"]
    if not question.strip():
        respond("Usage: `/ask <your question>`")  # respond() is ephemeral by default
        return
    answer = ask_llm(question)
    say(f"*Q:* {question}\n\n{answer}")

@app.command("/summarize")
def handle_summarize(ack, command, client, say):
    """Summarize recent channel messages."""
    ack()
    channel = command["channel_id"]
    result = client.conversations_history(channel=channel, limit=50)
    # conversations_history returns newest first; reverse into reading order
    messages = [m["text"] for m in reversed(result["messages"]) if m.get("text")]
    combined = "\n".join(messages)

    summary = ask_llm(
        "Summarize the following Slack messages into 3-5 bullet points. "
        "Focus on decisions, action items, and key topics.",
        context=combined,
    )
    say(f"*Channel Summary (last 50 messages):*\n\n{summary}")

Always call ack() immediately when handling slash commands. Slack requires acknowledgment within three seconds or it shows an error to the user.
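LLM calls routinely take longer than three seconds, so a common pattern is to `ack()`, post a placeholder, and edit it in place when the answer arrives. Here is a sketch of that pattern as a standalone function (the wrapper name is hypothetical; `chat_postMessage` and `chat_update` are real Web API methods):

```python
def answer_with_placeholder(client, channel, question, ask):
    """Post a quick placeholder message, then edit it in place once
    the (potentially slow) LLM call finishes."""
    posted = client.chat_postMessage(channel=channel, text="Thinking...")
    answer = ask(question)  # the slow LLM call
    client.chat_update(channel=channel, ts=posted["ts"], text=answer)
    return answer
```

Inside a command handler you would call it as `answer_with_placeholder(client, command["channel_id"], question, ask_llm)` right after `ack()`.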

Interactive Messages with Buttons

Interactive messages let users take actions directly from the bot's response. Here we add approval buttons to a request workflow:

@app.command("/request")
def handle_request(ack, command, client):
    """Create an approval request with interactive buttons."""
    ack()
    request_text = command["text"]
    requester = command["user_id"]

    client.chat_postMessage(
        channel=os.environ["APPROVAL_CHANNEL"],
        text=f"New request from <@{requester}>",
        blocks=[
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": f"*New Request from <@{requester}>:*\n{request_text}",
                },
            },
            {
                "type": "actions",
                "elements": [
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "Approve"},
                        "style": "primary",
                        "action_id": "approve_request",
                        "value": f"{requester}|{request_text}",
                    },
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "Deny"},
                        "style": "danger",
                        "action_id": "deny_request",
                        "value": f"{requester}|{request_text}",
                    },
                ],
            },
        ],
    )

@app.action("approve_request")
def handle_approve(ack, body, client):
    """Handle approval button click."""
    ack()
    requester, request_text = body["actions"][0]["value"].split("|", 1)
    approver = body["user"]["id"]
    client.chat_postMessage(
        channel=requester,
        text=f"Your request was *approved* by <@{approver}>: {request_text}",
    )
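The buttons above also register a `deny_request` action, which needs its own handler. Since approve and deny differ only in the outcome word, the notification logic can live in one helper (helper names are hypothetical):

```python
def parse_request_value(value):
    """Split the 'requester|text' payload packed into the button value."""
    requester, _, text = value.partition("|")
    return requester, text

def notify_decision(client, value, decider, decision):
    """DM the requester with the outcome of their request."""
    requester, request_text = parse_request_value(value)
    client.chat_postMessage(
        channel=requester,
        text=f"Your request was *{decision}* by <@{decider}>: {request_text}",
    )
    return requester
```

Wire it up with `@app.action("deny_request")`: after `ack()`, call `notify_decision(client, body["actions"][0]["value"], body["user"]["id"], "denied")`. The approve handler can be refactored onto the same helper.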

Running the Agent

Start the bot with Socket Mode for development:


if __name__ == "__main__":
    handler = SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"])
    print("Slack bot agent is running...")
    handler.start()

For production, switch to HTTP mode behind a reverse proxy with proper SSL termination. Use a process manager like systemd or deploy as a container.

FAQ

How do I handle rate limits from the Slack API?

The underlying slack_sdk WebClient supports pluggable retry handlers. Add RateLimitErrorRetryHandler to the client's retry_handlers list so HTTP 429 responses are automatically retried after the Retry-After interval Slack returns. For high-volume bots, also batch operations where possible to stay under the per-method rate tiers.

Can the bot maintain conversation history across messages?

Store conversation context in a database or Redis keyed by channel ID and thread timestamp. Before each LLM call, retrieve the last N messages from your store to include as context. This gives the agent memory without relying solely on Slack API calls.
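As an in-memory sketch of that store, keyed by `(channel_id, thread_ts)` with a bounded history per thread (swap the dict for Redis or a database in production; all names are hypothetical):

```python
from collections import defaultdict, deque

class ThreadMemory:
    """Per-thread conversation history, keyed by (channel_id, thread_ts).
    Keeps only the most recent max_messages entries per thread."""

    def __init__(self, max_messages=20):
        self._store = defaultdict(lambda: deque(maxlen=max_messages))

    def add(self, channel, thread_ts, role, text):
        self._store[(channel, thread_ts)].append({"role": role, "content": text})

    def history(self, channel, thread_ts):
        return list(self._store[(channel, thread_ts)])
```

In the mention handler you would `add(...)` both the user message and the bot's reply, and prepend `history(...)` to the LLM messages on the next turn.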

How do I restrict the bot to certain channels?

Check event["channel"] against an allowlist of channel IDs in your event handlers. Return early without processing if the channel is not in the list. You can also use Slack's app-level channel restrictions in the app configuration.
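The allowlist check is a one-liner worth centralizing so every handler applies it consistently (the channel IDs below are placeholders):

```python
ALLOWED_CHANNELS = {"C0123456789", "C0987654321"}  # hypothetical channel IDs

def is_allowed(event, allowlist=ALLOWED_CHANNELS):
    """Return True when the event's channel is on the allowlist."""
    return event.get("channel") in allowlist
```

At the top of each event handler: `if not is_allowed(event): return`.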


#SlackBot #AIAgents #SlackSDK #WorkflowAutomation #Python #ChatOps #AgenticAI #LearnAI #AIEngineering

