Learn Agentic AI

Building Slack AI Agents: Slash Commands, Bot Events, and Interactive Messages

Build a production-ready Slack AI agent with slash commands, real-time bot event handling, interactive Block Kit messages, and thread-aware conversation management using the Slack Bolt SDK.

Why Build AI Agents for Slack

Slack is where teams spend their working hours. An AI agent inside Slack meets users where they already are — no context switching, no separate dashboard. The agent can answer questions, triage requests, summarize threads, and take actions across integrated systems, all within the familiar chat interface.

The Slack Bolt SDK for Python provides a clean abstraction over Slack's Events API, slash commands, interactive components, and Socket Mode, making it the ideal foundation for AI agent development.

Setting Up the Slack App

Start by creating a Slack app at api.slack.com/apps. Enable Socket Mode for development (no public URL needed), then configure these scopes under OAuth and Permissions: app_mentions:read, chat:write, commands, im:history, and im:read.

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

import os

app = App(token=os.environ["SLACK_BOT_TOKEN"])  # xoxb- bot token

# Start listening over Socket Mode; the app-level token (xapp-)
# needs the connections:write scope
if __name__ == "__main__":
    handler = SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"])
    handler.start()

Handling Slash Commands

Slash commands are the most direct way users interact with your agent. Register a command in your Slack app config, then handle it in code.

from slack_bolt import Ack, Respond

@app.command("/ask-agent")
def handle_ask_command(ack: Ack, respond: Respond, command: dict):
    ack()  # Must acknowledge within 3 seconds

    user_query = command["text"]
    user_id = command["user_id"]
    channel_id = command["channel_id"]

    # Process with the AI agent; respond() posts to the command's
    # response_url, which Slack keeps valid for 30 minutes
    result = agent.run_sync(
        prompt=user_query,
        context={"user": user_id, "channel": channel_id}
    )

    respond(
        text=result.answer,
        response_type="in_channel",  # or "ephemeral"
    )

The critical detail: you must call ack() within 3 seconds or Slack shows an error to the user. For long-running agent tasks, acknowledge immediately, then use respond() asynchronously.
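That ack-then-respond pattern can be sketched without the Bolt runtime at all. In this sketch, slow_agent_run is a hypothetical stand-in for your agent call, and the handler returns the worker thread only so the pattern is easy to exercise in isolation:

```python
import threading

def slow_agent_run(prompt: str) -> str:
    # Hypothetical stand-in for a long-running agent call
    return f"Answer to: {prompt}"

def handle_long_command(ack, respond, command: dict) -> threading.Thread:
    """Acknowledge within 3 seconds, then respond from a background thread."""
    ack()  # tells Slack the command was received, immediately

    def work() -> None:
        result = slow_agent_run(command["text"])
        respond(result)  # delivered via the command's response_url

    worker = threading.Thread(target=work, daemon=True)
    worker.start()
    return worker
```

In a real Bolt handler you would receive the genuine ack and respond utilities as listener arguments; only the background-thread shape matters here.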

Listening to Bot Events

Subscribe to the app_mention and message.im events so your agent can respond when mentioned in channels or messaged directly.


@app.event("app_mention")
def handle_mention(event: dict, say, client):
    thread_ts = event.get("thread_ts", event["ts"])
    user_text = event["text"]
    channel = event["channel"]

    # Fetch thread context for multi-turn conversations
    thread_messages = []
    if event.get("thread_ts"):
        result = client.conversations_replies(
            channel=channel,
            ts=event["thread_ts"],
            limit=20,
        )
        thread_messages = [
            {"role": "user" if m.get("bot_id") is None else "assistant",
             "content": m["text"]}
            for m in result["messages"]
        ]

    agent_response = agent.run_sync(
        prompt=user_text,
        history=thread_messages,
    )

    say(text=agent_response.answer, thread_ts=thread_ts)

@app.event("message")
def handle_dm(event: dict, say):
    if event.get("channel_type") == "im" and not event.get("bot_id"):
        response = agent.run_sync(prompt=event["text"])
        say(text=response.answer)
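One detail the handlers above gloss over: in an app_mention event, event["text"] still contains the raw <@U…> mention token, which only adds noise to the model's prompt. A small helper (the name is mine, not part of the SDK) strips it before the text reaches the agent:

```python
import re

# Slack encodes user mentions as <@USER_ID> inside message text
MENTION_RE = re.compile(r"<@[A-Z0-9]+>\s*")

def strip_mentions(text: str) -> str:
    """Remove Slack user-mention tokens like <@U0123ABCD> from event text."""
    return MENTION_RE.sub("", text).strip()
```

For example, strip_mentions("<@U0123ABCD> summarize this thread") returns "summarize this thread".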

Building Interactive Messages with Block Kit

Block Kit lets your agent present structured, interactive responses instead of plain text.

@app.command("/triage")
def handle_triage(ack, respond, command):
    ack()

    analysis = agent.run_sync(
        prompt=f"Triage this issue: {command['text']}"
    )

    blocks = [
        {
            "type": "header",
            "text": {"type": "plain_text", "text": "Issue Triage Result"}
        },
        {
            "type": "section",
            "text": {
                "type": "mrkdwn",
                "text": f"*Summary:* {analysis.summary}\n"
                        f"*Priority:* {analysis.priority}\n"
                        f"*Category:* {analysis.category}"
            }
        },
        {
            "type": "actions",
            "elements": [
                {
                    "type": "button",
                    "text": {"type": "plain_text", "text": "Create Ticket"},
                    "action_id": "create_ticket",
                    "value": analysis.id,
                    "style": "primary",
                },
                {
                    "type": "button",
                    "text": {"type": "plain_text", "text": "Dismiss"},
                    "action_id": "dismiss_triage",
                    "value": analysis.id,
                },
            ]
        }
    ]

    respond(blocks=blocks, text=analysis.summary)

@app.action("create_ticket")
def handle_create_ticket(ack, body, respond):
    ack()
    analysis_id = body["actions"][0]["value"]
    ticket = create_jira_ticket(analysis_id)
    respond(
        text=f"Ticket created: {ticket.key}",
        replace_original=False,
    )

Thread Management for Multi-Turn Conversations

Keep conversation context by tracking threads, storing agent state keyed by the thread timestamp. If you adopt this handler, drop the earlier app_mention listener — Bolt runs every listener that matches an event, so registering both would produce duplicate replies.


from collections import defaultdict

thread_contexts: dict[str, list[dict]] = defaultdict(list)

@app.event("app_mention")
def handle_threaded_mention(event, say, client):
    thread_ts = event.get("thread_ts", event["ts"])

    thread_contexts[thread_ts].append({
        "role": "user",
        "content": event["text"],
    })

    response = agent.run_sync(
        prompt=event["text"],
        history=thread_contexts[thread_ts],
    )

    thread_contexts[thread_ts].append({
        "role": "assistant",
        "content": response.answer,
    })

    say(text=response.answer, thread_ts=thread_ts)
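One caveat: the defaultdict above grows without bound in a busy workspace. A store that evicts the least recently used thread keeps memory flat — this is a sketch, and the class name and default size are my own choices:

```python
from collections import OrderedDict

class ThreadStore:
    """LRU-bounded map from thread_ts to conversation history."""

    def __init__(self, max_threads: int = 500):
        self.max_threads = max_threads
        self._store: OrderedDict[str, list[dict]] = OrderedDict()

    def history(self, thread_ts: str) -> list[dict]:
        messages = self._store.setdefault(thread_ts, [])
        self._store.move_to_end(thread_ts)  # mark as most recently used
        while len(self._store) > self.max_threads:
            self._store.popitem(last=False)  # evict the stalest thread
        return messages
```

In the handler, swap thread_contexts[thread_ts] for store.history(thread_ts). For state that must survive restarts or multiple instances, move the same interface onto Redis or a database instead of process memory.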

FAQ

How do I handle Slack's 3-second acknowledgment requirement for long AI tasks?

Call ack() immediately, then spawn a background task to process the request. Use respond() with the response_url from the command payload to send the result when the agent finishes. Slack allows responses via response_url for up to 30 minutes after the original command.
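A delayed response is just a JSON body POSTed to that response_url. A minimal builder for the payload (the helper name is mine) looks like:

```python
import json

def build_delayed_response(text: str, in_channel: bool = False) -> bytes:
    """Build the JSON body for a delayed POST to a command's response_url."""
    return json.dumps({
        "text": text,
        "response_type": "in_channel" if in_channel else "ephemeral",
    }).encode("utf-8")
```

POST it with any HTTP client using Content-Type: application/json; Bolt's respond() utility does exactly this for you.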

Should I use Socket Mode or the Events API for production?

Socket Mode is excellent for development because it requires no public URL. For production, the Events API with a public HTTPS endpoint scales better because Slack pushes events to your server and you can load-balance across multiple instances. Socket Mode maintains a WebSocket connection per instance, which adds operational complexity at scale.

How do I prevent the agent from responding to its own messages?

Check for the bot_id field in the event payload. If event.get("bot_id") is truthy, the message came from a bot (possibly your own). Skip processing for those events to avoid infinite loops.


#Slack #BotDevelopment #SlackSDK #AIAgents #ChatIntegration #AgenticAI #LearnAI #AIEngineering

