Learn Agentic AI

CrewAI Getting Started: Installing and Creating Your First Multi-Agent Crew

Learn how to install CrewAI, define agents with the Agent class, create tasks with the Task class, assemble a Crew, and run it with kickoff to build your first multi-agent workflow.

Why CrewAI for Multi-Agent Systems

Building AI applications where multiple specialized agents collaborate on complex tasks has historically required significant orchestration code. CrewAI simplifies this by providing a framework built around three intuitive concepts: Agents (who), Tasks (what), and Crews (how). Each agent gets a role, a goal, and a backstory that shapes its reasoning. Tasks define discrete work units with expected outputs. Crews tie everything together and manage the execution flow.

CrewAI abstracts away most of this complexity. (Early releases were built on top of LangChain; current versions are a standalone framework with no LangChain dependency.) You describe your team of agents, assign them tasks, and call kickoff(). The framework handles the agent loop, tool execution, context passing, and output formatting.

Installing CrewAI

Install CrewAI and its tools package using pip:

pip install crewai crewai-tools

The Mermaid diagram below shows how the pieces fit together: a crew goal flows to a manager agent (when using the hierarchical process), which dispatches tasks to role-specific agents that can call tools and hand results on to the next task:

flowchart TD
    GOAL(["Crew goal"])
    MGR["Manager agent<br/>hierarchical process"]
    R1["Researcher agent<br/>role plus backstory"]
    R2["Analyst agent"]
    W1["Writer agent"]
    T1["Task A<br/>research"]
    T2["Task B<br/>analyze"]
    T3["Task C<br/>draft"]
    TOOLS[("Tools<br/>web search, files")]
    OUT(["Crew output"])
    GOAL --> MGR
    MGR --> T1 --> R1 --> TOOLS
    R1 --> T2 --> R2
    R2 --> T3 --> W1 --> OUT
    style MGR fill:#4f46e5,stroke:#4338ca,color:#fff
    style TOOLS fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff

This installs the core framework along with the official tool integrations. Verify the installation:

python -c "from crewai import Agent, Task, Crew; print('CrewAI installed successfully')"

You also need an LLM API key. CrewAI defaults to OpenAI, so set your key:

export OPENAI_API_KEY="sk-your-key-here"
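If the key is missing, the first LLM call fails with a confusing provider error deep inside the run. A small plain-Python check at the top of your script (a convenience sketch, not part of CrewAI) fails fast instead:

```python
import os

def has_openai_key(env) -> bool:
    """Return True when an OpenAI-style API key is present in the mapping."""
    return env.get("OPENAI_API_KEY", "").startswith("sk-")

if not has_openai_key(os.environ):
    print("OPENAI_API_KEY is not set; export it before running your crew.")
```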

Creating Your First Agent

The Agent class represents a team member with a specific role. Every agent needs a role, a goal, and a backstory:

from crewai import Agent

researcher = Agent(
    role="Senior Research Analyst",
    goal="Find comprehensive and accurate information about the given topic",
    backstory="""You are a senior research analyst at a leading think tank.
    You have 15 years of experience gathering data from diverse sources
    and synthesizing it into clear, actionable insights.""",
    verbose=True,
    allow_delegation=False,
)

The verbose flag prints the agent's thought process as it works. Setting allow_delegation=False prevents the agent from handing tasks off to other agents, which is useful when you want strict task assignment.

Defining Tasks

Tasks represent the work you want agents to accomplish. Each task has a description, an expected output format, and an assigned agent:

from crewai import Task

research_task = Task(
    description="""Research the current state of electric vehicle battery
    technology. Focus on solid-state batteries, charging speed improvements,
    and cost reduction trends from 2024 to 2026.""",
    expected_output="""A detailed research brief with at least 5 key findings,
    each supported by specific data points or examples.""",
    agent=researcher,
)

The expected_output field is critical. It tells the agent exactly what format and level of detail you expect, guiding it toward producing structured, useful results.
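Tasks can also declare their inputs and persist their results. As a sketch, assuming the context and output_file parameters available in current CrewAI releases (the file path here is illustrative):

```python
# Builds on the researcher agent and research_task defined above.
summary_task = Task(
    description="Condense the research brief into one paragraph.",
    expected_output="A single paragraph of at most 100 words.",
    agent=researcher,
    context=[research_task],          # explicitly consume research_task's output
    output_file="output/summary.md",  # persist the final text to disk
)
```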

Assembling and Running a Crew

The Crew class combines agents and tasks into an executable workflow:

from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role="Senior Research Analyst",
    goal="Find accurate information about AI trends",
    backstory="You are an expert researcher with deep knowledge of AI.",
    verbose=True,
)

writer = Agent(
    role="Technical Writer",
    goal="Create clear and engaging content from research findings",
    backstory="You are a skilled writer who makes complex topics accessible.",
    verbose=True,
)

research_task = Task(
    description="Research the latest breakthroughs in agentic AI frameworks.",
    expected_output="A bullet-point summary of 5 key breakthroughs with details.",
    agent=researcher,
)

writing_task = Task(
    description="Write a blog post based on the research findings.",
    expected_output="A 500-word blog post with introduction, body, and conclusion.",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print(result)

Calling crew.kickoff() starts the execution. In sequential mode, tasks run one after another and each subsequent agent receives the output of the previous task as context.
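To make that context passing concrete, here is a minimal plain-Python sketch of what sequential execution does conceptually (an illustration of the flow, not CrewAI's internals):

```python
def run_sequential(tasks):
    """Run callables in order; each one receives the accumulated context
    from all previous task outputs, mimicking Process.sequential."""
    context = ""
    outputs = []
    for task in tasks:
        output = task(context)
        outputs.append(output)
        context = (context + "\n" + output) if context else output
    return outputs

# Stand-ins for the researcher and writer agents above.
research = lambda ctx: "5 key breakthroughs in agentic AI frameworks"
write = lambda ctx: f"Blog post draft based on: {ctx}"

outputs = run_sequential([research, write])
print(outputs[-1])  # the writer saw the researcher's output as context
```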


Understanding the Output

The kickoff() method returns a CrewOutput object containing the final task's result. You can access it as a string, as structured data, or inspect individual task outputs:

result = crew.kickoff()

# Final output as string
print(result.raw)

# Access individual task outputs
for task_output in result.tasks_output:
    print(f"Task: {task_output.description[:50]}...")
    print(f"Output: {task_output.raw[:200]}...")

This gives you full visibility into what each agent produced, which is essential for debugging and quality assurance.

FAQ

How does CrewAI differ from LangChain agents?

Early versions of CrewAI were built on top of LangChain, but it is now an independent framework with its own agent runtime. The bigger difference is the level of abstraction: LangChain gives you individual agents with tool access, while CrewAI focuses on teams of agents working together with defined roles, tasks, and processes. Think of LangChain as an engine and CrewAI as the fleet management system.

Can I use CrewAI without an OpenAI API key?

Yes. CrewAI supports multiple LLM providers including Anthropic Claude, Ollama for local models, Azure OpenAI, and any provider supported by LiteLLM. You configure the LLM at the agent level, so different agents in the same crew can even use different models.
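For example, a configuration sketch using CrewAI's LLM class (model identifiers follow LiteLLM naming and are illustrative; check your provider's current model names and the Ollama URL for your setup):

```python
from crewai import Agent, LLM

claude = LLM(model="anthropic/claude-3-5-sonnet-20240620", temperature=0.2)
local = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")

researcher = Agent(
    role="Senior Research Analyst",
    goal="Find accurate information about AI trends",
    backstory="You are an expert researcher.",
    llm=claude,  # this agent calls Anthropic
)

writer = Agent(
    role="Technical Writer",
    goal="Create clear content from research findings",
    backstory="You are a skilled writer.",
    llm=local,   # this agent runs on a local Ollama model
)
```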

What happens if an agent fails during kickoff?

CrewAI includes built-in retry logic. If an agent's LLM call fails, the framework retries with exponential backoff. If a task consistently fails, the crew raises an exception with details about which agent and task failed, making it straightforward to diagnose issues.
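The same idea in a few lines of plain Python (an illustration of exponential backoff, not CrewAI's actual retry code):

```python
import time

def retry_with_backoff(fn, max_attempts=3, base_delay=1.0):
    """Call fn(), retrying on failure with exponentially growing delays."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))

# Demo: a flaky call that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient LLM error")
    return "ok"

result = retry_with_backoff(flaky, max_attempts=3, base_delay=0)
print(result)  # -> "ok" on the third attempt
```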


#CrewAI #MultiAgent #Python #GettingStarted #Tutorial #AgenticAI #LearnAI #AIEngineering
