
LangGraph Getting Started: Your First Stateful Agent Graph in Python

Learn how to install LangGraph, define a StateGraph with typed state, add nodes and edges, compile the graph, and invoke your first stateful agent workflow in Python.

Why LangGraph for Agent Workflows

Most agent frameworks treat execution as a linear pipeline: prompt in, response out, maybe a tool call in between. This works for simple question-answering but breaks down the moment you need branching logic, cycles, or persistent state across multiple reasoning steps. LangGraph solves this by modeling agent workflows as directed graphs where each node is a computation step, each edge defines the transition logic, and the entire graph operates on a shared, typed state object.

Built on top of LangChain but fully usable as a standalone library, LangGraph gives you explicit control over the flow of execution. You decide when the agent reasons, when it calls tools, when it loops back for another attempt, and when it terminates. There is no hidden orchestration magic — every transition is visible in the graph definition.

Installation

Install LangGraph alongside the LangChain OpenAI integration:

pip install langgraph langchain-openai

As a preview of where graph workflows can grow, the diagram below sketches a fuller production topology: a supervisor node routing between specialist nodes, a tool executor, a Postgres checkpointer, and a human-approval interrupt. The getting-started graph built in this post is far simpler.

flowchart TD
    USER(["User input"])
    SUPER["Supervisor node<br/>routes by state"]
    A["Specialist node A<br/>research"]
    B["Specialist node B<br/>writing"]
    TOOL{"Tool call<br/>needed?"}
    EXEC["Tool executor<br/>ToolNode"]
    CHK[("Postgres<br/>checkpointer")]
    INT{"interrupt for<br/>human approval?"}
    HUMAN(["Human reviewer"])
    OUT(["Final response"])
    USER --> SUPER
    SUPER --> A
    SUPER --> B
    A --> TOOL
    B --> TOOL
    TOOL -->|Yes| EXEC --> SUPER
    TOOL -->|No| INT
    INT -->|Yes| HUMAN --> SUPER
    INT -->|No| OUT
    SUPER <--> CHK
    style SUPER fill:#4f46e5,stroke:#4338ca,color:#fff
    style CHK fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
    style OUT fill:#059669,stroke:#047857,color:#fff
    style HUMAN fill:#f59e0b,stroke:#d97706,color:#1f2937

Set your API key:

export OPENAI_API_KEY="sk-your-key-here"

Defining State

Every LangGraph workflow starts with a state schema. This is a TypedDict that defines what data flows through the graph:

from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    current_step: str

The Annotated type with add_messages tells LangGraph to append new messages to the list rather than overwrite it. The attached function is called a reducer, and it controls how each node's partial state update merges into the shared state.
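To build intuition for what a reducer does, here is a minimal pure-Python sketch of the merge step (an illustration of the idea, not LangGraph's actual implementation — the add_messages and apply_update names below are simplified stand-ins). Fields annotated with a reducer are merged by calling it; plain fields are simply overwritten:

```python
from typing import Annotated, TypedDict, get_type_hints

def add_messages(existing: list, new: list) -> list:
    # Reducer: append new messages instead of replacing the list
    return existing + new

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    current_step: str

def apply_update(state: dict, update: dict) -> dict:
    """Merge a partial update into state, honoring per-field reducers."""
    hints = get_type_hints(AgentState, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        metadata = getattr(hints.get(key), "__metadata__", ())
        if metadata:                 # a reducer was attached via Annotated
            merged[key] = metadata[0](state.get(key, []), value)
        else:                        # no reducer: last write wins
            merged[key] = value
    return merged

state = {"messages": ["hi"], "current_step": "starting"}
state = apply_update(state, {"messages": ["hello!"], "current_step": "completed"})
print(state)  # {'messages': ['hi', 'hello!'], 'current_step': 'completed'}
```

Note how messages accumulates while current_step is replaced — that asymmetry is exactly what the Annotated reducer buys you.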

Building the Graph

Create a StateGraph, add nodes as functions, and connect them with edges:

from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

def call_model(state: AgentState) -> dict:
    response = llm.invoke(state["messages"])
    return {"messages": [response], "current_step": "completed"}

# Build the graph
builder = StateGraph(AgentState)
builder.add_node("agent", call_model)
builder.add_edge(START, "agent")
builder.add_edge("agent", END)

graph = builder.compile()

The START and END constants mark the graph's entry and exit points. Compiling produces a runnable object that you can execute with invoke or stream.
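The execution model itself is simple to reason about: begin at the node START points to, run each node, merge its partial return into the state, and follow edges until END. This stdlib-only toy executor (hypothetical names, for intuition only — it is not how LangGraph is implemented, and it skips reducers) captures that loop:

```python
START, END = "__start__", "__end__"

class ToyGraph:
    """Minimal illustration of node/edge execution -- not LangGraph itself."""
    def __init__(self):
        self.nodes = {}   # name -> function(state) -> partial state update
        self.edges = {}   # name -> next node name

    def add_node(self, name, fn):
        self.nodes[name] = fn

    def add_edge(self, src, dst):
        self.edges[src] = dst

    def invoke(self, state):
        current = self.edges[START]
        while current != END:
            update = self.nodes[current](state)   # node returns a partial update
            state = {**state, **update}           # merge it back (no reducers here)
            current = self.edges[current]
        return state

def agent(state):
    return {"current_step": "completed", "reply": f"echo: {state['question']}"}

g = ToyGraph()
g.add_node("agent", agent)
g.add_edge(START, "agent")
g.add_edge("agent", END)

print(g.invoke({"question": "What is LangGraph?", "current_step": "starting"}))
```

The real compiled graph does far more (reducers, checkpointing, streaming, parallel branches), but this is the mental model: nodes compute partial updates, edges decide what runs next.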

Invoking the Graph

Run the graph with an initial state:

from langchain_core.messages import HumanMessage

result = graph.invoke({
    "messages": [HumanMessage(content="What is LangGraph?")],
    "current_step": "starting",
})

print(result["messages"][-1].content)

The invoke call threads the initial state through the nodes along the defined edges and returns the final state. You now have a working stateful agent graph.
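This graph always runs the same single path. Real agent loops branch: after the model responds, a routing function inspects the state and decides what runs next. In LangGraph that wiring is done with add_conditional_edges; the routing function itself is plain Python, as this standalone sketch shows (the message shape and function name here are illustrative):

```python
END = "__end__"

def route_after_agent(state: dict) -> str:
    """Inspect the last message and decide where execution goes next."""
    last = state["messages"][-1]
    if last.get("tool_calls"):   # the model asked for a tool -> run the tool node
        return "tools"
    return END                   # plain answer -> stop

# In LangGraph this function would be wired in roughly as:
#   builder.add_conditional_edges("agent", route_after_agent, {"tools": "tools", END: END})

print(route_after_agent({"messages": [{"content": "Done."}]}))  # __end__
```

Because the router is just a function of the state, it is trivially unit-testable, which matters once your graphs grow beyond toy examples.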

Visualizing the Graph

LangGraph can render your graph for debugging:

from IPython.display import Image, display

display(Image(graph.get_graph().draw_mermaid_png()))

This renders a Mermaid diagram showing all nodes and edges (note that draw_mermaid_png calls an external rendering service by default, so it needs network access), which is invaluable for understanding complex multi-step workflows.

Key Concepts Summary

The core building blocks are: StateGraph for the container, nodes for computation functions, edges for transitions, compile to produce an executable, and invoke to run it. Every node receives the current state and returns a partial state update that gets merged back. This explicit model gives you full visibility and control over agent behavior.

FAQ

How is LangGraph different from LangChain?

LangChain provides chains and agents as high-level abstractions. LangGraph sits underneath as a lower-level orchestration layer that models workflows as graphs with explicit state management. You can use LangGraph without LangChain, though they integrate seamlessly.

Can I use LangGraph with models other than OpenAI?

Yes. LangGraph is model-agnostic. You can use any LangChain chat model integration including Anthropic, Google, Mistral, or local models via Ollama. The graph structure itself has no dependency on any specific model provider.

Does LangGraph support async execution?

Yes. LangGraph supports both sync and async execution. You can define async node functions and use await graph.ainvoke() for non-blocking execution, which is essential for production web servers.
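A minimal stdlib-only sketch of the async pattern (with real LangGraph you would define the node the same way, call await llm.ainvoke(...) inside it, and run the graph with await graph.ainvoke(...)):

```python
import asyncio

async def call_model(state: dict) -> dict:
    # Stand-in for a non-blocking LLM call such as: await llm.ainvoke(state["messages"])
    await asyncio.sleep(0)
    return {"messages": state["messages"] + ["(model reply)"],
            "current_step": "completed"}

async def main():
    result = await call_model(
        {"messages": ["What is LangGraph?"], "current_step": "starting"}
    )
    print(result["current_step"])

asyncio.run(main())  # prints: completed
```

Async nodes let a web server handle other requests while a slow model call is in flight, which is why ainvoke is the default choice in production services.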


#LangGraph #StatefulAgents #Python #GettingStarted #AgentWorkflows #AgenticAI #LearnAI #AIEngineering
