LangGraph Tool Nodes: Integrating Function Calling into Graph Workflows
Learn how to integrate LLM function calling into LangGraph workflows using ToolNode, tool binding, automatic tool execution, and structured error handling for reliable agent behavior.
Tools Turn Agents into Actors
An LLM that can only generate text is a reasoner. An LLM that can call tools is an actor — it can search the web, query databases, send emails, and modify external systems. LangGraph provides first-class support for tool integration through its ToolNode class, which automatically executes tool calls from LLM responses and feeds results back into the conversation.
Defining Tools
Tools in LangGraph are defined with LangChain's @tool decorator. Each tool is a plain Python function whose docstring tells the LLM when and how to call it:
from langchain_core.tools import tool

@tool
def search_web(query: str) -> str:
    """Search the web for current information about a topic."""
    # Real implementation would call a search API
    return f"Top results for '{query}': [simulated search results]"

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression and return the result."""
    try:
        result = eval(expression)  # Use a safe evaluator in production
        return str(result)
    except Exception as e:
        return f"Error: {e}"

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a given city."""
    return f"Weather in {city}: 72F, partly cloudy"
The function signature and docstring are automatically converted into the JSON schema that the LLM sees for function calling.
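To make that concrete, here is a hand-written approximation of the schema an OpenAI-style API would receive for search_web. This is an illustrative sketch of the shape, not LangChain's exact output — field names follow the OpenAI tools format and vary by provider:

```python
# Hand-written approximation of the function-calling schema derived
# from search_web's signature and docstring. The docstring becomes
# the description; the typed parameter becomes a JSON-schema property.
search_web_schema = {
    "type": "function",
    "function": {
        "name": "search_web",
        "description": "Search the web for current information about a topic.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
            },
            "required": ["query"],
        },
    },
}
```

Because the schema is derived entirely from the signature and docstring, renaming a parameter or rewriting the docstring directly changes what the model sees.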
Binding Tools to the LLM
Before the LLM can call tools, you must bind them to the model:
from langchain_openai import ChatOpenAI
tools = [search_web, calculate, get_weather]
llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools(tools)
The bind_tools method attaches the tool schemas to every LLM request. The model now knows these functions exist and can generate structured tool call requests in its responses.
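When the model decides to use a tool, its reply carries structured tool calls instead of (or alongside) text. Here is a sketch of the shape LangChain normalizes these into on the message's tool_calls attribute — treat the exact id as hypothetical, but the name/args/id fields match the convention:

```python
# Illustrative shape of an AIMessage's tool_calls list after the model
# requests a function call. Arguments arrive already parsed from the
# model's JSON into a Python dict.
ai_tool_calls = [
    {
        "name": "get_weather",
        "args": {"city": "Tokyo"},
        "id": "call_abc123",  # hypothetical id assigned by the provider
    },
]
```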
Using ToolNode in the Graph
LangGraph provides a ToolNode that automatically executes tool calls found in the last AI message:
from typing import TypedDict, Annotated, Literal
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

def call_agent(state: AgentState) -> dict:
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

def should_continue(state: AgentState) -> Literal["tools", "end"]:
    last = state["messages"][-1]
    if hasattr(last, "tool_calls") and last.tool_calls:
        return "tools"
    return "end"

tool_node = ToolNode(tools)

builder = StateGraph(AgentState)
builder.add_node("agent", call_agent)
builder.add_node("tools", tool_node)
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", should_continue, {
    "tools": "tools",
    "end": END,
})
builder.add_edge("tools", "agent")
graph = builder.compile()
When the agent generates tool calls, the router sends execution to the ToolNode. The node looks up each tool by name, calls it with the provided arguments, wraps the results in ToolMessage objects, and returns them to the state. The edge from tools back to agent creates the agentic loop.
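Conceptually, the ToolNode's dispatch step fits in a few lines of plain Python: look up each requested tool by name, invoke it with the parsed arguments, and package each result with the id of the call it answers. This is a simplified stand-in (with fake tool functions), not LangGraph's implementation:

```python
# Simplified stand-in for ToolNode dispatch: a name-to-function
# registry, one result message per tool call.
def fake_search_web(query: str) -> str:
    return f"Top results for '{query}'"

def fake_calculate(expression: str) -> str:
    return str(eval(expression))  # demo only; use a safe evaluator

registry = {"search_web": fake_search_web, "calculate": fake_calculate}

def execute_tool_calls(tool_calls: list) -> list:
    results = []
    for call in tool_calls:
        fn = registry[call["name"]]
        output = fn(**call["args"])
        # ToolMessage equivalent: the content plus the call id it answers
        results.append({"tool_call_id": call["id"], "content": output})
    return results

messages = execute_tool_calls([
    {"name": "calculate", "args": {"expression": "42 * 17"}, "id": "c1"},
    {"name": "search_web", "args": {"query": "LangGraph"}, "id": "c2"},
])
```

Tagging each result with its call id is what lets the model match answers back to the requests it made, even when several tools ran in one step.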
Running the Tool-Calling Agent
from langchain_core.messages import HumanMessage

result = graph.invoke({
    "messages": [HumanMessage(
        content="What is the weather in Tokyo and what is 42 * 17?"
    )]
})

for msg in result["messages"]:
    print(f"{msg.__class__.__name__}: {msg.content[:100]}")
The LLM may generate multiple tool calls in a single response. The ToolNode executes all of them and returns all results before the agent node runs again.
Handling Tool Errors
ToolNode's error behavior is controlled by the handle_tool_errors flag. In recent LangGraph versions it defaults to True; if it has been disabled, tool exceptions propagate and crash the graph. Setting it explicitly makes the intent clear:
tool_node = ToolNode(tools, handle_tool_errors=True)
With this flag, if a tool raises an exception, the error message is returned as the tool result instead of crashing. The LLM sees the error and can decide to retry with different arguments or inform the user.
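The behavior is roughly equivalent to wrapping each tool invocation in a try/except and returning the stringified exception as the tool's result. A sketch of the idea, not LangGraph's internals:

```python
# Sketch of graceful tool-error handling: instead of letting the
# exception escape and crash the graph, return its message as the
# tool result so the LLM can read it and react.
def run_tool_safely(fn, args: dict) -> str:
    try:
        return str(fn(**args))
    except Exception as e:
        return f"Error: {type(e).__name__}: {e}"

def divide(a: float, b: float) -> float:
    return a / b

ok = run_tool_safely(divide, {"a": 10, "b": 2})   # "5.0"
err = run_tool_safely(divide, {"a": 1, "b": 0})   # error string, no crash
```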
For custom error handling, wrap your tool logic:
@tool
def safe_database_query(sql: str) -> str:
    """Run a read-only SQL query against the analytics database."""
    try:
        results = execute_query(sql)
        return format_results(results)
    except DatabaseError as e:
        return f"Query failed: {e}. Please check syntax and try again."
    except TimeoutError:
        return "Query timed out. Try a simpler query or add filters."
Returning error strings as tool results — rather than raising exceptions — gives the LLM the chance to self-correct, which is the hallmark of robust agentic behavior.
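To make the self-correction idea concrete, here is a toy loop where the caller reads the error string and retries with adjusted arguments. In the real graph, the LLM plays the "decide what to retry" role; the flaky_query function here is purely illustrative:

```python
# Toy tool that fails unless the query is constrained, mimicking the
# timeout message from safe_database_query above.
def flaky_query(sql: str) -> str:
    if "LIMIT" not in sql:
        return "Query timed out. Try a simpler query or add filters."
    return "3 rows returned"

# Stand-in for the agent loop: read the error, adjust, retry once.
result = flaky_query("SELECT * FROM events")
if result.startswith("Query timed out"):
    result = flaky_query("SELECT * FROM events LIMIT 100")
```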
FAQ
Can I use tools from different providers like Tavily or Wikipedia?
Yes. Any LangChain-compatible tool works with ToolNode. The LangChain community package includes dozens of pre-built tool integrations for search engines, databases, APIs, and file systems. Just add them to the tools list.
How does the LLM decide which tool to call?
The LLM selects tools based on the function name, docstring, and parameter schema. Writing clear, specific docstrings is the most effective way to improve tool selection accuracy. Ambiguous descriptions lead to incorrect tool calls.
Can a tool call trigger another tool call?
Not directly. Tools return results to the state, then the agent node runs again and the LLM decides whether to make additional tool calls. This loop continues until the LLM generates a response without tool calls, at which point the router sends execution to the end node.
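This termination condition amounts to a simple loop: keep alternating between the agent and the tools until the model's reply contains no tool calls. A pure-Python sketch with a scripted fake model standing in for the LLM:

```python
# Scripted fake model: first reply requests a tool, second is final text.
script = [
    {"content": "", "tool_calls": [{"name": "get_weather",
                                    "args": {"city": "Tokyo"}, "id": "c1"}]},
    {"content": "It's 72F and partly cloudy in Tokyo.", "tool_calls": []},
]

def fake_llm(messages: list) -> dict:
    # Pick the scripted reply based on how many AI turns have happened.
    return script[sum(1 for m in messages if m.get("role") == "ai")]

def run_agent_loop(user_input: str) -> str:
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = fake_llm(messages)
        messages.append({"role": "ai", **reply})
        if not reply["tool_calls"]:           # router's "end" branch
            return reply["content"]
        for call in reply["tool_calls"]:      # router's "tools" branch
            messages.append({"role": "tool",
                             "content": f"Weather in {call['args']['city']}: 72F"})

answer = run_agent_loop("What's the weather in Tokyo?")
```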
#LangGraph #ToolCalling #FunctionCalling #ToolNode #Python #AgenticAI #LearnAI #AIEngineering