
LangChain Fundamentals: Chains, Prompts, and Language Models Explained

Master the core building blocks of LangChain including chains, prompt templates, language model wrappers, and the LangChain Expression Language for composing AI applications.

What Is LangChain and Why Does It Matter

LangChain is an open-source framework for building applications powered by large language models. Rather than writing raw API calls and managing prompt formatting, response parsing, and chaining logic yourself, LangChain provides composable abstractions that let you assemble complex LLM workflows from reusable components.

The framework has evolved significantly since its inception. Modern LangChain centers on three ideas: prompt templates for parameterized inputs, language model wrappers that normalize different providers behind a common interface, and chains that compose these pieces into pipelines. Understanding these fundamentals is essential before moving on to agents, RAG, or multi-step workflows.

Prompt Templates

A prompt template is a string with placeholders that get filled in at runtime. Instead of concatenating strings manually, you define a template once and invoke it with different variables.

[Diagram: prompt development pipeline. A task spec flows through a system prompt (role plus rules), few-shot examples (3 to 5), variable injection (Jinja or f-string), a chain-of-thought scratchpad, and output constraints (JSON schema) into an LLM call; an offline eval (LLM-as-judge plus regex) gates the score, promoting passing prompts to production with a pinned version and sending failing ones back for revision.]

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that speaks {language}."),
    ("human", "{question}"),
])

# Invoke the template with variables
formatted = prompt.invoke({
    "language": "Spanish",
    "question": "What is machine learning?"
})
print(formatted.messages)

LangChain provides several template types. ChatPromptTemplate works with chat models that expect message lists. PromptTemplate handles plain string completion models. FewShotPromptTemplate lets you inject dynamic examples. All templates are Runnable objects, which means they can be composed using the pipe operator.
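
For example, FewShotPromptTemplate pairs a per-example template with a list of examples. A minimal sketch (the arithmetic examples here are illustrative, not from the article):

from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

# Template applied to each individual example
example_prompt = PromptTemplate.from_template("Q: {question}\nA: {answer}")

few_shot = FewShotPromptTemplate(
    examples=[
        {"question": "What is 2 + 2?", "answer": "4"},
        {"question": "What is 3 * 3?", "answer": "9"},
    ],
    example_prompt=example_prompt,
    prefix="Answer the question concisely.",
    suffix="Q: {input}\nA:",
    input_variables=["input"],
)

print(few_shot.invoke({"input": "What is 10 / 2?"}).to_string())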

Language Model Wrappers

LangChain wraps model providers behind two interfaces: BaseChatModel for chat models and BaseLLM for completion models. In practice, nearly all modern usage goes through chat models.

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# OpenAI
gpt = ChatOpenAI(model="gpt-4o", temperature=0)

# Anthropic
claude = ChatAnthropic(model="claude-sonnet-4-20250514", temperature=0)

# Both share the same interface
response = gpt.invoke("Explain gradient descent in one sentence.")
print(response.content)

The wrapper handles authentication, retry logic, token counting, and response normalization. You can swap providers without changing downstream code because the interface is consistent.
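
As a sketch of what that consistency buys you, any function typed against the shared base class accepts either model above (summarize is a made-up helper for illustration, not a LangChain API):

from langchain_core.language_models import BaseChatModel

def summarize(model: BaseChatModel, text: str) -> str:
    # Any chat model works here because they share one interface
    return model.invoke(f"Summarize in one sentence: {text}").content

# Same call site, two different providers
print(summarize(gpt, "LangChain composes LLM components into pipelines."))
print(summarize(claude, "LangChain composes LLM components into pipelines."))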

Chains and the Pipe Operator

A chain connects a prompt template to a model and optionally to an output parser. With LangChain Expression Language (LCEL), you compose chains using the | pipe operator.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template(
    "Explain {concept} in simple terms for a beginner."
)
model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Compose the chain
chain = prompt | model | parser

# Run it
result = chain.invoke({"concept": "neural networks"})
print(result)  # Plain string output

The pipe operator connects components left to right. The output of prompt feeds into model, and the output of model feeds into parser. Each component is a Runnable — an object that implements invoke, batch, stream, and their async counterparts.

Runnables: The Universal Interface

Every component in LCEL implements the Runnable interface. This means any component supports:

  • invoke(input) — process a single input synchronously
  • ainvoke(input) — async version
  • batch(inputs) — process multiple inputs with concurrency
  • stream(input) — yield output chunks as they arrive

# Streaming example
for chunk in chain.stream({"concept": "transformers"}):
    print(chunk, end="", flush=True)
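
The async variants mirror these exactly. A minimal sketch reusing the same chain:

import asyncio

async def main():
    # ainvoke is the async counterpart of invoke
    result = await chain.ainvoke({"concept": "attention"})
    print(result)

asyncio.run(main())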

This uniformity means that whether your component is a prompt, a model, a retriever, or a custom function, it plugs into the same composition framework.
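
To see a custom function plug in, wrap it in RunnableLambda. A sketch (the uppercasing step is purely illustrative):

from langchain_core.runnables import RunnableLambda

# Wrap a plain Python function so it composes like any other Runnable
shout = RunnableLambda(lambda text: text.upper())

loud_chain = chain | shout
print(loud_chain.invoke({"concept": "embeddings"}))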

Putting It All Together

Here is a practical example that builds a chain accepting a topic and difficulty level, then returns a structured explanation.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a computer science tutor. Adjust your "
               "explanation to the {level} level."),
    ("human", "Explain {topic}."),
])

chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

# Batch processing
results = chain.batch([
    {"topic": "recursion", "level": "beginner"},
    {"topic": "recursion", "level": "advanced"},
])

for r in results:
    print(r[:100], "...")
    print("---")

The batch method runs both requests concurrently, so you make fuller use of your API rate limits than sequential calls would.
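
If you need to cap that concurrency explicitly, batch accepts a config dict. A sketch (the limit of 5 is an arbitrary example value, not a recommended default):

inputs = [{"topic": t, "level": "beginner"} for t in ("hashing", "sorting", "graphs")]

# max_concurrency bounds how many requests run in parallel
results = chain.batch(inputs, config={"max_concurrency": 5})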

FAQ

What is the difference between LangChain and calling the OpenAI API directly?

LangChain adds composability, provider abstraction, and a unified interface on top of raw API calls. You can swap models, chain components, add memory, and integrate tools without rewriting your application logic. For simple single-call use cases, the raw API is fine. For multi-step workflows, LangChain reduces boilerplate significantly.

Do I need to use LCEL or can I use the legacy chain classes?

Modern LangChain strongly recommends LCEL (the pipe operator approach). Legacy classes like LLMChain and SequentialChain still work but are no longer the primary API. LCEL provides streaming, batching, and async support automatically for every chain you build.
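
For contrast, a sketch of the two styles side by side, reusing the prompt and model from earlier (assuming the classic langchain package is installed for the legacy import):

from langchain.chains import LLMChain  # legacy, emits a deprecation warning

# Legacy style
legacy_chain = LLMChain(llm=model, prompt=prompt)

# LCEL equivalent, with streaming, batching, and async for free
modern_chain = prompt | model | StrOutputParser()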

Does LangChain only work with OpenAI models?

No. LangChain supports dozens of providers through integration packages including Anthropic, Google, Mistral, Ollama for local models, and many more. You install the relevant package (e.g., langchain-anthropic) and swap the model wrapper.
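
A sketch of such a swap using a local model via Ollama (assuming Ollama is running locally with the llama3.1 model pulled):

# pip install langchain-ollama
from langchain_ollama import ChatOllama

local_model = ChatOllama(model="llama3.1", temperature=0)

# The rest of the chain is unchanged
local_chain = prompt | local_model | StrOutputParser()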


#LangChain #LLM #PromptEngineering #Python #AIFramework #AgenticAI #LearnAI #AIEngineering
