Paged Attention and Its Descendants: Memory-Efficient LLM Serving in 2026
PagedAttention launched a family of memory-management techniques that make modern LLM serving possible. The 2026 descendants and what they fix.
What PagedAttention Solved
In 2023, the dominant problem in LLM serving was KV-cache memory fragmentation. Traditional implementations allocated contiguous KV-cache slots per sequence, sized for the maximum context the sequence might reach. Most of that allocation was wasted, and external fragmentation made it hard to fit more sequences.
PagedAttention (Kwon et al., "Efficient Memory Management for Large Language Model Serving with PagedAttention") fixed this by paging the KV-cache: sequences allocate fixed-size blocks on demand. Memory utilization went from roughly 30 percent to roughly 95 percent, and vLLM ate the world.
Three years later, the family has grown. This piece walks through what shipped after PagedAttention and what each addition fixes.
The Original Idea
flowchart LR
Seq[Sequence's KV cache] --> P1[Block 1: 16 tokens]
Seq --> P2[Block 2: 16 tokens]
Seq --> P3[Block 3: 16 tokens]
Seq --> Tab[Block table:<br/>logical to physical]
Tab --> Pool[(Physical block pool)]
The KV-cache is split into fixed-size blocks (typically 16 tokens each). A per-sequence block table maps logical positions to physical blocks. Unused logical positions consume zero physical memory.
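Here is a minimal sketch of that mapping in Python. Everything in it (BlockTable, physical_slot, the free-list pool) is illustrative rather than vLLM's actual internals:

BLOCK_SIZE = 16  # tokens per physical block

class BlockTable:
    def __init__(self, pool: list[int]):
        self.pool = pool             # free physical block IDs, shared engine-wide
        self.blocks: list[int] = []  # logical block index -> physical block ID

    def grow_to(self, num_tokens: int) -> None:
        # Allocate physical blocks lazily, only as the sequence actually grows.
        blocks_needed = -(-num_tokens // BLOCK_SIZE)  # ceiling division
        while len(self.blocks) < blocks_needed:
            if not self.pool:
                raise MemoryError("block pool exhausted; preempt a sequence")
            self.blocks.append(self.pool.pop())

    def physical_slot(self, token_pos: int) -> tuple[int, int]:
        # Map a logical token position to (physical block ID, offset within block).
        return self.blocks[token_pos // BLOCK_SIZE], token_pos % BLOCK_SIZE

The indirection is the whole trick: attention kernels read through the table, so a sequence's blocks can sit anywhere in the pool.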
RadixAttention (SGLang)
The first big extension: blocks are deduplicated across sequences. If two sequences share a prefix, they share the same physical blocks for the prefix tokens. The data structure is a radix tree of prefix → block list.
flowchart TB
R[Root] --> P1[Prefix: 'You are a helpful...']
P1 --> B1[Sequence A continuation]
P1 --> B2[Sequence B continuation]
P1 --> B3[Sequence C continuation]
This was the unlock for chat and RAG workloads in 2024-2025. Common system prompts, retrieved documents, and conversation prefixes are physically shared.
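A toy version of the lookup, assuming one token per tree node (SGLang's real structure keys nodes on token-ID segments and stores block lists per node, but the walk is the same):

class RadixNode:
    def __init__(self):
        self.children: dict[int, "RadixNode"] = {}
        self.block_id: int | None = None  # physical KV block covering this token

def insert(root: RadixNode, tokens: list[int], blocks: list[int]) -> None:
    # Record a finished sequence so later requests can share its blocks.
    node = root
    for tok, block in zip(tokens, blocks):
        node = node.children.setdefault(tok, RadixNode())
        node.block_id = block

def match_prefix(root: RadixNode, tokens: list[int]) -> list[int]:
    # Walk down while tokens match; collect every cached block along the way.
    node, shared = root, []
    for tok in tokens:
        if tok not in node.children:
            break
        node = node.children[tok]
        if node.block_id is not None:
            shared.append(node.block_id)
    return shared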
Prefix Caching (vLLM, others)
vLLM's "prefix caching" is a simpler version of the same idea: hash incoming prompt prefixes, look up matching cached blocks, and reuse them. Less elegant than RadixAttention but lower-overhead, and enabled by default in vLLM as of 2026.
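The usual scheme chains hashes: each full block's key covers its own tokens plus the previous block's key, so a run of matching keys implies a matching prefix. A sketch with illustrative names (block_keys, reuse_blocks), not vLLM's exact hashing:

import hashlib

def block_keys(tokens: list[int], block_size: int = 16) -> list[str]:
    # Only full blocks get keys; a partial trailing block is never cached.
    keys, prev = [], ""
    for i in range(0, len(tokens) - len(tokens) % block_size, block_size):
        chunk = tokens[i:i + block_size]
        prev = hashlib.sha256((prev + repr(chunk)).encode()).hexdigest()
        keys.append(prev)
    return keys

def reuse_blocks(cache: dict[str, int], tokens: list[int]) -> list[int]:
    # Return physical block IDs for the longest cached prefix.
    reused = []
    for key in block_keys(tokens):
        if key not in cache:
            break
        reused.append(cache[key])
    return reused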
Disk-Backed KV Cache (2025-2026)
For very long-running sessions or massive prefix reuse, hot blocks live on GPU, warm on CPU, cold on NVMe. The block table is extended to include "where does this block currently live?" The block manager swaps blocks in and out as needed.
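A sketch of what the extended table might track, with placeholder swap logic standing in for real GPU-CPU-NVMe copies and an LRU-ish eviction policy assumed purely for illustration:

from enum import Enum

class Tier(Enum):
    GPU = 0
    CPU = 1
    NVME = 2

class TieredBlock:
    def __init__(self, block_id: int):
        self.block_id = block_id
        self.tier = Tier.GPU
        self.last_used = 0

def ensure_on_gpu(block: TieredBlock, step: int) -> None:
    # Fault a block back into GPU memory before attention reads it.
    if block.tier is not Tier.GPU:
        block.tier = Tier.GPU  # placeholder for the actual CPU/NVMe -> GPU copy
    block.last_used = step

def evict_coldest(blocks: list[TieredBlock]) -> None:
    # Demote the least-recently-used GPU block one tier down.
    gpu_blocks = [b for b in blocks if b.tier is Tier.GPU]
    if gpu_blocks:
        coldest = min(gpu_blocks, key=lambda b: b.last_used)
        coldest.tier = Tier(coldest.tier.value + 1)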
Distributed Block Pool (2026)
For multi-GPU deployments, the block pool spans GPUs connected via NVLink. The block manager picks the closest copy when serving a token. NVIDIA Blackwell's NVLink Switch makes this practical at the 72-GPU rack scale.
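The placement decision itself is simple once a topology table exists. A sketch, where the distance matrix stands in for NVLink topology discovery:

def pick_replica(replicas: dict[int, set[int]],
                 distance: list[list[int]],
                 block_id: int,
                 serving_gpu: int) -> int:
    # replicas: block ID -> GPU ranks holding a copy
    # distance[a][b]: hop cost from GPU a to GPU b (0 = local)
    holders = replicas[block_id]
    return min(holders, key=lambda gpu: distance[serving_gpu][gpu])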
Speculative Block Allocation
When a sequence's KV-cache nears the end of its current allocation, the engine speculatively pre-allocates the next block, reducing allocation stalls on the decode hot path. Implemented in vLLM 0.7 and TensorRT-LLM.
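A sketch of the policy, reusing the BlockTable from the first sketch; the two-token margin is invented for illustration, not what either engine actually uses:

def maybe_preallocate(table: BlockTable, seq_len: int, margin: int = 2) -> None:
    # Grab the next block just before the decode step that would need it,
    # so the allocation happens off the token-generation critical path.
    capacity = len(table.blocks) * BLOCK_SIZE
    if capacity - seq_len <= margin and table.pool:
        table.blocks.append(table.pool.pop())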
Block Reuse Across Models (Experimental)
If two models share architecture (or relevant prefixes), can their KV-caches share blocks? Research-stage in 2026; partial results from Berkeley and CMU.
A Modern Memory Manager
flowchart TB
Req[New Request] --> Hash[Hash prompt prefix]
Hash --> Lookup[Lookup in radix tree]
Lookup -->|Hit| Reuse[Reuse blocks]
Lookup -->|Miss| Alloc[Allocate fresh blocks]
Reuse --> Sched[Scheduler]
Alloc --> Sched
Sched --> Run[Run forward pass]
Run --> Update[Update cache]
vLLM 0.7 and SGLang 0.4 both implement essentially this pipeline by default in 2026.
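Stitching the earlier sketches together, the admission path looks roughly like this (scheduler and forward pass stubbed out, block_keys and reuse_blocks as defined above):

def admit_request(cache: dict[str, int], pool: list[int],
                  tokens: list[int]) -> list[int]:
    # Hit path: blocks we can physically share for the longest cached prefix.
    blocks = reuse_blocks(cache, tokens)
    # Miss tail: allocate fresh blocks and publish them for future requests.
    for key in block_keys(tokens)[len(blocks):]:
        block = pool.pop()
        cache[key] = block
        blocks.append(block)
    return blocks  # handed to the scheduler; the forward pass fills them in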
What This Means for Workloads
Three workload shapes get the largest wins:
- Heavy prefix reuse: chat with system prompts, RAG with shared retrieved docs, agentic loops with shared scratchpads. Cost reductions of 5-10x are commonly reported here.
- Long-tail context lengths: when sequences vary widely in context, paged allocation captures the savings naive contiguous allocation cannot.
- High concurrency, modest context: more sequences fit in memory, throughput rises.
For workloads with little prefix reuse and uniform context lengths (some batch-inference jobs), the gains are smaller but still positive.
Where It Hurts
- Tiny block sizes: paging overhead becomes noticeable at very small block sizes; 16 tokens is roughly the sweet spot
- Very high turnover: short, one-shot requests with no prefix reuse see little benefit
- Interaction with quantization: paged KV with FP4 storage is still an active research area; some block-size and microscaling-block-size interactions need tuning
Sources
- vLLM PagedAttention paper — https://arxiv.org/abs/2309.06180
- SGLang RadixAttention — https://lmsys.org/blog/2024-01-17-sglang
- "Distributed KV cache management" 2025 — https://arxiv.org/abs/2403.06504
- vLLM documentation — https://docs.vllm.ai
- Anthropic prompt caching — https://docs.anthropic.com