PyTorch Memory Optimization: Activation Checkpointing in Practice
Activation checkpointing trades compute for memory. The 2026 PyTorch patterns and where the tradeoffs actually pay off.
The Memory Problem
During training, intermediate activations from the forward pass are saved for the backward pass. Activation memory grows with sequence length and batch size. For large models or long sequences, activations can dominate memory usage.
Activation checkpointing recomputes activations during the backward pass instead of storing them. It trades extra compute (re-running parts of the forward pass) for memory (fewer stored activations).
What It Looks Like
flowchart LR
Forward[Forward pass: keep checkpoint, drop intermediates] --> Back[Backward pass]
Back --> Re[Re-run forward to recompute]
Re --> Grad[Compute gradients]
Without checkpointing: forward saves all intermediates; backward uses them.
With checkpointing: forward saves only a few "checkpoints"; backward re-runs forward between checkpoints.
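A minimal sketch of the difference, assuming a CUDA GPU; the stack of linear blocks and the sizes are invented for illustration:

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

blocks = nn.ModuleList(
    [nn.Sequential(nn.Linear(4096, 4096), nn.GELU()) for _ in range(16)]
).cuda()
x = torch.randn(32, 4096, device="cuda", requires_grad=True)

def peak_mib(use_checkpoint):
    for p in blocks.parameters():
        p.grad = None
    torch.cuda.reset_peak_memory_stats()
    out = x
    for block in blocks:
        # With checkpointing, intermediates inside `block` are dropped in forward
        # and recomputed during backward; without it, they are all kept.
        out = checkpoint(block, out, use_reentrant=False) if use_checkpoint else block(out)
    out.sum().backward()
    return torch.cuda.max_memory_allocated() / 2**20

print(f"no checkpointing:   {peak_mib(False):.0f} MiB peak")
print(f"with checkpointing: {peak_mib(True):.0f} MiB peak")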
When It Pays Off
- Memory-constrained training (model + batch + activations exceed GPU memory)
- Very long sequences
- Wanting to fit a larger batch size
- Training larger models on existing hardware
When It Hurts
- Memory is not the bottleneck; compute is
- Re-running forward exceeds the GPU compute slack
- The checkpointed layers are expensive to recompute (e.g., attention with a compute-bound kernel such as FlashAttention-3, where re-running the forward is costly)
How to Apply It
PyTorch's torch.utils.checkpoint.checkpoint is the primitive. For typical use:
from torch.utils.checkpoint import checkpoint
# Wrap a layer or block
output = checkpoint(layer, input, use_reentrant=False)
For transformer models, PyTorch's FSDP utilities and many training libraries provide higher-level wrappers that apply checkpointing per layer.
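One such wrapper ships with PyTorch's distributed utilities; a sketch, where model is your module and TransformerBlock is a hypothetical layer class (the import path can shift between PyTorch versions):

import functools
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
    CheckpointImpl, apply_activation_checkpointing, checkpoint_wrapper,
)

# Non-reentrant checkpointing is the recommended implementation in recent PyTorch.
wrapper = functools.partial(checkpoint_wrapper, checkpoint_impl=CheckpointImpl.NO_REENTRANT)

apply_activation_checkpointing(
    model,                                               # your nn.Module (placeholder)
    checkpoint_wrapper_fn=wrapper,
    check_fn=lambda m: isinstance(m, TransformerBlock),  # which submodules to wrap
)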
Selective Checkpointing
Not every layer needs checkpointing. Pattern:
- Attention layers (memory-heavy intermediates such as attention scores and KV tensors): checkpoint
- MLP layers (large hidden activations that are cheap to recompute): checkpoint
- Norm layers: too cheap and small to bother
- Embedding layers: typically not checkpointed
Selective checkpointing balances memory savings with compute cost.
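A manual sketch of that pattern, assuming a hypothetical pre-norm block whose attn, mlp, norm1, and norm2 submodules each take a single tensor:

from torch.utils.checkpoint import checkpoint

def block_forward(block, x):
    # Checkpoint attention: its intermediates are memory-heavy, so drop and recompute them.
    x = x + checkpoint(block.attn, block.norm1(x), use_reentrant=False)
    # Checkpoint the MLP: large hidden activations, cheap to recompute.
    x = x + checkpoint(block.mlp, block.norm2(x), use_reentrant=False)
    # Norms stay outside the checkpoint: their saved activations are tiny.
    return x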
Compute Cost
Full checkpointing re-runs the forward pass during backward, roughly one extra forward per step. Since the backward pass costs about twice the forward, that extra forward adds ~33 percent to total step compute; the backward itself is unchanged. In practice, with selective checkpointing, step time is typically ~10-20 percent slower depending on what's checkpointed.
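The same arithmetic as a quick sanity check, using the rule of thumb that backward costs about twice the forward:

# Rough per-step compute, in arbitrary units.
forward, backward = 1.0, 2.0
baseline = forward + backward                       # 3.0
with_full_recompute = baseline + forward            # one extra forward pass
print((with_full_recompute - baseline) / baseline)  # 0.33 -> ~33% extra compute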
A Concrete Example
Training a 7B model on 8 A100s:
- Without checkpointing: max batch size 4, OOM at 8
- With activation checkpointing on attention layers: max batch size 16, 15% slower per step
- Net throughput: ~3.5x higher (more samples per second)
Memory-constrained training nearly always benefits.
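Taking the example's numbers at face value, the throughput figure checks out:

# Relative samples per second: batch 16 at 1.15x step time vs batch 4 at 1x.
print((16 / 1.15) / (4 / 1.0))  # ~3.48 -> roughly 3.5x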
FSDP Integration
FSDP combines well with activation checkpointing. The combination:
- FSDP shards parameters and grads
- Activation checkpointing reduces activations
- Total memory: substantially smaller
For training large models in 2026, this combination is standard.
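A sketch of the pairing, reusing the hypothetical TransformerBlock from above; exact wrapping order and policies vary by recipe:

import functools
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy

# Shard parameters and gradients at transformer-block granularity.
model = FSDP(
    model,  # placeholder module
    auto_wrap_policy=functools.partial(
        transformer_auto_wrap_policy, transformer_layer_cls={TransformerBlock}
    ),
    device_id=torch.cuda.current_device(),
)
# Then apply the activation-checkpointing wrapper from the earlier sketch to the
# FSDP-wrapped model, so each block also recomputes its activations during backward.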
CPU Offload
A more aggressive variant: offload activations to CPU memory during the forward pass and fetch them back for the backward pass. It is slower than checkpointing, since activations travel over the host-device link instead of being recomputed, but it unlocks larger models.
For very large training, offload combined with checkpointing pushes the boundary further.
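For the activation side, PyTorch exposes a saved-tensor hook that does exactly this; a sketch, where model and batch are placeholders:

import torch

# While the context is active, tensors saved for backward are moved to pinned CPU
# memory; autograd copies them back to the GPU when the backward pass needs them.
with torch.autograd.graph.save_on_cpu(pin_memory=True):
    loss = model(batch).sum()
loss.backward()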
When to Use Which
flowchart TD
Q1{Memory-bound?} -->|No| Skip[Skip; no benefit]
Q1 -->|Yes| Q2{Compute capacity?}
Q2 -->|Plenty| Check[Activation checkpointing]
Q2 -->|Tight| Off[CPU offload]
Most teams should reach for activation checkpointing first. CPU offload is heavier and slower.
Validation
Validate that checkpointing did not change training behavior:
- Loss curve unchanged
- Same final accuracy
- Spot-checked layer outputs and gradients match (within numerical noise)
Subtle bugs in checkpointing can corrupt gradients silently.
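A quick layer-level check along those lines, where block is the layer under test and the tolerance should match your dtype:

import torch
from torch.utils.checkpoint import checkpoint

x = torch.randn(8, 4096, requires_grad=True)
x_ref = x.detach().clone().requires_grad_(True)

out = checkpoint(block, x, use_reentrant=False)  # checkpointed path
out_ref = block(x_ref)                           # plain path

out.sum().backward()
out_ref.sum().backward()

# Outputs and input gradients should agree up to numerical noise.
assert torch.allclose(out, out_ref, atol=1e-5)
assert torch.allclose(x.grad, x_ref.grad, atol=1e-5)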