RAG for Code: Indexing Repos and Retrieving Relevant Snippets
Code RAG differs from text RAG. Here are the 2026 patterns for AST-aware chunking, function-level embedding, and snippet ranking.
What's Different About Code
Source code has structure: functions, classes, modules, imports. Its semantics are tied to specific identifiers, and each function has context: its callers, callees, and type signature. Treating code as plain text and applying standard RAG produces poor retrieval. Code RAG is its own pattern.
By 2026 the techniques are mature. Cursor, Claude Code, GitHub Copilot, and many internal-codebase Q&A tools all rely on them.
AST-Aware Chunking
flowchart LR
Repo[Repo] --> Parse[AST parser]
Parse --> Func[Function-level chunks]
Parse --> Cls[Class-level chunks]
Parse --> Mod[Module-level chunks]
Func --> Embed[Embed each]
Chunk by function or class boundary, not by token count. Tools like Tree-sitter parse multiple languages and emit function and class boundaries cleanly. Each chunk is a semantically meaningful unit; a minimal chunking sketch follows the list below.
Benefits:
- Retrieval returns whole functions, not arbitrary fragments
- Context for the LLM is coherent
- The LLM can reason about the function as a unit
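A minimal chunking sketch using the Tree-sitter Python bindings with the tree-sitter-python grammar package. Binding APIs vary slightly across versions, and decorated definitions or nested functions would need extra handling:

```python
# Sketch: function/class-level chunking, assuming the `tree-sitter`
# and `tree-sitter-python` packages (APIs differ across versions).
from tree_sitter import Language, Parser
import tree_sitter_python as tspython

PY_LANGUAGE = Language(tspython.language())
parser = Parser(PY_LANGUAGE)  # older bindings: Parser() + set_language()

def chunk_source(source: bytes) -> list[dict]:
    """One chunk per top-level function or class definition."""
    tree = parser.parse(source)
    chunks = []
    for node in tree.root_node.children:
        if node.type in ("function_definition", "class_definition"):
            name = node.child_by_field_name("name")
            chunks.append({
                "name": source[name.start_byte:name.end_byte].decode(),
                "kind": node.type,
                "text": source[node.start_byte:node.end_byte].decode(),
                "start_line": node.start_point[0] + 1,  # 1-indexed
            })
    return chunks

chunks = chunk_source(open("payments.py", "rb").read())
```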
Embedding Models for Code
Code-specific embedding models work better than text models:
- Voyage Code 3 — strong code embedding model in 2026
- text-embedding-3-large — the OpenAI default; competitive on code
- StarCoder embedding variants
- BGE-Code
For embedding source code, code-tuned models substantially outperform text-only models.
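As a sketch of the indexing step, here is function-chunk embedding via the voyageai Python client. This assumes a VOYAGE_API_KEY in the environment; the model name and client interface may differ by version:

```python
# Sketch: embedding function-level chunks with a code-tuned model.
import voyageai

vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment

def embed_chunks(chunks: list[dict]) -> list[list[float]]:
    texts = [c["text"] for c in chunks]
    result = vo.embed(texts, model="voyage-code-3", input_type="document")
    return result.embeddings
```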
Metadata Augmentation
Each chunk should carry metadata:
- File path
- Function or class name
- Module
- Language
- Imports the function uses
- Functions it calls
- Functions that call it (callers)
This metadata enables filtering and ranking beyond pure embedding similarity.
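One way to carry this is a per-chunk record like the sketch below; the call-graph fields would be filled by a separate static-analysis pass over the AST:

```python
# Sketch of a chunk record carrying the metadata above.
from dataclasses import dataclass, field

@dataclass
class CodeChunk:
    text: str
    file_path: str       # e.g. "src/billing/payments.py"
    symbol: str          # function or class name
    module: str          # e.g. "billing.payments"
    language: str        # e.g. "python"
    imports: list[str] = field(default_factory=list)
    calls: list[str] = field(default_factory=list)    # callees
    callers: list[str] = field(default_factory=list)  # reverse edges

# At query time, metadata supports filters like:
#   language == "python" and file_path.startswith("src/billing/")
```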
Multi-Granularity Indexing
A 2026 pattern: index multiple granularities of the same code:
- Function-level chunks
- File-level chunks
- Module-level summaries (LLM-generated)
A query can match at any granularity. Module summaries help with high-level questions ("what does this codebase do"); function chunks help with specific questions.
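A sketch of the indexing loop, storing all three granularities in one index and distinguishing them by metadata. Here `index.add` and `llm_summarize` are hypothetical stand-ins for your vector store and summarization call:

```python
# Sketch: one index, three granularities, separated by metadata.
def index_repo(modules, index, llm_summarize):
    for module in modules:  # hypothetical module objects from the chunker
        for func in module.functions:
            index.add(text=func.text,
                      metadata={"granularity": "function", "module": module.name})
        index.add(text=module.source,
                  metadata={"granularity": "file", "module": module.name})
        summary = llm_summarize(module.source)  # LLM-generated module summary
        index.add(text=summary,
                  metadata={"granularity": "module", "module": module.name})
```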
Query Patterns
flowchart TB
Patterns[Query patterns] --> Q1[Find function by behavior]
Patterns --> Q2[Find usages of a name]
Patterns --> Q3[Find similar code]
Patterns --> Q4[Find files relevant to a task]
Different query patterns benefit from different retrieval strategies:
- "Find function by behavior": vector retrieval on function chunks
- "Find usages of a name": grep / symbol-search index
- "Find similar code": code-embedding similarity
- "Find files relevant to a task": file-level + module summaries
The right code RAG system combines vector retrieval with symbol-aware indexing.
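A minimal, illustrative router along these lines. The patterns and strategy names here are hypothetical; production routers often use an LLM classifier instead:

```python
# Sketch: heuristic routing of a code question to a retrieval strategy.
import re

def route(query: str) -> str:
    if re.search(r"\busages? of\b|\bcallers? of\b", query, re.I):
        return "symbol_search"
    if re.search(r"\bsimilar\b|\blike this\b", query, re.I):
        return "code_embedding_similarity"
    if re.search(r"\bwhich files?\b|\brelevant to\b", query, re.I):
        return "file_and_module_summaries"
    return "vector_on_function_chunks"  # default: find by behavior
```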
Symbol Index
Even with vector retrieval, a symbol index (built with ctags or derived from an LSP server) is invaluable. For "find usages of processPayment", a symbol index answers directly; vector similarity only guesses.
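A sketch of building such an index from Universal Ctags JSON output. This assumes `ctags` (Universal Ctags) is on the PATH; note that ctags indexes definition sites, while usages would come from grep or an LSP references query:

```python
# Sketch: symbol index from Universal Ctags JSON output.
import json
import subprocess
from collections import defaultdict

def build_symbol_index(repo_path: str) -> dict[str, list[dict]]:
    out = subprocess.run(
        ["ctags", "-R", "--output-format=json", "--fields=+n", "-f", "-", repo_path],
        capture_output=True, text=True, check=True,
    ).stdout
    index = defaultdict(list)
    for line in out.splitlines():
        tag = json.loads(line)
        if tag.get("_type") == "tag":
            index[tag["name"]].append({"path": tag["path"], "line": tag.get("line")})
    return index

# build_symbol_index(".")["processPayment"] -> definition sites, directly
```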
Hybrid Retrieval for Code
The 2026 hybrid for code:
- Vector retrieval on function and module chunks
- Symbol search for specific names
- BM25 for rare exact terms (error messages, unusual identifiers)
- File-path heuristics ("test files for Y")
Results from all channels are fused (for example with reciprocal rank fusion), then the top candidates are re-ranked.
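A sketch of the fusion step using reciprocal rank fusion; the input rankings `vector_hits`, `symbol_hits`, and `bm25_hits` stand in for the outputs of the channels above:

```python
# Sketch: reciprocal rank fusion across retrieval channels.
from collections import defaultdict

def rrf_fuse(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# fused = rrf_fuse([vector_hits, symbol_hits, bm25_hits])
```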
Tool-Use Layer
In 2026 most code RAG sits inside agents. The agent has tools:
- grep: regex search
- semantic_search: vector retrieval
- get_function: by symbol name
- get_file: full file
- run_tests: validation
The agent picks tools based on the question. This is more powerful than pure RAG.
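A minimal sketch of such a tool registry. Here `run_grep`, `vector_store`, and `symbol_index` are hypothetical stand-ins; real agents expose these through the model's tool-calling schema:

```python
# Sketch: tool registry mirroring the list above (stand-in implementations).
import subprocess

TOOLS = {
    "grep": lambda pattern: run_grep(pattern),                 # regex search
    "semantic_search": lambda q: vector_store.search(q, k=8),  # vector retrieval
    "get_function": lambda name: symbol_index[name],           # by symbol name
    "get_file": lambda path: open(path).read(),                # full file
    "run_tests": lambda: subprocess.run(["pytest", "-q"]),     # validation
}
```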
A Production Example
For a Cursor-style codebase agent:
flowchart LR
Q[User question] --> Agent[Code agent]
Agent --> grep[grep / symbol]
Agent --> sem[semantic_search]
Agent --> file[get_file]
Agent --> Build[Build context]
Build --> Gen[Generate]
The agent assembles context from multiple tools, then generates the answer with citations to specific files and lines.
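A sketch of the context-assembly step, assuming each retrieved hit carries a path, a line range, and text:

```python
# Sketch: assemble multi-tool results into a cited context block.
def build_context(hits: list[dict]) -> str:
    parts = []
    for h in hits:
        header = f"### {h['path']}:{h['start_line']}-{h['end_line']}"
        parts.append(f"{header}\n{h['text']}")
    return "\n\n".join(parts)

# The generator is then instructed to cite the path:line headers it used.
```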
Common Failure Modes
- Token-based chunking mid-function
- Generic text embeddings on code
- No symbol index, only vectors
- No metadata, so retrieval cannot filter
- No way to get the surrounding context (file imports, related functions)
Updating the Index
Code changes constantly. Patterns:
- Re-embed on commit (CI integration)
- Differential indexing (only changed files)
- Version-aware retrieval (which version was the user asking about)
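A sketch of the differential pattern using `git diff` in CI; `reindex_file` is a hypothetical re-chunk/re-embed/upsert step:

```python
# Sketch: differential re-indexing of only the files changed by a commit.
import subprocess

def changed_files(base: str = "HEAD~1") -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [p for p in out.splitlines() if p.endswith(".py")]  # filter to indexed sources

for path in changed_files():
    reindex_file(path)  # hypothetical: re-chunk, re-embed, upsert
```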
Sources
- Tree-sitter parser — https://tree-sitter.github.io
- Voyage code embeddings — https://docs.voyageai.com
- LlamaIndex code RAG — https://docs.llamaindex.ai
- "Code search" GitHub — https://github.blog
- Sourcegraph Cody — https://sourcegraph.com/cody