
Vector Index Types Explained: Flat, IVF, HNSW, and Product Quantization

Understand the four major vector index algorithms — Flat, IVF, HNSW, and Product Quantization — with clear explanations of accuracy vs speed tradeoffs and guidance on tuning parameters.

When you have 1,000 documents, finding the nearest neighbors to a query vector is trivial — compute the distance to every vector and sort. When you have 10 million documents, that brute-force approach takes seconds per query, which is unacceptable for interactive applications.

Vector indexes solve this by trading a small amount of accuracy for dramatic speed improvements. Instead of comparing against every vector (exact search), approximate nearest neighbor (ANN) indexes use clever data structures to narrow the search space. The four most important index types are Flat, IVF (Inverted File), HNSW (Hierarchical Navigable Small World), and PQ (Product Quantization).

Flat Index: The Baseline

A flat index stores vectors in a simple array and performs exhaustive comparison at query time. It is not really an "index" in the traditional sense — it is brute-force search.

import faiss
import numpy as np

dimension = 1536
vectors = np.random.rand(10000, dimension).astype("float32")

# Flat index with L2 distance
index = faiss.IndexFlatL2(dimension)
index.add(vectors)

# Query — compares against all 10,000 vectors
query = np.random.rand(1, dimension).astype("float32")
distances, indices = index.search(query, k=5)
print(f"Nearest IDs: {indices[0]}")

Pros: 100% recall (exact results), no training step, simple to use. Cons: Query time scales linearly with dataset size. Impractical above ~100K vectors for real-time applications.

When to use: Small datasets, ground truth evaluation, accuracy benchmarking of other index types.


IVF (Inverted File Index)

IVF partitions the vector space into nlist clusters using k-means. At query time, it only searches the nprobe nearest clusters instead of the entire dataset.

nlist = 100  # number of clusters
quantizer = faiss.IndexFlatL2(dimension)
index = faiss.IndexIVFFlat(quantizer, dimension, nlist)

# IVF requires training on representative data
index.train(vectors)
index.add(vectors)

# Search the 10 nearest clusters
index.nprobe = 10
distances, indices = index.search(query, k=5)

The key tradeoff is nprobe:

  • Low nprobe (1-5): Very fast but may miss relevant vectors in other clusters
  • High nprobe (50-100): Slower but approaches exact search quality
  • nprobe = nlist: Equivalent to flat search (exact, but with cluster overhead)

Rule of thumb: Set nlist to roughly 4 * sqrt(n) where n is the number of vectors. Set nprobe to 5-10% of nlist as a starting point.
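That rule of thumb is easy to encode. A minimal sketch (the 4 * sqrt(n) rule and the 5% starting nprobe are heuristics to tune from, not FAISS requirements):

```python
import math

def suggest_ivf_params(n_vectors: int) -> tuple[int, int]:
    """Heuristic starting points for IVF tuning:
    nlist ~ 4 * sqrt(n), nprobe ~ 5% of nlist.
    Tune from here against measured recall on your own queries."""
    nlist = max(1, round(4 * math.sqrt(n_vectors)))
    nprobe = max(1, round(0.05 * nlist))
    return nlist, nprobe

print(suggest_ivf_params(1_000_000))  # (4000, 200)
```

For 1 million vectors this suggests 4,000 clusters and probing 200 of them per query; raise nprobe if recall measurements come back too low.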

HNSW (Hierarchical Navigable Small World)

HNSW builds a multi-layer graph where each vector is a node. Upper layers are sparse (long-range connections for coarse navigation), and lower layers are dense (short-range connections for precise search). Queries start at the top layer and greedily navigate toward the nearest neighbors.

# HNSW in FAISS
M = 32              # connections per node
ef_construction = 200  # build-time search depth

index = faiss.IndexHNSWFlat(dimension, M)
index.hnsw.efConstruction = ef_construction
index.add(vectors)

# Query-time search depth
index.hnsw.efSearch = 64
distances, indices = index.search(query, k=5)

Key parameters:

  • M: Number of connections per node. Higher M improves recall but increases memory and build time. Typical values: 16-64.
  • ef_construction: Search depth during index building. Higher values produce better graphs. Typical: 100-200.
  • ef_search: Search depth at query time. Must be >= k. Higher values improve recall at the cost of latency.

Pros: Best recall-speed tradeoff for most workloads. Supports incremental inserts without rebuilding. No training step required. Cons: High memory usage (stores the graph structure). Slower to build than IVF for very large datasets.

Product Quantization (PQ)

PQ compresses vectors by splitting each vector into subvectors and quantizing each subvector independently. This reduces memory usage dramatically: a 1536-dimensional float32 vector (6 KB) can be compressed to as few as 48 bytes (48 subquantizers at 8 bits each).
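The memory arithmetic is easy to check. A minimal sketch, using the same dimension = 1536, m = 48, nbits = 8 configuration as the FAISS example below:

```python
def pq_compression(dim: int, m: int, nbits: int = 8) -> tuple[int, float]:
    """Bytes per PQ code and the compression ratio vs raw float32 storage."""
    original_bytes = dim * 4       # float32 = 4 bytes per dimension
    code_bytes = m * nbits // 8    # one nbits-bit code per subquantizer
    return code_bytes, original_bytes / code_bytes

print(pq_compression(1536, 48))  # (48, 128.0)
```

A 128x reduction means a billion vectors fit in roughly 48 GB of RAM instead of 6 TB, which is what makes single-machine billion-scale search feasible.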


m = 48       # number of subquantizers (must divide dimension evenly)
nbits = 8    # bits per subquantizer (256 centroids each)

index = faiss.IndexPQ(dimension, m, nbits)
index.train(vectors)
index.add(vectors)

distances, indices = index.search(query, k=5)

PQ is often combined with IVF, so that IVF narrows the search to a few clusters and PQ shrinks the vectors stored in them:

# IVF + PQ: partitioned search with compressed vectors
quantizer = faiss.IndexFlatL2(dimension)  # coarse quantizer, as in the IVF section
index = faiss.IndexIVFPQ(quantizer, dimension, nlist, m, nbits)
index.train(vectors)
index.add(vectors)
index.nprobe = 10

Pros: Massive memory reduction (10-50x). Enables billion-scale search on a single machine. Cons: Lower recall than uncompressed indexes. Requires training. Distance calculations are approximate.

Choosing the Right Index

| Criterion | Flat | IVF | HNSW | PQ |
| --- | --- | --- | --- | --- |
| Dataset size | < 100K | 100K - 10M | 100K - 50M | 1M - 1B+ |
| Recall | 100% | 90-99% | 95-99.9% | 80-95% |
| Memory | High | High | Very high | Low |
| Build time | None | Moderate | Slow | Moderate |
| Query speed | Slow at scale | Fast | Very fast | Fast |
| Incremental insert | Yes | After training; retrain if data drifts | Yes | After training; retrain if data drifts |

For most applications with fewer than 10 million vectors, HNSW is the best default. It delivers the highest recall at competitive query speeds without requiring a training step.
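The table above can be collapsed into a rough decision helper. This is a sketch that encodes the guidance in this article, not a library API:

```python
def pick_index(n_vectors: int, memory_constrained: bool = False) -> str:
    """Rough index choice following the comparison table above."""
    if n_vectors < 100_000:
        return "Flat"       # exact search is fast enough at this scale
    if memory_constrained or n_vectors > 50_000_000:
        return "IVF+PQ"     # compressed codes for very large datasets
    return "HNSW"           # best recall/speed default in between

print(pick_index(50_000))       # Flat
print(pick_index(5_000_000))    # HNSW
print(pick_index(200_000_000))  # IVF+PQ
```

Real deployments also weigh insert patterns, filtering needs, and latency budgets, so treat this as a starting point rather than a rule.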

FAQ

Can I combine multiple index types together?

Yes. The most common combination is IVF + PQ (IndexIVFPQ in FAISS), which partitions the space with IVF and compresses vectors with PQ. HNSW can also use PQ compression (IndexHNSWPQ) for memory-constrained environments. These composite indexes trade some recall for dramatic memory and speed improvements.

How do I measure recall to compare index configurations?

Generate a ground truth by running exact search (Flat index) on a representative query set. Then run the same queries against your ANN index and calculate recall at K — the fraction of true nearest neighbors that appear in the ANN results. A recall of 0.95 at K=10 means 9.5 out of 10 true nearest neighbors are found on average.
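A minimal numpy-only sketch of that procedure, with brute-force ground truth compared against an ANN result set (here the ground truth itself, standing in for a real index's output):

```python
import numpy as np

def recall_at_k(ground_truth: np.ndarray, ann_results: np.ndarray) -> float:
    """Fraction of true top-k neighbors the ANN index returned.
    Both arrays have shape (num_queries, k)."""
    hits = sum(len(np.intersect1d(gt, ann))
               for gt, ann in zip(ground_truth, ann_results))
    return hits / ground_truth.size

rng = np.random.default_rng(0)
data = rng.random((1000, 64), dtype=np.float32)
queries = rng.random((10, 64), dtype=np.float32)

# Exact ground truth: full pairwise L2 distances, top-5 per query
dists = np.linalg.norm(queries[:, None, :] - data[None, :, :], axis=2)
gt = np.argsort(dists, axis=1)[:, :5]

print(recall_at_k(gt, gt))  # 1.0 -- an exact index recovers everything
```

In practice you would replace the second argument with the IDs returned by your IVF, HNSW, or PQ index over the same queries, then sweep nprobe or ef_search until recall meets your target.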

Does the embedding dimension affect which index type to choose?

Higher dimensions increase memory usage and computation time for all index types, but they impact PQ most significantly because more subquantizers are needed. For very high dimensions (3072+), HNSW with dimensionality reduction or PQ compression becomes important. For typical dimensions (384-1536), any index type works without special consideration.


#VectorIndex #HNSW #IVF #ANN #Algorithm #AgenticAI #LearnAI #AIEngineering
