
Request Validation for AI Agent APIs: Pydantic Models and Custom Validators

Build robust request validation for AI agent APIs using Pydantic v2 models, custom field validators, and discriminated unions. Learn how to handle nested agent configurations and return clear validation error responses.

Why Validation Is Critical for AI Agent APIs

AI agent APIs accept complex, user-supplied input: conversation messages, tool configurations, agent parameters, and file references. Without rigorous validation, malformed input produces cryptic LLM errors, prompt injection goes unchecked, and debugging becomes a nightmare. Pydantic v2, which FastAPI uses natively, gives you type-safe, performant validation that catches problems at the API boundary before they reach your agent logic.

Every field that enters your agent system should be validated for type, length, format, and business rules. This is not just about preventing crashes. It is about making your API self-documenting and giving clients clear feedback when something is wrong.

Basic Request Models

Before the code, here is where validation sits in the request path:

```mermaid
flowchart LR
    CLIENT(["Client SDK"])
    GW["API Gateway<br/>auth plus rate limit"]
    APP["FastAPI app<br/>handlers and DI"]
    VAL["Pydantic validation"]
    SVC["Service layer<br/>business logic"]
    DB[(Database)]
    QUEUE[(Background queue)]
    OBS[(Tracing)]
    CLIENT --> GW --> APP --> VAL --> SVC
    SVC --> DB
    SVC --> QUEUE
    SVC --> OBS
    SVC --> CLIENT
    style GW fill:#4f46e5,stroke:#4338ca,color:#fff
    style APP fill:#f59e0b,stroke:#d97706,color:#1f2937
    style DB fill:#ede9fe,stroke:#7c3aed,color:#1e1b4b
```

With that boundary in mind, start with well-typed models for your core agent interactions:
```python
from pydantic import BaseModel, Field
from enum import Enum
from typing import Optional

class AgentRole(str, Enum):
    ASSISTANT = "assistant"
    RESEARCHER = "researcher"
    CODER = "coder"

class Message(BaseModel):
    role: str = Field(
        ...,
        pattern="^(user|assistant|system)$",
        description="Message sender role",
    )
    content: str = Field(
        ...,
        min_length=1,
        max_length=32000,
        description="Message content",
    )

class ChatRequest(BaseModel):
    messages: list[Message] = Field(
        ...,
        min_length=1,
        max_length=100,
        description="Conversation history",
    )
    agent_role: AgentRole = AgentRole.ASSISTANT
    temperature: float = Field(
        default=0.7,
        ge=0.0,
        le=2.0,
        description="Sampling temperature",
    )
    max_tokens: Optional[int] = Field(
        default=None,
        ge=1,
        le=16384,
        description="Maximum response tokens",
    )
    session_id: Optional[str] = Field(
        default=None,
        pattern="^[a-zA-Z0-9-]{1,64}$",
        description="Session identifier",
    )
```

These Field constraints handle most validation without any custom code: min_length, max_length, ge, le, and pattern reject invalid inputs before your handler ever runs.
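To see a constraint fire, here is a minimal, self-contained sketch (the Probe model is hypothetical, not part of the API above) showing how an out-of-range value surfaces as a structured ValidationError:

```python
from pydantic import BaseModel, Field, ValidationError

class Probe(BaseModel):
    # Same ge/le bounds as the temperature field above
    temperature: float = Field(default=0.7, ge=0.0, le=2.0)

err = None
try:
    Probe(temperature=3.5)
except ValidationError as e:
    # errors() returns structured dicts, not just a message string
    err = e.errors()[0]

print(err["type"], err["loc"])  # less_than_equal ('temperature',)
```

The structured "type" and "loc" fields are what make the custom error handler later in this article possible.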


Custom Field Validators

For validation logic that goes beyond simple constraints, use Pydantic v2 field validators:

```python
from pydantic import BaseModel, Field, field_validator, model_validator

class AgentConfigRequest(BaseModel):
    system_prompt: str = Field(..., max_length=10000)
    tools: list[str] = Field(default_factory=list)
    model: str = "gpt-4o"
    stop_sequences: list[str] = Field(
        default_factory=list, max_length=4
    )

    @field_validator("system_prompt")
    @classmethod
    def validate_system_prompt(cls, v: str) -> str:
        forbidden = [
            "ignore previous instructions",
            "disregard all prior",
        ]
        lower_v = v.lower()
        for phrase in forbidden:
            if phrase in lower_v:
                raise ValueError(
                    "System prompt contains forbidden content"
                )
        return v.strip()

    @field_validator("tools")
    @classmethod
    def validate_tools(cls, v: list[str]) -> list[str]:
        allowed = {
            "web_search", "calculator", "code_exec",
            "file_read", "database_query",
        }
        invalid = set(v) - allowed
        if invalid:
            raise ValueError(
                f"Unknown tools: {', '.join(invalid)}. "
                f"Allowed: {', '.join(sorted(allowed))}"
            )
        return v

    @field_validator("model")
    @classmethod
    def validate_model(cls, v: str) -> str:
        allowed_models = {
            "gpt-4o", "gpt-4o-mini",
            "claude-3-5-sonnet", "claude-3-haiku",
        }
        if v not in allowed_models:
            raise ValueError(
                f"Model '{v}' not supported. "
                f"Choose from: {', '.join(sorted(allowed_models))}"
            )
        return v
```

Cross-Field Validation with model_validator

Some validation rules involve multiple fields. Use model_validator to check relationships between fields:

```python
class BatchAgentRequest(BaseModel):
    messages: list[Message]
    parallel: bool = False
    # Default to 1 so a plain sequential request validates;
    # raise it explicitly when parallel=True.
    max_concurrent: int = Field(default=1, ge=1, le=20)
    timeout_seconds: int = Field(default=60, ge=5, le=300)

    @model_validator(mode="after")
    def validate_batch_config(self):
        if not self.parallel and self.max_concurrent > 1:
            raise ValueError(
                "max_concurrent > 1 requires parallel=True"
            )
        if len(self.messages) > 50 and self.timeout_seconds < 120:
            raise ValueError(
                "Batches over 50 messages need at least "
                "120s timeout"
            )
        return self
```
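The same mode="after" pattern in a fully self-contained sketch — RetryPolicy is a hypothetical model used only to show how a cross-field rule surfaces as a ValidationError:

```python
from pydantic import BaseModel, Field, ValidationError, model_validator

class RetryPolicy(BaseModel):
    max_retries: int = Field(default=0, ge=0, le=10)
    backoff_seconds: float = Field(default=0.0, ge=0.0)

    @model_validator(mode="after")
    def check_backoff(self):
        # Cross-field rule: retries only make sense with a backoff
        if self.max_retries > 0 and self.backoff_seconds == 0.0:
            raise ValueError("retries require a non-zero backoff")
        return self

RetryPolicy()  # defaults are mutually consistent, so this passes

msg = None
try:
    RetryPolicy(max_retries=3)  # inconsistent: retries but no backoff
except ValidationError as e:
    msg = e.errors()[0]["msg"]

print(msg)  # Value error, retries require a non-zero backoff
```

Note that ValueError raised inside a model_validator is wrapped into the same structured error format as field-level failures.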

Discriminated Unions for Tool Parameters

AI agents often have tools with different parameter shapes. Use Pydantic discriminated unions to validate tool-specific configurations:

```python
from typing import Literal, Union, Annotated

class WebSearchParams(BaseModel):
    tool_type: Literal["web_search"] = "web_search"
    query: str = Field(..., min_length=1, max_length=500)
    max_results: int = Field(default=5, ge=1, le=20)

class DatabaseQueryParams(BaseModel):
    tool_type: Literal["database_query"] = "database_query"
    query: str = Field(..., min_length=1)
    database: str = Field(..., pattern="^[a-z_]+$")
    read_only: bool = True

class CodeExecParams(BaseModel):
    tool_type: Literal["code_exec"] = "code_exec"
    code: str = Field(..., min_length=1, max_length=50000)
    language: str = Field(
        default="python", pattern="^(python|javascript)$"
    )
    timeout: int = Field(default=30, ge=1, le=120)

ToolParams = Annotated[
    Union[WebSearchParams, DatabaseQueryParams, CodeExecParams],
    Field(discriminator="tool_type"),
]

class ToolCallRequest(BaseModel):
    tool: ToolParams
    session_id: str
```

When a client sends {"tool_type": "web_search", "query": "..."}, Pydantic automatically validates against WebSearchParams. Wrong tool_type values get a clear error message.
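A self-contained sketch of the mechanism, using TypeAdapter to validate a bare discriminated union directly (Search and Calc are hypothetical stand-ins for the tool models above):

```python
from typing import Annotated, Literal, Union
from pydantic import BaseModel, Field, TypeAdapter, ValidationError

class Search(BaseModel):
    tool_type: Literal["search"] = "search"
    query: str

class Calc(BaseModel):
    tool_type: Literal["calc"] = "calc"
    expression: str

Tool = Annotated[Union[Search, Calc], Field(discriminator="tool_type")]
adapter = TypeAdapter(Tool)

# The discriminator value picks the branch, so only Calc's schema applies
ok = adapter.validate_python({"tool_type": "calc", "expression": "1+1"})
assert isinstance(ok, Calc)

errtype = None
try:
    adapter.validate_python({"tool_type": "paint", "x": 1})
except ValidationError as e:
    errtype = e.errors()[0]["type"]

print(errtype)  # union_tag_invalid
```

Because the discriminator selects a single branch up front, clients get one precise error ("union_tag_invalid") instead of a wall of failures from every union member.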


Customizing Error Responses

FastAPI returns Pydantic validation errors as 422 responses by default. Customize the error format for better client experience:

```python
from fastapi import FastAPI, Request
from fastapi.exceptions import RequestValidationError
from fastapi.responses import JSONResponse

app = FastAPI()

@app.exception_handler(RequestValidationError)
async def validation_exception_handler(
    request: Request, exc: RequestValidationError
):
    errors = []
    for error in exc.errors():
        errors.append({
            "field": " -> ".join(str(x) for x in error["loc"]),
            "message": error["msg"],
            "type": error["type"],
        })

    return JSONResponse(
        status_code=422,
        content={
            "error": "validation_error",
            "message": "Request validation failed",
            "details": errors,
        },
    )
```
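The same flattening logic can be exercised without a running server, straight from Pydantic's ValidationError (Msg is a hypothetical one-field model):

```python
from pydantic import BaseModel, Field, ValidationError

class Msg(BaseModel):
    content: str = Field(min_length=1)

details = None
try:
    Msg(content="")
except ValidationError as exc:
    # Same "loc"-joining approach as the exception handler above
    details = [
        {
            "field": " -> ".join(str(x) for x in err["loc"]),
            "message": err["msg"],
        }
        for err in exc.errors()
    ]

print(details[0]["field"])  # content
```

Joining "loc" with " -> " matters for nested models: a bad message content inside a chat request comes back as something like "messages -> 0 -> content", which tells the client exactly which array element failed.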

FAQ

How do I validate optional fields that should not be empty strings?

Use a field validator that checks for empty strings after stripping whitespace. Pydantic's min_length=1 on an Optional[str] only applies when the value is not None. Add a validator like: @field_validator("field_name") def check(cls, v): if v is not None and not v.strip(): raise ValueError("Cannot be empty"); return v. This allows None but rejects "" and " ".
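A runnable sketch of that validator (Profile and nickname are hypothetical names):

```python
from typing import Optional
from pydantic import BaseModel, ValidationError, field_validator

class Profile(BaseModel):
    nickname: Optional[str] = None

    @field_validator("nickname")
    @classmethod
    def not_blank(cls, v):
        # Allow None, but reject "" and whitespace-only strings
        if v is not None and not v.strip():
            raise ValueError("Cannot be empty")
        return v

Profile()               # None is fine
Profile(nickname="x")   # real value is fine

rejected = False
try:
    Profile(nickname="   ")  # whitespace-only is rejected
except ValidationError:
    rejected = True
```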

Should I use Pydantic models for response validation too?

Yes. Define response_model on your endpoints to ensure responses match a known schema. This catches bugs where your endpoint accidentally returns extra fields, missing fields, or wrong types. It also automatically generates accurate OpenAPI documentation. Use model_config = ConfigDict(from_attributes=True) when returning ORM objects directly.
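A minimal sketch of from_attributes with a plain object standing in for an ORM row (UserRow and UserOut are hypothetical names):

```python
from pydantic import BaseModel, ConfigDict

class UserRow:
    # Stand-in for an ORM object; note the field that must not leak
    def __init__(self):
        self.id = 1
        self.email = "a@example.com"
        self.password_hash = "secret"

class UserOut(BaseModel):
    model_config = ConfigDict(from_attributes=True)
    id: int
    email: str

out = UserOut.model_validate(UserRow())
print(out.model_dump())  # {'id': 1, 'email': 'a@example.com'}
```

Fields not declared on UserOut, like password_hash, are silently dropped — which is exactly the accidental-exposure protection response_model gives you on an endpoint.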

How do I handle validation for multipart form data with JSON fields?

FastAPI can accept Form and File parameters alongside Pydantic models. For complex JSON embedded in form data, accept the JSON as a Form() string parameter, then parse and validate it manually with your Pydantic model: config = AgentConfig.model_validate_json(config_json). This gives you full Pydantic validation even for form-submitted JSON.
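A sketch of that pattern with a hypothetical AgentConfig model standing in for your real config model:

```python
from pydantic import BaseModel, Field

class AgentConfig(BaseModel):
    name: str = Field(min_length=1)
    temperature: float = Field(default=0.7, ge=0.0, le=2.0)

# In an endpoint, config_json would arrive as a Form() string parameter
config_json = '{"name": "triage-bot", "temperature": 0.2}'
config = AgentConfig.model_validate_json(config_json)

print(config.name)  # triage-bot
```

model_validate_json parses and validates in one step, so malformed JSON and schema violations both raise a ValidationError you can convert to a 422 response.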


