AI Incident Reporting: What OECD AIM and the US AI Safety Institute Want

Incident reporting expectations changed in 2026. Here is what OECD AIM, the US AISI, and the EU AI Office want from operators when an AI system fails.

The 2026 Reality

Until recently, AI incident reporting was voluntary and ad hoc. By 2026 several frameworks and authorities expect (or in some cases require) incident reports for material AI failures: the OECD AI Incidents Monitor (AIM), the US AI Safety Institute (AISI), the EU AI Office (under the AI Act for systemic-risk models), and several sectoral regulators.

This piece walks through what counts as a reportable incident, who wants the report, and what the report should contain.

What Counts as Reportable

```mermaid
flowchart TD
    Sev[AI failure] --> Q1{Material harm or<br/>near-harm?}
    Q1 -->|Yes| Q2{Wide impact<br/>or dangerous capability?}
    Q1 -->|No| Internal[Internal log only]
    Q2 -->|Yes| Sys[Systemic incident<br/>likely reportable]
    Q2 -->|No| Local[Local incident<br/>internal + sector-specific]
```

Material harm or near-harm includes:

  • Discriminatory decisions affecting groups of users
  • Financial loss or data exposure attributable to AI
  • Physical harm or endangerment
  • Generation of CBRN-relevant content
  • Successful jailbreaks producing prohibited content at scale
  • Tool-abuse leading to unauthorized actions

Trivial output errors and individual hallucinations are not reportable; patterns are.
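
To make triage concrete, here is a minimal Python sketch of the decision tree above. The `Incident` fields and disposition strings are illustrative, not part of any official schema.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """The two questions from the flowchart, as boolean fields."""
    material_harm_or_near_harm: bool
    wide_impact_or_dangerous_capability: bool

def triage(incident: Incident) -> str:
    """Map an AI failure to a disposition, mirroring the flowchart above."""
    if not incident.material_harm_or_near_harm:
        return "internal log only"
    if incident.wide_impact_or_dangerous_capability:
        return "systemic incident: likely reportable"
    return "local incident: internal + sector-specific"
```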

OECD AI Incidents Monitor

The OECD AIM is the broadest catalog of AI incidents in 2026. It aggregates reports from press, civil society, and direct submissions. Submissions are voluntary but increasingly expected for cross-border or systemic incidents.

A submission to AIM includes:

  • Incident date and discovery date
  • AI system involved (with provider attribution)
  • Affected parties
  • Type and severity of harm
  • Resolution status

OECD publishes anonymized incident summaries that have become a primary reference for regulators worldwide.
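
Teams automating their submissions might represent the record as below; the field names mirror the bullet list above but are assumptions for this sketch, not AIM's official schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIMSubmission:
    """Illustrative AIM submission record; field names are assumptions."""
    incident_date: date
    discovery_date: date
    ai_system: str              # system involved
    provider: str               # provider attribution
    affected_parties: list[str]
    harm_type: str              # e.g. "discrimination", "data exposure"
    severity: str               # e.g. "low" | "moderate" | "severe"
    resolution_status: str      # e.g. "open" | "mitigated" | "resolved"
```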

US AI Safety Institute

AISI is part of NIST. It runs voluntary safety testing of frontier models and accepts incident reports from frontier-model deployers. Reporting expectations are higher for models that have been through AISI evaluation: incidents must be communicated within defined windows.

AISI's incident framework covers four classes, with a code sketch after the list:

  • Capability surprises (the model did something its developers did not expect)
  • Safety regressions (a fine-tune undid a safety property)
  • Misuse incidents (the model assisted a real harm)
  • Deployment-context failures (the model failed in deployment in a way pre-deployment testing missed)
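
For internal tagging, those four classes can be captured as an enum; the names here are illustrative, not AISI's own taxonomy.

```python
from enum import Enum

class AISIIncidentClass(Enum):
    """The four incident classes above; enum names are illustrative."""
    CAPABILITY_SURPRISE = "model did something its developers did not expect"
    SAFETY_REGRESSION = "a fine-tune undid a safety property"
    MISUSE = "the model assisted a real harm"
    DEPLOYMENT_CONTEXT_FAILURE = "failure that pre-deployment testing missed"
```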

EU AI Office

For systemic-risk GPAI models under the EU AI Act, incident reporting is mandatory. The AI Office defined the format in 2025 implementing acts. Reports must be filed within specific windows after the operator becomes aware of a serious incident.

A serious incident under Article 73 covers death or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, infringement of fundamental-rights obligations under Union law, and serious harm to property or the environment.
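
For deadline tracking, a minimal sketch follows. The window values reflect Article 73's commonly cited deadlines (15 days by default, 10 days when a death is involved, 2 days for widespread infringement or critical-infrastructure disruption); treat them as assumptions and verify against the current implementing acts.

```python
from datetime import date, timedelta

# Assumed reporting windows per AI Act Article 73; verify before relying on them.
WINDOWS_DAYS = {
    "default": 15,                # serious incident, general case
    "death": 10,                  # incident led to a person's death
    "widespread_or_critical": 2,  # widespread infringement or critical infrastructure
}

def filing_deadline(awareness: date, kind: str = "default") -> date:
    """Latest filing date, counted from when the operator became aware."""
    return awareness + timedelta(days=WINDOWS_DAYS[kind])

# Example: aware on 2026-03-02 of a critical-infrastructure incident -> 2026-03-04.
print(filing_deadline(date(2026, 3, 2), "widespread_or_critical"))
```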

Sector-Specific Reporting

Several sectors have their own AI-incident channels (a routing sketch follows the list):

  • FDA: medical AI device failures
  • NHTSA: AV / driver-assistance failures
  • FFIEC member regulators: AI-driven banking failures
  • EU sectoral regulators: under EU AI Act high-risk Annex III categories

These overlap with OECD AIM and EU AI Office but require sector-specific filing in addition.
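
Because filings stack, it helps to encode the routing explicitly. In this sketch the channel names come from the list above; the domain keys and function are illustrative.

```python
# Illustrative routing table: a single incident can require several filings.
SECTOR_CHANNELS = {
    "medical_device": ["FDA"],
    "automated_driving": ["NHTSA"],
    "banking": ["FFIEC member regulator"],
    "eu_high_risk_annex_iii": ["EU sectoral regulator"],
}

def filings_for(domain: str, systemic: bool, eu_systemic_gpai: bool) -> list[str]:
    """Collect every channel an incident must (or should) be filed with."""
    channels = list(SECTOR_CHANNELS.get(domain, []))
    if systemic:
        channels.append("OECD AIM")      # voluntary but increasingly expected
    if eu_systemic_gpai:
        channels.append("EU AI Office")  # mandatory under the AI Act
    return channels
```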

What's in a Report

```mermaid
flowchart TB
    Rep[Incident Report] --> Meta[Metadata: date, system, version]
    Rep --> Desc[Narrative description]
    Rep --> Impact[Impact assessment]
    Rep --> Cause[Root-cause analysis]
    Rep --> Resp[Response taken]
    Rep --> Prev[Prevention plan]
```

A typical 2026 report runs 5-15 pages. The hardest sections to write well are the root-cause analysis (LLMs are non-deterministic, so classical RCA does not apply cleanly) and the prevention plan (what evidence shows the fix will actually prevent recurrence).
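
One way to keep reports consistent is to encode the six sections as a typed structure. This is an illustrative sketch, not a regulator-mandated format.

```python
from dataclasses import dataclass

@dataclass
class IncidentReport:
    """The six report sections from the diagram above; names are illustrative."""
    metadata: dict    # date, system, version
    description: str  # narrative of what happened
    impact: str       # who was affected and how badly
    root_cause: str   # hardest section: LLM behavior is non-deterministic
    response: str     # containment and remediation taken
    prevention: str   # evidence that the fix prevents recurrence
```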

Building an Incident-Reporting Workflow

For a mid-sized AI company in 2026:

  1. Define what counts as a reportable incident for your system (severity matrix; a sketch follows this list)
  2. Build a one-button reporting tool reachable from your support and operations stacks
  3. Run incident-commander training quarterly
  4. Pre-write template reports for the most likely classes of incident
  5. Map your incident classes to the relevant authorities (OECD AIM, AISI, EU AI Office, sector regulators)
  6. Review incident patterns quarterly and ship the preventive engineering work they suggest
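
For step 1, a severity matrix can start as a simple lookup table. The categories and levels below are placeholders to calibrate against your own system, not a standard.

```python
# Placeholder severity matrix: (harm category, scale) -> severity level.
SEVERITY_MATRIX = {
    ("discrimination", "group"): "sev1",
    ("data_exposure", "any"): "sev1",
    ("prohibited_content", "at_scale"): "sev1",
    ("prohibited_content", "isolated"): "sev3",
    ("hallucination", "pattern"): "sev2",
    ("hallucination", "isolated"): "sev4",  # internal log only
}

REPORTABLE = {"sev1", "sev2"}  # levels that trigger an external-reporting review
```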

Why This Pays Off

Beyond compliance, incident reporting:

  • Catches systemic patterns that single-incident postmortems miss
  • Builds trust with customers and regulators
  • Gives engineers concrete cases to test against
  • Aligns the company on what "safe enough" actually means in production
