AI Project Discovery: 20 Questions Before You Start Building
Twenty questions that separate viable AI projects from doomed ones, applied at the discovery stage in 2026.
Why Discovery Decides Outcomes
The 2026 pattern is clear: most failed AI projects failed at discovery, not at engineering. Teams did not know what success meant, what data they had, what compliance regimes applied, or what stakeholders needed. Building the wrong thing well is still failure.
This piece is the working list of 20 questions every AI project should answer at discovery.
The 20 Questions
flowchart TB
D[Discovery] --> B[Business: 1-5]
D --> U[User: 6-9]
D --> Tech[Technical: 10-13]
D --> R[Risk: 14-17]
D --> O[Org: 18-20]
Business (1-5)
- What is the measurable business outcome? (Specific metric and target)
- What is the dollar value of success? (Saving X, earning Y)
- What is the cost of doing nothing? (Status quo vs intervention)
- Who owns the P&L impact? (Single accountable person)
- What is the timeline for measurable impact? (Pilot vs production milestones)
User (6-9)
- Who is the user, specifically? (Persona, role, context)
- What problem are they currently solving how? (Existing workflow)
- What does their day look like with this AI? (Specific scenarios)
- What is success from the user's perspective? (Their criteria, not yours)
Technical (10-13)
- What systems must this integrate with? (List with API surfaces)
- What data is available? (Sources, freshness, quality)
- What is the latency budget? (User-experienced latency)
- What is the volume? (Per day, peak vs average)
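The volume and latency answers can be sanity-checked with back-of-envelope arithmetic before any architecture is drawn. A sketch using Little's law (L = λ × W); the call counts, peak ratio, and durations below are made-up assumptions for illustration, not benchmarks:

```python
# Back-of-envelope sizing from the discovery answers on volume and latency.
# All numbers are illustrative assumptions, not measured data.

def peak_concurrency(calls_per_day: int, peak_to_avg: float,
                     avg_call_seconds: float, busy_hours: float = 10.0) -> float:
    """Estimate peak simultaneous calls via Little's law: L = lambda * W."""
    avg_calls_per_second = calls_per_day / (busy_hours * 3600)
    peak_calls_per_second = avg_calls_per_second * peak_to_avg
    return peak_calls_per_second * avg_call_seconds

# Hypothetical discovery answers: 2,000 calls/day, peaks at 3x the average,
# 4-minute average call, traffic spread over a 10-hour business day.
print(round(peak_concurrency(2_000, 3.0, 240), 1))  # 40.0 concurrent calls at peak
```

Forty concurrent calls is a very different system from four, which is why the volume question is asked before any build decision.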
Risk (14-17)
- What is the worst-case failure? (Concrete scenarios)
- What compliance applies? (HIPAA, SOC 2, EU AI Act, sector rules)
- Who reviews high-stakes outputs? (Human-in-the-loop strategy)
- What is the audit requirement? (Logs, retention, access)
Org (18-20)
- Who builds and owns this long-term? (Team)
- What governance applies? (Approval gates, ongoing oversight)
- What is the deprecation plan? (When this is sunset, how)
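The full checklist is easy to operationalize as data, so unanswered questions stay visible instead of implicit. A sketch; the category keys and helper function are assumptions, not a CallSphere artifact:

```python
# The 20 questions as a trackable checklist; a sketch, not a standard artifact.
DISCOVERY = {
    "business": [
        "What is the measurable business outcome?",
        "What is the dollar value of success?",
        "What is the cost of doing nothing?",
        "Who owns the P&L impact?",
        "What is the timeline for measurable impact?",
    ],
    "user": [
        "Who is the user, specifically?",
        "What problem are they currently solving how?",
        "What does their day look like with this AI?",
        "What is success from the user's perspective?",
    ],
    "technical": [
        "What systems must this integrate with?",
        "What data is available?",
        "What is the latency budget?",
        "What is the volume?",
    ],
    "risk": [
        "What is the worst-case failure?",
        "What compliance applies?",
        "Who reviews high-stakes outputs?",
        "What is the audit requirement?",
    ],
    "org": [
        "Who builds and owns this long-term?",
        "What governance applies?",
        "What is the deprecation plan?",
    ],
}

def unanswered(answers: dict[str, str]) -> list[str]:
    """Return every question that lacks a non-empty answer."""
    return [q for qs in DISCOVERY.values() for q in qs
            if not answers.get(q, "").strip()]

assert sum(len(qs) for qs in DISCOVERY.values()) == 20
print(len(unanswered({})))  # 20: nothing answered yet
```

Running `unanswered` at each discovery checkpoint makes "discovery is done" a checkable claim rather than a feeling.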
Why Each Matters
flowchart TD
Q[Unanswered question] --> Risk[Project risk]
NoMetric[No metric] --> Drift[Goal drift]
NoData[No data] --> Stuck[Engineering stuck]
NoComp[No compliance check] --> Block[Late-stage block]
NoOwner[No owner] --> Cancel[Cancellation]
Each unanswered question is a future risk. Discovery surfaces them while they are cheap to handle.
The Discovery Workflow
For a typical project:
- Stakeholder kickoff (collect first answers)
- User research (validate user-side questions)
- Technical discovery (data, integrations)
- Compliance review (legal / risk team)
- Spec writeup
- Stakeholder sign-off
Total: 2-6 weeks for non-trivial projects. Skipping it doesn't save time; it shifts cost to engineering and beyond.
What Good Answers Look Like
For "What is the measurable business outcome?":
- Bad: "Improve customer experience"
- Good: "Increase first-call resolution rate from 65 percent to 80 percent within 6 months, measured by post-call disposition + CSAT survey"
The good answer is testable; the bad one is decoration.
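The practical difference is that the good answer can be computed and checked against its target. A sketch that turns the first-call-resolution example into code; the disposition labels and call counts are illustrative assumptions:

```python
# Turning the "good" answer into an executable check.
# Disposition labels and counts below are illustrative, not real data.

def fcr_rate(dispositions: list[str]) -> float:
    """First-call resolution = calls resolved on first contact / total calls."""
    resolved = sum(1 for d in dispositions if d == "resolved_first_call")
    return resolved / len(dispositions)

# Simulated baseline: 65 of 100 calls resolved on first contact.
baseline = ["resolved_first_call"] * 65 + ["escalated"] * 35
TARGET = 0.80

rate = fcr_rate(baseline)
print(rate, rate >= TARGET)  # 0.65 False: the gap the project must close
```

No such check can be written for "improve customer experience", which is exactly what makes that answer decoration.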
When Answers Aren't Available
Some questions don't have crisp answers at discovery time. The discipline:
- Mark them as open
- Define what would make them answerable
- Schedule the work to answer
- Don't treat "TBD" as an answer
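The steps above amount to keeping a record per open question: what it is, what evidence would resolve it, who owns it, and by when. A sketch; the field names and example values are hypothetical:

```python
# One way to keep an open question honest; field names are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class OpenQuestion:
    question: str
    answerable_when: str  # what evidence would resolve it
    owner: str
    due: date

q = OpenQuestion(
    question="What is the true peak call volume?",
    answerable_when="Two weeks of call logs pulled from the current IVR",
    owner="ops lead",
    due=date(2026, 3, 15),
)
print(f"{q.question} -> {q.owner} by {q.due.isoformat()}")
```

An open question with no `answerable_when` and no `due` is the "TBD" this discipline is meant to prevent.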
What CallSphere Asks
For client deployments, we run a 90-minute discovery call covering all 20 questions. Clients sometimes resist; we have learned to insist.
The clients who push back hardest at this stage are typically the ones whose projects have the most discovery gaps. The clients who answer cleanly typically have successful deployments.
Red Flags at Discovery
flowchart TD
Red[Red flags] --> R1[Vague success criteria]
Red --> R2[No clear owner]
Red --> R3["Compliance: 'we'll figure it out'"]
Red --> R4[Volume estimate is wishful]
Red --> R5[Timeline driven by external pressure not feasibility]
Each is a sign discovery is incomplete. Push back; do not start building until they are addressed.
What Comes After
Once discovery is complete, the project moves to:
- Spec writing (covered elsewhere)
- Stakeholder sign-off
- Engineering kickoff
- Pilot phase
The 20 questions are revisited at the end of pilot to confirm assumptions held.
Try CallSphere AI Voice Agents
See how AI voice agents work for your industry. Live demo available -- no signup required.