Overjet + Videa AI — ROI Analysis

Staggered difference-in-differences across SGA East (Overjet Vision) and Gen4 West (Videa AI) rollouts.
Action Briefing

What we know

Videa AI is delivering measurable case-acceptance lift at Gen4 West practices. Across 42 treated practices with at least 6 months of post-implementation data, treatment acceptance percentage rose +4.0 points (count-based) and +5.1 points (dollar-based) versus comparable not-yet-treated practices — both statistically significant with clean parallel trends. Average accepted treatment dollars per practice rose +$48K/month (+47.7%). Same-day accepted treatment dollars more than doubled.
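The count-based and dollar-based acceptance rates differ only in how presented treatment is weighted. A minimal sketch of how both would be computed from per-practice rows; field names here are assumptions, not the actual Dental Intel schema:

```python
def acceptance_rates(rows):
    """Return (count-based %, dollar-based %) acceptance for a set of rows.

    Each row is assumed to carry accepted/presented counts and dollars
    (illustrative field names).
    """
    acc_n = sum(r["accepted_count"] for r in rows)
    pres_n = sum(r["presented_count"] for r in rows)
    acc_d = sum(r["accepted_dollars"] for r in rows)
    pres_d = sum(r["presented_dollars"] for r in rows)
    return 100 * acc_n / pres_n, 100 * acc_d / pres_d
```

A dollar-based rate above the count-based rate means larger cases are being accepted disproportionately, which is consistent with the +5.1 vs +4.0 point gap above.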

Overjet Vision evidence is inconclusive at SGA East. With only 11 treated practices having sufficient data on both PBI and Dental Intel, no case acceptance effect is detectable. Production-per-visit dropped 7% (CI excludes zero), but selection bias makes this hard to interpret — these practices were already declining faster than their peers before AI rollout.

The Brittney question, answered

In the 5/14 call, Brittney asked whether AI tools could lower revenue per visit by surfacing more conservative care plans. Our data shows the opposite for Videa: treatment presented and accepted both went UP, not down. The lift came from more treatment getting accepted, not from per-visit pricing changes (production per visit, PPV, stayed flat). For Overjet, the small SGA East cohort can't answer the question yet; re-run the analysis after another 6 months of post-data accumulates.

Recommended actions

1. Get Videa usage data unblocked. Brittney's open action — Videa monthly engagement data is blocked on the Dentrix integration. Without it, we can't separate "Videa works" from "Videa works when actually used." This is the single most valuable next data pull.
2. Defer Overjet ROI judgment by 6 months. SGA East has 63 Overjet Vision practices total but only 11 with the full pre+post window AND DI data today. By Nov 2026, that grows to ~40+ as Wave 2 (Dec 2025–Apr 2026 live dates) accumulates post-data. Run this analysis again then.
3. Investigate the four high-effect Videa outliers. SMD Daydreams, PCC Mountain View, NHD New Horizons, and CCD Century City show 12-120% PPV lifts. Either real wins worth replicating, or data quality issues worth fixing — drill into each in the "By Practice" tab.
4. Don't make causal claims on production-per-visit. Parallel trends are violated for PPV (treated practices were declining $11/month faster than controls before treatment). The PPV numbers describe what happened, not what AI caused. Case acceptance % is the cleaner causal metric.
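The pre-trend check behind action 4 can be sketched as a simple slope comparison over the pre-period months; function and variable names here are illustrative, not the production diagnostic:

```python
def slope(ys):
    """Least-squares slope of ys against month index 0..len(ys)-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

def pretrend_gap(treated_pre, control_pre):
    """Monthly pre-period slope difference (treated minus control).

    A gap near zero supports parallel trends; a materially negative gap,
    like the -$11/month PPV gap flagged above, means treated practices
    were already declining faster before rollout.
    """
    return slope(treated_pre) - slope(control_pre)
```

When the gap is large relative to the estimated effect, the post-period difference describes what happened but cannot be attributed to the tool.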

What we're missing

  • Time clock data — only available since 10/1/25, no pre-baseline. Can't measure clinician time savings.
  • Videa feature-level usage — blocked on Dentrix integration (Brittney chasing).
  • Overjet utilization data — Brittney pulling.
  • Insurance denial rebuttal effectiveness — no instrumentation yet; would need a manual claims-denial sample.
  • 16 of 85 treated PBI practices missing from Dental Intel — different practice scope between systems. Would need name-matching expansion or separate DI grant.

Methodology in one paragraph

For each treated practice, we compare its 6-month-pre vs 6-month-post outcome (with a 1-month implementation washout) against the same calendar months for not-yet-treated practices in the same brand group. Modis was excluded as a control because it's mostly oral surgery and periodontics specialty practices, not GP dental. Per-practice effects are averaged within each group, with bootstrap 95% CIs (2,000 resamples). Parallel-trends diagnostics flag where pre-period slopes differ between treated and control practices. Six Dental Intel metrics and three PBI metrics are analyzed per practice.
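The estimator above can be sketched end-to-end. This is a minimal illustration under assumed data shapes (monthly outcome lists per practice), not the production pipeline:

```python
import random

def practice_effect(treated_pre, treated_post, control_pre, control_post):
    """One practice's DiD effect: its pre-to-post change minus the
    same-calendar-window change for its not-yet-treated controls."""
    t_change = sum(treated_post) / len(treated_post) - sum(treated_pre) / len(treated_pre)
    c_change = sum(control_post) / len(control_post) - sum(control_pre) / len(control_pre)
    return t_change - c_change

def bootstrap_ci(effects, n_boot=2000, alpha=0.05, seed=0):
    """Group effect = mean of per-practice effects, with a percentile
    bootstrap CI from n_boot resamples of practices (with replacement)."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(effects, k=len(effects))) / len(effects)
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return sum(effects) / len(effects), (lo, hi)
```

Resampling at the practice level (rather than the month level) keeps each practice's serial correlation intact, which is the standard choice for clustered panels like this one.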