I just read Tesla’s FSD safety stats—2.9M miles per crash sounds great, but here’s what’s missing

What Changed and Why It Matters

Tesla has published its most detailed public safety metrics for Full Self‑Driving (Supervised), claiming substantially lower collision rates than national averages and committing to quarterly, rolling 12‑month updates. Tesla reports FSD users go about 2.9 million miles between “major” collisions versus a national baseline of roughly 505,000 miles, and 986,000 miles per “minor” collision versus 178,000. This raises the transparency bar for advanced driver assistance systems (ADAS) but stops short of injury data or detailed context on its supervised robotaxi trials in Austin.

This matters because procurement, risk, and policy decisions increasingly hinge on objective, comparable safety evidence. Tesla’s move puts numbers on the table and invites scrutiny of what’s included, what isn’t, and how to interpret comparisons to human driving.

Key Takeaways

  • Tesla reports FSD (Supervised) users see ~2.9M miles per “major” collision and ~986k per “minor,” compared with NHTSA-derived national averages of ~505k and ~178k miles respectively (per Tesla’s interpretation).
  • “Major” is defined per 49 C.F.R. §563.5: airbag or other non‑reversible pyrotechnic restraint deployment. Tesla includes crashes if FSD was active at any point within five seconds before impact, which captures last‑second disengagements and system aborts.
  • Quarterly updates will reflect a rolling 12‑month aggregation. Tesla will not publish injury rates, citing automated collection limits; it uses airbag deployment as a severity proxy.
  • Comparability caveats: national averages differ by road type, conditions, and reporting practices. Without exposure breakdowns (urban/highway, day/night, weather), apples‑to‑apples conclusions are limited.
  • No safety outcomes were shared for Tesla’s supervised robotaxi pilots in Austin, unlike Waymo’s driverless programs that publish city‑level analyses.

Breaking Down the Announcement

Tesla’s new page separates Autopilot (highway-focused ADAS) from FSD (Supervised), addressing a longstanding criticism that prior reports mixed modes and masked risk exposure. In the FSD section, Tesla cites ~2.9M miles per major collision and ~986k per minor, benchmarking against national rates that it interprets from NHTSA sources (~505k and ~178k miles). Elsewhere, Tesla references broader figures (around 5M per major, 1.5M per minor in North America). The variance likely reflects different aggregation windows and datasets, but underscores how sensitive these ratios are to scope and methodology.
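
To see how sensitive the headline multiples are to which dataset is used, here is a quick back‑of‑envelope check using only the miles‑per‑collision figures quoted above (all of them Tesla’s reported values, not independently verified):

```python
# Back-of-envelope: improvement multiples implied by Tesla's reported figures.
# All inputs are the miles-per-collision values quoted above (Tesla's numbers).

fsd_page = {"major": 2_900_000, "minor": 986_000}      # FSD (Supervised) page figures
national = {"major": 505_000, "minor": 178_000}        # Tesla's NHTSA-derived baseline
broader  = {"major": 5_000_000, "minor": 1_500_000}    # Tesla's broader North America figures

for severity in ("major", "minor"):
    page_ratio = fsd_page[severity] / national[severity]
    broad_ratio = broader[severity] / national[severity]
    print(f"{severity}: {page_ratio:.1f}x (FSD page) vs {broad_ratio:.1f}x (broader figure)")

# major: 5.7x (FSD page) vs 9.9x (broader figure)
# minor: 5.5x (FSD page) vs 8.4x (broader figure)
```

Dividing the broader figures by the same baseline would imply roughly 8‑10x, versus 5‑6x from the FSD‑page figures: exactly the kind of spread that scope and aggregation windows can produce.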

Two methodological choices are notable. First, severity: Tesla relies on the 49 C.F.R. §563.5 criterion (airbag or other non‑reversible pyrotechnic restraint deployment) to classify “major.” That is objective and automatable, but it may miss serious injuries that occur without airbag deployment (for example, many pedestrian or cyclist crashes). Second, inclusion: counting collisions if FSD was active within five seconds pre‑impact is stricter than counting only while actively engaged, and it reduces gaming via last‑second disengagements.
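
To make these two choices concrete, here is a minimal sketch of the attribution and severity logic they imply. The five‑second window and the airbag/pyrotechnic criterion come from the description above; the event fields and function names are illustrative assumptions, not Tesla’s actual pipeline.

```python
from dataclasses import dataclass
from typing import Optional

ATTRIBUTION_WINDOW_S = 5.0  # Tesla's stated rule: FSD active within 5 s before impact

@dataclass
class CrashEvent:
    impact_time_s: float                     # timestamp of impact
    last_fsd_active_time_s: Optional[float]  # last timestamp FSD was engaged; None if never
    airbag_or_pyro_deployed: bool            # non-reversible restraint deployment

def counts_as_fsd_crash(e: CrashEvent) -> bool:
    """A crash is attributed to FSD if the system was active at any point within
    the five seconds before impact, even if the driver or the system disengaged
    just before the collision."""
    if e.last_fsd_active_time_s is None:
        return False
    return (e.impact_time_s - e.last_fsd_active_time_s) <= ATTRIBUTION_WINDOW_S

def classify_severity(e: CrashEvent) -> str:
    """'Major' uses airbag / pyrotechnic restraint deployment as the severity proxy."""
    return "major" if e.airbag_or_pyro_deployed else "minor"
```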

Tesla says it will update the figures quarterly using a rolling 12‑month window to dampen noise and reflect recent software changes. It will not publish injury or claims data, arguing those are not programmatically available from vehicles. That choice increases timeliness but reduces insight into harm severity and vulnerable road user (VRU) risk.
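
Tesla has not published how it computes the rolling figure, so the following is only a sketch of the stated scheme (publish quarterly, aggregate the trailing twelve months); the quarterly exposure numbers are hypothetical.

```python
# Sketch of a quarterly-updated, rolling 12-month miles-per-collision metric.
# Only the windowing scheme (publish quarterly, aggregate the trailing four
# quarters) follows the announcement; the sample data below is made up.

from collections import deque
from typing import Optional

quarters = deque(maxlen=4)  # trailing 12 months = last 4 quarters

def publish_quarter(miles: float, major_collisions: int) -> Optional[float]:
    """Add one quarter of exposure, then return the rolling miles-per-major-collision."""
    quarters.append((miles, major_collisions))
    total_miles = sum(m for m, _ in quarters)
    total_crashes = sum(c for _, c in quarters)
    return total_miles / total_crashes if total_crashes else None

# Hypothetical fleet exposure, showing how the trailing window smooths one bad quarter:
for q in [(700e6, 250), (750e6, 240), (800e6, 310), (820e6, 260)]:
    print(f"rolling rate: {publish_quarter(*q):,.0f} miles per major collision")
```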

Industry Context and Comparisons

Waymo has published detailed, peer‑reviewed analyses claiming its driverless fleet is around five times safer than human drivers overall and 12 times safer with respect to pedestrians in its operating domains. Those studies are specific to geofenced areas, curated maps, and driverless operation. Tesla’s FSD (Supervised) operates broadly with a human driver responsible, and uses nationwide averages as a comparator. The operating design domains differ, and reporting baselines are not standardized—meaning the headline ratios are indicative but not conclusive.

Regulatory backdrop matters. NHTSA has ongoing oversight of Tesla’s driver assistance features, including a 2023 recall and subsequent queries on remedy effectiveness. States like California require disengagement and collision reporting for autonomous testing, but FSD (Supervised) does not fall under those rules. Tesla’s publication is a voluntary step toward transparency that could shape future regulatory templates—but it is not a substitute for independent audit or harmonized benchmarks.

What This Changes for Operators

If you manage fleets or approve driver-assist features, this data can inform risk posture—directionally. A reported 5-6x improvement in miles per “major” collision versus national averages, if sustained and comparable, could affect insurance negotiations, downtime assumptions, and driver training policies. However, without injury severity, VRU exposure, and road‑type stratification, you should not translate Tesla’s ratios directly into actuarial assumptions or safety KPIs.
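
As a purely directional illustration (the fleet mileage below is hypothetical, and the comparability caveats still apply), here is what the two reported rates would imply for expected major‑collision counts over a year of fleet exposure:

```python
# Directional illustration only: expected "major" collisions for a hypothetical
# fleet's annual mileage under the two reported rates. Not an actuarial model.

fleet_annual_miles = 50_000_000       # hypothetical fleet exposure

miles_per_major_fsd = 2_900_000       # Tesla's reported FSD (Supervised) figure
miles_per_major_national = 505_000    # Tesla's NHTSA-derived baseline

expected_fsd = fleet_annual_miles / miles_per_major_fsd
expected_national = fleet_annual_miles / miles_per_major_national

print(f"expected majors @ FSD rate:      {expected_fsd:.0f}")       # ~17
print(f"expected majors @ national rate: {expected_national:.0f}")  # ~99
```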

The five‑second inclusion rule is a positive precedent for accountability. The reliance on airbag deployment as a severity proxy is practical but incomplete. And the absence of robotaxi (Austin) performance data—still supervised with employees—means enterprises cannot extrapolate to driverless operations.

Risks, Caveats, and What to Watch

  • Exposure mismatch: We don’t know FSD’s distribution across urban arterials, residential streets, or highways—each with different baseline crash rates.
  • Severity blind spots: Airbag deployment may undercount serious injuries, especially for pedestrians and cyclists where airbags often do not deploy.
  • Data provenance: Tesla calls the national comparators NHTSA-based and “according to Tesla’s interpretation.” Methods should be published for auditability.
  • Software drift: Quarterly rolling updates will move the numbers; enterprises should track trendlines, not a single snapshot.
  • Robotaxi gap: No Austin pilot safety outcomes were disclosed; comparisons to Waymo’s driverless metrics remain non‑equivalent.

Recommendations

  • Demand segmentation: Ask Tesla (and any ADAS vendor) for collision rates by road type, speed band, lighting, weather, and VRU involvement to assess true comparability.
  • Seek third‑party validation: For high‑stakes deployments, require independent audit of methodology and denominators; align with the 49 C.F.R. §563.5 crash definition but incorporate injury severity where possible.
  • Instrument your fleet: Collect your own telematics (engagement, disengagement, near‑misses, harsh events) and benchmark against vendor reports before changing insurance or policy (see the sketch after this list).
  • Governance controls: Maintain driver monitoring, usage policies, and incident review boards; treat FSD as supervised ADAS, not autonomy, until robust, audited evidence supports broader claims.
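
A minimal sketch of what “instrument your fleet” can look like in practice: compute your own miles‑per‑event rates from telematics and set them beside a vendor’s published figure. The field names, sample records, and default vendor number are placeholders, not any vendor’s actual schema.

```python
# Minimal sketch: compute fleet miles-per-event from your own telematics and
# compare against a vendor-published rate. All field names and sample records
# are placeholders, not Tesla's or any vendor's actual data model.

from typing import Iterable, Optional

def miles_per_event(total_miles: float, event_count: int) -> Optional[float]:
    return total_miles / event_count if event_count else None

def benchmark(telematics: Iterable[dict], vendor_miles_per_major: float = 2_900_000) -> None:
    records = list(telematics)
    total_miles = sum(t["miles"] for t in records)
    majors = sum(t["airbag_deployments"] for t in records)      # same severity proxy as the vendor
    harsh = sum(t["harsh_braking_events"] for t in records)     # leading indicator, no vendor analogue
    own_rate = miles_per_event(total_miles, majors)
    print(f"fleet miles per major collision: {own_rate:,.0f}" if own_rate
          else "no major collisions recorded yet")
    print(f"vendor-published figure:         {vendor_miles_per_major:,.0f}")
    print(f"fleet miles per harsh-braking event: {miles_per_event(total_miles, harsh):,.0f}")

benchmark([
    {"miles": 1_200_000, "airbag_deployments": 1, "harsh_braking_events": 340},
    {"miles": 950_000,   "airbag_deployments": 0, "harsh_braking_events": 210},
])
```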

Bottom line: Tesla’s release is a step toward meaningful transparency. For decision‑makers, the numbers are useful signals—but not yet a standalone safety case. Push for standardized, auditable metrics before you scale policy or capital decisions on the back of headline ratios.

