
Perspectives

1 minute read

The Next Evidentiary Shift: AI, Deepfakes, and Proof in the Courtroom

AI‑generated outputs and digital media are moving from business tools to courtroom exhibits. As their use expands, the federal rulemaking process governing the Federal Rules of Evidence is increasingly focused on a central question: how should courts test the reliability and authenticity of machine‑generated evidence before it reaches a jury?

At its May 2026 meeting, the Advisory Committee on Evidence Rules advanced two proposals that mark a clear shift. One addresses AI and machine‑generated evidence (proposed Rule 707). The other tackles deepfakes and fabricated digital content (proposed Rule 901(c)). Together, they signal that courts no longer view AI evidence as "business as usual." The Advisory Committee remains divided, however, on whether to adopt these rules after the public comment period. Even so, litigants should prepare for how the rules may ultimately address this type of evidence as its prevalence grows.

What This Means in Practice

The rule proposals reflect a growing concern that AI outputs can appear authoritative without being understandable, testable, or explainable. Courts want clearer guardrails, especially when no human expert is available to explain how an output was created.

In practical terms:

  • AI evidence may increasingly be treated like expert testimony, even when no expert testifies.
  • Courts will focus on explainability, error rates, validation, and bias, not just whether a system is widely used.
  • Digital evidence suspected of being a deepfake may trigger a burden‑shifting process, requiring the proponent to affirmatively prove authenticity once a genuine challenge is raised.

The takeaway is not that AI evidence will be excluded, but that it will be scrutinized earlier and more rigorously.

Why This Matters

Key impacts to watch:

  • Higher admissibility hurdles for AI‑generated outputs used in disputes or enforcement actions
  • Increased need for expert support, even when technology feels “settled” or routine
  • Discovery and disclosure pressure around proprietary tools, training data, and validation
  • Early motion practice focused on reliability and authenticity, not just relevance
  • Strategic risk if AI evidence cannot be clearly explained to a judge or jury

Bottom line: The evidentiary rules are catching up to the technology. Those who understand and prepare for these changes will be far better positioned when AI evidence becomes outcome‑determinative.

Tags

trials