By Prof. Naomi Klineberg
In a courtroom, testimony is not just information. It is a performance of credibility. A witness swears an oath, recounts events, faces cross-examination. The jury not only hears their words but weighs their character, gestures, tone. The law presumes that truth emerges from this human exchange. But what happens when the “witness” is a machine—an algorithm reconstructing a crime scene, a smart doorbell recording a suspect’s arrival, or an AI system generating transcripts from noisy audio? Can nonhumans testify?
Testimony Without a Witness
Consider the difference between evidence and testimony. A blood sample or a security camera feed is admitted as physical fact. Yet when a human explains what they saw, we call it testimony. The line blurs when machines interpret reality. An AI trained to enhance blurry footage, or to transcribe speech, does not simply pass along raw data—it processes, filters, and selects. Is this closer to evidence, or to testimony?
If it is testimony, then who is cross-examined? The programmer? The algorithm itself? If it is mere evidence, how do we account for the interpretive steps taken by a system that cannot swear, lie, or feel pressure from the bench?
The Question of Accountability
Philosophers of law remind us that testimony involves responsibility. A human witness can be charged with perjury. A machine cannot. If an algorithm produces a false match in facial recognition, responsibility disperses into a network of coders, data trainers, and institutional users. The courtroom thrives on locating responsibility, yet AI resists this pinpointing.
Should we, then, restrict “witness” status to humans alone, consigning machines to the category of tools? Or should we expand the definition, accepting that truth in the digital age emerges through hybrids of human and nonhuman perception?
The Epistemic Stakes
There is also a deeper epistemic question: does truth require subjectivity? We distrust hearsay precisely because the chain of interpretation is obscured. AI-generated outputs extend that chain beyond human awareness. If no one can fully explain how a deep-learning model reached its conclusion, can we ever treat its output as reliable testimony?
Courts have long struck a balance between expertise and accessibility. We allow forensic scientists to interpret chemical tests, even if jurors cannot reproduce them. But AI is stranger still: opaque even to its creators, unaccountable by design, and yet increasingly authoritative.
Questions Left Open
Perhaps the law must refuse to call AI a witness—not because machines cannot see, but because they cannot be questioned. Or perhaps the law must innovate, creating new categories that acknowledge the role of algorithmic perception in shaping truth claims.
The Socratic task is not to offer a final answer but to frame the dilemma sharply:
Is testimony inseparable from moral agency?
Can truth be told without someone to stand behind it?
And if not, what becomes of justice in an era where the most precise witnesses cannot take the stand?


