AI in the Courtroom: Judges Increasingly Turn to Algorithms for Sentencing Guidance


When Judge Patricia Morales sentenced a defendant in a drug trafficking case in a Phoenix courtroom last month, she did something that would have been unthinkable a decade ago: she consulted an artificial intelligence system that analyzed thousands of similar cases to recommend a sentencing range.

The system, developed by a company called JusticeMetrics, is now used in courtrooms across 14 states — and it is at the center of one of the most contentious debates in American criminal justice.

How AI Sentencing Tools Work

AI sentencing platforms like JusticeMetrics and its competitor, Equitas AI, use machine learning models trained on millions of court records to assess a defendant's risk of reoffending and to suggest sentencing ranges based on comparable cases. The systems analyze variables including the nature of the offense, criminal history, age, employment status, and other factors.
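To make the mechanics concrete, here is a purely hypothetical sketch of the kind of risk-scoring model described above: a logistic regression that combines weighted case factors into a reoffense probability. Every feature name, weight, and the intercept below is invented for illustration and does not reflect the actual JusticeMetrics or Equitas AI systems, whose internals are proprietary.

```python
import math

# Hypothetical feature weights -- invented for illustration only.
WEIGHTS = {
    "prior_convictions": 0.45,   # count of prior convictions
    "offense_severity": 0.60,    # scale of 1 (minor) to 5 (severe)
    "age_under_25": 0.30,        # 1 if defendant is under 25, else 0
    "employed": -0.40,           # 1 if employed, else 0 (protective factor)
}
BIAS = -2.0  # intercept term

def risk_score(features: dict) -> float:
    """Map case factors to a 0-1 reoffense-risk probability (logistic model)."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# Example: first-time offender, moderate offense, over 25, employed.
print(round(risk_score({"offense_severity": 2, "employed": 1}), 3))  # 0.231
```

A model of this shape makes the critics' point tangible: if any input (such as employment status or zip code) correlates with race or class in the historical training data, the fitted weights will reproduce that correlation in the score.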

Proponents argue that these tools bring consistency and data-driven rigor to a process that has historically been subject to wide judicial discretion and, in many cases, demonstrable bias. Studies have repeatedly shown that human judges impose harsher sentences before lunch, on Mondays, and after their local sports team loses — patterns that suggest sentencing can be influenced by factors entirely unrelated to the case at hand.

"No algorithm is perfect, but the baseline it's replacing — human intuition — is deeply flawed," said Dr. Alan Xu, a computer scientist at Stanford University who consults for JusticeMetrics. "These tools don't replace judges. They give judges a reference point grounded in data rather than gut feeling."

The Bias Problem

Critics contend that AI sentencing tools risk encoding and amplifying the very biases they claim to eliminate. Because the models are trained on historical court data, they inevitably absorb patterns of racial and socioeconomic disparity that are baked into the criminal justice system.

A 2025 audit by the Brennan Center for Justice found that defendants from predominantly Black and Latino zip codes received higher risk scores from JusticeMetrics, even after controlling for offense type and criminal history. The disparity was modest — roughly 8 percent — but civil rights organizations say any racially correlated variation in sentencing recommendations is unacceptable.

"You cannot train an algorithm on a biased system and expect it to produce fair outcomes," said Keisha Roberts, director of the ACLU's Criminal Justice Reform Project. "These tools launder bias through a veneer of technological objectivity, and that makes them more dangerous, not less."

JusticeMetrics has disputed the audit's methodology, arguing that the Brennan Center failed to account for several variables that the algorithm considers. The company has also pointed to its own internal testing, which it says shows the system reduces racial disparity in sentencing compared to jurisdictions that do not use AI tools.

Legal Challenges Mount

The use of AI in sentencing is facing legal challenges in multiple states. In Ohio, a public defender filed a motion arguing that her client's Sixth Amendment right to confront the evidence against him was violated because the AI system's internal logic is proprietary and cannot be examined by the defense.

"My client was sentenced based in part on a recommendation generated by a black box," attorney Diane Kowalski told reporters. "He has no way to understand why the algorithm scored him the way it did, and neither does the judge."

The case, now before the Ohio Court of Appeals, could set a precedent for whether AI sentencing tools must be fully transparent to satisfy constitutional due process requirements. Legal scholars expect the issue to eventually reach the Supreme Court.

Judges Remain Divided

Among judges themselves, opinions are split. Some welcome the tools as a valuable supplement to their own judgment. Judge Morales in Phoenix said the AI recommendations help her "check my own assumptions" and ensure consistency across her courtroom.

Others are skeptical. Judge Franklin Hayes of Cook County, Illinois, who has refused to use AI sentencing tools, warned that over-reliance on algorithms could erode the fundamentally human nature of justice.

"Sentencing is not a math problem," Judge Hayes said in a recent interview. "It requires empathy, context, and moral judgment — things that no algorithm can replicate. I worry that judges will start deferring to the machine instead of doing the hard work of weighing each case on its own merits."

The Path Forward

Despite the controversy, the trend appears to be accelerating. Several states are considering legislation that would mandate the use of AI risk assessments in certain categories of cases, while others are moving to ban or restrict the technology.

The National Center for State Courts has called for the development of uniform standards governing the use of AI in courtrooms, including requirements for algorithmic transparency, regular bias audits, and clear guidelines specifying that AI recommendations are advisory rather than determinative.

As the debate continues, one thing is certain: the intersection of artificial intelligence and criminal justice will be one of the defining legal questions of this decade. How courts resolve it will shape the meaning of fairness in America for generations to come.
