COMPAS

Opening the lid on criminal sentencing software

In 2013, a Wisconsin man named Eric Loomis was convicted of fleeing an officer and driving a car without the owner's consent. He was denied probation and sentenced to six years in prison based, in part, on a prediction made by a secret computer algorithm. The algorithm, developed by a private company called Northpointe, had determined Loomis was at "high risk" of running afoul of the law again. Car insurers base their premiums on the same sorts of models, using a person's driving record, gender, age and other factors to calculate their risk of having an accident in the future.

Courts Are Using AI to Sentence Criminals. That Must Stop Now

A Wired op-ed on the lack of algorithmic transparency at issue in Wisconsin v. Loomis.

In Wisconsin, a Backlash Against Using Data to Foretell Defendants' Futures

Mr. Loomis was arrested in February 2013 and was accused of driving a car that had been used in a shooting. He pleaded guilty to eluding an officer and no contest to operating a vehicle without the owner’s consent.

Mr. Loomis, 34, is a registered sex offender, stemming from a past conviction for third-degree sexual assault.

Before his sentencing for his 2013 arrest, Mr. Loomis received a score on the Compas scale that suggested he was at a high risk of committing another crime. He is now serving his six-year sentence, with a possible release in 2019.

Hannah-Moffat (2018). Algorithmic risk governance: Big data analytics, race and information activism in criminal justice debates

Meanings of risk in criminal justice assessment continue to evolve, making it critical to understand how particular compositions of risk are mediated, resisted and re-configured by experts and practitioners. Criminal justice organizations are working with computer scientists, software engineers and private companies that are skilled in big data analytics to produce new ways of thinking about and managing risk.

Green and Chen (2019). Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments

Despite vigorous debates about the technical characteristics of risk assessments being deployed in the U.S. criminal justice system, remarkably little research has studied how these tools affect actual decision-making processes. After all, risk assessments do not make definitive decisions; they inform judges, who are the final arbiters.

Eckhouse et al. (2018). Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment

Scholars in several fields, including quantitative methodologists, legal scholars, and theoretically oriented criminologists, have launched robust debates about the fairness of quantitative risk assessment. As the Supreme Court considers addressing constitutional questions on the issue, we propose a framework for understanding the relationships among these debates: layers of bias. In the top layer, we identify challenges to fairness within the risk-assessment models themselves. We explain types of statistical fairness and the tradeoffs between them.
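One of those tradeoffs can be made concrete with a standard identity from the fairness literature (often attributed to Chouldechova, 2017, and offered here as context rather than as Eckhouse et al.'s own formulation). For a group with base rate of reoffending p, the false positive rate (FPR), false negative rate (FNR), and positive predictive value (PPV) are arithmetically linked:

    \[
      \mathrm{FPR} \;=\; \frac{p}{1-p} \cdot \frac{1-\mathrm{PPV}}{\mathrm{PPV}} \cdot \bigl(1-\mathrm{FNR}\bigr)
    \]

If two groups reoffend at different base rates, a tool that matches PPV and FNR across them must produce different false positive rates, and vice versa; this is the tradeoff at the center of debates over whether COMPAS's error rates are fair.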

Dressel and Farid (2018). The accuracy, fairness, and limits of predicting recidivism

Algorithms for predicting recidivism are commonly used to assess a criminal defendant’s likelihood of committing a crime. These predictions are used in pretrial, parole, and sentencing decisions. Proponents of these systems argue that big data and advanced machine learning make these analyses more accurate and less biased than humans. We show, however, that the widely used commercial risk assessment software COMPAS is no more accurate or fair than predictions made by people with little or no criminal justice expertise.
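Part of what makes this result striking is the simplicity of the competitive baseline: the authors also show that a linear classifier using just two features, age and total number of prior convictions, predicts recidivism about as well as COMPAS. A minimal Python sketch of that kind of two-feature baseline follows; the synthetic data, feature ranges, and outcome labels are placeholder assumptions, not the authors' code or the underlying court records.

    # Two-feature recidivism baseline in the spirit of Dressel and Farid (2018).
    # The data below are synthetic placeholders; a real replication would use
    # defendant records with observed two-year rearrest outcomes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.integers(18, 70, n),  # age at assessment
        rng.poisson(3.0, n),      # total number of prior convictions
    ])
    y = rng.integers(0, 2, n)     # placeholder: 1 = rearrested within two years

    # Cross-validated accuracy is the headline metric in the paper's
    # human-versus-COMPAS comparison (both land near 65% on real data).
    model = LogisticRegression()
    print(cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())

On real data, the point is not that such a model is good; it is that COMPAS's far richer inputs buy no additional accuracy over this two-variable baseline.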

Brennan et al. (2008). Evaluating the Predictive Validity of the Compas Risk and Needs Assessment System

This study examines the statistical validation of a recently developed, fourth-generation (4G) risk-need assessment system (Correctional Offender Management Profiling for Alternative Sanctions; COMPAS) that incorporates a range of theoretically relevant criminogenic factors and key factors emerging from meta-analytic studies of recidivism. COMPAS's automated scoring provides decision support for correctional agencies for placement decisions, offender management, and treatment planning.
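In this validation literature, predictive validity is conventionally summarized with the area under the ROC curve (AUC): the probability that a randomly chosen recidivist receives a higher risk score than a randomly chosen non-recidivist. A brief sketch on invented decile scores follows; the study's own data are not public, and published COMPAS validations tend to report AUCs around 0.70.

    # Computing AUC, the standard predictive-validity statistic for risk tools.
    # The scores and outcomes here are invented for illustration.
    from sklearn.metrics import roc_auc_score

    reoffended = [1, 0, 1, 1, 0, 0, 1, 0]  # observed outcome after release
    risk_score = [8, 3, 6, 9, 4, 2, 5, 7]  # COMPAS-style decile scores (1-10)

    # 0.5 means chance-level ranking; 1.0 means every recidivist
    # outscored every non-recidivist.
    print(roc_auc_score(reoffended, risk_score))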

A Popular Algorithm Is No Better at Predicting Crimes Than Random People

The COMPAS tool is widely used to assess a defendant's risk of committing more crimes. Scholarly studies show that the automated tool is no more accurate or fair than the predictions of untrained people.

When Algorithms Take the Stand

In February of 2013, Eric Loomis was found driving a car that had been used in a shooting. He was arrested; he pleaded guilty to eluding an officer and no contest to operating a vehicle without its owner's consent. The judge in Loomis's case gave him a six-year prison sentence for those offenses – a length determined not only by Loomis's criminal record but also, in part, by his score on the COMPAS scale, an algorithmically determined assessment that aims, and claims, to predict an individual's risk of recidivism.