
Bad Data In: Predictive Algorithmic Risk Assessment in Pretrial Detention

Project

Cities and states are turning to predictive algorithms to determine who would be at lowest risk of recidivism if released from jail pretrial. These algorithms predict using factors correlated with race, and there are few guidelines for how they should be implemented. We will work with those impacted by mass incarceration, technologists, researchers, and national civil rights leaders to identify bias and best practices in these algorithms, and to push cities and states nationwide to implement those best practices.

Team


Challenge

Cities are using predictive algorithms to determine who is low risk and can be released pretrial into the community, and who is high risk and should remain incarcerated to protect public safety. The factors fed into those algorithms are often correlated with race and perpetuate decades of racial, ethnic, and economic bias. When cities and states implement tools of this nature, there is often little comprehensive support for the actors who use them, from bail magistrates and judges to public defenders and their clients.
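To make the mechanism concrete, here is a minimal sketch of how an actuarial, point-based pretrial risk score typically works. The factors, weights, and cutoffs below are hypothetical illustrations, not the formula of any tool in use; the point is that inputs such as arrest history reflect patterns of policing that differ by race and neighborhood, so a score can encode racial bias even though race itself is never an input.

# Hypothetical sketch of a point-based pretrial risk score.
# Factors and weights are illustrative only; they do not come from
# any specific tool deployed in Philadelphia or elsewhere.

def risk_score(prior_arrests: int, prior_failures_to_appear: int,
               age_at_arrest: int, current_charge_is_violent: bool) -> int:
    """Sum weighted points; higher totals map to 'higher risk' bins."""
    score = 0
    score += min(prior_arrests, 5)                 # arrest history: a proxy for police contact
    score += 2 * min(prior_failures_to_appear, 2)  # FTA history: often a proxy for poverty and transportation
    if age_at_arrest < 23:
        score += 2                                 # youth adds points
    if current_charge_is_violent:
        score += 3
    return score

# Two people with identical current conduct but different exposure to
# policing receive very different scores:
print(risk_score(prior_arrests=4, prior_failures_to_appear=1,
                 age_at_arrest=22, current_charge_is_violent=False))  # 8
print(risk_score(prior_arrests=0, prior_failures_to_appear=0,
                 age_at_arrest=22, current_charge_is_violent=False))  # 2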

Pretrial detention has a major impact on those held in jail. Poor people lose jobs, housing, and access to their families when they are locked up. Their likelihood of pleading guilty goes up, and their length of incarceration can double. Our home city of Philadelphia has one of the highest per capita populations of pretrial detainees of any big city in the US. The City is considering implementing a risk assessment algorithm to create profiles for defendants and to identify low-risk defendants who can be released from incarceration and granted at-home monitoring.

This team is working with local community members directly impacted by the criminal justice system, as well as researchers at Rutgers and Penn, technologists and policy advocates convened by The Leadership Conference on Civil and Human Rights, and local stakeholders across the country, to build a database of pretrial risk assessment algorithms and to understand how these algorithms are used in practice, as well as their potential for algorithmic bias. We plan to use the results of the database to help shape the design and implementation of the pretrial risk assessment algorithm in Philadelphia and in other communities nationwide that are piloting these new algorithmic, actuarial risk prediction tools.
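As a rough illustration of what such a database might capture for each jurisdiction, here is a minimal sketch in Python. The field names are our own assumptions for the sake of illustration, not the project's actual schema.

# Hypothetical record structure for cataloguing pretrial risk
# assessment tools; all fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RiskToolRecord:
    jurisdiction: str                # e.g., "Philadelphia, PA"
    tool_name: str                   # name of the instrument in use
    input_factors: list[str]         # criminal history, age, pending charges, etc.
    predicted_outcome: str           # e.g., "failure to appear" or "re-arrest"
    decision_points: list[str]       # where the score is used: bail hearing, release conditions
    validation_public: bool          # has an independent validation study been published?
    score_shared_with_defense: bool  # can defenders inspect and contest the score?

# Example entry (all values hypothetical):
example = RiskToolRecord(
    jurisdiction="Example County",
    tool_name="(hypothetical instrument)",
    input_factors=["prior arrests", "age at first arrest", "pending charges"],
    predicted_outcome="failure to appear",
    decision_points=["bail hearing"],
    validation_public=False,
    score_shared_with_defense=False,
)

Fields like these would let researchers compare tools across jurisdictions: which factors they use, what they claim to predict, and whether the people subject to them can scrutinize the scores.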

Conversations

Algorithmic Risk Assessment in Pretrial Detention

Di Luong presented the work of the Media Mobilizing Project at the Data on Purpose conference at Stanford University in February 2018.

Learn More

Reach out to the Media Mobilizing Team to learn more about this project.