Algorithmic Risk Assessment in Pretrial Detention

With the help of a Digital Impact grant, one Philadelphia-based nonprofit is taking on the criminal justice system, one algorithm at a time.

When Hannah Sassaman heard that the City of Philadelphia was looking to use AI to help judges determine whether someone accused of a crime should be released before trial, she was instantly worried.

“I knew that any data used to feed an algorithm of that nature would be extraordinarily ugly in its racism and its flaws,” she says. Sassaman is the policy director at Media Mobilizing Project, a Philadelphia-based nonprofit focused on issues related to technology and its impact on minorities and low-income residents.

Sassaman wanted to know more about how courts around the country were using AI. Now, two years later, with the help of a Digital Impact grant, Sassaman and her team are beginning to share with communities nationwide what they’ve learned about how jurisdictions are using technology to decide who stays in jail and who can return home before trial. Their goal is to help local groups either thwart efforts like Philadelphia’s or at least have a voice in how they are implemented.

Pretrial risk assessment tools, as they’re known, have become increasingly popular as local governments work to reduce jail overcrowding. Some criminal justice reformers also see them as viable alternatives to cash bail, which tends to negatively affect minorities and low-income defendants the most. In August, California became the first state to abolish the practice.

In surveying the landscape, Di Luong, who oversees the nonprofit’s research and policy organizing, was shocked by what she found. In some jurisdictions around the country, algorithms relied on as few as nine variables to calculate the “riskiness” of an individual, scored on a scale of one to 10—one being the lowest risk of problems if he or she is set free, 10 being the highest. In other jurisdictions, as many as 100 variables were weighed.

“Different tools around the country are using very different characteristics,” says Luong. The most common variables, she says, are a defendant’s age, prior arrests, zip code, whether they have a job, and whether they rent or own a residence. “Our objective is to document and communicate policies and practices that are often unavailable to the public. This research helps us understand how a raw score, like a nine or a 56, influences judicial decisions during bail hearings.”
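To make that concrete, here is a minimal sketch of how a simple point-based instrument might work. The variable names, weights, and cut points below are hypothetical illustrations, not drawn from any jurisdiction’s actual tool; they only show how a handful of weighted factors can be summed into a raw score and bucketed onto a one-to-10 scale.

```python
# Hypothetical point-based pretrial scoring sketch.
# Variable names, weights, and cut points are illustrative only,
# not taken from any real jurisdiction's instrument.

WEIGHTS = {
    "age_under_23": 2,
    "prior_arrest": 1,   # points per prior arrest, capped below
    "unemployed": 1,
    "renter": 1,
}

def raw_score(defendant: dict) -> int:
    """Sum weighted factors into a raw score."""
    score = 0
    if defendant["age"] < 23:
        score += WEIGHTS["age_under_23"]
    score += min(defendant["prior_arrests"], 5) * WEIGHTS["prior_arrest"]
    if not defendant["employed"]:
        score += WEIGHTS["unemployed"]
    if defendant["housing"] == "rent":
        score += WEIGHTS["renter"]
    return score

def risk_level(score: int, max_raw: int = 9) -> int:
    """Bucket a raw score onto a 1-to-10 scale (1 = lowest risk)."""
    return 1 + round(9 * min(score, max_raw) / max_raw)

case = {"age": 21, "prior_arrests": 3, "employed": False, "housing": "rent"}
print(raw_score(case), risk_level(raw_score(case)))  # 7 8
```

The sketch also shows why raw numbers are hard to compare across jurisdictions: a nine on one tool’s scale and a 56 on another’s can reflect entirely different variables and weightings.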

Moreover, some algorithms are developed by for-profit companies for off-the-shelf consumption, while others are homegrown tools built by local university professors. Newer models—including the one being considered in Philadelphia—rely on random forest classification, a common machine learning technique that uses randomly generated decision trees and averages their individual predictions to derive a score. Critics say these methods are too complex and obscure for use in pretrial detention rulings.
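For readers unfamiliar with the technique, the following is a minimal sketch of random forest classification using scikit-learn and synthetic data; the nine feature columns and the failure-to-appear labels are randomly generated placeholders, not the model Philadelphia is considering.

```python
# Sketch of random forest classification on synthetic pretrial-style data.
# Features and labels are random placeholders, not real court records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 9))        # stand-ins for age, prior arrests, etc.
y = rng.integers(0, 2, size=1000)     # 1 = failed to appear, 0 = appeared

# Each of the 500 decision trees is fit on a bootstrap sample with a random
# subset of features; the forest averages their individual predictions.
forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

new_case = rng.normal(size=(1, 9))
probability = forest.predict_proba(new_case)[0, 1]  # averaged vote across trees
print(f"predicted probability of failure to appear: {probability:.2f}")
```

The averaging step is what critics point to: no single tree explains the final score, which makes the output difficult to interrogate in a courtroom.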

To make things more complicated, some officials Luong interviewed didn’t know how scores were calculated in their jurisdictions. Others who did know refused to disclose the information.

“When we saw that there are so many different methods used to develop these tools, we realized there’s no single solution to finding a remedy to address any flaws or other issues in their design,” says Luong. “It would be a game of whack-a-mole.”

Worse yet, some evidence suggests that the use of algorithms in pretrial detention decisions isn’t working—either to reduce jail populations or to address racial disparities. For example, many tools rely on factors that already reflect racial bias, like prior convictions. “We’re still seeing disparities in who stays locked up,” says Sassaman. “They’re still mostly black and brown.”

A Blueprint for Protecting Rights

Luong and Sassaman say the work the Digital Impact grant has enabled is just beginning. Luong, for example, is beta testing a comprehensive online resource for nonprofits and other community leaders to learn about any pretrial risk assessment tools in use or under consideration in their areas—and to compare them with others. One example is a regional database she compiled, which shows that 11 different algorithms are in use across California alone.

“Grassroots organizations that don’t have the time to invest and understand these tools need more facts about them,” says Luong. “They also need the language to resist them or assert direct oversight over their implementation.”

The grant has also allowed Media Mobilizing Project to work with representatives from more than 100 civil rights organizations, including the Leadership Conference on Civil and Human Rights, the ACLU, and the NAACP Legal Defense and Educational Fund, to come out against the use of technology in pretrial detention decisions, and to develop six principles aimed at mitigating harm when algorithms are used.

One of these principles calls for algorithms to be fully transparent and independently validated by data scientists working in partnership with communities affected by these decisions. Another says these tools should predict success, such as the likelihood of a defendant showing up for a court hearing, rather than the odds of failing to appear.

“With the Digital Impact grant, we are changing the narrative about these tools and the idea that technology has a role to play in determining who’s a public safety risk and who’s eligible for release,” says Luong.

Digital Impact, an initiative within the Digital Civil Society Lab at the Stanford Center for Philanthropy and Civil Society (Stanford PACS), helps fund research teams and nonprofit organizations looking to advance the safe, ethical, and effective use of digital resources for social good. With the support of the Bill & Melinda Gates Foundation, Digital Impact has given nearly three-quarters of a million dollars in grants since 2016.