
Flipping Our Algorithmic Assumptions

Opinion

"There is plenty of evidence that algorithmically-driven policies and enterprise innovations are exacerbating social harms."

I’ve had countless conversations with well-intentioned people from a number of social sectors and academic disciplines who are working on digital innovations they firmly believe can be used to address shared social challenges. Some of these approaches, such as ways to use aggregated public data, are big investments in an unproven hypothesis: that making use of these data resources will improve public service delivery.

When I ask these folks for evidence to support their hypotheses, they look at me funny. I get it: the underlying hypothesis that better use of information will lead to better outcomes seems so straightforward that asking for evidence feels unnecessary. In fact, the assumption is so widespread that we’re not only failing to question it, we’re ignoring countervailing evidence.

There is plenty of evidence that algorithmically driven policies and enterprise innovations are exacerbating social harms such as discrimination and inequity. Automated decision-making tools are already amplifying these harms, from the ways social media platforms are being used to the application of predictive technologies in policing and education. Policy innovators, software coders, and data collectors need to assume that any automated tool applied to an already unjust system will exacerbate the injustices, not magically overcome these systemic problems.

We need to flip our assumptions about applying data and digital analysis to social problems. There’s no excuse for continuing to act like inserting software into a broken system will fix it. It’s more likely to break it even further.

Rather than assume algorithms will produce better outcomes and hope they don’t accelerate discrimination, we should assume they will be discriminatory and inequitable unless designed specifically to redress these issues. This means different software code, different data sets, and simultaneous attention to structures for redress, remediation, and revision. Then and only then should we implement and evaluate whether an algorithmic approach can help improve whatever service area it is designed for (housing costs, educational outcomes, environmental justice, transportation access, etc.).

In other words, every innovation for public (all?) services should be designed for the real world: a world in which power dynamics, prejudices, and inequities are part of the system into which the algorithms will be introduced. This assumption should inform how the software itself is written (with measures in place to check for and remediate biases and their amplification) as well as the structural guardrails surrounding the data and software.
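As one purely illustrative example of what such a built-in check might look like, here is a minimal sketch in Python that compares an automated tool’s approval rates across demographic groups and flags the result for human review when the gap crosses a threshold. The group labels, the ten percent threshold, and the audit-log format are assumptions invented for this sketch, not anything prescribed here.

    from collections import defaultdict

    def approval_rate_gap(decisions):
        """decisions: iterable of (group_label, approved_bool) pairs.
        Returns (largest gap in approval rate between any two groups, per-group rates)."""
        totals = defaultdict(int)
        approved = defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            if ok:
                approved[group] += 1
        rates = {g: approved[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    if __name__ == "__main__":
        # Hypothetical audit log of automated decisions: (group, approved?)
        log = ([("group_a", True)] * 80 + [("group_a", False)] * 20
               + [("group_b", True)] * 55 + [("group_b", False)] * 45)
        gap, rates = approval_rate_gap(log)
        print("approval rates by group:", rates)
        if gap > 0.10:  # threshold chosen for illustration only
            print(f"disparity flagged: {gap:.0%} gap -- route to human review and remediation")

The point is not this particular metric; it is that the check, the threshold, and the escalation path are written down and enforced before the tool is deployed, not bolted on after the harm appears.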

By this I mean implementing new organizational processes to monitor the discriminatory and harmful ways the software is working, and building systems for revision, remediation, and redress. If these social and organizational processes can’t be built, then the technological innovation shouldn’t be used: if it exacerbates inequity, it’s not a social improvement.

Better design of our software for social problems involves factoring in the existing systemic and structural biases as well as directly seeking to redress them, rather than assuming that an analytic toolset on its own will produce more just outcomes. There is no “clean room” for social innovation — it takes place in the inequitable, unfair, discriminatory world of real people.

No algorithm, machine learning application, or policy innovation on its own will counter that system, and it’s past time to stop pretending it will. It’s time to stop being sorry for or surprised by the ways our digital, data-driven tools aren’t addressing social challenges, and to start designing them so that they stand a chance.