Human perception and decision-making are riddled with cognitive biases: tendencies to think in certain ways that can lead to irrational judgments. For example, you might believe you are more likely to die in a plane crash than of a heart attack (a product of the availability heuristic). While that belief can make you anxious about flying, it doesn't really affect your livelihood. Other biases, however, can have a far more significant effect on your life. For instance, research has consistently demonstrated a gender bias in hiring: firms are more likely to hire a man than a woman, even when both applicants present identical resumes. Cognitive bias has also been found to shape lending decisions. Research on loan behavior on the global microfinance website Kiva, for example, found that applicants were more likely to receive a loan if they appeared lighter-skinned, more attractive, and less obese.
Bias in decision-making is ever present, and in today's increasingly digitized world, human decisions are steadily being delegated to algorithms. It is therefore critical that people understand how data, including their own, is used in decisions that can affect their ability to get a loan, a job, or access to services.
Algorithms are sets of rules or processes that computers follow to solve problems. While algorithms are everywhere in our daily lives, deciding what news we read and what ads we see, and even detecting diseases, they are often deeply flawed.
One reason algorithmic decision-making is problematic is that few people understand how the algorithms are built, where the data used to train them came from, or the limits of where an algorithm can be applied effectively. A 2014 White House report argued that algorithmic decision-making can carry the practice of "redlining" (denying someone services, or charging them higher prices, based on their demographic characteristics) into the digital economy, potentially making the most vulnerable people in our society even more vulnerable.
Algorithmic bias can stem from two related issues. First, bias occurs when an algorithm is trained on a limited data set and then applied to a diverse population. The most common example is racial bias arising from the practice of training algorithmic products and services on data from predominantly Caucasian populations; Professor Kate Crawford recently called out artificial intelligence as having "a white guy problem." Second, biased algorithms can arise from problems with how the data are collected in the first place. Most algorithms are trained on historical data, so any bias in how that data was gathered carries straight into their predictions. Take, for example, the historical crime statistics used to train predictive policing tools. Such an algorithm is biased toward areas where crimes were reported in the first place and to which police responded. It fails to account for crimes that go undetected or unreported, skewing future police attention toward certain neighborhoods over others.
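To see how this feedback works, consider a deliberately simplified simulation. The Python sketch below is entirely hypothetical (invented rates and names, no real policing data): both neighborhoods have the same true crime rate, but one starts out more heavily patrolled, so more of its crime gets recorded, and a "predictive" model that allocates next year's patrols from recorded counts never corrects the initial skew.

```python
import random

random.seed(0)

TRUE_CRIME_RATE = 0.05                # identical in both neighborhoods
POPULATION = 10_000
patrol_share = {"A": 0.7, "B": 0.3}   # hypothetical initial policing skew

for year in range(5):
    recorded = {}
    for hood, share in patrol_share.items():
        crimes = sum(random.random() < TRUE_CRIME_RATE
                     for _ in range(POPULATION))
        # Only crimes committed where police are present get recorded.
        recorded[hood] = int(crimes * share)
    total = sum(recorded.values())
    # "Predictive" step: next year's patrols follow this year's recorded
    # counts, so the data reflects patrol allocation, not true crime.
    patrol_share = {h: c / total for h, c in recorded.items()}
    print(year, recorded, {h: round(s, 2) for h, s in patrol_share.items()})
```

Despite identical true crime rates, neighborhood A's recorded counts stay roughly double neighborhood B's year after year, and the patrol allocation locks that disparity in.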
Algorithmic bias in action
Not surprisingly, algorithmic bias often reflects common areas of human bias: race, gender, ethnicity, and socioeconomic status.
A soap dispenser was recently called racist because it failed to recognize darker skin tones when deciding whether to dispense soap onto someone's hand. This is because the technology was trained on a limited range of skin colors. The same thing happened with the Xbox Kinect and HP webcams, whose cameras couldn't detect black faces, and with social robots that could only play peek-a-boo with white people.
A machine learning tool trained on photos of people was recently found to have developed a sexist bias, for instance automatically associating a picture of a kitchen with a woman rather than a man.
In an award-winning article, ProPublica explored the use of machine learning to predict recidivism rates among offenders, only to find that higher risk ratings were disproportionately applied to minority groups in America because the algorithm was trained on inherently prejudiced data.
Algorithms are only as good as the data put into them. For instance, an algorithm used to distribute loans might conclude that people from certain demographics are less likely to repay their loans, simply because it was trained on a data set in which loans were unfairly distributed in the first place.
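As a minimal illustration (synthetic data and invented thresholds, not any real lender's model), the Python sketch below trains an off-the-shelf classifier on historical loan approvals that were biased against one group despite identical repayment ability. The model faithfully learns and reproduces the disparity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical applicants: an income score plus a group flag (0 or 1).
income = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Historical approvals were biased: group 1 needed a much higher income
# score to be approved, even though true repayment ability is identical.
noise = rng.normal(0, 0.5, n)
approved = (income + noise > np.where(group == 1, 0.8, -0.2)).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)
preds = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical approval {approved[group == g].mean():.0%}, "
          f"model approval {preds[group == g].mean():.0%}")
```

Note that dropping the group flag from the features would not necessarily fix this: correlated proxies, such as postcode, often let a model reconstruct the same pattern.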
In philanthropy, a giving program that uses an algorithm to decide who in the community will spend the money wisely might disproportionately favor people facing only economic barriers to prosperity, whilst further marginalizing those facing social barriers beyond their control. Such a program could end up reinforcing societal inequities by failing to provide resources for the most vulnerable in society.
Designing out bias
Some argue that algorithmic bias is rarely intentional, but some stories make you question how much the average organization really knows about the algorithms it is using. For instance, research on SAT tutoring prices found that customers in areas with a high proportion of Asian residents were often charged more, a practice justified as an "incidental" result of geography-based pricing models. Similarly, Amazon recently had trouble explaining why its same-day delivery service was unavailable in predominantly black neighborhoods.
Intentional or otherwise, the consequences of algorithmic bias are having a significant impact on societal wellbeing: skewing access to jobs, funding, and services, and amplifying unfair targeting, biased sentencing, and other social injustices.
Designing out bias is a fundamental issue for societal wellbeing in the digital age. To date, there is limited accountability for the consequences of algorithms. Some argue for more regulation to protect citizens' right not to be discriminated against and to understand how decisions that affect their lives are made. However, the scale and complexity of algorithmic systems sometimes exceed human capacity for oversight, suggesting a dangerous gap in governance. Opponents counter that regulation unnecessarily constrains innovation and that responsibility for applying an algorithm lies with the user, not its creator.
While policymakers build a regulatory framework around the use of algorithms in many facets of society, some groups have already taken more concrete actions to combat bias.
- The American Civil Liberties Union (ACLU) and AI Now recently partnered to explore three key issues arising from biased artificial intelligence: criminal justice, equity, and surveillance.
- The National Science Foundation recently awarded a $1 million grant to University of Wisconsin-Madison researchers to tackle bias in algorithms with their program FairSquare. The program is positioned as a regulatory tool that tests decision-making for fairness before an algorithm is put to use, enabling organizations to identify and fix problems before a product hits the market (a toy version of such a pre-deployment fairness check is sketched after this list). Such an approach would have been ideal for Google and Flickr before they released photo applications that identified black people as 'gorillas' and concentration camps as 'sport.'
- The Algorithmic Justice League (AJL) is a platform where citizens can report algorithmic bias when they experience or observe it. It is a space to raise awareness of algorithmic bias and to encourage accountability practices during the design, development, and deployment of coded systems. Organizations can also request that the AJL check their designs for bias.
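FairSquare itself approaches this with formal verification, but the simplest version of the underlying idea fits in a few lines. The Python sketch below (hypothetical function and data, invented numbers) implements a basic demographic-parity audit using the "four-fifths rule" familiar from US employment law: if one group's approval rate falls below 80% of another's, the model is flagged before deployment.

```python
import numpy as np

def disparate_impact(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lower group approval rate to the higher one.
    Values below 0.8 fail the common 'four-fifths rule'."""
    rates = [decisions[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Hypothetical audit set: a candidate model's yes/no decisions,
# alongside each applicant's (synthetic) group membership.
decisions = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group     = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

ratio = disparate_impact(decisions, group)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 for this toy data
if ratio < 0.8:
    print("Fails the four-fifths rule: flag the model before deployment.")
```

In a real audit, the decisions would come from running the candidate model on a held-out audit set, and the right metric and threshold would depend on the legal and ethical context.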
To ensure a resilient and prosperous civil society in the digital age, we must consider all facets of bias and explore ways to test for, and design out, bias in our digital tools.
Dr. Paula Dootson spoke on “Considering Bias” at the Digital Impact World Tour event on July 29th, 2017 in Brisbane, Australia. You can connect with her on Twitter and LinkedIn.
Have thoughts or case studies to share regarding algorithmic decision-making and bias? Chime in below with a comment.