

Max Planck Institute for Intelligent Systems

Project

Avoiding Discrimination in Automated Decision Making and Machine Learning

Team


  • Danilo Brajovic
  • Niki Kilbertus
  • Bernhard Schölkopf
  • Adrian Weller

Project Overview

As automated systems increasingly make decisions affecting humans, we must understand how our notions of fairness can be incorporated into them in a principled way. We implement various recent methods for discovering and removing discrimination in both data and algorithms. The resulting library, fairensics, is easy to use, well documented, and openly available for auditing datasets or algorithms and scrutinizing their fairness.

Challenge

Automated decision-making based on machine learning pervades almost all aspects of our daily lives, including consequential decisions in lending, hiring, and criminal justice. With the welfare of individuals in the hands of machines, concerns about discriminatory systems and unethical biases are rising. While the research community is making steady progress on techniques to address these concerns, the resulting tools rarely become usable by a broader audience outside of research. Our challenge is to bridge this gap between cutting-edge research and real-life application.

Approach

We implement and bundle relevant algorithms from the recent literature on fairness in machine learning into a single, easy-to-use Python library. In addition, we provide detailed tutorials in the form of Jupyter notebooks on how to use the library and interpret the results. During this project, other academic labs as well as large industry players such as Microsoft and IBM open-sourced projects similar to ours. This provided us with valuable prototypes and allowed us to survey up front what works particularly well. Moreover, for some specific fairness techniques, we were spared the task of re-implementing the algorithms from scratch.

One key aspect of the project is usability, so we spent significant time lowering the entry barrier for new users by providing detailed, annotated notebooks on how to use the fairensics library and, in particular, how to interpret the results.

Outputs and Progress

The core output of this project is the fairensics library, which is publicly available on GitHub. It contains code to handle datasets and machine learning models with a special focus on fairness and bias. Features that indicate membership in a protected group (for example age, gender, race, sexual orientation, or religion) can be explicitly marked as protected. This allows us to measure various kinds of dataset bias, such as stark imbalances in the data or labels of interest that are distributed differently across the protected groups. Moreover, when training predictive algorithms, the protected features can be taken into account to enforce various forms of fairness during the training procedure. For example, we can avoid disparate impact or disparate mistreatment, or ensure preferential treatment.
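
The exact fairensics interfaces are best consulted in the repository itself; purely as an illustration of the kind of dataset audit described above, the following self-contained sketch computes group sizes, per-group base rates, and a disparate impact ratio with plain pandas (all column and group names are hypothetical):

    import pandas as pd

    # Toy dataset with a hypothetical protected attribute "gender" and a binary
    # label "hired" (1 = favorable outcome). Column names are illustrative only.
    df = pd.DataFrame({
        "gender": ["f", "f", "f", "m", "m", "m", "m", "m"],
        "hired":  [0,   1,   0,   1,   1,   0,   1,   1],
    })

    # Group sizes reveal stark imbalances in the data.
    print(df["gender"].value_counts())

    # Base rate of the favorable label per protected group.
    base_rates = df.groupby("gender")["hired"].mean()
    print(base_rates)

    # Disparate impact ratio: favorable-outcome rate of the unprivileged group
    # divided by that of the privileged group (values far below 1 suggest bias).
    disparate_impact = base_rates["f"] / base_rates["m"]
    print(f"Disparate impact ratio: {disparate_impact:.2f}")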

To understand the ideas behind these technical notions of fairness and what they mean in practice, we further provide detailed, annotated Jupyter notebooks that walk through example applications on relevant datasets commonly used by the research community. These examples are designed to lower the entry barrier for non-experts to engage with cutting-edge techniques for identifying and removing discrimination in machine learning systems. They also contain interpretations and warnings that help readers make sense of the results and compare different technical notions of fairness.
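
Independent of the actual notebooks, the following rough sketch shows what such a comparison can look like: it trains a simple scikit-learn classifier on synthetic data and contrasts two common fairness notions, demographic parity and equal opportunity.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic data: protected attribute a (0/1), two features, binary label y.
    n = 2000
    a = rng.integers(0, 2, size=n)
    x = np.column_stack([rng.normal(a * 0.5, 1.0, size=n),
                         rng.normal(0.0, 1.0, size=n)])
    y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(0.0, 1.0, size=n) > 0).astype(int)

    clf = LogisticRegression().fit(x, y)
    pred = clf.predict(x)

    # Demographic parity gap: difference in positive prediction rates by group.
    dp_gap = abs(pred[a == 1].mean() - pred[a == 0].mean())

    # Equal opportunity gap: difference in true positive rates by group.
    def tpr(group):
        return pred[(a == group) & (y == 1)].mean()
    eo_gap = abs(tpr(1) - tpr(0))

    print(f"Demographic parity gap: {dp_gap:.3f}")
    print(f"Equal opportunity gap:  {eo_gap:.3f}")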

Insights

First, we were delighted to learn during the grant period that multiple universities, non-profits, and even large corporations recognized the issue of fairness in machine learning and set out to provide libraries similar to fairensics. For example, as part of a larger open-source effort on trustworthy artificial intelligence, IBM created an extensive fairness library called AIF360. As this library has been developed by a large team of professionals over a long period, we could learn a lot from their design decisions and could even build on top of some of their implementations.
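
As an example of building on an existing implementation, the sketch below uses AIF360's pre-processing interface (as documented at the time of writing) to measure disparate impact and reduce it via reweighing; the data frame and column names are toy placeholders, not part of fairensics:

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Toy data; "sex" is treated as the protected attribute (1 = privileged).
    df = pd.DataFrame({
        "sex":   [0, 0, 0, 1, 1, 1, 1, 1],
        "score": [0.2, 0.6, 0.4, 0.7, 0.9, 0.3, 0.8, 0.6],
        "label": [0, 1, 0, 1, 1, 0, 1, 1],
    })

    dataset = BinaryLabelDataset(df=df, label_names=["label"],
                                 protected_attribute_names=["sex"])
    privileged = [{"sex": 1}]
    unprivileged = [{"sex": 0}]

    # Measure dataset bias before pre-processing.
    metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                      privileged_groups=privileged)
    print("Disparate impact before:", metric.disparate_impact())

    # Reweighing assigns instance weights so that label and group membership
    # become independent in the weighted data.
    rw = Reweighing(unprivileged_groups=unprivileged,
                    privileged_groups=privileged)
    transformed = rw.fit_transform(dataset)
    metric_after = BinaryLabelDatasetMetric(transformed,
                                            unprivileged_groups=unprivileged,
                                            privileged_groups=privileged)
    print("Disparate impact after: ", metric_after.disparate_impact())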

We also realized that there is a long way to go from research code or pseudocode to a well-organized library that is flexible and easy to use at the same time. In a future project, we would allocate even more time to conceiving the design and the various interfaces of the library. There were many trade-offs to be made between flexibility for expert users and usability with meaningful default settings for novices. As we tried to reach as broad an audience as possible, including complete novices, we opted for high-level interfaces that hide most of the complexity whenever doing so did not significantly restrict functionality.
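
To make that trade-off concrete, here is a purely hypothetical sketch, not the actual fairensics API, of a high-level call with meaningful defaults that forwards optional expert settings to a lower-level routine:

    import pandas as pd

    # Hypothetical interface sketch; these names are illustrative only.
    def audit_fairness(df, label, protected,
                       metrics=("disparate_impact",), **expert_options):
        """Novice-friendly entry point: one call with sensible defaults.

        Expert users can forward extra keyword arguments (e.g. a custom
        favorable label) to the lower-level routine without changing the call.
        """
        return {m: _compute_metric(df, label, protected, m, **expert_options)
                for m in metrics}

    def _compute_metric(df, label, protected, metric, favorable=1):
        """Lower-level routine exposing the knobs the high-level call hides."""
        rates = df.groupby(protected)[label].apply(
            lambda s: (s == favorable).mean())
        if metric == "disparate_impact":
            return rates.min() / rates.max()
        if metric == "statistical_parity_difference":
            return rates.max() - rates.min()
        raise ValueError(f"unknown metric: {metric}")

    df = pd.DataFrame({"gender": ["f", "f", "m", "m", "m"],
                       "hired":  [0, 1, 1, 0, 1]})
    print(audit_fairness(df, label="hired", protected="gender"))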

Finally, when informally presenting prototypes to colleagues to gather feedback, we learned that code examples and notebook tutorials are the most important components for lowering the entry barrier, getting people engaged with the topic, and helping them use the library for their own problems.

Next Steps

The research community is still making rapid progress on removing bias from data, training fair algorithms, and post-processing decisions to mitigate discrimination. Interdisciplinary work has also led to a better understanding of which fairness notions humans perceive as most ethical in various settings and how different notions correspond to ideas from political philosophy. Moreover, researchers have started to investigate the long-term impact of fair decision making, i.e., how automated decisions may change society in the long run through downstream effects.

As an ongoing effort, we plan to curate recent research in the area and distill the most practicable ideas into usable implementations for our library. Beyond simply augmenting the functionality, one may imagine also specifying the application domain (e.g., hiring, parole in criminal justice, college admission, lending) and receiving appropriate advice on which notions of fairness survey respondents perceived as morally acceptable in that setting, or even on the local legal situation.

Learn More

Visit the Max Planck Institute for Intelligent Systems on GitHub at github.com/MPI-IS.
