Artificial Intelligence

This module introduces key topics around AI including whether you should use AI within your organization, what that could mean and how to understand algorithmic bias.



What is AI? What should nonprofits ask about AI and digital tools?

Watch “Me, My .Org, and AI: In Conversation with Allison Fine, Mutale Nkonde, and Marietje Schaake” (10 Mins)

Scenarios

RankMyProposal

Instructions: Take the next 2–3 minutes to read the following scenario quietly on your own. When you finish, consider the questions that follow, first on your own and then with your group. At the end of your discussion, be prepared to report back to the full group on what your breakout room discussed.

Preventing Fraud: Artificial Intelligence and Nonprofits

Instructions: Take the next 2–3 minutes to read the following scenario quietly on your own. When you finish, consider the questions that follow, first on your own and then with your group. At the end of your discussion, be prepared to report back to the full group on what your breakout room discussed.

Close Reading Exercise

Welcome to the Stanford Digital Civil Society Lab “Me, my .org, and AI” workshop. There are three activities for you to do in advance of the workshop. They should take you about 30 minutes total. We’ll ask you to bring your notes with you to the workshop; you’ll have a chance to share your thoughts and hear from others.


Additional Resources

The Stanford Digital Civil Society Lab curated these resources for those interested in taking action on AI, digital systems, and data collection. They are intended to offer a range of entry points. You can suggest additional resources by contacting us.

Take Action

  • Human Rights Data Analysis Group
    Nonprofit, nonpartisan organization that applies rigorous science to the analysis of human rights violations around the world.
  • AI Toolkit for Racial Equity
    A toolkit that provides resources for using and understanding AI in the context of racial equity work.
  • Gender Shades
    Gender Shades is an Algorithmic Justice League project that studies how accurately commercial AI services classify faces across gender and skin type. Their website lists an email address for requesting the datasets from their studies.
  • Hacking 4 Justice
    Hacking 4 Justice’s trainings convene leaders from the State’s Attorney’s Office (SAO), experts from the data science field, and community members of all backgrounds to learn from and with each other, and to build communities that are more self-knowledgeable, more prosperous, and more just.
  • Data for Black Lives
    Data for Black Lives is a movement of activists, organizers, and mathematicians committed to the mission of using data science to create concrete and measurable change in the lives of Black people.
  • AI Procurement in a Box
    A practical guide that helps governments rethink how they procure and use AI, with a focus on innovation, efficiency, and ethics.
  • Just Data Lab
    The JUST DATA Lab brings together activists, artists, educators, and researchers to develop a humanistic approach to data conception, production, and circulation. Our aim is to rethink and retool data for justice.
  • Algorithmic Accountability Policy Toolkit
    A toolkit that covers frequently asked questions about algorithmic systems as well as resources pulling from multiple fields, including Healthcare, Education, Criminal Justice, Immigration, and more.
  • Mapping Police Violence
    Mapping Police Violence is a research collaborative collecting comprehensive data on police killings nationwide to quantify the impact of police violence in communities.
  • AFROTECTOPIA
    A social institution fostering interdisciplinary innovation at the intersections of art, design, technology, Black culture and activism.
  • Where Texting Brings People to Court Toolkit
    A toolkit that pairs with the corresponding “70 Million” podcast episode. Includes resources to get started with reform projects like the text based Court Reminder System in Palm Beach County.
  • Ethics and Algorithms Toolkit
    A toolkit that focuses on the use of AI in Government, specifically risk management algorithms, and how to be effective and transparent in their use.
  • Data for Democracy
    Data for Democracy is an enthusiastic network of individuals using data to drive better decisions and improve the world we live in.
  • AI Blindspot
    AI Blindspot provides cards on nine key oversights that can occur when using AI in any work, such as Privacy and Generalization Error.

Learn More

  • The Age of AI – How Far is Too Far? (Video: 34 minutes)
    PBS video that covers examples of cutting-edge technologies that use artificial intelligence and asks questions about AI’s limitations and downsides.
  • Joy Buolamwini TED Talk (Video: 8 minutes)
    TED Talk from the founder of the Algorithmic Justice League. She speaks about her experience with, and research on, facial recognition algorithms and what needs to be done to address these problems.
  • Where Texting Brings People To Court (Podcast: 27 minutes)
    Podcast episode that covers a project in Palm Beach, Florida that uses texting as a method to remind people about their court appointments.
  • Racial Bias in Healthcare (Article: 6 minutes)
    Nature article about a study showing that decision-making software used in the US healthcare system is racially biased, and about the effects of that bias.
  • Scaling Justice Podcast: Measures for Justice (Podcast: 50 minutes)
    Scaling Justice Podcast episode that features stories about data being used in action and how your agency can start moving toward greater transparency and positive change.
  • Scaling Justice Podcast: Lessons in Data Quality (Podcast: 27 minutes)
    Scaling Justice Podcast episode centered on a conversation about whether we can know if the data we use is biased.
  • Challenging The Algorithms of Oppression (Safiya Noble) (Video: 12 minutes)
    Talk from Safiya Noble, author of the book Algorithms of Oppression. In this talk, she gives more examples of algorithmic bias and ways that we can challenge them.
  • Risk Assessment: Explained (Article: 20 minutes)
    Article from The Appeal that explains how algorithmic risk assessments in the criminal justice system work and the problems involved with their implementation.
  • Is Ethical AI Even Possible? (Article: 9 minutes)
    New York Times article that discusses what ethical AI looks like and whether it is even possible given the nature of AI.
  • Fairness In Machine Learning (Course materials)
    Course materials from a UC Berkeley class on the topic of fairness and bias in Machine Learning. The lectures are quite technical, but there are great general resources in the “Legal and Policy Perspectives” and “Background Reading” sections.
  • Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks (Book)
    Book by Virginia Eubanks that provides an investigative look into the effects of data mining, policy algorithms, and predictive risk models on poor and working-class people in America.
  • Gender Bias in AI (Video: 2.5 minutes)
    Short video that explains gender bias in algorithms from the Natural Language Processing field.