Digital Impact was created by the Digital Civil Society Lab at Stanford PACS and was managed until 2024. It is no longer being updated.

Nonprofits and Artificial Intelligence

I’ve participated in numerous conferences, panels, and discussions on the topics of nonprofits and AI, foundations and AI, AI for good,[1] and so on. The vast majority miss the point altogether.

[1] I try to avoid conversations or initiatives structured as “[blank] for (social) good,” especially where the [blank] may be the name of a company or type of technology.

It’s not really a question of these organizations using artificial intelligence, which is how every one of these panels approaches it. Most civil society organizations may be buying software that applies algorithmic analysis and some AI to a large dataset, perhaps through their vendors of fund development data or software. And then, yes, there are legitimate questions to be asked about the inner workings, ethical implications, effects on staff and board, and so on.

Though important, in my opinion these questions are hardly worth a conference panel. Yes, they are important software vendor considerations, and it is important for all organizations to understand how these things work — just not in the “black magic” or “sector transforming phenomenon” way a conference organizer might want you to think.

The real issue is how large datasets (with all the legitimate questions raised about bias, consent and purpose) are being interrogated by proprietary algorithms (non-explainable, opaque, discriminatory) to feed decision making in the public and private sectors in ways that fundamentally shift how the people and communities served by nonprofits and philanthropy are treated.

  • Biased policing algorithms cause harm that nonprofits need to understand, advocate against, deal with, and mitigate.
  • AI-driven educational programs shift the nature of learning environments and outcomes in ways that nonprofit after-school programs need to understand and (at worst) remediate, (at best) improve upon.
  • The use of AI-driven decision making to provide public benefits leaves people without clear paths of recourse to receive programs for which they qualify (read Virginia Eubanks’s Automating Inequality).
  • Algorithmically optimized job placement practices mean that job training programs and economic development efforts need to understand how online applications are screened, as much as they help people actually add skills to their applications.

This essay on “The Automated Administrative State” is worth a read.

The real question for nonprofits and foundations is not how they will use AI, but how AI is being used within the domains in which they work — and how they must respond.