
Sorting Algorithms: The Role of Civil Society

Opinion

CSOs can help to ensure fairness, accountability, and transparency when it comes to AI. But we can't assume this will happen naturally.

This piece was published as “Sorting Algorithms: The Role of Civil Society in Ensuring AI Is Fair, Accountable and Transparent” by CAF in September 2018. Some words have been changed to reflect American English usage.

I have recently found myself repeatedly making the case for the importance of involving civil society in the growing debate about the development of AI (for instance, in "Where are charities in the great AI debate?", "It's more vital than ever that charities get to grips with AI," and "Our voices must be heard in the AI debate"). Some might suggest I am in danger of being repetitive, although the more charitable among you might allow that I am merely being consistent. (For now, at least.)

Without revisiting that argument in full here, the key points are that AI has the potential to affect civil society in three broad ways: firstly, by offering new ways of delivering social and environmental missions; secondly, through its wider impact on the operating environment for organizations; and finally, because deliberate misuses of the technology and unintended consequences of its honest application are going to create new problems for the people and communities that CSOs serve.

This means that civil society needs to get to grips with the issues now: both so that CSOs can harness AI’s potential for delivering impact in new and innovative ways, and so that they can play a part in the debate about shaping the development of the technology in order to minimize the potential negative consequences. Conversely, governments and the tech industry need to recognize that engaging with civil society when it comes to the AI debate is not merely a nice added extra, but an absolute necessity. If they are serious about the need for AI to be ethical (which the UK, in particular, has made a central pillar of its ambitions) then it is vital to involve CSOs, because they represent the very people and communities that are likely to be hit earliest and hardest by any negative consequences of the technology. It is important to recognize that in order for many CSOs to be able to bring their valuable perspective to the table, they may well require additional support. I would argue that it is incumbent on government and the tech industry to provide this support.

And just to make it clear that it isn’t just me saying this, here’s a good quote I came across recently in a new ebook published by Microsoft, making the case for the importance of engaging a wide range of voices in debates about where technology is going:

As technology evolves so quickly, those of us who create AI, cloud and other innovations will know more than anyone else how these technologies work. But that doesn’t necessarily mean that we will know how best to address the role they should play in society. This requires that people in government, academia, business, civil society, and other interested stakeholders come together to help shape this future. And increasingly we need to do this not just in a single community or country, but on a global basis. Each of us has a responsibility to participate—and an important role to play.

What Does This Mean in Practice?

Having re-mounted my hobby horse briefly, I want to move away from the case for engaging civil society in the debate about AI and instead explore another aspect of this wider issue: namely, what practical role CSOs can play in the design, implementation, and oversight of specific AI systems. This is just as important, because even if CSOs do end up playing a vital role in framing the ethical debate about the overall development of AI, there will still be an ongoing need to put into practice whatever mechanisms are identified as necessary to ameliorate any negative unintended consequences. Furthermore, these mechanisms are unlikely to be perfect, so there will still be a need to address problems where they do occur.

What, then, can CSOs do in practical terms? It is useful to break this down using the concepts of Fairness, Accountability, and Transparency, which have become key framing devices in the academic debate about the ethics of machine learning (ML). (There is even a community of academics focused on 'FATML'; the resources on their website are well worth checking out for anyone interested in these issues. Or, if you want a more easily digestible overview, I heartily recommend this Medium post by Fiontann O'Donnell from BBC News Labs.) Fairness, Accountability, and Transparency are not the only criteria one could use, of course, but they are a useful starting point.

Fairness

Fairness is a concept that sits at the heart of the work of many CSOs, so the fact that it is also one of the key concerns about the implementation of AI should immediately suggest that civil society has something relevant to bring to the table. But using a word like "fair" raises an awful lot of questions, such as "fair to whom?" and "fair in what way?"

Luckily, academics have begun to dig into some of these issues and to parse the concept of fairness when it comes to ML systems (such as in this great paper by Skirpan and Gorelick (2017)). What this leads to is not a uniform notion of fairness, but a series of context-relevant sub-questions that can often be assessed in more practical ways.

For example, in the first instance, before we have even started building an ML system, we need to ask some fundamental questions, such as:

  • Is it fair to apply ML in this context at all?
  • Do the risks clearly outweigh any potential gains?
  • Have the people and communities that this system will affect been given an opportunity to voice any concerns?
  • Are there demographic or cultural considerations that should give us cause for concern?
  • Does the system inherently require data that could compromise the privacy or rights of certain individuals or groups?

This is a point at which civil society clearly has a role to play. CSOs will be able to bring relevant insight into human rights and civil liberties issues, along with knowledge of marginalized groups and communities. But perhaps even more importantly, unlike technologists, CSOs have no particular prior interest in building ML systems.

This is crucial, because the question of whether it is appropriate to use ML at all must be on the table. The concern is that if ethical and moral questions about issues like fairness are left solely to technologists, it will be taken as a given, tacitly or explicitly, that the end result of any deliberation will be the creation of an ML system, and that the only question is what kind of system. That assumption misses one of the fundamental dimensions of fairness from the outset.

Once all parties are agreed that there is a justifiable case for building an ML system in a particular instance, we can then move on to considerations about how that is done fairly in practice. For instance:

  • Can we involve the people and communities the system will affect in the design process to minimize any potential negative impacts?
  • Can we control for statistical bias in the training data, so that the algorithms do not develop biases of their own? (A minimal version of this kind of check is sketched after this list.)
  • What mechanisms are there for ensuring that the operation of the system is actually understandable by the people it affects? (We will come back to this shortly when we consider transparency).
  • Do we need to involve external organizations in the oversight of the system on an ongoing basis, and how do we achieve this?
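On the second of those questions, here is a minimal sketch of what a basic check might look like. It simply compares positive-outcome rates across groups in a training set (a rough "demographic parity" check); the data and column names are hypothetical, and a large gap is a prompt for human review rather than proof of unfairness.

    # A rough sketch (hypothetical column names and data): compare positive-outcome
    # rates across groups in a training set as a basic "demographic parity" check.
    import pandas as pd

    def outcome_rates_by_group(df, group_col, label_col):
        """Share of positive labels within each group of the training data."""
        return df.groupby(group_col)[label_col].mean()

    # Made-up example: two groups and a binary label.
    training_data = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B", "B"],
        "label": [1, 1, 0, 1, 0, 0, 0],
    })
    rates = outcome_rates_by_group(training_data, "group", "label")
    print(rates)                               # positive-label rate per group
    print("gap:", rates.max() - rates.min())   # a large gap flags the data for review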

Finally, we need to consider how we test our system for fairness on an ongoing basis, because even if we diligently follow the first two steps outlined above, the chances are that there will still be unforeseen consequences over time.

For instance, the algorithmic processes might affect the target groups we have already identified, but in ways that we never guessed. Or the system may—through widespread adoption or interaction with other systems—come to affect other groups that we had not considered. That is why we need to keep assessing the impact of the ML systems we use, rather than seeing it as a one-shot deal.
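To give a flavor of what that ongoing assessment could involve, here is a minimal monitoring sketch. It assumes a decision log with a datetime "timestamp", a "group" column, and a binary "approved" decision (all hypothetical names), and tracks, month by month, the gap between the highest and lowest group approval rates so that a widening gap can be flagged for human review.

    # A minimal monitoring sketch, assuming a decision log with a datetime
    # "timestamp", a "group" column, and a binary "approved" decision
    # (all hypothetical names).
    import pandas as pd

    def monthly_approval_gap(log):
        """Per month, the gap between the highest and lowest group approval rates."""
        monthly = (log
                   .assign(month=log["timestamp"].dt.to_period("M"))
                   .groupby(["month", "group"])["approved"].mean()
                   .unstack("group"))
        return monthly.max(axis=1) - monthly.min(axis=1)

    # gaps = monthly_approval_gap(decision_log)
    # alert = gaps[gaps > 0.10]  # months where the gap exceeds an agreed threshold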

Accountability

We have already touched on some of the questions we might ask about oversight or governance mechanisms that might be necessary to ensure fairness. But fairness is not the only consideration when it comes to AI. We also need to address the wider question of how we make algorithmic systems accountable when things go wrong, or when people take issue with the decisions they make or the outcomes they produce.

There are obviously major questions of law, regulation, and governance here, which many governments, academics, and others around the world are grappling with. Solutions proposed so far range from tough new laws governing ML systems to certification ("kitemarking") schemes for algorithms, or new bodies that could defend user interests (like an AI-focused Which?).

There is a clear need for civil society to play a role here too. CSOs often represent the most marginalized people and communities within society, so they need to be in a position to speak out on their behalf when it comes to the impact of AI systems and to hold those responsible to account when necessary. Likewise, the application of ML systems in some contexts will have serious implications at a societal level for many of the issues that are core to the work of civil society, such as human rights, civil liberties, equality, and poverty. Hence it is vital that CSOs are able to identify those responsible for any deliberate or unintended negative consequences and hold them to account.

Some of this may require new policy and legislative frameworks, but these are often likely to be extensions of existing international law and governmental agreements, so it is crucial that CSOs with expertise in the current frameworks adapt what they know and lend their weight to any advocacy efforts that may be required. There are already some good examples of this happening: Human Rights Watch, for instance, has joined a campaign to ban the development of autonomous weapons, while Amnesty International and Access Now recently published their "Toronto Declaration," aimed at setting out principles to protect the rights to equality and non-discrimination in ML systems.

Transparency

The other thing that is often cited as a key component of ethical AI is transparency. At its most basic level, this simply means that people should be able to tell that an AI system is being used. This might sound obvious, but evidence shows that many people are still unaware of the role that algorithmic processes play in determining the content they are presented with on platforms like Netflix, Spotify, or YouTube, or that their Facebook and Twitter feeds are likewise shaped by highly complex machine learning algorithms that constantly adapt and evolve based on their behavior. One study of how aware users are of the way Facebook's news feed works (Eslami et al., 2015), for instance, found that well over half of participants (62.5%) did not know that their feed was curated by an algorithm, and that they were not happy when they were informed.

And not realizing that AI is involved in a given context may not simply be down to ignorance on the part of the user: there are worrying examples in which tech companies seem to suggest that their aim is to conceal this fact deliberately. At a conference earlier this year, for example, Google showcased Duplex, a new voice-assistant feature that can make simple phone calls (to reserve a table at a restaurant, say) on behalf of its owner.

This immediately drew a storm of criticism because the system was designed to hide the fact that it was a machine talking (probably not helped by the rapturous, whooping reception the demonstration received from the roomful of developers either). Google swiftly backtracked and said that future versions of the feature would disclose that an automated system is being used, but by that point the potential danger had been clearly highlighted.

The challenge of knowing when an AI system is being used is also likely to grow as our interactions with a broad range of technology come to rely on conversational interfaces (like Alexa or Siri) or virtual and augmented reality. These interfaces are likely to seem as though they are presenting us with an objective view of the available options, yet in reality they are underpinned by algorithms that filter our experience according to criteria that may well remain hidden. CSOs could play a valuable role simply by educating the communities they work with about the part that algorithms play in many of the platforms and interfaces they rely on.

But beyond simply being transparent about the fact that algorithms are being used, many also argue there needs to be transparency about the algorithms themselves. This links back to the earlier goals of fairness and accountability. The theory goes that by making ML systems transparent, the people they actually affect will be able to see how they work and why certain decisions have been taken; and will therefore be able to identify instances where they feel they have been treated unfairly and seek recourse accordingly.

The problem is that, in reality, transparency is likely to be far harder to achieve than it sounds. In part this is because many applications of ML take place in contexts that are considered commercially sensitive, so explicit efforts are made to keep the underlying workings of the system opaque. It may be that this challenge can be overcome through new legislation or regulation requiring openness in contexts where there is a demonstrable public interest, but not everyone would agree that this is desirable; and in any case it certainly isn't likely to happen without a significant amount of hard lobbying and advocacy by civil society organizations and others.

But even if platforms and technology firms are willing to make the inner workings of their systems totally open, it is not clear that this would actually result in transparency in any sense in which we would ordinarily intend it. Presented with vast reams of technical data on a machine learning system, most of us would be none the wiser in terms of understanding how a particular decision had been arrived at. And this might not simply be a deficiency in our own understanding: it is possible that no amount of additional technical expertise would actually help.

Machine learning systems are very good at identifying patterns in data, but they do so primarily through a vastly iterated process of trial and error that involves no deeper 'understanding' of why the pattern has emerged, and there may be no way of inferring any such understanding from the system. Furthermore, if we are talking about systems that incorporate unsupervised learning (see this previous blog for a more detailed discussion of what this means), the AI could arguably be seen as relying on modes of 'thinking' that are entirely non-human and that we may simply never be able to understand.

So should we just give up and go home? Well, no, not necessarily. Challenges such as these have led many to suggest that instead of straightforward transparency, what we should actually be demanding is something more akin to 'explainability' or 'interpretability': that is, that those who are affected by algorithmic processes can get access to some meaningful account of how decisions have been taken, enabling them to challenge a decision or to ensure that a similar one does not occur in future.

How would this work in practice, you might well ask? That is an area where a lot of interesting research is currently going on: for instance, the development of algorithms that are able to 'explain themselves,' or the use of counterfactual scenarios to give people a meaningful sense of how things might have been different had a given algorithmic decision not been taken. (For a useful overview of how humans might understand explanations of ML systems, see Narayanan et al. (2018).) It is not clear whether any one of these approaches will solve the problem on its own; most likely it will take a combination of elements of all of them. But from the point of view of civil society organizations that see the need to engage with these issues and are evaluating what role they could play, the key point is that practical approaches are being developed, so they do not need to start from scratch by any means.
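To make the counterfactual idea concrete, here is a minimal toy sketch. The model, the features, and the search step are all assumptions for illustration, not anyone's actual method: it simply nudges one feature of a rejected applicant until the model's decision flips, which yields a statement of the form "the decision would have been different if this value had been higher."

    # A toy counterfactual explanation: all data, features, and steps are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Tiny made-up training set: two numeric features and a binary "approved" label.
    X = np.array([[30, 20], [50, 10], [20, 25], [60, 5], [25, 30], [55, 8]], dtype=float)
    y = np.array([0, 1, 0, 1, 0, 1])
    model = LogisticRegression(max_iter=1000).fit(X, y)

    def counterfactual(x, feature, step=1.0, max_steps=100):
        """Nudge one feature until the model's decision flips; return the changed input."""
        original = model.predict([x])[0]
        x_cf = np.array(x, dtype=float)
        for _ in range(max_steps):
            x_cf[feature] += step
            if model.predict([x_cf])[0] != original:
                return x_cf
        return None  # no flip found within the search budget

    applicant = np.array([28.0, 22.0])          # currently rejected by the toy model
    flipped = counterfactual(applicant, feature=0)
    if flipped is not None:
        print(f"The decision would change if feature 0 rose from {applicant[0]} to {flipped[0]}")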

Where Next?

It seems pretty clear to me that CSOs could have a valuable role to play in all of the various stages of ensuring fairness, accountability, and transparency when it comes to the use of AI. However, we cannot simply assume that this will happen naturally. Instead, it is likely to take an awful lot of careful thought and intermediation to ensure that they are able to engage in the design, implementation, and evaluation of ML systems.

But if the ongoing pronouncements from governments and the tech industry about the desire to ensure that AI is “ethical” are genuine, then I would argue that engaging civil society is an absolute necessity, and that the effort or cost of enabling CSOs to play their part will be more than justified.