
Data Incidents: Design With Responsibility in Mind


Stuart Campo and Jos Berens of UNOCHA explain how humanitarian organizations can manage effectively when data goes wrong.

Digital Impact 4Q4 Podcast: Jos Berens and Stuart Campo on Humanitarian Data


00:00 CHRIS DELATORRE: This is Digital Impact 4Q4, I’m Chris Delatorre. Today, we’re joined by Stuart Campo and Jos Berens at the UNOCHA Centre for Humanitarian Data. In March, the Centre introduced guidelines to help UNOCHA staff better assess the sensitivity of the data they handle in different crisis contexts.

The Centre defines data incidents as events involving the management of data that have caused harm or have the potential to cause harm to crisis-affected populations, organizations, and other individuals or groups. This could be a physical breach of infrastructure, unauthorized disclosure of data, or the use of humanitarian data for non-humanitarian purposes.

00:51 CHRIS DELATORRE: Jos, can you walk us through an example of a data incident (real or imagined)? Sarah Telford, who heads the UNOCHA Centre for Humanitarian Data, has described the sector as reluctant to share. Why is it so difficult for humanitarian organizations to be open about these incidents when they happen?

01:10 JOS BERENS: Sure, Chris. So, a hypothetical example of a humanitarian data incident would be a response setting in which we would have an armed actor looking to identify the location of an ethnic minority in order to do harm. Now if that armed actor were to raid an office of a humanitarian organization and seize several hard drives on the premises, that could be the beginning of an incident. If those hard drives contain unencrypted beneficiary data of members of the ethnic minority that this armed group is looking to target, including their location, then you can see how that could become an issue. Even if unique beneficiary identifiers have been removed, that individual-level privacy protection would not prevent group-level targeting of the ethnic minority.

And so, if the armed actor were to target this minority — and this could lead to injury or even death — that would be an example of a humanitarian data incident. Now, it’s important to note that data incidents do not always need to be caused intentionally by an outside bad actor. They can also be caused by accident, often due to staff unawareness of risks associated with humanitarian data management.
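Jos's point about group-level risk can be illustrated with a short sketch. All record fields, values, and the key below are hypothetical, chosen only for illustration: even after direct identifiers are replaced with keyed hashes, shared attributes such as ethnicity and location still expose the group.

```python
import hashlib
import hmac

# Hypothetical beneficiary records; field names and values are illustrative.
records = [
    {"id": "BEN-0001", "ethnicity": "minority-A", "district": "north"},
    {"id": "BEN-0002", "ethnicity": "minority-A", "district": "north"},
    {"id": "BEN-0003", "ethnicity": "majority-B", "district": "south"},
]

SECRET_KEY = b"placeholder-key-store-and-rotate-securely"

def pseudonymize(record):
    """Replace the direct identifier with a keyed hash (HMAC-SHA256)."""
    token = hmac.new(SECRET_KEY, record["id"].encode(), hashlib.sha256).hexdigest()
    return {**record, "id": token[:12]}

pseudonymized = [pseudonymize(r) for r in records]

# Individual identities are now obscured, but group-level attributes remain:
# anyone who seizes this data can still single out everyone matching a profile.
at_risk = [
    r for r in pseudonymized
    if r["ethnicity"] == "minority-A" and r["district"] == "north"
]
print(len(at_risk))  # → 2: the minority group is still identifiable as a group
```

The sketch shows why removing or hashing identifiers mitigates individual privacy risk but does nothing against the group-level targeting Jos describes.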

“You can’t manage or prevent an incident if you don’t understand how it arises in the first place.”

Now, to the second part of your question regarding openness about incidents. A key reason why organizations are currently not very transparent about humanitarian data incidents is that there's no clear incentive for this transparency. On the other hand, there are definite disincentives, including possible exposure to similar incidents, damage to an organization's reputation, a chilling effect on data sharing, and other consequences.

03:20 CHRIS DELATORRE: Stuart, you’ve laid out four aspects of data incidents — a threat source, an event, vulnerability, and an adverse impact. How can this breakdown help organizations, not only in terms of managing crises but also in taking preventative measures with data?

03:39 STUART CAMPO: Thanks, Chris. We reviewed a number of different models for risk assessment and risk modeling, and ultimately landed on this approach which is borrowed from the National Institute of Standards and Technology of the US Department of Commerce, or NIST. We think this is relevant to the humanitarian sector because it’s a clear approach that’s straightforward to adapt.

Let’s think about the example Jos just shared, where these four aspects are really easy to identify. In the scenario that he described, the threat source is the armed group that’s looking to target the different members of this population. The threat event is the actual raid of the facility where the hard drives containing the data are located. The primary vulnerability that allows this to manifest into an incident is the fact that the hard drives contain data that’s unencrypted. We might also identify the absence of robust physical security measures and related protective measures for the data as vulnerabilities that have contributed to this incident.


“Humanitarians have not had a common understanding of what comprises a data incident, nor is there a minimum technical standard for how these incidents should be prevented and managed.”

Finally, the adverse impact is the actual misuse of the data, whether it’s the beneficiary data in its rawest form or the aggregated form cleaned of some of the identifiers Jos mentioned. The threat actor using this data to target, and potentially injure or kill, members of the population is the impact that ultimately defines this incident.

So, how does this breakdown help humanitarian organizations manage data incidents more effectively? As one of our collaborators on this note, Nathaniel Raymond of the Yale Jackson Institute, often says, “You can’t ‘do no harm’ if you don’t know the harm.” The same holds true for data incidents. You can’t manage or prevent an incident if you don’t understand how it arises in the first place. By unpacking the source of the threat, the threat event itself, the underlying vulnerabilities, and the adverse impacts that then characterize an incident, humanitarian organizations can then improve their understanding of how these incidents unfold and put measures in place to prevent them.
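The four-aspect breakdown Stuart describes can be captured in a minimal data-model sketch. The class and field names below are our own illustration, not an official NIST or OCHA schema, populated with the hypothetical raid scenario discussed earlier.

```python
from dataclasses import dataclass, field

@dataclass
class DataIncident:
    """Sketch of the NIST-derived four-aspect breakdown of a data incident."""
    threat_source: str                  # who or what originates the threat
    threat_event: str                   # the action that triggers the incident
    vulnerabilities: list = field(default_factory=list)  # weaknesses exploited
    adverse_impact: str = ""            # the harm that ultimately defines the incident

    def preventable_points(self):
        # Vulnerabilities are the aspect organizations can address most directly.
        return self.vulnerabilities

raid = DataIncident(
    threat_source="armed group targeting an ethnic minority",
    threat_event="raid on a humanitarian office; hard drives seized",
    vulnerabilities=["unencrypted hard drives", "weak physical security"],
    adverse_impact="misuse of beneficiary data to locate and harm the group",
)

print(raid.preventable_points())  # the internal weaknesses an organization can fix
```

Structuring incidents this way reflects Stuart's later point: the vulnerabilities list, unlike the threat source or event, is fully within the organization's control.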

In our experience, organizations need to spend more time thinking about the vulnerabilities and related predisposing conditions that allow the threat sources and events to cause adverse impacts, rather than focusing on the more nebulous specter of what different threats and events might look like. Most vulnerabilities are, sadly, weaknesses in internal systems, procedures, and practices that are largely avoidable if addressed directly. And so that’s a really good place for organizations to start.

06:05 CHRIS DELATORRE: Jos, building on Stuart’s point, preventing or responding to data incidents is clearly a need for the humanitarian community. How can organizations prioritize response and prevention techniques in their strategic planning and workflows?

06:19 JOS BERENS: So, we’ve identified three areas for investment to improve humanitarian data incident management. And these are areas for investment for both individual organizations as well as networks.

The first area of investment is to establish a common understanding of what a humanitarian data incident is, and that starts by understanding the causal chain that can lead to data incidents for specific offices and systems. The second component is identifying key threat actors and vulnerabilities, and understanding existing security controls and their effectiveness. And the third is mapping existing data incident management capacity and determining whether it is positioned appropriately.

Once clear definitions and processes are articulated, it’s important to invest in staff awareness and to support a culture of open dialogue about incidents in which proactive reporting and management of incidents is incentivized and not punished.

“The guidance note will offer examples of how to develop and use predictive analytics ethically in humanitarian action.”

The second area of investment is to follow the steps for data incident management. And so that starts by taking measures to put in place security controls to mitigate the risk of data incidents and sharing that best practice with partners. Second, it’s important to build on existing work in the humanitarian sector to fill governance gaps which can create vulnerabilities for the organization. Third, it’s important to engage with organizational partners to set up information channels around data incidents to share lessons learned. And the fourth component is to share known vulnerabilities in a controlled manner with trusted counterparts for cross-organizational learning.

The third area of investment is to support continuous learning. And that is done by supporting the development of improved data incident management: organizing training and drills for staff based on scenarios that are likely to occur in different operational settings. These exercises should occur regularly, and they may even involve multiple organizations training and drilling together. And finally in this last area of investment, it’s critical to document cases of data incidents so that we can learn over time.

08:59 CHRIS DELATORRE: Stuart, the next set of notes from your project will focus on data responsibility in public-private partnerships and predictive analytics and ethics. How will this build on what you’ve learned about the use of data in the humanitarian community? How can our listeners connect with the project?

09:15 STUART CAMPO: As with the first two notes in the series, including the one we’re discussing on this podcast, we’ve really been drawing on the Centre’s experience managing data in different response environments, as well as that of our collaborators and different partners in the sector. That includes so-called traditional humanitarian actors as well as non-traditional actors. Public-private partnerships have been a major topic of interest in recent months, and I think you’ve covered that extensively on some of the episodes you’ve produced this year.

Our goal with the note is to highlight what good practice looks like, rather than focusing on examples of poor practice or risky practice, because we think it’s more constructive to really look to the city on the hill and then help think about how to get there. We want to help colleagues within OCHA and other humanitarian organizations as well as our private sector counterparts design with responsibility in mind. Value-sensitive development of partnerships in the public and private sectors is really essential to the complex data landscape that we operate in today.

On the predictive analytics and ethics front, this is also a hot topic, not just in the humanitarian sector but really more broadly in discussions about the role of data in society today. In April of this year, the Centre brought together 30 people and 15 organizations to discuss the promise and pitfalls of predictive analytics in humanitarian response. One area of particular interest was establishing a peer review mechanism to strengthen ethical deliberation for improved transparency and accountability in this area. And we’ve since invested in developing a prototype of a model for peer review for predictive approaches in the sector, which includes ethical issues.

The guidance note will offer more detail on the proposed approach as well as concrete examples of how to develop and use predictive analytics ethically in humanitarian action.

In terms of how your listeners can engage with us and our work, they can access the guidance notes directly on our website. They can also subscribe to our mailing list for updates more specifically focused on our work around data responsibility. We’re always open to feedback, and in the existing notes as well as those to come, you’ll find a call to action with contact information so that people can reach out, share their own experience, and even propose topics for us to cover in the future.

This is a really great opportunity for us to make sure that we’re remaining demand-driven and keeping a pulse on how things are changing in the sector.

11:35 CHRIS DELATORRE: Stuart Campo, Data Policy Team Lead, and Jos Berens, Data Policy Officer at UNOCHA Centre for Humanitarian Data, thank you.

Digital Impact is a program of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society. Follow this and other episodes on Twitter @dgtlimpact with #4Q4Data.