Navigating Disinformation

This module focuses on how nonprofits can prepare for and counter dis- and misinformation that affects their organizations and the communities they serve. We will also examine the racialized implications of disinformation.

Disinformation: Policy Overview

This document provides an overview of digital policy issues related to disinformation. Skim this document before the workshop to get a better sense of the policy topics we will be discussing. For additional context and information, we will be sharing two resource lists at the end of the workshop.

US Big Tech and Policy

For the past twenty years, Section 230 has offered internet companies broad legal immunity for the content that appears on their platforms: platforms are not liable for content their users post, and they may moderate that content as they see fit. Against this permissive backdrop, it is perhaps unsurprising that between 2000 and 2018 the US Department of Justice brought only one anti-monopoly action, even as antitrust activism rose dramatically, particularly after the 2016 US presidential election.

In May 2020, however, the tide began to turn when President Trump signed an executive order opening the door for the US government to assume oversight of political speech published on the internet. The directive sought to narrow when Section 230’s liability protections apply and to channel complaints about political bias to the Federal Trade Commission. The order also created a council, in cooperation with state attorneys general, to probe allegations of censorship based on political views. Finally, it tasked federal agencies with reviewing their spending on social media advertising. The order’s stated aim was to prevent Facebook, Google, and Twitter from allegedly censoring conservative opinions with impunity.

In July 2020, the CEOs of tech giants Google, Apple, Facebook, and Amazon were called to testify before the US Congress about their companies’ monopolistic behavior. Interestingly, while Democratic representatives questioned the four CEOs about breaches of antitrust law, Republican representatives questioned the companies’ ties to the Chinese military. Rep. Ken Buck, in particular, accused Google of declining to work with the US Department of Defense while collaborating with the Chinese military. Though Buck’s accusations were false and, indeed, run directly contrary to Silicon Valley’s growing affinity with the US government, the widespread anxiety voiced throughout the hearing was that China’s tech industry may spell the end of Silicon Valley’s dominance.

On October 6, 2020, following a 16-month investigation into the four Big Tech companies, the House Judiciary subcommittee on antitrust released a 449-page report detailing how and why each holds monopoly power. More specifically, the report found that Apple has monopoly power over software distribution on the iPhone, Amazon bullies its third-party sellers, Facebook uses its power to acquire or kill potential competitors, and Google has complete dominance over online search. The report’s chief suggestion is for Congress to take up changes to antitrust laws that could result in the dismantling of these businesses.

CNBC provides a clear summary of the key recommendations:

  1. Forcing tech companies to be broken up or imposing business structures that make different lines of business functionally separate from the parent company. Subcommittee Chairman David Cicilline has referred to this as a type of “Glass-Steagall” law for the internet, a reference to the 1930s law that separated commercial from investment banking.
  2. Instructing antitrust agencies to presume mergers by dominant platforms to be anticompetitive, shifting the burden onto the merging parties to prove their deal would not harm competition, rather than making the enforcers prove that it would.
  3. Preventing dominant platforms from prioritizing their own services.
  4. Requiring dominant firms to make their services compatible with those of competitors and to allow users to transfer their data (interoperability and portability).
  5. Overriding “problematic precedents” in antitrust case law.
  6. Requiring the Federal Trade Commission to regularly collect data on concentration.
  7. Increasing budgets for the FTC and the Department of Justice Antitrust Division.
  8. Strengthening private enforcement by eliminating forced arbitration clauses and limits on class-action lawsuits.

Though the subcommittee aimed to kick-start a bipartisan effort to update US antitrust regulations and rein in Big Tech’s power, discussion of the report’s findings quickly devolved into a dispute between Democrats and Republicans over the best next steps.

According to CNBC, Republican objections mainly center on the idea that social media platforms, particularly Facebook and Google, discriminate against conservative viewpoints. There is little to no evidence that this is the case; Facebook’s own data shows that posts by conservatives are consistently among the most popular content on its platform. Moreover, these complaints have little to do with American antitrust law. Nonetheless, Republican representatives criticized the Democratic majority for not considering this supposed discrimination in their report.

Republican House representatives also disagreed with the recommendation for sweeping changes to antitrust laws that could lead to the break-up of Big Tech companies in the US.

Both sides of the aisle do agree, however, that Big Tech wields too much power over the US economy and the global market more broadly. It is likely that more funding will be provided for agencies like the Department of Justice and the Federal Trade Commission so that they can scrutinize tech mergers and otherwise police potential anticompetitive practices. Notably, both agencies are already conducting their own investigations into Big Tech: the DOJ sued Google for violating antitrust laws in October 2020 and sued Facebook for discriminating against US workers in December 2020, while the FTC sued Facebook for illegal monopolization, also in December 2020.

Most recently, in February 2021, the Senate Judiciary Subcommittee on Antitrust, Competition Policy, and Consumer Rights introduced legislation to bolster American antitrust laws. The Competition and Antitrust Law Enforcement Reform Act proposes to:

  1. increase enforcement agency budgets
  2. make it easier to find a company guilty of monopoly behavior
    1. update the legal standard for permissible mergers
    2. shift the burden to the merging parties to prove their merger will not violate the law
  3. ban anti-competitive conduct
  4. increase penalties for misconduct
  5. create a new division at the Federal Trade Commission to research and monitor merger activity and propose regulatory remedies.

EU Regulations and Big Tech

At present, there is no EU-wide legislation that deals with political advertisements, meaning that each member state is responsible for devising and implementing its own laws to tackle disinformation and misinformation. 

Among the first (and most controversial) laws passed by a member state was Germany’s Network Enforcement Act, or NetzDG, adopted by the German parliament on June 30, 2017. NetzDG was introduced to prevent the dissemination of unlawful content as defined under section 1(3) of the act. Though the law did not introduce any new criminal offences, it did force digital platforms to take responsibility for unlawful content published on their sites. It obliges social networks with 2 million or more users to set up user-friendly complaint mechanisms and to remove “manifestly unlawful content” within 24 hours. Systematic failure to delete illegal content can result in fines of up to EUR 50 million. The legislation also requires covered companies to provide the German government with semi-annual transparency reports. Unsurprisingly, the law has been repeatedly criticized by civil society for restricting dissent and freedom of speech. NetzDG was amended in 2020; the main amendment obliges social media companies to proactively report serious cases of illegal speech to law enforcement.

At the EU level, the European Commission introduced the Code of Practice on Disinformation in 2018, the first framework, albeit a self-regulatory one, that the international tech industry agreed upon to fight disinformation and misinformation online. The Code was signed by leading social network platforms, advertisers, and other members of the advertising industry in October 2018; notable signatories include Facebook, Twitter, and Google. Microsoft joined in May 2019, and TikTok followed in June 2020.

Each signatory created a roadmap for how it would take action in five key areas:

  1. disrupting the advertising revenues of accounts and websites that spread disinformation;
  2. making political and issue-based advertising more transparent;
  3. addressing the issue of fake accounts and online bots;
  4. empowering consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content;
  5. empowering the research community to monitor online disinformation through privacy-compliant access to the platforms’ data.

After an initial 12-month period, the European Commission published the platforms’ self-assessments alongside its own annual assessment in September 2019. The report summarizes the actions undertaken to improve the scrutiny of ad placements, ensure the transparency of political and issue-based advertising, and tackle fake accounts and the malicious use of bots.

In 2020, signatories of the Code continued to produce regular self-assessment reports as part of the “Fighting COVID-19 disinformation monitoring programme” implemented by the European Commission. These efforts to address disinformation around COVID-19 aim to:

  • promote authoritative information sources through various tools;
  • work to limit the appearance or reduce the prominence of content containing false or misleading information;
  • increase efforts to limit manipulative behavior on their services;
  • enhance collaborations with fact-checkers and researchers, and increase the visibility of content that is fact-checked;
  • provide grants and free ad space to governmental and international organizations to promote campaigns and information on the pandemic;
  • fund media literacy actions and actions to sustain good journalism; and
  • take actions to limit the flow of advertising linked to COVID-19 disinformation.

The additional measures that specific social media and online signatories have taken to build resilience against the spread of COVID-19 disinformation and misinformation include the following:

  • Google reported that, as countries approve vaccines and enact their vaccination plans, Search will surface lists of authorized vaccines along with information panels, and YouTube will add authoritative vaccination information from local health authorities to its COVID-19 panels.
  • Twitter reported that its #ThereIsHelp prompt can be used by Member State authorities to include a specific link regarding COVID-19 vaccines.
  • Microsoft reported that COVID-19 vaccine-related queries on Bing surface content developed by the EU on COVID-19 vaccines.
  • TikTok reported that it has expanded its Project Halo campaign to include doctors and researchers from the EU, including from France, Italy and Spain, who explain their work regarding the development of vaccines, providing content in local languages.
  • While this was not part of its November report, Facebook announced in December that it would remove false claims about COVID-19 vaccines that have been debunked by health experts.

The European Commission has recently launched the European Democracy Action Plan with the aim of overhauling the Code of Practice on Disinformation. The Action Plan will create a co-regulatory framework of obligations and accountability for online platforms. Its three main pillars are the following:

  1. Promote free and fair elections
  2. Strengthen media freedom and pluralism
  3. Counter disinformation

Aside from its Action Plan and Code, the European Commission has established the European Digital Media Observatory, intended to serve as an EU hub for fact-checkers, academics, and other relevant stakeholders. The Observatory has five main activities:

  1. Mapping fact-checking organizations in Europe and supporting them by fostering joint and cross-border activities and dedicated training modules.
  2. Mapping, supporting, and coordinating research activities on disinformation at the European level, including the creation and regular updating of a global repository of peer-reviewed scientific articles on disinformation.
  3. Building a public portal that provides media practitioners, teachers, and citizens with information and materials aimed at increasing awareness, building resilience to online disinformation, and supporting media literacy campaigns.
  4. Designing a framework to ensure secure and privacy-protected access to platforms’ data for academic researchers working to better understand disinformation.
  5. Supporting public authorities in monitoring the policies put in place by online platforms to limit the spread and impact of disinformation.

Disinformation: Background Information

This document provides an overview of disinformation and key definitions, as well as some steps your nonprofit can take to get active in the fight against it. For additional context and information, we will be sharing two resource lists at the end of the workshop.

Worksheet

Public Policy Options: Disinformation

Close Reading Exercise

Before the start of the workshop, please read and compare the following two articles and think about answers to the questions below. We also sent out two optional readings that provide a general and a policy overview on disinformation.


Additional Resources

The Stanford Digital Civil Society Lab curated these resources for those interested in taking action on AI, digital systems, and data collection. They are intended to provide a range of opportunities for engagement. You can suggest additional resources by contacting us.

Take Action

  • Shorenstein Center on Media, Politics and Public Policy
    Led by Dr. Joan Donovan, the Shorenstein Center is a leader in internet and technology studies, examining online extremism, media manipulation, and disinformation campaigns.
  • COVID-19 Misinformation and Black Communities
    Brandi Collins-Dexter, a visiting fellow at the Shorenstein Center, writes about disinformation and coordinated attacks on Black technoculture. As a senior campaign director at Color Of Change (COC), her work involves interrogating the role of media, technology, and information integrity in improving or deteriorating community health and economic opportunities. 
  • Disinformation Action Lab at Data & Society (DAL)
    This research lab forges new approaches to address the complex dynamics underpinning the spread of propaganda and disinformation. 
  • The 101 of Disinformation Detection
    This starter kit provides the basic steps organizations should follow to begin tracking online disinformation, and includes helpful graphics and thoughtful explorations of the pros and cons of data collection. 
  • The COMPROP Navigator
    The Project on Computational Propaganda launched an interactive resource for civil society groups as they respond to a rise in disinformation. 
  • Disinformation Toolkit
    This toolkit designed by and for international NGOs and civil society helps organizations identify their risk, develop a strategy, and build resilience. 
  • Inspecting Algorithms in Social Media Platforms
    Ada Lovelace Institute’s joint briefing with Reset gives recommendations toward a practical way forward for regulatory inspection of algorithms. Primarily aimed at policymakers, the report could also be helpful for organizations thinking about methods, skills, and capacity for inspecting algorithmic systems. 
  • MediaJustice
    MediaJustice fights for racial, economic, and gender justice in a digital age. Their report on digital cultures is an invaluable resource to address misinformation and information access through a lens of racial justice. 
  • Berkeley Protocol
    A practical guide to the effective use of digital open source information in investigating violations of international criminal, human rights, and humanitarian law. The Human Rights Center at UC Berkeley works with technologists and students to improve the quality of information found on social media, including developing protocols for better identifying misinformation. Included are an online investigation plan template, a digital threat and risk assessment template, a digital landscape assessment template, a data collection form, and considerations for validating new tools.

Trainings and Resources from Other Organizations and Alliances 

  • First Draft
    First Draft provides trainings for journalists with the aim of protecting communities from harmful misinformation, and empowering society with the knowledge and tools needed to outsmart mis- and disinformation. 
  • School of Information Center for Social Media Responsibility
    This University of Michigan program offers online explainers and other resources.
  • Tow Center for Digital Journalism
    Part of the Columbia Journalism School, this program examines digital journalism’s cultural shifts and its relationship with the broader, constantly changing world of technology. 
  • Shorenstein Center on Media, Politics and Public Policy
    This Harvard Kennedy School research center is dedicated to exploring and illuminating the intersection of press, politics, and public policy. 
  • Alternative Regulatory Responses to Misinformation
    A Yale Law School/Wikimedia Initiative on Intermediaries and Information panel focusing on novel regulatory responses to misinformation. Moderator Michael Karanicolas is joined by panelists Akriti Gaur, Ivar Hartmann, Barbora Bukovská, Lisa H. Macpherson, Jonathan Obar, Sandra Cortesi, and Stephen LePorte.

Learn More

  • How Civil Society Can Combat Misinformation and Hate Speech Without Making It Worse (Article: 20 min)
    This article explores how civil society can combat misinformation and hate speech without making it worse. 
  • The Media Manipulation Casebook (Glossary)
    Terms related to misinformation, disinformation, and media manipulation. 
  • Disinformation creep: ADOS and the strategic weaponization of breaking news (Article: 24 min)
    An in-depth analysis of how COVID-19 misinformation ties to election misinformation.
  • Racial Equity Tools (Website)
    Information on how to combat disinformation targeting Black communities. 
  • Detect Political Fakes (Website)
    Exercises designed to help spot deep fake videos. 
  • As Online Communities Mobilize Offline, Misinformation Manifests a Physical Threat (Article: 6 min)
    This article explores the link between misinformation and the right to freedom of association and assembly. 
  • Guide to Anti-Misinformation Actions Around the World (Website)
    The Poynter guide to civil society, journalism, and other groups debunking misinformation. 
  • How Online Misinformation Murdered the Truth (Podcast: 33 min)
    This MIT Technology Review “Deep Tech” podcast explores how attempts to stem the spread of harmful, false material can create no-win scenarios for the companies that run the Internet’s largest platforms. 
  • Want A Safer Internet? Listen to Black Women (Podcast: 35 min)
    This “UNDISTRACTED” podcast explores the role of racial justice in holding social media companies accountable for a lack of safety online. 
  • How Nonprofits Can Identify, Expose and Stop the Spread of Mis/Disinformation (Webinar: 90 min)
    This TechSoup webinar is designed to help professionals have conversations with their organizations about what qualifies as mis/disinformation, how it spreads, and what they can do to expose it. 
  • Civil Society: A Key Player in the Global Fight Against Misinformation (Article: 9 min)
    This article explores how reports of Russian interference and accusations of biased news coverage following the 2016 US presidential election gave rise to renewed interest in how information influences politics.
  • How Communicators Can Help Fight Disinformation (Article: 3 min)
    This PR News article explains how a wave of information pollution is affecting the world.
  • Harmony Square “Inoculates” Against Political Misinformation (Online Game)
    This free-to-play online game teaches players how political misinformation is produced and spread.
  • Protecting Democracy in an Age of Disinformation: Lessons from Taiwan (2021)
    This CSIS report explores how disinformation found a new foothold in the current era, with a discussion of the tactics used.