Democracy, Institutional Design, and Technologies of Expertise
“Algorithms used to be what made computers run faster. Now they create social and economic systems.” Ashish Goel, Stanford University
Digital platforms to solicit input, sort ideas, rank opinions, find patterns, and “matchmake” skills are proliferating. When applied to democratic institutions, these tools are what Beth Noveck of NYU’s Gov Lab calls civic “technologies of expertise” in her new book Smart Citizens, Smarter State.
Government agencies have been slow to adopt these tools, although citizens, civil society associations, and commercial enterprises routinely use them for a range of purposes. Wikipedia is the best-known example, an enterprise so improbably successful at tapping into and aggregating the expertise of thousands of individuals who volunteer their time and talent that scholars say of it, “Works in practice, though it will never work in theory.”
When deployed by public institutions, crowdsourcing technologies can have direct bearing on public policy and decision-making. Examples include White House-hosted online petition sites, expert-matching software used at Los Alamos National Laboratory, and digital platforms that enable participatory budgeting experiments at large scale. Experimentation is even more widespread at the state and municipal level, facilitated by the open data movement, government innovation efforts led by groups such as Code for America, and university- or city-based communities of civic technologists.
Modern democracy — beyond face-to-face direct participation — possesses a dizzying array of institutional arrangements. Think of institutions that structure voting, tax paying, civil service, open meetings, citizen service on commissions, juries, lobbying, petition signing, voter drives, political campaigns, and more. Developing these democratic institutions poses a design challenge: what norms should inform the design of these institutions, and by what outcomes should we measure their success?
As President Obama takes to SXSW to encourage techies to join government, we need to ask not only how technology can improve democracy, but how democracy must shape civic technology.
The advent of digital technologies permits us to re-think the design of democratic institutions so as to bring about better outcomes. Many of the actions that take place within democratic institutions — choice selection, funding, participation, contributions of expertise, expression of opinion — can now be facilitated at greater scale, with broader reach, and possibly at lower cost using digital technologies. Crowdsourcing ideas, ranking opinions, selecting among numerous options, raising funds for joint action — these are “bread and butter” actions for countless digital platforms, and there’s no reason to think they could not be usefully deployed within democratic life as well as in civil society associations and ordinary businesses.
What happens when the mechanisms of democracy as we’ve established them over generations meet these new technological capabilities? How might they complement each other? What perils are there to be avoided?
A variety of technologies — digital platforms for gathering ideas, opinions, and expertise — can be clustered under the category of crowdsourcing. Core to the appeal of these technologies in democratic decision-making is not simply their efficiencies, but also their epistemic value: their potential to increase expertise, improve decision making, and thereby shift the outcomes of public policy for the better. Civic crowdsourcing aims first and foremost at making the outcomes of democratic institutions better. It also aims, according to some advocates, at increasing the fairness of democratic processes, independent of the prospect of improved outcomes generated by those processes. It is essential to distinguish these two aspirations for civic crowdsourcing, the epistemic and the procedural. Deploying these technologies in ways that attract a more diverse set of opinions or ideas would be in line with an epistemic argument. Structuring their deployment so that participation itself is more inclusive, equitable, or representative of the population as a whole would align with a procedural argument. Either approach can be said to contribute to the legitimacy of the design choices and the ultimate decisions.
Noveck’s work, informed by her research and experience inside the U.S. government (she served as Deputy Chief Technology Officer in the first Obama administration), begins with an exploration of how civic technologies are shifting our understanding of expertise and the nature of credentialing writ large. What’s distinctive about her approach to civic crowdsourcing is that she does not stress the wisdom of crowds. It is by now a familiar observation that asking a large group of people to guess the number of jellybeans in a jar improves the likelihood of an accurate guess. Noveck stresses instead that because individual citizens have distinctive and particular forms of expertise, civic technologies can be used to curate smarter, more relevant groups rather than simply putting out open calls. Wisdom rests in the heads of different citizens, not only in the crowd.
When deploying digital technologies to identify and reach out to the expertise of citizens, individually or collectively, it’s important to focus our attention on the underlying values that we “build into” the institutional mechanisms. Or put differently, we need to think about the norms that help guide the design of technologies of expertise when used in democratic institutions. If civic crowdsourcing holds the promise to improve the outcomes of public policy, enhance the participatory process, and increase democratic legitimacy, we need to ask if all three of these aspirations — call them democratic knowledge, democratic process, and democratic legitimacy — can be simultaneously realized or whether we must face tradeoffs in designing civic crowdsourcing technologies that emphasize one aspiration over others.
Notice an important contrast here. Outside of the civic realm, the technological platforms that have achieved broad acceptance are designed for commercial purposes and emphasize scale and efficiency. While many of these platforms have created new avenues for more inclusive participation or enhancements of expertise, these are secondary benefits (at best). In a commercial enterprise, there is no need to design crowdsourcing for democratic participation by increasing equity, inclusion, or participation. It’s enough just to improve efficiency. For example, GitHub is designed to serve as a resource to make software coding more efficient. It has developed into a mechanism for demonstrating coding expertise and also serves as a macro-lens into the state of open source software — two developments that hint at the knowledge-creating possibilities of these tools. Recent research on gender differences in the acceptance of “successful” code contributions, which shows the site reflects social bias, does not impugn the efficiency gains it produces.
Noveck’s book is rich with examples of attempts to “custom build” knowledge sharing or expertise sourcing systems within existing institutions, many of which never achieve liftoff, let alone set a new standard of practice as GitHub has done in the world of open code. Why? There are a host of reasons, ranging from misaligned incentives to organizational cultures that make initial adoption and rapid iteration very difficult. Many of the commercially successful platforms grew over time, starting with core adopters and expanding their services (while also constantly adjusting their interfaces and behind-the-scenes algorithms). Internal efforts at expertise matching systems within large bureaucracies, on the other hand, have tended to be designed from the top down with a hope of building them “once and for all.” When adoption wasn’t immediate, the response was to shut down the experiment — a far cry from the commercially proven approach of focusing on a core group, steadily improving the services, and expanding reach by ongoing experimentation with new features.
We can learn a great deal from experiments with crowdsourcing within government agencies. Noveck shows, for example, that pre-qualifying experts (through a range of credentialing options) and then seeking input from targeted audiences works better than open calls.
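Noveck's observation about pre-qualifying experts can be made concrete with a small sketch. The credential check, keyword-overlap scoring, and all names below are hypothetical illustrations, not any agency's actual matching system; real expert-matching software would use far richer profiles.

```python
# Illustrative sketch: pre-qualify experts by credential, then target a
# request for input by topic relevance. All names, credentials, and the
# scoring rule are hypothetical.

def qualified(expert, required_credentials):
    """An expert is pre-qualified if they hold at least one required credential."""
    return bool(set(expert["credentials"]) & set(required_credentials))

def match_score(expert, topic_keywords):
    """Score relevance as the number of overlapping topic keywords."""
    return len(set(expert["topics"]) & set(topic_keywords))

def target_experts(experts, required_credentials, topic_keywords, top_n=3):
    """Return up to top_n pre-qualified experts, ranked by topic relevance."""
    pool = [e for e in experts if qualified(e, required_credentials)]
    pool.sort(key=lambda e: match_score(e, topic_keywords), reverse=True)
    return [e["name"] for e in pool[:top_n]]

experts = [
    {"name": "A", "credentials": ["PE"], "topics": ["traffic", "safety"]},
    {"name": "B", "credentials": [], "topics": ["traffic", "budget"]},
    {"name": "C", "credentials": ["PhD"], "topics": ["budget"]},
]
print(target_experts(experts, ["PE", "PhD"], ["traffic", "budget"]))
```

The point of the sketch is the two-stage structure: an open call would score everyone, while pre-qualification first narrows the pool to credentialed participants and only then ranks by relevance.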
But experimentation within existing institutions — improving governance “the day after the election,” as Noveck explains — is not the only opportunity.
Civic crowdsourcing should also experiment with participatory input on agenda setting, budgetary allocation, and the generation of policy ideas, not just the improved implementation of existing policy.
Consider recent work by an inspired team of collaborators across disciplinary lines at Stanford and Yale: Ashish Goel, David Lee, Tanja Aitamurto, and Hélène Landemore. Efforts to engage residents of northern Finland in informing new off-road traffic regulations provided a test case for learning more about both the social and technological implications of using technologies of expertise in democratic decision making. The team examined opportunities for using crowdsourcing techniques to inform policy options from two radically different methodological perspectives. The first is that of computational complexity theory, which focuses on the outer bounds of algorithmic choice making. How many variables can be considered, what shortcuts can be developed, and what tradeoffs must be accepted when applying software to categorize and rank radically different options? The findings have implications for how these crowd-based technologies may be used to augment “analog” decision-making capacity. They also draw attention to the ways the inner workings of these tools — their mathematical and computational machinery — should be examined for the purposes of enhancing democratic decision making, not merely assumed. That advanced technical skills are necessary to do so is not in itself an argument for or against their use, but it does point to the need for cross-disciplinary expertise. The lesson is unsurprising even if often forgotten: civic technologies that aim to improve democratic knowledge, processes, and legitimacy can be complex to implement; that complexity can be a barrier to widespread adoption; and, when this happens, it can compromise their fairness.
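To see what "categorize and rank radically different options" involves at its simplest, here is a sketch of one elementary social choice rule, the Borda count, applied to crowd-submitted rankings. The Stanford/Yale work studies far more efficient elicitation methods (for example, from partial comparisons rather than full rankings); the policy options below are invented for illustration.

```python
# Illustrative sketch: aggregating crowd-submitted full rankings with the
# Borda count, a classic social choice rule. The options listed are
# hypothetical, not drawn from the Finnish experiment.

from collections import defaultdict

def borda(rankings):
    """An option ranked in position i of an n-item ranking earns n-1-i points.
    Returns options sorted by total score (ties broken alphabetically)."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position
    return sorted(scores, key=lambda o: (-scores[o], o))

crowd = [
    ["speed limits", "signage", "enforcement"],
    ["signage", "speed limits", "enforcement"],
    ["speed limits", "enforcement", "signage"],
]
print(borda(crowd))  # "speed limits" ranks first (5 points vs. 3 and 1)
```

Even this toy rule hints at the complexity questions the researchers raise: eliciting full rankings from every participant is expensive, so the interesting algorithmic work lies in how few comparisons one can ask for while still recovering a reliable collective ranking.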
The authors also considered the same decision making experiment from the vantage point of epistemic democracy, or improved policy outcomes. How did residents participating in the program offer, select, and rank numerous policy choices? How were these choices different from or similar to those that might have been generated by more traditional processes? How were the final selections made from the choices provided? While it’s easiest for researchers to experiment with the tools of crowdsourcing at only one point in the process between problem and solution, it turns out that it’s the whole process that matters. Increasing input and allowing for a greater range of choices in the early stages of policymaking are inadequate steps to real change. They are neither powerful enough to overcome the inertia of existing institutions nor democratic enough to supersede existing rules about citizen engagement and public accountability.
Here we see the tensions between empirical analysis and real-world decision-making. In democratic societies, elected officials or civil service employees in public agencies may seek to bypass civic crowdsourcing technologies. It’s possible to crowdsource ideas and rank them, but elected officials may still require staff to research the ideas that weren’t “upvoted by the crowd.” We can introduce databases and expertise-finding systems into big bureaucracies over and over again, and even get better at slowly building buy-in and changing incentives so people use them. But unless the institutional incentives — not just the technological ones — also change, we’ll continue to face situations where the potential gains of a new technology are lost because of inertia, lagging institutional incentives, or procedural barriers.
We face a bi-directional opportunity and challenge. Civic crowdsourcing offers us the opportunity to think of both democracy and technology platforms as “designed systems,” and to approach them with an eye toward mutual improvement. Simply applying technologies optimized for efficiency to democratic systems will not suffice. We need to consider how these technologies of expertise can improve democratic institutions and the policy outputs that emerge from them and are administered by them. We need to consider what requisite changes to democratic institutions will facilitate their adoption. Though there has been little longitudinal experimentation, Noveck’s work abounds with smaller, one-off efforts. In the context of the now established open data and civic tech movements, there is a body of knowledge and communities of practitioners and scholars with the capacity and incentive to build on what has already been tried.
Finally, and most importantly, we also need to consider what changes to the technologies are required to achieve the purposes of democratic societies: democratic knowledge, democratic process, and democratic legitimacy. Broad participation, equal opportunity, and inclusive design are a start. But if the epistemic benefits of democracy — its capacity to generate better policies because it sources and organizes expertise very broadly — are to be realized, then simply repurposing existing commercial platforms won’t suffice. Additionally, if the measure of a successful democratic experiment is not just its inclusive degree of participation, but its ability to produce and implement better policies, then the design and evaluative demands fall on both the tech platforms and the institutional practices.
It’s not just a matter of deploying new technologies and trying to make democratic institutions use them. It’s also a matter of designing technology that facilitates inclusivity and equality and allows for open scrutiny. Democracy demands processes of scrutiny and accountability — usually thought of as published decision-making criteria, public access to meetings and materials, and due process — that don’t come as part of off-the-shelf crowdsourcing platforms. On these fronts we need to redesign the technologies to meet the standards by which we rightly expect democracies to make decisions. Permitting algorithms into public agencies requires visibility into the algorithm itself; there can be no algorithmic governance without processes to democratically govern algorithms.
As Henry Farrell and Cosma Shalizi write in their important essay “Pursuing Cognitive Democracy,” new technologies do not automatically yield better democracy. Instead, democratic values must shape the deployment of civic technologies of expertise if we are to reap the potential gains of these new media.
We thank the presenters at a February 2016 workshop at Stanford University, Beth Simone Noveck, Hélène Landemore, Henry Farrell, Ashish Goel, David Lee, and Tanja Aitamurto, for discussion.
Annalee Newitz, “Data Analysis of GitHub Contributions Reveals Unexpected Gender Bias,” Ars Technica, February 11, 2016, http://arstechnica.com/information-technology/2016/02/data-analysis-of-github-contributions-reveals-unexpected-gender-bias/
Tanja Aitamurto and Hélène Landemore, “Crowdsourced Deliberations: The Case of the Law on Off-Road Traffic in Finland,” Policy & Internet, 2016; David T. Lee, Ashish Goel, Tanja Aitamurto, and Hélène Landemore, “Crowdsourcing for Participatory Democracies: Efficient Elicitation of Social Choice Functions,” Collective Intelligence Conference, http://collective.mech.northwestern.edu/?page_id=217
Ruth Simon, “Crowdfunding Sites like GoFundMe and YouCaring Raise Funds — And Concerns,” The Wall Street Journal, February 29, 2016
Henry Farrell and Cosma Shalizi, “Pursuing Cognitive Democracy,” in From Voice to Influence: Understanding Citizenship in a Digital Age, Danielle Allen and Jennifer Light, eds. (University of Chicago Press, 2015): 211–231.