

From Raw Data to Informed Decisions: What We Can Learn From the Financial Sector

The complexity of the “data” question is evident to the point of becoming a new cliché. Cliché or not, we have to deal with it. After any given round of wrestling with the question, we can often sense ways forward that are simple, powerful, and counterintuitive. That is small consolation en route to the next hard part: execution. In this entry for Markets For Good, Sunand Menon, founder of New Media Insight, covers that full range, from the complexity to the ways we can act.

Approximately US$300 billion in philanthropic giving is distributed annually to more than one million nonprofit organizations in the United States alone. Yet there is no clear way to gauge how well these resources are being used: the available information falls short in quantity, transparency, accessibility, quality, and utility.

If the right data is collected and the right performance analytics are created, they could help pinpoint the highest performers and lead to better decision-making and more efficient allocation of resources, which will ultimately provide greater value to those in need. Sounds good in theory. But how do we do this?
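To make this concrete, here is a minimal sketch, in Python, of what such performance analytics might look like. The organizations, figures, and metrics (a program expense ratio and a cost per beneficiary) are hypothetical illustrations only, not a proposed sector standard:

```python
# A minimal sketch of turning raw nonprofit financials into simple
# performance analytics. All names, figures, and metrics here are
# hypothetical illustrations, not a proposed sector standard.

from dataclasses import dataclass

@dataclass
class NonprofitReport:
    name: str
    program_expenses: float    # dollars spent directly on programs
    total_expenses: float      # all spending, including overhead
    beneficiaries_served: int  # self-reported outcome count

def program_expense_ratio(r: NonprofitReport) -> float:
    """Share of total spending that goes to programs rather than overhead."""
    return r.program_expenses / r.total_expenses

def cost_per_beneficiary(r: NonprofitReport) -> float:
    """Dollars spent per person served; crude, but comparable across orgs."""
    return r.total_expenses / r.beneficiaries_served

reports = [
    NonprofitReport("Org A", 820_000, 1_000_000, 4_100),
    NonprofitReport("Org B", 450_000, 600_000, 3_000),
]

for r in reports:
    print(f"{r.name}: program ratio {program_expense_ratio(r):.2f}, "
          f"cost per beneficiary ${cost_per_beneficiary(r):,.0f}")
```

Even two metrics this crude would let a funder compare organizations on a common footing; the hard part, as the rest of this piece argues, is collecting the underlying data consistently.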

Take the example of the financial services industry. Information companies serving financial services firms have been successful in collecting, analyzing, and disseminating data, analytics, and research that help investors make better investment decisions. Many of the systems and processes that are readily available and taken for granted in financial information services could also be implemented in the social sector.

Companies like Thomson Reuters, Bloomberg, Standard & Poor’s, Morningstar, and Lipper have thrived by collecting data (no matter how opaque or infrequently generated), developing performance criteria that help make sense of the data (no matter how objective or subjective), and distributing it in a manner that allows for better decision-making. They achieved success by providing value across the spectrum of content services – from “Data”, to “Information” (in the form of value-added analytics such as Classifications, Indices, and Ratings), to “Knowledge” (in the form of human insights, research, and best practices). And they maintained that success by investing in high-quality, scalable operations, and by building brands that signify independence, accuracy, and reliability.

Interestingly, they have all co-existed while developing different types of performance metrics – some more widely accepted than others. Standard & Poor’s and Thomson Reuters advocate different data classification schemas (“GICS” vs. “TRBC”). Lipper and Morningstar use different fund ratings criteria (“Lipper Leaders” vs. “Star Ratings”). There is rarely one universally agreed criterion.
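Mechanically, the “Data to Information” step behind such ratings can be quite simple once the data exists. The toy sketch below ranks funds by raw annual return and buckets them into one-to-five star tiers by quintile. This is not any vendor’s actual methodology (real ratings adjust for risk, fees, and time horizon); it only illustrates that a rating scheme is a methodological choice:

```python
# A toy illustration (not any vendor's actual methodology) of the
# "Data to Information" step: rank funds on a raw measure, then bucket
# the ranking into one-to-five star tiers by quintile.

def star_ratings(annual_returns: dict[str, float]) -> dict[str, int]:
    """Assign 1-5 stars by quintile of annual return (worst to best)."""
    ordered = sorted(annual_returns, key=annual_returns.get)
    n = len(ordered)
    # Integer arithmetic: the bottom fifth gets 1 star, the top fifth 5.
    return {fund: (i * 5) // n + 1 for i, fund in enumerate(ordered)}

print(star_ratings({
    "Fund A": 0.07, "Fund B": 0.02, "Fund C": 0.11,
    "Fund D": -0.01, "Fund E": 0.05,
}))
# {'Fund D': 1, 'Fund B': 2, 'Fund E': 3, 'Fund A': 4, 'Fund C': 5}
```

Swap in a different measure or different bucket boundaries and you get a different, equally defensible rating – which is exactly why several schemes can co-exist.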

As long as the metrics are simple and generally representative; as long as they are used by and helpful to customers; and as long as they are initially endorsed and socialized by a few key players in order to gain traction, they can succeed.

Ah, you say. But what about all the failings of the financial services industry, for example, the mortgage crisis? Why should we take lessons from an industry that played a key role in the economic crisis we are currently in? How can we avoid such disruptions in the social sector?

You are right. The above is likely not sufficient. In my view, there are at least two other very important considerations – transparency and aggregation.

Many failures seem to occur when there is a lack of transparency – take the example of the recent ruling by the Federal Court of Australia that S&P “deceived” and “misled” 12 local councils that bought triple-A rated constant proportion debt obligations (CPDOs). According to the Financial Times, the court said a “reasonably competent” rating agency could not have given a triple-A rating to the “grotesquely complicated” securities, and that S&P had published information that was either “false” or involved “negligent misrepresentations”. Even in this failure, there are lessons to be learned.

The takeaway for the nonprofit sector would be to create easily understandable, transparent methodologies that facilitate better apples-to-apples comparisons, and therefore more informed decision-making. And, of course, to avoid creating a rating entity that is paid by the very organizations it rates!

Aggregation also plays an important role in avoiding financial market disruptions, allowing us to gain multiple viewpoints before deciding. Let’s take the example of a mutual fund. Look at its Lipper rating. Look at its Morningstar rating. Read up about it. Speak to people. Compare its performance against a benchmark index. Form a view, and then make a decision. That’s “Information Complementarity” at work. And it generally works – as long as there is sufficient transparency, and there is the ability to review multiple, aggregated viewpoints.
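In code, that aggregation step might look like the hedged sketch below. The sources, thresholds, and labels are illustrative assumptions, not real vendor feeds; the point is only that several independent signals are combined before a view is formed:

```python
# A hypothetical sketch of "Information Complementarity": combine
# several independent signals about a fund before forming a view.
# The ratings, thresholds, and labels are illustrative assumptions.

def composite_view(lipper_stars: int, morningstar_stars: int,
                   fund_return: float, benchmark_return: float) -> str:
    """Blend two 1-5 ratings with a benchmark comparison into a view."""
    avg_rating = (lipper_stars + morningstar_stars) / 2
    beats_benchmark = fund_return > benchmark_return
    if avg_rating >= 4 and beats_benchmark:
        return "strong candidate: the signals agree"
    if avg_rating <= 2 and not beats_benchmark:
        return "weak candidate: the signals agree"
    return "mixed signals: gather more viewpoints before deciding"

print(composite_view(lipper_stars=4, morningstar_stars=5,
                     fund_return=0.08, benchmark_return=0.06))
```

When independent signals disagree, the right answer is not to average them away but to investigate – which is why transparency in each underlying methodology matters so much.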

So why hasn’t this approach been adopted in the nonprofit world, and what would it take to do so?

Firstly, there seems to be lukewarm interest and incentive among nonprofits and funders to build such metrics and infrastructure – unlike in industries such as asset management, where a firm’s success depends critically on demonstrating high performance and low costs. This is slowly changing: many large foundations are now signaling a desire for increased transparency, efficiency, and performance monitoring. That momentum needs to build further.

Secondly, there seems to be an overly strong emphasis on ensuring that as many stakeholders as possible come together and agree on a set of metrics and taxonomies before officially launching a solution. This can result in protracted discussions and produce a “lowest common denominator” set of metrics that may not be optimal. The nonprofit world could instead convene a group of key influencers (e.g. prominent foundations with a history of interest and research in this area, and subject matter experts with “gravitas”) to design these metrics, test them, gain feedback, tweak them, endorse them, and then create programs to drive adoption.

These are valuable lessons that could help make the social sector more performance-oriented and effective. The solutions do not have to be perfect; they need to be transparent and good enough that the end user can access the “raw data” and transform it into actionable, “informed decisions”.