Standardizing Resource Data APIs

Team

Challenge

More than 1,000 organizations across the country provide people in need with the service of information and referral to health, human, and social services. Community by community, sector by sector — and through a range of channels such as call centers, resource directories, and web apps — these referral providers help tens of millions of people answer the question: Where can I go to get help?

However, all of these channels have evolved within their own respective software systems, which tend to inhibit the flow of resource data through the many contexts in which people might try to find and use it. (This pattern shows up in service sectors around the world.) As a result:

  • People in need still have difficulty discovering and accessing services that can help them live better lives.
  • Service providers still have a hard time connecting clients with other services that can address complex needs.
  • Researchers and decision-makers find it hard to gauge the effectiveness of programs at serving community needs.
  • Innovators are stymied by a lack of access to data that could power helpful tools for any of the above.

Project Overview

Many attempts to build centralized ‘one stop shop’ solutions have come and gone — but no single solution can meet everyone’s needs. These well-intentioned efforts end up causing more fragmentation and confusion.

However, if the many different kinds of ‘community resource databases’ all recognized a common ‘language,’ then resource data could be published once and accessed simultaneously in many ways. So — in collaboration with Code for America, Google.org, the Alliance of Information and Referral Systems and others — Open Referral developed a data exchange format to establish interoperability among diverse information systems. This makes it possible to unleash this data from its silos. Now we must make it easy.

In Open Referral’s pilot projects, lead stakeholders — government champions, referral providers, community anchor institutions, and others — collaborate to establish open data infrastructure.

As various institutions adopt open standards and platforms, we anticipate the following outcomes:

  • More reliable information can be made available at a lower overall cost than in today’s siloed status quo;
  • Innovative tools and applications can proliferate, and become easier to re-deploy and adapt;
  • People can more easily find services, and service providers can more readily meet complex needs;
  • Researchers, policy-makers and funders can better understand community needs & resource gaps;
  • All of this can result in healthier people living in more resilient communities.

Conversations

Greg Bloom presented the Open Referral project at the Data on Purpose conference at Stanford University in February 2018.

Greg Bloom discussed the Open Referral project with the FabRiders’ Network in March 2019.

Outputs

  • The Human Services Data API suite: a core protocol (HSDA), along with complementary sets of “microservice” protocols that address specific user needs, plus open-source reference implementations and tools to assist deployment. These specifications are written in a machine-readable format known as OpenAPI. They can be freely used as ‘blueprints’ for the design and deployment of API-driven platforms, and also as tools to translate pre-existing APIs into this common format — enabling interoperability among compliant systems. The core HSDA protocol describes read/write functionality for Open Referral’s Human Services Data Specification (HSDS), including resources for organizations, their locations, and their services. Through the HSDA Full protocol, third parties can access the entire contents of a database with a single call.
  • The Open Referral Developer Portal serves as both fully functional documentation of the HSDA suite and redeployable code that can power a developer portal for any resource referral platform.
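
The resource shapes described above can be illustrated with a short script. This is a minimal sketch, not a verbatim HSDS schema: the record is hand-written for illustration, and its field names (`services`, `locations`, `status`, and so on) are assumptions based on the core entities named here rather than the specification’s exact vocabulary.

```python
import json

# An illustrative record shaped like the HSDS core entities described
# above: an organization with its services and locations. Field names
# are assumptions for the sake of the example, not the official schema.
sample = json.loads("""
{
  "id": "org-1",
  "name": "Example Community Center",
  "services": [
    {"id": "svc-1", "name": "Food Pantry", "status": "active"},
    {"id": "svc-2", "name": "Job Training", "status": "inactive"}
  ],
  "locations": [
    {"id": "loc-1", "name": "Main Office", "latitude": 25.77, "longitude": -80.19}
  ]
}
""")

def active_services(org: dict) -> list:
    """Return the names of an organization's currently active services."""
    return [s["name"] for s in org.get("services", []) if s.get("status") == "active"]

print(active_services(sample))  # -> ['Food Pantry']
```

A consumer that receives such records from any HSDA-compliant endpoint could traverse them the same way regardless of which vendor’s system published them — which is the point of the common format.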

Insights

  • Standardization is a long-term process of knowledge accumulation and relationship building. This project was able to succeed with a modest budget because we leveraged existing relationships and accumulated knowledge (such as the development of the Human Services Data Specification, the Miami Open211 initiative, etc.). This was an intermediate step in a much longer-term process.
  • Research and development requires time and resources. We under-budgeted for the work of research, development and engagement. We succeeded largely through in-kind donations of labor that should be compensated. Ideally, funding for standards and interoperability would be ongoing, with resources intentionally allocated as a standard operating cost within broader funding initiatives.
  • Iterative development is especially difficult in a network-oriented context. Modern software development tends to entail rapid iteration — learning by ‘failing fast.’ This practice is more difficult when developing standards, because participants are understandably reluctant to implement a spec that might change shortly thereafter (thus requiring work to be re-done). Ideally, active implementations could be at least partially funded alongside the R&D process, in order to generate authentic feedback.
  • Success runs through our partners, which means progress is subject to forces beyond our control. For our work to be successful, our partners have to implement the specification in their own environments — and their actions are constrained by forces outside of our control. That means timeframes and budgets are especially hard to estimate, and subject to unexpected change.
  • Just because anyone can redeploy open source technology doesn’t mean they readily will — even when they would benefit from it. Guidance and even direct implementation support can accelerate adoption. This is both a challenge and a business opportunity.
  • There is a viable business strategy for open resource data platforms. We’ve learned, through pilot projects in Miami and elsewhere, that open data can be an effective business model, especially through premium value-add products and service guarantees. This is often a non-obvious conclusion, and more work will be required to validate and scale these findings.
  • The logic of cooperation is infectious. The ‘default mode’ of business-as-usual — relentless competition in all directions — is wasteful and disempowering, yet also deeply ingrained in our culture. It can also be unlearned. The more people see cooperative behavior from other actors, the more they see the value of cooperation. Ultimately, this work (as is the case with most social problems) is less about technology itself than it is about making it easier for people with common interests to cooperate.

Next Steps

  • Apply expertise of AIRS and the I&R sector to new practices of data sharing. For the Alliance of Information and Referral Systems, inter-organizational collaboration and data-sharing are considered ‘best practices’ according to their official standards. These new API capabilities present a transformational shift in how such cooperation can actually work. The Open Referral ecosystem stands to gain much by applying the know-how from this diverse industry to these new technical methods.
  • Specify protocols for federated data management. HSDS currently enables publication of data from a single source to many potential users; this version of HSDA offers ‘write’ functionality, yet it still assumes a one-to-many relationship between source and users. Future protocols should account for the reality that resource directory data comes from distributed sources; in other words, they should support many-to-many data flows. At the next frontier of this work, we need to support data federation.
  • Establish protocols for information that should stay out of public view. The majority of information about health, human, and social services is rightfully public; however, in some instances the location of certain services — such as shelters for survivors of domestic violence or asylum seekers — is highly sensitive. Directory data that could put vulnerable people at risk should be kept ‘closed.’ Future iterations of our protocols may need to specify methods to exclude such data from public view.
  • Evolve Open Referral in an API-first direction. Open Referral’s first specification, HSDS, called for bulk packaging of CSV files (to support bulk publishing and direct editing with simple tools). Given growing interest in sharing data via API, our technical leads have proposed a new approach for v2.0, which would recenter on HSDA as the core specification, with datapackaged CSVs as an optional bulk format.
  • Develop tools and support systems to facilitate adoption of APIs. Our field will benefit from more opportunities to learn by doing. The live HSDA ‘developer platform’ can be redeployed and adapted by vendors for their own data portals; our validator tool can help organizations with data transformation processes. We assume, however, that more research and development can improve the ability for developers and data managers to adopt new practices.
  • Licensing: research, deliberation, and education. The matter of licensing — of data as well as APIs and other code — is still an open question: “open source” can mean different things in different contexts, with widely varying economic and legal implications. We currently recommend Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0), though implementers may consider a range of licenses recognized as conformant with the Open Definition. More research is needed into coherent and effective licensing norms.
  • Align with related standards: This domain touches many established fields with adopted standards, such as FHIR in healthcare, schema.org on the web, Open311 for municipal service requests, the National Information Exchange Model, and others. Future iterations of our development should include resources to establish crosswalks with relevant standards in neighboring domains.
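
The federation challenge noted above — many sources describing the same services — can be sketched as a merge that tracks provenance. This is a hypothetical illustration, not a proposed protocol: the source names, field names, and the ‘most recently updated record wins’ rule are all assumptions made for the sake of the example.

```python
# Sketch of a many-to-many "federation" merge: several source directories
# may describe the same service, so a consumer must reconcile records and
# remember where each one came from. The conflict-resolution rule here
# (latest update wins) is an illustrative assumption, not a standard.
from datetime import date

def federate(feeds: dict) -> dict:
    """Merge service records from multiple named sources, keyed by record id."""
    merged = {}
    for source, records in feeds.items():
        for rec in records:
            rec = {**rec, "source": source}  # annotate provenance
            current = merged.get(rec["id"])
            if current is None or rec["last_updated"] > current["last_updated"]:
                merged[rec["id"]] = rec
    return merged

feeds = {
    "county_211": [
        {"id": "svc-1", "name": "Food Pantry", "last_updated": date(2019, 1, 5)},
    ],
    "city_portal": [
        {"id": "svc-1", "name": "Food Pantry (extended hours)",
         "last_updated": date(2019, 3, 2)},
        {"id": "svc-2", "name": "Legal Aid", "last_updated": date(2019, 2, 1)},
    ],
}

result = federate(feeds)
print(result["svc-1"]["source"])  # -> city_portal
```

Even this toy version surfaces the real design questions a federation protocol would have to answer: how records from different sources are matched, which source is authoritative when they conflict, and how provenance is preserved for downstream users.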