In March 2023, we kicked off our Spring Hack and Learn, a community event co-hosted with INSPER Metricis and the Indian Institute of Management Lucknow as part of the INDIGO initiative. The Hack and Learn is an international online event where policymakers, practitioners, researchers and data enthusiasts come together and work for two weeks on a particular challenge related to using better data, or using data more efficiently, to improve social outcomes.
For this edition, we had three teams working on very exciting challenges. The ‘How much does it cost?’ challenge was a continuation of a conversation that started at the previous Hack and Learn event (September 2022). This team addressed the issue of the scarce and fragmented data on the transaction costs incurred by impact bond projects. The second team worked on measurement. Given how difficult it is to measure outcomes, this group reviewed the measurement frameworks of different impact bond projects. Finally, a third group focused on the set of variables describing outcome metrics, creating new variables to better capture group-level metrics. All of the teams worked with data from the INDIGO Impact Bond Dataset.
After two weeks of work, we gathered for our Show and Tell event. Participants showed the final outputs of their work and received feedback from a panel of experts – Abhik Sen (Commonwealth Secretariat) and Felix-Anselm van Lier (GO Lab). This blog summarises the reflections and learnings of participants, and highlights the tools they developed during the Hack and Learn.
How much does it cost? (Gabriella Escobar Cukier, INSPER Metricis, São Paulo, Brazil)
We joined forces with a fantastic international team and decided to continue the conversation around transaction costs that started in 2022. Some of us were new to the impact bond world and had the chance to learn how impact bonds work and some of their advantages and disadvantages. Other participants, experienced practitioners in the field of outcomes-based contracts, had a deeper discussion about the best ways to think about transaction costs in impact bond projects.
Our main task was to fill in the cost variables spreadsheet that was created during the last Hack and Learn event. This showed me the importance of testing research findings and data models as an iterative process to adjust and optimise the results. One challenge that we faced was the difficulty of attributing a single category to every cost line. For instance, the variables were designed to capture the costs of feasibility studies, business case development and design work separately, but impact bond reports would use only one category to summarise all the early design and development costs. The level of granularity that the variables assumed was not consistent with the type of data that is publicly available. In the future, a more detailed description of what each category entails, and of how to aggregate categories, could be beneficial.
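To make the mismatch concrete, here is a minimal sketch of the kind of roll-up that published reports force on the data. The category names are invented for this illustration; they are not the actual INDIGO cost variable names.

```python
# Illustrative only: category names are hypothetical, not actual INDIGO
# cost variable names.
GRANULAR_TO_REPORTED = {
    "feasibility_study": "early_design_and_development",
    "business_case_development": "early_design_and_development",
    "design_work": "early_design_and_development",
}

def aggregate_costs(cost_lines: dict[str, float]) -> dict[str, float]:
    """Roll granular cost lines up to the coarser categories that
    publicly available reports actually use."""
    reported: dict[str, float] = {}
    for category, amount in cost_lines.items():
        coarse = GRANULAR_TO_REPORTED.get(category, category)
        reported[coarse] = reported.get(coarse, 0.0) + amount
    return reported

print(aggregate_costs({"feasibility_study": 20_000.0, "design_work": 35_000.0}))
# {'early_design_and_development': 55000.0}
```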
Understanding ‘Measurement’ in Impact Bonds (Priyanshu Gupta, Indian Institute of Management Lucknow, India)
Given the crucial importance of measurement to the design and execution of impact bonds, the team sought to analyse and draw insights around the different components of the measurement frameworks being used to structure impact bonds. For this project, the team looked into three elements: (i) the choice of metric to track social outcomes, (ii) the evaluation method used to capture and track this metric, and (iii) the payment structure that was tied to the chosen metric.
We found that there is considerable heterogeneity in the number of metrics used to trigger payments in impact bonds: some used 2 or fewer, while one bond used as many as 19 metrics, with a median of 4 metrics per bond. Nearly two-thirds of the impact bonds had at least one output metric, while a third focused exclusively on outcome metrics. A per capita payment structure was used most frequently, followed by payment structures based on the distance travelled by the beneficiary population after the intervention. The most popular choice of evaluation method was the use of validated (and audited) administrative data generated as part of the project, followed by experimental and quasi-experimental evaluation designs.
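For readers who want to reproduce this kind of tally, here is a minimal sketch using pandas. The file name and column names ("project_id", "metric_type") are assumptions for illustration, not the actual INDIGO field names.

```python
import pandas as pd

# One row per payment-triggering metric; file and column names are hypothetical.
metrics = pd.read_csv("impact_bond_metrics.csv")

# Number of payment-triggering metrics per impact bond
per_bond = metrics.groupby("project_id").size()
print(f"median metrics per bond: {per_bond.median():.0f}")
print(f"max metrics per bond:    {per_bond.max()}")

# Share of bonds with at least one output metric vs. outcome metrics only
has_output = metrics["metric_type"].eq("output").groupby(metrics["project_id"]).any()
print(f"bonds with at least one output metric: {has_output.mean():.0%}")
print(f"bonds with outcome metrics only:       {(~has_output).mean():.0%}")
```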
While the challenge focused on asking the “what” questions about these projects, the next step would be to investigate the “why” questions: why were particular measurement frameworks, payment structures, and evaluation methods chosen, and how appropriate are they for tracking and managing the intended objectives of the programme?
Rethinking the outcome metric variables (Jorge Norio Rezende Ikawa, INSPER Metricis – São Paulo, Brazil)
The Impact Bond Dataset has a dedicated set of variables that capture information on outcome metrics and targets. As impact bonds evolve and adapt to different contexts, they have started to include group or system level metrics. However, the Impact Bond Dataset was designed to capture individual level metrics. Participants were challenged to adapt the outcome metrics model to better capture data on different types of outcome metrics. More specifically, this conceptual challenge proposed two main activities: the identification of projects with group or system level metrics (for example, “aggregate learning gains for all students” or “expansion of the labour market”), and a reflection on how well the current data model captures non-individual level metrics.
Based on the analysis of a subsample of 100 contracts, the group found that group/system level metrics are highly relevant: 37% of the contracts included at least one group/system level metric. As a second step, the group proposed some improvements to the data model. The first suggestion was to standardise the information that should appear in the field “Outcome Definition”, since some contracts have very detailed information while others are very succinct. The second suggestion refers to cases with missing data: completing the column “Outcome metric target - (Value)” would provide important information about the metrics, but the variable has a high percentage of missing data. The third suggestion was to add options to the column “Unit type of targeted Service users or beneficiaries - (Value)”. At the moment, the only two options are “individuals” and “other”. The categories “group level” and “system level” could be useful to specify that some metrics aim to capture system or group changes, rather than individual ones, as sketched below.
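As a rough illustration of that third suggestion, the sketch below extends the set of unit-type options and tags free-text outcome definitions. The labels and keyword rules are ours, purely for illustration, and are not part of the INDIGO data model.

```python
from enum import Enum

class UnitType(Enum):
    INDIVIDUAL = "individual"
    GROUP = "group level"    # proposed addition
    SYSTEM = "system level"  # proposed addition
    OTHER = "other"

def tag_unit_type(outcome_definition: str) -> UnitType:
    """Crude keyword-based tagging, purely to illustrate how the new
    categories could be applied to free-text outcome definitions."""
    text = outcome_definition.lower()
    if "all students" in text or "aggregate" in text:
        return UnitType.GROUP
    if "labour market" in text or "system" in text:
        return UnitType.SYSTEM
    return UnitType.INDIVIDUAL

print(tag_unit_type("aggregate learning gains for all students"))  # UnitType.GROUP
print(tag_unit_type("expansion of the labour market"))             # UnitType.SYSTEM
```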
Surprise! Our friends at Open Data Services decided to use the opportunity of the Hack and Learn to explore the use of organisation identifiers (org-ids) to enrich the insights coming from the Impact Bond Dataset:
Discovering more information about projects using Org-id (James Baster, Open Data Services Co-op, Scotland, UK)
When we use the Impact Bond Dataset to look at organisations, we know there are other datasets out there with information that might be of interest. For instance, 360Giving publishes data on grants and Open Contracting publishes data on public contracting processes. If those grants or tenders relate to an impact bond, we may be interested in them.
But how can we find them? One way is to look for the organisations involved in impact bonds and see whether they appear in other datasets. To do this, we need to make sure that organisations in different datasets all use the same IDs. This is where Org-id comes in – it provides a way to identify organisations by the ID numbers they may have been given by various registries. The INDIGO project, 360Giving, Open Contracting and other data standards all use Org-ids.
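As a minimal sketch of why this matters, the snippet below joins two datasets on a shared org-id. File names and column names are assumptions for illustration; each dataset publishes its own schema.

```python
import pandas as pd

# Hypothetical extracts; file and column names are assumptions, not real schemas.
indigo = pd.read_csv("indigo_organisations.csv")  # columns: org_id, name, ...
grants = pd.read_csv("360giving_grants.csv")      # columns: recipient_org_id, ...

# Because both datasets identify organisations with the same org-id scheme
# (e.g. "GB-CHC-123456"), a simple join surfaces grants received by
# organisations involved in impact bonds.
linked = indigo.merge(
    grants, left_on="org_id", right_on="recipient_org_id", how="inner"
)
print(linked[["name", "org_id"]].drop_duplicates())
```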
During this Hack and Learn we added Org-ids to some INDIGO organisations and wrote up notes on how to find them. We also cleaned up some Open Contracting data that was found during previous Hack and Learn events. At Open Data Services Co-op we are also experimenting with a tool that brings Org-ids from all these different datasets together, to make it easy to find all the references to a given Org-id. For more information, you can read a blog post on Org-id.
Next Steps
We had a great time working together during the two weeks, and are very grateful to all the participants who dedicated time and effort to their challenges, and to our partners who co-hosted the event (INSPER Metricis and the Indian Institute of Management Lucknow). The next Hack and Learn event will be in September 2023. We are looking forward to the next opportunity to get together with the INDIGO community and continue the conversations that we started in this event.
The Hack and Learn is a biannual event hosted by the Government Outcomes Lab and our partners. The event is open to all, and free to attend. We also welcome proposals from external organisations to lead challenges. Externally led challenges are not associated with the GO Lab and do not constitute a formal partnership with us, but we are happy to provide a platform for those who wish to share learning.