
The Needle and the Dangers of Early Pay for Success Project Evaluation
From the Social Finance Institute
Following the primaries? Maybe you’ve come across The Needle, the speedometer-like data visualization The New York Times uses to display live election results. This seemingly simple graphic does a big job, analyzing early returns and other sources to produce easy-to-consume forecasts. It also encapsulates The Times’ ambitious goal of providing real-time election predictions to information-hungry voters.
In the Pay for Success (PFS) world, we grapple with similar demands. Like voters feverishly anticipating their candidates’ delegate counts, PFS project stakeholders are eager to measure their impact as quickly as possible. In fact, the desire for data-based validation is even more intense in our space. PFS initiatives typically unfold over 3 to 7 years to allow for service delivery and participant observation. But few funders want to wait that long to know whether the millions in catalytic capital they’ve contributed are making an impact on society. Consequently, the temptation to peek under the hood mid-project can be overwhelming. However, looking at the preliminary evaluation results of an intervention and using that data to make funding or continuation decisions is risky.
If you’ve been keeping an eye on The Needle this primary season, you probably understand why. During the early hours of the New Hampshire Democratic primary on Feb. 11, it reflected the initial assumption that Vermont Senator Bernie Sanders had a 70% chance to win. But as initial returns rolled in, former South Bend, Indiana, Mayor Pete Buttigieg gained momentum and The Needle shifted — Sen. Sanders’ chances of victory dropped and Mayor Buttigieg’s increased sharply. However, several waves of fresh voting data pushed the senator back into the lead, stabilized The Needle, and allowed it to form one final conclusion: 100% for Sen. Sanders.
PFS project evaluation results tend to fluctuate just like this. So, if stakeholders review intervention data too early, they might draw improper inferences about final outcomes, much like you might’ve assumed Mayor Buttigieg was going to win New Hampshire after glancing at The Needle at 8:30 p.m. Such snap judgments can lead to premature corrective action that pushes a successful program off track, or to the termination of an initiative on the verge of gaining traction.
That said, in a PFS initiative, or for that matter, any pilot, demonstration, or project with an evaluation-based learning agenda, reviewing preliminary evaluation results is sometimes necessary. So if you must check on your evaluation cake before it’s fully baked, consider adopting the following principles to help avoid making improper inferences:
Interested in learning more about Social Finance and our advisory work? Connect with Jake Segal.