
Confronting Radical Uncertainty

Tracy Palandjian and Jake Segal

March 27, 2024

What do we do when we don’t know ‘what works’?

Social science research is overdue for a recalibration. Social change is far less predictable than we’d like to believe; the interrelated fields of evidence, measurement and evaluation would benefit from a much heavier focus on feedback, reflection and improvement.

What’s so intriguing about Megan Stevenson’s article, “Cause, Effect, and the Structure of the Social World,” is the magnitude of its claims. Even on its face, it’s a bold challenge to the practical usefulness of causal research in criminal justice. But it’s far more than that, too; Stevenson uses her field of expertise as a window into something deeper, suggesting that, for all our confident assertions about evidence-based policy, we in fact know very little about how change happens in people’s lives.

That claim is enormous, fully against the grain of conventional wisdom and, in our experience, almost certainly true.

At Social Finance, a nonprofit that mobilizes data, funding, and ideas to shape public policy, we often confront what Stevenson describes as “the engineer’s view” of social change: Tweak something here, get more value there. Unlike Dr. Stevenson, we aren’t researchers or philosophers but users of causal research. The models we build — which are about accountability, often linking money with results — make the reliability of that research especially important. So we are serious consumers of the studies she cites and their applicability to real-world issues.

Our experience supports her caution about the limits of social science research. We have found, in our own less-comprehensive but wider-aperture work, a similar pattern: that “evidence” is context-dependent, contradictory, often minor, consistently conservative and rarely replicable. This is true even of policy interventions that top-notch researchers describe as rooted in the best evidence.

For example, home visiting for families with young children — a darling of the evidence-based policy movement — can be highly effective, or not. Different models find hugely different results, and sequential experiments on the same model don’t produce the same findings. Supportive housing can sometimes reduce health care costs — this has become nearly a shibboleth in our sector — but it often doesn’t. Despite our best efforts to codify what works into simple recipes, we can’t; our world is deeply unpredictable.

Like any good secret, this uncertainty about the world is kept hidden for a reason. Certainty is convenient; it helps politicians get elected, and it helps nonprofits get funded. It’s far more compelling to say “we know this works” than to accurately describe a complex truth: that a variety of studies, conducted in different contexts and with highly variable methods, suggest there’s a reasonable probability it might work for some people. And simplicity is protective, too, of our collective social license to operate. To be transparent about our uncertainty is to risk retrenchment from a public that’s resistant to nuance. In other words, admitting how little we know for sure makes it harder to fund the work of exploration and improvement and easier to shrug our shoulders and walk away.

We believe in a more open, more humble view of social change — a deeply held belief in uncertainty, a rejection of false simplicity, and honest political messaging that trusts people to hold complex truths. But we’re also here to argue for something more.

Experimental evidence still has its place. We think of it as a spotlight. It can help us to see something specific we might only guess at otherwise. Yes, by its nature, such evidence misses much of the richness of context — ignoring the diffuse impact that good programs can create just out of view. But we see good evaluations, rare as they are, as momentary glimpses of underlying truths about the world, the tips of icebergs poking out over a dark sea. They shift under our feet the moment we step — nothing is ever “proven” — but it’s better than jumping, unseeing, into the water. We get a starting place for how we might build the next effective program, then take the next step forward. And over time, maybe we can get a better sense for where the patterns are, so we’re more likely to stay dry.

We shouldn’t see evidence as a solution, but as a starting place. The solution is simpler and harder: collaborating with each other, getting feedback from participants to the people delivering programs, and helping systems improve. We are not talking about one-off interventions that change the world. The real work isn’t about what we deliver, but how.

Our delivery systems need to be more dynamic. We should not have to rely on our (very limited) ability to predict how well programs will work. Instead, we should link funding for programs with how successful they are — while equipping providers with performance management tools, professional development opportunities, and thought partnership to help them be responsive to their participants. For example, we worked with the Commonwealth of Massachusetts and the Tuscaloosa Research and Education Advancement Corporation to design a funding model for a veterans’ supportive employment program that allows the program to grow based on its outcomes.

We should also make service delivery more adaptive. Adaptive systems ask us to replace compliance relationships — the dark heart of government — with something more responsive to community needs and changing circumstances. We need to be able to look at progress every week, openly assess what’s working and what’s not, and make changes. For example, in Texas’ child welfare system, we worked with the Department of Family and Protective Services to build a quarterly feedback tool that gives 130 statewide providers more insight into how the people they serve and the outcomes they achieve compare with those of other, similar organizations. Providers don’t just submit information to the state; they get detailed, actionable data back, letting them test new ideas, fix problems and adjust over time.

Consumers of social services should shape the programs they receive. That means new kinds of governance structures, and it also means pairing rigorous quantitative analysis with participant feedback. For example, along with the Municipality of Anchorage, the United Way, the Anchorage Coalition to End Homelessness, Southcentral Foundation and others, we help manage a supportive housing expansion in which a small governance group — including the local continuum of care and a community representative with firsthand experience of homelessness — is empowered to make contract adjustments, and actively does so.

Simply trying to replicate the results of successful randomized controlled trials won’t change the world. But that doesn’t mean they’re a waste of time. They’re a good starting place for the real and hard work of creating better systems.