Reasonable Doubt: A New Look at Whether Prison Growth Cuts Crime

This is the first in a series of posts summarizing the Open Philanthropy review of the evidence on the impacts of incarceration on crime. The full report is available in PDF, Kindle, and ePub formats.

Around the time Chloe Cockburn joined Open Philanthropy to spearhead our grantmaking for criminal justice reform, I was tasked with reviewing the research on whether reducing the number of people in American jails and prisons might actually increase crime. In effect, we at Open Philanthropy asked ourselves: what if we’re wrong? What if our grantees win reforms that cut the number of people behind bars, and that pushes the crime rate up? How likely is that? And how likely is it that any increase would be large enough to overshadow the benefits of decarceration, which include taxpayer savings and expanded human freedom?

It may seem strange to launch a grantmaking program even as we question its empirical basis. But Open Philanthropy had already invested significant time in studying criminal justice reform as a cause. And practical decisions must always be made in the face of incomplete information, forcing people and organizations to exercise what Herbert Simon called “bounded rationality.” It can be boundedly rational to act on the information gathered so far, even as we gather more.

The final report reaches two major conclusions:

  • I estimate that, at typical policy margins in the United States today, decarceration has zero net impact on crime. That estimate is uncertain, but at least as much evidence suggests that decarceration reduces crime as increases it. The crux of the matter is that tougher sentences hardly deter crime, and that while imprisoning people temporarily stops them from committing crime outside prison walls, it also tends to increase their criminality after release. As a result, “tough-on-crime” initiatives can reduce crime in the short run but cause offsetting harm in the long run.
  • Empirical social science research—or at least non-experimental social science research—should not be taken at face value. Among three dozen studies I reviewed, I obtained or reconstructed the data and code for eight. Replication and reanalysis revealed significant methodological concerns in seven and led to major reinterpretations of four. These studies endured much tougher scrutiny from me than they did from the peer reviewers who admitted them into academic journals. Yet given the stakes in lives and dollars, the added scrutiny was worth it. So from the point of view of decision makers who rely on academic research, today’s peer review processes fall well short of optimal.

The rest of this post elaborates on those conclusions.

The scale of incarceration in the U.S.

Long ago when the world was young, I followed my girlfriend to Philadelphia. Biking to work, I sometimes noticed, embedded in the drab urban fabric of asphalt and rowhouses, a massive, crenelated, slit-windowed, medieval fortress. Not till 2015, when I returned with my family, did I learn that it was the former Eastern State Penitentiary, an old prison built on the theories of a criminal justice reform movement of 200 years ago. It is a fantastic museum now: you must go. When we visited, we finished our self-guided audio tours in an interior courtyard, where inmates were once permitted to exercise. Now installed there is the Big Graph, a sculpture that puts America’s incarceration rate in historical and global perspective. This photograph of the Big Graph shows prisoners per 100,000 Americans by decade for 1900–2010:

In 1970, 196,000 people resided in American prisons, and another 161,000 in jails, which worked out to 174 inmates per 100,000 people. In 2015, 1.53 million people lived in U.S. prisons and 728,000 in jails, or 673 per 100,000. The next photo captures the side of that tall red bar, which depicts prisoners per 100,000 residents by country in 2010:

In fact, the U.S. may be #2 behind North Korea, but that country is presumably left off the Big Graph for lack of reliable data.

Experts disagree about what caused the incarcerated population to mushroom. John Pfaff of the Fordham School of Law sees a change in prosecutor behavior as key: prosecutors file more charges per arrest than they did in the 1970s. Alfred Blumstein and Allen Beck, on the other hand, assign roughly equal responsibility to increased commitments per arrest, which prosecutors would have contributed to, and lengthened prison stays. Regardless, I think there is little doubt that a “tough on crime” movement, including the “war on drugs” initiated by Richard Nixon, caused much of the expansion. Laws were passed to toughen sentences. More was spent on arresting people and keeping them in prison. For whatever reasons of principle and politics, prosecutors became more aggressive. And there was a racial dimension: the newly imprisoned were disproportionately black.

The more practical question for Open Philanthropy is not about the causes of the rise, but the consequences of a fall. Putting fewer people behind bars should expand liberty and save government money. But might it also increase crime? Or might it reduce crime, if the prison experience is itself a source of criminality?

A good way to organize our inquiry is to start with a simple observation. Incarceration can be thought of as affecting crime before, during, and after: before, in that stiffer sentences may deter crime before it would happen; during, in that people inside prison cannot physically commit crime outside; and after, in that having been incarcerated may shift one’s chance of reoffending. The first is called “deterrence,” the second “incapacitation.” The third I call simply “aftereffects.” Conceptually, the aftereffects channel is most diverse. Prisons may rehabilitate inmates, by “scaring them straight,” or teaching them job skills, or treating their addictions. Or doing time may be criminogenic. Having been imprisoned may make it harder for people to find legal employment, may psychologically alienate them from mainstream society, and may strengthen their social bonds with criminals, all of which could raise recidivism. (Aftereffects are conventionally termed “specific deterrence,” on the idea that having been in prison strengthens deterrence: once bitten twice shy. But I think it’s better to approach the evidence with theory-neutral terminology.)

In reviewing studies that examine one or more of these channels from incarceration to crime, I restricted myself to potentially high-credibility studies: ones that exploit randomized experiments, or else “quasi-experiments” that arise incidentally from the machinations of the criminal justice system. And I put special weight on the eight studies whose data and computer code I could touch with my own hands, as it were. As I mentioned, my replication and reanalysis of these studies led me to revise my readings of some. (Section 3.3 of the full report describes my search for data and code.)

On the basis of the evidence gathered, checked, and distilled, here is my reasoning that decarceration in the United States today is unlikely to increase crime:

  1. Deterrence is de minimis. Eric Helland and Alex Tabarrok’s study of California’s tough “Three Strikes and You’re Out” sentencing law suggests that each 10% increase in prospective sentences cuts crime by 1% through deterrence, for a modest “elasticity” of −0.1. The other replicable deterrence study, by David Abrams, looks at two kinds of laws adopted by many states: laws that set mandatory minimum sentences for certain crimes, and laws that raise sentences for crimes that involve a gun. In particular, the Abrams study checks whether rates of gun-involved assault and robbery fell right after either type of law was passed. Two types of law and two kinds of crime make for four combinations; the study finds an impact in one of the four, also with an elasticity of about −0.1. But my reanalyses of the two studies call even those mild impacts into question.
  2. Incapacitation is real. Putting people in prison reduces crime outside of prison for the duration of their stays. Of course, putting more people in prison also causes more crime in prison. Credible estimates of incapacitation—defined here as the crime reduction outside of prisons—vary by context. Especially salient to today’s criminal justice reform movement is the experience of California after the 2011 “realignment” reform, which reduced confinement of people convicted of non-serious, non-sexual, nonviolent offenses. Revisiting the work of Magnus Lofstrom and Steven Raphael, I estimate that each person-year of reduced incarceration caused 6.7 more property crimes in the state—burglary, general theft, motor vehicle theft—among which the impact on motor vehicle thefts is clearest, at 1.2 per person-year.
  3. Aftereffects appear harmful more often than not: more time in prison, more crime after prison. In particular, all but one of the five studies that compare incapacitation and aftereffects in the same context find that any short-run crime drop from putting people behind bars is at least cancelled out by more crime when they get out.[1] For example, Donald Green and Daniel Winik find that drug defendants in Washington, DC, who happened to appear before longer-sentencing judges were at least as likely to be rearrested within four years as those appearing before shorter-sentencing judges—even though the first group spent more of those four years in prison, when they could not be rearrested.
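To make the headline numbers concrete, here is a toy back-of-the-envelope calculation of mine (not from the report) that applies the −0.1 deterrence elasticity and the ~6.7 crimes-per-person-year incapacitation estimate to a hypothetical policy change:

```python
# Toy illustration using the point estimates quoted above. The scenario
# (a 20% sentence increase; 10,000 person-years of decarceration) is
# hypothetical, chosen only to show the scale the estimates imply.

def deterrence_effect(pct_sentence_change, elasticity=-0.1):
    """Percent change in crime implied by a percent change in
    prospective sentence length, at a given elasticity."""
    return elasticity * pct_sentence_change

# A 20% increase in prospective sentences, at elasticity -0.1,
# implies only about a 2% drop in crime through deterrence.
print(deterrence_effect(20))  # -2.0

# Incapacitation: ~6.7 extra property crimes per person-year of
# reduced incarceration (the California realignment estimate).
person_years_averted = 10_000
extra_property_crimes = 6.7 * person_years_averted
print(extra_property_crimes)  # 67000.0
```

The sketch leaves out aftereffects, which, per item 3, tend to offset the incapacitation term over time.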

In short, it seems, incarceration’s “before” effect is mild or zero while the “after” typically cancels out the “during.”

Since my conclusion is uncertain and may be biased—or may at least look that way—I also develop a devil’s-advocate position. From the evidence gathered here, how could one most persuasively contend that decarceration would endanger the public? I think the strongest argument would discard as biased my critical reanalysis of the two studies finding mild deterrence (item 1). It would then invoke the minority of aftereffects studies that contradict item 3 above, notably the Ilyana Kuziemko and Peter Ganong papers finding beneficial aftereffects in Georgia (setting aside my critical reanalyses of those as well). Then, incarceration would be seen as reducing crime before, during, and after.

This table summarizes the two views on the marginal impact of decarceration on crime in the U.S. today:

                 My best synthesis of the evidence   Devil’s-advocate view
Deterrence       0                                   + (mild)
Incapacitation   +                                   +
Aftereffects     −                                   +
Total            0                                   +

Note: “mild” = elasticity of −0.1.

If the devil’s advocate is right that putting fewer people in prison causes more crime outside of prison, the increase might still be small enough that most people would view the tradeoff as worthwhile. After all, decarceration saves taxpayers money, increases the liberty and economic productivity of erstwhile prisoners, and reduces disruption of their families and communities. To explore this possible trade-off, the report closes with a cost-benefit analysis.

Overall, I estimate the societal benefit of decarceration at $92,000 per person-year of averted confinement. That figure is dominated by taxpayer savings and the money value of gained liberty. The crime increase perceived by the devil’s advocate translates into $22,000–$92,000, depending on the method used to express crime’s harm in dollars. I argue that the methodology behind the high end of that range is less reliable. It works from surveys that asked people how much they would pay for a 10% crime cut, even though most Americans do not know how much crime occurs near them, and thus what it would mean to cut it by 10%. But if we accept the high figure, then in the worst-case valuation of the worst-case scenario plausibly rooted in the evidence, decarceration is about break-even. Given the great uncertainties in that calculation—about the crime impact of decarceration, the money value of crime victimization, the value of liberty—the precision in the worst-case assessment ($92,000 in costs, $92,000 in benefits) is an illusion. The worst case should be viewed as roughly break-even.
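The worst-case arithmetic can be restated in a few lines. The dollar figures are the report’s; the netting below is my own sketch:

```python
# Per person-year of averted confinement, in dollars.
benefit = 92_000                       # taxpayer savings + value of liberty, etc.
cost_low, cost_high = 22_000, 92_000   # devil's-advocate range for crime costs

net_worst = benefit - cost_high        # worst-case valuation of the worst case
net_best = benefit - cost_low          # low-end crime-cost valuation
print(net_worst, net_best)  # 0 70000
```

Even taking the devil’s-advocate crime estimate at face value, the outcome ranges from roughly break-even to a clear net benefit, depending only on how crime is priced.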

This spreadsheet contains a big table that lists all studies I reviewed, what the studies find, and what I take away from them. A separate tab holds the cost-benefit analysis, which you can modify. In the next three posts, I’ll go into more depth on the research on deterrence, incapacitation, and aftereffects.

Code and data for all replications are here (800 MB). The cost-benefit spreadsheet is here.


[1] Among the five studies—Green and Winik (2010), Loeffler (2013), Nagin and Snodgrass (2013), Mueller-Smith (2015), and Roach and Schanzenbach (2015)—only the last dissents. It is also the one where the quality of the quasi-experiment is least certain.

