This is the first in a series of posts summarizing the Open Philanthropy review of the evidence on the impacts of incarceration on crime. Read the full report here.
Around the time Chloe Cockburn joined Open Philanthropy to spearhead our grantmaking for criminal justice reform, I was tasked with reviewing the research on whether reducing the number of people in American jails and prisons might actually increase crime. In effect, we at Open Philanthropy asked ourselves: what if we’re wrong? What if our grantees win reforms that cut the number of people behind bars, and that pushes the crime rate up? How likely is that? And how likely is it that any increase would be large enough to overshadow the benefits of decarceration (including taxpayer savings and expanded human freedom)?
It may seem strange to launch a grantmaking program even as we question its empirical basis. But Open Philanthropy had already spent significant time studying criminal justice reform as a cause. And practical decisions must always be made in the face of incomplete information, forcing people and organizations to exercise what Herbert Simon called “bounded rationality.” It can be boundedly rational to act on the information gathered so far, even as you gather more.
The final report reaches two major conclusions:
- At typical policy margins in the United States today, decarceration probably has about zero net impact on crime outside of prison. That estimate is uncertain, but at least as much evidence suggests that decarceration reduces crime as suggests that it increases it. The crux of the matter is that tougher sentences hardly deter crime, and that while imprisoning people temporarily stops them from committing crime outside prison walls, it also tends to increase their criminality after release. As a result, “tough-on-crime” initiatives can reduce crime in the short run but cause offsetting harm in the long run. In effect, they borrow from the future.
- Empirical social science research—or at least non-experimental social science research—should not be taken at face value. Among three dozen studies I reviewed, I obtained or reconstructed the data and code for eight. Replication and reanalysis revealed significant methodological concerns in seven and led to major reinterpretations of four. These studies endured much tougher scrutiny from me than they did from the peer reviewers who vetted them for academic journals. Yet given the stakes in lives and dollars, the added scrutiny was worth it. So from the point of view of decision makers who rely on academic research, today’s peer review processes fall well short of optimal.
The rest of this post elaborates on those conclusions.
The scale of incarceration in the U.S.
Long ago when the world was young, I followed my girlfriend to a life in Philadelphia. Biking to work, I sometimes noticed, embedded in the drab urban fabric of asphalt and rowhouses, a crenelated, slit-windowed, medieval fortress. Not till I returned with my family in 2015 did I learn that it was the former Eastern State Penitentiary, an old prison built on the theories of a criminal justice reform movement of 200 years ago. It is a fantastic museum now: you must go. We finished our self-guided audio tour in a courtyard where inmates were once permitted to exercise. Installed there now is the Big Graph, a sculpture that puts America’s incarceration rate in perspective. This photograph of the Big Graph shows prisoners per 100,000 Americans by decade for 1900–2010:
In 1970, 196,000 people resided in American prisons, and another 161,000 in jails, which worked out to 174 inmates per 100,000 people. In 2015, 1.53 million people lived in U.S. prisons and 728,000 in jails, or 673 per 100,000. The next photo captures the side of that tall red bar, which depicts prisoners per 100,000 residents by country in 2010:
In fact, the U.S. may be rivaled by North Korea, but that country is presumably left off the Big Graph for lack of reliable data.
Experts disagree about what caused the incarcerated population to mushroom. John Pfaff of the Fordham School of Law sees a change in prosecutor behavior as key: prosecutors file more charges per arrest than they did in the 1970s. Alfred Blumstein and Allen Beck, on the other hand, assign roughly equal responsibility to increased commitments per arrest, which prosecutors would have contributed to, and lengthened prison stays. Regardless, there is little doubt that the “tough on crime” movement, including the “war on drugs” initiated by Richard Nixon, caused much of the expansion. Laws were passed to toughen sentences. More was spent on arresting people and keeping them in prison. For whatever reasons of principle and politics, prosecutors became more aggressive. And there was a racial dimension: the newly imprisoned were disproportionately black.
The more practical question for Open Philanthropy is not about the causes of the rise, but the consequences of a fall. Shrinking prisons should expand liberty and save money. But might it also increase crime? Or might it reduce crime, if the prison experience is itself a source of criminality?
A caveat: I focus on how incarceration affects crime outside prison, and thus neglect that putting more people in prison almost certainly increases crime inside. I do this for several reasons. I found no rigorous studies of how much crime the marginal prisoner (the sort of person most likely to go free if decarceration initiatives prevail) commits while in prison. Also, neglecting the potential benefit of lower in-prison crime biases my conclusion in the conservative direction, against the operating beliefs of Open Philanthropy. Finally, I think some people view crime in prison as less morally important than crime outside, part of deserved punishment. No one said prison would be fun. Though I disagree, to the extent that I conclude that decarceration does net good even under their moral calculus, my conclusion should be persuasive to a larger group; and progress on criminal justice reform today is possible precisely because of pragmatic coalition-building among people of diverse world views.
A good starting point for this investigation turned out to be a simple observation. Incarceration can be thought of as affecting crime before, during, and after: before, in that stiffer sentences may deter crime before it would happen; during, in that people inside prison cannot physically commit crime outside; and after, in that having been incarcerated may alter one’s chance of reoffending. The first is called “deterrence,” the second “incapacitation.” The third I call simply “aftereffects.” Conceptually, the aftereffects channel is most diverse. Prisons may rehabilitate inmates, by “scaring them straight,” or teaching them job skills, or treating their addictions. Or doing time may be criminogenic. Having been imprisoned may make it harder for people to find legal employment, may alienate them from society, and may strengthen their bonds with criminals, all of which could raise recidivism. (Aftereffects are conventionally termed “specific deterrence,” on the idea that having been in prison strengthens deterrence: once bitten twice shy. But I think it’s better to approach the evidence with theory-neutral terminology.)
In reviewing research on these channels from incarceration to crime, I restricted myself to potentially high-credibility studies, meaning ones that exploit randomized experiments, or else “quasi-experiments” that arise incidentally from the arbitrary machinations of the criminal justice system. And I put special weight on the eight studies whose data and computer code I could touch with my own hands, as it were. As I mentioned, my replication and reanalysis of these studies led me to revise my readings of some. (Section 3.3 of the full report describes my search for data and code.)
On the basis of the evidence, gathered, checked, and distilled, here is the reasoning behind my central estimate that decarceration in the United States today does not, and would not, increase crime:
- Deterrence is de minimis. Eric Helland and Alex Tabarrok’s study of California’s tough “Three Strikes and You’re Out” sentencing law suggests that each 10% increase in prospective sentences cut crime by 1% through deterrence, for a modest “elasticity” of −0.1. The other replicable deterrence study, by David Abrams, looks at two kinds of laws adopted by many states: laws that set mandatory minimum sentences for certain crimes, and laws that raise sentences for crimes that involve a gun. In particular, the Abrams study checks whether rates of gun-involved assault and robbery fell right after either type of law was passed. Two types of law and two kinds of crime make for four combinations; the study finds an impact in one of the four, also with an elasticity of about −0.1. But my reanalyses of the two studies question even those mild impacts.
- Incapacitation is real. Putting people in prison reduces crime outside of prison for the duration of their stays. Credible estimates of incapacitation—defined here as the crime reduction outside of prisons—vary by context. Especially salient to today’s criminal justice reform movement is the experience of California after the 2011 “realignment” reform, which reduced confinement of people convicted of non-serious, non-sexual, nonviolent offenses. Revisiting the work of Magnus Lofstrom and Steven Raphael, I estimate that each person-year of reduced incarceration caused 6.7 more property crimes in the state—burglary, general theft, motor vehicle theft—among which the impact on motor vehicle thefts is clearest, at 1.2.
- Aftereffects appear harmful more often than not: more time in prison, more crime after prison. In particular, all but one of the five studies that compare incapacitation and aftereffects in the same context find that any short-run crime drop from putting people behind bars is at least cancelled out by more crime when they get out. For example, Donald Green and Daniel Winik find that drug defendants in Washington, DC, who happened to appear before longer-sentencing judges were at least as likely to be rearrested within four years as those appearing before shorter-sentencing judges—even though the first group spent more of those four years in prison, where they could not be rearrested.
In short, it seems, incarceration’s “before” effect is mild or zero while the “after” typically cancels out the “during.”
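As a back-of-the-envelope illustration of how the figures above work, here is a minimal sketch. The elasticity and incapacitation numbers are the point estimates quoted in the bullets; the arithmetic (and the hypothetical decarceration scale) is mine, not the report’s:

```python
# An elasticity of -0.1 means a 10% change in prospective sentences is
# associated with roughly a 1% change in crime, in the opposite direction.
def crime_change_from_deterrence(sentence_change_pct, elasticity=-0.1):
    """Approximate % change in crime from a % change in expected sentences."""
    return elasticity * sentence_change_pct

# A 10% increase in prospective sentences (the Helland & Tabarrok setting):
print(crime_change_from_deterrence(10))  # -> -1.0, i.e. ~1% less crime

# Incapacitation, per the realignment reanalysis: each person-year of reduced
# incarceration ~ 6.7 extra property crimes (1.2 of them motor vehicle thefts).
person_years_released = 1000  # hypothetical decarceration scale
extra_property_crimes = round(6.7 * person_years_released)
print(extra_property_crimes)  # -> 6700
```

The point of the synthesis, of course, is that any such short-run incapacitation loss is roughly offset by lower post-release offending, so these terms should not be read in isolation.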
Since my conclusion is uncertain and may be biased—or at least look that way—I also develop a devil’s-advocate position. From the evidence gathered here, how could one most persuasively contend that decarceration would endanger the public? I think the strongest argument would discard as biased my critical reanalysis of the two studies finding mild deterrence (item 1 above). It would then invoke the minority of aftereffects studies that contradict item 3, notably the Ilyana Kuziemko and Peter Ganong papers finding beneficial aftereffects in Georgia, setting aside my critical reanalyses of those as well. Then, incarceration would be seen as reducing crime before, during, and after.
This table summarizes the two views on the marginal impact of decarceration on crime outside prison in the U.S. today:
| Channel | My best synthesis of the evidence | The devil’s-advocate reading |
|---|---|---|
| Deterrence (“before”) | Zero | Mild |
| Incapacitation (“during”) | Real | Real |
| Aftereffects (“after”) | Harmful, roughly offsetting incapacitation | Beneficial |

Note: “mild” = elasticity of –0.1.
If the devil’s advocate is right that putting fewer people in prison causes more crime outside of prison, the increase might still be small enough that most people would view the tradeoff as worthwhile. After all, decarceration saves taxpayers money, increases the liberty and productivity of erstwhile prisoners, and reduces disruption of their families and communities. To explore this possible trade-off, the report closes with a cost-benefit analysis.
Overall, I estimate the societal benefit of decarceration at $92,000 per person-year of averted confinement. That figure is dominated by taxpayer savings and the money value of gained liberty. The crime increase perceived by the devil’s advocate translates into $22,000–$92,000 per person-year, depending on the method used to express crime’s harm in dollars. I argue that the methodology behind the high end of that range is less reliable. It works from surveys that asked people how much they would pay for a 10% crime cut, even though most Americans do not know how much crime occurs near them, and thus what it would mean to cut it by 10%. But if we accept the high figure, then in the worst-case valuation of the worst-case scenario plausibly rooted in the evidence, decarceration is about break-even. Given the great uncertainties in that calculation—about the crime impact of decarceration, the money value of crime victimization, the value of liberty—the precision in the worst-case assessment—$92,000 in costs, $92,000 in benefits—is an illusion. The worst case should be viewed as roughly break-even.
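The break-even arithmetic can be sketched in a few lines. The per-person-year figures are the ones quoted above; the framing as code is mine:

```python
# Per person-year of averted confinement, in dollars.
BENEFIT = 92_000          # taxpayer savings + money value of gained liberty
CRIME_COST_LOW = 22_000   # low-end valuation of the devil's-advocate crime increase
CRIME_COST_HIGH = 92_000  # high-end (survey-based) valuation, which I find less reliable

net_low_cost = BENEFIT - CRIME_COST_LOW    # clearly positive
net_high_cost = BENEFIT - CRIME_COST_HIGH  # worst case: about break-even
print(net_low_cost, net_high_cost)  # -> 70000 0
```

As the post stresses, the apparent exactness of that zero is an artifact of rounding highly uncertain inputs; the honest reading is “roughly break-even.”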
This spreadsheet contains a big table that lists all studies I reviewed, what the studies find, and what I take away from them. A separate tab holds the cost-benefit analysis, which you can modify. In the next three posts, I’ll go into more depth on the research on deterrence, incapacitation, and aftereffects.
Code and data for all replications are here (800 MB, right-click and select “save link as”). The cost-benefit spreadsheet is here.
 Among the five studies—Green and Winik (2010), Loeffler (2013), Nagin and Snodgrass (2013), Mueller-Smith (2015), and Roach and Schanzenbach (2015)—only the last dissents. It is also the one where the quality of the quasi-experiment is least certain.