The Open Philanthropy Blog

Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

The Open Philanthropy Project has ambitions of influencing very large amounts of giving in the future (hundreds of millions of dollars a year or more). To date, we haven’t made nearly enough recommendations to reach this level of giving, and this is not ideal. In a perfect world, we’d be recommending far more giving.

However, our approach is deliberate: we have chosen to prioritize capacity-building (choosing focus areas and hiring and onboarding program staff in order to lay the groundwork for future grantmaking) over near-term grantmaking. This post discusses the reasons we’ve taken this approach so far and outlines our plans for ramping up giving in the future.

Read More

Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

[Image: Mariel boatlift refugees]

As a consultant for the Open Philanthropy Project last year, I reviewed the research on whether immigration reduces employment or earnings for workers in receiving countries. I concluded that the harm to native workers, if any, is small.

Last month the prominent immigration researcher George Borjas posted a challenge to a seminal study in my review. His new paper contends that the Mariel boatlift, which brought some 60,000 Cuban refugees to Miami in 1980, did profoundly affect the labor market there, depressing wages for low-education men (ones with less than a high school education) by 10–30%.

Borjas’s work is especially significant because it seems to upend a study of the boatlift published by David Card 25 years ago, which found little impact of all that immigration on workers in Miami. Interestingly, Borjas, who emphasizes the harm of Cuban immigration, is himself a Cuban émigré.

I probed this dispute, replicating and checking the results in the dueling papers. I ultimately found little cause to change my views. The main reasons:

  • Of the two Census Bureau data sets that Borjas relies on, the one with larger samples shows smaller impacts.
  • According to that data set, wages for women, a group Borjas excludes, rose, if anything, after immigration spikes (especially after a second one in 1994–95).
  • I see no sharp breaks from long-term trends of the sort that could be confidently attributed to the 1980 immigration surge. The Borjas analysis appears correct that wages for low-education Miami men were lower on average in 1981–83 than in 1977–79, and that the drop was larger than in most other US cities. But the data argue more for a steady long-term decline than for sudden drops after immigration surges; a simple numerical sketch after this list illustrates the distinction. The Borjas analysis tends to obscure this distinction by aggregating or smoothing data over several years.
  • The original study by David Card is one of 17 covered in my review, including three others exploiting natural experiments in mass migration. None of the studies is as compelling as a randomized trial, but the overall picture—of at most modest harm from substantial immigration—does not change if the Card study is removed.
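
To make the distinction in the third bullet concrete, here is a minimal numerical sketch in Python. The wage series is made up (it is not the CPS data analyzed in either paper); it simply shows how a steady decline produces a lower 1981–83 average than 1977–79 average even when there is no break from the pre-1980 trend.

```python
# Minimal sketch with made-up numbers (not the CPS data used in the papers):
# a steady downward trend produces a lower 1981-83 average than 1977-79 average
# even with no break from the pre-existing trend after the 1980 boatlift.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1972, 1990)
# Hypothetical average log wages for low-education Miami men, declining steadily.
wages = 2.30 - 0.015 * (years - 1972) + rng.normal(0, 0.01, len(years))

pre = wages[(years >= 1977) & (years <= 1979)].mean()
post = wages[(years >= 1981) & (years <= 1983)].mean()
print(f"1981-83 average minus 1977-79 average: {post - pre:+.3f}")

# Compare the post period to an extrapolation of the pre-1980 trend instead:
slope, intercept = np.polyfit(years[years < 1980], wages[years < 1980], 1)
trend_prediction = np.polyval([slope, intercept],
                              years[(years >= 1981) & (years <= 1983)]).mean()
print(f"1981-83 average minus pre-1980 trend extrapolation: {post - trend_prediction:+.3f}")
```

The first number comes out noticeably negative while the second is close to zero, which is the pattern described above: a decline in period averages, but no clear break from trend.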

Details follow.

Read More

Note: this post aims to help a particular subset of our audience understand the assumptions behind our work on science philanthropy and global catastrophic risks. Throughout, “we” refers to positions taken by the Open Philanthropy Project as an entity rather than to a consensus of all staff.

Two priorities for the Open Philanthropy Project are our work on science philanthropy and global catastrophic risks. These interests are related because—in addition to greatly advancing civilization’s wealth and prosperity—advances in certain areas of science and technology may be key to exacerbating or addressing what we believe are the largest global catastrophic risks. (For detail on the idea that advances in technology could be a driver of such risks, see “‘Natural’ GCRs appear to be less harmful in expectation” in this post.) For example, nuclear engineering created the possibility of nuclear war, but also provided a source of energy that does not depend on fossil fuels, making it a potential tool in the fight against climate change. Similarly, future advances in bioengineering, genetic engineering, geoengineering, computer science (including artificial intelligence), nanotechnology, neuroscience, and robotics could affect the long-term future of humanity in both positive and negative ways.

Therefore, we’ve been considering the possible consequences of advancing the pace of development in various areas of science and technology, in order to develop more informed opinions about which might be especially promising to speed up and which might create additional risks if accelerated. Following Nick Bostrom, we call this topic “differential technological development.” We believe that our views on this topic will inform our priorities in scientific research and, to a lesser extent, in global catastrophic risks. We believe our ability to predict and plan for factors like these is highly limited, and we generally favor a default presumption that economic and technological development is positive. Still, we think it’s worth putting some effort into understanding the interplay between scientific progress and global catastrophic risks, in case any considerations seem strong enough to influence our priorities.

The first question our investigation of differential technological development looked into was the effect of speeding progress toward advanced AI on global catastrophic risk. This post gives our initial take on that question. One idea we sometimes hear is that it would be harmful to speed up the development of artificial intelligence because not enough work has been done to ensure that when very advanced artificial intelligence is created, it will be safe. This problem, it is argued, would be even worse if progress in the field accelerated. However, very advanced artificial intelligence could be a useful tool for overcoming other potential global catastrophic risks. If it comes sooner—and the world manages to avoid the risks that it poses directly—the world will spend less time at risk from these other factors.

Curious about how to compare these two factors, I tried looking at a simple model of the implications of a survey of participants at a 2008 conference on global catastrophic risk organized by the Future of Humanity Institute at Oxford University. I found that speeding up advanced artificial intelligence—according to my simple interpretation of these survey results—could easily result in reduced net exposure to the most extreme global catastrophic risks (e.g., those that could cause human extinction), and that what one believes on this topic is highly sensitive to some very difficult-to-estimate parameters (so that other estimates of those parameters could yield the opposite conclusion). This conclusion seems to be in tension with the view that speeding up artificial intelligence research would increase risk of human extinction on net, so I decided to write up this finding, both to get reactions and to illustrate the general kind of work we’re doing to think through the issue of differential technological development.
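
To illustrate the kind of calculation involved (not the actual model or the survey’s figures), here is a minimal sketch in Python. Every parameter is a made-up assumption: an annual probability of extinction from non-AI catastrophes before advanced AI arrives, a probability that the arrival of advanced AI itself causes extinction, and a penalty to that probability for each year of safety preparation lost. The point is only that the sign of the effect flips with small changes in these hard-to-estimate inputs.

```python
# Minimal sketch of the trade-off discussed above. All parameters are made-up
# assumptions for illustration, not figures from the 2008 survey.

def extinction_risk(years_until_ai, annual_non_ai_risk, ai_arrival_risk):
    """Probability of extinction from either a non-AI catastrophe before
    advanced AI arrives or from the arrival of advanced AI itself."""
    survive_pre_ai = (1.0 - annual_non_ai_risk) ** years_until_ai
    return 1.0 - survive_pre_ai * (1.0 - ai_arrival_risk)

def compare(speedup_years, years_until_ai=40, annual_non_ai_risk=0.002,
            ai_arrival_risk=0.05, risk_added_per_year_lost=0.001):
    """Baseline risk vs. risk when AI arrives `speedup_years` sooner, assuming
    each year of lost preparation adds `risk_added_per_year_lost` to AI risk."""
    baseline = extinction_risk(years_until_ai, annual_non_ai_risk, ai_arrival_risk)
    accelerated = extinction_risk(
        years_until_ai - speedup_years,
        annual_non_ai_risk,
        ai_arrival_risk + risk_added_per_year_lost * speedup_years,
    )
    return baseline, accelerated

if __name__ == "__main__":
    for penalty in (0.0005, 0.001, 0.005):
        base, acc = compare(speedup_years=10, risk_added_per_year_lost=penalty)
        change = "lowers" if acc < base else "raises"
        print(f"penalty {penalty}: a 10-year speedup {change} total risk "
              f"({base:.3f} -> {acc:.3f})")
```

With these made-up numbers, modest safety penalties leave the speedup net-positive (less time exposed to other risks), while a larger penalty reverses the sign—the kind of parameter sensitivity described above.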

Below, I:

  • Describe our simplified model of the consequences of speeding up the development of advanced AI for the risk of human extinction, based on the survey described above.
  • Explain why, in this model, the effect of faster progress on artificial intelligence on the risk of human extinction is very unclear.
  • Describe several of the model’s many limitations, illustrating the challenges involved with this kind of analysis.

We are working on developing a broader understanding of this set of issues, as they apply to the areas of science and technology described above, and as they relate to the global catastrophic risks we focus on.

Read More

Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

This post gives an overall update on progress and plans for the Open Philanthropy Project. Our last update was about six months ago, and the primary goals it laid out were six-month goals.

Read More

Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

We’re excited to announce that Lewis Bollard has accepted our offer to join the Open Philanthropy Project as a Program Officer, leading our work on treatment of animals in industrial agriculture.

Lewis currently works as Policy Advisor & International Liaison to the CEO at The Humane Society of the United States (HSUS). Prior to that, he was a litigation fellow at HSUS, a law student at Yale, and an associate consultant at Bain & Company.

Read More

Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

Earlier this year, we announced Chloe Cockburn as our incoming Program Officer for criminal justice reform. Chloe started her new role at the end of August.

This hire was the top priority we set in our March update on U.S. policy. It represents the first time we’ve hired someone for a senior, cause-specific role. Chloe will be the primary person responsible for recommending $5+ million a year of grants in this space. As such, hiring Chloe is one of the highest-stakes decisions we’ve made yet for the Open Philanthropy Project, certainly higher-stakes than any particular grant to date. For that reason, we are writing up a summary of our thinking (including reservations) and the process we ran for this job search.

We also see this blog post as a major part of the case for future grants we make in criminal justice reform. Part of the goal of this process was to hire a person with context, experience, and relationships that go well beyond what it would be realistic to put in a writeup. We expect that future criminal justice reform grants will be subject to a good deal of critical discussion, and accompanied by writeups; at the same time, for readers who want to fully understand the thinking behind our grants, it is important to note that our bigger-picture bet on Chloe’s judgment will be a major input into each grant recommendation in this area.

Note that Chloe reviewed this post.

Read More

Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

Benjamin Soskis, who has been working for us on our history of philanthropy project, has completed a case study of philanthropy’s impact on the 2010 passage of the Affordable Care Act (ACA).

The case study focuses first on the Atlantic Philanthropies’ funding of Health Care for America Now! (HCAN), as well as on HCAN’s activities and impact. The second part of the study surveys the activities of other funders involved in health care reform, such as the Robert Wood Johnson Foundation, the Kaiser Family Foundation, and the Commonwealth Fund.

The case study concludes that, as a whole, philanthropic spending played a critical, though not necessarily easy to quantify, role in the passage of the ACA. In the following passage, Dr. Soskis quotes HCAN’s Doneg McDonough:

“There’s just no way health reform would have passed without the [philanthropically funded] outside efforts going on. No question about it. Beyond that, it gets a little fuzzy. How much of an impact did [any particular intervention] have and which things actually were critical to making the ACA happen?”

This last statement, with its combination of broadly conceived certitude and localized indeterminacy, epitomizes one of this report’s central findings regarding the claims of philanthropic impact. (Case Study, p. 4)

Dr. Soskis’s study also examines the difficulty of disentangling the impact of any one funder from the impact of philanthropy as a whole. He writes:

Read More

Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

This is the fourth post in a series about geomagnetic storms as a global catastrophic risk. A paper covering the material in this series was recently released.

I devoted the first three posts in this series to describing geomagnetic storms and assessing the odds that a Big One is coming. I concluded that the iconic Carrington superstorm of 1859 was neither as intense nor as overdue for an encore as some prominent analysts have suggested. (I suppose that’s unsurprising: those who say more-alarming things get more attention.) But my analysis is not certain. To paraphrase Churchill, the sun is a riddle, wrapped in a mystery, inside a corona. And great harm would flow from what I cannot rule out: a blackout spanning states and lasting months.

I shift in this post from whether the Big One is coming to what will happen if it does. And here, unfortunately, my facility with statistics does less good, for the top questions are now about power engineering: how grids and high-voltage transformers respond to planetary magnetic concussions.

Read More

Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

Note: this post aims to help a particular subset of our audience understand the assumptions behind our work on global catastrophic risks.

One focus area for the Open Philanthropy Project is reducing global catastrophic risks (such as from pandemics, potential risks from advanced artificial intelligence, geoengineering, and geomagnetic storms). A major reason that the Open Philanthropy Project is interested in global catastrophic risks is that a sufficiently severe catastrophe may risk changing the long-term trajectory of civilization in an unfavorable direction (potentially including human extinction if a catastrophe is particularly severe and our response is inadequate).

One possible perspective on such risks—which I associate with the Future of Humanity Institute, the Machine Intelligence Research Institute, and some people in the effective altruism community who are interested in the very long-term future—is that (a) the moral value of the very long-term future overwhelms other moral considerations, and (b) given any catastrophe short of an outright extinction event, humanity would eventually recover, leaving its long-term prospects relatively unchanged. On this view, seeking to prevent potential outright extinction events has overwhelmingly greater significance for humanity’s ultimate future than seeking to prevent less severe global catastrophes.

Read More

Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

This is the third post in a series about geomagnetic storms as a global catastrophic risk. A paper covering the material in this series was just released.

My last post examined the strength of certain major geomagnetic storms that occurred before the advent of the modern electrical grid, as well as a solar event in 2012 that could have caused a major storm on Earth if it had happened a few weeks earlier or later. I concluded that the observed worst cases over the last 150+ years are probably not more than twice as intense as the major storms that have happened since modern grids were built, notably in 1982, 1989, and 2003.

But that analysis was in a sense informal. Using a branch of statistics called Extreme Value Theory (EVT), we can look more systematically at what the historical record tells us about the future. The method is not magic—it cannot reliably divine the scale of a 1000-year storm from 10 years of data—but through the familiar language of probability and confidence intervals it can discipline extrapolations with appropriate doses of uncertainty. This post brings EVT to geomagnetic storms.
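
For readers unfamiliar with EVT, here is a minimal peaks-over-threshold sketch in Python using SciPy’s generalized Pareto distribution. The storm intensities, observation window, and threshold are placeholder values made up for illustration, not the geomagnetic record analyzed in the paper, and a real analysis would also attach confidence intervals to the fitted parameters and return levels.

```python
# Minimal peaks-over-threshold sketch of the EVT approach described above.
# The data below are made-up placeholder intensities, not the real geomagnetic record.
import numpy as np
from scipy.stats import genpareto

years_of_data = 57  # assumed length of the observational record
storm_intensities = np.array([120, 140, 155, 170, 180, 195, 210, 230, 250,
                              270, 290, 310, 330, 360, 400, 430, 470, 589])

threshold = 150.0
exceedances = storm_intensities[storm_intensities > threshold] - threshold
exceedance_rate = len(exceedances) / years_of_data  # exceedances per year

# Fit a generalized Pareto distribution to the exceedances, with location fixed
# at zero as is standard for peaks-over-threshold modeling.
shape, _, scale = genpareto.fit(exceedances, floc=0)

def return_level(return_period_years):
    """Intensity exceeded on average once per `return_period_years`."""
    expected_exceedances = exceedance_rate * return_period_years
    if abs(shape) > 1e-9:
        return threshold + (scale / shape) * (expected_exceedances ** shape - 1.0)
    return threshold + scale * np.log(expected_exceedances)

for period in (50, 100, 500):
    print(f"{period}-year return level: {return_level(period):.0f} (input units)")
```

The return level here is the standard peaks-over-threshold formula, u + (σ/ξ)[(λm)^ξ − 1], where λ is the annual exceedance rate; the fitted shape parameter ξ governs how heavy the tail is and thus how fast return levels grow with the return period—and how wide the accompanying uncertainty is.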

Read More
