The Open Philanthropy Blog

Note: this post aims to help a particular subset of our audience understand the assumptions behind our work on science philanthropy and global catastrophic risks. Throughout, “we” refers to positions taken by the Open Philanthropy Project as an entity rather than to a consensus of all staff.

Two priorities for the Open Philanthropy Project are our work on science philanthropy and global catastrophic risks. These interests are related because—in addition to greatly advancing civilization’s wealth and prosperity—advances in certain areas of science and technology may be key to exacerbating or addressing what we believe are the largest global catastrophic risks. (For detail on the idea that advances in technology could be a driver, see “ ‘Natural’ GCRs appear to be less harmful in expectation” in this post.) For example, nuclear engineering created the possibility of nuclear war, but also provided a source of energy that does not depend on fossil fuels, making it a potential tool in the fight against climate change. Similarly, future advances in bioengineering, genetic engineering, geoengineering, computer science (including artificial intelligence), nanotechnology, neuroscience, and robotics could have the potential to affect the long-term future of humanity in both positive and negative ways.

Therefore, we’ve been considering the possible consequences of advancing the pace of development in various individual areas of science and technology, in order to form more informed opinions about which might be especially promising to speed up and which might create additional risks if accelerated. Following Nick Bostrom, we call this topic “differential technological development.” We believe that our views on this topic will inform our priorities in scientific research and, to a lesser extent, in global catastrophic risks. Our ability to predict and plan for factors such as these is highly limited, and we generally favor a default presumption that economic and technological development is positive. Still, we think it’s worth putting some effort into understanding the interplay between scientific progress and global catastrophic risks, in case any considerations seem strong enough to influence our priorities.

The first question our investigation of differential technological development looked into was the effect of speeding progress toward advanced AI on global catastrophic risk. This post gives our initial take on that question. One idea we sometimes hear is that it would be harmful to speed up the development of artificial intelligence because not enough work has been done to ensure that when very advanced artificial intelligence is created, it will be safe. This problem, it is argued, would be even worse if progress in the field accelerated. However, very advanced artificial intelligence could be a useful tool for overcoming other potential global catastrophic risks. If it comes sooner—and the world manages to avoid the risks that it poses directly—the world will spend less time at risk from these other factors.

Curious about how to compare these two factors, I tried looking at a simple model of the implications of a survey of participants at a 2008 conference on global catastrophic risk organized by the Future of Humanity Institute at Oxford University. I found that speeding up advanced artificial intelligence—according to my simple interpretation of these survey results—could easily result in reduced net exposure to the most extreme global catastrophic risks (e.g., those that could cause human extinction), and that what one believes on this topic is highly sensitive to some very difficult-to-estimate parameters (so that other estimates of those parameters could yield the opposite conclusion). This conclusion seems to be in tension with the view that speeding up artificial intelligence research would increase risk of human extinction on net, so I decided to write up this finding, both to get reactions and to illustrate the general kind of work we’re doing to think through the issue of differential technological development.

Below, I:

  • Describe our simplified model of the consequences of speeding up the development of advanced AI on the risk of human extinction using a survey of participants at a 2008 conference on global catastrophic risk organized by the Future of Humanity Institute at Oxford University.
  • Explain why, in this model, the effect of faster progress on artificial intelligence on the risk of human extinction is very unclear.
  • Describe several of the model’s many limitations, illustrating the challenges involved with this kind of analysis.
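The trade-off at the heart of this analysis can be sketched with a toy calculation. To be clear, the parameter values below are entirely made up for illustration; they are not the survey's estimates, and the function is a deliberately crude stand-in for the model described in the post:

```python
# Toy model: net extinction risk with and without faster AI progress.
# All parameter values are hypothetical, chosen only to illustrate the trade-off.

def net_risk(p_ai, annual_other_risk, years_until_ai):
    """Risk from other catastrophes accumulating until advanced AI arrives,
    plus the direct risk from AI itself (assuming, in this toy model, that
    AI helps neutralize the other risks once it arrives)."""
    p_other = 1 - (1 - annual_other_risk) ** years_until_ai
    return p_other + (1 - p_other) * p_ai

# Speeding up AI here raises the direct AI risk (less time for safety work)
# but shortens the window of exposure to other risks.
baseline = net_risk(p_ai=0.05, annual_other_risk=0.001, years_until_ai=80)
sped_up = net_risk(p_ai=0.07, annual_other_risk=0.001, years_until_ai=60)
print(baseline, sped_up)
```

With these made-up numbers the two scenarios come out nearly even, which mirrors the post's point: the conclusion is highly sensitive to difficult-to-estimate parameters, and small changes to them flip which effect dominates.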

We are working on developing a broader understanding of this set of issues, as they apply to the areas of science and technology described above, and as they relate to the global catastrophic risks we focus on.

Read More

Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

This post gives an overall update on progress and plans for the Open Philanthropy Project. Our last update was about six months ago, and the primary goals it laid out were six-month goals.

Read More


We’re excited to announce that Lewis Bollard has accepted our offer to join the Open Philanthropy Project as a Program Officer, leading our work on treatment of animals in industrial agriculture.

Lewis currently works as Policy Advisor & International Liaison to the CEO at The Humane Society of the United States (HSUS). Prior to that, he was a litigation fellow at HSUS, a law student at Yale, and an associate consultant at Bain & Company.

Read More


Earlier this year, we announced Chloe Cockburn as our incoming Program Officer for criminal justice reform. Chloe started her new role at the end of August.

This hire was the top priority we set in our March update on U.S. policy. It represents the first time we’ve hired someone for a senior, cause-specific role. Chloe will be the primary person responsible for recommending $5+ million a year of grants in this space. As such, hiring Chloe is one of the highest-stakes decisions we’ve made yet for the Open Philanthropy Project, certainly higher-stakes than any particular grant to date. Accordingly, we are writing up a summary of our thinking (including reservations) and of the process we ran for this job search.

We also see this blog post as a major part of the case for future grants we make in criminal justice reform. Part of the goal of this process was to hire a person with context, experience, and relationships that go well beyond what it would be realistic to put in a writeup. We expect that future criminal justice reform grants will be subject to a good deal of critical discussion, and accompanied by writeups; at the same time, for readers who want to fully understand the thinking behind our grants, it is important to note that our bigger-picture bet on Chloe’s judgment will be a major input into each grant recommendation in this area.

Note that Chloe reviewed this post.

Table of contents:

Read More


Benjamin Soskis, who has been working for us on our history of philanthropy project, has completed a case study of philanthropy’s impact on the 2010 passage of the Affordable Care Act (ACA).

The case study focuses first on the Atlantic Philanthropies’ funding of Health Care for America Now! (HCAN), as well as on HCAN’s activities and impact. The second part of the study surveys the activities of other funders involved in health care reform, such as the Robert Wood Johnson Foundation, the Kaiser Family Foundation, and the Commonwealth Fund.

The case study concludes that, as a whole, philanthropic spending had a critical, though not necessarily easily quantifiable, role in the passage of the ACA. In the following passage, Dr. Soskis quotes HCAN’s Doneg McDonough:

“There’s just no way health reform would have passed without the [philanthropically funded] outside efforts going on. No question about it. Beyond that, it gets a little fuzzy. How much of an impact did [any particular intervention] have and which things actually were critical to making the ACA happen?”

This last statement, with its combination of broadly conceived certitude and localized indeterminacy, epitomizes one of this report’s central findings regarding the claims of philanthropic impact. (Case Study, Pg. 4)

Dr. Soskis’s study also examines the difficulty of disentangling the impact of any one funder from the impact of philanthropy as a whole. He writes:

Read More


This is the fourth post in a series about geomagnetic storms as a global catastrophic risk. A paper covering the material in this series was recently released.

I devoted the first three posts in this series to describing geomagnetic storms and assessing the odds that a Big One is coming. I concluded that the iconic Carrington superstorm of 1859 was neither as intense nor as overdue for an encore as some prominent analysts have suggested. (I suppose that’s unsurprising: those who say more-alarming things get more attention.) But my analysis is not certain. To paraphrase Churchill, the sun is a riddle, wrapped in a mystery, inside a corona. And great harm would flow from what I cannot rule out: a blackout spanning states and lasting months.

I shift in this post from whether the Big One is coming to what will happen if it does. And here, unfortunately, my facility with statistics does less good, for the top questions are now about power engineering: how grids and high-voltage transformers respond to planetary magnetic concussions.

Read More


Note: this post aims to help a particular subset of our audience understand the assumptions behind our work on global catastrophic risks.

One focus area for the Open Philanthropy Project is reducing global catastrophic risks (such as from pandemics, potential risks from advanced artificial intelligence, geoengineering, and geomagnetic storms). A major reason that the Open Philanthropy Project is interested in global catastrophic risks is that a sufficiently severe catastrophe may risk changing the long-term trajectory of civilization in an unfavorable direction (potentially including human extinction if a catastrophe is particularly severe and our response is inadequate).

One possible perspective on such risks—which I associate with the Future of Humanity Institute, the Machine Intelligence Research Institute, and some people in the effective altruism community who are interested in the very long-term future—is that (a) the moral value of the very long-term future overwhelms other moral considerations; (b) given any catastrophe short of an outright extinction event, humanity would eventually recover, leaving humanity’s eventual long-term prospects relatively unchanged. On this view, seeking to prevent potential outright extinction events has overwhelmingly greater significance for humanity’s ultimate future than seeking to prevent less severe global catastrophes.

Read More


This is the third post in a series about geomagnetic storms as a global catastrophic risk. A paper covering the material in this series was just released.

My last post examined the strength of certain major geomagnetic storms that occurred before the advent of the modern electrical grid, as well as a solar event in 2012 that could have caused a major storm on Earth if it had happened a few weeks earlier or later. I concluded that the observed worst cases over the last 150+ years are probably not more than twice as intense as the major storms that have happened since modern grids were built, notably in 1982, 1989, and 2003.

But that analysis was in a sense informal. Using a branch of statistics called Extreme Value Theory (EVT), we can look more systematically at what the historical record tells us about the future. The method is not magic—it cannot reliably divine the scale of a 1000-year storm from 10 years of data—but through the familiar language of probability and confidence intervals it can discipline extrapolations with appropriate doses of uncertainty. This post brings EVT to geomagnetic storms.
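The standard EVT recipe alluded to here is "peaks over threshold": keep only observations above a high threshold, fit a generalized Pareto distribution to the exceedances, and use the fitted tail to estimate return levels. The sketch below uses synthetic data and `scipy.stats.genpareto`, not the paper's data or methods, purely to show the mechanics:

```python
# Illustrative peaks-over-threshold EVT fit on synthetic data
# (hypothetical numbers, not the geomagnetic series used in the paper).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic heavy-tailed "storm intensity" series standing in for a real index.
intensities = rng.pareto(3.0, size=500) * 100

# Keep only exceedances above the 90th-percentile threshold.
threshold = np.percentile(intensities, 90)
exceedances = intensities[intensities > threshold] - threshold

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0).
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)

# Intensity exceeded with probability 1/1000 per observation:
# P(exceed threshold) * P(exceedance beyond level | over threshold) = 1/1000.
p_exceed = (intensities > threshold).mean()
return_level = threshold + stats.genpareto.ppf(
    1 - (1 / 1000) / p_exceed, shape, loc=0, scale=scale
)
print(f"shape={shape:.2f}, 1-in-1000 level={return_level:.0f}")
```

The fitted shape parameter governs how fast the tail decays, which is exactly where the "appropriate doses of uncertainty" enter: with only decades of data, its confidence interval is wide, and the estimated return level for a rare superstorm is correspondingly uncertain.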

Read More


This is the second post in a series about geomagnetic storms as a global catastrophic risk. A paper covering the material in this series was just released.

My last post raised the specter of a geomagnetic storm so strong it would black out electric power across continent-scale regions for months or years, triggering an economic and humanitarian disaster.

How likely is that? One relevant source of knowledge is the historical record of geomagnetic disturbances, which is what this post considers. In approaching the geomagnetic storm issue, I had read some alarming statements to the effect that global society is overdue for the geomagnetic “Big One.” So I was surprised to find reassurance in the past. In my view, the most worrying extrapolations from the historical record do not properly represent it.

I hasten to emphasize that this historical analysis is only part of the overall geomagnetic storm risk assessment. Many uncertainties should leave us uneasy, from our incomplete understanding of the sun to the historically novel reliance of today’s grid operators on satellites that are themselves vulnerable to space weather. And since the scientific record stretches back only 30–150 years (depending on the indicator) and big storms happen about once a decade, the sample is too small to support sure extrapolations of extremes.

Read More


Image from NASA via Wikipedia

This is the first post in a series about geomagnetic storms as a global catastrophic risk. A paper covering the material in this series was recently released.

The Open Philanthropy Project has included geomagnetic storms in its list of global catastrophic risks of potential focus.

To be honest, I hadn’t heard of them either. But when I was consulting for GiveWell last fall, program officer Howie Lempel asked me to investigate the risks they pose. (Now I’m an employee of GiveWell.)

Read More
