Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

This post gives an overall update on progress and plans for the Open Philanthropy Project. Our last update was about six months ago, and the primary goals it laid out were six-month goals.

Summary:

The overall theme is that we are putting most of our effort into capacity building (recruiting, trial hires, onboarding new hires). This is in contrast to six months ago, when most of our effort went into selecting focus areas. Six months from now, we hope to be putting most of our effort into recommending grants and putting out public content. (Specifically, we hope that our efforts within the “U.S. policy” and “global catastrophic risks” categories will fit this description. We expect it to take longer to choose focus areas within scientific research.)

U.S. policy

Our previous update stated:

  • Our new goal is to be in the late stages of making at least one “big bet” – a major grant ($5+ million) or full-time hire – in the next six months. We think there is a moderate likelihood that we will hit this goal; if we do not, we will narrow our focus to a smaller number of causes in order to raise our odds.
  • Our highest priority is to make a full-time hire on criminal justice reform, factory farming (pending a last bit of cause investigation, focused on the prospects for research on meat alternatives), or macroeconomic policy. Our second-highest priority is to further explore immigration policy and land use reform, with an eye to either finding more giving opportunities (hopefully including at least one major one) or to developing a full-time job description. A more extensive summary of our priorities is available as a Google sheet.

Since then:

Over the next six months, our top priority will be working with Chloe and Lewis to get in sync about goals and plans for those fields. Over time, we hope that the new program officers will lead this work with less involvement from us, but we believe it is important to invest early on in understanding each other’s thinking.

We expect that Alexander Berger, who leads our work on U.S. policy, will also have substantial time to pursue giving opportunities in causes that we aren’t currently expecting to make full-time hires for - particularly immigration policy and land use reform.

We expect to continue working on macroeconomic stabilization policy in some way, but we aren’t yet sure whether we will make a full-time hire for this cause. If we do not, it will be one of Alexander’s top priorities along with the two causes in the previous paragraph.

As a much lower priority, we will continue to conduct investigations on other causes.

We have updated our spreadsheet summary of our priority causes. We also provide a version that highlights the cells that have changed since our last public spreadsheet. In brief, we expect to focus primarily on immigration policy, land use reform, and macroeconomic stabilization policy in addition to the two causes (criminal justice reform and factory farming) where we will be working with full-time hires. We may investigate a couple of particular giving opportunities in foreign aid policy and soil lead reduction; we are unlikely to work on other causes in the next six months beyond grant maintenance and general investigation.

Global catastrophic risks

Our previous update stated:

  • Our new goal is to be in the late stages of making at least one “big bet” – a major grant ($5+ million) or full-time hire – in the next six months. We think there is a moderate likelihood that we will hit this goal; if we do not, we will narrow our focus to a smaller number of causes in order to raise our odds.
  • Our highest priority is to make a full-time hire for working on biosecurity. As a second priority, we are spending significant time on various aspects of geoengineering, geomagnetic storms, potential risks from advanced artificial intelligence, and some issues that cut across different global catastrophic risks. A more extensive summary of our priorities and reasoning is available as a Google sheet.

Progress has been in line with our goals in some ways and not in others.

The main update has been regarding the cause of potential risks from advanced artificial intelligence. At the time of our last update, we hadn’t determined how to prioritize this cause, and it’s worth reviewing the basic progression of our thinking on the matter:

  • Since the beginning of our work on global catastrophic risks, we believed that this topic was worth looking into due to the high potential stakes and our impression that it was getting little attention from philanthropists. We were already broadly familiar with the arguments that this issue is important, and we initially focused on trying to determine why these arguments hadn’t seemed to get much engagement from mainstream computer scientists.
  • However, we paused our investigations (other than keeping up on major new materials and some of the critical response to them) when we learned about the Future of Life Institute’s January conference specifically on this topic, which Howie Lempel and Jacob Steinhardt attended as our representatives.
  • Our last update took place shortly following the conference. At that point, we had become convinced that this cause was highly important and worthy of investment. (This despite the fact that we remained uncertain about the details of some key people’s views - more here.) Our remaining concern was crowdedness: we wrote, “It remains unclear to us how to think about [the cause’s] ‘crowdedness’ [in light of Elon Musk’s $10 million gift], and we plan to coordinate closely with the Future of Life Institute to follow what gets funded and what gaps remain.”
  • Since then, we became convinced that there was a strong case for providing more funding to the Future of Life Institute’s research grant program, as discussed in our writeup on this program.
  • We decided to prioritize investigating the possibility of helping to support the Future of Life Institute research grants program. We ended up spending a large amount of time investigating this opportunity. It was difficult to tell in advance what the size of our recommendation (if any) would be; we ended up recommending a grant of ~$1.2 million. In some sense, this grant was a “big bet” given that we saw it as a major opportunity and invested significant time in it, although the grant size we ended up with was below the working definition of a “big bet” above.
  • Regardless of how one maps the grant to our goal, we see our progress as suboptimal in hindsight. We feel we could have invested less time in investigating this grant (while still ultimately recommending it) and instead made more progress on our continuing search for a full-time biosecurity hire.
  • We now have the impression that the field of research on potential risks of advanced artificial intelligence is changing rapidly - in particular, the amount of interest and the number of projects are growing - and if we do prioritize this area it may call for a full-time specialist. (This is a change from our previous position, where we saw the space as quite thin and unlikely to be a fit for a full-time position.) Accordingly, we have begun working with a contractor (who could become a Program Officer in the future) to do a more in-depth investigation of the different possible activities in this space.

Other progress in this category:

Over the next six months, our top priorities will be our search for a full-time biosecurity hire and the above-mentioned work on investigating the field around potential risks from advanced artificial intelligence.

We have updated our spreadsheet summary of our priority causes. We also provide a version that highlights the cells that have changed since our last public spreadsheet.

Scientific research

Since our last update:

  • We have been working with Lily Kim and Melanie Smith on neglected goal investigations, and hope to publish our first writeup (on animal product alternatives) in the next few months. A major challenge of work in this category is that we haven’t identified an effective way to do “shallow” investigations; investigating any given scientific field takes a large amount of work from both generalist staff and scientific advisors.
  • Lily was previously working for us ~5 hours per week, and is now working ~20 hours per week.
  • We have created a posting for a full-time scientific advisor and put significant effort into the search, but have not made an offer yet.
  • We have lowered the priority of the other work in this category. Unlike with the above two categories, we have not yet set focus areas for scientific research, and we believe that doing so will require a large amount of investigation - preferably with high involvement from scientific advisors. We see our top priority as building scientific advisory capacity, and don’t expect to make substantial headway on setting focus areas until we have much more of it.
  • We have started an exploration of social sciences research. We are doing quick skims of the literature on questions that seem to have high potential social value, in order to identify potential important gaps in the literature. We might address such gaps by directly funding research or by working on systemic issues that affect many literatures. This work is quite preliminary, and so far we have been focused on developing a process for surveying the literature on a given question.

Public content

We are hoping to launch a website for the Open Philanthropy Project by the end of the year. We believe that the new website will make it much easier to understand our work and current priorities. Creating it has been a significant amount of work, and it remains difficult to forecast exactly when the website will be ready to launch.

We previously wrote:

We have recently been prioritizing investigation over public writeups, and our public content is running well behind our private investigations. We are experimenting with different processes for writing up completed investigations – in particular, trying to assign more of the work to more junior staff. If we could do this, it would make a major difference to our capacity, since senior staff already have a substantial challenge keeping up with all of our priority causes. By the end of 2015, we hope that our public content will be no further behind our private investigations than it is at the moment.

We have made some progress on this front, particularly on high-priority causes. We have published new writeups on land use reform, potential risks from advanced artificial intelligence, health care policy, and potential risks from atomically precise manufacturing as well as an updated and more in-depth writeup on nuclear security; the first three are particularly high-priority causes. We also have several other writeups in progress, including a more in-depth writeup on biosecurity.

We’re still not where we want to be on public content: we have many writeups still in progress, and our content is not well-organized (largely because it is on the GiveWell website rather than a separate Open Philanthropy Project website). By the end of the year, we hope that the situation will be much better due to launching our new website and publishing most of our still-pending writeups, but we expect that we will still have significant progress to make at that time.
