The Open Philanthropy Blog
September 20, 2016
Our goal is to give blog readers and followers of the Open Philanthropy Project an opportunity to publicly raise comments or questions about the Open Philanthropy Project or related topics (in the comments section below). As always, you’re also welcome to email us at firstname.lastname@example.org if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.
You can see our previous open thread here.
September 16, 2016
One of our core values is sharing what we’re learning. We envision a world in which philanthropists increasingly discuss their research, reasoning, results and mistakes publicly to help each other learn more quickly and serve others more effectively.
However, we think there has been confusion - including in our own heads - between the above idea and a related one: the idea that philanthropists should share and explain their thinking near-comprehensively so that the reasoning behind every decision can be understood and critiqued.
Such near-comprehensive information sharing is an appropriate goal for GiveWell, which exists primarily to make recommendations to the public, and emphasizes the transparency of these recommendations as a key reason to follow them. (See GiveWell’s approach to transparency.)
However, we now feel it is not an appropriate goal for the Open Philanthropy Project, whose mission is to give as effectively as we can and share our findings openly so that anyone can build on our work. For our mission, it seems more appropriate to aim for extensive information sharing (well in excess of what other funders currently do) but not to aim for near-comprehensiveness.
This distinction has become more salient to us as our picture of the costs and benefits of information sharing has evolved. This post lays out that evolution, and some changes we plan to make going forward. In brief:
September 6, 2016
Philanthropy - especially hits-based philanthropy - is driven by a large number of judgment calls. At the Open Philanthropy Project, we’ve explicitly designed our process to put major weight on the views of individual leaders and program officers in decisions about the strategies we pursue, causes we prioritize, and grants we ultimately make. As such, we think it’s helpful for individual staff members to discuss major ways in which our personal thinking has changed, not only about particular causes and grants, but also about our background worldviews.
I recently wrote up a relatively detailed discussion of how my personal thinking has changed about three interrelated topics: (1) the importance of potential risks from advanced artificial intelligence, particularly the value alignment problem; (2) the potential of many of the ideas and people associated with the effective altruism community; (3) the properties to look for when assessing an idea or intervention, and in particular how much weight to put on metrics and “feedback loops” compared to other properties. My views on these subjects have changed fairly dramatically over the past several years, contributing to a significant shift in how we approach them as an organization.
I’ve posted my full writeup as a personal Google doc. A summary follows.
July 15, 2016
In the last few months, we’ve welcomed several new team members:
- Jaime Yassif has joined as a Program Officer, leading our work on Biosecurity and Pandemic Preparedness. Previously, Jaime was a Science & Technology Policy Advisor at the U.S. Department of Defense, where she focused on oversight of the Cooperative Threat Reduction Program. During this period, she also worked on the Global Health Security Agenda at the Department of Health and Human Services.
- Chris Somerville has joined as a Scientific Advisor, and will be leading much of our work on scientific research. Chris has been a professor at UC Berkeley, Stanford University and Michigan State University.
- Chris will be working with two other new full-time Scientific Advisors: Heather Youngs (joining in August), formerly director of the Bakar Fellows Program for faculty entrepreneurs at UC Berkeley, and Daniel Martin Alarcon, who recently got his PhD in Biological Engineering under Professor Ed Boyden.
These hires conclude long-running searches, and we’re very excited to have Jaime, Chris, Heather and Daniel on board.
June 23, 2016
Earlier this week, Google Research (in collaboration with scientists at OpenAI, Stanford and Berkeley) released Concrete Problems in AI Safety, which outlines five technical problems related to accident risk in AI systems. Four of the authors are friends and technical advisors of the Open Philanthropy Project.
We’re very excited about this paper. We highly recommend it to anyone looking to get a sense for the tractability of reducing potential risks from advanced AI (a cause we’ve previously written about) - as well as for what sorts of research we would be most excited to fund in this cause.
June 15, 2016
Ben Soskis, a consultant who has been working with us as part of our History of Philanthropy project, recently finished a case study on the founding and growth of the Center for Global Development (CGD), a think tank that conducts research on and promotes improvements to rich-world policies that affect the global poor.
We are very impressed with CGD and its accomplishments. Earlier this year, we announced a $3 million grant to CGD. Our writeup includes a review of CGD’s track record, concluding that “it seems reasonable to us to estimate that CGD has produced at least 10 times as much value for the global poor as it has spent, though we consider this more of a rough lower bound than an accurate estimate.”
CGD appears to be an example of a funder-initiated startup: philanthropist Ed Scott was the driving force behind the original high-level concept, and he recruited Nancy Birdsall to be CGD’s leader.
There are a number of similarities between this case study and our recent case study on the Center on Budget and Policy Priorities (CBPP). In particular, both cases involved a funder exploring an extensive personal network, finding a strong leader for the organization they envisioned, committing flexible support up-front to that leader, and then trusting the leader with a lot of autonomy in creating the organization (while retaining a high level of personal involvement throughout the early years of the organization). However, there are a couple of major points of contrast: the crowdedness of the landscape and the amount of money committed.
Amount committed: in the case of CBPP, the founding funder “committed $175,000 in the first year and $150,000 in the second year, and asked CBPP to find other supporters so it could reduce its support after that point.” In the case of CGD, the founding funder committed $25 million, to be disbursed over five years, in 2001.
May 20, 2016
Suzanne Kahn, a consultant who has been working with us as part of our History of Philanthropy project, recently finished a case study on the founding and growth of the Center on Budget and Policy Priorities (CBPP), a well-regarded D.C. think tank that focuses on tax and budget policy with an aim of improving outcomes for low-income people.
We were interested in learning more about the history and founding of CBPP because:
- It appeared to be a successful institution that funders played a central role in helping to establish.
- Relative to many other think tanks, CBPP appears to have an unusually clear and concrete track record.
- We’ve supported CBPP’s Full Employment Project.
The report’s account of CBPP’s history and founding mirrors, in many ways, that of the Center for Global Development (the subject of another upcoming case study). In particular:
May 6, 2016
We’re planning to make potential risks from artificial intelligence a major priority this year. We feel this cause presents an outstanding philanthropic opportunity — with extremely high importance, high neglectedness, and reasonable tractability (our three criteria for causes) — for someone in our position. We believe that the faster we can get fully up to speed on key issues and explore the opportunities we currently see, the faster we can lay the groundwork for informed, effective giving both this year and in the future.
With all of this in mind, we’re placing a larger “bet” on this cause, this year, than we are placing even on other focus areas — not necessarily in terms of funding (we aren’t sure we’ll identify very large funding opportunities this year, and are more focused on laying the groundwork for future years), but in terms of senior staff time, which at this point is a scarcer resource for us. Consistent with our philosophy of hits-based giving, we are doing this not because we have confidence in how the future will play out and how we can impact it, but because we see a risk worth taking. In about a year, we’ll formally review our progress and reconsider how senior staff time is allocated.
This post will first discuss why I consider this cause to be an outstanding philanthropic opportunity. (My views are fairly representative, but not perfectly representative, of those of other staff working on this cause.) It will then give a broad outline of our planned activities for the coming year, some of the key principles we hope to follow in this work, and some of the risks and reservations we have about prioritizing this cause as highly as we are.
- It seems to me that artificial intelligence is currently on a very short list of the most dynamic, unpredictable, and potentially world-changing areas of science. I believe there’s a nontrivial probability that transformative AI will be developed within the next 20 years, with enormous global consequences.
- By and large, I expect the consequences of this progress — whether or not transformative AI is developed soon — to be positive. However, I also perceive risks. Transformative AI could be a very powerful technology, with potentially globally catastrophic consequences if it is misused or if there is a major accident involving it. Because of this, I see this cause as having extremely high importance (one of our key criteria), even while accounting for substantial uncertainty about the likelihood of developing transformative AI in the coming decades and about the size of the risks. I discuss the nature of potential risks below; note that I think they do not apply to today’s AI systems.
- I consider this cause to be highly neglected in important respects. There is a substantial and growing field around artificial intelligence and machine learning research, but most of it is not focused on reducing potential risks. We’ve put substantial work into trying to ensure that we have a thorough landscape of the researchers, funders, and key institutions whose work is relevant to potential risks from advanced AI. We believe that the amount of work being done is well short of what it productively could be (despite recent media attention); that philanthropy could be helpful; and that the activities we’re considering wouldn’t be redundant with those of other funders.
- I believe that there is useful work to be done today in order to mitigate future potential risks. In particular, (a) I think there are important technical problems that can be worked on today, that could prove relevant to reducing accident risks; (b) I preliminarily feel that there is also considerable scope for analysis of potential strategic and policy considerations.
- More broadly, the Open Philanthropy Project may be able to help support an increase in the number of people — particularly people with strong relevant technical backgrounds — thinking through how to reduce potential risks, which could be important in the future even if the work done in the short term does not prove essential. I believe that one of the things philanthropy is best-positioned to do is provide steady, long-term support as fields and institutions grow.
- I consider this a challenging cause. I think it would be easy to do harm while trying to do good. For example, trying to raise the profile of potential risks could contribute (and, I believe, has contributed to some degree) to non-nuanced or inaccurate portrayals of risk in the media, which in turn could raise the risks of premature and/or counterproductive regulation. I consider the Open Philanthropy Project relatively well-positioned to work in this cause while being attentive to pitfalls, and to deeply integrate people with strong technical expertise into our work.
- I see much room for debate in the decision to prioritize this cause as highly as we are. However, I think it is important that a philanthropist in our position be willing to take major risks, and prioritizing this cause is a risk that I see as very worth taking.
My views on this cause have evolved considerably over time. I will discuss the evolution of my thinking in detail in a future post, but this post focuses on the case for prioritizing this cause today.
May 6, 2016
We’re planning to make potential risks from advanced artificial intelligence a major priority in 2016. A future post will discuss why; this post gives some background.
- I first give our definition of “transformative artificial intelligence,” our term for a type of potential advanced artificial intelligence we find particularly relevant for our purposes. Roughly and conceptually, transformative AI refers to potential future AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution. I also provide (below) a more detailed definition. The concept of “transformative AI” has some overlap with concepts put forth by others, such as “superintelligence” and “artificial general intelligence.” However, “transformative AI” is intended to be a more inclusive term, leaving open the possibility of AI systems that count as “transformative” despite lacking many abilities humans have.
- I then discuss the question of whether, and when, we might expect transformative AI to be developed. This question has many properties (long timelines, relatively vague concepts, lack of detailed public analysis) I associate with developments that are nearly impossible to forecast, and I don’t think it is possible to make high-certainty forecasts on the matter. With that said, I am comfortable saying that I think there is a nontrivial likelihood (at least 10% with moderate robustness, and at least 1% with high robustness) of transformative AI within the next 20 years. I can’t feasibly share all of the information that goes into this view, but I try to outline the general process I have followed to reach it.
- Finally, I briefly discuss whether there are other potential future developments that seem to have similar potential for impact on similar timescales to transformative AI, in order to put our interest in AI in context.
The ideas in this post overlap with some arguments made by others, but I think it is important to lay out the specific views on these issues that I endorse. Note that this post is confined in scope to the above topics; it does not, for example, discuss potential risks associated with AI or potential measures for reducing them. I will discuss the latter topics more in the future.
April 29, 2016
This post compares our progress with the goals we set forth a year ago, and lays out our plans for the coming year.
- Our 2015 goals revolved mostly around building our staff capacity, and particularly around hiring. Broadly speaking, we mostly accomplished our goals, though we significantly scaled back our goals for scientific research at mid-year.
- Our team has roughly doubled in size compared to a year ago. We’re now in a much better position to recommend a significant amount of grantmaking. We also feel much better positioned to identify outstanding causes.
- This year, we have a general goal of focusing on making grants in the most outstanding causes we’ve found. This is a departure from past years’ goals, which revolved around building knowledge and staff capacity. We expect to prioritize building knowledge and staff capacity again in the future, but we think this is a good year to focus on increasing our grantmaking. We are currently bottlenecked in terms of management capacity, and we believe that focusing on grantmaking will likely lead to a lot of learning that will inform future hiring and capacity building.
- Potential risks from advanced artificial intelligence will be a major priority for 2016. Not only will Daniel Dewey be working on this cause full-time, but Nick Beckstead and I will both be putting significant time into it as well. Some other staff will be contributing smaller amounts of time as appropriate.
- Other major focus areas where we expect significant grantmaking include criminal justice reform, farm animal welfare, and biosecurity. We expect to recommend at least $10 million in grants in each of these areas.
- We have a variety of other goals, including completing the separation of the Open Philanthropy Project as an independent organization from GiveWell, with its own employees and financials, though some individuals will continue to do work for both organizations.