The Open Philanthropy Blog
June 23, 2016
Earlier this week, Google Research (in collaboration with scientists at OpenAI, Stanford and Berkeley) released Concrete Problems in AI Safety, which outlines five technical problems related to accident risk in AI systems. Four of the authors are friends and technical advisors of the Open Philanthropy Project.
We’re very excited about this paper. We highly recommend it to anyone looking to get a sense for the tractability of reducing potential risks from advanced AI (a cause we’ve previously written about), as well as for what sorts of research we would be most excited to fund in this cause.
June 15, 2016
Ben Soskis, a consultant who has been working with us as part of our History of Philanthropy project, recently finished a case study on the founding and growth of the Center for Global Development (CGD), a think tank that conducts research on and promotes improvements to rich-world policies that affect the global poor.
We are very impressed with CGD and its accomplishments. Earlier this year, we announced a $3 million grant to CGD. Our writeup includes a review of CGD’s track record, concluding that “it seems reasonable to us to estimate that CGD has produced at least 10 times as much value for the global poor as it has spent, though we consider this more of a rough lower bound than an accurate estimate.”
CGD appears to be an example of a funder-initiated startup: philanthropist Ed Scott was the driving force behind the original high-level concept, and he recruited Nancy Birdsall to be CGD’s leader.
There are a number of similarities between this case study and our recent case study on the Center on Budget and Policy Priorities (CBPP). In particular, both cases involved a funder exploring an extensive personal network, finding a strong leader for the organization they envisioned, committing flexible support up-front to that leader, and then trusting the leader with a lot of autonomy in creating the organization (while retaining a high level of personal involvement throughout the early years of the organization). However, there are a couple of major points of contrast: the crowdedness of the landscape and the amount of money committed.
Amount committed: in the case of CBPP, the founding funder “committed $175,000 in the first year and $150,000 in the second year, and asked CBPP to find other supporters so it could reduce its support after that point.” In the case of CGD, the founding funder committed $25 million in 2001, to be disbursed over five years.
May 20, 2016
Suzanne Kahn, a consultant who has been working with us as part of our History of Philanthropy project, recently finished a case study on the founding and growth of the Center on Budget and Policy Priorities (CBPP), a well-regarded D.C. think tank that focuses on tax and budget policy with an aim of improving outcomes for low-income people.
We were interested in learning more about the history and founding of CBPP because:
- It appeared to be a successful institution that funders played a central role in helping to establish.
- Relative to many other think tanks, CBPP appears to have an unusually clear and concrete track record.
- We’ve supported CBPP’s Full Employment Project.
The report’s account of CBPP’s history and founding mirrors, in many ways, that of the Center for Global Development (the subject of another upcoming case study). In particular:
May 6, 2016
We’re planning to make potential risks from artificial intelligence a major priority this year. We feel this cause presents an outstanding philanthropic opportunity — with extremely high importance, high neglectedness, and reasonable tractability (our three criteria for causes) — for someone in our position. We believe that the faster we can get fully up to speed on key issues and explore the opportunities we currently see, the faster we can lay the groundwork for informed, effective giving both this year and in the future.
With all of this in mind, we’re placing a larger “bet” on this cause, this year, than we are placing even on other focus areas — not necessarily in terms of funding (we aren’t sure we’ll identify very large funding opportunities this year, and are more focused on laying the groundwork for future years), but in terms of senior staff time, which at this point is a scarcer resource for us. Consistent with our philosophy of hits-based giving, we are doing this not because we have confidence in how the future will play out and how we can impact it, but because we see a risk worth taking. In about a year, we’ll formally review our progress and reconsider how senior staff time is allocated.
This post will first discuss why I consider this cause to be an outstanding philanthropic opportunity. (My views are fairly representative, but not perfectly representative, of those of other staff working on this cause.) It will then give a broad outline of our planned activities for the coming year, some of the key principles we hope to follow in this work, and some of the risks and reservations we have about prioritizing this cause as highly as we are.
- It seems to me that artificial intelligence is currently on a very short list of the most dynamic, unpredictable, and potentially world-changing areas of science. I believe there’s a nontrivial probability that transformative AI will be developed within the next 20 years, with enormous global consequences.
- By and large, I expect the consequences of this progress — whether or not transformative AI is developed soon — to be positive. However, I also perceive risks. Transformative AI could be a very powerful technology, with potentially globally catastrophic consequences if it is misused or if there is a major accident involving it. Because of this, I see this cause as having extremely high importance (one of our key criteria), even while accounting for substantial uncertainty about the likelihood of developing transformative AI in the coming decades and about the size of the risks. I discuss the nature of potential risks below; note that I think they do not apply to today’s AI systems.
- I consider this cause to be highly neglected in important respects. There is a substantial and growing field around artificial intelligence and machine learning research, but most of it is not focused on reducing potential risks. We’ve put substantial work into trying to ensure that we have a thorough landscape of the researchers, funders, and key institutions whose work is relevant to potential risks from advanced AI. We believe that the amount of work being done is well short of what it productively could be (despite recent media attention); that philanthropy could be helpful; and that the activities we’re considering wouldn’t be redundant with those of other funders.
- I believe that there is useful work to be done today in order to mitigate future potential risks. In particular, (a) I think there are important technical problems that can be worked on today, that could prove relevant to reducing accident risks; (b) I preliminarily feel that there is also considerable scope for analysis of potential strategic and policy considerations.
- More broadly, the Open Philanthropy Project may be able to help support an increase in the number of people, particularly people with strong relevant technical backgrounds, thinking through how to reduce potential risks, which could be important in the future even if the work done in the short term does not prove essential. I believe that one of the things philanthropy is best-positioned to do is provide steady, long-term support as fields and institutions grow.
- I consider this a challenging cause. I think it would be easy to do harm while trying to do good. For example, trying to raise the profile of potential risks could contribute (and, I believe, has contributed to some degree) to non-nuanced or inaccurate portrayals of risk in the media, which in turn could raise the risks of premature and/or counterproductive regulation. I consider the Open Philanthropy Project relatively well-positioned to work in this cause while being attentive to pitfalls, and to deeply integrate people with strong technical expertise into our work.
- I see much room for debate in the decision to prioritize this cause as highly as we are. However, I think it is important that a philanthropist in our position be willing to take major risks, and prioritizing this cause is a risk that I see as very worth taking.
My views on this cause have evolved considerably over time. I will discuss the evolution of my thinking in detail in a future post, but this post focuses on the case for prioritizing this cause today.
May 6, 2016
We’re planning to make potential risks from advanced artificial intelligence a major priority in 2016. A future post will discuss why; this post gives some background.
- I first give our definition of “transformative artificial intelligence,” our term for a type of potential advanced artificial intelligence we find particularly relevant for our purposes. Roughly and conceptually, transformative AI refers to potential future AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution. I also provide (below) a more detailed definition. The concept of “transformative AI” has some overlap with concepts put forth by others, such as “superintelligence” and “artificial general intelligence.” However, “transformative AI” is intended to be a more inclusive term, leaving open the possibility of AI systems that count as “transformative” despite lacking many abilities humans have.
- I then discuss the question of whether, and when, we might expect transformative AI to be developed. This question has many properties (long timelines, relatively vague concepts, lack of detailed public analysis) I associate with developments that are nearly impossible to forecast, and I don’t think it is possible to make high-certainty forecasts on the matter. With that said, I am comfortable saying that I think there is a nontrivial likelihood (at least 10% with moderate robustness, and at least 1% with high robustness) of transformative AI within the next 20 years. I can’t feasibly share all of the information that goes into this view, but I try to outline the general process I have followed to reach it.
- Finally, I briefly discuss whether there are other potential future developments that seem to have similar potential for impact on similar timescales to transformative AI, in order to put our interest in AI in context.
The ideas in this post overlap with some arguments made by others, but I think it is important to lay out the specific views on these issues that I endorse. Note that this post is confined in scope to the above topics; it does not, for example, discuss potential risks associated with AI or potential measures for reducing them. I will discuss the latter topics more in the future.
April 29, 2016
This post compares our progress with the goals we set forth a year ago, and lays out our plans for the coming year.
- Our 2015 goals revolved mostly around building our staff capacity, and particularly around hiring. Broadly speaking, we mostly accomplished our goals, though we significantly scaled back our goals for scientific research at mid-year.
- Our team has roughly doubled in size compared to a year ago. We’re now in a much better position to recommend a significant amount of grantmaking. We also feel much better positioned to identify outstanding causes.
- This year, we have a general goal of focusing on making grants in the most outstanding causes we’ve found. This is a departure from past years’ goals, which revolved around building knowledge and staff capacity. We expect to prioritize building knowledge and staff capacity again in the future, but we think this is a good year to focus on increasing our grantmaking. We are currently bottlenecked in terms of management capacity, and we believe that focusing on grantmaking will likely lead to a lot of learning that will inform future hiring and capacity building.
- Potential risks from advanced artificial intelligence will be a major priority for 2016. Not only will Daniel Dewey be working on this cause full-time, but Nick Beckstead and I will both be putting significant time into it as well. Some other staff will be contributing smaller amounts of time as appropriate.
- Other major focus areas where we expect significant grantmaking include criminal justice reform, farm animal welfare, and biosecurity. We expect to recommend at least $10 million in grants in each of these areas.
- We have a variety of other goals, including completing the separation of the Open Philanthropy Project as an independent organization from GiveWell, with its own employees and financials, though some individuals will continue to do work for both organizations.
April 4, 2016
One of our core values is our tolerance for philanthropic “risk.” Our overarching goal is to do as much good as we can, and as part of that, we’re open to supporting work that has a high risk of failing to accomplish its goals. We’re even open to supporting work that is more than 90% likely to fail, as long as the overall expected value is high enough.
And we suspect that, in fact, much of the best philanthropy is likely to fail. We suspect that high-risk, high-reward philanthropy could be described as a “hits business,” where a small number of enormous successes account for a large share of the total impact — and compensate for a large number of failed projects.
If this is true, I believe it calls for approaching our giving with some counterintuitive principles — principles that are very different from those underlying our work on GiveWell. In particular, if we pursue a “hits-based” approach, we will sometimes bet on ideas that contradict conventional wisdom, contradict some expert opinion, and have little in the way of clear evidential support. In supporting such work, we’d run the risk of appearing to some as having formed overconfident views based on insufficient investigation and reflection.
In fact, there is reason to think that some of the best philanthropy is systematically likely to appear to have these properties. With that said, we think that being truly overconfident and underinformed would be extremely detrimental to our work; being well-informed and thoughtful about the ways in which we could be wrong is at the heart of what we do, and we strongly believe that some “high-risk” philanthropic projects are much more promising than others.
This post will:
March 31, 2016
When I started as the Open Philanthropy Project’s Farm Animal Welfare Program Officer in October, I decided to prioritize investigating opportunities to speed up the corporate transition away from using eggs from caged hens. Based on that investigation, the Open Philanthropy Project recommended three grants, totaling $2.5 million over two years, to the Humane League, Mercy for Animals, and the Humane Society of the United States’ Farm Animal Protection Campaign. This post explains why I wanted to make our first farm animal welfare grants on corporate cage-free campaigns.
- Battery cages cause severe suffering, and cage-free systems are much better.
- Corporate cage-free campaigns are tractable and high-impact, with a strong recent track record.
- The cost-effectiveness of these campaigns, in terms of animal suffering averted per dollar, looks better than any alternatives I’m aware of.
- I don’t see these campaigns as representing a “short-term-only” approach. I see them as a logical step along a long-term path toward greatly reduced farm animal suffering, and I think they’re competitive with other approaches when thought of in these terms.
- I believe our funding has made and will continue to make a tangible difference to the success of these campaigns.
Details follow.
March 15, 2016
In keeping with our quarterly GiveWell open threads, we wanted to host a separate open thread for questions related to the Open Philanthropy Project.
Our goal is to give blog readers and followers of the Open Philanthropy Project an opportunity to publicly raise comments or questions about the Open Philanthropy Project or related topics (in the comments section below). As always, you’re also welcome to email us at email@example.com if there’s feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.
February 23, 2016
Around this time of year, we usually publish a review of the past year’s progress and an outline of our plans for the coming year. Past examples are here.
This year, the annual review and plan will be delayed significantly, because we feel that the plan we’re considering calls for a fair amount of investigation and consideration. We expect to publish our annual review and plan in April or May.