It appears possible that the coming decades will see substantial progress in artificial intelligence, potentially even to the point where machines outperform humans in many or nearly all intellectual domains, though confident forecasts in this area are difficult. These advances could lead to extremely positive developments, but could also pose risks from misuse, accidents, or harmful societal effects, plausibly rising to the level of global catastrophic risks. We're interested in supporting technical research that could reduce the risk of accidents, as well as strategic and policy work that could improve society's preparedness for major advances in AI.
For more on why we consider this an important cause, see Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity (May 2016). Also see our process for selecting focus areas.
A complete list of our grants in the area of potential risks from artificial intelligence can be found here. Grants in this area include:
- Georgetown University — Center for Security and Emerging Technology
- UC Berkeley — Center for Human-Compatible AI
- Ought — General Support
- GoalsRL — Workshop on Goal Specifications for Reinforcement Learning
- Machine Intelligence Research Institute — General Support
- OpenAI — General Support
Other content in this area includes:
- Announcing the 2018 AI Fellows
- The Open Philanthropy Project AI Fellows Program
- Blog post: Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity
- Landscape of current work on potential risks from advanced AI
- Blog post: Some background on our views regarding advanced artificial intelligence
- Public cause report (August 2015) used in our process for selecting focus areas
- What should we learn from past AI forecasts?
- What do we know about AI timelines?
- Blog post: Concrete Problems in AI Safety