

Potential Risks from Advanced Artificial Intelligence

It appears possible that the coming decades will see substantial progress in artificial intelligence, potentially even to the point where machines outperform humans in many or nearly all intellectual domains, though confident forecasts in this area are difficult or impossible. These advances could lead to extremely positive developments, but could also pose risks from misuse, accidents, or harmful societal effects, which could plausibly reach the level of global catastrophic risks [1]. We’re interested in supporting technical research that could reduce the risk of accidents, as well as strategic and policy work that could improve society’s preparedness for major advances in AI.

For more on why we consider this an important cause, see Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity [2] (May 2016). Also see our process for selecting focus areas [3].

Illustrative grants

A complete list of our grants in the area of potential risks from artificial intelligence can be found here [4]. Grants include:

  • Georgetown University — Center for Security and Emerging Technology [5]
  • UC Berkeley — Center for Human-Compatible AI [6]
  • Ought — General Support [7]
  • GoalsRL — Workshop on Goal Specifications for Reinforcement Learning [8]
  • Machine Intelligence Research Institute — General Support [9]
  • OpenAI — General Support [10]

Other content

  • The Open Phil AI Fellowship [11]
  • Blog post: Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity [12]
  • Blog post: Some background on our views regarding advanced artificial intelligence [13]
  • Public cause report (August 2015) [14], used in our process for selecting focus areas [3]
  • What should we learn from past AI forecasts? [15]
  • What do we know about AI timelines? [16]
  • Blog post: Concrete Problems in AI Safety [17]

Nick Beckstead

Nick Beckstead [18] joined Open Philanthropy in 2014. He oversees a substantial part of Open Philanthropy’s research and grantmaking related to global catastrophic risk reduction [19]. Previously, Nick led the creation of our grantmaking programs in scientific research and effective altruism [20]. Prior to that, he was a research fellow at the Future of Humanity Institute at Oxford University. He has a Ph.D. in Philosophy from Rutgers University and a B.A. in Philosophy and Mathematics from the University of Minnesota.

Luke Muehlhauser

Luke Muehlhauser [21] joined Open Philanthropy in 2015. He leads Open Philanthropy’s grantmaking on AI governance and policy [22]. Previously, as a Research Analyst and then a Senior Research Analyst, he investigated a wide range of topics, including the social sciences, global catastrophic risks, animal consciousness [23], and forecasting [24]. Before that, he was Executive Director of the Machine Intelligence Research Institute in Berkeley, California.



Links
[1] https://www.openphilanthropy.org/focus/global-catastrophic-risks#Our_basic_framework
[2] https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity
[3] https://www.openphilanthropy.org/research/our-process#Selecting_and_prioritizing_focus_areas
[4] https://www.openphilanthropy.org/giving/grants?field_focus_area_target_id_selective=532
[5] https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology
[6] https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai
[7] https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support
[8] https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/goals-rl-workshop-on-goal-specifications-for-reinforcement-learning
[9] https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support
[10] https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support
[11] https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-philanthropy-project-ai-fellows-program
[12] https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity
[13] https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence
[14] https://www.openphilanthropy.org/research/cause-reports/global-catastrophic-risks/ai-risk
[15] https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts
[16] https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines
[17] https://www.openphilanthropy.org/blog/concrete-problems-ai-safety
[18] https://www.openphilanthropy.org/about/team/nick-beckstead
[19] https://www.openphilanthropy.org/focus/global-catastrophic-risks
[20] https://www.openphilanthropy.org/focus/other-areas#EffectiveAltruism
[21] https://www.openphilanthropy.org/about/team/luke-muehlhauser
[22] https://www.openphilanthropy.org/blog/ai-governance-grantmaking
[23] https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood
[24] https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting