We aim to support research and strategic work that could reduce risks and improve preparedness.

In recent years, we’ve seen rapid progress in artificial intelligence. Like many of the world’s foremost AI experts, we think it’s plausible that within a decade or two, AI systems will arrive that can outperform humans in nearly all intellectual domains.

These systems could have enormous benefits, from accelerating scientific progress to vastly increasing global GDP. However, they could also pose severe risks from misuse, accidents, or drastic societal change — with potentially catastrophic effects. We’re interested in supporting technical, strategic, and policy work that could reduce the risk of accidents or help society prepare for major advances in AI. We’re also interested in supporting work that increases the number of people working in these areas or helps those who already do to achieve their goals.


Funding opportunities and requests for proposals

RFP on AI governance: Supports work in six areas: technical AI governance, policy development, frontier company policy, international AI governance, law, and strategic analysis and threat modeling. We’re evaluating expressions of interest on a rolling basis.

Capacity building: We have several open RFPs aimed at increasing the number of people working on risks from advanced AI and helping those who already do to achieve their goals.


Potential Risks from Advanced AI, at a glance

  • 340+ grants made
  • $490+ million given
