
Berkeley Existential Risk Initiative — AI Standards (2022)

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Berkeley Existential Risk Initiative
  • Amount: $210,000
  • Award Date: April 2022

Open Philanthropy recommended a grant of $210,000 to the Berkeley Existential Risk Initiative to support work on the development and implementation of AI safety standards that may reduce potential risks from advanced artificial intelligence. An additional grant to the Center for Long-Term Cybersecurity will support related work.

This follows our July 2021 support and falls within our focus area of potential risks from advanced artificial intelligence.

Related Items

• Potential Risks from Advanced AI: Berkeley Existential Risk Initiative — AI Standards (2021)
  Open Philanthropy recommended a grant of $300,000 to the Berkeley Existential Risk Initiative to support work on the development and implementation of AI safety standards that may reduce...
• Potential Risks from Advanced AI: Center for Long-Term Cybersecurity — AI Standards (2021)
  Open Philanthropy recommended a gift of $25,000 to the Center for Long-Term Cybersecurity (CLTC), via UC Berkeley, to support work by CLTC's AI Security Initiative on the development...
• Potential Risks from Advanced AI: Center for Long-Term Cybersecurity — AI Standards (2022)
  Open Philanthropy recommended a gift of $20,000 to the Center for Long-Term Cybersecurity (CLTC), via UC Berkeley, to support work by CLTC's AI Security Initiative on the development...