Berkeley Existential Risk Initiative — AI Standards

• Category: Longtermism
• Focus Area: Potential Risks from Advanced AI
• Organization Name: Berkeley Existential Risk Initiative
• Amount: $300,000
• Award Date: July 2021

Open Philanthropy recommended a grant of $300,000 to the Berkeley Existential Risk Initiative to support work on the development and implementation of AI safety standards that may reduce potential risks from advanced artificial intelligence. An additional grant to the Center for Long-Term Cybersecurity will support related work.

This follows our January 2020 support and falls within our focus area of potential risks from advanced artificial intelligence.

Related Items

• Berkeley Existential Risk Initiative — General Support (Longtermism)
  Open Philanthropy recommended a grant of $150,000 to the Berkeley Existential Risk Initiative (BERI) for general support. BERI seeks to reduce existential risks to humanity, and collaborates with...

• Berkeley Existential Risk Initiative — Sculpting Evolution Collaboration (Longtermism)
  Open Philanthropy recommended two grants totaling $130,000 over two years to the Berkeley Existential Risk Initiative to support their collaboration with Kevin Esvelt’s Sculpting Evolution group at the...

• Berkeley Existential Risk Initiative — David Krueger Collaboration (Longtermism)
  Open Philanthropy recommended a grant of $40,000 to the Berkeley Existential Risk Initiative to support its collaboration with Professor David Krueger. This falls within our focus area of potential...