
Berkeley Existential Risk Initiative — David Krueger Collaboration

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Berkeley Existential Risk Initiative
  • Amount: $40,000
  • Award Date: April 2022

    Open Philanthropy recommended a grant of $40,000 to the Berkeley Existential Risk Initiative to support its collaboration with Professor David Krueger.

    This falls within our focus area of potential risks from advanced artificial intelligence.

    Related Items

    • Potential Risks from Advanced AI

      Berkeley Existential Risk Initiative — Machine Learning Alignment Theory Scholars

      Open Philanthropy recommended a grant of $2,047,268 to the Berkeley Existential Risk Initiative to support its collaboration with the Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory...

    • Potential Risks from Advanced AI

      Berkeley Existential Risk Initiative — SERI MATS Program

      Open Philanthropy recommended three grants totaling $195,000 to the Berkeley Existential Risk Initiative to support its collaboration with the Stanford Existential Risks Initiative (SERI) on the SERI ML Alignment Theory Scholars...

    • Potential Risks from Advanced AI

      Berkeley Existential Risk Initiative — General Support (2022)

      Open Philanthropy recommended a grant of $100,000 to the Berkeley Existential Risk Initiative (BERI) for general support. BERI seeks to reduce existential risks to humanity by providing...
