
Berkeley Existential Risk Initiative — David Krueger Collaboration

Visit Grantee Site
  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Berkeley Existential Risk Initiative
  • Amount: $140,050
  • Award Date: April 2022


    Open Philanthropy recommended a grant of $140,050 to the Berkeley Existential Risk Initiative to support its collaboration with Professor David Krueger.

    This falls within our focus area of potential risks from advanced artificial intelligence.

    The grant amount was updated in August 2023.

    Related Items

    • Potential Risks from Advanced AI

      Berkeley Existential Risk Initiative — Lab Retreat

      Open Philanthropy recommended a grant of $35,000 to the Berkeley Existential Risk Initiative to support a retreat for Anca Dragan's BAIR lab group, where members will discuss potential risks...

    • Potential Risks from Advanced AI

      Berkeley Existential Risk Initiative — Machine Learning Alignment Theory Scholars

      Open Philanthropy recommended a grant of $2,047,268 to the Berkeley Existential Risk Initiative to support its collaboration with the Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory...

    • Potential Risks from Advanced AI

      Berkeley Existential Risk Initiative — SERI MATS Program

      Open Philanthropy recommended three grants totaling $195,000 to the Berkeley Existential Risk Initiative to support its collaboration with the Stanford Existential Risks Initiative (SERI) on the SERI ML Alignment Theory Scholars...

    Open Philanthropy
    Mailing Address
    Open Philanthropy
    182 Howard Street #225
    San Francisco, CA 94105
    Email
    [email protected]
    Media Inquiries
    [email protected]

    © Open Philanthropy 2022. Except where otherwise noted, this work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. If you'd like to translate this content into another language, please get in touch!