UC Berkeley — AI Safety Research (2019)

  • Category: Longtermism
  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: UC Berkeley
  • Amount: $1,111,000
  • Award Date: December 2019

Grant investigator: Daniel Dewey

This page was reviewed but not written by the grant investigator. UC Berkeley staff also reviewed this page prior to publication.

Open Philanthropy recommended a grant of $1,111,000 over three years to UC Berkeley to support research relevant to potential risks from artificial intelligence and machine learning, led by Jacob Steinhardt. The funding will enable Professor Steinhardt to support students working on robustness, value learning, preference aggregation, and other areas of machine learning.

This grant falls within our focus area of potential risks from advanced artificial intelligence.

Related Items

• Longtermism: Center for Long-Term Cybersecurity — AI Standards

  Open Philanthropy recommended a gift of $25,000 to the Center for Long-Term Cybersecurity (CLTC), via UC Berkeley, to support work by CLTC's AI Security Initiative on the development...

• Longtermism: UC Berkeley — Adversarial Robustness Research (Aditi Raghunathan)

  Open Philanthropy recommended a grant of $101,064 to UC Berkeley to support postdoctoral research by Aditi Raghunathan on adversarial robustness as a means to improve AI safety. This...

• Longtermism: UC Berkeley — Adversarial Robustness Research (Dawn Song)

  Open Philanthropy recommended a grant of $330,000 over three years to UC Berkeley to support research by Professor Dawn Song on adversarial robustness as a means to improve...