
UC Berkeley — AI Safety Research

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: UC Berkeley
  • Amount: $1,450,016
  • Award Date: October 2017


    Grant investigator: Daniel Dewey
    This page was reviewed but not written by the grant investigator. UC Berkeley staff also reviewed this page prior to publication.

    The Open Philanthropy Project recommended two gifts totaling $1,450,016 to the University of California, Berkeley to support a four-year research project on AI safety. The work will be led by Professors Sergey Levine and Anca Dragan, who will each devote approximately 20% of their time to the project, with additional assistance from four graduate students. They initially intend to focus their research on how objective misspecification can produce subtle or overt undesirable behavior in robotic systems, though they have the flexibility to adjust their focus during the grant period.
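
    To make "objective misspecification" concrete, here is a minimal, self-contained sketch (not drawn from the grantees' project narrative; all names and numbers are illustrative assumptions): a toy gridworld in which the specified reward omits a cost the designer cares about, so the policy that is optimal for that reward exhibits the unintended behavior, while the policy that is optimal for the true objective avoids it.

        import itertools

        # Toy gridworld: 2 rows x 3 columns. Start at (0, 0), goal at (0, 2).
        # Cell (0, 1) holds a vase the designer cares about but forgot to penalize.
        ROWS, COLS = 2, 3
        START, GOAL, VASE = (0, 0), (0, 2), (0, 1)
        STEP_COST, GOAL_REWARD, TRUE_VASE_COST = -1.0, 10.0, -50.0
        MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

        def step(state, move):
            # Deterministic transition; moves off the grid leave the state unchanged.
            r, c = state[0] + MOVES[move][0], state[1] + MOVES[move][1]
            return (r, c) if 0 <= r < ROWS and 0 <= c < COLS else state

        def reward(next_state, include_vase_cost):
            # The specified reward only charges time and pays out at the goal;
            # the true objective also penalizes entering the vase cell.
            r = STEP_COST
            if next_state == GOAL:
                r += GOAL_REWARD
            if include_vase_cost and next_state == VASE:
                r += TRUE_VASE_COST
            return r

        def optimal_policy(include_vase_cost, gamma=0.99, iters=100):
            """Value iteration on the deterministic grid; returns the greedy policy."""
            states = list(itertools.product(range(ROWS), range(COLS)))
            V = {s: 0.0 for s in states}
            for _ in range(iters):
                for s in states:
                    if s == GOAL:
                        continue  # terminal state keeps value 0
                    V[s] = max(reward(step(s, a), include_vase_cost) + gamma * V[step(s, a)]
                               for a in MOVES)
            return {s: max(MOVES, key=lambda a: reward(step(s, a), include_vase_cost)
                           + gamma * V[step(s, a)])
                    for s in states if s != GOAL}

        def rollout(policy, max_steps=10):
            s, path = START, [START]
            while s != GOAL and len(path) <= max_steps:
                s = step(s, policy[s])
                path.append(s)
            return path

        print("Misspecified reward (vase cost omitted):", rollout(optimal_policy(False)))
        print("True objective (vase cost included): ", rollout(optimal_policy(True)))

    Under the misspecified reward, the optimal policy marches straight through the vase cell (0, 1); once the omitted cost is included, it detours through the bottom row, trading two extra steps to avoid the unintended damage. The project's stated focus concerns versions of this problem in real robotic systems, where the gap between the specified objective and the designer's intent is far harder to see.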

    Our broad goals for this funding are to encourage top researchers to work on AI alignment and safety, thereby helping to build a pipeline of young researchers entering the field; to support progress on technical problems; and, more generally, to support the growth of this area of study.

    This funding falls within our focus area of potential risks from advanced artificial intelligence.

    Sources

    • Levine and Dragan, Project Narrative, 2017

    Related Items

    • Potential Risks from Advanced AI

      Center for Long-Term Cybersecurity — AI Standards (2022)

      Open Philanthropy recommended a gift of $20,000 to the Center for Long-Term Cybersecurity (CLTC), via UC Berkeley, to support work by CLTC's AI Security Initiative on the development...

    • Potential Risks from Advanced AI

      Center for Long-Term Cybersecurity — AI Standards (2021)

      Open Philanthropy recommended a gift of $25,000 to the Center for Long-Term Cybersecurity (CLTC), via UC Berkeley, to support work by CLTC's AI Security Initiative on the development...

    • Potential Risks from Advanced AI

      UC Berkeley — Adversarial Robustness Research (Aditi Raghunathan)

      Open Philanthropy recommended a grant of $101,064 to UC Berkeley to support postdoctoral research by Aditi Raghunathan on adversarial robustness as a means to improve AI safety. This...
