UC Berkeley — Center for Human-Compatible AI (2016)

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: University of California, Berkeley
  • Amount: $5,555,550
  • Award Date: August 2016

    UC Berkeley staff reviewed this page prior to publication.


    The Open Philanthropy Project recommended a grant of $5,555,550 over five years to UC Berkeley to support the launch of a Center for Human-Compatible Artificial Intelligence (AI), led by Professor Stuart Russell.

    We believe the creation of an academic center focused on AI safety has significant potential benefits: it could help establish AI safety research as a field and make it easier for researchers to learn about and work on the topic.

    1. Background

    This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks. We wrote more about this cause on our blog.

    Stuart Russell is Professor of Computer Science and Smith-Zadeh Professor in Engineering at the University of California, Berkeley. He is the co-author of Artificial Intelligence: A Modern Approach, which we understand to be one of the most widely used textbooks on AI.

    2. About the grant

    This grant will support the establishment of the Center for Human-Compatible AI at UC Berkeley, led by Professor Russell with the following co-Principal Investigators and collaborators:

    • Pieter Abbeel, Associate Professor of Computer Science, UC Berkeley
    • Anca Dragan, Assistant Professor of Computer Science, UC Berkeley
    • Tom Griffiths, Professor of Psychology and Cognitive Science, UC Berkeley
    • Bart Selman, Professor of Computer Science, Cornell University
    • Joseph Halpern, Professor of Computer Science, Cornell University
    • Michael Wellman, Professor of Computer Science, University of Michigan
    • Satinder Singh Baveja, Professor of Computer Science, University of Michigan

    Research topics that the Center may focus on include:

    • Value alignment through, e.g., inverse reinforcement learning from multiple sources (such as text and video); a toy sketch of the basic inverse-reinforcement-learning idea follows this list.
    • Value functions defined by partially observable and partially defined terms (e.g. “health,” “death”).
    • The structure of human value systems, and the implications of computational limitations and human inconsistency.
    • Conceptual questions, including the properties of ideal value systems, tradeoffs among humans, and the long-term stability of values.
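
    To make the first of these topics concrete, below is a minimal sketch of the inverse-reinforcement-learning (IRL) idea: infer a reward function from demonstrations by adjusting reward weights until a learner's expected feature counts match the demonstrator's. Everything in it (the five-state chain MDP, one-hot features, the maximum-entropy-style soft value iteration) is an illustrative assumption of ours, not code or a method from the Center.

```python
# Illustrative toy, not the Center's research code: MaxEnt-style IRL on a
# 5-state chain MDP with one-hot state features (all assumptions of ours).
import numpy as np

N_STATES, N_ACTIONS, HORIZON = 5, 2, 10      # actions: 0 = left, 1 = right
FEATURES = np.eye(N_STATES)                  # one-hot feature vector per state

def step(s, a):
    """Deterministic transition on the chain, clamped at both ends."""
    return max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))

def soft_policy(reward):
    """Soft (maximum-entropy) value iteration; returns pi(a | s)."""
    V = np.zeros(N_STATES)
    for _ in range(HORIZON):
        Q = np.array([[reward[s] + V[step(s, a)] for a in range(N_ACTIONS)]
                      for s in range(N_STATES)])
        m = Q.max(axis=1)                    # stable log-sum-exp over actions
        V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))
    return np.exp(Q - V[:, None])            # each row sums to 1

def feature_expectations(policy, start=0, n_rollouts=200):
    """Average feature counts of trajectories sampled from a policy."""
    rng = np.random.default_rng(0)           # fixed seed for stable estimates
    total = np.zeros(N_STATES)
    for _ in range(n_rollouts):
        s = start
        for _ in range(HORIZON):
            total += FEATURES[s]
            s = step(s, rng.choice(N_ACTIONS, p=policy[s]))
    return total / n_rollouts

# "Expert" demonstrations: a policy that always moves right, toward state 4.
expert_policy = np.tile([0.0, 1.0], (N_STATES, 1))
mu_expert = feature_expectations(expert_policy)

# Core IRL loop: for linear rewards, the MaxEnt gradient with respect to the
# reward weights is the gap between expert and learner feature expectations.
w = np.zeros(N_STATES)
for _ in range(100):
    mu_learner = feature_expectations(soft_policy(FEATURES @ w))
    w += 0.05 * (mu_expert - mu_learner)

print("recovered reward weights:", np.round(w, 2))   # largest at state 4
```

    Running this recovers the largest reward weight on the state the demonstrator heads toward. The research envisioned above deals with far harder versions of the problem: richer demonstration sources (text, video), multiple inconsistent demonstrators, and partially defined value terms.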

    We see the creation of this center as an important component of our efforts to help build the field of AI safety for several reasons:

    • We expect the existence of the Center to make it much easier for researchers interested in AI safety to discuss and learn about the topic, and potentially to consider focusing their careers on it. Ideally, this will lead to more researchers working on topics related to AI safety than otherwise would have.
    • The Center may allow researchers already focused on AI safety to dedicate more of their time to the topic and produce higher-quality research.
    • We hope that the existence of a well-funded academic center at a major university will solidify the place of this work as part of the larger fields of machine learning and artificial intelligence.

    Based on our in-progress investigation of field-building, our impression is that funding the creation of new academic centers is a very common part of successful philanthropic efforts to build new fields.

    We also believe that supporting Professor Russell’s work in general is likely to be beneficial. He appears to us to be more focused on reducing potential risks of advanced artificial intelligence (particularly the specific risks we are most focused on) than any comparably senior, mainstream academic of whom we are aware. We also see him as an effective communicator with a good reputation throughout the field.

    2.1 Budget and room for more funding

    Professor Russell estimates that, if fully funded, the Center could spend between $1.5 million and $2 million in its first year and later increase its budget to roughly $7 million per year. (For scale, this grant averages roughly $1.1 million per year over its five-year term.)

    Professor Russell currently has a few other sources of funding to support his own research and that of his students (all amounts are approximate):

    • $340,000 from the Future of Life Institute
    • $280,000 from the Defense Advanced Research Projects Agency
    • $1,500,000 from the Leverhulme Trust (spread over 10 years)

    Our understanding is that most of this funding is already or will soon be accounted for, and that Professor Russell would not plan to announce a new Center of this kind without substantial additional funding. Professor Russell has also applied for a National Science Foundation Expedition grant, which would be roughly $7 million to $10 million over ten years. However, because we do not expect that decision to be made until at least a few months after the final deadline for proposals in January 2017, and because we understand those grants to be very competitive, we decided to set our level of funding without waiting for that announcement.

    We are not aware of other potential funders who would consider providing substantial funding to the Center in the near future, and we believe that having long-term support in place is likely to make it easier for Professor Russell to recruit for the Center.

    2.2 Internal forecasts

    We’re experimenting with recording explicit numerical forecasts of events related to our decision-making (especially grantmaking). The idea is to pull out the implicit predictions that play a role in our decisions and to make it possible to look back on how well-calibrated and accurate those predictions were. For this grant, we are recording the following forecast:

    • 50% chance that, two years from now, the Center will be spending at least $2 million a year, and will be considered by one or more of our relevant technical advisors to have a reasonably good reputation in the field.
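
    As a toy illustration of how such recorded forecasts can later be evaluated (this is our sketch, not Open Philanthropy's actual tooling, and the probabilities and outcomes below are invented), a forecast log can be scored for accuracy with a Brier score and checked for calibration by probability bucket:

```python
from collections import defaultdict

# Hypothetical forecast log: (stated probability, outcome: 1 = happened).
forecasts = [
    (0.50, 1), (0.50, 0), (0.70, 1), (0.70, 1), (0.30, 0), (0.90, 1),
]

# Brier score: mean squared error of the probabilities. 0.0 is perfect;
# always answering 50% scores 0.25 regardless of outcomes.
brier = sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration: among events forecast at probability p, roughly a fraction p
# should actually have happened.
buckets = defaultdict(list)
for p, o in forecasts:
    buckets[p].append(o)
for p in sorted(buckets):
    hits, n = sum(buckets[p]), len(buckets[p])
    print(f"forecast {p:.0%}: {hits}/{n} happened ({hits / n:.0%})")
```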

    3. Our process

    We have discussed the possibility of a grant to support Professor Russell’s work several times with him in the past. Following our decision earlier this year to make this focus area a major priority for 2016, we began to discuss supporting a new academic center at UC Berkeley in more concrete terms.
