
Stanford University — Support for Percy Liang

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Stanford University
  • Amount: $1,337,600
  • Award Date: May 2017

    Grant investigator: Daniel Dewey
    This page was reviewed but not written by the grant investigator. Stanford University staff also reviewed this page prior to publication.

    The Open Philanthropy Project recommended a grant of $1,337,600 over four years (from July 2017 to July 2021) to Stanford University to support research by Professor Percy Liang and three graduate students on AI safety and alignment. The funds will be split approximately evenly across the four years (i.e. roughly $320,000 to $350,000 per year).

    This is one of a number of grants we plan to recommend to support work by top AI researchers on AI safety and alignment issues, with the goals of a) building a pipeline for younger researchers, b) making progress on technical problems, and c) further establishing AI safety research as a field.

    Background

    This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks.

    We previously recommended a $25,000 planning grant to Professor Liang in March 2017 to enable him to spend substantial time engaging with our process to determine whether to proceed with this larger funding recommendation.

    About the grant

    Proposed activities

    We asked Professor Liang to submit a description of the research problems he currently plans to work on, why he finds them important, and how he thinks he might make progress. In broad terms, Professor Liang initially plans to focus on a subset of the following topics:

    • Robustness against adversarial attacks on machine learning (ML) systems
    • Verification of the implementation of ML systems
    • “Knowing what you don’t know,” i.e. calibrated / uncertainty-aware ML (see the illustrative sketch after this list)
    • “Natural language” supervision, i.e. using compositional, abstract, underspecified languages (e.g. English or some engineered language) in dialogue to specify rewards and goals
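
    To make the calibration item above concrete, the sketch below computes expected calibration error (ECE), a standard measure of whether a model’s stated confidence matches its observed accuracy. It is purely illustrative and not drawn from the grant proposal; the function name and toy data are our own.

        import numpy as np

        def expected_calibration_error(confidences, correct, n_bins=10):
            # Average |confidence - accuracy| gap across confidence bins,
            # weighted by the fraction of predictions that fall in each bin.
            confidences = np.asarray(confidences, dtype=float)
            correct = np.asarray(correct, dtype=float)
            edges = np.linspace(0.0, 1.0, n_bins + 1)
            ece = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                in_bin = (confidences > lo) & (confidences <= hi)
                if in_bin.any():
                    gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
                    ece += in_bin.mean() * gap
            return ece

        # Toy example: a classifier that reports 90% confidence but is right
        # only about 60% of the time is badly miscalibrated (ECE near 0.3).
        rng = np.random.default_rng(0)
        conf = np.full(1000, 0.9)
        correct = rng.random(1000) < 0.6
        print(f"ECE = {expected_calibration_error(conf, correct):.3f}")

    A well-calibrated model would have an ECE near zero; metrics of this kind are one way to make the sort of empirically verifiable progress Professor Liang describes below.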

    Professor Liang thinks that it is possible to make empirically verifiable progress on these topics, and that the general principles developed in this way are reasonably likely to be relevant for addressing global catastrophic risks from advanced AI (though this latter impact will be much harder to evaluate). Professor Liang also thinks that work on these topics will clarify how existing reliable ML research relates to potential risks from advanced AI, which could increase the number and quality of researchers working on potential risks from advanced AI.

    Professor Liang plans to spend about 20% of his overall research time on the agenda supported by this grant. This grant will also support about three graduate students.

    Risks and reservations

    We have some disagreements with Professor Liang about which problems and approaches in this area are most important and promising. These disagreements stem largely from differing intuitions about the relative promise of various research directions, and we can easily imagine later coming to agree with Professor Liang on these issues. In one past instance, Daniel Dewey (our Program Officer for Potential Risks from Advanced Artificial Intelligence, “Daniel” throughout this page) was persuaded by Professor Liang of the likely usefulness of a line of research about which he had initially been skeptical.

    Rather than trying to resolve our disagreements and settle on a fixed research agenda for this grant now, we expect it to be more valuable to keep Professor Liang’s potential research directions relatively open, and to facilitate discussion about these issues between Professor Liang, our technical advisors, other Open Philanthropy grantees, and other AI research organizations in order to move toward resolving our disagreements over time.

    Overall, we are highly confident that Professor Liang understands and shares our interests and values in this space.

    Plans for learning and follow-up

    Key questions for follow-up

    • How is the research going overall?
    • Has Professor Liang’s team formed any new perspectives on the research problems they are investigating?
    • Have there been any updates to the team’s research priorities?
    • Are there other ways in which Open Philanthropy could help?

    Follow-up expectations

    We plan to check in with Professor Liang roughly every six months for the duration of the grant to get in-depth updates on his results so far and plans for the future. We may also have less comprehensive, more informal discussions with Professor Liang roughly once a month (if both we and Professor Liang have time and think it would be beneficial).

    At the end of the grant period, we will decide whether to renew our support based on our technical advisors’ evaluation of Professor Liang’s work so far, his proposed next steps, and our assessment of how well his research program has served as a pipeline for students entering the field. We are optimistic about the chances of renewing our support. We think the most likely reason we might choose not to renew would be if Professor Liang decides that AI alignment research isn’t a good fit for him or for his students.

    Our process

    Two of Open Philanthropy’s technical advisors reviewed Professor Liang’s research proposal. Both felt largely positive about the proposed research directions and recommended to Daniel that Open Philanthropy make this grant, despite some disagreements with Professor Liang (and with each other) about the likely value of some specific components of the proposal (see above).
