
Stanford University — Support for Percy Liang

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Stanford University
  • Amount: $1,337,600
  • Award Date: May 2017


    Grant investigator: Daniel Dewey
    This page was reviewed but not written by the grant investigator. Stanford University staff also reviewed this page prior to publication.

    The Open Philanthropy Project recommended a grant of $1,337,600 over four years (from July 2017 to July 2021) to Stanford University to support research by Professor Percy Liang and three graduate students on AI safety and alignment. The funds will be split approximately evenly across the four years (i.e. roughly $334,400 per year on average).

    This is one of a number of grants we plan to recommend to support work by top AI researchers on AI safety and alignment issues, with the goals of a) building a pipeline for younger researchers, b) making progress on technical problems, and c) further establishing AI safety research as a field.

    Background

    This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks.

    We previously recommended a $25,000 planning grant to Professor Liang in March 2017 to enable him to spend substantial time engaging with our process to determine whether to proceed with this larger funding recommendation.

    About the grant

    Proposed activities

    We asked Professor Liang to submit a description of the research problems he currently plans to work on, why he finds them important, and how he thinks he might make progress on them. In broad terms, Professor Liang initially plans to focus on a subset of the following topics:

    • Robustness against adversarial attacks on machine learning (ML) systems (see the illustrative sketch after this list)
    • Verification of the implementation of ML systems
    • “Knowing what you don’t know,” i.e. calibrated / uncertainty-aware ML
    • “Natural language” supervision, i.e. using compositional, abstract, underspecified languages (e.g. English or some engineered language) in dialogue to specify rewards and goals
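
    To give a concrete, purely illustrative sense of the first topic, the sketch below applies a fast-gradient-sign-method (FGSM)-style perturbation to a toy logistic-regression classifier in Python. The model, weights, input, and perturbation size are invented for this example and are not taken from Professor Liang's proposal.

    ```python
    # Illustrative sketch only: an FGSM-style adversarial perturbation of a toy
    # logistic-regression classifier. All numbers here are made up for the demo.
    import numpy as np

    w = np.array([2.0, -1.0])   # assumed model weights
    b = 0.1                     # assumed model bias

    def predict(x):
        """Probability that input x belongs to class 1."""
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    x = np.array([0.5, 0.5])    # a clean input the model assigns to class 1
    y = 1                       # its true label

    # For logistic regression, the gradient of the cross-entropy loss with
    # respect to the input is (p - y) * w.
    grad_x = (predict(x) - y) * w

    # FGSM: take a small step in the sign of the gradient to increase the loss.
    epsilon = 0.3
    x_adv = x + epsilon * np.sign(grad_x)

    print(f"clean prediction:       {predict(x):.3f}")      # ~0.65 (class 1)
    print(f"adversarial prediction: {predict(x_adv):.3f}")  # ~0.43 (flips to class 0)
    ```

    Robustness research of the kind listed above asks how to train models whose predictions do not flip under such small, worst-case perturbations.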

    Professor Liang thinks that it is possible to make empirically verifiable progress on these topics, and that the general principles developed in this way are reasonably likely to be relevant for addressing global catastrophic risks from advanced AI (though this latter impact will be much harder to evaluate). Professor Liang also thinks that work on these topics will clarify how existing reliable ML research relates to potential risks from advanced AI, which could increase the number and quality of researchers working on potential risks from advanced AI.
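
    As one hedged illustration of what an empirically verifiable check on the third topic ("knowing what you don't know") can look like, the sketch below computes the expected calibration error (ECE) of a synthetic, deliberately overconfident set of predictions. The data, bin count, and numbers are assumptions made for the example, not anything from the grant.

    ```python
    # Illustrative sketch only: expected calibration error (ECE), one common way
    # to quantify whether a classifier's stated confidences match its accuracy.
    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        """Average |accuracy - confidence| over equal-width confidence bins,
        weighted by the fraction of predictions falling in each bin."""
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                bin_conf = confidences[in_bin].mean()  # average stated confidence
                bin_acc = correct[in_bin].mean()       # empirical accuracy
                ece += in_bin.mean() * abs(bin_acc - bin_conf)
        return ece

    # Synthetic example: a model that claims ~90% confidence but is right only
    # ~70% of the time. A well-calibrated model would have ECE near zero.
    rng = np.random.default_rng(0)
    confidences = rng.uniform(0.85, 0.95, size=1000)
    correct = rng.random(1000) < 0.7
    print(f"ECE: {expected_calibration_error(confidences, correct):.3f}")  # ~0.2
    ```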

    Professor Liang plans to spend about 20% of his overall research time on the agenda supported by this grant. This grant will also support about three graduate students.

    Risks and reservations

    We have some disagreements with Professor Liang about which problems and approaches in this area are most important and promising. These disagreements depend largely on differing intuitions about which research directions are likely to be most promising, and we can easily imagine later agreeing with Professor Liang on these issues. In one past instance, Daniel Dewey (our Program Officer for Potential Risks from Advanced Artificial Intelligence, “Daniel” throughout this page) was persuaded by Professor Liang of the likely usefulness of a line of research about which he had initially been skeptical.

    Rather than trying to resolve our disagreements and settle on a fixed research agenda for this grant now, we expect it to be more valuable to keep Professor Liang’s potential research directions relatively open, and to facilitate discussion about these issues between Professor Liang, our technical advisors, other Open Philanthropy grantees, and other AI research organizations in order to move toward resolving our disagreements over time.

    Overall, we are highly confident that Professor Liang understands and shares our interests and values in this space.

    Plans for learning and follow-up

    Key questions for follow-up

    • How is the research going overall?
    • Has Professor Liang’s team formed any new perspectives on research problems they investigate?
    • Have there been any updates to the team’s research priorities?
    • Are there other ways in which Open Philanthropy could help?

    Follow-up expectations

    We plan to check in with Professor Liang roughly every six months for the duration of the grant to get in-depth updates on his results so far and plans for the future. We may also have less comprehensive, more informal discussions with Professor Liang roughly once a month (if both we and Professor Liang have time and think it would be beneficial).

    At the end of the grant period, we will decide whether to renew our support based on our technical advisors’ evaluation of Professor Liang’s work so far, his proposed next steps, and our assessment of how well his research program has served as a pipeline for students entering the field. We are optimistic about the chances of renewing our support. We think the most likely reason we might choose not to renew would be if Professor Liang decides that AI alignment research isn’t a good fit for him or for his students.

    Our process

    Two of Open Philanthropy’s technical advisors reviewed Professor Liang’s research proposal. Both felt largely positive about the proposed research directions and recommended to Daniel that Open Philanthropy make this grant, despite some disagreements with Professor Liang (and with each other) about the likely value of some specific components of the proposal (see above).
