
Ought — General Support (2018)

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Ought
  • Amount: $525,000
  • Award Date: May 2018


    Grant Investigator: Daniel Dewey

    This page was reviewed but not written by the grant investigator. Ought staff also reviewed this page prior to publication.

    The Open Philanthropy Project recommended a grant of $525,000 to Ought for general support. Ought is a new organization with a mission to “leverage machine learning to help people think.” Ought plans to conduct research on deliberation and amplification, a concept we consider relevant to AI alignment.[1] Our funding, combined with another grant from Open Philanthropy Project technical advisor Paul Christiano, is intended to allow Ought to hire up to three new staff members and provide one to three years of support for Ought’s work, depending on how quickly they hire.

    Background

    This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks. Ought is a new 501(c)(3) organization founded by Andreas Stuhlmüller, a former researcher at Stanford’s Computation and Cognition Lab.[2] Ought’s goal is to conduct research and build tools that leverage machine learning for deliberation, and to do so in a scalable way.

    About the grant

    Proposed activities

    Ought will conduct research on deliberation and amplification, aiming to organize the cognitive work of ML algorithms and humans so that the combined system remains aligned with human interests even as algorithms take on a much more significant role than they do today.

    Andreas believes that AI will ultimately be used to help people deliberate and make wise decisions. Ought will focus on this application, conducting theoretical and empirical work informed by real-world problems and data. Andreas thinks that it is helpful to pursue a concrete vision for how transformative AI might benefit and empower people, because such a vision can be criticized and improved, and can guide more theoretical research.

    Early on, Ought plans to focus on conceptual research and implementation of prototypes for decomposing and automating deliberation. Depending on research outcomes, Ought expects to move towards a more empirical and application-driven approach over time.
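
    To make “decomposing and automating deliberation” concrete, here is a minimal runnable sketch, in Python, of the recursive decompose-and-combine pattern behind amplification-style proposals. It is our own illustration under assumed, simplified semantics (a “question” is just a range of integers to sum), not Ought’s actual code or design: a weak solver, standing in for a human or an ML model, answers only base-case questions directly, and harder questions are split into subquestions whose answers are combined.

        # Illustrative sketch only; the names and the toy task are
        # hypothetical, not taken from Ought's prototypes.

        def weak_solver(question):
            """Stands in for a human or ML model that can only answer
            very small questions directly. A 'question' here is a pair
            (lo, hi) meaning: what is the sum of the integers lo..hi?"""
            lo, hi = question
            if hi - lo <= 1:               # small enough to answer directly
                return lo + (hi if hi != lo else 0)
            return None                    # too hard: needs decomposition

        def decompose(question):
            """Split a hard question into two easier subquestions."""
            lo, hi = question
            mid = (lo + hi) // 2
            return [(lo, mid), (mid + 1, hi)]

        def amplify(question):
            """Answer by direct solving when possible; otherwise recursively
            decompose and combine the sub-answers (here, by summing)."""
            direct = weak_solver(question)
            if direct is not None:
                return direct
            return sum(amplify(sub) for sub in decompose(question))

        print(amplify((1, 100)))           # 5050, assembled from tiny judgments

    The point of the pattern, as we understand the proposal, is that no single step requires more capability than the weak solver has, yet the composed system answers questions the solver could not handle alone.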

    Ought plans to publish its results, thoughts, code, and progress in online posts for the benefit of other researchers, and will publish in academic outlets if the additional effort is clearly justified. We do not expect a significant number of academic publications to result from this grant, and would consider such publications a bonus instead of part of the basic case for the funding.

    For more information on Ought’s vision, see this page by Andreas.

    Case for the grant

    The basic case for the grant is as follows:

    • We consider research on deliberation and amplification, as an approach to AI safety, to be both important and neglected.
    • Paul Christiano is excited by Ought’s plan and work, and we trust his judgment.
    • Ought’s plan appears flexible, and we think Andreas is ready to notice and respond to problems by adjusting his plans.
    • We have seen some minor indications that Ought is well-run and has a reasonable chance of success, such as: an affiliation with Stanford’s Noah Goodman,[3] which we believe will help with attracting talent and funding; acceptance into the Stanford-Startx accelerator;[4] and the fact that Andreas has already done some research, application prototyping, testing, basic organizational setup, and public talks at Stanford and USC.

    Budget

    Our funding is for general support. Ought intends to use it for hiring and supporting up to four additional employees between now and 2020. The hires will likely include a web developer, a research engineer, an operations manager, and another researcher.

    Plans for follow-up

    We plan to check in annually with Ought through a phone call with Andreas as well as a review of new published results, such as online writeups, published code, and academic papers, if any. Our Program Officer and investigator for this grant, Daniel Dewey, will conduct these check-ins, accompanied by another technical advisor.

    Key questions for follow-up

    We plan to consider the following questions when following up with Ought:

    • Has there been any progress on hires?
    • How has research progressed?
    • How has implementation progressed?
    • How has testing progressed?
    • Have there been any leads on other researchers who are noticing and/or building on Ought’s work?
    • Have any significant plans changed?

    Additionally, there are two situations where we might consider a renewal or expansion of funds:

    1. Ought wants to make additional hires while maintaining a reasonable level of funding reserves.
    2. After 2-2.5 years, Ought would like to extend its runway while maintaining a four-person team.

    In either situation, Daniel believes he would lean strongly toward renewal or increased support, provided Ought is making research progress that looks impressive to us and our technical advisors (we consider other metrics of success less important at this time).

    Sources

    • Ought, “Our Approach,” 2018. Source (archive)
    • Paul Christiano, “Directions and desiderata for AI alignment.” Source (archive only)
    • Stanford Computation and Cognition Lab, Homepage, December 2017. Source (archive only)
    • Stanford Computation and Cognition Lab, “Noah Goodman,” December 2017. Source (archive only)
    • Stanford-Startx, Homepage, December 2017. Source (archive only)
    Footnotes

    1. For reference, see this post by Paul Christiano, one of our technical advisors: “Directions and desiderata for AI alignment.”

    2. Archived copy of link: Stanford Computation and Cognition Lab, Homepage, December 2017 [archive only].

    3. Archived copy of link: Stanford Computation and Cognition Lab, Noah Goodman, December 2017 [archive only].

    4. Archived copy of link: Stanford-Startx, Homepage, December 2017 [archive only].
