
Distill Prize for Clarity in Machine Learning — General Support

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Distill Prize for Clarity in Machine Learning
  • Amount: $25,000
  • Award Date: March 2017


    Grant investigator: Daniel Dewey
    This page was reviewed but not written by the grant investigator. Distill staff also reviewed this page prior to publication.

    The Open Philanthropy Project recommended a grant of $25,000 to support the Distill Prize for Clarity in Machine Learning, which will be awarded for clear explanations of machine learning concepts. The Open Philanthropy Project will also administer the prize.

    This grant is part of the Distill Prize’s total initial endowment of $125,000, which is also funded by Chris Olah, Greg Brockman, Jeff Dean, and DeepMind. We see this grant as an opportunity to increase the volume of work on a problem that we believe is important but that often lacks institutional support.

    We see several possible benefits that could result from the prize:

    • Fostering a culture of deeply and fully understanding how machine learning systems work, which we expect could reduce the likelihood of undesirable side effects.
    • Helping the Open Philanthropy Project to find and build relationships with machine learning researchers who can think and write clearly about how machine learning systems work. We expect that these researchers may also be more open to thinking and writing clearly about potential risks from advanced artificial intelligence (AI), one of our focus areas.
    • Improving the interpretability and transparency of machine learning over time through clear explanation, which could help to mitigate risks from advanced AI, though this was not a major consideration in our decision to make this grant.

    Without our funding, we estimate that there is a 60% chance that the prize would be administered at the same level of quality, a 30% chance that it would be administered at lower quality, and a 10% chance that it would not move forward at all. We believe that our assistance in administering the prize will also be of significant help to Distill.
