
Montreal Institute for Learning Algorithms — AI Safety Research

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Montreal Institute for Learning Algorithms
  • Amount: $2,400,000
  • Award Date: July 2017


    Published: July 2017

    MILA staff reviewed this page prior to publication.

    The Open Philanthropy Project recommended a grant of $2.4 million over four years to the Montreal Institute for Learning Algorithms (MILA) to support technical research on potential risks from advanced artificial intelligence (AI).

    $1.6 million of this grant will support Professor Yoshua Bengio and his co-investigators at the Université de Montréal, and $800,000 will support Professors Joelle Pineau and Doina Precup at McGill University. We see Professor Bengio’s research group as one of the world’s preeminent deep learning labs and are excited to provide support for it to undertake AI safety research.

    Background

    This grant falls within our work on potential risks from advanced AI, one of our focus areas within global catastrophic risks. Currently, two of our primary aims in this area are (1) to increase the amount of high-quality technical AI safety research being done, and (2) to increase the number of people who are deeply knowledgeable about both machine learning and potential risks from advanced AI. We believe we can pursue these aims both directly (by supporting this type of work) and indirectly (by supporting programs that attract talented students to the area, provide positive examples of AI safety work grounded in machine learning expertise, and provide leadership for the broader machine learning community).

    The organization

    The Montreal Institute for Learning Algorithms (MILA) is a machine learning research group based at the Université de Montréal, led by Professors Yoshua Bengio, Pascal Vincent, Christopher Pal, Aaron Courville, Laurent Charlin, Simon Lacoste-Julien, and Jian Tang. We see MILA as one of the very top deep learning labs in academia, and among the top machine learning labs overall. Professors Joelle Pineau, Doina Precup, Hugo Larochelle, Alain Tapp, and Jackie Cheung are associate members of MILA.

    About the grant

    Proposed activities

    Professor Bengio has presented several potential AI safety research directions to us, along with some initial ideas about how he might pursue them. However, we intend for Professor Bengio to have the flexibility to use our grant for whichever AI safety research projects seem most promising in the future, rather than being restricted to the projects he has already proposed. In particular, we think it will be valuable for Professor Bengio’s students to be free to explore their own new ideas and to talk with others in the AI safety community (such as Open Philanthropy’s technical advisors, or other grantees of ours) about which kinds of safety work may be most effective. Because AI safety research is a relatively new area, we think it is particularly valuable to keep research options flexible.

    Based on discussion with our technical advisors, some portions of Professor Bengio’s currently proposed agenda appear to us quite likely to be valuable, while we have reservations about some others (see “Risks and reservations” below). However, we expect that we would consider this grant worthwhile even if Professor Bengio were to use it to pursue exactly the projects that he has already proposed.

    Case for the grant

    Among potential grantees in the field, we believe Professor Bengio is one of the best positioned to help build the talent pipeline in AI safety research. Our understanding, based on conversations with our technical advisors and our general impressions of the field, is that many of the most talented machine learning researchers spend time in Professor Bengio’s lab before joining other universities or industry groups. This contributes substantially to our expectations for the grant’s impact: it increases our confidence in the quality of the research the grant will support, and it suggests potential benefits for pipeline building.

    In our conversations with Professor Bengio, we’ve found significant overlap between his perspective on AI safety and ours, and he was enthusiastic about being part of our overall funding activities in this area. We think Professor Bengio is likely to be a valuable member of the AI safety research community, and to encourage his lab’s involvement in that community as well. We also believe members of his lab could be valuable participants at future workshops on AI safety.

    Budget and room for more funding

    Our impression is that MILA is already fairly well funded, and that its ability to use additional marginal funding is somewhat limited. Professor Bengio told us that $400,000 per year is the amount of additional funding he could productively use for AI safety research, and we have decided to grant this full amount for four years ($1.6 million total). We have also granted $200,000 per year ($800,000 total) to two of Professor Bengio’s co-investigators who are interested in working on this agenda, Professors Pineau and Precup; this is the amount they estimated they could use productively.
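
    For clarity, the budget works out as follows (all figures as stated above):

    \[
    \begin{aligned}
    \$400{,}000/\text{year} \times 4\ \text{years} &= \$1{,}600{,}000 && \text{(Professor Bengio's group, Université de Montréal)} \\
    \$200{,}000/\text{year} \times 4\ \text{years} &= \$800{,}000 && \text{(Professors Pineau and Precup, McGill)} \\
    \$1{,}600{,}000 + \$800{,}000 &= \$2{,}400{,}000 && \text{(total grant)}
    \end{aligned}
    \]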

    Risks and reservations

    Some of our technical advisors expressed reservations about Professor Bengio’s proposed research plan and offered substantial feedback on it. We are not especially concerned by this: because AI safety is a relatively new field, we think it is reasonable to expect researchers to disagree about which directions are most promising. We plan to continue these discussions with Professor Bengio and his team over the next few years, in order to reach a greater degree of mutual understanding about his research agenda by the time we decide whether to renew our support in 2020.

    Follow-up expectations

    We expect to have a conversation with Professor Bengio six months after the start of the grant, and annually after that, to discuss his projects and results, with public notes if the conversation warrants it. In the first few months of the grant, we plan to visit Montreal for several days to meet Professor Bengio’s co-investigators and discuss the project with them.

    At the conclusion of this grant in 2020, we will decide whether to renew our support. If Professor Bengio’s research is going well (based on our technical advisors’ assessment and the impressions of others in the field), and if we have achieved a better mutual understanding with Professor Bengio about how his research is likely to be valuable, it is likely that we will decide to provide renewed funding. If Professor Bengio is using half or more of our funding to pursue research directions that we do not find particularly promising, it is likely that we would choose not to renew.

    Our process

    We spoke with Professor Bengio and several of his students during our recent outreach to machine learning researchers and formed a positive impression of him and his work. Our technical advisors spoke highly of Professor Bengio’s capabilities, reputation, and goals.
