Future of Life Institute — Artificial Intelligence Risk Reduction

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Future of Life Institute
  • Amount: $1,186,000

  • Award Date: August 2015


    Future of Life Institute staff reviewed this page prior to publication.

    Note: This page was created using content published by Good Ventures and GiveWell, the organizations that created the Open Philanthropy Project, before this website was launched. Uses of “we” and “our” on this page may therefore refer to Good Ventures or GiveWell, but they still represent the work of the Open Philanthropy Project.


    The Open Philanthropy Project recommended a grant of $1,186,000 to the Future of Life Institute (FLI) to support research proposals aimed at keeping artificial intelligence robust and beneficial. In the first half of 2015, FLI issued a Request for Proposals (RFP) to gather research proposals on artificial intelligence risk reduction. The RFP was the first wave of a $10 million program funded by Elon Musk, and it planned to make grants totaling approximately $6 million.

    The RFP solicited applications both for research projects (by small teams or individuals) and for new centers to be founded with a focus on policy and forecasting. After working closely with FLI during the receipt and evaluation of proposals, we determined that the total value of the high-quality project proposals submitted exceeded the available funding. Consequently, we made a grant of $1,186,000 to FLI to enable additional project proposals to be funded. We consider the value of this grant to comprise both the output of the additional projects funded and the less tangible benefits of supporting the first formal RFP in this field.

    We were very pleased with the overall quality of the applications, and with the decisions made by the selection panel. The proposals that were funded by the RFP span a wide range of approaches, including research on ensuring that advanced AI systems that may be developed in the future are aligned with human values, managing the economic impacts of AI, and controlling autonomous weapons systems.

    Rationale for the grant

    The cause

    In a March 2015 update on the Open Philanthropy Project, we identified ‘risks from artificial intelligence’ as a priority cause within our global catastrophic risk program area.

    In brief, “risks from artificial intelligence (AI)” refers to risks that could potentially emerge as the capabilities of AI systems increase. It seems plausible that sometime this century, systems will be developed that can match human-level performance at a wide range of cognitive tasks. These advances could have extremely positive effects, but may also pose risks from intentional misuse or catastrophic accidents.

    See our writeup of this issue for more detail on our view.

    The Future of Life Institute’s 2015 Request for Proposals

    In January 2015, the Future of Life Institute (FLI) organized a conference in Puerto Rico, called ‘The Future of AI: Opportunities and Challenges’. Following the conference, Elon Musk announced a $10 million donation to FLI to support “a global research program aimed at keeping AI beneficial to humanity.”1 Soon thereafter, FLI issued a Request for Proposals (RFP) to solicit proposals aiming to make AI systems robust and beneficial,2 and published alongside it a document expanding on research priorities within this area.3 The goal of the RFP was to allocate $6 million of Musk’s donation to the most promising proposals submitted, in two categories: “project grants” and “center grants”.4

    We see this RFP as an important step in the development of the nascent field of AI safety research. It represents the first set of grant opportunities explicitly seeking to fund mainstream academic work on the subject, which we feel makes it an unusual opportunity for a funder to engage in early-stage field-building. We felt that it was important that the process go well, in the sense that strong proposals be funded, and that the academics who took part feel that applying was a good use of their time.

    For this reason, we have been working closely with FLI since the announcement of the RFP. We wanted to follow what proposals were submitted, with the intention of potentially contributing additional funding if we believed that high quality proposals would otherwise go unfunded.

    Our decision process

    Our Program Officer for Global Catastrophic Risks, Howie Lempel, reviewed first round applications with the assistance of Dario Amodei, one of our technical advisors; other Open Philanthropy staff looked over some applications and were involved in discussions. Following this review, we determined that at the available level of funding ($6 million), a number of promising proposals would have to be rejected.

    At this point (around May), we told FLI that we would plan to recommend a grant of at least $1 million towards the RFP. Telling FLI about our planned recommendation at this stage was intended to assist FLI in planning the review of second round proposals, and as an expression of good faith while we played an active role in the RFP process.

    We discussed the RFP with a number of people, including applicants, AI researchers from within academia, and researchers from within the existing AI safety field. These conversations covered the value of different types of research, as well as their perceptions of how the RFP was proceeding. We shared our impressions from these discussions with FLI staff, including making some small logistical suggestions to help the RFP run smoothly. Notes from these conversations are not available, though we give some details below on what we heard and what suggestions we made.

    Representatives of the Open Philanthropy Project attended the review panel meeting in late June, where the final funding allocations were decided. There we focused on evaluating the proposals which would be affected by our decision on how much funding to allocate. We decided that contributing a total of $1,186,000 would enable the RFP to fund all the proposals that the panel had determined to be the strongest.

    The proposals

    The full list of proposals receiving funding, including summaries and technical abstracts, may be found here. That link includes all materials that can be shared publicly (we cannot share e.g. rejected proposals, though we give general comments on the process below).

    FLI’s announcement of the grants gives the following overview of the awardees:5

    The winning teams, chosen from nearly 300 applicants worldwide, will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.

    The 37 projects being funded include:

    • Three projects developing techniques for AI systems to learn what humans prefer from observing our behavior, including projects at UC Berkeley and Oxford University
    • A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values
    • A project led by Manuela Veloso from Carnegie-Mellon University on making AI systems explain their decisions to humans
    • A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial
    • A project headed by Heather Roff studying how to keep AI-driven weapons under “meaningful human control”
    • A new Oxford-Cambridge research center for studying AI-relevant policy

    With the research field of AI safety at such an early stage, we feel that it would be premature to have confident expectations of which research directions will end up being relevant and important once AI systems become advanced enough to pose significant risks. As such, we were hoping that this RFP would provide an opportunity to support experts from a variety of backgrounds to tackle different aspects of the problem, and expand the range of approaches being taken. We were excited to see breadth across the research proposals submitted, which were diverse both relative to each other and to previous work on AI safety.

    There are some specific research directions which we find particularly promising, though with low confidence. Some of these directions – which we were glad to see represented among the successful projects – include transparency and meta-reasoning, robustness to context changes, calibration of uncertainty, and general forecasting.6 We also believe that early attempts to explore what it looks like when AI systems attempt to learn what humans value may be useful in informing later work, if more advanced and general AI (which may need to have some internal representation of human values, in a way current systems do not) is eventually developed. A number of different approaches to this value-learning question were among the successful proposals, including projects led by Stuart Russell, Owain Evans, and Paul Christiano.

    Overall, we were very impressed by the quality of the applicants and the proposals submitted. We were also pleased with the decisions made by the review panel; we feel that recommendations were consistently based on the quality of the proposals, without unduly prioritizing either projects from within mainstream academia, or from within the existing AI safety field.

    Room for more funding

    We believe it is very unlikely that the proposals funded due to this grant would have received funding from this RFP in the absence of the grant.

    Elon Musk’s donation was the only other source of funding for this RFP, and this contribution was capped at $6 million from the outset.7 We considered the possibility that this amount would be increased late in the process if the quality of submissions was higher than expected. We made the grant once we concluded that, despite the large number of excellent proposals, the total amount of funding for this set of grants was unlikely to be increased.

    In addition, the funding provided by Musk was restricted such that it had to be disbursed evenly across three years, funding $2 million of proposals each year. Given that the proposals submitted could last one, two, or three years and generally requested an even amount of funding across their duration, this restriction meant that it would be essentially impossible for the RFP to allocate its entire budget if it funded any one- or two-year projects. Our funding meant that shorter projects could be funded without running into this issue. We also made an effort to determine whether some projects which had applied for one or two years of funding could be restructured to receive funds over a longer time horizon. A large number of the projects which we suggested be restructured in this way accepted the suggestion, and were able to receive funds which they might not have had access to without our involvement.
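
    To make the arithmetic concrete: every project started at the same time and drew on its budget evenly over its duration, so the year-three tranche could only be spent by three-year projects, whose year-one draws in turn competed with shorter projects for the year-one cap. The sketch below illustrates this with hypothetical project sizes (only the $2 million annual tranches are taken from the RFP).

      # Illustrative sketch of the disbursement constraint described above.
      # Only the $2M-per-year tranches are from the RFP; the projects are made up.
      TRANCHE = 2_000_000  # Musk's $6M, restricted to $2M per year for three years

      def yearly_draws(projects):
          """projects: list of (total_budget, duration_years) pairs, all starting in
          year 1 and spending evenly; returns the amount drawn from each tranche."""
          draws = [0.0, 0.0, 0.0]
          for budget, years in projects:
              for year in range(years):
                  draws[year] += budget / years
          return draws

      # One one-year project plus seven three-year projects exactly fill year 1...
      projects = [(600_000, 1)] + [(600_000, 3)] * 7
      print(yearly_draws(projects))  # [2000000.0, 1400000.0, 1400000.0]
      # ...but years 2 and 3 are stuck at $1.4M each: adding another multi-year
      # project would push year 1 past TRANCHE, so the full $6M cannot be spent.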

    We find it relatively likely that some of the projects funded by this RFP would have received funding from other sources if they had not been successful here. However, we do not consider this a major concern. First, while some projects would likely have found other funding, it’s far from clear that the majority would have. Second, it was important to us not only that the specific additional projects received funding, but also that the RFP be successful as a whole. This included ensuring that talented applicants not have their proposals rejected in a way that could cause them to feel demoralized or unmotivated to do research related to AI safety in the future, and funding the set of proposals that would best support the productive development of the field as a whole.

    The organization

    Overall, we have been impressed with what FLI has been able to achieve over the past six months, and are very happy with the outcome of the RFP.

    Points we were particularly pleased with:

    • The conference in Puerto Rico was organized quickly, brought together an impressive group of participants, and led to a decisive result.8
    • The RFP received a large number of what we (and Dario, our advisor) perceived as high quality proposals, which we consider especially impressive considering the relatively short timeline and the unusual topic.
    • We felt that the review panel was strong, and were happy with the decisions it made.

    There were some areas in which we found working with FLI challenging. We feel these areas were minor compared to FLI’s strengths listed above, but wish to explain them to give a sense of the considerations that went into this grant. We discuss them in the next section.

    Risks and reservations about this grant

    The most significant risk to the success of this grant is that the research it funds may simply turn out not to contribute usefully to the field of AI safety. We see this risk as intrinsic to the field as a whole at present, and not as specific to this grant; see this section of our writeup on risks from artificial intelligence for more detail on our thoughts on this front.

    There are two other notable reservations we had while making this grant:

    • We see some risk that any work in this field will increase the likelihood of counterproductive regulation being introduced in response to increased attention to the field. On balance, we expect that this grant and similar work will reduce the risk of this type of regulation, rather than increasing it; we have written more on our views on this question in this section of our cause writeup.
    • We find it fairly likely that some of the successful projects would have received funding from other sources if they had not been funded by this RFP. While this reduces this grant’s expected counterfactual impact on research output, we do not consider it a major concern; we also believe the indirect benefits on community and field-building are important.

    As a more minor consideration, we share some of the challenges of working with FLI as an indication of reservations we had during the process of considering the grant, though we don’t consider these to be major issues for the grant going forward.

    • There is a built-in tension in our relationships with potential grantees; they are both organizations fundraising from us as well as partners in achieving our goals. The latter role incentivized FLI to share information with us about possible weaknesses in the competing projects, so that we could provide maximally helpful input, whereas the former role incentivized FLI to present the proposals and review process to us in a positive light. Although FLI did share all proposal details and written panel reports with us, there were cases in which we felt that certain issues weren’t sufficiently called to our attention, and/or that the tension between fundraising and achieving goals hampered clear communication.
    • We first observed this dynamic while evaluating the first round of RFP proposals. Prior to selecting proposals for the second round, FLI asked us to provide a rough initial estimate of our likely level of funding so that they could decide how many second-round proposals to solicit. To aid our determination, we were given an estimate of the number of high-quality first-round proposals based on FLI’s polling of the reviewers (as well as access to the proposals themselves). We felt the proposals were strong overall, but that there were significantly fewer very strong proposals than this information had implied.

    Over the course of the RFP we encountered additional communication issues, although these were smaller and/or more ambiguous. Communication may also have been smoother if the RFP involved fewer short timelines and more advance planning, issues discussed immediately below.

    • We believe that parts of the process might have gone more smoothly, particularly in terms of communications with applicants, with a combination of more thorough advance planning, a less compressed schedule, and more input on the process from external experts who have experience running RFPs in the AI field. For example:
      • Some applicants commented that it was not clear what types of research were eligible for funding (although this is somewhat to be expected given the novelty of the overall topic).
      • The grants included a specific requirement that overhead costs not exceed 15%. However, some applicants were unsure when or whether they should negotiate this arrangement with their institution.
      • Based on input from first-round AI expert reviewers, finalists were specifically advised that proposals for less than $100,000 were more likely to be funded,9 putting downward pressure on budgets. This could have been better explained and communicated, as our impression is that the length of the requested proposals, the modest amount of funding available, and this budget pressure negatively affected the cost-benefit tradeoff of applying. We spoke with several applicants who were considering withdrawing from the process and raised this concern, although the specific applicants we spoke with did not withdraw. This may have contributed to some (in our opinion) strong proposals from leading institutions withdrawing between rounds, and may have been part of the reason that some cutting-edge topics in current AI research (e.g. deep learning) were somewhat under-represented among the projects funded. We did not have the opportunity to discuss these applicants’ reasons for withdrawal.
    • We consider FLI’s model as an all-volunteer organization to be somewhat risky in general, although it has clear advantages in terms of maintaining relatively low expenses. An example of a specific concern is that FLI did not have a PR professional involved in the grant process, even though it generated a reasonable amount of press coverage. However, we should note that we have been pleased overall with how the PR around the grants has gone.

    Lessons learned

    There are two major lessons that we took away from the process of making this grant:

    1. We communicated poorly with FLI about the public announcement of our grant, which caused a relatively significant misunderstanding when it came time to make the announcement. We did not understand FLI’s planned timeline for issuing a press release, and failed to communicate that we had expected more time to review the release before publishing. In the future, we will put more emphasis on establishing both parties’ expectations for public announcements more clearly and further in advance. We view this as our mistake.
    2. In the future, when running a grants competition in a given academic field, we will place more emphasis on having people from within that field provide feedback on the application process and on communications to applicants before they are sent out. We believe this will be an effective way to stay on top of how communications with applicants will be perceived and interpreted, and how the program compares with expectations from various communities across the field.

    Plans for follow-up

    Follow-up expectations

    Grantees are required to update FLI on their progress once per year for the duration of their funding. We plan to discuss these updates with FLI, and intend to write publicly about our impressions of how the projects are progressing. Given that the full benefits of the research funded by this RFP relate to long-term field development, we do not expect to be able to say with confidence to what extent the projects are succeeding. However, we do plan to keep track of the projects’ research output, including, for example, publications and conference presentations, where appropriate.

    Sources

    • Future of Life Institute announcement of grantees: Source (archive)
    • Future of Life Institute grants competition 2015: Source (archive)
    • Future of Life Institute open letter: Source (archive)
    • Future of Life Institute press release, Jan 15 2015: Source (archive)
    • Future of Life Institute research priorities 2015: Source (archive)

    1. Future of Life Institute press release, Jan 15 2015

    2. Future of Life Institute grants competition 2015

    3. Future of Life Institute research priorities 2015

    4. “Grants will be made in two categories: Project Grants and Center Grants.” Future of Life Institute grants competition 2015

    5. Future of Life Institute announcement of grantees

    6. Transparency and meta-reasoning: As an AI system grows more capable, it does not necessarily remain possible for human observers to understand the internal processes which lead to the system’s outputs. The ability for humans to understand and check the reasoning behind predictions or decisions made by AI systems might become more important as the role these systems play in human society increases, but this ability may not be prioritized (by default) by research focusing on AI capabilities. An example of work in this area which will be funded by the present RFP is the project ‘Explanations for Complex AI Systems’, led by Manuela Veloso of Carnegie Mellon University.

    Robustness to context changes: Systems which are trained in one context may behave strangely or dangerously when placed in an unfamiliar situation. A robust and beneficial system would either continue performing well in a new situation, or recognize when it should not proceed using its existing models. An example of work in this area which will be funded by the present RFP is the project ‘Robust and Transparent Artificial Intelligence Via Anomaly Detection and Explanation’, led by Thomas Dietterich of Oregon State University.
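
    As a loose illustration of this idea (a generic sketch, not drawn from the funded project), a system can compare new inputs against simple statistics of its training data and abstain when an input looks unfamiliar:

      import numpy as np

      # Hypothetical example: flag inputs that fall far outside the training distribution.
      rng = np.random.default_rng(0)
      train = rng.normal(size=(1000, 4))          # stand-in for the system's training data
      mean, std = train.mean(axis=0), train.std(axis=0)

      def act_or_abstain(x, z_threshold=4.0):
          """Abstain when any feature lies far outside the range seen in training."""
          z = np.abs((x - mean) / std)
          return "abstain: unfamiliar context" if z.max() > z_threshold else "act"

      print(act_or_abstain(np.array([0.1, -0.3, 0.5, 0.0])))   # familiar input -> act
      print(act_or_abstain(np.array([0.1, -0.3, 9.0, 0.0])))   # novel context -> abstain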

    Calibration of uncertainty: This refers to the ability of an AI system to identify how confident it is in a given prediction or decision. It may be important that a system is able to recognize when it is uncertain and (for example) ask for human assistance, rather than moving forward with its best guess. An example of work in this area which will be funded by the present RFP is the project ‘Robust probabilistic inference engines for autonomous agents’, led by Stefano Ermon of Stanford University.
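
    One generic way to make this concrete (an illustrative sketch, not the funded project’s method) is to compare a system’s stated confidence against how often it is actually correct:

      import numpy as np

      def expected_calibration_error(confidences, correct, n_bins=10):
          """Average |stated confidence - observed accuracy|, weighted by bin size."""
          confidences = np.asarray(confidences, dtype=float)
          correct = np.asarray(correct, dtype=float)
          edges = np.linspace(0.0, 1.0, n_bins + 1)
          ece = 0.0
          for lo, hi in zip(edges[:-1], edges[1:]):
              in_bin = (confidences > lo) & (confidences <= hi)
              if in_bin.any():
                  ece += in_bin.mean() * abs(confidences[in_bin].mean() - correct[in_bin].mean())
          return ece

      # A system that reports 90% confidence but is right only 60% of the time is
      # overconfident; a well-calibrated system could instead defer to a human.
      print(expected_calibration_error([0.9] * 5, [1, 1, 1, 0, 0]))   # 0.3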

    General forecasting: AI’s impact on society is likely to increase. Work to consider which factors or indicators will be relevant as this happens seems likely to help ensure that future developments in the field of AI are beneficial for humanity. An example of work in this area which will be funded by the present RFP is the project ‘AI Impacts’, led by Katja Grace of the Machine Intelligence Research Institute.

    7. “This 2015 grants competition is the first wave of the $10M program announced this month, and will give grants totaling about $6M to researchers in academic and other non-profit institutions for projects up to three years in duration, beginning September 1, 2015.” Future of Life Institute grants competition 2015

    8. Future of Life Institute open letter

    Future of Life Institute press release, Jan 15 2015

    9. FLI used the following language in an email to applicants:

    “Because there were so many high-quality applications, there will be an incentive for our final reviewers to approve many small projects instead a few large ones; we expect that keeping your budget low can improve your chance of acceptance. For example, we anticipate a much higher success rate for $250k projects than for $500k projects, and another significant increase in acceptance rate for $100k projects over $250k projects.

    Please note that because of the low overhead rate, a grant of e.g. $250k from FLI should enable you to do the same research as a significantly larger grant than from e.g. a US federal research grant. Additionally, keep in mind that it may be a good long-term strategy to be accepted for a somewhat smaller project and use your success to secure funding for follow-up grants from FLI, which we hope to offer in the near future, instead of taking a longer shot for a bigger budget up-front. Please do whatever you can to lower your project’s budget in order to maximize your chances!”
