Organization Name: Future of Life Institute
Award Date: August 2015
Grant Amount: $1,186,000
Purpose: To support research proposals aimed at keeping artificial intelligence robust and beneficial.

Published: August 2015

Future of Life Institute staff reviewed this page prior to publication.

Note: This page was created using content published by Good Ventures and GiveWell, the organizations that created the Open Philanthropy Project, before this website was launched. Uses of “we” and “our” on this page may therefore refer to Good Ventures or GiveWell, but they still represent the work of the Open Philanthropy Project.

The Open Philanthropy Project awarded $1,186,000 to the Future of Life Institute (FLI) to support research proposals aimed at keeping artificial intelligence robust and beneficial. In the first half of 2015, FLI issued a Request for Proposals (RFP) to gather research proposals on artificial intelligence risk reduction. This RFP was the first wave of a $10 million program funded by Elon Musk, through which FLI planned to award approximately $6 million in grants.

The RFP solicited applications for research projects (by small teams or individuals) and centers (to be founded, focusing on policy and forecasting). After working closely with FLI during the receipt and evaluation of proposals, we determined that the total value of the high-quality project proposals submitted exceeded the available funding. Consequently, we made a grant of $1,186,000 to FLI to enable additional project proposals to be funded. We consider the value of this grant to comprise both the output of the additional projects funded and the less tangible benefits of supporting the first formal RFP in this field.

We were very pleased with the overall quality of the applications, and with the decisions made by the selection panel. The proposals that were funded by the RFP span a wide range of approaches, including research on ensuring that advanced AI systems that may be developed in the future are aligned with human values, managing the economic impacts of AI, and controlling autonomous weapons systems.

Rationale for the grant

The cause

In a March 2015 update on the Open Philanthropy Project, we identified ‘risks from artificial intelligence’ as a priority cause within our global catastrophic risk program area.

In brief, “risks from artificial intelligence (AI)” refers to risks that could potentially emerge as the capabilities of AI systems increase. It seems plausible that sometime this century, systems will be developed that can match human-level performance at a wide range of cognitive tasks. These advances could have extremely positive effects, but may also pose risks from intentional misuse or catastrophic accidents.

See our writeup of this issue for more detail on our view.

The Future of Life Institute’s 2015 Request for Proposals

In January 2015, the Future of Life Institute (FLI) organized a conference in Puerto Rico, called ‘The Future of AI: Opportunities and Challenges’. Following the conference, Elon Musk announced a $10 million donation to FLI to support “a global research program aimed at keeping AI beneficial to humanity.”1 Soon thereafter, FLI issued a Request for Proposals (RFP) to solicit proposals aiming to make AI systems robust and beneficial,2 and published alongside it a document expanding on research priorities within this area.3 The goal of the RFP was to allocate $6 million of Musk’s donation to the most promising proposals submitted, in two categories: “project grants” and “center grants”.4

We see this RFP as an important step in the development of the nascent field of AI safety research. It represents the first set of grant opportunities explicitly seeking to fund mainstream academic work on the subject, which we feel makes it an unusual opportunity for a funder to engage in early-stage field-building. We felt that it was important that the process go well, in the sense that strong proposals be funded, and that the academics who took part feel that applying was a good use of their time.

For this reason, we have been working closely with FLI since the announcement of the RFP. We wanted to follow what proposals were submitted, with the intention of potentially contributing additional funding if we believed that high quality proposals would otherwise go unfunded.

Our decision process

Our Program Officer for Global Catastrophic Risks, Howie Lempel, reviewed first round applications with the assistance of Dario Amodei, one of our technical advisors; other Open Philanthropy staff looked over some applications and were involved in discussions. Following this review, we determined that at the available level of funding ($6 million), a number of promising proposals would have to be rejected.

At this point (around May), we told FLI that we would plan to recommend a grant of at least $1 million towards the RFP. Telling FLI about our planned recommendation at this stage was intended to assist FLI in planning the review of second round proposals, and as an expression of good faith while we played an active role in the RFP process.

We discussed the RFP with a number of people, including applicants, AI researchers from within academia, and researchers from within the existing AI safety field. These conversations covered the value of different types of research, as well as participants’ perceptions of how the RFP was proceeding. We shared our impressions from these discussions with FLI staff, including making some small logistical suggestions to help the RFP run smoothly. Notes from these conversations are not available, though we give some details below on what we heard and what suggestions we made.

Representatives of the Open Philanthropy Project attended the review panel meeting in late June, where the final funding allocations were decided. There we focused on evaluating the proposals that would be affected by our decision on how much funding to allocate. We decided that contributing a total of $1,186,000 would enable the RFP to fund all the proposals that the panel had determined to be the strongest.

The proposals

The full list of proposals receiving funding, including summaries and technical abstracts, may be found here. That link includes all materials that can be shared publicly (we cannot share e.g. rejected proposals, though we give general comments on the process below).

FLI’s announcement of the grants gives the following overview of the awardees:5

The winning teams, chosen from nearly 300 applicants worldwide, will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.

The 37 projects being funded include:

  • Three projects developing techniques for AI systems to learn what humans prefer from observing our behavior, including projects at UC Berkeley and Oxford University
  • A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values
  • A project led by Manuela Veloso from Carnegie Mellon University on making AI systems explain their decisions to humans
  • A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial
  • A project headed by Heather Roff studying how to keep AI-driven weapons under “meaningful human control”
  • A new Oxford-Cambridge research center for studying AI-relevant policy

With the research field of AI safety at such an early stage, we feel that it would be premature to have confident expectations of which research directions will end up being relevant and important once AI systems become advanced enough to pose significant risks. As such, we were hoping that this RFP would provide an opportunity to support experts from a variety of backgrounds to tackle different aspects of the problem, and expand the range of approaches being taken. We were excited to see breadth across the research proposals submitted, which were diverse both relative to each other and to previous work on AI safety.

There are some specific research directions which we find particularly promising, though with low confidence. Some of these directions, which we were glad to see represented among the successful projects, include transparency and meta-reasoning, robustness to context changes, calibration of uncertainty, and general forecasting.6 We also believe that early attempts to explore what it looks like when AI systems attempt to learn what humans value may be useful in informing later work, if more advanced and general AI (which may need to have some internal representation of human values, in a way current systems do not) is eventually developed. A number of different approaches to this value-learning question were among the successful proposals, including projects led by Stuart Russell, Owain Evans, and Paul Christiano.

Overall, we were very impressed by the quality of the applicants and the proposals submitted. We were also pleased with the decisions made by the review panel; we feel that recommendations were consistently based on the quality of the proposals, without unduly prioritizing either projects from within mainstream academia, or from within the existing AI safety field.

Room for more funding

We believe it is very unlikely that the proposals funded due to this grant would have received funding from this RFP in the absence of the grant.

Elon Musk’s donation was the only other source of funding for this RFP, and this contribution was capped at $6 million from the outset.7 We considered the possibility that this amount would be increased late in the process if the quality of submissions was higher than expected. We made the grant once we had concluded that, despite the large number of excellent proposals, the total amount of funding for this set of grants was unlikely to be increased.

In addition, the funding provided by Musk was restricted such that it had to be disbursed evenly across three years, funding $2 million of proposals each year. Given that the proposals submitted could last one, two, or three years and generally requested an even amount of funding across their duration, this restriction meant that it would be essentially impossible for the RFP to allocate its entire budget if it funded any one- or two-year projects. Our funding meant that shorter projects could be funded without running into this issue. We also made an effort to determine whether some projects which had applied for one or two years of funding could be restructured to receive funds over a longer time horizon. Many of the project teams to whom we suggested this restructuring accepted it, and were able to receive funds which they might not have had access to without our involvement.

We find it relatively likely that some of the projects funded by this RFP would have received funding from other sources, if they had not been successful here. However, we do not consider this a major concern. One reason is that while some projects would have received funding, it’s far from clear that the majority would have. Secondly, it was important to us not only that the specific additional projects receive funding, but also that the RFP be successful as a whole. This included ensuring that talented applicants not have their proposals rejected in a way that could cause them to feel demoralized or unmotivated to do research related to AI safety in future, and funding the set of proposals which would best ensure the productive development of the field as a whole.

The organization

Overall, we have been impressed with what FLI has been able to achieve over the past six months, and are very happy with the outcome of the RFP.

Points we were particularly pleased with:

  • The conference in Puerto Rico was organized quickly, brought together an impressive group of participants, and led to a decisive result.8
  • The RFP received a large number of what we (and Dario, our advisor) perceived as high quality proposals, which we consider especially impressive considering the relatively short timeline and the unusual topic.
  • We felt that the review panel was strong, and were happy with the decisions it made.

There were some areas in which we found working with FLI challenging. We feel these areas were minor compared to FLI’s strengths listed above, but wish to explain them to give a sense of the considerations that went into this grant. We discuss them in the next section.

Risks and reservations about this grant

The most significant risk to the success of this grant is that the research it funds may simply turn out not to contribute usefully to the field of AI safety. We see this risk as intrinsic to the field as a whole at present, and not as specific to this grant; see this section of our writeup on risks from artificial intelligence for more detail on our thoughts on this front.

There are two other notable reservations we had while making this grant:

  • We see some risk that any work in this field will increase the likelihood of counterproductive regulation being introduced in response to increased attention to the field. On balance, we expect that this grant and similar work will reduce the risk of this type of regulation, rather than increasing it; we have written more on our views on this question in this section of our cause writeup.
  • We find it fairly likely that some of the successful projects would have received funding from other sources if they had not been funded by this RFP. While this reduces this grant’s expected counterfactual impact on research output, we do not consider it a major concern; we also believe the indirect benefits on community and field-building are important.

As a more minor consideration, we share some of the challenges of working with FLI as an indication of reservations we had during the process of considering the grant, though we don’t consider these to be major issues for the grant going forward.

  • There is a built-in tension in our relationships with potential grantees; they are both organizations fundraising from us as well as partners in achieving our goals. The latter role incentivized FLI to share information with us about possible weaknesses in the competing projects, so that we could provide maximally helpful input, whereas the former role incentivized FLI to present the proposals and review process to us in a positive light. Although FLI did share all proposal details and written panel reports with us, there were cases in which we felt that certain issues weren’t sufficiently called to our attention, and/or that the tension between fundraising and achieving goals hampered clear communication.
  • We first observed this dynamic while evaluating the first round of RFP proposals. Prior to selecting proposals for the second round, FLI asked us to provide a rough initial estimate of our likely level of funding so they could decide how many second-round proposals to solicit. To aid our determination, we were given an estimate of the number of high-quality first-round proposals, based on FLI’s polling of the reviewers, as well as access to the proposals themselves. We felt the proposals were strong overall, but that there were significantly fewer very strong proposals than this information had implied.

Over the course of the RFP we encountered additional communication issues, although these were smaller and/or more ambiguous. Communication may also have been smoother if the RFP involved fewer short timelines and more advance planning, issues discussed immediately below.

  • We believe that parts of the process, particularly communications with applicants, might have gone more smoothly with a combination of more thorough advance planning, a less compressed schedule, and more input on the process from external experts who have experience with RFPs in the AI field. For example:
    • Some applicants commented that it was not clear what types of research were eligible for funding (although this is somewhat to be expected given the novelty of the overall topic).
    • The grants included a specific requirement that overhead costs not exceed 15%. However, some applicants were unsure when or whether they should negotiate this arrangement with their institution.
    • Based on input from first-round AI expert reviewers, finalists were specifically advised that proposals for less than $100,000 were more likely to be funded,9 putting downward pressure on budgets. This could have been better explained and communicated, as our impression is that the length of the requested proposals, the modest amount of funding available, and this budget pressure negatively affected the cost-benefit tradeoff of applying. We spoke with several applicants who were considering withdrawing from the process and who raised this concern, although the specific applicants we spoke with did not withdraw. This may have contributed to some (in our opinion) strong proposals from leading institutions being withdrawn between rounds, and may have been part of the reason that some cutting-edge topics in current AI research (e.g. deep learning) were somewhat under-represented among the projects funded. We did not have the opportunity to discuss these applicants’ reasons for withdrawal.
  • We consider FLI’s model as an all-volunteer organization to be somewhat risky in general, although it has clear advantages in terms of maintaining relatively low expenses. An example of a specific concern is that FLI did not have a PR professional involved in the grant process, even though it generated a reasonable amount of press coverage. However, we should note that we have been pleased overall with how the PR around the grants has gone.

Lessons learned

There are two major lessons that we took away from the process of making this grant:

  1. We communicated poorly with FLI about the public announcement of our grant, which caused a relatively significant misunderstanding when it came time to make the announcement. We did not understand FLI’s planned timeline for issuing a press release, and failed to communicate that we had expected more time to review the release before publishing. In the future, we will put more emphasis on establishing both parties’ expectations for public announcements more clearly and further in advance. We view this as our mistake.
  2. In the future, when running a grants competition in a given academic field, we will place more emphasis on having people from within that field provide feedback on the application process and on communications to applicants before they are sent out. We believe this will be an effective way to stay on top of how communications with applicants will be perceived and interpreted, and how the program compares with expectations from various communities across the field.

Plans for follow-up

Follow-up expectations

Grantees are required to update FLI on their progress once per year for the duration of their funding. We plan to discuss these updates with FLI, and intend to write publicly about our impressions of how the projects are progressing. Given that the full benefits of the research funded by this RFP relate to long-term field development, we do not expect to be able to say with confidence to what extent the projects are succeeding. However, we do plan to keep track of the projects’ research output, including, for example, publications and conference presentations, where appropriate.

Relationship disclosures

We wish to disclose a number of prior relationships between Open Philanthropy staff and others involved in this RFP. Note that there is some overlap between Open Philanthropy Project staff and a broader community around “effective altruism,” which includes many people who invest significant time in the cause of potential risks from advanced artificial intelligence. Many people in this community have social or professional connections.

Daniel Dewey, a research fellow at the Future of Humanity Institute (FHI) who was highly involved in administering the RFP and one of our main points of contact for this grant, is a friend and former coworker of Nick Beckstead, a GiveWell Research Analyst who participated in decisions around this grant.

There are also existing relationships between GiveWell/Open Philanthropy staff and many of the people who applied for and received grants. For example:

  • One applicant is a long-time donor and fan of GiveWell.
  • One applicant is a friend of several staff, and is also a paid scientific advisor to the Open Philanthropy Project.
  • Five applicants are or have been associated with FHI as employees or Research Associates. Nick Beckstead was previously an employee and is now a Research Associate at FHI, and has social ties with many of their staff.
  • Two proposals were submitted on behalf of the Machine Intelligence Research Institute (MIRI). MIRI’s former Executive Director, Luke Muehlhauser, is now a GiveWell Research Analyst; MIRI’s current Executive Director, Nate Soares, lives in a shared house with Helen Toner and Ben Hoffman, GiveWell Research Analysts, and has social ties to several GiveWell staff.
  • Two proposals were submitted on behalf of the Centre for Effective Altruism, where Nick Beckstead is a trustee.
  • There are other, more minor cases of grant proposals being submitted by people who have some form of social tie to some GiveWell staff.


Sources

  • Future of Life Institute announcement of grantees: Source (archive)
  • Future of Life Institute grants competition 2015: Source (archive)
  • Future of Life Institute open letter: Source (archive)
  • Future of Life Institute press release, Jan 15 2015: Source (archive)
  • Future of Life Institute research priorities 2015: Source (archive)
  • 1. Future of Life Institute press release, Jan 15 2015
  • 2. Future of Life Institute grants competition 2015
  • 3. Future of Life Institute research priorities 2015
  • 4. “Grants will be made in two categories: Project Grants and Center Grants.” Future of Life Institute grants competition 2015
  • 5. Future of Life Institute announcement of grantees
  • 6.
    • Transparency and meta-reasoning: As an AI system grows more capable, it does not necessarily remain possible for human observers to understand the internal processes which lead to the system’s outputs. The ability for humans to understand and check the reasoning behind predictions or decisions made by AI systems might become more important as the role these systems play in human society increases, but this ability may not be prioritized (by default) by research focusing on AI capabilities. An example of work in this area which will be funded by the present RFP is the project ‘Explanations for Complex AI Systems’, led by Manuela Veloso of Carnegie Mellon University.
    • Robustness to context changes: Systems which are trained in one context may behave strangely or dangerously when placed in an unfamiliar situation. A robust and beneficial system would either continue performing well in a new situation, or recognize when it should not proceed using its existing models. An example of work in this area which will be funded by the present RFP is the project ‘Robust and Transparent Artificial Intelligence Via Anomaly Detection and Explanation’, led by Thomas Dietterich of Oregon State University.
    • Calibration of uncertainty: This refers to the ability of an AI system to identify how confident it is in a given prediction or decision. It may be important that a system is able to recognize when it is uncertain and (for example) ask for human assistance, rather than moving forward with its best guess. An example of work in this area which will be funded by the present RFP is the project ‘Robust probabilistic inference engines for autonomous agents’, led by Stefano Ermon of Stanford University.
    • General forecasting: AI’s impact on society is likely to increase. Work to consider which factors or indicators will be relevant as this happens seems likely to help ensure that future developments in the field of AI are beneficial for humanity. An example of work in this area which will be funded by the present RFP is the project ‘AI Impacts’, led by Katja Grace of the Machine Intelligence Research Institute.
  • 7. “This 2015 grants competition is the first wave of the $10M program announced this month, and will give grants totaling about $6M to researchers in academic and other non-profit institutions for projects up to three years in duration, beginning September 1, 2015.” Future of Life Institute grants competition 2015

  • 8.
  • 9.

    FLI used the following language in an email to applicants:

    “Because there were so many high-quality applications, there will be an incentive for our final reviewers to approve many small projects instead a few large ones; we expect that keeping your budget low can improve your chance of acceptance. For example, we anticipate a much higher success rate for $250k projects than for $500k projects, and another significant increase in acceptance rate for $100k projects over $250k projects.

    Please note that because of the low overhead rate, a grant of e.g. $250k from FLI should enable you to do the same research as a significantly larger grant than from e.g. a US federal research grant. Additionally, keep in mind that it may be a good long-term strategy to be accepted for a somewhat smaller project and use your success to secure funding for follow-up grants from FLI, which we hope to offer in the near future, instead of taking a longer shot for a bigger budget up-front. Please do whatever you can to lower your project’s budget in order to maximize your chances!”