Future of Life Institute — General Support (2020)

Panel entitled “Should we build superintelligence?” at the 2019 Beneficial AGI Conference. (Photo courtesy of FLI.)

Grant investigator: Daniel Dewey

This page was reviewed but not written by the grant investigator. Future of Life Institute staff also reviewed this page prior to publication.

Open Philanthropy recommended a grant of $176,000 to the Future of Life Institute (FLI) for general support. FLI is a research and outreach organization that works to mitigate global catastrophic risks. This funding is intended to support the production of educational materials on issues related to potential risks from advanced artificial intelligence.

This follows our October 2019 support.

Future of Life Institute — General Support (2019)

Exploring AGI Scenarios at the 2019 Beneficial AGI Conference. (Photo courtesy of FLI.)

Grant investigator: Daniel Dewey

This page was reviewed but not written by the grant investigator. Future of Life Institute staff also reviewed this page prior to publication.

Open Philanthropy recommended a grant of $100,000 to the Future of Life Institute (FLI) for general support. FLI is a research and outreach organization that works to mitigate global catastrophic risks. We have previously collaborated with FLI on issues related to potential risks from advanced artificial intelligence.

This follows our June 2018 support.

Future of Life Institute — General Support (2018)

Grant investigator: Nick Beckstead

This page was reviewed but not written by the grant investigator. Future of Life Institute staff also reviewed this page prior to publication.

The Open Philanthropy Project recommended a grant of $250,000 over two years to the Future of Life Institute (FLI) for general support. FLI is a research and outreach organization that works to mitigate global catastrophic risks. We have previously collaborated with FLI on issues related to potential risks from advanced artificial intelligence.

This grant is a renewal of our May 2017 support.

Future of Life Institute — General Support (2017)

Grant investigator: Nick Beckstead

This page was reviewed but not written by the grant investigator. Future of Life Institute staff also reviewed this page prior to publication.

The Open Philanthropy Project recommended a grant of $100,000 to the Future of Life Institute (FLI), a research and outreach organization that works to mitigate global catastrophic risks, for general support. This grant will primarily be used to help organize and administer a request for proposals for technical research related to potential risks from advanced artificial intelligence.

We have recommended two previous grants to FLI.

Future of Life Institute — General Support

(Image courtesy of Twitter)

Future of Life Institute staff reviewed this page prior to publication.

The Open Philanthropy Project recommended a grant of $100,000 to the Future of Life Institute (FLI) for general support.

FLI is a research and outreach organization that works to mitigate global catastrophic risks (GCRs). We have previously collaborated with FLI on issues related to potential risks from advanced artificial intelligence.

FLI is now seeking general operating support for the coming year. We have been impressed with FLI’s past work and are glad to support future efforts, especially since they may generate more opportunities for good work in this area. We do have some reservations about FLI’s current plans, discussed below.

Rationale for the grant


The Open Philanthropy Project has identified global catastrophic risks (GCRs) as one of the categories that we plan to prioritize in our grantmaking.

We have previously worked with the Future of Life Institute (FLI), a research and outreach organization that works to mitigate GCRs, on potential risks from advanced artificial intelligence (AI), one of our focus areas in this category. Last year, we worked with FLI to evaluate responses to a Request for Proposals (RFP) it issued, and made a grant of $1,186,000 to increase the number of high-quality proposals FLI was able to fund.

Grant details

The major activities FLI has planned for 2016 (for which it also plans to do additional fundraising) include:

  • News operation: FLI recently hired a staffer dedicated to curating and writing news content related to GCRs for the new news section of its website. Supporting its two-person communications staff for one year will require approximately $150,000.
  • Nuclear weapons campaign: FLI plans to launch and run a campaign to encourage individuals and organizations (e.g. universities and municipalities) not to invest in the production of new nuclear weapons systems. This campaign is estimated to cost approximately $100,000 over the next year, including about $50,000 for financial research to identify companies investing in nuclear weapons, $45,000 for several part-time on-site university student organizers, and $5,000 for incidental expenses.
  • AI safety conference: In 2015, FLI organized a conference on AI safety, held in Puerto Rico. It plans to host another in 2016, which it estimates will cost at least $150,000. FLI told us that it expects to be able to raise the required funding for this conference from other sources.
  • AI conference travel: FLI will support travel expenses related to any symposia, panels, and/or discussions that it helps organize on AI safety, and support FLI-affiliated researchers to travel to several major machine learning conferences this year. FLI plans to spend approximately $20,000 on this.

The case for the grant

In organizing its 2015 AI safety conference (which we attended), FLI demonstrated a combination of network, ability to execute, and values that impressed us. We felt that the conference was well-organized, attracted the attention of high-profile individuals who had not previously demonstrated an interest in AI safety, and seemed to lead many of those individuals to take the issue more seriously. An open letter issued following the conference, calling for “expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial”, was signed by a number of prominent figures in machine learning and the broader scientific community.1 The conference also allowed FLI to mobilize private funding2, which it used to launch an RFP that resulted in an unexpectedly high number of strong proposals.

Although we have some reservations about FLI’s plans for this year, we believe that they have the potential to be successful, which could create opportunity for further good work related to GCRs. Details follow.

News operations

We feel that media discussion of potential risks from advanced artificial intelligence is often unclear and poorly informed. Having a news operation dedicated to improving the quality of the coverage on this issue could be helpful. FLI plans to have its high-profile advisory board available for comment, and we can imagine a number of scenarios in which this may improve the quality of discussion on these issues.

AI safety

We believe FLI’s plans related to AI conferences (both its own and others) are worthwhile efforts to continue to allow AI researchers to have more reasonable conversations about the future of AI and possible risks. This looks to us like a positive development for the field.

Nuclear weapons campaign

We are most uncertain about FLI’s proposed nuclear weapons campaign, for reasons stated in a later section, but we do see a case for it. We believe that nuclear weapons advocacy is a neglected space in need of new voices and that FLI is well positioned to work on a university divestment campaign. The organization has strong ties to many prominent academics, as well as existing relationships with campus-based effective altruism groups who might welcome the opportunity to do concrete work in this space.

If this type of advocacy can achieve small wins on nuclear weapons, we think this might better position FLI to do more impactful larger-scale advocacy work on this issue in the future.

Whether or not the campaign succeeds, this work will help us better understand whether FLI can execute effectively on issues related to nuclear weapons policy. If the campaign goes well, we may feel more comfortable supporting FLI on more ambitious efforts in this space going forward.

Concerns about the grant

Although we have been impressed with FLI’s capacity to organize and execute, we have some concern that its capacity to effect change outside of AI-related issues may be more limited. FLI was able to bring attention and credibility to potential risks from AI, but it is not clear that this is what is most needed on other topics.

We have some reservations about FLI’s planned news operations: the public content FLI has put out so far does not strike us as highly likely to improve press coverage. That said, our impression could be wrong, and FLI’s work in this category is still at an early stage.

We also have reservations about FLI’s approach to its nuclear weapons campaign, which we believe is unlikely to lead to significant change on this issue. The theory of change implied by FLI’s plans for this campaign seems to be that increasing the stigma attached to nuclear weapons would push decision-makers toward policies that call for fewer nuclear weapons. We would guess that there is likely to be only a weak link between success in the divestment campaign and broader attitudes toward nuclear weapons policy. Note that we have done some work to understand the space of nuclear weapons policy.

In addition, we are somewhat concerned that if FLI does achieve success on this issue, it may be challenging to recruit staff that would be needed to transform its efforts into a broader and sustained campaign.

Room for more funding

In the absence of our funding, we believe it is fairly likely (but still uncertain) that FLI would be able to raise most or all of the funds it requires from other donors. We expect that these donors would largely have similar values and priorities to us (e.g. donors from the effective altruism community), and are therefore not overly concerned by this possibility. With this grant, we expect FLI to be highly likely to raise the funds it requires.

Plans for learning and follow-up

Goals for the grant

This grant will support an organization we believe has done good work in the past and allow it to expand its work. We hope that the grant will help us learn more about FLI’s capacity to do good work beyond potential risks from advanced artificial intelligence.

Key questions for follow-up

We expect to have a conversation with FLI staff every 3-6 months for the next 12 months. After that, we plan to consider renewal. Although we recognize that not all of FLI’s planned activities may have come to fruition within 12 months, we believe that we will be able to get a good sense of how they have gone so far. Questions we might seek to answer include:

  • Is the coverage of GCRs on the news page (both original and curated) of high quality?
  • Is FLI a recognized source of information on GCRs?
  • Has the nuclear weapons campaign received media coverage?
  • Have any universities or other investors demonstrated increased interest in the issue of nuclear weapons, or shown any indication that they are considering divestment?
  • Has the presence of AI researchers affiliated with FLI at major machine learning conferences had an impact on the nature of the discussions at these conferences?

Our process

Following our collaboration last year, we kept in touch with FLI regarding its funding situation and plans for future activities.


FLI Open Letter Source (archive)
FLI press release, Jan 15 2015 Source (archive)

Future of Life Institute — Artificial Intelligence Risk Reduction

(Photo courtesy of Twitter)

Future of Life Institute staff reviewed this page prior to publication.

Note: This page was created using content published by Good Ventures and GiveWell, the organizations that created the Open Philanthropy Project, before this website was launched. Uses of “we” and “our” on this page may therefore refer to Good Ventures or GiveWell, but they still represent the work of the Open Philanthropy Project.

The Open Philanthropy Project recommended $1,186,000 to the Future of Life Institute (FLI) to support research proposals aimed at keeping artificial intelligence robust and beneficial. In the first half of 2015, FLI issued a Request for Proposals (RFP) to gather research proposals on artificial intelligence risk reduction. This RFP was the first wave of a $10 million program funded by Elon Musk; the RFP planned to make grants worth approximately $6 million.

The RFP solicited applications for research projects (by small teams or individuals) and centers (to be founded focusing on policy and forecasting). After working closely with FLI during the receipt and evaluation of proposals, we determined that the value of high-quality project proposals submitted was greater than the available funding. Consequently, we made a grant of $1,186,000 to FLI to enable additional project proposals to be funded. We consider the value of this grant to comprise both the output of the additional projects funded and the less tangible benefits of supporting the first formal RFP in this field.

We were very pleased with the overall quality of the applications, and with the decisions made by the selection panel. The proposals that were funded by the RFP span a wide range of approaches, including research on ensuring that advanced AI systems that may be developed in the future are aligned with human values, managing the economic impacts of AI, and controlling autonomous weapons systems.

Rationale for the grant

The cause

In a March 2015 update on the Open Philanthropy Project, we identified ‘risks from artificial intelligence’ as a priority cause within our global catastrophic risk program area.

In brief, “risks from artificial intelligence (AI)” refers to risks that could potentially emerge as the capabilities of AI systems increase. It seems plausible that sometime this century, systems will be developed that can match human-level performance at a wide range of cognitive tasks. These advances could have extremely positive effects, but may also pose risks from intentional misuse or catastrophic accidents.

See our writeup of this issue for more detail on our view.

The Future of Life Institute’s 2015 Request for Proposals

In January 2015, the Future of Life Institute (FLI) organized a conference in Puerto Rico, called ‘The Future of AI: Opportunities and Challenges’. Following the conference, Elon Musk announced a $10 million donation to FLI to support “a global research program aimed at keeping AI beneficial to humanity.”1 Soon thereafter, FLI issued a Request for Proposals (RFP) to solicit proposals aiming to make AI systems robust and beneficial,2 and published alongside it a document expanding on research priorities within this area.3 The goal of the RFP was to allocate $6 million of Musk’s donation to the most promising proposals submitted, in two categories: “project grants” and “center grants”.4

We see this RFP as an important step in the development of the nascent field of AI safety research. It represents the first set of grant opportunities explicitly seeking to fund mainstream academic work on the subject, which we feel makes it an unusual opportunity for a funder to engage in early-stage field-building. We felt that it was important that the process go well, in the sense that strong proposals be funded, and that the academics who took part feel that applying was a good use of their time.

For this reason, we have been working closely with FLI since the announcement of the RFP. We wanted to follow what proposals were submitted, with the intention of potentially contributing additional funding if we believed that high quality proposals would otherwise go unfunded.

Our decision process

Our Program Officer for Global Catastrophic Risks, Howie Lempel, reviewed first round applications with the assistance of Dario Amodei, one of our technical advisors; other Open Philanthropy staff looked over some applications and were involved in discussions. Following this review, we determined that at the available level of funding ($6 million), a number of promising proposals would have to be rejected.

At this point (around May), we told FLI that we would plan to recommend a grant of at least $1 million towards the RFP. Telling FLI about our planned recommendation at this stage was intended to assist FLI in planning the review of second round proposals, and as an expression of good faith while we played an active role in the RFP process.

We discussed the RFP with a number of people, including applicants, AI researchers from within academia, and researchers from within the existing AI safety field. These conversations included discussion of the value of different types of research, as well as asking for their perceptions of how the RFP was proceeding. We shared our impressions from these discussions with FLI staff, including making some small logistical suggestions to help the RFP run smoothly. Conversation notes from these conversations are not available, though there are some details on what we heard and what suggestions we made below.

Representatives of the Open Philanthropy Project attended the review panel meeting in late June, where the final funding allocations were decided. There we focused on evaluating the proposals which would be affected by our decision on how much funding to allocate. We decided that contributing a total of $1,186,000 would enable the RFP to fund all the proposals that the panel had determined to be the strongest.

The proposals

The full list of proposals receiving funding, including summaries and technical abstracts, may be found here. That link includes all materials that can be shared publicly (we cannot share e.g. rejected proposals, though we give general comments on the process below).

FLI’s announcement of the grants gives the following overview of the awardees:5

The winning teams, chosen from nearly 300 applicants worldwide, will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.

The 37 projects being funded include:

  • Three projects developing techniques for AI systems to learn what humans prefer from observing our behavior, including projects at UC Berkeley and Oxford University
  • A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values
  • A project led by Manuela Veloso from Carnegie Mellon University on making AI systems explain their decisions to humans
  • A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial
  • A project headed by Heather Roff studying how to keep AI-driven weapons under “meaningful human control”
  • A new Oxford-Cambridge research center for studying AI-relevant policy

With the research field of AI safety at such an early stage, we feel that it would be premature to have confident expectations of which research directions will end up being relevant and important once AI systems become advanced enough to pose significant risks. As such, we were hoping that this RFP would provide an opportunity to support experts from a variety of backgrounds to tackle different aspects of the problem, and expand the range of approaches being taken. We were excited to see breadth across the research proposals submitted, which were diverse both relative to each other and to previous work on AI safety.

There are some specific research directions which we find particularly promising, though with low confidence. Some of these directions – which we were glad to see represented among the successful projects – include transparency and meta-reasoning, robustness to context changes, calibration of uncertainty, and general forecasting.6 We also believe that early attempts to explore what it looks like when AI systems attempt to learn what humans value may be useful in informing later work, if more advanced and general AI (which may need to have some internal representation of human values, in a way current systems do not) is eventually developed. A number of different approaches to this value-learning question were among the successful proposals, including projects led by Stuart Russell, Owain Evans, and Paul Christiano.

Overall, we were very impressed by the quality of the applicants and the proposals submitted. We were also pleased with the decisions made by the review panel; we feel that recommendations were consistently based on the quality of the proposals, without unduly prioritizing either projects from within mainstream academia, or from within the existing AI safety field.

Room for more funding

We believe it is very unlikely that the proposals funded due to this grant would have received funding from this RFP in the absence of the grant.

Elon Musk’s donation was the only other source of funding for this RFP, and this contribution was capped at $6 million from the outset.7 We considered the possibility that this amount would be increased late in the process if the quality of submissions was higher than expected. We made the grant once we had concluded that, despite the large number of excellent proposals, the total amount of funding for this set of grants was unlikely to be raised.

In addition, the funding provided by Musk was restricted such that it had to be disbursed evenly across three years, funding $2 million of proposals each year. Given that the proposals submitted could last one, two, or three years and generally requested an even amount of funding across their duration, this restriction meant that it would be essentially impossible for the RFP to allocate its entire budget if it funded any one- or two-year projects. Our funding meant that shorter projects could be funded without running into this issue. We also made an effort to determine whether some projects which had applied for one or two years of funding could be restructured to receive funds over a longer time horizon. A large number of the projects which we suggested be restructured in this way accepted the suggestion, and were able to receive funds which they might not have had access to without our involvement.

We find it relatively likely that some of the projects funded by this RFP would have received funding from other sources, if they had not been successful here. However, we do not consider this a major concern. One reason is that while some projects would have received funding, it’s far from clear that the majority would have. Secondly, it was important to us not only that the specific additional projects received funding, but also that the RFP be successful as a whole. This included ensuring that talented applicants not have their proposals rejected in a way that could cause them to feel demoralized or unmotivated to do research related to AI safety in future, and funding the set of proposals which would best ensure the productive development of the field as a whole.

The organization

Overall, we have been impressed with what FLI has been able to achieve over the past six months, and are very happy with the outcome of the RFP.

Points we were particularly pleased with:

  • The conference in Puerto Rico was organized quickly, brought together an impressive group of participants, and led to a decisive result.8
  • The RFP received a large number of what we (and Dario, our advisor) perceived as high quality proposals, which we consider especially impressive considering the relatively short timeline and the unusual topic.
  • We felt that the review panel was strong, and were happy with the decisions it made.

There were some areas in which we found working with FLI challenging. We feel these areas were minor compared to FLI’s strengths listed above, but wish to explain them to give a sense of the considerations that went into this grant. We discuss them in the next section.

Risks and reservations about this grant

The most significant risk to the success of this grant is that the research it funds may simply turn out not to contribute usefully to the field of AI safety. We see this risk as intrinsic to the field as a whole at present, and not as specific to this grant; see this section of our writeup on risks from artificial intelligence for more detail on our thoughts on this front.

There are two other notable reservations we had while making this grant:

  • We see some risk that any work in this field will increase the likelihood of counterproductive regulation being introduced in response to increased attention to the field. On balance, we expect that this grant and similar work will reduce the risk of this type of regulation, rather than increasing it; we have written more on our views on this question in this section of our cause writeup.
  • We find it fairly likely that some of the successful projects would have received funding from other sources if they had not been funded by this RFP. While this reduces this grant’s expected counterfactual impact on research output, we do not consider it a major concern; we also believe the indirect benefits on community and field-building are important.

As a more minor consideration, we share some of the challenges of working with FLI as an indication of reservations we had during the process of considering the grant, though we don’t consider these to be major issues for the grant going forward.

  • There is a built-in tension in our relationships with potential grantees; they are both organizations fundraising from us as well as partners in achieving our goals. The latter role incentivized FLI to share information with us about possible weaknesses in the competing projects, so that we could provide maximally helpful input, whereas the former role incentivized FLI to present the proposals and review process to us in a positive light. Although FLI did share all proposal details and written panel reports with us, there were cases in which we felt that certain issues weren’t sufficiently called to our attention, and/or that the tension between fundraising and achieving goals hampered clear communication.
  • We first observed this dynamic while evaluating the first round of RFP proposals. Prior to selecting proposals for the second round, FLI asked us to provide a rough initial estimate of our likely level of funding so that it could decide how many second-round proposals to solicit. To aid our determination, we were given an estimate of the number of high-quality first round proposals based on FLI’s polling of the reviewers (as well as access to the proposals themselves). We felt the proposals were strong overall, but that there were significantly fewer very strong proposals than this information had implied.

Over the course of the RFP we encountered additional communication issues, although these were smaller and/or more ambiguous. Communication might also have been smoother had the RFP involved a less compressed schedule and more advance planning, issues discussed immediately below.

  • We believe that parts of the process might have gone more smoothly, particularly in terms of communications with applicants, by a combination of more thorough advance planning, a less compressed schedule, and more input on the process from external experts who have experience with RFPs within the AI field. For example:
    • Some applicants commented that it was not clear what types of research were eligible for funding (although this is somewhat to be expected given the novelty of the overall topic).
    • The grants included a specific requirement that overhead costs not exceed 15%. However, some applicants were unsure when or whether they should negotiate this arrangement with their institution.
    • Based on input from first-round AI expert reviewers, finalists were specifically advised that proposals for less than $100,000 were more likely to be funded,9 putting downward pressure on budgets. This could have been better explained and communicated: our impression is that the length of the requested proposals, the modest amount of funding available, and this budget pressure together worsened the cost-benefit tradeoff of applying. We spoke with several applicants who were considering withdrawing from the process and raised this concern; although the specific applicants we spoke with did not withdraw, this dynamic may have contributed to some (in our opinion) strong proposals from leading institutions being withdrawn between rounds, and may partly explain why some cutting-edge topics in current AI research (e.g. deep learning) were somewhat under-represented among the projects funded. We did not have the opportunity to discuss these applicants’ reasons for withdrawal.
  • We consider FLI’s model as an all-volunteer organization to be somewhat risky in general, although it has clear advantages in terms of maintaining relatively low expenses. An example of a specific concern is that FLI did not have a PR professional involved in the grant process, even though it generated a reasonable amount of press coverage. However, we should note that we have been pleased overall with how the PR around the grants has gone.

Lessons learned

There are two major lessons that we took away from the process of making this grant:

  1. We communicated poorly with FLI about the public announcement of our grant, which caused a relatively significant misunderstanding when it came time to make the announcement. We did not understand FLI’s planned timeline for issuing a press release, and failed to communicate that we had expected more time to review the release before publishing. In the future, we will put more emphasis on establishing both parties’ expectations for public announcements more clearly and further in advance. We view this as our mistake.
  2. In the future, when running a grants competition in a given academic field, we will place more emphasis on having people from within that field provide feedback on the application process and on communications to applicants before they are sent out. We believe this will be an effective way to stay on top of how communications with applicants will be perceived and interpreted, and how the program compares with expectations from various communities across the field.

Plans for follow-up

Follow-up expectations

Grantees are required to update FLI on their progress once per year for the duration of their funding. We plan to discuss these updates with FLI, and intend to write publicly about our impressions of how the projects are progressing. Given that the full benefits of the research funded by this RFP relate to long-term field development, we do not expect to be able to say with confidence to what extent the projects are succeeding. However, we do plan to keep track of the projects’ research output, including, for example, publications and conference presentations, where appropriate.


Future of Life Institute announcement of grantees Source (archive)
Future of Life Institute grants competition 2015 Source (archive)
Future of Life Institute open letter Source (archive)
Future of Life Institute press release, Jan 15 2015 Source (archive)
Future of Life Institute research priorities 2015 Source (archive)