Future of Humanity Institute — Research Scholars Programs (2021)

Grant Investigator: Committee for Effective Altruism Support

This page was reviewed but not written by members of the committee. Future of Humanity Institute staff also reviewed this page prior to publication.

Open Philanthropy recommended a grant of $3,121,861 over two years to the Future of Humanity Institute (FHI), via the University of Oxford, to support its early career researcher programs. FHI is a multidisciplinary research institute working on global catastrophic risks at the University of Oxford.

This follows our May 2020 support and falls under our work aimed at growing the community of people doing research on humanity’s long-run future. While we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter, our ultimate grant figure was set by the aggregated judgments of our committee reviewing the grant proposal.

Future of Humanity Institute — Research Scholars Programme


Grant Investigator: Committee for Effective Altruism Support

This page was reviewed but not written by members of the committee. Future of Humanity Institute staff also reviewed this page prior to publication.


Open Philanthropy recommended two grants totaling £1,298,023 ($1,586,224 at the time of conversion) to the Future of Humanity Institute (FHI) to support hiring for its Research Scholars Programme. FHI is a multidisciplinary research institute working on global catastrophic risks at the University of Oxford.

These grants follow our July 2018 support. While we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter, our ultimate grant figure was set by the aggregated judgments of our committee reviewing the grant proposal.

Future of Humanity Institute — New DPhil Positions


Grant Investigator: Committee for Effective Altruism Support

This page was reviewed but not written by members of the committee. Future of Humanity Institute staff also reviewed this page prior to publication.


Open Philanthropy recommended a grant of $939,263 to the Future of Humanity Institute (FHI), a multidisciplinary research institute working on global catastrophic risks at the University of Oxford, to support new DPhil positions. This grant follows our July 2018 support, which was intended to support work on risks from advanced artificial intelligence, biosecurity and pandemic preparedness, and macrostrategy.

While we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter, our ultimate grant figure was set by the aggregated judgments of our committee reviewing the grant proposal.

The grant amount was updated in October 2020.

Future of Humanity Institute — Work on Global Catastrophic Risks

Staff at the Future of Humanity Institute, a multidisciplinary research institute housed at the University of Oxford. (Photo courtesy of FHI)

Grant Investigator: Nick Beckstead

This page was reviewed but not written by the grant investigator. Future of Humanity Institute staff also reviewed this page prior to publication.

The Open Philanthropy Project recommended a series of awards totaling up to £13,428,434 over three years to the Future of Humanity Institute (FHI), a multidisciplinary research institute working on global catastrophic risks at the University of Oxford. Of the total amount, $12,250,810 (at the time of conversion) has been committed to date, with the remainder conditional on successful hiring; these totals may be updated in the future. These funds will support work on risks from advanced artificial intelligence, biosecurity and pandemic preparedness, and macrostrategy. The largest pieces of the omnibus award package will allow FHI to recruit and hire for an education and training program led by Owen Cotton-Barratt, and to retain and attract talent in biosecurity research and FHI’s Governance of AI program.

We previously recommended grants to FHI for biosecurity in 2016 and for general support in 2017, which helped fund part of the Governance of AI program.

The grant amount was updated in November 2018, June 2019, September 2019, December 2020, and January 2021.

Future of Humanity Institute — General Support

Published: March 2017

Future of Humanity Institute staff reviewed this page prior to publication.

The Open Philanthropy Project recommended a grant of £1,620,452 ($1,995,425 at the time of conversion) in general support to the Future of Humanity Institute (FHI). FHI plans to use this grant primarily to increase its reserves and to make a number of new hires. This grant also conceptually encompasses an earlier grant we recommended to support FHI in hiring Dr. Piers Millett for research on biosecurity and pandemic preparedness; this page describes the case for both grants.

1. Background

The Future of Humanity Institute (FHI) is a research center based at the University of Oxford that focuses on strategic analysis of existential risks, especially potential existential risks from advanced artificial intelligence (AI). This grant fits into our potential risks from advanced AI focus area and within the category of global catastrophic risks in general, and is also relevant to our interest in growing and empowering the effective altruism community.

2. About the grant

2.1 Budget and room for more funding

FHI plans to use £1 million of this grant to increase its unrestricted reserves from about £327,000 to £1.327 million. It plans to use the remainder to support new junior staff members.

FHI’s revenues and expenses are currently around £1 million per year, and the bulk of its funding comes from academic grants (which are lumpy and hard to predict). It seems to us that having a little over a year of unrestricted reserves will help with FHI’s planning. For example, it might allow FHI to make offers to potential staff members that are not conditional on whether FHI receives a grant it has applied for, or to promise staff currently funded by other grants that funding will be available to support their work when one source of grant funding has ended and another has not yet begun. Much less of this would be possible with £327,000 in unrestricted funds.

FHI’s other current funders include:

  • The Future of Life Institute
  • The European Research Council
  • The Leverhulme Trust
  • Several individual donors

2.2 Case for the grant

Nick Beckstead, who investigated this grant (“Nick” throughout this page), sees FHI as having a number of strengths as an organization:

  • Nick believes that Professor Nick Bostrom, Director of FHI, is a particularly original and insightful thinker on the topics of AI, global catastrophic risks, and technology strategy in general. Nick views Professor Bostrom’s book, Superintelligence, as FHI’s most significant output so far and the best strategic analysis of potential risks from advanced AI to date. Nick’s impression is that Superintelligence has helped to raise awareness of potential risks from advanced AI. Nick considers the possibility that this grant will boost Professor Bostrom’s output to be a key potential benefit.
  • Nick finds FHI’s researchers in general to have impressive breadth, values aligned with effective altruism, and philosophical sophistication.
  • FHI collaborates with Google DeepMind on technical work focused on addressing potential risks from advanced AI. We believe this is valuable because DeepMind is generally considered (including by us) to be one of the leading organizations in AI development.
  • FHI constitutes a shovel-ready opportunity to support work on potential risks from advanced AI.

We believe that hiring Dr. Millett will address what we view as a key staffing need for FHI (expertise in biosecurity and pandemic preparedness). We have also been pleased to see that some of FHI’s more recent hires have backgrounds in machine learning.

We are not aware of other groups with a comparable focus on, and track record of, exploring strategic questions related to potential risks from advanced AI. We think FHI is one of the most generally impressive (in terms of staff and track record) organizations with a strong focus on effective altruism.

2.3 Risks and reservations

We have some concerns about FHI as an organization:

  • Our impression is that FHI’s junior researchers operate without substantial attention from a “principal investigator”-type figure. While giving researchers independence in this way may have benefits, we also believe that researchers might select better projects or execute them more effectively with additional guidance.
  • It seems to us that a substantial fraction of FHI’s most impactful work is due to Professor Nick Bostrom. Since Professor Bostrom’s own work is already funded and since he offers relatively limited guidance to junior research staff, the impact of additional funding may scale strongly sublinearly. (Our understanding is that Professor Bostrom’s allocation of attention is a deliberate choice, and does not necessarily seem unreasonable to us.)
  • FHI has relatively limited experience with policy analysis and advocacy.
  • Our impression is that FHI’s technical proficiency in machine learning and biotechnology is somewhat limited, which we believe may reduce its credibility when writing about these topics and/or cause it to overlook important points in these areas. We are optimistic that recent and forthcoming hires, discussed above, will be helpful on this front.

3. Plans for learning and follow-up

3.1 Key questions for follow-up

  • What new staff has FHI hired, and what have they produced?
  • What has been the most important research output from FHI’s staff?
  • Have FHI’s additional reserves been useful, and if so, how?
  • How are FHI’s collaborations with industrial AI labs going?
  • Has FHI had success applying for other grants?
  • What has Dr. Millett produced during his time at FHI?

Future of Humanity Institute — Biosecurity and Pandemic Preparedness

Published: March 2017

The Open Philanthropy Project recommended a grant of £88,922 ($115,652 at the time of conversion) to the Future of Humanity Institute (FHI), to support the hiring of Dr. Piers Millett to work on biosecurity and pandemic preparedness. Conceptually, we consider this grant part of a larger grant to FHI, which was made later for logistical reasons. We laid out the case for this grant in our writeup of the larger grant.