
Future of Humanity Institute — General Support

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Future of Humanity Institute
  • Amount: $1,995,425
  • Award Date: March 2017


    Published: March 2017

    Future of Humanity Institute staff reviewed this page prior to publication.

    The Open Philanthropy Project recommended a grant of £1,620,452 ($1,995,425 at the time of conversion) in general support to the Future of Humanity Institute (FHI). FHI plans to use this grant primarily to increase its reserves and to make a number of new hires; this grant also conceptually encompasses an earlier grant we recommended to support FHI in hiring Dr. Piers Millett for research on biosecurity and pandemic preparedness, and this page describes the case for both this grant and that earlier grant.

    1. Background

    The Future of Humanity Institute (FHI) is a research center based at the University of Oxford that focuses on strategic analysis of existential risks, especially potential existential risks from advanced artificial intelligence (AI). This grant fits into our potential risks from advanced AI focus area and within the category of global catastrophic risks in general, and is also relevant to our interest in growing and empowering the effective altruism community.

    2. About the grant

    2.1 Budget and room for more funding

    FHI plans to use £1 million of this grant to increase its unrestricted reserves from about £327,000 to £1.327 million. It plans to use the remainder to support new junior staff members.

    FHI’s annual revenues and expenses are currently around £1 million per year, and the bulk of its funding comes from academic grants, which are lumpy and hard to predict. It seems to us that having a little over a year of unrestricted reserves will help with FHI’s planning. For example, it might allow FHI to make offers to prospective staff members that are not conditional on whether FHI receives a grant it has applied for, or to assure staff currently funded by other grants that funding will be available to support their work when one source of grant funding has ended and another has not yet begun. Much less of this would be possible with £327,000 in unrestricted funds.
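    The reserves arithmetic above can be checked directly. A minimal sketch using the round figures quoted on this page (the £1 million annual expense figure is approximate, so the runway estimate is too):

```python
# Figures from this page (GBP). The expense figure is described only as
# "around £1 million per year", so treat the result as a rough estimate.
grant_to_reserves_gbp = 1_000_000   # portion of the grant allocated to reserves
existing_reserves_gbp = 327_000     # unrestricted reserves before the grant
annual_expenses_gbp = 1_000_000     # approximate annual expenses

# New reserve level and the implied runway in years of unrestricted funding.
new_reserves_gbp = existing_reserves_gbp + grant_to_reserves_gbp
runway_years = new_reserves_gbp / annual_expenses_gbp

print(new_reserves_gbp, round(runway_years, 2))
```

    This reproduces the £1.327 million figure and the "a little over a year" of runway described above.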

    FHI’s other current funders include:

    • The Future of Life Institute
    • The European Research Council
    • The Leverhulme Trust
    • Several individual donors

    2.2 Case for the grant

    Nick Beckstead (“Nick” throughout this page), who investigated this grant, sees FHI as having a number of strengths as an organization:

    • Nick believes that Professor Nick Bostrom, Director of FHI, is a particularly original and insightful thinker on the topics of AI, global catastrophic risks, and technology strategy in general. Nick views Professor Bostrom’s book, Superintelligence, as FHI’s most significant output so far and the best strategic analysis of potential risks from advanced AI to date. Nick’s impression is that Superintelligence has helped to raise awareness of potential risks from advanced AI. Nick considers the possibility that this grant will boost Professor Bostrom’s output to be a key potential benefit.
    • Nick finds FHI’s researchers in general to have impressive breadth, values aligned with effective altruism, and philosophical sophistication.
    • FHI collaborates with Google DeepMind on technical work focused on addressing potential risks from advanced AI. We believe this is valuable because DeepMind is generally considered (including by us) to be one of the leading organizations in AI development.
    • FHI constitutes a shovel-ready opportunity to support work on potential risks from advanced AI.

    We believe that hiring Dr. Millett will address what we view as a key staffing need for FHI (expertise in biosecurity and pandemic preparedness). We have also been pleased to see that some of FHI’s more recent hires have backgrounds in machine learning.

    We are not aware of other groups with a comparable focus on, and track record of, exploring strategic questions related to potential risks from advanced AI. We think FHI is one of the most generally impressive (in terms of staff and track record) organizations with a strong focus on effective altruism.

    2.3 Risks and reservations

    We have some concerns about FHI as an organization:

    • Our impression is that FHI’s junior researchers operate without substantial attention from a “principal investigator”-type figure. While giving researchers independence in this way may have benefits, we also believe that researchers might select better projects or execute them more effectively with additional guidance.
    • It seems to us that a substantial fraction of FHI’s most impactful work is due to Professor Nick Bostrom. Since Professor Bostrom’s own work is already funded and since he offers relatively limited guidance to junior research staff, the impact of additional funding may scale strongly sublinearly. (Our understanding is that Professor Bostrom’s allocation of attention is a deliberate choice, and does not necessarily seem unreasonable to us.)
    • FHI has relatively limited experience with policy analysis and advocacy.
    • Our impression is that FHI’s technical proficiency in machine learning and biotechnology is somewhat limited, which we believe may reduce its credibility when writing about these topics and/or cause it to overlook important points in these areas. We are optimistic that recent and forthcoming hires, discussed above, will be helpful on this front.

    3. Plans for learning and follow-up

    3.1 Key questions for follow-up

    • What new staff has FHI hired, and what have they produced?
    • What has been the most important research output from FHI’s staff?
    • Have FHI’s additional reserves been useful, and if so, how?
    • How are FHI’s collaborations with industrial AI labs going?
    • Has FHI had success applying for other grants?
    • What has Dr. Millett produced during his time at FHI?

    Related Items

    • Potential Risks from Advanced AI

      Timaeus — Operating Expenses

      Open Philanthropy recommended two grants totaling $1,557,000 to Timaeus for operating expenses. Timaeus seeks to use singular learning theory to better understand how training data and algorithmic architectures...

    • Potential Risks from Advanced AI

      MATS Research — AI Safety Research Expenses

      Open Philanthropy recommended a grant of $660,000 to MATS Research to support research projects undertaken during the winter 2024-2025 ML Alignment & Theory Scholars (MATS) cohort. The MATS...

    • Potential Risks from Advanced AI

      University of Texas at Austin — AI Safety Research

      Open Philanthropy recommended a gift of $885,000 over two years to the University of Texas at Austin to support AI safety research and field-building, led by Christian Tarsney...

    Mailing Address
    Open Philanthropy
    182 Howard Street #225
    San Francisco, CA 94105
    Email
    info@openphilanthropy.org
    Media Inquiries
    media@openphilanthropy.org
    Anonymous Feedback
    Feedback Form

    © Open Philanthropy 2025 Except where otherwise noted, this work is licensed under a Creative Commons Attribution-Noncommercial 4.0 International License.
