
Future of Life Institute — General Support

  • Category: Global Catastrophic Risks
  • Organization Name: Future of Life Institute
  • Amount: $100,000
  • Award Date: March 2016



    Future of Life Institute staff reviewed this page prior to publication.


    The Open Philanthropy Project recommended a grant of $100,000 to the Future of Life Institute (FLI) for general support.

    FLI is a research and outreach organization that works to mitigate global catastrophic risks (GCRs). We have previously collaborated with FLI on issues related to potential risks from advanced artificial intelligence.

    FLI is now seeking general operating support for the coming year. We have been impressed with FLI’s past work and are glad to support future efforts, especially since they may generate more opportunities for good work in this area. We do have some reservations about FLI’s current plans, discussed below.

    Rationale for the grant

    Background

    The Open Philanthropy Project has identified global catastrophic risks (GCRs) as one of the categories that we plan to prioritize in our grantmaking.

    We have previously worked with the Future of Life Institute (FLI), a research and outreach organization that works to mitigate GCRs, on potential risks from advanced artificial intelligence (AI), one of our focus areas in this category. Last year, we worked with FLI to evaluate responses to a Request for Proposals (RFP) it issued, and made a grant of $1,186,000 to increase the number of high-quality proposals FLI was able to fund.

    Grant details

    The major activities FLI has planned for 2016 (for which it also plans to do additional fundraising) include:

    • News operation: FLI recently hired a staffer dedicated to curating and writing news content related to GCRs for the newly added news section of its website. Supporting its two-person communications staff for one year will require approximately $150,000.
    • Nuclear weapons campaign: FLI plans to launch and run a campaign to encourage individuals and organizations (e.g. universities and municipalities) not to invest in the production of new nuclear weapons systems. This campaign is estimated to cost approximately $100,000 over the next year, including about $50,000 for financial research to identify companies investing in nuclear weapons, $45,000 for several part-time on-site university student organizers, and $5,000 for incidental expenses.
    • AI safety conference: In 2015, FLI organized a conference on AI safety, held in Puerto Rico. It plans to host another in 2016, which it estimates will cost at least $150,000. FLI told us that it expects to be able to raise the required funding for this conference from other sources.
    • AI conference travel: FLI will cover travel expenses related to any symposia, panels, and/or discussions on AI safety that it helps organize, and will support travel by FLI-affiliated researchers to several major machine learning conferences this year. FLI plans to spend approximately $20,000 on this.

    The case for the grant

    In organizing its 2015 AI safety conference (which we attended), FLI demonstrated a combination of network, ability to execute, and values that impressed us. We felt that the conference was well-organized, attracted the attention of high-profile individuals who had not previously demonstrated an interest in AI safety, and seemed to lead many of those individuals to take the issue more seriously. An open letter issued following the conference, calling for “expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial”, was signed by a number of prominent figures in machine learning and the broader scientific community.[1] The conference also allowed FLI to mobilize private funding,[2] which it used to launch an RFP that resulted in an unexpectedly high number of strong proposals.

    Although we have some reservations about FLI’s plans for this year, we believe that they have the potential to be successful, which could create opportunities for further good work related to GCRs. Details follow.

    News operations

    We feel that media discussion of potential risks from advanced artificial intelligence is often unclear and poorly informed. Having a news operation dedicated to improving the quality of the coverage on this issue could be helpful. FLI plans to have its high-profile advisory board available for comment, and we can imagine a number of scenarios in which this may improve the quality of discussion on these issues.

    AI safety

    We believe FLI’s plans related to AI conferences (both its own and others’) are worthwhile efforts to continue enabling AI researchers to have more reasonable conversations about the future of AI and its possible risks. This looks to us like a positive development for the field.

    Nuclear weapons campaign

    We are most uncertain about FLI’s proposed nuclear weapons campaign, for reasons stated in a later section, but we do see a case for it. We believe that nuclear weapons advocacy is a neglected space in need of new voices and that FLI is well positioned to work on a university divestment campaign. The organization has strong ties to many prominent academics, as well as existing relationships with campus-based effective altruism groups who might welcome the opportunity to do concrete work in this space.

    If this type of advocacy can achieve small wins on nuclear weapons, we think this might better position FLI to do more impactful larger-scale advocacy work on this issue in the future.

    Regardless of its success, this work will help us better understand whether FLI can successfully execute on issues related to nuclear weapons policy. If this campaign goes well, we may feel more comfortable supporting FLI on more ambitious efforts in this space going forward.

    Concerns about the grant

    Although we have been impressed with FLI’s capacity to organize and execute, we have some concern that its capacity to effect change may be weaker outside of AI-related issues. FLI was able to bring attention and credibility to potential risks from AI, but it is not clear that attention and credibility are necessarily what is needed on other topics.

    We have some reservations about FLI’s planned news operations: the public content FLI has put out so far does not appear to us highly likely to contribute to improved press coverage. That said, our impression could be wrong, and FLI’s work in this category is still at an early stage.

    We also have reservations about FLI’s approach to its nuclear weapons campaign, which we believe is unlikely to lead to significant change on this issue. The theory of change implied by FLI’s plans for this campaign seems to be that increasing the stigma attached to nuclear weapons would push decision-makers toward policies that call for fewer nuclear weapons. We would guess that there is likely to be only a weak link between success in the divestment campaign and broader attitudes toward nuclear weapons policy. Note that we have done some work to understand the space of nuclear weapons policy.

    In addition, we are somewhat concerned that, if FLI does achieve success on this issue, it may find it challenging to recruit the staff needed to transform its efforts into a broader and sustained campaign.

    Room for more funding

    In the absence of our funding, we believe it is fairly likely (though not certain) that FLI would be able to raise most or all of the funds it requires from other donors. We expect that these donors would largely have values and priorities similar to ours (e.g. donors from the effective altruism community), and we are therefore not overly concerned by this possibility. With this grant, we expect FLI to be highly likely to raise the funds it requires.

    Plans for learning and follow-up

    Goals for the grant

    This grant will support an organization that we believe has done good work in the past, and will allow it to expand that work. We also hope that the grant will help us learn more about FLI’s capacity to do good work beyond potential risks from advanced artificial intelligence.

    Key questions for follow-up

    We expect to have a conversation with FLI staff every 3-6 months for the next 12 months. After that, we plan to consider renewal. Although we recognize that not all of FLI’s planned activities may have come to fruition within 12 months, we believe that we will be able to get a good sense of how they have gone so far. Questions we might seek to answer include:

    • Is the coverage of GCRs on the news page (both original and curated) of high quality?
    • Is FLI a recognized source of information on GCRs?
    • Has the nuclear weapons campaign received media coverage?
    • Have any universities or other investors demonstrated increased interest in the issue of nuclear weapons, or shown any indication that they are considering divestment?
    • Has the presence of AI researchers affiliated with FLI at major machine learning conferences had an impact on the nature of the discussions at these conferences?

    Our process

    Following our collaboration last year, we kept in touch with FLI regarding its funding situation and plans for future activities.

    Sources

    [1] FLI Open Letter. Source (archive)

    [2] FLI press release, January 15, 2015. Source (archive)
