George Mason University — Research into Future Artificial Intelligence Scenarios

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: George Mason University
  • Amount: $277,435

  • Award Date: June 2016

    Professor Hanson reviewed this page prior to publication.


    The Open Philanthropy Project recommended a grant of $277,435 over three years to Robin Hanson (Associate Professor of Economics, George Mason University) to analyze potential scenarios in the future development of artificial intelligence (AI). Professor Hanson plans to focus on scenarios in which AI is developed through the steady accumulation of individual pieces of software and leads to a “multipolar” outcome (i.e. a scenario in which control of advanced AI is distributed among multiple actors rather than concentrated in a single group, firm, or state). Part of this grant will pay to hire a research assistant. Ideally, this research will culminate in a book by Professor Hanson on the topic.

    Update: In July of 2017, we added $12,910 to the original grant amount to cover an increase in George Mason University’s instructional release costs (“teaching buyouts”). The “grant amount” above has been updated to reflect this.
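
    (Per the figures above, this implies that the amount originally recommended in June 2016 was $277,435 − $12,910 = $264,525.)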

    1. Background

    This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks.

    2. About the grant

    Professor Hanson’s grant proposal describes the project as follows:[1]

    Robin Hanson proposes to take three years to conduct a broad positive analysis of the multipolar scenario wherein AI results from relatively steady accumulation of software tools. That is, he proposes to assume that human level AI will result mainly from the continued accumulation of software tools and packages, with distributions of cost and value correlations similar to those seen so far in software practice, in an environment where no one actor dominates the process of creating or fielding such software. He will attempt a mostly positive analysis of the social consequences of these assumptions, both during and after a transition to a world dominated by AI. While this is hardly the universe of all desired analyses, it does seem to cover a non-trivial fraction of interesting cases.

    2.1 Case for the grant

    While we do not believe that the class of scenarios Professor Hanson will be analyzing is necessarily the most likely way for future AI development to play out, we expect his research to contribute a significant amount of data collection and analysis that could be valuable to our thinking about AI more generally, as well as to provide a model for others to follow when performing similar analyses of other AI scenarios of interest.

    Professor Hanson appears to us to be particularly well suited for this project, for several reasons:

    • His recently published book on the potential future of whole brain emulations, The Age of Em,[2] seems to us to be a thoughtful analysis of what might happen if brain emulations were developed (though we do not agree with all of the book’s claims and predictions). We believe Professor Hanson’s analysis of future AI scenarios could prove similarly thoughtful.
    • He had developed an outline and plan for this analysis before we expressed interest in supporting it, making this an unusually “shovel-ready” grant.
    • He appears to us to be knowledgeable about economics, AI, and futurism generally, and to be a particularly original thinker.
    • He is particularly interested in analyzing scenarios where advances in AI have a transformative impact on the world.

    In general, we would like to see a larger amount of thoughtful analysis of how AI-related scenarios might play out.

    2.2 Room for more funding

    We do not believe that Professor Hanson would undertake this work in the near future without this funding. He had planned to turn his attention to other research if he did not receive funding for this specific project, and we are fairly confident that no other funder was planning to support the project.

    2.3 Risks and reservations

    Our main concern is that, after further consideration, we might later conclude that the scenario analyzed was foreseeably very unlikely (e.g. because advanced AI systems turn out to be very different from other kinds of software). However, we see value in having many potential scenarios analyzed, and see this as a risk worth taking.

    3. Plans for learning and follow-up

    Key questions for follow-up:

    • Has Professor Hanson found a research assistant, and if so, how has working with him or her gone?
    • What progress has Professor Hanson made on the book?
    • Does the progress that has been made appear useful and/or insightful to us?

    4. Sources

    • George Mason University Proposal (Source)
    • The Age of Em Homepage (Source; archive)

    1. George Mason University Proposal, pg. 4

    2. Archived copy of link: The Age of Em Homepage
