George Mason University — Research into Future Artificial Intelligence Scenarios

Organization Name: George Mason University
Award Date: June 2016
Grant Amount: $264,525
Purpose: To support Robin Hanson's analysis of potential future artificial intelligence scenarios.

Published: July 2016

Professor Hanson reviewed this page prior to publication.

The Open Philanthropy Project awarded a grant of $264,525 over three years to Robin Hanson (Associate Professor of Economics, George Mason University) to analyze potential scenarios in the future development of artificial intelligence (AI). Professor Hanson plans to focus on scenarios in which AI is developed through the steady accumulation of individual pieces of software and leads to a “multipolar” outcome (i.e. a scenario in which control of advanced AI is distributed among multiple actors, rather than held by a single group, firm, or state). Part of this grant will fund a research assistant. Ideally, this research will culminate in a book by Professor Hanson on the topic.

Background

This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks.

About the grant

Professor Hanson’s grant proposal describes the project as follows:1

Robin Hanson proposes to take three years to conduct a broad positive analysis of the multipolar scenario wherein AI results from relatively steady accumulation of software tools. That is, he proposes to assume that human level AI will result mainly from the continued accumulation of software tools and packages, with distributions of cost and value correlations similar to those seen so far in software practice, in an environment where no one actor dominates the process of creating or fielding such software. He will attempt a mostly positive analysis of the social consequences of these assumptions, both during and after a transition to a world dominated by AI. While this is hardly the universe of all desired analyses, it does seem to cover a non-trivial fraction of interesting cases.

Case for the grant

While we do not believe that the class of scenarios Professor Hanson will be analyzing is necessarily the most likely way for future AI development to play out, we expect his research to contribute useful data collection and analysis that could inform our thinking about AI more generally, and to provide a model for others performing similar analyses of other AI scenarios of interest.

Professor Hanson appears to us to be particularly well suited for this project, for several reasons:

  • His recently published book on the potential future of whole brain emulations, The Age of Em,2 seems to us to be a thoughtful analysis of what might happen if brain emulations were developed (though we do not agree with all of the book’s claims and predictions). We believe Professor Hanson’s analysis of future AI scenarios could prove similarly thoughtful.
  • He had developed an outline and plan for this analysis before we expressed interest in supporting it, making this an unusually “shovel-ready” grant.
  • He appears to us to be knowledgeable about economics, AI, and futurism generally, and to be a particularly original thinker.
  • He is particularly interested in analyzing scenarios where advances in AI have a transformative impact on the world.

In general, we would like to see a larger amount of thoughtful analysis of how AI-related scenarios might play out.

Room for more funding

We do not believe that Professor Hanson would undertake this work in the near future without this funding. He had planned to turn his attention to other research if he did not receive funding for this specific project, and we are fairly confident that no other funder was planning to support the project.

Risks and reservations

Our main concern is that, after further consideration, we might later conclude that the scenario analyzed was foreseeably very unlikely (e.g. because advanced AI systems turn out to be very different from other kinds of software). However, we see value in having many potential scenarios analyzed, and see this as a risk worth taking.

Plans for learning and follow-up

Key questions for follow-up:

  • Has Professor Hanson found a research assistant, and if so, how has working with him or her gone?
  • What progress has Professor Hanson made on the book?
  • Does the progress that has been made appear useful and/or insightful to us?

Relationship disclosures

Nick Beckstead (our Program Officer for Scientific Research, who led this investigation), Daniel Dewey (our Program Officer for Potential Risks from Advanced Artificial Intelligence), and Professor Hanson are all Research Associates at the Future of Humanity Institute.

Sources

1. George Mason University Proposal (Source)
2. The Age of Em Homepage (Source) (archive)