April 2017 Update on Grant to the Future of Life Institute for Artificial Intelligence RFP

Published: April 2017

Note: FLI staff reviewed this page prior to publication.

In the first half of 2015, the Future of Life Institute (FLI) issued a request for proposals1 (RFP) to gather research proposals aimed at ensuring that artificial intelligence systems are robust and beneficial. As part of our work on potential risks from advanced artificial intelligence, the Open Philanthropy Project contributed a grant of $1,186,000 to FLI to help fund the proposals selected by FLI’s RFP. The projects funded by the RFP have now been running for one and a half years. This page gives an update on the results of this grant so far.

Daniel Dewey (“Daniel” throughout this page) was highly involved in helping FLI administer the RFP. Daniel has since become our Program Officer for Potential Risks from Advanced Artificial Intelligence. Howie Lempel (formerly our Program Officer for Global Catastrophic Risks, “Howie” throughout this page) was the main contact from the Open Philanthropy Project during the RFP.

Because we view this grant as primarily aimed at helping grow and develop the field of AI safety, it is somewhat difficult to assess how effectively the grant has achieved our goals.

Key questions we’ve asked in assessing the impact of our grant include:

  • Did considering this grant allow us to participate in the RFP process in the ways we expected? What was the impact of our involvement?
  • Has the RFP had the types of positive impacts we expected? How beneficial were the best, average, and marginal projects funded? Did the RFP have the impact on field growth that we expected?
  • Which projects was our contribution responsible for funding, and what have the quality and/or impact of those projects been so far?
  • What lessons have we learned from this process overall? What have we learned about the field of AI safety?

These are discussed in detail below.

Did we participate in the RFP process in the ways we expected?

A major reason we considered contributing to the funding available to the RFP was our hope that FLI would be willing to allow us, as potential funders, to participate in the RFP process more than it would otherwise have had reason to allow. Note that we did not decide to recommend our grant until relatively late in the process, and our grant was not finalized until well after the RFP was completed.

Howie believes that the Open Philanthropy Project ended up participating in the RFP process in most of the ways that we expected:

  • Dario Amodei (“Dario” throughout this page), one of our Scientific Advisors, joined the review panel.
  • Howie was involved in logistical support, especially during the final decision-making.
  • Howie had the opportunity to request a number of changes around communications and believes that most of them were implemented.
  • We raised the possibility of recommending a grant near the end of the first round of review. At that time, we had the opportunity to offer opinions on which proposals should continue to the second round. In particular, Dario advocated for 11 proposals to advance to the second round, of which 10 were ultimately chosen to advance. Our impression is that these proposals would not have made it to the second round without Dario’s support, though we are not certain of this. Of those 10, four ended up receiving grants, four were rejected by the panel, and two were either rejected by the panel or had withdrawn from consideration before the final stage (our notes do not indicate whether they were considered).

What was the impact of our involvement?

Overall, Daniel and Howie have similar impressions of the Open Philanthropy Project’s impact on the RFP process:

  • We are somewhat uncertain about the overall impact of Dario’s presence on the review panel, though Daniel and Howie both think it is likely that having Dario on the panel was a net benefit. As mentioned above, it’s possible that Dario’s input led to some projects receiving funding that they wouldn’t have otherwise. While we do not have a confident sense of the likely impact of those projects in particular, four of the projects that we believe Dario helped get through the first review round ultimately made it through the second round and were funded, which suggests that his presence on the review panel was likely beneficial. Daniel and Howie also came away from the review panel’s meetings with the impression that Dario’s participation was valuable. Finally, we think it is possible (though unlikely) that Dario’s input prevented funding from going to projects that would have had negative impacts on the field.
  • Howie helped design final budgets for a number of grants (including re-structuring funding to fit the three-year budget framing required by the RFP). We think it is likely that some value would have been lost across about ten projects if Howie had not done this.
  • Our understanding is that Howie passed useful information to FLI from the Open Philanthropy Project’s technical advisors about applicants’ perspectives on the grant application and selection process. For example, our technical advisors became aware that some applicants were confused about the rules around indirect costs, were concerned about the review panel’s composition, or were concerned about the decision to lower the average grant amount in order to fund more projects. We think that the communications changes resulting from Howie’s input were likely valuable (though it is difficult to estimate their impact precisely).
  • More generally, we believe that positive overall effects of the RFP include prompting AI researchers in adjacent fields to consider what work they might do on safety, increasing the perceived legitimacy and necessity of AI safety research, and opening new lines of AI safety research that others may build on. We are uncertain how large these benefits have been, and to what extent any possible benefits can be attributed to our grant. To give a rough idea of researcher exposure to AI safety ideas, FLI reports that the RFP has resulted in 43 peer-reviewed papers and 87 workshops organized and/or participated in by grantees (more details on FLI’s AI Safety Research2 page). We believe that by adding to the available funding and adding a second funder to the RFP we may have increased these effects somewhat, but given that the RFP would have proceeded without our involvement we think the marginal effect of our funding on these benefits was probably relatively small.

On net, Howie believes that the positive impact of our participation in the process was worth the funding that we contributed. He is less confident that it was worth the (substantial) time that he spent on it.

It’s possible that our grant and subsequent involvement in the process had benefits in terms of Daniel and Howie’s relationship, or in terms of Dario’s engagement with AI safety, but we are less confident about these effects.

Has the RFP had the types of positive impacts we expected?

Note that in this section, and throughout this page, we mostly avoid naming specific projects, to allow us to state our rough impressions about the RFP as a whole without risking unwarranted reputational or other damage to specific projects.

  • Daniel believes four of the projects funded by the RFP seem particularly likely to be beneficial for field growth. Nine other projects appear to us to be promising enough that with our current knowledge, the Open Philanthropy Project might be interested in funding them directly, had FLI not done so, and seem unlikely to have happened without funding from this RFP. (However, Daniel has relatively low confidence in these judgments, because we have not evaluated the projects in depth.)
  • Six policy- and strategy-related projects received grants; we think it is somewhat likely that at least some of these projects would have happened even without this RFP. Based on a brief review of these projects, we think it is unlikely that the Open Philanthropy Project would have been interested in funding these projects directly (though, as above, we have low confidence in this judgment because we have not evaluated the projects in depth).
  • Eight grants went to individuals or organizations within the effective altruism community that we think would have been somewhat likely to pursue similar projects with or without this RFP. We think associating these projects with the other projects funded by the RFP may have some minor benefits.
  • Two grants went to the Center for Applied Rationality (CFAR) and to SPARC, a summer program run by CFAR. We have since made separate grants to CFAR and SPARC, and think that these two grants are likely to be valuable for similar reasons to those laid out in our writeups (linked above).

We also believe the RFP process may have had a number of more indirect positive effects on field growth:

  • Jacob Steinhardt’s workshop at the 2016 International Conference on Machine Learning (ICML) was supported by this grant, and appears to us to have been beneficial for field building. (Note that Jacob is one of our technical advisors on AI safety.)
  • Stuart Russell advocated for the inclusion of some language similar to FLI’s RFP in a National Science Foundation (NSF) call for proposals3 as part of its Robust Intelligence program. Our understanding is that NSF Program Director Hector Munoz-Avila contacted Professor Russell in part because of FLI’s open letter4 on “Research Priorities for Robust and Beneficial Artificial Intelligence”; we are not sure whether his outreach was directly related to the RFP.
  • We believe this DARPA program5 on “explainable artificial intelligence” may be related to the interest and press around the RFP (though we do not have any specific information about a causal connection).
  • Our impression is that the tone of popular media articles about potential risks from advanced AI has recently become somewhat more accurate in describing the nature of AI safety concerns and more sympathetic to people expressing those concerns. However, we are not sure how large this shift has been, or to what extent the RFP helped lead to it.

Which projects were we responsible for funding, and what has the impact of those projects been?

We are not completely confident which projects would have gone unfunded without our grant. About eight of the 37 projects that were ultimately funded were “below the bar” to receive funding at the time we decided to recommend our grant. However, we think it’s plausible that some of these projects would still have been funded without our contribution, with the size of grants to other projects cut in order for this to happen. Based on our review of the first-year reports of these eight projects, six appear to have gone fairly well and two appear to have made less progress than expected.

Since our impression is that four projects that ultimately received grants would likely not have reached the second round of review without Dario’s participation in the first-round selection panel, we think it is reasonable to largely attribute those projects getting funding to our involvement.

We are somewhat surprised by the relatively low costs of a number of these projects. Many of them are three years long, with an average budget of around $60,000/year (for reference, this is less than we would expect one graduate student to cost). We plan to follow up to get a better understanding of this.

Lessons learned

In this section, we outline some lessons we have learned and some areas where we have changed our minds since making this grant:

  • We have become somewhat more concerned about the possibility of project proposals that nominally address issues relevant to AI safety, but in practice continue other work that the researchers would have carried out anyway, rather than attempting to identify or solve open problems in AI control or reliability.
  • We now believe that academics such as Stuart Russell may have somewhat more potential influence on the types of RFPs issued by agencies such as the NSF and DARPA than we previously expected.
  • Most projects funded through the RFP were multi-year projects, but on the logistical side, these “multi-year grants” were issued as several consecutive one-year grants due to the structure of the funding available to FLI. Based on feedback from grantees and on Daniel’s experience helping administer these grants, it seems to us that issuing grants that are logistically multi-year would make life easier for grantees (in terms of planning future work and hires, and having those plans approved by their universities). This is a minor lesson, and does not perceptibly affect our assessment of this RFP.

Sources

  1. FLI, 2015 International Grants Competition — Source (archive)
  2. FLI, AI Safety Research — Source (archive)
  3. NSF, Dear Colleague Letter: Self-Monitoring and Self-Assessing Intelligent Systems Research — Source (archive)
  4. FLI, Open Letter — Source (archive)
  5. DARPA, Explainable Artificial Intelligence — Source (archive)