Grant Investigator: Luke Muehlhauser
This page was reviewed but not written by the grant investigator. Center for Security and Emerging Technology staff also reviewed this page prior to publication.
The Open Philanthropy Project recommended a grant of $55,000,000 over five years to Georgetown University to launch the Center for Security and Emerging Technology (CSET), a new think tank dedicated to policy analysis at the intersection of national and international security and emerging technologies. CSET is led by Jason Matheny, former Assistant Director of National Intelligence and Director of Intelligence Advanced Research Projects Activity (IARPA), the U.S. intelligence community’s research organization.
CSET plans to provide nonpartisan technical analysis and advice related to emerging technologies and their security implications to the government, key media outlets, and other stakeholders. Its initial focus will be on the intersection of security and artificial intelligence, a key issue of relevance to our focus area on the future of AI.
We have written in detail about the case we see for funding work related to AI on our blog. As we wrote in that post, we see AI and machine learning research as being on a very short list of the most dynamic, unpredictable, and potentially world-changing areas of science. Broadly, we expect the results of continuing progress in AI research to be positive for society at large, but we see some risks (both from unintended consequences of AI use, and from deliberate misuse), and believe that we—as a philanthropic organization, separate from academia, industry, and government—may be well-placed to support work to reduce those risks.
CSET is a new Georgetown University-based think tank dedicated to policy analysis at the intersection of national and international security and emerging technologies. CSET is led by Jason Matheny, former Assistant Director of National Intelligence and Director of IARPA, the U.S. intelligence community’s research organization. He is a Commissioner of the National Security Commission on AI, created by the U.S. Congress in 2018. Based on multiple conversations with leaders in the defense, intelligence, and policymaking communities, we believe he is respected and considered well-informed and credible in the relevant circles.1
Matheny is knowledgeable about several emerging technologies, particularly AI, and their intersection with security. We believe he is unusually attentive to risks from emerging technologies (discussed below). We also believe he broadly shares our interest in cost-effective ways of doing as much good as possible; he co-founded New Harvest, worked for many years in global health, and contributed to the World Bank publication Disease Control Priorities in Developing Countries. We believe Matheny combines these qualities with strong government experience and national security qualifications to a unique degree.
Matheny has assembled an impressive founding team with strong qualifications to provide high-quality, safety-conscious advice to policymakers on AI. Team members include:
- Dewey Murdick, CSET’s Director of Data Science, was previously the Director of Science Analytics at the Chan Zuckerberg Initiative, where he led metric development, data science, and machine learning and statistical research for Meta and science-related initiatives; Chief Analytics Officer and Deputy Chief Scientist within the Department of Homeland Security; and IARPA Program Manager and Office Co-Director.
- William Hannas, CSET Lead Analyst, was a member of the Senior Intelligence Service at the Central Intelligence Agency, where he served as the Open Source Enterprise’s primary China science and technology analyst. He was previously an Assistant Professor of Chinese at Georgetown, where he taught Chinese and Korean, and concurrently served with the Foreign Broadcast Information Service, monitoring Asian language publications.
- Helen Toner, CSET’s Director of Strategy and Plans, previously worked as a Senior Research Analyst here at the Open Philanthropy Project, advising policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, she lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Center for the Governance of AI.
- Tessa Baker, CSET’s Director of Operations, conducted survey and qualitative research among Fortune 500 business leaders as a Senior Principal at Gartner, and served government executives at OPM, DHS, the Joint Staff, OSD, and FEMA as a consultant with IBM and NSI.
- Ben Buchanan, CSET Faculty Fellow, is an Assistant Teaching Professor at Georgetown University’s School of Foreign Service, where he conducts research on the intersection of cybersecurity and statecraft. His first book, The Cybersecurity Dilemma, was published by Oxford University Press in 2017.
- Jamie Baker, CSET Distinguished Fellow, is a Professor at Syracuse University, where he directs the Institute for National Security and Counterterrorism. Judge Baker served in the U.S. Department of State, on the Foreign Intelligence Advisory Board, and on the National Security Council. He served on the U.S. Court of Appeals for the Armed Forces for 15 years—the last four as Chief Judge.
- Michael Sulmeyer, CSET Senior Fellow, was the Director of the Cyber Security Project at Harvard University. Before Harvard, he served as the Director for Plans and Operations for Cyber Policy in the Office of the Secretary of Defense, and previously worked at the Pentagon on arms control and the maintenance of strategic stability between the United States, Russia, and China.
Read full bios for these and other founding team members here.
Case for the grant
Given our focus on increasing potential benefits and reducing potential risks from AI, we are interested in opportunities to inform current and future policies that could affect long-term outcomes. We think one of the key factors in whether AI is broadly beneficial for society is whether policymakers are well-informed and well-advised about the nature of AI’s potential benefits, potential risks, and how these relate to potential policy actions. If AI research continues to progress quickly in the coming years, we anticipate that demands for government action will become more frequent. Government action—such as major funding decisions or regulatory measures—could greatly affect the benefits and risks of AI, depending on details that are difficult to foresee today, and we accordingly think that good information and advice could be key.
We share the view expressed in Technology Roulette2 that key policymaking communities in national security and foreign policy are often overly focused on technological superiority, which is not synonymous with security because it does not reduce accidents or address emergent effects.3 We think some of the most important potential risks from advanced AI are in the category of accidents or emergent effects (the latter could include arms-race dynamics or “use it or lose it” first strikes); that the single-minded pursuit of technological superiority could make such risks much worse; and that a well-informed policymaking apparatus that internalizes concern over the potential accidents and emergent risks of AI could be especially important in reducing risks.
Overall, we feel that ensuring high-quality and well-informed advice to policymakers over the long run is one of the most promising ways to increase the benefits and reduce the risks from advanced AI, and that the team put together by CSET is uniquely well-positioned to provide such advice. Hence, we believe this grant represents the best chance we can expect to see for some time to address one of the most important gaps we’re aware of in this cause. We recognize a number of risks to this grant (below), but within our “hits-based” framework, we believe it is an excellent bet. For context, this grant is large compared to other individual grants we’ve made to date, in part because of the long commitment necessary to get a new center started; even so, our annual giving around potential risks from artificial intelligence continues to be smaller than our giving in global health or scientific research.
Goals and plans for follow-up
CSET has identified a number of primary goals that fall under the umbrella of informing key discussions and decisions related to AI, computing, and potentially other high-impact emerging technologies:
- Assess global developments in key technology areas, with particular focus on developments in countries of interest to U.S. policy communities.
- Generate written products and briefings tailored to policy communities, with practical policy options.
- Train and prepare staff, students, faculty, and affiliates for key roles within the policy community.
Risks and reservations
We see a number of potential risks for this grant:
- It’s inherently difficult to provide useful advice on emerging technologies, and the potential benefits and risks of advanced AI are currently speculative and poorly understood, so it could easily be the case that this grant is premature for the goal of generating useful advice to policymakers.
- We worry that heavy government involvement in, and especially regulation of, AI could be premature and might be harmful at this time. We think it’s possible that by drawing attention to the nexus of security and emerging technologies (including AI), CSET could lead to premature regulatory attention and thus to harm. However, we believe CSET shares our interest in caution on this front and is well-positioned to communicate carefully.
- For a new organization like CSET, we see leadership as even more important than usual, and Matheny’s leadership contributes substantially to our case for this grant. Were he to depart, we would need to reconsider our support for, and alignment with, the future plans for CSET.
- More generally, we expect any new organization to face significant challenges on the path to impact, many of which will likely be unanticipated.
- 1. Because these were effectively reference checks, they were on background.
- 2. Technology Roulette is a report on technological risk authored by former Secretary of the Navy Richard Danzig for the Center for a New American Security. Open Philanthropy funded outreach about the report.
- 3. Danzig defines emergent effects as “attributes that are not identifiable in any individual component of a system but that can be observed in the overall system,” citing World War I as an emergent consequence of technological adaptation.