The Open Philanthropy Blog
October 25, 2016
Our grantmaking decisions rely crucially on our uncertain, subjective judgments — about the quality of some body of evidence, about the capabilities of our grantees, about what will happen if we make a certain grant, about what will happen if we don’t make that grant, and so on.
In some cases, we need to make judgments about relatively tangible outcomes in the relatively near future, as when we have supported campaigning work for criminal justice reform. In others, our work relies on speculative forecasts about the much longer term, as for example with potential risks from advanced artificial intelligence. We often try to quantify our judgments in the form of probabilities — for example, the former link estimates a 20% chance of success for a particular campaign, while the latter estimates a 10% chance that a particular sort of technology will be developed in the next 20 years.
We think it’s important to improve the accuracy of our judgments and forecasts if we can. I’ve been working on a project to explore whether there is good research on the general question of how to make good and accurate forecasts, and/or specialists in this topic who might help us do so. Some preliminary thoughts follow.
In brief:
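For context on what "accuracy" means for probabilistic forecasts like those above, one standard metric (not discussed in the post itself) is the Brier score: the mean squared error between stated probabilities and realized yes/no outcomes. A minimal sketch, using made-up forecasts rather than any of Open Phil's actual estimates:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: a forecaster assigned 20% and 10% to two events,
# neither of which ended up occurring.
score = brier_score([0.2, 0.1], [0, 0])
print(round(score, 4))  # 0.025
```

A well-calibrated forecaster's score improves over time as predictions and outcomes accumulate, which is one way research on forecasting makes "good judgment" measurable.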
September 26, 2016
In February, out of concern that the US is experiencing a new crime wave, I blogged about a data set Open Phil assembled on crime in major American cities. In comparison with the FBI’s widely cited national totals, our data covered far less territory—18 cities for which we found daily incident data—but did better in the time dimension, with higher resolution and more up-to-date counts. We could compute daily totals, and from data sets that for many cities are almost literally up-to-the-minute.
Some places that have recently made national crime news also appear in our data, including Baltimore, Chicago, St. Louis, and Washington, DC. Within our geographic scope, we gain a better view into the latest trends than we can get from the FBI’s annual totals, which appear with a long lag.
Indeed, the FBI will probably release its 2015 crime totals in the next few days, which may stoke discussion about crime in the US. [Update: it just did].
In this post, I update all the graphs presented in the earlier one, which I suggest you read first. These updates generate predictions about what the FBI will announce, and perhaps point to one trend that it won’t yet discern.
With 8 more months of data on these 18 cities, plus the addition of New York for 2006–15, the main updates on per-capita crime rates are:
- On a population-weighted basis, the hints in the old post of decline at the end of 2015, in violent crime in general and homicide in particular, have faded—or at least have been pushed forward in time.
- Instead, after the homicide rise of late 2014 and 2015—which indeed was one of the largest increases in modern times—the homicide trend has flattened.
- Violent crime rose slowly, as it has since mid-2014. It remains low historically, down roughly a third since 2001.
- Property crime (burglary, theft, arson) continues to sink like a stone.
If our data capture national trends (which is far from certain), then the FBI will soon report that the 2015 homicide rate rose a lot from 2014, that violent crime rose a little bit, that property crime fell, and that total crime, which is dominated in sheer quantity by property crime, also fell. [Update: these look right.]
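The population-weighted aggregation behind these per-capita figures can be sketched as follows. The city names match the post, but the incident counts and population figures here are purely illustrative, not Open Phil's actual data:

```python
# Hypothetical daily homicide counts for two of the cities in the data set;
# the counts and populations below are made up for illustration.
daily_counts = {
    "2015-06-01": {"Chicago": 2, "Baltimore": 1},
    "2015-06-02": {"Chicago": 1, "Baltimore": 0},
}
population = {"Chicago": 2_720_000, "Baltimore": 621_000}

def pooled_rate_per_100k(counts_by_city, population):
    """Population-weighted per-capita rate: total incidents across cities
    divided by total population, scaled to incidents per 100,000 people.
    Pooling this way weights each city by its population automatically."""
    total = sum(counts_by_city.values())
    pop = sum(population[c] for c in counts_by_city)
    return total / pop * 100_000

for date, counts in sorted(daily_counts.items()):
    print(date, round(pooled_rate_per_100k(counts, population), 4))
```

Pooling incidents before dividing is equivalent to averaging city-level rates weighted by population, which is why a large city's trend dominates the combined series.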
Here are the Open Phil graphs, updated through a few weeks ago and starting with homicide (data and code here):
September 20, 2016
Our goal is to give blog readers and followers of the Open Philanthropy Project an opportunity to publicly raise comments or questions about the Open Philanthropy Project or related topics (in the comments section below). As always, you’re also welcome to email us at email@example.com if there’s feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.
You can see our previous open thread here.
September 16, 2016
One of our core values is sharing what we’re learning. We envision a world in which philanthropists increasingly discuss their research, reasoning, results and mistakes publicly to help each other learn more quickly and serve others more effectively.
However, we think there has been confusion - including in our own heads - between the above idea and a related one: the idea that philanthropists should share and explain their thinking near-comprehensively so that the reasoning behind every decision can be understood and critiqued.
Such near-comprehensive information sharing is an appropriate goal for GiveWell, which exists primarily to make recommendations to the public, and emphasizes the transparency of these recommendations as a key reason to follow them. (See GiveWell’s approach to transparency.)
However, we now feel it is not an appropriate goal for the Open Philanthropy Project, whose mission is to give as effectively as we can and share our findings openly so that anyone can build on our work. For our mission, it seems more appropriate to aim for extensive information sharing (well in excess of what other funders currently do) but not to aim for near-comprehensiveness.
This distinction has become more salient to us as our picture of the costs and benefits of information sharing has evolved. This post lays out that evolution, and some changes we plan to make going forward. In brief:
September 6, 2016
Philanthropy - especially hits-based philanthropy - is driven by a large number of judgment calls. At the Open Philanthropy Project, we’ve explicitly designed our process to put major weight on the views of individual leaders and program officers in decisions about the strategies we pursue, causes we prioritize, and grants we ultimately make. As such, we think it’s helpful for individual staff members to discuss major ways in which our personal thinking has changed, not only about particular causes and grants, but also about our background worldviews.
I recently wrote up a relatively detailed discussion of how my personal thinking has changed about three interrelated topics: (1) the importance of potential risks from advanced artificial intelligence, particularly the value alignment problem; (2) the potential of many of the ideas and people associated with the effective altruism community; (3) the properties to look for when assessing an idea or intervention, and in particular how much weight to put on metrics and “feedback loops” compared to other properties. My views on these subjects have changed fairly dramatically over the past several years, contributing to a significant shift in how we approach them as an organization.
I’ve posted my full writeup as a personal Google doc. A summary follows.
July 15, 2016
In the last few months, we’ve welcomed several new team members:
- Jaime Yassif has joined as a Program Officer, leading our work on Biosecurity and Pandemic Preparedness. Previously, Jaime was a Science & Technology Policy Advisor at the U.S. Department of Defense, where she focused on oversight of the Cooperative Threat Reduction Program. During this period, she also worked on the Global Health Security Agenda at the Department of Health and Human Services.
- Chris Somerville has joined as a Scientific Advisor, and will be leading much of our work on scientific research. Chris has been a professor at UC Berkeley, Stanford University and Michigan State University.
- Chris will be working with two other new full-time Scientific Advisors: Heather Youngs (joining in August), formerly director of the Bakar Fellows Program for faculty entrepreneurs at UC Berkeley, and Daniel Martin Alarcon, who recently received his PhD in Biological Engineering under Professor Ed Boyden.
These hires conclude long-running searches, and we’re very excited to have Jaime, Chris, Heather and Daniel on board.
June 23, 2016
Earlier this week, Google Research (in collaboration with scientists at OpenAI, Stanford and Berkeley) released Concrete Problems in AI Safety, which outlines five technical problems related to accident risk in AI systems. Four of the authors are friends and technical advisors of the Open Philanthropy Project.
We’re very excited about this paper. We highly recommend it to anyone looking to get a sense for the tractability of reducing potential risks from advanced AI (a cause we’ve previously written about) - as well as for what sorts of research we would be most excited to fund in this cause.
June 15, 2016
Ben Soskis, a consultant who has been working with us as part of our History of Philanthropy project, recently finished a case study on the founding and growth of the Center for Global Development (CGD), a think tank that conducts research on and promotes improvements to rich-world policies that affect the global poor.
We are very impressed with CGD and its accomplishments. Earlier this year, we announced a $3 million grant to CGD. Our writeup includes a review of CGD’s track record, concluding that “it seems reasonable to us to estimate that CGD has produced at least 10 times as much value for the global poor as it has spent, though we consider this more of a rough lower bound than an accurate estimate.”
CGD appears to be an example of a funder-initiated startup: philanthropist Ed Scott was the driving force behind the original high-level concept, and he recruited Nancy Birdsall to be CGD’s leader.
There are a number of similarities between this case study and our recent case study on the Center on Budget and Policy Priorities (CBPP). In particular, both cases involved a funder exploring an extensive personal network, finding a strong leader for the organization they envisioned, committing flexible support up-front to that leader, and then trusting the leader with a lot of autonomy in creating the organization (while retaining a high level of personal involvement throughout the early years of the organization). However, there are a couple of major points of contrast: the crowdedness of the landscape and the amount of money committed.
Amount committed: in the case of CBPP, the founding funder “committed $175,000 in the first year and $150,000 in the second year, and asked CBPP to find other supporters so it could reduce its support after that point.” In the case of CGD, the founding funder committed $25 million in 2001, to be disbursed over five years.
May 20, 2016
Suzanne Kahn, a consultant who has been working with us as part of our History of Philanthropy project, recently finished a case study on the founding and growth of the Center on Budget and Policy Priorities (CBPP), a well-regarded D.C. think tank that focuses on tax and budget policy with an aim of improving outcomes for low-income people.
We were interested in learning more about the history and founding of CBPP because:
- It appeared to be a successful institution that funders played a central role in helping to establish.
- Relative to many other think tanks, CBPP appears to have an unusually clear and concrete track record.
- We’ve supported CBPP’s Full Employment Project.
The report’s account of CBPP’s history and founding mirrors, in many ways, that of the Center for Global Development (the subject of another upcoming case study). In particular:
May 6, 2016
We’re planning to make potential risks from advanced artificial intelligence a major priority in 2016. A future post will discuss why; this post gives some background.
- I first give our definition of “transformative artificial intelligence,” our term for a type of potential advanced artificial intelligence we find particularly relevant for our purposes. Roughly and conceptually, transformative AI refers to potential future AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution. I also provide (below) a more detailed definition. The concept of “transformative AI” has some overlap with concepts put forth by others, such as “superintelligence” and “artificial general intelligence.” However, “transformative AI” is intended to be a more inclusive term, leaving open the possibility of AI systems that count as “transformative” despite lacking many abilities humans have.
- I then discuss the question of whether, and when, we might expect transformative AI to be developed. This question has many properties (long timelines, relatively vague concepts, lack of detailed public analysis) I associate with developments that are nearly impossible to forecast, and I don’t think it is possible to make high-certainty forecasts on the matter. With that said, I am comfortable saying that I think there is a nontrivial likelihood (at least 10% with moderate robustness, and at least 1% with high robustness) of transformative AI within the next 20 years. I can’t feasibly share all of the information that goes into this view, but I try to outline the general process I have followed to reach it.
- Finally, I briefly discuss whether there are other potential future developments that seem to have similar potential for impact on similar timescales to transformative AI, in order to put our interest in AI in context.
The ideas in this post overlap with some arguments made by others, but I think it is important to lay out the specific views on these issues that I endorse. Note that this post is confined in scope to the above topics; it does not, for example, discuss potential risks associated with AI or potential measures for reducing them. I will discuss the latter topics more in the future.