The Open Philanthropy Blog

Our grantmaking decisions rely crucially on our uncertain, subjective judgments — about the quality of some body of evidence, about the capabilities of our grantees, about what will happen if we make a certain grant, about what will happen if we don’t make that grant, and so on.

In some cases, we need to make judgments about relatively tangible outcomes in the relatively near future, as when we have supported campaigning work for criminal justice reform. In others, our work relies on speculative forecasts about the much longer term, as for example with potential risks from advanced artificial intelligence. We often try to quantify our judgments in the form of probabilities — for example, the former link estimates a 20% chance of success for a particular campaign, while the latter estimates a 10% chance that a particular sort of technology will be developed in the next 20 years.
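As an aside for readers wondering what accuracy means for one-off probability judgments like these: once recorded, such forecasts can be scored against outcomes using a proper scoring rule. Below is a minimal Python sketch using the Brier score, one standard such rule; it is purely illustrative, with hypothetical numbers, and is not a scoring method the post commits to.

```python
# Illustrative sketch: the Brier score is one standard way to measure the
# accuracy of probabilistic forecasts once outcomes are known. Hypothetical
# numbers; not a scoring rule endorsed in the post. Lower is better: a
# perfect forecaster scores 0.0, and always guessing 50% scores 0.25.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and realized outcomes.

    forecasts: probabilities in [0, 1], e.g. 0.2 for a "20% chance"
    outcomes:  1 if the event happened, 0 if it did not
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# E.g., a 20% forecast for an event that did not occur, and a 10% forecast
# for one that did:
print(brier_score([0.2, 0.1], [0, 1]))  # 0.425
```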

We think it’s important to improve the accuracy of our judgments and forecasts if we can. I’ve been working on a project to explore whether there is good research on the general question of how to make good and accurate forecasts, and/or specialists in this topic who might help us do so. Some preliminary thoughts follow.

In brief:

Read More

In February, out of concern that the US is experiencing a new crime wave, I blogged about a data set Open Phil assembled on crime in major American cities. In comparison with the FBI’s widely cited national totals, our data covered far less territory—18 cities for which we found daily incident data—but did better in the time dimension, with higher resolution and more up-to-date counts. We could compute daily totals, and from data sets that for many cities are almost literally up-to-the-minute.

Some places that have recently made national crime news also appear in our data, including Baltimore, Chicago, St. Louis, and Washington, DC. Within our geographic scope, we gain a better view into the latest trends than we can get from the FBI’s annual totals, which appear with a long lag.

Indeed, the FBI will probably release its 2015 crime totals in the next few days, which may stoke discussion about crime in the US. [Update: it just did.]

In this post, I update all the graphs presented in the earlier one, which I suggest you read first. These updates generate predictions about what the FBI will announce, and perhaps point to one trend that it won’t yet discern.

With 8 more months of data on these 18 cities, plus the addition of New York for 2006–15, the main updates on per-capita crime rates are:

  • On a population-weighted basis, the hints in the old post of decline at the end of 2015, in violent crime in general and homicide in particular, have faded—or at least have been pushed forward in time.
  • Instead, after the homicide rise of late 2014 and 2015—which indeed was one of the largest increases in modern times—the homicide trend has flattened.
  • Violent crime rose slowly, as it has since mid-2014. It remains low historically, down roughly a third since 2001.
  • Property crime (burglary, theft, arson) continues to sink like a stone.

If our data capture national trends (which is far from certain), then the FBI will soon report that the 2015 homicide rate rose a lot from 2014, that violent crime rose a little bit, that property crime fell, and that total crime, which is dominated in sheer quantity by property crime, also fell. [Update: these look right.]
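To make the population-weighted computation concrete: weighting each city’s per-capita rate by its population works out to the same thing as pooling all the cities’ incident counts and dividing by their combined population. Here is a minimal sketch of that calculation, with hypothetical numbers and made-up column names; the real data and code are linked below.

```python
# Minimal sketch of a population-weighted per-capita crime rate, using
# hypothetical daily incident counts and approximate city populations.
# The real data and code are linked in the post; this only shows the idea.
import pandas as pd

# Hypothetical input: one row per city per day.
incidents = pd.DataFrame({
    "city":      ["Chicago", "Chicago", "St. Louis", "St. Louis"],
    "date":      pd.to_datetime(["2016-06-01", "2016-06-02"] * 2),
    "homicides": [2, 1, 1, 0],
})
population = pd.Series({"Chicago": 2_720_000, "St. Louis": 315_000})  # approximate

# Weighting each city's rate by its population is algebraically identical
# to pooling counts and dividing by the combined population.
daily_total = incidents.groupby("date")["homicides"].sum()
daily_rate_per_100k = daily_total / population.sum() * 100_000
print(daily_rate_per_100k * 365)  # annualized homicides per 100,000 residents
```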

Here are the Open Phil graphs, updated through a few weeks ago and starting with homicide (data and code here):

[Graph: homicides per capita (Homicide-pop.png)]

Read More

Our goal is to give blog readers and followers of the Open Philanthropy Project an opportunity to publicly raise comments or questions about the Open Philanthropy Project or related topics (in the comments section below). As always, you’re also welcome to email us at info@openphilanthropy.org if there are questions or feedback you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

You can see our previous open thread here.

Read More

One of our core values is sharing what we’re learning. We envision a world in which philanthropists increasingly discuss their research, reasoning, results, and mistakes publicly to help each other learn more quickly and serve others more effectively.

However, we think there has been confusion (including in our own heads) between the above idea and a related one: the idea that philanthropists should share and explain their thinking near-comprehensively, so that the reasoning behind every decision can be understood and critiqued.

Such near-comprehensive information sharing is an appropriate goal for GiveWell, which exists primarily to make recommendations to the public, and emphasizes the transparency of these recommendations as a key reason to follow them. (See GiveWell’s approach to transparency.)

However, we now feel it is not an appropriate goal for the Open Philanthropy Project, whose mission is to give as effectively as we can and share our findings openly so that anyone can build on our work. For our mission, it seems more appropriate to aim for extensive information sharing (well in excess of what other funders currently do) but not to aim for near-comprehensiveness.

This distinction has become more salient to us as our picture of the costs and benefits of information sharing has evolved. This post lays out that evolution, and some changes we plan to make going forward. In brief:

Read More

Philanthropy, especially hits-based philanthropy, is driven by a large number of judgment calls. At the Open Philanthropy Project, we’ve explicitly designed our process to put major weight on the views of individual leaders and program officers in decisions about the strategies we pursue, causes we prioritize, and grants we ultimately make. As such, we think it’s helpful for individual staff members to discuss major ways in which their personal thinking has changed, not only about particular causes and grants, but also about their background worldviews.

I recently wrote up a relatively detailed discussion of how my personal thinking has changed about three interrelated topics: (1) the importance of potential risks from advanced artificial intelligence, particularly the value alignment problem; (2) the potential of many of the ideas and people associated with the effective altruism community; (3) the properties to look for when assessing an idea or intervention, and in particular how much weight to put on metrics and “feedback loops” compared to other properties. My views on these subjects have changed fairly dramatically over the past several years, contributing to a significant shift in how we approach them as an organization.

I’ve posted my full writeup as a personal Google doc. A summary follows.

Read More

In the last few months, we’ve welcomed several new team members:

  • Jaime Yassif has joined as a Program Officer, leading our work on Biosecurity and Pandemic Preparedness. Previously, Jaime was a Science & Technology Policy Advisor at the U.S. Department of Defense, where she focused on oversight of the Cooperative Threat Reduction Program. During this period, she also worked on the Global Health Security Agenda at the Department of Health and Human Services.
  • Chris Somerville has joined as a Scientific Advisor, and will be leading much of our work on scientific research. Chris has been a professor at UC Berkeley, Stanford University, and Michigan State University.
  • Chris will be working with two other new full-time Scientific Advisors: Heather Youngs (joining in August), formerly director of the Bakar Fellows Program for faculty entrepreneurs at UC Berkeley, and Daniel Martin Alarcon, who recently received his PhD in Biological Engineering under Professor Ed Boyden.

These hires conclude long-running searches, and we’re very excited to have Jaime, Chris, Heather and Daniel on board.

Read More

Earlier this week, Google Research (in collaboration with scientists at OpenAI, Stanford and Berkeley) released Concrete Problems in AI Safety, which outlines five technical problems related to accident risk in AI systems. Four of the authors are friends and technical advisors of the Open Philanthropy Project.

We’re very excited about this paper. We highly recommend it to anyone looking to get a sense of the tractability of reducing potential risks from advanced AI (a cause we’ve previously written about), as well as of what sorts of research we would be most excited to fund in this cause.

Read More

Ben Soskis, a consultant who has been working with us as part of our History of Philanthropy project, recently finished a case study on the founding and growth of the Center for Global Development (CGD), a think tank that conducts research on and promotes improvements to rich-world policies that affect the global poor.

We are very impressed with CGD and its accomplishments. Earlier this year, we announced a $3 million grant to CGD. Our writeup includes a review of CGD’s track record, concluding that “it seems reasonable to us to estimate that CGD has produced at least 10 times as much value for the global poor as it has spent, though we consider this more of a rough lower bound than an accurate estimate.”

CGD appears to be an example of a funder-initiated startup: philanthropist Ed Scott was the driving force behind the original high-level concept, and he recruited Nancy Birdsall to be CGD’s leader.

There are a number of similarities between this case study and our recent case study on the Center on Budget and Policy Priorities (CBPP). In particular, both cases involved a funder exploring an extensive personal network, finding a strong leader for the organization they envisioned, committing flexible support up-front to that leader, and then trusting the leader with a lot of autonomy in creating the organization (while retaining a high level of personal involvement throughout the early years of the organization). However, there are a couple of major points of contrast: the crowdedness of the landscape and the amount of money committed.

Amount committed: In the case of CBPP, the founding funder “committed $175,000 in the first year [1981] and $150,000 in the second year, and asked CBPP to find other supporters so it could reduce its support after that point.” In the case of CGD, the founding funder committed $25 million in 2001, to be disbursed over five years.

Read More

Suzanne Kahn, a consultant who has been working with us as part of our History of Philanthropy project, recently finished a case study on the founding and growth of the Center on Budget and Policy Priorities (CBPP), a well-regarded D.C. think tank that focuses on tax and budget policy with an aim of improving outcomes for low-income people.

We were interested in learning more about the history and founding of CBPP because:

The report’s account of CBPP’s history and founding mirrors, in many ways, that of the Center for Global Development (the subject of another upcoming case study). In particular:

Read More

We’re planning to make potential risks from artificial intelligence a major priority this year. We feel this cause presents an outstanding philanthropic opportunity — with extremely high importance, high neglectedness, and reasonable tractability (our three criteria for causes) — for someone in our position. We believe that the faster we can get fully up to speed on key issues and explore the opportunities we currently see, the faster we can lay the groundwork for informed, effective giving both this year and in the future.

With all of this in mind, we’re placing a larger “bet” on this cause this year than we are placing even on our other focus areas — not necessarily in terms of funding (we aren’t sure we’ll identify very large funding opportunities this year, and are more focused on laying the groundwork for future years), but in terms of senior staff time, which at this point is a scarcer resource for us. Consistent with our philosophy of hits-based giving, we are doing this not because we have confidence in how the future will play out and how we can impact it, but because we see a risk worth taking. In about a year, we’ll formally review our progress and reconsider how senior staff time is allocated.

This post will first discuss why I consider this cause to be an outstanding philanthropic opportunity. (My views are fairly representative, but not perfectly representative, of those of other staff working on this cause.) It will then give a broad outline of our planned activities for the coming year, some of the key principles we hope to follow in this work, and some of the risks and reservations we have about prioritizing this cause as highly as we are.

In brief:

  • It seems to me that artificial intelligence is currently on a very short list of the most dynamic, unpredictable, and potentially world-changing areas of science. I believe there’s a nontrivial probability that transformative AI will be developed within the next 20 years, with enormous global consequences.
  • By and large, I expect the consequences of this progress — whether or not transformative AI is developed soon — to be positive. However, I also perceive risks. Transformative AI could be a very powerful technology, with potentially globally catastrophic consequences if it is misused or if there is a major accident involving it. Because of this, I see this cause as having extremely high importance (one of our key criteria), even while accounting for substantial uncertainty about the likelihood of developing transformative AI in the coming decades and about the size of the risks. I discuss the nature of potential risks below; note that I think they do not apply to today’s AI systems.
  • I consider this cause to be highly neglected in important respects. There is a substantial and growing field around artificial intelligence and machine learning research, but most of it is not focused on reducing potential risks. We’ve put substantial work into trying to ensure that we have a thorough picture of the landscape of researchers, funders, and key institutions whose work is relevant to potential risks from advanced AI. We believe that the amount of work being done is well short of what it productively could be (despite recent media attention); that philanthropy could be helpful; and that the activities we’re considering wouldn’t be redundant with those of other funders.
  • I believe that there is useful work to be done today in order to mitigate future potential risks. In particular, (a) I think there are important technical problems that can be worked on today, that could prove relevant to reducing accident risks; (b) I preliminarily feel that there is also considerable scope for analysis of potential strategic and policy considerations.
  • More broadly, the Open Philanthropy Project may be able to help support an increase in the number of people (particularly people with strong relevant technical backgrounds) thinking through how to reduce potential risks, which could be important in the future even if the work done in the short term does not prove essential. I believe that one of the things philanthropy is best-positioned to do is provide steady, long-term support as fields and institutions grow.
  • I consider this a challenging cause. I think it would be easy to do harm while trying to do good. For example, trying to raise the profile of potential risks could contribute (and, I believe, has contributed to some degree) to non-nuanced or inaccurate portrayals of risk in the media, which in turn could raise the risks of premature and/or counterproductive regulation. I consider the Open Philanthropy Project relatively well-positioned to work in this cause while being attentive to pitfalls, and to deeply integrate people with strong technical expertise into our work.
  • I see much room for debate in the decision to prioritize this cause as highly as we are. However, I think it is important that a philanthropist in our position be willing to take major risks, and prioritizing this cause is a risk that I see as very worth taking.

My views on this cause have evolved considerably over time. I will discuss the evolution of my thinking in detail in a future post, but this post focuses on the case for prioritizing this cause today.

Read More
