The Open Philanthropy Blog

Note: in this post, “we” refers to the Open Philanthropy Project. I use “I” for cases where I am going into detail on thoughts of mine that don’t necessarily reflect the views of the Open Philanthropy Project as such, though they have factored into our decision-making.

Last year, we wrote about the question:

Once we have investigated a potential grant, how do we decide where the bar is for recommending it? With all the uncertainty about what we’ll find in future years, how do we decide when grant X is better than saving the money and giving later?

(The full post is here; note that it is on the GiveWell website because we had not yet launched the Open Philanthropy Project website.)

In brief, our answer was to consider both:

  • An overall budget for the year, which we set at 5% of available capital. This left room to give a lot more than we gave last year.
  • A benchmark. We determined that we would recommend giving opportunities when they seemed like a better use of money than direct cash transfers to the lowest-income people possible, as carried out by GiveDirectly, subject to some other constraints (staying within the budget indicated above, having done enough investigation for an informed decision, and some other complicating factors and adjustments). A toy version of this rule is sketched just after this list.
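
As a rough illustration of how the budget and benchmark combine, here is a minimal sketch of the decision rule; all names and numbers are made up, and this is a simplification of our actual process:

    # Sketch of the recommendation rule described above. All names and
    # numbers are illustrative; value_per_dollar is normalized so that
    # GiveDirectly-style cash transfers score 1.0.
    def should_recommend(value_per_dollar, cost, spent_so_far,
                         available_capital, well_investigated):
        annual_budget = 0.05 * available_capital       # the 5% budget above
        beats_benchmark = value_per_dollar > 1.0       # better than cash transfers
        within_budget = spent_so_far + cost <= annual_budget
        return beats_benchmark and within_budget and well_investigated

    # Example: a $2M grant judged 3x as good per dollar as cash transfers,
    # with $40M already committed out of $1B of available capital.
    print(should_recommend(3.0, 2e6, 40e6, 1e9, True))  # True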

This topic is particularly important when deciding how much to recommend that Good Ventures donate to GiveWell’s top charities. It is also becoming more important overall because our staff capacity and total giving have grown significantly this year. Changing the way we think about the “bar for recommending a grant” could potentially change decisions about tens of millions of dollars’ worth of giving.

We have put some thought into this topic since last year, and our thinking has evolved noticeably. This post outlines our current views, while also noting that I believe we put less thought into this question in 2016 than we should have, and we hope to do more in 2017.

Read More

Last year, we published a set of suggestions for individual donors looking for organizations to support. This year, we are repeating the practice and publishing updated suggestions from Open Philanthropy Project staff who chose to provide them.

The same caveats as last year apply:

Read More

In principle, we try to find the best giving opportunities by comparing many possibilities. However, many of the comparisons we’d like to make hinge on very debatable, uncertain questions.

For example:

  • Some people think that animals such as chickens have essentially no moral significance compared to that of humans; others think that they should be considered comparably important, or at least 1-10% as important. If you accept the latter view, farm animal welfare looks like an extraordinarily promising cause, potentially to the point of dominating other options: billions of chickens are treated incredibly cruelly each year on factory farms, and we estimate that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. But if you accept the former view, this work is arguably a poor use of money.
  • Some have argued that the majority of our giving’s impact will come via its effects on the long-term future. If true, this could be an argument that reducing global catastrophic risks has overwhelming importance, or that accelerating scientific research does, or that improving the overall functioning of society via policy does. Given how difficult it is to make predictions about the long-term future, it’s very hard to compare work in any of these categories to evidence-backed interventions serving the global poor.
  • We have additional uncertainty over how we should resolve these sorts of uncertainty. We could try to quantify our uncertainties using probabilities (e.g. “There’s a 10% chance that I should value chickens 10% as much as humans”), and arrive at a kind of expected value calculation for each of many broad approaches to giving; a toy version of such a calculation is sketched just after this list. But most of the parameters in such a calculation would be very poorly grounded and non-robust, and it’s unclear how much weight to give calculations with that property. In addition, such a calculation would run into challenges around normative uncertainty (uncertainty about morality), and it’s quite unclear how to handle such challenges.
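
To make the expected-value idea in the last bullet concrete, here is a toy version of such a calculation; the probabilities and moral weights are made-up placeholders, and only the 200-hens-per-dollar figure comes from the estimate above:

    # Toy expected-value calculation across worldviews. Each entry pairs a
    # probability that a worldview is right with the moral weight it assigns
    # to a chicken relative to a human. All numbers are illustrative.
    worldviews = [
        (0.5, 0.00),   # chickens have essentially no moral significance
        (0.4, 0.01),   # chickens matter ~1% as much as humans
        (0.1, 0.10),   # chickens matter ~10% as much as humans
    ]
    hens_spared_per_dollar = 200   # corporate-campaign estimate cited above

    expected_weight = sum(p * w for p, w in worldviews)   # 0.014
    # Expected "human-equivalent" benefit per dollar of corporate campaigns,
    # glossing over how to compare cage confinement with human welfare at all.
    print(expected_weight * hens_spared_per_dollar)       # 2.8

As the third bullet notes, nearly every parameter here is poorly grounded and non-robust; the sketch only shows how small probabilities on high moral weights can end up dominating the result.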

In this post, I’ll use “worldview” to refer to a set of highly debatable (and perhaps impossible to evaluate) beliefs that favor a certain kind of giving. One worldview might imply that evidence-backed charities serving the global poor are far more worthwhile than either of the types of giving discussed above; another might imply that farm animal welfare is; another might imply that global catastrophic risk reduction is. A given worldview represents a combination of views, sometimes very difficult to disentangle, such that uncertainty between worldviews is constituted by a mix of empirical uncertainty (uncertainty about facts), normative uncertainty (uncertainty about morality), and methodological uncertainty (e.g. uncertainty about how to handle uncertainty, as laid out in the third bullet point above). Some slightly more detailed descriptions of example worldviews are in a footnote.[1]

A challenge we face is that we consider multiple different worldviews plausible. We’re drawn to multiple giving opportunities that some would consider outstanding and others would consider relatively low-value. We have to decide how to weigh different worldviews, as we try to do as much good as possible with limited resources.

When deciding between worldviews, there is a case to be made for simply taking our best guess[2] and sticking with it. If we did this, we would focus exclusively on animal welfare, or on global catastrophic risks, or on global health and development, or on another category of giving, with no attention to the others. However, that’s not the approach we’re currently taking.

Instead, we’re practicing worldview diversification: putting significant resources behind each worldview that we find highly plausible. We think it’s possible for us to be a transformative funder in each of a number of different causes, and we don’t - as of today - want to pass up that opportunity by focusing exclusively on one cause and accepting rapidly diminishing returns.

Read More

Our grantmaking decisions rely crucially on our uncertain, subjective judgments — about the quality of some body of evidence, about the capabilities of our grantees, about what will happen if we make a certain grant, about what will happen if we don’t make that grant, and so on.

In some cases, we need to make judgments about relatively tangible outcomes in the relatively near future, as when we have supported campaigning work for criminal justice reform. In others, our work relies on speculative forecasts about the much longer term, as for example with potential risks from advanced artificial intelligence. We often try to quantify our judgments in the form of probabilities — for example, the former link estimates a 20% chance of success for a particular campaign, while the latter estimates a 10% chance that a particular sort of technology will be developed in the next 20 years.
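
One common way to assess such probability judgments after the fact is a scoring rule like the Brier score. A minimal sketch, using made-up forecasts rather than our actual ones:

    # Brier score: mean squared error between probabilistic forecasts and
    # binary outcomes. 0 is perfect; always guessing 50% scores 0.25.
    forecasts = [0.20, 0.10, 0.70]   # e.g. "20% chance this campaign succeeds"
    outcomes = [1, 0, 1]             # what actually happened (1 = it occurred)

    brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
    print(round(brier, 3))           # 0.247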

We think it’s important to improve the accuracy of our judgments and forecasts if we can. I’ve been working on a project to explore whether there is good research on the general question of how to make good and accurate forecasts, and/or specialists in this topic who might help us do so. Some preliminary thoughts follow.

In brief:

Read More

In February, out of concern that the US might be experiencing a new crime wave, I blogged about a data set Open Phil assembled on crime in major American cities. In comparison with the FBI’s widely cited national totals, our data covered far less territory—18 cities for which we found daily incident data—but did better in the time dimension, with higher resolution and more up-to-date counts. We could compute daily totals, and from data sets that for many cities are almost literally up-to-the-minute.
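
To show the kind of aggregation involved, here is a sketch with hypothetical column names and figures (the actual data and code are linked at the end of this post):

    import pandas as pd

    # Hypothetical incident-level data: one row per reported crime.
    incidents = pd.DataFrame({
        "city": ["Chicago", "Chicago", "Baltimore"],
        "date": pd.to_datetime(["2016-09-01", "2016-09-01", "2016-09-02"]),
        "type": ["homicide", "theft", "homicide"],
    })
    population = {"Chicago": 2_705_000, "Baltimore": 615_000}

    # Daily totals per city, then per-capita rates per 100,000 residents.
    daily = incidents.groupby(["city", "date"]).size().rename("count").reset_index()
    daily["per_100k"] = daily.apply(
        lambda r: 100_000 * r["count"] / population[r["city"]], axis=1)

    # Population-weighted rate across cities: total incidents over total
    # population (days with no recorded incidents in a city count as zero).
    weighted = 100_000 * daily.groupby("date")["count"].sum() / sum(population.values())
    print(weighted)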

Some places that have recently made national crime news also appear in our data, including Baltimore, Chicago, St. Louis, and Washington, DC. Within our geographic scope, we gain a better view into the latest trends than we can get from the FBI’s annual totals, which appear with a long lag.

Indeed the FBI will probably release its 2015 crime totals in the next few days, which may stoke discussion about crime in the US. [Update: it just did].

In this post, I update all the graphs presented in the earlier one, which I suggest you read first. These updates generate predictions about what the FBI will announce, and perhaps point to one trend that it won’t yet discern.

With 8 more months of data on these 18 cities, plus the addition of New York for 2006–15, the main updates on per-capita crime rates are:

  • On a population-weighted basis, the hints in the old post of decline at the end of 2015, in violent crime in general and homicide in particular, have faded—or at least have been pushed forward in time.
  • Instead, after the homicide rise of late 2014 and 2015—which indeed was one of the largest increases in modern times—the homicide trend has flattened.
  • Violent crime rose slowly, as it has since mid-2014. It remains low historically, down roughly a third since 2001.
  • Property crime (burglary, theft, arson) continues to sink like a stone.

If our data capture national trends (which is far from certain), then the FBI will soon report that the 2015 homicide rate rose a lot from 2014, that violent crime rose a little bit, that property crime fell, and that total crime, which is dominated in sheer quantity by property crime, also fell. [Update: these look right.]

Here are the Open Phil graphs, updated through a few weeks ago and starting with homicide (data and code here):

[Figure: homicide rate per capita]

Read More

Our goal is to give blog readers and followers of the Open Philanthropy Project an opportunity to publicly raise comments or questions about the Open Philanthropy Project or related topics (in the comments section below). As always, you’re also welcome to email us at info@openphilanthropy.org if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

You can see our previous open thread here.

Read More

One of our core values is sharing what we’re learning. We envision a world in which philanthropists increasingly discuss their research, reasoning, results and mistakes publicly to help each other learn more quickly and serve others more effectively.

However, we think there has been confusion - including in our own heads - between the above idea and a related one: the idea that philanthropists should share and explain their thinking near-comprehensively so that the reasoning behind every decision can be understood and critiqued.

Such near-comprehensive information sharing is an appropriate goal for GiveWell, which exists primarily to make recommendations to the public, and emphasizes the transparency of these recommendations as a key reason to follow them. (See GiveWell’s approach to transparency.)

However, we now feel it is not an appropriate goal for the Open Philanthropy Project, whose mission is to give as effectively as we can and share our findings openly so that anyone can build on our work. For our mission, it seems more appropriate to aim for extensive information sharing (well in excess of what other funders currently do) but not to aim for near-comprehensiveness.

This distinction has become more salient to us as our picture of the costs and benefits of information sharing has evolved. This post lays out that evolution, and some changes we plan to make going forward. In brief:

Read More

Philanthropy - especially hits-based philanthropy - is driven by a large number of judgment calls. At the Open Philanthropy Project, we’ve explicitly designed our process to put major weight on the views of individual leaders and program officers in decisions about the strategies we pursue, causes we prioritize, and grants we ultimately make. As such, we think it’s helpful for individual staff members to discuss major ways in which our personal thinking has changed, not only about particular causes and grants, but also about our background worldviews.

I recently wrote up a relatively detailed discussion of how my personal thinking has changed about three interrelated topics: (1) the importance of potential risks from advanced artificial intelligence, particularly the value alignment problem; (2) the potential of many of the ideas and people associated with the effective altruism community; (3) the properties to look for when assessing an idea or intervention, and in particular how much weight to put on metrics and “feedback loops” compared to other properties. My views on these subjects have changed fairly dramatically over the past several years, contributing to a significant shift in how we approach them as an organization.

I’ve posted my full writeup as a personal Google doc. A summary follows.

Read More

In the last few months, we’ve welcomed several new team members:

  • Jaime Yassif has joined as a Program Officer, leading our work on Biosecurity and Pandemic Preparedness. Previously, Jaime was a Science & Technology Policy Advisor at the U.S. Department of Defense, where she focused on oversight of the Cooperative Threat Reduction Program. During this period, she also worked on the Global Health Security Agenda at the Department of Health and Human Services.
  • Chris Somerville has joined as a Scientific Advisor, and will be leading much of our work on scientific research. Chris has been a professor at UC Berkeley, Stanford University and Michigan State University.
  • Chris will be working with two other new full-time Scientific Advisors: Heather Youngs (joining in August), formerly director of the Bakar Fellows Program for faculty entrepreneurs at UC Berkeley, and Daniel Martin Alarcon, who recently got his PhD in Biological Engineering under Professor Ed Boyden.

These hires conclude long-running searches, and we’re very excited to have Jaime, Chris, Heather and Daniel on board.

Read More

Earlier this week, Google Research (in collaboration with scientists at OpenAI, Stanford and Berkeley) released Concrete Problems in AI Safety, which outlines five technical problems related to accident risk in AI systems. Four of the authors are friends and technical advisors of the Open Philanthropy Project.

We’re very excited about this paper. We highly recommend it to anyone looking to get a sense for the tractability of reducing potential risks from advanced AI (a cause we’ve previously written about) - as well as for what sorts of research we would be most excited to fund in this cause.

Read More
