Note: in this post, “we” refers to the Open Philanthropy Project. I use “I” for cases where I am going into detail on thoughts of mine that don’t necessarily reflect the views of the Open Philanthropy Project as such, though they have factored into our decision-making.

Last year, we wrote about the question:

Once we have investigated a potential grant, how do we decide where the bar is for recommending it? With all the uncertainty about what we’ll find in future years, how do we decide when grant X is better than saving the money and giving later?

(The full post is here; note that it is on the GiveWell website because we had not yet launched the Open Philanthropy Project website.)

In brief, our answer was to consider both:

  • An overall budget for the year, which we set at 5% of available capital. This left room to give a lot more than we gave last year.
  • A benchmark. We determined that we would recommend giving opportunities when they seemed like a better use of money than direct cash transfers to the lowest-income people possible, as carried out by GiveDirectly, subject to some other constraints (being within the budget indicated above, having done enough investigation for an informed decision, and some other complicating factors and adjustments).

This topic is particularly important when deciding how much to recommend that Good Ventures donate to GiveWell’s top charities. It is also becoming more important overall because our staff capacity and total giving have grown significantly this year. Changing the way we think about the “bar for recommending a grant” could potentially change decisions about tens of millions of dollars’ worth of giving.

We have put some thought into this topic since last year, and our thinking has evolved noticeably. This post outlines our current views, while also noting that I believe we failed to put as much thought into this question as we should have in 2016, and that we hope to do more in 2017.

In brief:

  • We are still using a budget of 5% of available capital. (The percentage will rise at some point in the future.) Current grantmaking is still well short of our budget for the year, though grantmaking will be much higher for 2016 than it was for 2015.
  • We have a great deal of uncertainty about the value of giving later. We could imagine that funds we save and give later will end up doing much less good than donations to GiveWell’s highest rated top charities would - or much more. On balance, our very tentative, unstable guess is that the “last dollar” we will give (from the pool of currently available capital) has higher expected value than gifts to GiveWell’s top charities today. This is a notable change from last year’s position, and our position could easily change again in the near future.
  • There are a number of other factors we take into account in setting the budgets for different causes and deciding which grants to make:
    • We practice worldview diversification: putting significant resources behind each worldview that we find highly plausible. We do this for a number of reasons, enumerated at our blog post on the subject. One such reason was also highlighted in last year’s post on giving now vs. later: we are willing to make grants with less concern for cost-effectiveness when they contribute to our knowledge/capacity building. As such, when we are in doubt (as we usually are) about how to compare grants between one cause and another, we often give substantially in each, up to the point of major diminishing returns.
    • Though much of our giving is driven by a hits-based approach - emphasizing boldness and embracing unconventional opportunities - we also have some elements of conservatism in our grantmaking. We avoid big, irrevocable decisions - particularly the kind that would commit us to highly unusual and potentially regret-prone behavior, and/or might interfere with others’ planning - when we can’t ground them in thoughtful analysis and reasonably stable conclusions. This is an important consideration in thinking through this year’s giving level for GiveWell’s top charities.
    • We consider the fact that different grants have different non-monetary costs, in terms of the risk of creating distractions that can cost staff time.
    • As noted last year, we consider the effect of our giving on other donors’ behavior. We do not want to be in the habit of – or gain a reputation for – fully closing the funding gap of every strong giving opportunity we see. In the long run, we feel this would create incentives for other donors to avoid the causes and grants we’re interested in; this, in turn, could lead to a much lower-than-optimal amount of total donor interest in the things we find most promising.

At the end of this post, we discuss this year’s recommendation to Good Ventures regarding gifts to GiveWell’s top charities, and our plans for thinking through these issues more thoroughly in the coming year.

Update on basic criteria: budget and benchmark

Budget

Last year, we wrote:

For now – since we still have so much progress to make in terms of learning and capacity building – we think it makes sense to err on the side of recommending grants totaling a relatively small percentage of the available capital. We think the idea of a “relatively small percentage” maps intuitively to 5% … As long as total giving is below 5%/year, we’ll be happy to recommend more (subject to the benchmark below).

This is still our position, and we still expect giving for the year to come in below this budget, though it will be significantly higher than it was last year. Specifically, we expect to have over $100 million worth of grants for which the investigation is completed this year (with a recommendation made either this year or early in 2017). (A comparable figure for last year would have been under $20 million.)

Benchmark

Last year, we gave a preliminary look at what we’re now calling the “last dollar” question: when we choose to save money instead of granting it, such that it ultimately effectively adds to the giving we do much later on (when we’re exhausting the available capital), how cost-effective should we expect that saved money (the “last dollar”) to be?

We laid out something of a lower bound on “last dollar” cost-effectiveness:

By default, we feel that any given grant of $X should look significantly better than making direct cash transfers (totaling $X) to people who are extremely low-income by global standards – abbreviated as “direct cash transfers.” We believe it will be possible to give away very large amounts, at any point in the next couple of decades, via direct cash transfers, so any grant that doesn’t meet this bar seems unlikely to be worth making …

It’s possible that this standard is too lax, since we might find plenty of giving opportunities in the future that are much stronger than direct cash transfers. However, at this early stage, it isn’t obvious how we will find several billion dollars’ worth of such opportunities, and so – as long as total giving remains within the 5% budget – we prefer to err on the side of recommending grants when we’ve completed an investigation and when they look substantially better than direct cash transfers …

When considering grants that will primarily benefit people in the U.S. (such as supporting work on criminal justice reform), benchmarking to direct cash transfers can be a fairly high standard … in considering grants that primarily benefit Americans, we look for a better than “100x return” in financial terms (e.g. increased income). Of course, there are always huge amounts of uncertainty in these comparisons, and we try not to take them too literally.

GiveWell’s current analysis implies that its top charities other than GiveDirectly solidly beat the benchmark laid out above. Their cost-effectiveness is estimated at 3-7x that of GiveDirectly, which provides direct cash transfers. (Though “3-7x” is probably an overstatement for a number of reasons, including considerations not accounted for in the cost-effectiveness calculations and general issues with cost-effectiveness analysis.) By last year’s reasoning, this implies that we should recommend that Good Ventures give as much as possible to these charities, subject to the budget limitation discussed above.

However, since last year’s post, we’ve continued to reflect on the “last dollar” question, though we haven’t made as much progress as we’d have liked to. We hope to do considerably more writing and thinking on this topic, but a few thoughts for now:

1. The framework we’ve been using to compare US-focused interventions with interventions targeting global health and development is extremely preliminary, and it may change a lot in the future. Carl Shulman has raised some considerations (all with fairly preliminary analysis) that could bear on the “100x” figure mentioned above (and elaborated in last year’s post), such as the fact that adding economic value to the world as a whole could indirectly help the global poor via its effects on research and development, foreign aid, etc.

2. One consideration that could change the framework a lot is the moral value of the far future: the idea that most of the people we can help (with our giving, our work, etc.) are people who haven’t been born yet. It could be argued that improving policy, advancing scientific research, and/or reducing global catastrophic risks matters more to the long-term future than the kinds of activities GiveWell’s top charities carry out. This, in turn, could mean that the kind of comparison described above - equating cash transfers to very low-income people with a “100x return” for US-focused activities - is inappropriate, and substantially understates the benefits of improving policy, advancing scientific research, and/or reducing global catastrophic risks.

3. My current views on both #1 and #2 are more preliminary and unstable than I would like. I could easily imagine that, at some point in the future, I might believe - particularly based on “moral value of the far future” arguments - that some particular area of giving (e.g., improving policy, advancing scientific research, or reducing global catastrophic risks) looks generally more promising than giving to GiveWell’s top charities. And I might come to believe that there are billions of dollars’ worth of giving opportunities in one or more of these areas that I would prefer over further support of GiveWell’s top charities. This seems particularly easy to imagine for scientific research, which seems capable of absorbing very large amounts of money.

4. Another major question mark for us is how to think about the moral significance of animals. As noted previously, our early work on farm animal welfare could - depending on the answer to this question - be considered far more cost-effective than giving to GiveWell’s top charities.

We are actively working on getting a better handle on how to weigh helping humans vs. animals, though it could take several years or more to start making significant progress (more on this in the future). Given the very large numbers of animals that are in horrible conditions, it could imaginably turn out to be the case that there are billions of dollars’ worth of animal-focused giving opportunities that we end up considering more beneficial than giving to GiveWell’s top charities. I wouldn’t guess that we’ll reach this conclusion, but it seems possible.

5. There are some causes that we might consider especially outstanding in the future, regardless of where we end up landing on the questions discussed above. For example, I currently think there is at least a 10% chance that transformative artificial intelligence will be developed in the next 20 years, and that anticipating and mitigating associated risks is an outstanding cause. Several years from now, it is possible that transformative AI will look more robustly imminent, and that there will be billions of dollars’ worth of opportunities to give for risk reduction. (So far, I haven’t seen many giving opportunities, but this could easily change - not least because we are actively trying to lay the groundwork for a larger ecosystem of people and organizations working in this area.) Similar considerations apply to biosecurity and pandemic preparedness: in the future, it’s possible that we will perceive a sufficiently large and urgent need on this front to favor directing billions of dollars toward it.

6. There are also “unknown unknowns”: it’s possible that some other type of giving opportunity will come to seem a better use of billions of dollars than giving to GiveWell’s top charities, for reasons I’m not anticipating now.

7. The points above list several possible ways in which we might later come to believe that there are billions of dollars’ worth of giving opportunities that we would prefer over further support of GiveWell’s top charities. I don’t think any one of them is highly likely to play out, but I think there are reasonable odds that at least one of them does. One very rough way of thinking about this is to imagine that there are four such possibilities (one featuring our views about the moral value of the far future; one featuring our views about the moral significance of helping animals; one featuring our estimated “room for more funding” for some outstanding cause such as potential risks from advanced AI or biosecurity and pandemic preparedness; and “unknown unknowns”), each with an independent 10% chance of leading to a “last dollar” that seems 5x as good as GiveWell’s current top charities. In aggregate, this would imply a 34% chance1 that there is some way to spend the “last dollar” 5x as well as GiveWell’s current top charities. This would imply that GiveWell’s current top charities should be considered less cost-effective in expectation than the “last dollar.”
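The arithmetic of the toy model above can be checked directly. This is a sketch of the illustrative calculation described in the text; the four possibilities, the 10% chances, and the 5x multiplier are the post’s own rough placeholders, not precise estimates.

```python
# Toy model from the text: four independent possibilities, each with a 10%
# chance of yielding a "last dollar" that is 5x as good as GiveWell's
# current top charities.
p_each = 0.10
n_possibilities = 4

# Probability that at least one of the four possibilities pans out.
p_at_least_one = 1 - (1 - p_each) ** n_possibilities
print(f"P(at least one) = {p_at_least_one:.4f}")  # 0.3439, i.e. ~34%

# Expected value of the "last dollar", in units where giving to
# GiveWell's current top charities = 1 (assuming the "last dollar" is
# worth 1x in the case where none of the possibilities pans out).
ev_last_dollar = p_at_least_one * 5 + (1 - p_at_least_one) * 1
print(f"E[last dollar] = {ev_last_dollar:.2f}x top charities")  # ~2.38x
```

Under these (very rough) assumptions, the expected value of the “last dollar” comes out well above 1x the current top charities, which is the sense in which the top charities look “less cost-effective in expectation than the ‘last dollar.’”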

8. There are additional considerations for the “last dollar” question, such as the question of how much total giving the Open Philanthropy Project will end up influencing.

Bottom line on the “last dollar” question

Overall, a thorough treatment of the “last dollar” question would involve thinking through all of these issues and more, yet our current views on these issues are extremely tentative, rough, and unstable. And many of the key questions seem very difficult to make much progress on in the near future, if ever.

Because of this, I have very low confidence in my working view on how good the “last dollar” is likely to be, and I expect my view to change quite a bit in the future. On balance, our very tentative, unstable guess is that the “last dollar” has higher expected value than gifts to GiveWell’s top charities today.

This is a notable change from last year’s position, and this stance could easily change again in the near future. Part of the change is due to the fact that we have an easier time picturing a scaled-up giving operation, as our giving has risen substantially compared to last year; it no longer seems as difficult to imagine that we might find billions of dollars’ worth of outstanding giving opportunities in some area. But most of the change simply comes from more reflection and discussion. I’ve found input from Nick Beckstead and Carl Shulman particularly helpful on this topic.

Last year’s model (quoted above) was to associate the “last dollar” with direct cash transfers, or a 100x return (in terms of economic value added to society per dollar spent). Since we estimate cost-effectiveness for many (though not all) grants in these terms, last year’s model often gave us fairly concrete guidance on which grant opportunities were above the bar. By contrast, our updated thinking on the “last dollar” seems to have very limited usefulness for deciding on particular grants, since so much of a grant’s value depends on fundamental questions that we’re deeply uncertain about.

One clear change is that the “last dollar” looks better (in terms of how much good one might expect that “last dollar” to accomplish) compared to a year ago. In isolation, this change would seem to point toward reducing today’s giving across the board in order to give more later, but it’s counteracted by other considerations discussed below.

Other factors in deciding when to make a grant

Worldview diversification

We recently discussed a number of benefits to worldview diversification. When facing a number of areas in which we might give - each of which looks like it could be outstanding, according to some plausible worldview - we see advantages to putting significant resources behind each. Some of the reasons for this have to do with maximizing good accomplished under conditions of high uncertainty. Others have to do with practical benefits for our mission and organization, such as capacity building and option value: putting ourselves in better position to ramp up giving in the future, in whatever causes we ultimately end up prioritizing.

In the absence of a clear benchmark (see previous section), we see value in following the rough heuristic outlined in the final section of the post on worldview diversification:

Currently, we tend to invest resources in each cause up to the point where it seems like there are strongly diminishing returns, or the point where it seems the returns are clearly worse than what we could achieve by reallocating the resources - whichever comes first.

Conservatism

We are a funder with major financial resources and few constraints or guidelines on how to allocate them. We are continually making high-stakes decisions based on intellectual frameworks that are (as discussed above) far from settled. These decisions reflect to some degree on all of the 20+ people who work for the Open Philanthropy Project, and can substantially affect grantees’ (and other donors’) long-term planning.

I believe that we need to move quickly, give substantially, and not let our uncertainty paralyze us - not only because of the immediate good we can do by giving, but also because this approach is (in my opinion) the fastest practical way to build the framework and staff we’ll need to become more informed on key questions over time. At the same time, there are some aspects of our work where I think it’s a good idea to be “conservative,” in the sense of avoiding big, irrevocable decisions - particularly the kind that would commit us to highly unusual and potentially regret-prone behavior, and/or might interfere with others’ planning - when we can’t ground them in thoughtful analysis and reasonably stable conclusions. A few particular ways in which we’re conservative follow.

Avoiding whiplash

As noted above, our very tentative, unstable guess is that the “last dollar” has higher expected value than gifts to GiveWell’s top charities today. That view has changed in the past, and it might change in the future. When I think the “last dollar” has higher expected value than GiveWell’s top charities, this would seem to imply that we should minimize gifts to these charities; when I think the “last dollar” has lower expected value than GiveWell’s top charities, this would seem to imply that we should maximize gifts to these charities. The difference between minimizing and maximizing could be in excess of $100 million per year.

Similar analysis could be applied to any cause we’re in. If we were to draw up the “ideal allocation between causes” according to our views each day on some of the thorny, fundamental questions laid out above, that allocation would be quite volatile: any given cause would probably see large allocations on some days and small allocations on others.

It seems to me that it would be a very bad policy to dramatically shift our giving allocation each time our views change. At the amounts we’re giving, this could make it extremely hard for others (grantees, other donors) to plan effectively, as well as sending confusing messages to people trying to understand our giving priorities. And we have a lot of work to do beyond determining how much we allocate to each cause; we don’t want to spend inordinate time on thinking about the allocation, so we shouldn’t put ourselves in a position where every new piece of information or insight compels us to urgently revisit the issue.

Accordingly, I think principles like the following are appropriate:

  • We revisit our allocations between causes periodically, generally around once a year, unless an unusually large new consideration pops up.
  • Once we have made an allocation decision, we default strongly to sticking with it until we next revisit it.
  • If we change our minds such that we want to reduce the allocation to some cause, we move slowly and deliberately in the direction of reducing our allocation - for example, holding our allocation steady for some time as we continue to consider the situation, then reducing it on a schedule - rather than slashing it quickly. This means that we’re honoring not only explicit but implicit commitments to causes, and makes it less difficult for people to plan around us. (Given that we’re currently under target giving, we have fewer reservations - though still a degree of caution - about quickly increasing the allocation to some cause.)
  • The more confident we are in a decision, the more quickly we move. By contrast, when our views seem very unstable and tentative (as they are on many questions relating to the value of the “last dollar”), we are more likely to move slowly in changing our behavior from the status quo, as described above.

Some considerations regarding conventional vs. unconventional giving

Some of our giving seems very in line with what I’d call “conventional” altruism. Gifts to GiveWell’s top charities, and to a lesser extent most of our work on US policy, consist of applying an unusually analytical approach to our goals, but the work is otherwise widely recognized as worthy, important, and altruistic. Some of our other work is more “unconventional”: the basic case for the work relies on many unusual premises, and there’s far less consensus - particularly from casual observers - on its worthiness.

I feel quite comfortable making big bets on unconventional work. But at this stage, given how uncertain I am about many key considerations, I would be uncomfortable if that were all we were doing. My views on the value of unconventional work are unstable; at times when it seems extremely cost-effective to me, I worry that this is because I’m misguided in a fundamental way. There are some people who would argue that this worry about being misguided can, itself, be assigned to a probability estimate and straightforwardly quantified from there. But I don’t accept this argument: I worry that explicit quantification is not a good methodology for handling worries like this one.2 It intuitively seems more sensible to “compromise” - by dividing resources between unconventional and conventional work - than to allocate near-exclusively to one or the other using my unstable and non-robust expected-value estimates.

I’m not sure this intuition is reasonable, and I plan to continue examining it. For now, I am following it, for several reasons:

  • I think that cutting back too heavily on support for “conventional” causes would have real and potentially irrevocable costs for our work. I think it would communicate a level of certainty and conviction, regarding the relative value of the conventional and unconventional work, that I don’t have.
  • I feel some non-consequentialist pull toward the “compromise” position. I generally believe in trying to be an ethical person by a wide variety of different ethical standards (not all of which are consequentialist). If I were giving away billions of dollars during my lifetime (the hypothetical I generally use to generate recommendations), I would feel that this goal would call for some significant giving to things on the more conventional side of the spectrum. “Significant” need not mean “exclusive” or anything close to it. But I wouldn’t feel that I was satisfying my desired level of personal morality if I were giving $0 (or a trivial amount) to known, outstanding opportunities to help the less fortunate, in order to save as much money as possible for more speculative projects relating to e.g. artificial intelligence.
  • I haven’t yet seen a formal approach I find satisfying and compelling for questions like “How should I behave when I perceive a significant risk that I’m badly misguided in a fundamental way?” Because of this, I haven’t felt compelled enough by arguments along the lines of “You should focus on unconventional work, since it appears to be higher expected value” to overcome the above points in favor of a compromise position. I plan further discussion and reflection on this point, in particular.

The upshot is that even at times when it seems to me that unconventional causes present much more “expected good accomplished” than conventional ones, I want to recommend some significant amount of giving to the conventional ones. This is a distinct point from the more general points in favor of worldview diversification.3

Some clarifications

I had some hesitation about discussing “conservatism” in this post. I think the Open Philanthropy Project’s ability to take big risks is one of its greatest assets. I don’t want to shy away from outstanding work just because it is unconventional, and I don’t want to give the impression that we would. So I think it’s worth reiterating here that:

  • We put significant resources into unconventional work, and see significant value in hits-based giving.
  • The “conservatism” discussed in this section is generally about not cutting certain kinds of giving too fast or too much. Given that we are currently under our target giving level, this kind of conservatism currently means little for our ability to support unconventional work.

Coordination

Last year, we wrote:

Trying to anticipate and adjust to other givers’ behavior can lead to thorny-seeming dilemmas. We do not want to be in the habit of – or gain a reputation for – recommending that Good Ventures fill the entire funding gap of every strong giving opportunity we see. In the long run, we feel this would create incentives for other donors to avoid the causes and grants we’re interested in; this, in turn, could lead to a much lower-than-optimal amount of total donor interest in the things we find most promising …

When trying to coordinate with another funder who can’t be directly negotiated with, one approach is to come up with what seems like a “fair share” of the funding gap each would provide, and simply recommend that Good Ventures commit to providing its “fair share” – no more and no less, regardless of the other funder’s behavior …

Even in theory, it’s hard to reconcile the basic goals of (a) closing important funding gaps and (b) creating good incentives for other donors … For this year, we have chosen the “split” approach [previous paragraph]. It is relatively simple to execute … and keeps incentives relatively simple for donors (unlike with “funging” approaches) … It avoids the worst problems with each of the other approaches, while not being perfect by any criterion.

We have done more investigation and reflection on these issues, which we will write about separately. But for the moment, we still favor “splitting” as the default approach to the most challenging coordination issues. Unlike last year, this is not a relevant consideration for determining how much to give to GiveWell’s top charities; more on that below.

Non-monetary costs of grants

As our grantmaking has risen this year, it’s become more salient to us that different grants have different non-monetary costs, in terms of the risk of creating distractions that can cost staff time. Distractions can include:

  • Time-sensitive grants with logistical challenges. One of the things we’ve discovered over the last year is that some grants present much greater logistical challenges than others. Grants that go to universities - or other large institutions housing the people we’re looking to support - often require significant additional attention compared to other grants, which can be due to discussion of our indirect costs policy, delays while various university staff provide required information or sign-off, or other miscellaneous factors. It also can be complex to make grants to support work not housed at a 501(c)(3) organization (such as a 501(c)(4) or an international organization). When these challenges are combined with time sensitivity on a grant, the result can be a drain on senior staff time as they rush to work out the details.
  • Grants that present communications challenges. Another lesson we’ve been learning is that often, “money speaks louder than words” - a grant is itself a form of communication, regardless of what (if anything) we write about it. Some grants risk causing controversy, in the media or simply among experts in the field, that can risk damaging our relationships and cause distractions. Some grants, while not controversial per se, pose a risk of sending misleading messages about our priorities and values, and this is important because - among other things - perceptions about our priorities and values can affect what giving opportunities we encounter. We feel that we’re often able to ameliorate these risks with careful communications, but putting significant effort into communications can itself be a distraction.
  • On the flip side, occasionally there are communications benefits to making a particular grant or working in a particular cause. The case for some grants is easier to understand than for others, so it’s sometimes the case that a relatively straightforward-to-understand grant helps clarify the values and goals we bring to a cause - and this in turn might affect how other, less straightforward grants in the same cause are interpreted.

To give one example of “grants as communications,” our support of the Fed Up campaign drew unfounded speculation about the motives behind the grant, despite the fact that our writeup laid out the rationale for the grant in detail. As reporters followed up on the story, we felt that we ultimately benefited from having a detailed public writeup - as well as from having a large number of other grants in less confusing areas than monetary policy. The latter helped to support the idea that our interest in this area is consistent with our general interest in neglected (and important) opportunities to improve lives.

The “non-monetary costs” discussed here are very rarely overwhelming. We wouldn’t make a grant just for the sorts of benefits described above, or pass on one just because of the sorts of costs described above. But these factors can be important for grants that are otherwise borderline.

This year’s recommendation for giving to GiveWell’s top charities

The framework laid out above has many different considerations for a given grant. As one example of how it’s applied, I’ll discuss how I thought through how much to recommend that Good Ventures give to GiveWell’s top charities this year.

I considered the following:

  • Giving is much higher for 2016 than for 2015, but remains under budget overall.
  • My very tentative, unstable guess is that the “last dollar” we will give has higher expected value than gifts to GiveWell’s top charities today. In isolation, this would point to minimizing giving to GiveWell’s top charities.
  • I think most of the arguments given for worldview diversification don’t apply here, or don’t apply beyond the first $10-20 million in giving.
  • However, I am hesitant to reduce the recommendation too much compared to last year, or to a non-significant level, for reasons mostly outlined in the section on conservatism - including the section on conventional vs. unconventional giving.

I chose to recommend $50 million in gifts to GiveWell’s top charities, which is similar to the level of giving from the end of last year (not counting an earlier major gift to GiveDirectly). Alexander Berger and Elie Hassenfeld recommended the same level for similar reasons.

Plans for the coming year

Our framework for “giving now vs. later” has become considerably more complex than last year’s, and it remains far from being as systematic as I would like. Currently, we apply the framework very loosely and informally.

Primarily, we operate by committing to focus areas based on our cause selection process; assigning specific staff to each focus area who lead strategy development and grant proposals; and approving grants that seem, broadly, to be useful and “reasonably cost-effective” by the standards of the cause. We err on the side of giving more (since we are under budget); we are fairly “conservative” in terms of cutting allocations to causes we’ve committed to, particularly those that are on the more “conventional” end of the spectrum; we also take non-monetary costs of grants into account.

I think we can do a much better job thinking through the issues presented in this post. For many issues, particularly those relating to modeling the “last dollar” of our giving, our thinking is very preliminary and might benefit a lot from further investigation, reflection and discussion.

For the most part, I think we’re at a fairly natural stage in the evolution of our framework. In 2015, we focused on capacity-building; 2016 has been the first year we focused on grantmaking, and the need for a thoughtful framework has become more salient as our giving has risen. In 2017, I hope to put significantly more time into the issues that were preliminarily addressed in this post, such as the value of the “last dollar” and what sort of conservatism we should practice.

I do think I made a noticeable mistake on this front in 2016. I allocated a modest amount of time to improving my thinking about how much we should recommend that Good Ventures give to GiveWell’s top charities; I spent nearly all of that time working through our framework for dealing with coordination issues, and I think some of that time would have been better spent on the “last dollar” question. For much of 2016, I worried that I was missing something basic about how one might approach coordination issues, and this - combined with the relatively large amount of critical attention our stance on coordination has received - made me focus on them. But I think the “last dollar” question is more important, both specifically for the question of how much to recommend for GiveWell’s top charities, and overall. I think it would have been better to spend more time on this question, partly at the cost of time spent on coordination issues, and partly at the cost of other activities.

  • 1.

    Explanation: the probability that at least one of these outcomes will happen is equal to 1 minus the probability that all of these outcomes fail to happen. Each of the four possibilities has a 90% chance of not occurring. Because we assumed these outcomes to be independent, the probability that they all fail to occur is the product of the separate probabilities that each one fails to occur: in other words, (0.9)^4. Therefore, the probability that at least one of these four outcomes does occur is equal to 1 - (0.9)^4, which is approximately 34%.

  • 2.

    To be clear, my concern is not with the conceptual idea of using probability to quantify uncertainty, but with the methodological path of estimating a probability through introspection (“What is the probability that I’m badly and fundamentally misguided?”). Just as one can believe in maximizing expected utility but still prefer heuristics to expected-value estimates in some scenarios, one can believe in probability as the best framework for quantifying uncertainty but still prefer heuristics to estimating probabilities via introspection in some scenarios.

  • 3.

    Practicing worldview diversification means putting significant resources into multiple causes; it doesn’t necessarily call for the specific attention to “conventional vs. unconventional” giving I’ve discussed in this section.

Comments

“How should I behave when I perceive a significant risk that I’m badly misguided in a fundamental way?”

If you think you’re far from hitting diminishing returns, and you’re a consequentialist, then on a narrow interpretation you have reason to explore more the more uncertain you are about your impact. [1] So there’s at least one consideration that pushes in favor of high-variance options beyond the naive EV estimates of each (in the opposite direction from a conservative prior).

1. http://www.jmlr.org/papers/volume3/auer02a/auer02a.pdf

If I were giving away billions of dollars during my lifetime (the hypothetical I generally use to generate recommendations), I would feel that this goal would call for some significant giving to things on the more conventional side of the spectrum….
I haven’t yet seen a formal approach I find satisfying and compelling for questions like “How should I behave when I perceive a significant risk that I’m badly misguided in a fundamental way?” Because of this, I haven’t felt compelled enough by arguments along the lines of “You should focus on unconventional work, since it appears to be higher expected value” to overcome the above points in favor of a compromise position. I plan further discussion and reflection on this point, in particular.

Maybe a possible sketch of a formal explication of your view, Holden, is this: instead of trying to maximize the expected value of good done, you’re trying to maximize some other functional on probability distributions of possible outcomes, one that controls the left tail. Or you could keep the functional as expected value, but seek to maximize it subject to a constraint that controls the left tail.

For example (using the second approach), you might want to be 90% confident that the good you do is at least 25% as much as if you had kept to conventional work, and subject to that constraint you want to maximize expected value. Let’s assume that unconventional approaches are unlikely to have material negative impact, and furthermore that you can’t realistically get to 90% confidence of doing good even with a combination of high-expected-value unconventional approaches. Then that might lead to an optimum that is a combination of roughly 25% conventional and 75% unconventional.
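This constrained-maximization idea can be sketched as a toy computation. All numbers below are hypothetical: conventional giving is assumed to yield 1 unit of good per dollar with certainty, while unconventional giving is all-or-nothing with a success probability far below 90%, so the confidence floor can only be met by the conventional share.

```python
# Toy model (all numbers hypothetical): pick the conventional share x of
# the budget that maximizes expected good, subject to being 90% confident
# of doing at least 25% as much good as an all-conventional allocation.
# Conventional giving: 1 unit of good per dollar, with certainty.
# Unconventional giving: all-or-nothing, with success probability well
# under 90%, so it cannot supply the confidence floor on its own.

def best_allocation(ev_unconv=2.0, floor=0.25):
    best = None
    for pct in range(101):
        x = pct / 100                       # conventional share of budget
        guaranteed_good = x * 1.0           # only conventional good is certain
        if guaranteed_good < floor:
            continue                        # fails the 90%-confidence floor
        ev = x * 1.0 + (1 - x) * ev_unconv  # expected good of the mix
        if best is None or ev > best[1]:
            best = (x, ev)
    return best

print(best_allocation())  # -> (0.25, 1.75): ~25% conventional, 75% unconventional
```

Under these assumptions the optimum puts exactly the floor (25%) into conventional work and the rest into the higher-EV unconventional work, matching the rough split described above.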

(I’m happy to write this up with formulas and in more detail if that might be helpful, but for now I’ll leave it at that. Also, apologies if this is considered obvious and/or standard stuff.)

Branching a bit off-topic:

I’ve been thinking about this - a lot of EAs seem to penalize low-probability outcomes beyond what an expected-value calculation over their declared utilities says.

I wonder if the “Mental Value” could be well-modeled as simply as MV = sum( (p_i)^k * u_i), where k >= 1 (so EV = MV iff k=1).

I also wonder whether Mental Value, regardless of whether the above model works, is still reducible to coherent utility functions over something other than the original declared utilities.

Zachary, I don’t think that specific form quite works for k>1.

E.g. for k = 2, consider two interventions: A, which has a 100% chance of doing 1 unit of good, and B, which has a 50% chance of doing 2 units and a 50% chance of doing 1.5 units. Anyone would agree B is better, but in fact A has higher “Mental Value” under your formula (MV(A) = 1 > MV(B) = 0.875 for k = 2).

(k=2 should get the point across, but we can generalize to arbitrary k>1, by taking A = certainty of 1 unit of good as before and B = 50% chance of 2^(k-1) and 50% chance of (2^(k-1)+1)/2. B again is better than A in all states of the world, but nonetheless MV(A)=1>MV(B) = 0.75+1/2^(k+1).)

A simple fix for your examples would be to divide by the sum of p_i^k afterwards.

But I don’t think you can really formally penalize low-probability outcomes like that. If you do, you get different scores for “do X” and “flip a coin, if heads, do X, if tails, do X”.
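The coin-flip point can be made concrete with the same hypothetical formula (again, just an illustration, not anyone’s actual decision procedure): splitting an identical outcome across two equally likely branches changes the score whenever k > 1, even though the real-world prospect is unchanged.

```python
def mv(lottery, k):
    # Proposed "Mental Value": sum of p_i^k * u_i over (probability, utility) pairs
    return sum(p ** k * u for p, u in lottery)

k = 2
do_x = [(1.0, 1.0)]                     # "do X": 1 unit of good for sure
coin_then_x = [(0.5, 1.0), (0.5, 1.0)]  # flip a coin; do X either way

print(mv(do_x, k))         # 1.0
print(mv(coin_then_x, k))  # 0.5 - lower score for the same real-world prospect
```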

It’s not really the fact that low-probability interventions are low-probability that makes them unreliable and makes you want to discount them. It’s more that calibrating yourself on probabilities that low is pretty hard, so it’s much harder to argue that the probabilities are actually grounded in a reasonable model.

(I guess most of that was a reply to Zachary rather than Colin)

A simple fix for your examples would be to divide by the sum of p_i^k afterwards.

Sure, though we can find examples with similar properties for your modification also.

It’s not really the fact that low-probability interventions are low-probability that makes them unreliable and makes you want to discount them. It’s more that calibrating yourself on probabilities that low is pretty hard, so it’s much harder to argue that the probabilities are actually grounded in a reasonable model.

I think pretty much everyone would agree that the difficulty of determining numerical probabilities for unlikely events (“calibrati[on]”) is at least a big part of the issue. But it’s not clear to me that it’s the whole issue. Even if hypothetically the probabilities were known with certainty, I do think some people would sometimes choose a lower EV of good in return for a greater confidence that they are doing good. (See my first comment on this post for a bit more discussion of this.)

I’m not sure of this and plan more reflection, but I think a lot of my reasoning has more to do with coordination/deontology (what kinds of general guides to behavior would produce a good world if people followed them?) than with trying to get a specific probability distribution over impact.

I’ll be interested to read more, if you do flesh it out.

A couple of tentative thoughts in this general ballpark on the difficulty of reasoning with low/uncertain probabilities:

1. Maybe our reasoning about low probabilities is so poor that they aren’t that useful as steps towards selecting / allocating to charities/interventions. In other words, maybe in extreme cases we can reason better about allocations directly (and to the extent one’s interested in subjective probabilities, it’s perhaps better to infer them from the allocation rather than estimate them directly).

2. On coordination: One possible issue is that people may look to others – either consciously or subconsciously – to help decide probabilities and the like, leading to herding. (Say I see people funding mitigation against some at first blush implausible risk, so maybe I reconsider my prior and conclude it’s not so unlikely.)

Hi Colin, I agree that both of those seem like relevant considerations. I do expect that we’ll be writing more on this topic.

Leave a comment