In principle, we try to find the best giving opportunities by comparing many possibilities. However, many of the comparisons we’d like to make hinge on very debatable, uncertain questions.
- Some people think that animals such as chickens have essentially no moral significance compared to that of humans; others think that they should be considered comparably important, or at least 1-10% as important. If you accept the latter view, farm animal welfare looks like an extraordinarily outstanding cause, potentially to the point of dominating other options: billions of chickens are treated incredibly cruelly each year on factory farms, and we estimate that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. But if you accept the former view, this work is arguably a poor use of money.
- Some have argued that the majority of our impact will come via effects on the long-term future. If true, this could be an argument that reducing global catastrophic risks has overwhelming importance, or that accelerating scientific research does, or that improving the overall functioning of society via policy does. Given how difficult it is to make predictions about the long-term future, it’s very hard to compare work in any of these categories to evidence-backed interventions serving the global poor.
- We have additional uncertainty over how to resolve these sorts of uncertainty. We could try to quantify them using probabilities (e.g. “There’s a 10% chance that I should value chickens 10% as much as humans”), and arrive at a kind of expected value calculation for each of many broad approaches to giving; a toy version of such a calculation is sketched below. But most of the parameters in such a calculation would be very poorly grounded and non-robust, and it’s unclear how much weight to put on calculations with that property. In addition, such a calculation would run into challenges around normative uncertainty (uncertainty about morality), and it’s quite unclear how to handle those challenges.
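To make the fragility of this kind of calculation concrete, here is a minimal sketch in Python. Every number in it is a hypothetical placeholder chosen for illustration (the probability distribution, the global health benchmark, and the simplifying assumption that a hen spared converts directly into “human-equivalent units” via a moral weight); only the rough 200-hens-per-dollar figure comes from the discussion above. None of these are our actual estimates.

```python
# A toy expected value calculation over "worldviews," in the spirit of the
# bullet above. All inputs are hypothetical placeholders for illustration.

# Hypothetical probability distribution over the moral weight of a chicken
# relative to a human (the debatable parameter discussed above).
chicken_weight_probs = {
    0.0: 0.60,    # 60% chance chickens have essentially no moral weight
    0.001: 0.30,  # 30% chance they matter ~0.1% as much as humans
    0.10: 0.10,   # 10% chance they matter ~10% as much as humans
}

# Illustrative cost-effectiveness inputs, in made-up "human-equivalent
# units of benefit per dollar."
hens_spared_per_dollar = 200   # the corporate-campaign figure quoted above
human_units_per_dollar = 0.01  # placeholder for an evidence-backed
                               # global health intervention

# Expected moral weight of a chicken under this distribution.
expected_weight = sum(w * p for w, p in chicken_weight_probs.items())

# Simplifying assumption: value per dollar = hens spared x moral weight.
animal_units_per_dollar = hens_spared_per_dollar * expected_weight

print(f"expected chicken weight:      {expected_weight:.4f}")
print(f"animal welfare (units per $): {animal_units_per_dollar:.2f}")
print(f"global health (units per $):  {human_units_per_dollar:.2f}")

# With these placeholders, the 10%-chance branch contributes ~97% of the
# expected weight, so this one poorly grounded probability assignment
# drives the entire comparison.
```

In this toy version, the low-probability branch in which chickens matter 10% as much as humans contributes nearly all of the expected value, so the comparison swings by orders of magnitude depending on a single hard-to-defend input. That is the sense in which such calculations are poorly grounded and non-robust.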
In this post, I’ll use “worldview” to refer to a set of highly debatable (and perhaps impossible to evaluate) beliefs that favor a certain kind of giving. One worldview might imply that evidence-backed charities serving the global poor are far more worthwhile than either of the types of giving discussed above; another might imply that farm animal welfare is; another might imply that global catastrophic risk reduction is. A given worldview represents a combination of views, sometimes very difficult to disentangle, such that uncertainty between worldviews is constituted by a mix of empirical uncertainty (uncertainty about facts), normative uncertainty (uncertainty about morality), and methodological uncertainty (e.g. uncertainty about how to handle uncertainty, as laid out in the third bullet point above). Some slightly more detailed descriptions of example worldviews are in a footnote.
The challenge we face is that we consider multiple worldviews plausible. We’re drawn to giving opportunities that some would consider outstanding and others would consider relatively low-value, and we have to decide how to weigh the competing worldviews as we try to do as much good as possible with limited resources.
When deciding between worldviews, there is a case for simply taking our best guess and sticking with it. If we did, we would focus exclusively on animal welfare, or on global catastrophic risks, or on global health and development, or on another category of giving, with no attention to the others. However, that’s not the approach we’re currently taking.
Instead, we’re practicing worldview diversification: putting significant resources behind each worldview we find highly plausible. We think we can be a transformative funder in each of a number of different causes, and we don’t, as of today, want to pass up that opportunity by focusing exclusively on one cause and running into rapidly diminishing returns.