Radical Empathy

One theme of our work is trying to help populations that many people don’t feel are worth helping at all. We’ve seen major opportunities to improve the welfare of factory-farmed animals, because so few others are trying to do it. When working on immigration reform, we’ve seen big debates about how immigration affects wages for people already in the U.S., and much less discussion of how it affects immigrants. Even our interest in global health and development is fairly unusual: many Americans may agree that charitable dollars go further overseas, but prefer to give domestically because they so strongly prioritize people in their own country compared to people in the rest of the world.1

The question, “Who deserves empathy and moral concern?” is central for us. We think it’s one of the most important questions for effective giving, and generally. Unfortunately, we don’t think we can trust conventional wisdom and intuition on the matter: history has too many cases where entire populations were dismissed, mistreated and deprived of basic rights for reasons that fit the conventional wisdom of the time but today look indefensible. Instead, we aspire to radical empathy: working hard to extend empathy to everyone it should be extended to, even when it is unusual or seems strange to do so.

To clarify the choice of terminology:

  • “Radical” is intended as the opposite of “traditional” or “conventional.” It doesn’t necessarily mean “extreme” or “all-inclusive”: we don’t extend empathy to everyone and everything (this would leave us essentially no basis for making decisions about morality). It refers to working hard to make the best choices we can, without anchoring to convention.
  • “Empathy” is intended to capture the idea that one could imagine oneself in another’s position, and recognizes the other as having experiences that are worthy of consideration. It is not intended to refer to literally feeling what another feels, and is therefore distinct from the “empathy” critiqued in Against Empathy (a book that acknowledges the multiple meanings of the term and explicitly focuses on one).

Conventional wisdom and intuition aren’t good enough

In The Expanding Circle, Peter Singer discusses how, over the course of history, “The circle of altruism has broadened from the family and tribe to the nation and race … to all human beings” (and adds that “The process should not stop there”).2 By today’s standards, the earliest cases he describes are striking:

At first [the] insider/outsider distinction applied even between the citizens of neighboring Greek city-states; thus there is a tombstone of the mid-fifth century B.C. which reads:

This memorial is set over the body of a very good man. Pythion, from Megara, slew seven men and broke off seven spear points in their bodies … This man, who saved three Athenian regiments … having brought sorrow to no one among all men who dwell on earth, went down to the underworld felicitated in the eyes of all.

This is quite consistent with the comic way in which Aristophanes treats the starvation of the Greek enemies of the Athenians, starvation which resulted from the devastation the Athenians had themselves inflicted. Plato, however, suggested an advance on this morality: he argued that Greeks should not, in war, enslave other Greeks, lay waste their lands or raze their houses; they should do these things only to non-Greeks. These examples could be multiplied almost indefinitely. The ancient Assyrian kings boastfully recorded in stone how they had tortured their non-Assyrian enemies and covered the valleys and mountains with their corpses. Romans looked on barbarians as beings who could be captured like animals for use as slaves or made to entertain the crowds by killing each other in the Colosseum. In modern times Europeans have stopped treating each other in this way, but less than two hundred years ago some still regarded Africans as outside the bounds of ethics, and therefore a resource which should be harvested and put to useful work. Similarly Australian aborigines were, to many early settlers from England, a kind of pest, to be hunted and killed whenever they proved troublesome.3

The end of the quote transitions to more recent, familiar failures of morality. In recent centuries, extreme racism, sexism and other forms of bigotry - including slavery - have been practiced explicitly and without apology, and often widely accepted by the most respected people in society.

From today’s vantage point, these seem like extraordinarily shameful behaviors, and people who were early to reject them - such as early abolitionists and early feminists - look to have done extraordinary amounts of good. But at the time, looking to conventional wisdom and intuition wouldn’t necessarily have helped avoid the shameful behaviors or seek out the helpful ones.

Today’s norms seem superior in some respects. For example, racism is much more rarely explicitly advocated (which is not to say that it is rarely practiced). However, we think today’s norms are still fundamentally inadequate for the question of who deserves empathy and moral concern. One sign of this is the discourse in the U.S. around immigrants, which tends to avoid explicit racism but often to embrace nationalism - to exclude or downplay the rights and concerns of people who aren’t American citizens (and even more so, people who aren’t in the U.S. but would like to be).

Intellect vs. emotion

I sometimes hear the sentiment that moral atrocities tend to come from thinking of morality abstractly, losing sight of the basic emotional basis for empathy, and distancing oneself from the people one’s actions affect.

I think this is true in some cases, but importantly false in others. People living peaceful lives are often squeamish about violence, but it seems that this squeamishness can be overcome disturbingly quickly with experience. There are ample examples throughout history where large numbers of “conventional” people casually and even happily practiced direct cruelty and violence to those whose rights they didn’t recognize.4 Today, watching the casualness with which factory farm workers handle animals (as shown in this gruesome video), I doubt that people would eat much less meat if they had to kill animals themselves. I don’t think the key is whether people see and feel the consequences of their actions. More important is whether they recognize those their actions affect as fellow persons, meriting moral consideration.

On the flip side, there seems to be at least some precedent for using logical reasoning to reach moral conclusions that look strikingly prescient in retrospect. For example, see Wikipedia on Jeremy Bentham, who is known for basing his morality on the straightforward, quantitative logic of utilitarianism:

He advocated individual and economic freedom, the separation of church and state, freedom of expression, equal rights for women, the right to divorce, and the decriminalising of homosexual acts. [My note: he lived from 1748-1832, well before most of these views were common.] He called for the abolition of slavery, the abolition of the death penalty, and the abolition of physical punishment, including that of children. He has also become known in recent years as an early advocate of animal rights.

Aspiring to radical empathy

Who deserves empathy and moral concern? To the extent that we get this question wrong, we risk making atrocious choices. If we can get it right to an unusual degree, we might be able to do outsized amounts of good.

Unfortunately, we don’t think it is necessarily easy to get it right, and we’re far from confident that we are doing so. But here are a few principles we try to follow, in making our best attempt:

Acknowledge our uncertainty. For example, we’re quite unsure of where animals should fit into our moral framework. My own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern. And my intuitions value humans astronomically more. However, I don’t think either my reflections or my intuitions are highly reliable, especially given that many thoughtful people disagree. And if chickens do indeed merit moral concern, the amount and extent of their mistreatment is staggering. With worldview diversification in mind, I don’t want us to pass up the potentially considerable opportunities to improve their welfare.

I think the uncertainty we have on this point warrants putting significant resources into farm animal welfare, as well as working to generally avoid language that implies that only humans are morally relevant.5

That said, I don’t feel uncertain about all of our unusual choices. I’m confident that differences in geography, nationality, and race ought not affect moral concern, and our giving should reflect this.

Be extremely careful about too quickly dismissing “strange” arguments on this topic. Relatively small numbers of people argue that insects, and even some algorithms run on today’s computers, merit moral concern. It’s easy and intuitive to laugh these viewpoints off, since they seem so strange on their face and have such radical implications. But as argued above, I think we should be highly suspicious of our instincts to dismiss unusual viewpoints on who merits moral concern. And the stakes could certainly be high if these viewpoints turn out to be more reasonable than they appear at first.

So far I remain unconvinced that insects, or any algorithms run on today’s computers, are strong candidates for meriting moral concern. But I think it’s important to keep an open mind.

Explore the idea of supporting deeper analysis. Luke Muehlhauser is currently exploring the current state of research and argumentation on the question of who merits moral concern (which he calls the question of moral patienthood). It’s possible that if we identify gaps in the literature, and opportunities to become better informed, we’ll recommend funding further work. In the near future, work along these lines could affect our priorities within farm animal welfare - for example, it could affect how we prioritize work focused on improving treatment of fish. Ideally, our views on moral patienthood would be informed by an extensive literature drawing on as much deep reflection, empirical investigation and principled argumentation as possible.

Don’t limit ourselves to the “frontier.” Widely recognized problems still do a great deal of damage. In our work we often find ourselves focusing on unconventional targets for charitable giving, such as farm animal welfare and potential risks from advanced artificial intelligence. This is because we often find that opportunities to do disproportionate amounts of good are in areas that have been, in our view, relatively neglected by others. However, our goal is to do the most good we can, not to seek out and support those causes which are most “radical” in our present society. When we see great opportunities to play a role in addressing harms in more widely-acknowledged areas – for example, in the U.S. criminal justice system – we take them.

  1. For example, according to data from Giving USA, only approximately 4% of US giving in 2015 was focused on international aid. (Reported by Charity Navigator here.)
  2. Page 120.
  3. Pages 112-113.
  4. Many examples are available in the first chapter of The Better Angels of Our Nature.
  5. As a side note, it is often tricky to avoid such language. We generally use the term “persons” when we want to refer to beings that merit moral concern, without pre-judging whether such beings are human and also without causing too much distraction for casual readers. A more precise term is “moral patients.”


Have you considered looking into interventions which could help unborn children? If you’re open to maternal health interventions and far-future Xrisk interventions, currently-existing-but-not-yet-born children seem like a natural group to include in your empathy circle, and pretty neglected from a cause analysis point of view.

(Note: I apologize for the delay in responding. I wasn’t getting alerts for new comments, and believe this problem is now fixed.)

Sarah, we haven’t looked into this. I don’t think I would agree with the characterization of this area as “neglected.”

My impression of OpenPhil’s approach to radical empathy is that (1) it’s insufficiently weird, by a huge margin, and (2) possibly hamstrung by founder effects. I’ll focus on ‘insufficiently weird’ in this comment. What does risk look like? Is OpenPhil willing and able to engage in, and with, ‘risky’ research? Is OpenPhil able to determine which ‘risky’ research is worth pursuing, and capable of planning out good ways of pursuing it? At best, I think OpenPhil has a mixed track record here. I like that you feel comfortable mentioning PETRL’s arguments, and I like that you’re supporting Luke’s interview series. But that seems to be the extent of the ‘weirdness’ you’re comfortable with. Notably, you don’t seem comfortable directly engaging with PETRL’s claims and trying to determine *why* they could be true or false, nor do you seem involved with any attempt to systematize and evaluate some of the claims made by Luke’s interviewees. I believe I can give this criticism because I’ve been doing exactly this sort of research, and OpenPhil hasn’t seemed very interested in learning from what I’ve done, or evaluating its truth content. (Granted, “OpenPhil” isn’t a monolithic entity - I should say, during the course of my research I’ve run across multiple people at OpenPhil.) If my experience is representative, I think people should have a fairly low estimation of OpenPhil’s capacity for weirdness, which translates into a relatively low capacity for seeking out & evaluating fundamental philosophical research, which translates into low potential for doing a good job at ‘radical empathy’. (Subtext: I’d love to be proven wrong.)

I should clarify that I have enjoyed all personal interactions with people at OpenPhil, and I have a high estimation of their intelligence and sincerity. My critique is aimed at the institution of OpenPhil, not its people.

(Note: I apologize for the delay in responding. I wasn’t getting alerts for new comments, and believe this problem is now fixed.)

Mike, thanks for the thoughts. Luke is the point person for this investigation, and his report will hopefully be out within the next few months (note that he is working on a comprehensive report, not just an interview series). I think the report will provide a better basis for evaluating how we’re approaching these questions, and I would encourage you to revisit this topic at that point.

Holden, thanks for the reply. I will do that and look forward to it.

I have a couple of criticisms regarding this piece. Firstly, I don’t understand the author’s argument against universal empathy. Using the author’s definition of empathy, I don’t see why it would ever be a bad thing to be able to see oneself in another’s shoes and recognize that their experiences are worthy of consideration. This seems to me to be equivalent to being able to understand someone, which in my eyes never seems like a bad thing (regardless of how bad they might be). Secondly, the meaning intended behind the word ‘deserve’ is never given. IMO ‘deserve’ is a very vague and loose idea: I have very different ideas of what I deserve than others have, and furthermore ‘deserving’ often doesn’t intuitively align with what ‘should’ happen (e.g. charity A has done marvellous good but has little use for future donations, while charity B has the opportunity to put the funds to a much better use despite not being able to implement much change in the past - ‘deserve’ intuitively matches cumulative value, while ‘should’ matches marginal value).

(Note: I apologize for the delay in responding. I wasn’t getting alerts for new comments, and believe this problem is now fixed.)

Alistair, thanks for the thoughts. A couple of responses: (a) I’m not sure exactly what you are referring to when you mention “the author’s argument against universal empathy.” My best guess is that you’re referring to my comment that “we don’t extend empathy to everyone and everything (this would leave us essentially no basis for making decisions about morality).” Here I mean that we don’t extend empathy to, e.g., chairs and tables; if we extended empathy and moral concern to every definable object or process, then it would seem that every way in which an action benefits “someone” would be offset by a way in which it injures “someone,” which would seem to leave no basis for making decisions about morality. (b) I agree that the term “deserves” can be problematic; here I only intended it as a compact way of asking whom we should extend empathy and moral concern to.

How is OpenPhil weighing the moral concerns of things that don’t exist yet (and might never exist)? It’s possible that an extinction event could prevent hundreds of billions of humans (and other animals, and AI) from existing. That could even be a laughable understatement if nuclear war is the difference between humanity lasting thousands of years versus millions. It might be shortsighted to worry about the real suffering of billions now instead of the potential existence of trillions in the future. Does OpenPhil have a preferred methodology for dealing with this? -Christian

I’ve just now found your posts on worldview diversification and cause prioritization that cover what I was thinking about. Thanks! -Christian

Hello there, I understand this article is almost 5 years old and am not sure if you’re still responding to questions/comments. Nonetheless, firstly, I wanted to give my appreciation for the perspective of this article - the abnormality of the perspective is intriguing. Secondly, I have become exceedingly aware of the global warming issue and of the startling projections, which leads me to some questions - though they are slightly vague, hopefully they’re answerable! - Have you/OpenPhil investigated if there are any correlations between increasing animal welfare through radical empathy - like you suggest - (to particularly industrial farm animals) and how that might help with the reduction of CO2/climate change? There are obviously many articles discussing the science of this, but considering this standpoint, I was especially curious about your thoughts, considering it’s been almost 5 years since this was published. If this just goes into the abyss of the web I’m also cool with that, but would be utterly pleased to hear back! - Gal from Tulsa, OK
