
2017 Report on Consciousness and Moral Patienthood


Table of contents

1 How to read this report

2 Explaining my approach to the question

2.1 Why we care about the question of moral patienthood
2.2 Moral patienthood and consciousness
2.2.1 My metaethical approach
2.2.2 Proposed criteria for moral patienthood
2.2.3 Why I investigated phenomenal consciousness first
2.3 My approach to thinking about consciousness
2.3.1 Consciousness, innocently defined
2.3.2 My assumptions about the nature of consciousness

3 Specific efforts to sharpen my views about the distribution question

3.1 Theories of consciousness
3.1.1 PANIC as an example theory of consciousness
3.2 Potentially consciousness-indicating features (PCIFs)
3.2.1 How PCIF arguments work
3.2.2 A large (and incomplete) table of PCIFs and taxa
3.2.3 My overall thoughts on PCIF arguments
3.3 Searching for necessary or sufficient conditions
3.3.1 Is a cortex required for consciousness?
3.3.1.1 Arguments for cortex-required views
3.3.1.2 Unconscious vision
3.3.1.3 Suggested other lines of evidence for CRVs
3.3.1.4 Overall thoughts on arguments for CRVs
3.3.1.5 Arguments against cortex-required views
3.4 Big-picture considerations that pull toward or away from “consciousness is rare”
3.4.1 Consciousness inessentialism
3.4.2 The complexity of consciousness
3.4.3 We continue to find that many sophisticated behaviors are more extensive than we once thought
3.4.4 Rampant anthropomorphism

4 Summary of my current thinking about the distribution question

4.1 High-level summary
4.2 My current probabilities
4.3 Why these probabilities?
4.4 Acting on my probabilities
4.5 How my mind changed during this investigation
4.6 Some outputs from this investigation

5 Potential future investigations

5.1 Things I considered doing, but didn’t, due to time constraints
5.2 Projects that others could conduct
5.2.1 Projects related to theories of consciousness
5.2.2 Projects related to theory-agnostic guesses about the distribution of consciousness
5.2.3 Projects related to moral judgments about moral patienthood
5.2.4 Additional thoughts on useful projects

6 Appendices

6.1 Appendix A. Elaborating my moral intuitions
6.1.1 Which kinds of consciousness-related processes do I morally care about?
6.1.2 The “extreme effort” version of my process for making moral judgments
6.1.3 My moral judgments about some particular cases
6.1.3.1 My moral judgments about some first-person cases
6.1.3.2 The Phenumb thought experiment
6.1.3.3 My moral judgments about some third-person cases
6.1.3.4 My moral judgments, illustrated with the help of a simple game
6.2 Appendix B. Toward a more satisfying theory of consciousness
6.2.1 Temporal binding theory
6.2.2 Integrated information theory
6.2.3 Global workspace theory
6.2.4 What a more satisfying theory of consciousness could look like
6.3 Appendix C. Evidence concerning unconscious vision
6.3.1 Multiple vision systems in simpler animals
6.3.2 Two vision systems in primates
6.3.3 Visual form agnosia in Dee Fletcher
6.3.4 Optic ataxia
6.3.5 Lesions in monkeys
6.3.6 Dissociation studies in healthy subjects
6.3.7 Single-neuron recordings
6.3.8 Challenges
6.4 Appendix D. Some clarifications on nociception and pain
6.5 Appendix E. Some clarifications on “neuroanatomical similarity”
6.6 Appendix F. Illusionism and its implications
6.6.1 What I mean by “illusionism”
6.6.2 Other cognitive illusions
6.6.3 Where do the illusionist and the realist disagree?
6.6.4 Illusionism and moral patienthood
6.7 Appendix G. Consciousness and fuzziness
6.7.1 Fuzziness and moral patienthood
6.7.2 Fuzziness and Darwin
6.7.3 Fuzziness and auto-activation deficit
6.8 Appendix H. First-order views, higher-order views, and hidden qualia
6.8.1 Block’s overflow argument
6.8.2 Split-brain patients
6.8.3 Other cases of hemisphere disconnection
6.8.4 Shiller’s arguments
6.9 Appendix Z. Miscellaneous elaborations and clarifications
6.9.1 Appendix Z.1. Some theories of consciousness
6.9.2 Appendix Z.2. Some varieties of conscious experience
6.9.3 Appendix Z.3. Challenging dualist intuitions
6.9.4 Appendix Z.4. Brief comments on unconscious emotions
6.9.5 Appendix Z.5. The lack of consensus in consciousness studies
6.9.6 Appendix Z.6. Against hasty eliminativism
6.9.7 Appendix Z.7. Some candidate dimensions of moral concern
6.9.8 Appendix Z.8. Some reasons for my default skepticism of published studies
6.9.9 Appendix Z.9. Early scientific progress tends to lead to more complicated models of phenomena
6.9.10 Appendix Z.10. Recommended readings

7 Sources

Published: June 08, 2017 | by Luke Muehlhauser

Updated January 2018.

We aspire to extend empathy to every being that warrants moral concern, including animals. And while many experts, government agencies, and advocacy groups agree that some animals live lives worthy of moral concern,1 there seems to be little agreement on which animals warrant moral concern.2 Hence, to inform our long-term giving strategy, I (Luke Muehlhauser) investigated the following question: “In general, which types of beings merit moral concern?” Or, to phrase the question as some philosophers do, “Which beings are moral patients?”3

For this preliminary investigation, I focused on just one commonly endorsed criterion for moral patienthood: phenomenal consciousness, a.k.a. “subjective experience.” I have not come to any strong conclusions about which (non-human) beings are conscious, but I think some beings are more likely to be conscious than others, and I make several suggestions for how we might make progress on the question.

In the long run, to form well-grounded impressions about how much we should value grants aimed at (e.g.) chicken or fish welfare, we need to form initial impressions not just about which creatures are more and less likely to be conscious, but also about (a) other plausible criteria for moral patienthood besides consciousness, and (b) the question of “moral weight” (see below). However, those two questions are beyond the scope of this initial report on consciousness. In the future I hope to build on the initial framework and findings of this report, and come to some initial impressions about other criteria for moral patienthood and about moral weight.

This report is unusually personal in nature, as it necessarily draws heavily from the empirical and moral intuitions of the investigator. Thus, the rest of this report does not necessarily reflect the intuitions and judgments of the Open Philanthropy Project in general. I explain my views in this report merely so they can serve as one input among many as the Open Philanthropy Project considers how to clarify its values and make its grantmaking choices.

1 How to read this report

The length of this report, compared to the length of my other reports for the Open Philanthropy Project, might suggest to the reader that I am a specialist on consciousness and moral patienthood. Let me be clear, then, that I am not a specialist on these topics. This report is long not because it engages its subject with the depth of an expert, but because it engages an unusual breadth of material — with the shallowness of a non-expert.

The report’s unusual breadth is a consequence of the fact that, when it comes to examining the likely distribution of consciousness (what I call “the distribution question”), we barely even know which kinds of evidence are relevant (besides human self-report), and thus I must survey an unusually broad variety of types of evidence that might be relevant. Compare to my report on behavioral treatments for insomnia: in that case, it was quite clear which studies would be most informative, so I summarized only a tiny portion of the available literature.4 But when it comes to the distribution-of-consciousness question, there is extreme expert disagreement about which types of evidence are most informative. Hence, this report draws from a very large set of studies across a wide variety of domains — comparative ethology, comparative neuroanatomy, cognitive neuroscience, neurology, moral philosophy, philosophy of mind, etc. — and I am not an expert in any of those fields.5

Given all this, my goal for this report cannot be to argue for my conclusions, in the style of a scholarly monograph on consciousness, written by a domain expert.6 Nor is it my goal to survey the evidence which plausibly bears on the distribution question, as such a survey would likely run thousands of pages, and require the input of dozens of domain experts. Instead, my more modest goals for this report are to:

  1. survey the types of evidence and argument that have been brought to bear on the distribution question,
  2. briefly describe example pieces of evidence of each type,7 without attempting to summarize the vast majority of the evidence (of each type) that is currently available,
  3. report what my own intuitions and conclusions are as a result of my shallow survey of those data and arguments,
  4. try to give some indication of why I have those intuitions, without investing the months of research that would be required to rigorously argue for each of my many reported intuitions, and
  5. list some research projects that seem (to me) like they could make progress on the key questions of this report, given the current state of evidence and argument.

Given these limited goals, I don’t expect to convince career consciousness researchers of any non-obvious substantive claims about the distribution of consciousness. Instead, I focused on finding out whether I could convince myself of any non-obvious substantive claims about the distribution of consciousness. As you’ll see, even this goal proved challenging enough.

Despite this report’s length, I have attempted to keep the “main text” (sections 2-4) modular and short (roughly 20,000 words).8 I provide many clarifications, elaborations, and links to related readings (that I don’t necessarily endorse) in the appendices and footnotes.

In my review of the relevant literature, I noticed that it’s often hard to interpret claims about consciousness because they are often grounded in unarticulated assumptions, and (perhaps unavoidably) stated vaguely. To mitigate this problem somewhat for this report, section 2 provides some background on “where I’m coming from,” and can be summarized in a single jargon-filled paragraph:

This report examines which beings and processes might be moral patients given their phenomenal consciousness, but does not examine other possible criteria for moral patienthood, and does not examine the question of moral weight. I define phenomenal consciousness extensionally, with as much metaphysical innocence and theoretical neutrality as I can. My broad philosophical approach is naturalistic (a la Dennett or Wimsatt) rather than rationalistic (a la Chalmers or Chisholm),9 and I assume physicalism, functionalism, and illusionism about consciousness. I also assume the boundary around “consciousness” is fuzzy (a la “life”) rather than sharp (a la “water” = H2O), both between and within individuals. My meta-ethical approach employs an anti-realist kind of ideal advisor theory.

If that paragraph made sense to you, then you might want to jump ahead to section 3, where I survey the types of evidence and arguments that have been brought to bear on the distribution question. Otherwise, you might want to read the full report.

In section 3, I conclude that no existing theory of consciousness (that I’ve seen) is satisfying, and thus I investigate the distribution question via relatively theory-agnostic means, examining analogy-driven arguments, potential necessary or sufficient conditions for consciousness, and some big-picture considerations that pull toward or away from a “consciousness is rare” conclusion.

To read only my overall tentative conclusions, see section 4. In short, I think mammals, birds, and fishes10 are more likely than not to be conscious, while (e.g.) insects are unlikely to be conscious. However, my probabilities are very “made-up” and difficult to justify, and it’s not clear to us what actions should be taken on the basis of such made-up probabilities.

I also prepared a list of potential future investigations that I think could further clarify some of these issues for us — at least, given my approach to the problem.

This report includes several appendices:

  • Appendix A explains how I use my moral intuitions, reports some of my moral intuitions about particular cases, and illustrates how existing theories of consciousness and moral patienthood could be clarified by frequent reference to code snippets or existing computer programs.
  • Appendix B explains what I find unsatisfying about current theories of consciousness, and says a bit about what a more satisfying theory of consciousness could look like.
  • Appendix C summarizes the evidence concerning unconscious vision and the “two streams of visual processing” theory that is discussed briefly in my section on whether a cortex is required for consciousness.
  • Appendix D makes several clarifications concerning the distinction between nociception (which can be unconscious) and pain (which cannot).
  • Appendix E makes some clarifications about how I’m currently estimating “neuroanatomical similarity,” which plays a role in my “theory-agnostic estimation process” for guessing whether a being is conscious (described here).
  • Appendix F explains illusionism in a bit more detail, and makes some brief comments about how illusionism interacts with my intuitions about moral patienthood.
  • Appendix G elaborates my views on the “fuzziness” of consciousness.
  • Appendix H examines how the possibility of hidden qualia might undermine the central argument for higher-order theories.
  • Appendix Z collects a variety of less-important sub-appendices, for example a list of theories of consciousness, a list of varieties of conscious experience, a list of questions about which consciousness scholars exhibit extreme disagreement, a list of candidate dimensions of moral concern (for estimating moral weight), some brief comments on unconscious emotions, some reasons for my default skepticism about published studies, and some recommended readings.

Acknowledgements: Many thanks to those who gave me substantial feedback on earlier drafts of this report: Scott Aaronson, David Chalmers, Daniel Dewey, Julia Galef, Jared Kaplan, Holden Karnofsky, Michael Levine, Buck Shlegeris, Carl Shulman, Taylor Smith, Brian Tomasik, and those who participated in a series of GiveWell discussions on this topic. I am also grateful to several people for helping me find some of the data related to potentially consciousness-indicating features presented below: Julie Chen, Robin Dey, Laura Ong, and Laura Muñoz. My thanks also to Oxford University Press and MIT Press for granting permission to reproduce some images to which they own the copyright.

2 Explaining my approach to the question

2.1 Why we care about the question of moral patienthood

How does the question of moral patienthood fit into our framework for thinking about effective giving?

The Open Philanthropy Project focuses on causes that score well on our three criteria — importance, neglectedness, and tractability. Our “importance” criterion is: “How many individuals does this issue affect, and how deeply?” Elaborating on this, we might say our importance criterion is: “How many moral patients does this issue affect, and how much could we benefit them, with respect to appropriate dimensions of moral concern (e.g. pain, pleasure, desire fulfillment, self-actualization)?”

As with many framing choices in this report, this is far from the only way to approach the question,11 but we find it to be a framing that is pragmatically useful to us as we try to execute our mission to “accomplish as much good as possible with our giving” without waiting to first resolve all major debates in moral philosophy.12 (See also our blog post on radical empathy.)

In the long run, we’d like to have better-developed views not just about which beings are moral patients, but also about how to weigh the interests of different kinds of moral patients against each other. For example: suppose we conclude that fishes, pigs, and humans are all moral patients, and we estimate that, for a fixed amount of money, we can (in expectation) dramatically improve the welfare of (a) 10,000 rainbow trout, (b) 1,000 pigs, or (c) 100 adult humans. In that situation, how should we compare the different options? This depends (among other things) on how much “moral weight” we give to the well-being of different kinds of moral patients. Or, more granularly, it depends on how much moral weight we give to various “appropriate dimensions of moral concern,” which then collectively determine the moral weight of each particular moral patient.13
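
To make the arithmetic behind such comparisons concrete, here is a minimal sketch (in Python) of how population counts, per-individual welfare gains, and moral weights would combine into a single comparable number. The population counts come from the example above; the welfare gains and moral weights are purely hypothetical placeholders chosen for illustration, and do not reflect any estimate made in this report.

    # A toy illustration of how "moral weight" enters the comparison above.
    # The population counts come from the example in the text; the welfare
    # gains and moral weights are hypothetical placeholders, not estimates.

    options = {
        "rainbow trout": {"count": 10_000, "welfare_gain": 1.0, "moral_weight": 0.02},
        "pigs":          {"count": 1_000,  "welfare_gain": 1.0, "moral_weight": 0.30},
        "adult humans":  {"count": 100,    "welfare_gain": 1.0, "moral_weight": 1.00},
    }

    for name, o in options.items():
        # Weighted value = individuals helped x per-individual welfare gain
        #                  x moral weight per individual.
        value = o["count"] * o["welfare_gain"] * o["moral_weight"]
        print(f"{name}: weighted value = {value:,.1f}")

Under these made-up numbers the trout option scores 200, the pig option 300, and the human option 100; the point is only that the ranking of options can hinge entirely on which moral weights one chooses.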

This report, however, focuses on articulating my early thinking about which beings are moral patients at all.14 We hope to investigate plausibly appropriate dimensions of moral concern — i.e., the question of “moral weight” — in the future. For now, I merely list some candidate dimensions in Appendix Z.7.

2.2 Moral patienthood and consciousness

2.2.1 My metaethical approach

Philosophers, along with everyone else, have very different views about “metaethics,” i.e. about the foundations of morality and the meaning of moral terms.15 In this section, I explain my own metaethical approach — not because my moral judgments depend on this metaethical approach (they don’t), but merely to give my readers a sense of “where I’m coming from.”

Terms like “moral patient” and “moral judgment” can mean different things depending on one’s metaethical views, for example whether one is a “moral realist.” I have tried to phrase this report in a relatively “metaethically neutral” way, so that e.g. if you are a moral realist you can interpret “moral judgment” to mean “my best judgment as to what the moral facts are,” whereas if you are a certain kind of moral anti-realist you can interpret “moral judgment” to mean “my best guess as to what I would personally value if I knew more and had more time to think about my values,” and if you have different metaethical views, you might mean something else by “moral judgment.” But of course, my own metaethical views unavoidably lead my report down some paths and not others.

Personally, I use moral terms in such a way that my “moral judgments” are not about objective moral facts, but instead about my own values, idealized in various ways, such as by being better-informed (more on this in Appendix A).16 Under such a view, the question of (e.g.) whether some particular fish is a moral patient, given my values, is a question about whether that fish has certain properties (e.g. conscious experience, or desires that can be satisfied or frustrated) about which I have idealized preferences. For example: if the fish isn’t conscious, I’m not sure I care whether its preferences are satisfied or not, any more than I care whether a (presumably non-conscious) chess-playing computer wins its chess matches or not. But if the fish is conscious (in a certain way), then I probably do care about how much pleasure and how little pain it experiences, for the same reason I care about the pleasure and pain of my fellow humans.

Thus, my aim is not to conduct a conceptual analysis17 of “moral patient,” nor is my aim to discover what the objective moral facts are about which beings are moral patients. Instead, my aim is merely to examine which beings I should consider to be moral patients, given what I predict my values would be if they were better-informed, and idealized in other ways.

I suspect my metaethical approach and my moral judgments overlap substantially with those of at least some other Open Philanthropy Project staff members, and also with those of many likely readers, but I also assume there will be a great deal of non-overlap with my colleagues at the Open Philanthropy Project and especially with other readers. My only means for dealing with that fact is to explain as clearly as I can which judgments I am making and why, so that others can consider what the findings of this report might imply given their own metaethical approach and their own moral judgments.

For example, in Appendix A I discuss my moral intuitions with respect to the following cases:

  • An ankle injury that I don’t notice right away.
  • A fictional character named Phenumb who is conscious in general but has no conscious feelings associated with the satisfaction or frustration of his desires.
  • A short computer program that continuously increments a variable called my_pain (a minimal sketch of such a program appears below).
  • A Mario-playing program that engages in fairly sophisticated goal-directed behavior using a simple search algorithm called A* search.
  • A briefly-sketched AI program that controls the player character in a puzzle game in a way that (seemingly/arguably) exhibits some commonly-endorsed indicators of consciousness, and that (seemingly/arguably) satisfies some theories of moral patienthood.

I suspect most readers share my moral intuitions about some of these cases, and have differing moral intuitions with respect to others.
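
The third case in the list above refers to an extremely simple program. Purely for concreteness, here is a minimal sketch of what such a program might look like; only the variable name my_pain comes from the report, and the rest (the loop, the one-second delay, the printout) is an illustrative guess rather than the exact program discussed in Appendix A.

    # A minimal sketch of a "short computer program that continuously
    # increments a variable called my_pain." Only the variable name comes
    # from the report; everything else here is an illustrative guess.
    import time

    my_pain = 0

    while True:
        my_pain += 1                   # the "pain" counter increases forever
        print(f"my_pain = {my_pain}")
        time.sleep(1)                  # increment roughly once per second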

2.2.2 Proposed criteria for moral patienthood

Presumably a cognitively unimpaired adult human is a moral patient, and a rock is not.18 But what about someone in a persistent vegetative state? What about an anencephalic infant? What about a third-trimester human fetus?19 What about future humans? What about chimpanzees, dogs, cows, chickens, fishes, squid, lobsters, beetles, bees, Venus flytraps, and bacteria? What about sophisticated artificial intelligence systems, such as Facebook’s face recognition system or a self-driving car?20 What about a (so-called) self-aware, self-expressive, and self-adaptive camera network?21 What about a non-player character in a first-person shooter video game, which makes plans and carries them out, ducks for cover when the player shoots virtual bullets at it, and cries out when hit?22 What about the enteric nervous system in your gut, which employs about 5 times as many neurons as the brain of a rat, and would continue to autonomously coordinate your digestion even if its main connection with your brain was severed?23 Is each brain hemisphere in a split-brain patient a separate moral patient?24 Can ecosystems or companies or nations be moral patients?25

Such questions are usually addressed by asking whether a potential moral patient satisfies some criteria for moral patienthood. Criteria I have seen proposed in the academic literature include:

  • Personhood or interests. (I won’t discuss these criteria separately, as they are usually composed of one or more of the criteria listed below.26)
  • Phenomenal consciousness, a.k.a. “subjective experience.” See the detailed discussion below.27
  • Valenced experience: This criterion presumes not just phenomenal consciousness but also some sense in which phenomenal consciousness can be “valenced” (e.g. pleasure vs. pain).
  • Various sophisticated cognitive capacities such as rational agency, self-awareness, desires about the future, ability to abide by moral responsibilities, ability to engage in certain kinds of reciprocal relationships, etc.28
  • Capacity to develop these sophisticated cognitive capacities, e.g. as is true of human fetuses.29
  • Less sophisticated cognitive capacities, or the capacity to develop them, e.g. learning, nociception, memory, selective attention, etc.30
  • Group membership: e.g. all members of the human species, or all living things.31

Note that moral patienthood can be seen as binary or scalar,32 and the boundary between beings that are and are not moral patients might be “fuzzy” (see below).

It is also important to remember that, whatever criteria for moral patienthood we endorse upon reflection, our intuitive attributions of moral patienthood are probably unconsciously affected33 by factors that we would not endorse if we understood how they were affecting us. For example, we might be more likely to attribute moral patienthood to something if it has a roughly human-like face, even though few if any of us would endorse “possession of a human-like face” as a legitimate criterion of moral patienthood. A similar warning can be made about factors which might affect our attributions of phenomenal consciousness and other proposed criteria for moral patienthood.34

An interesting test case is this video of a crab tearing off its own claw. To me, the crab looks “nonchalant” while doing this, which gives me the initial intuition that the crab must not be conscious, or else it would be “writhing in agony.” But crabs are different from humans in many ways. Perhaps this is just what a crab in conscious agony looks like. Or perhaps not.35

2.2.3 Why I investigated phenomenal consciousness first

The only proposed criterion of moral patienthood I have investigated in any depth thus far is phenomenal consciousness. I chose to examine phenomenal consciousness first because:

  1. My impression is that phenomenal consciousness is perhaps the most commonly-endorsed criterion of moral patienthood, and that it is often considered to be the most important such criterion (by those who use multiple criteria). Self-awareness (sometimes called “self-consciousness”) and valenced experience are other contenders for being the most commonly-endorsed criterion of moral patienthood, but in most cases it is assumed that the kinds of self-awareness or valenced experience that confer moral patienthood necessarily involve phenomenal consciousness as well.
  2. My impression is that phenomenal consciousness, or a sort of valenced experience that presumes phenomenal consciousness, are especially commonly-endorsed criteria of moral patienthood among consequentialists, whose normative theories most easily map onto our mission to “accomplish as much good as possible with our giving.”
  3. Personally, I’m not sure whether consciousness is the only thing I care about, but it is the criterion of moral patienthood I feel most confident about (for my own values, anyway).

However, it’s worth bearing in mind that most of us probably intuitively morally care about other things besides consciousness. I focus on consciousness in this report not because I’m confident it’s the only thing that matters, but because my report on consciousness alone is long enough already! I hope to investigate other potential criteria for moral patienthood in the future.

I’m especially eager to think more about valenced experiences such as pain and pleasure. As I explain below, my own intuitions are that if a being had conscious experience, but literally none of it was “valenced” in any way, then I might not have any moral concern for such a creature. But in this report, I focus on the issue of phenomenal consciousness itself, and say very little about the issue of valenced experience.

2.3 My approach to thinking about consciousness

In consciousness studies there is so little consensus on anything — what’s meant by “consciousness,” what it’s made of, what kinds of methods are useful for studying it, how widely it is distributed, which theories of consciousness are most promising, etc. (see Appendix Z.5) — that there are no safe guesses about “where someone is coming from” when they write about consciousness. This often made it difficult for me to understand what writers on consciousness were trying to say, as I read through the literature.

To mitigate this problem somewhat for this explanation of my own tentative views about consciousness, I’ll try to explain “where I’m coming from” on consciousness, even if I can’t afford the time to explain in much detail why I make the assumptions I do.

2.3.1 Consciousness, innocently defined

Van Gulick (2014) describes six different senses in which “an animal, person, or other cognitive system” can be regarded as “conscious,” and the four I can explain most quickly are:

  • Sentience: capable of sensing and responding to its environment
  • Wakefulness: awake (e.g. not asleep or in a coma)
  • Self-consciousness: aware of itself as being aware
  • What it is like: subjectively experiencing36 a certain “something it is like to be” (Nagel 1974), a.k.a. “phenomenal consciousness” (Block 1995), a.k.a. “raw feels” (Tolman 1932)

When I say “consciousness,” I have in mind the fourth concept.37

In particular, I have in mind a relatively “metaphysically and epistemically innocent” definition, a la Schwitzgebel (2016):38

Phenomenal consciousness can be conceptualized innocently enough that its existence should be accepted even by philosophers who wish to avoid dubious epistemic and metaphysical commitments such as dualism, infallibilism, privacy, inexplicability, or intrinsic simplicity. Definition by example allows us this innocence. Positive examples include sensory experiences, imagery experiences, vivid emotions, and dreams. Negative examples include growth hormone release, dispositional knowledge, standing intentions, and sensory reactivity to masked visual displays. Phenomenal consciousness is the most folk psychologically obvious thing or feature that the positive examples possess and that the negative examples lack…

There are many other examples we can point to.39 For example, when I played sports as a teenager, I would occasionally twist my ankle or acquire some other minor injury while chasing after (e.g.) the basketball, and I didn’t realize I had hurt myself until after the play ended and I exited my flow state. In these cases, a “rush of pain” suddenly “flooded” my conscious experience — not because I had just then twisted my ankle, but because I had twisted it 5 seconds earlier, and was only just then becoming aware of it. The pain I felt 5 seconds after I twisted my ankle is a positive example of conscious experience, and whatever injury-related processing occurred in my nervous system during those initial 5 seconds is, as far as I know, a negative example.

However, I would qualify Schwitzgebel’s extensional definition of consciousness by noting that the negative examples, in particular, are at least somewhat contested. A rock is an obvious negative example for most people, but panpsychists disagree, and it is easy to identify other contested examples.40

More plausible than rock consciousness, I think, is the possibility that somewhere in my brain, there was a conscious experience of my injured ankle before “I” became aware of it. Indeed, there may be many conscious cognitive processes that “I” never have cognitive access to. If this is the case, it can in principle be weakly suggested by certain kinds of studies (see Appendix H), and could in principle be strongly suggested once we have a compelling theory of consciousness.41 But for now, I’ll count the injury-related cognitive processing that happened “before I noticed it” as a likely negative example of conscious experience, while allowing that it could be discovered to be a positive example due to future scientific progress.

So perhaps we should say that “Phenomenal consciousness is the most folk psychologically obvious thing (or set of things) that the uncontested positive examples possess, and that the least-contested negative examples plausibly lack,”42 or something along those lines. Similarly, when I use related terms like “qualia” and “phenomenal properties,” I intend them to be defined by example as above, with as much metaphysical innocence as possible. Ideally, one would “flesh out” these definitions with many more examples and clarifications, but I shall leave that exercise to others.43

Importantly, this definition is as “innocent” and theory-neutral as I know how to make it. On this definition, consciousness could still be physical or non-physical, scientifically tractable or intractable, ubiquitous or rare, ineffable or not-ineffable, “real” or “illusory” (see next section), and so on. And in my revised version of Schwitzgebel’s definition, we are not committed to absolute certainty that purported negative examples will turn out to actually be negative examples as we learn more.

Furthermore, I do not define consciousness as “cognitive processes I morally care about,” as that blends together scientific explanation and moral judgment (see Appendix A) in a way that can be confusing to disentangle and interpret.

No doubt our concept of “consciousness” and related concepts will evolve over time in response to new discoveries, and our evolving concepts will influence which empirical inquiries we prioritize, and those inquiries will suggest further revisions to our concepts, as is typically the case.44 But in our current state of ignorance, I prefer to use a notion of “consciousness” that is defined as innocently as I can manage.

I must also stress that my aim here is not to figure out what we “mean” by “consciousness,” any more than Antonie van Leeuwenhoek (1632-1723) was, in studying microbiology, trying to figure out what people meant by “life.”45 Rather, my aim is to understand how the cluster of stuff we now naively call “consciousness” works. Once we understand how those things work, we’ll be in a better position to make moral judgments about which beings are and aren’t moral patients (insofar as consciousness-related properties affect those judgments, anyway). Whether we continue to use the concept of “consciousness” at that point is of little consequence. But for now, since we don’t yet know the details of how consciousness works, I will use terms like “consciousness” and “subjective experience” to point at the ill-defined cluster of stuff I’m talking about, as defined by example above.

2.3.2 My assumptions about the nature of consciousness

Despite preferring a metaphysically innocent definition of consciousness, I will, for this report, make four key assumptions about the nature of consciousness. It is beyond the scope of this report to survey and engage with the arguments for or against these assumptions; instead, I merely report what my assumptions are, and provide links to the relevant scholarly debates. My purpose here isn’t to contribute to these debates, but merely to explain “where I’m coming from.”

First, I assume physicalism. I assume consciousness will turn out to be fully explained by physical processes.46 Specifically, I lean toward a variety of physicalism called “type A materialism,” or perhaps toward the varieties of “type Q” or “type C” materialism that threaten to collapse into “type A” materialism anyway (see footnote47).

Second, I assume functionalism. I assume that anything which “does the right thing” — e.g., anything which implements a certain kind of information processing — is an example of consciousness, regardless of what that thing is made of.48 Compare to various kinds of memory, attention, learning, and so on: these processes are found not just in humans and animals, but also in, for example, some artificial intelligence systems.49 These kinds of memory, attention, and learning are implemented by a wide variety of substrates (but, they are all physical substrates). On the case for functionalism, see footnote.50

Third, I assume illusionism, at least about human consciousness. What this means is that I assume that some seemingly-core features of human conscious experience are illusions, and thus need to be “explained away” rather than “explained.” Consider your blind spot: your vision appears to you as continuous, without any spatial “gaps,” but physiological inspection shows us that there aren’t any rods and cones where your optic nerve exits the eyeball, so you can’t possibly be transducing light from a certain part of your (apparent) visual field. Knowing this, the job of cognitive scientists studying vision is not to explain how it is that your vision is really continuous despite the existence of your physiological blind spot, but instead to explain why your visual field seems to you to be continuous even though it’s not.51 We might say we are “illusionists” about continuous visual fields in humans.

Similarly, I think some core features of consciousness are illusions, and the job of cognitive scientists is not to explain how those features are “real,” but rather to explain why they seem to us to be real (even though they’re not). For example, it seems to us that our conscious experiences have “intrinsic” properties beyond that which could ever be captured by a functional, mechanistic account of consciousness. I agree that our experiences seem to us to have this property, but I think this “seeming” is simply mistaken. Consciousness (as defined above) is real, of course. There is “something it is like” to be us, and I doubt there is “something it is like” to be a chess-playing computer, and I think the difference is morally important. I just think our intuitions mislead us about some of the properties of this “something it’s like”-ness. (For elaborations on these points, see Appendix F.)

Fourth, I assume fuzziness about consciousness, both between and within individuals.52 In other words, I suspect that once we understand how “consciousness” works, there will be no clear dividing line between individuals that have no conscious experience at all and individuals that have any conscious experience whatsoever (I’ll call this “inter-individual fuzziness”),53 and I also suspect there will be no clear dividing line, within a single individual, between mental states or processes that are “conscious” and those which are “not conscious” (I’ll call this “intra-individual fuzziness”).54

Unfortunately, assuming fuzziness means that “wondering whether it is ‘probable’ that all mammals have [consciousness] thus begins to look like wondering whether or not any birds are wise or reptiles have gumption: a case of overworking a term from folk psychology that has [lost] its utility along with its hard edges.”55 One could say the same of questions about which cognitive processes within an individual are “conscious” vs. “unconscious,” which play a key role in arguments about which beings are conscious.

As the scientific study of “consciousness” proceeds, I expect our naive concept of consciousness to break down into a variety of different capacities, dispositions, representations, and so on, each of which will vary along many different dimensions in different beings. As that happens, we’ll be better able to talk about which features we morally care about and why, and there won’t be much utility to arguing about “where to draw the line” between which beings and processes are and aren’t “conscious.” But, given that we currently lack such a detailed decomposition of “consciousness,” I reluctantly organize this report around the notion of “consciousness,” and I write about “which beings are conscious” and “which cognitive processes are conscious” and “when such-and-such cognitive processing becomes conscious,” while pleading with the reader to remember that I think the line between what is and isn’t “conscious” is extremely “fuzzy” (and as a consequence I also reject any clear-cut “Cartesian theater.”)56 For more on the fuzziness of consciousness, see Appendix G.

My assumptions of physicalism and functionalism are quite confident, but probably don’t affect my conclusions about the distribution of consciousness very much anyway, except to make some common pathways to radical panpsychism less plausible.57 My assumption of illusionism is also quite confident, at least about human consciousness, but I’m not sure it implies much about the distribution question (see Appendix F). My assumption of fuzziness is moderately confident, and implies that the distribution question is difficult even to formulate, let alone answer, though I’m not sure it directly implies much about how extensive we should expect “consciousness” to be.

As with any similarly-sized set of assumptions about consciousness (see Appendix Z.5), my own set of assumptions is highly debatable. Physicalism and functionalism are fairly widely held among consciousness researchers, but are often debated and far from universal.58 Illusionism seems to be an uncommon position.59 I don’t know how widespread or controversial “fuzziness” is.

I’m not sure what to make of the fact that illusionism seems to be endorsed by a small number of theorists, given that illusionism seems to me to be “the obvious default theory of consciousness,” as Daniel Dennett argues.60 In any case, the debates about the fundamental nature of consciousness are well-covered elsewhere,61 and I won’t repeat them here.

A quick note about “eliminativism”: the physical processes which instantiate consciousness could turn out to be so different from our naive guesses about their nature that, for pragmatic reasons, we might choose to stop using the concept of “consciousness,” just as we stopped using the concept of “phlogiston.” Or, we might find a collection of processes that are similar enough to those presumed by our naive concept of consciousness that we choose to preserve the concept of “consciousness” and simply revise our definition of it, as happened when we eventually decided to identify “life” with a particular set of low-level biological features (homeostasis, cellular organization, metabolism, reproduction, etc.) even though life turned out not to be explained by any élan vital or supernatural soul, as many people throughout history62 had assumed.63 But I consider this only a possibility, not an inevitability. In other words, I’m not trying to take a strong position on “eliminativism” about consciousness here — I see that as a pragmatic issue to be decided later (see Appendix Z.6). For now, I think it’s easiest to talk about “consciousness,” “qualia,” and so on as truly existing phenomena that can be defined by example as above, despite those concepts having very “fuzzy” boundaries.

3 Specific efforts to sharpen my views about the distribution question

Now we turn to the key question: What is the likely distribution of phenomenal consciousness — as defined by example — across different taxa? (I call this the “distribution question.”)

Note that in this report, I’ll use “taxa” very broadly to mean “classes of systems,” including:

  • Phylogenetic taxa, such as “primates,” “fishes,” “rainbow trout,” “plants,” and “bacteria.”
  • Subsets of phylogenetic taxa, such as “humans in a persistent vegetative state” and “anencephalic infants.”
  • Biological sub-systems, such as “human enteric nervous systems” and “non-dominant brain hemispheres of split-brain patients.”
  • Classes of computer software and/or hardware, such as “deep reinforcement learning agents,” “industrial robots,” “versions of Microsoft Windows,” and “such-and-such application-specific integrated circuit.”

In the academic literature on the distribution question, the three most common argumentative strategies I’ve seen are:64

  1. Theory: Assume a particular theory of consciousness, then consider whether a specific taxon is likely to be conscious if that theory is true. [More]
  2. Potentially consciousness-indicating features: Rather than relying on a specific theory of consciousness, instead suggest a list of behavioral and neurobiological/architectural features which intuitively suggest a taxon might be conscious. Then, check how many of those potentially consciousness-indicating features (PCIFs) are possessed by a given taxon. If the taxon possesses all or nearly all the PCIFs, conclude that its members are probably conscious. If the taxon possesses very few of the proposed PCIFs, conclude that its members probably are not conscious. [More]
  3. Necessary or sufficient conditions: Another approach is to argue that some feature is likely necessary for consciousness (e.g. a neocortex), or that some feature is likely sufficient for consciousness (e.g. mirror self-recognition), without relying on any particular theory of consciousness. If successful, such arguments might not give us a detailed picture of which systems are and aren’t conscious, but they might allow us to conclude that some particular taxa either are or aren’t conscious. [More]

Below, I consider each of these approaches in turn, and then I consider various big-picture considerations that “pull” toward or away from a “consciousness is rare” conclusion (here).

3.1 Theories of consciousness

I briefly familiarized myself with several physicalist functionalist theories of consciousness, listed in Appendix Z.1. Overall, my sense is that the current state of our scientific knowledge is such that it is difficult to tell whether any currently proposed theory of consciousness is promising. My impression from the literature I’ve read, and from the conversations I’ve had, is that many (perhaps most) consciousness researchers agree,65 even though some of the most well-known consciousness researchers are well-known precisely because they have put forward specific theories they see as promising. But if most researchers agreed with their optimism, I would expect theories of consciousness to have been winnowed over the last couple decades, rather than continuing to proliferate,66 under a huge variety of metaphysical and methodological assumptions, as they currently do. (In other words, consciousness studies seems to be in what Thomas Kuhn called a “pre-paradigmatic stage of development.”67)

One might also argue about the distribution question not from the perspective of theories of how consciousness works, but from theories of how consciousness evolved (see the list in Appendix Z.5). Unfortunately, I didn’t find any of these theories any more convincing than currently available theories of how consciousness works.68

Given the unconvincing-to-me nature of current theories of consciousness (see also Appendix B), I decided to pursue investigation strategies that do not require me to put much emphasis on any specific theories of consciousness, starting with the “potentially consciousness-indicating features” strategy described in the next section.

First, though, I’ll outline one example theory of consciousness, so that I can explain what I find unsatisfying about currently-available theories of consciousness. In particular, I’ll describe Michael Tye’s PANIC theory,69 which is an example of “first-order representationalism” (FOR) about consciousness.

3.1.1 PANIC as an example theory of consciousness

To explain Tye’s PANIC theory, I need to explain what philosophers mean by “representation.”70 For philosophers, a representation is a thing that carries information about something else. An image of a flower carries information about a flower. The sentence “The flower smells good” carries information about a flower — specifically, the information that it smells good. Perhaps a nociceptive signal represents, to some brain module, that there is tissue damage of a certain sort occurring at some location on the body. There can also be representations that carry information about things that don’t exist, such as Luke Skywalker. If a representation mischaracterizes the thing it is about in some important way, we say that it is misrepresenting its target. Representational theories of consciousness, then, say that if a system does the right kind of representing, then that system is conscious.

Michael Tye’s PANIC theory, for example, claims that a mental state is phenomenally conscious if it has some Poised, Abstract, Nonconceptual, Intentional Content (PANIC). I’ll briefly summarize what that means.

To simplify just a bit, “intentional content” is just a phrase that (in philosophy) means “representational content” or “representational information.” What about the other three terms?

  • Poised: Conscious representational contents, unlike unconscious representational contents, must be suitably “poised” to play a certain kind of functional role. Specifically, they are poised to impact beliefs and desires. E.g. conscious perception of an apple can change your belief about whether there are apples in your house, and the conscious feeling of hunger can create a desire to eat.
  • Abstract: Conscious representational contents are representations of “general features or properties” rather than “concrete objects or surfaces,” e.g. because in hallucinatory experiences, “no concrete objects need be present at all,” and because under some circumstances two different objects can “look exactly alike phenomenally.”71
  • Nonconceptual: The representational contents of consciousness are more detailed than anything we have words or concepts for. E.g. you can consciously perceive millions of distinct colors, but you don’t have separate concepts for red17 and red18, even though you can tell them apart when they are placed next to each other.

So according to Tye, conscious experiences have poised, abstract, nonconceptual, representational contents. If a representation is missing one of these properties, then it isn’t conscious. For example, consider how Tye explains the consistency of his PANIC theory of consciousness with the phenomenon of blindsight:72

…given a suitable elucidation of the “poised” condition, blindsight poses no threat to [my theory]. Blindsight subjects are people who have large blind areas or scotoma in their visual fields due to brain damage… They deny that they can see anything at all in their blind areas, and yet, when forced to guess, they produce correct responses with respect to a range of simple stimuli (for example, whether an X or an O is present, whether the stimulus is moving, where the stimulus is in the blind field).

If their reports are to be taken at face value, blindsight subjects… have no phenomenal consciousness in the blind region. What is missing, on the PANIC theory, is the presence of appropriately poised, nonconceptual, representational states. There are nonconceptual states, no doubt representationally impoverished, that make a cognitive difference in blindsight subjects. For some information from the blind field does reach the cognitive centers and controls their guessing behavior. But there is no complete, unified representation of the visual field, the content of which is poised to make a direct difference in beliefs. Blindsight subjects do not believe their guesses. The cognitive processes at play in these subjects are not belief-forming at all.

Now that I’ve explained Tye’s theory, I’ll use it to illustrate why I find it (and other theories of consciousness) unsatisfying.

In my view, a successful explanation of consciousness would show how the details of some theory (such as Tye’s) predict, with a fair amount of precision, the explananda of consciousness — i.e., the specific features of consciousness that we know about from our own phenomenal experience and from (reliable, validated) cases of self-reported conscious experience (e.g. in experiments, or in brain lesion studies). For example, how does Tye’s theory of consciousness explain the details of the reports we make about conscious experience? What concept of “belief” does the theory refer to, such that the guesses of blindsight subjects do not count as “beliefs,” but (presumably) some other weakly-held impressions do count? Does PANIC theory make any testable, fairly precise predictions akin to the testable prediction Daniel Dennett made on the final page of Consciousness Explained?73

In short, I think that current theories of consciousness (such as Tye’s) simply do not “go far enough” — i.e., they don’t explain enough consciousness explananda, with enough precision — to be compelling (yet). In Appendix B, I discuss this issue in more detail, and describe how one might construct a theory of consciousness that explains more consciousness explananda, with more precision, than Tye’s theory (or any other theory I’m aware of) does.

3.2 Potentially consciousness-indicating features (PCIFs)

3.2.1 How PCIF arguments work

The first theory-agnostic approach to the distribution question that I examined was the approach of “arguments by analogy” or, as I call them, “arguments about potentially consciousness-indicating features (PCIFs).”74

As Varner (2012) explains, analogy-driven arguments appeal to the principle that because things P and Q share many “seemingly relevant” features (a, b, c, …n), and we know that P has some additional property x, we should infer that Q probably has property x, too.

After all, it is by such an analogy that I believe other humans are conscious. I cannot directly observe that my mother is conscious, but she talks about consciousness like I do, she reacts to stimuli like I do, she has a brain that is virtually identical to my own in form and function and evolutionary history, and so on. And since I know I am conscious, I conclude that my mother is conscious as well.75 The analogy between myself and a chimpanzee is weaker than that between myself and my mother, but it is, we might say, “fairly strong.” The analogy between myself and a pig is weaker still. The analogies between myself and a fish are even weaker but, some argue, still strong enough that we should put some substantial probability on fish consciousness.

One problem with analogy-driven arguments, and one reason they are difficult to fully separate from theory-driven arguments, is this: to decide how salient a given analogy between two organisms is, we need some “guiding theory” about what consciousness is, or what its function is. Varner (2012) explains:76

[The] point about needing such a “guiding theory” can be illustrated with this obviously bad argument by analogy:

  1. Both turkeys (P) and cattle (Q) are animals, they are warm blooded, they have limited stereoscopic vision, and they are eaten by humans (a, b, c, …, and n).
  2. Turkeys are known to hatch from eggs (x).
  3. So probably cattle hatch from eggs, too.

One could come up with more and more analogies to list (e.g., turkeys and cattle both have hearts, they have lungs, they have bones, etc., etc.). The above argument is weak, not because of the number of analogies considered, but because it ignores a crucial disanalogy: that cattle are mammals, whereas turkeys are birds, and we have very different theories about how the two are conceived, and how they develop through to birth and hatching, respectively. Another way of putting the point would be to say that the listed analogies are irrelevant because we have a “guiding theory” about the various ways in which reproduction occurs, and within that theory the analogies listed above are all irrelevant…

So in assessing an argument by analogy, we do not just look at the raw number of analogies cited. Rather, we look at both how salient are the various analogies cited and whether there are any relevant disanalogies, and we determine how salient various comparisons are by reference to a “guiding theory.”

Unfortunately, as explained above, it isn’t clear to me what our guiding theory about consciousness should be. Because of this, I present below a table that includes an unusually wide variety of PCIFs that I have seen suggested in the literature, along with a few of my own. From this initial table, one can use one’s own guiding theories to discard or de-emphasize various PCIFs (rows), perhaps temporarily, to see what doing so seems to suggest about the likely distribution of consciousness.
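
To make the mechanics of that exercise concrete, here is a minimal sketch of how one might score taxa against a PCIF table, with a “guiding theory” entering as per-PCIF weights that discard or de-emphasize rows. All feature names and presence/absence values below are hypothetical placeholders, not entries from the actual table in this report.

    # A toy sketch of PCIF-style scoring. The feature names and 0/1 entries
    # are hypothetical placeholders, not values from the report's table.

    # Rows are PCIFs, columns are taxa; 1 = feature present, 0 = absent.
    pcif_table = {
        "nociceptors":           {"human": 1, "rainbow trout": 1, "fruit fly": 1},
        "behavioral trade-offs": {"human": 1, "rainbow trout": 1, "fruit fly": 0},
        "verbal self-report":    {"human": 1, "rainbow trout": 0, "fruit fly": 0},
    }

    # One's "guiding theory" enters as weights: 0 discards a row entirely,
    # values between 0 and 1 de-emphasize it.
    weights = {"nociceptors": 0.5, "behavioral trade-offs": 1.0, "verbal self-report": 1.0}

    def pcif_score(taxon: str) -> float:
        """Weighted fraction of PCIFs the taxon possesses."""
        total = sum(weights.values())
        present = sum(weights[f] * row[taxon] for f, row in pcif_table.items())
        return present / total

    for taxon in ["human", "rainbow trout", "fruit fly"]:
        print(f"{taxon}: PCIF score = {pcif_score(taxon):.2f}")

The weights are just a crude way of encoding Varner’s point above: what matters is not the raw number of shared features but how salient each feature is given one’s background theory, and different guiding theories will yield very different scores from the same table.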

Another worry is that one’s choices about which PCIFs to include, and which taxa to check for those PCIFs, can bias the conclusions of such an exercise.77 To mitigate this problem, my table below is unusually comprehensive with respect to both PCIFs and taxa. As a result, I could not afford the time to fill out most cells in the table, and thus my conclusions about the utility and implications of this approach (below) are limited.

3.2.2 A large (and incomplete) table of PCIFs and taxa

The taxa represented in the table below were selected either (a) for comparison purposes (e.g. human, bacteria), or (b) because they are killed or harmed in great numbers by human activity and are thus plausible targets of welfare interventions if they are thought of as moral patients (e.g. chickens, fishes), or (c) a mix of both. To represent each very broad taxon of interest (e.g. fishes, insects), I chose a representative sub-taxon that has been especially well-studied (e.g. rainbow trout, common fruit fly). More details on the taxa and PCIFs I chose, and why I chose them, are provided in a footnote.78

One column below — “Function sometimes executed non-consciously in humans?” — requires special explanation. Many behavioral and neurofunctional PCIFs can be executed by humans either consciously or non-consciously. In fact, most cognitive processing in humans seems to occur non-consciously, and humans sometimes engage in fairly sophisticated behaviors without conscious awareness of them, as in (it is often argued) cases of sleepwalking, or when someone daydreams while driving a familiar route, or in cases of absence seizures involving various “automatisms” like this one described by Antonio Damasio:79

…a man sat across from me… [and] we talked quietly. Suddenly the man stopped, in midsentence, and his face lost animation; his mouth froze, still open, and his eyes became vacuously fixed on some point on the wall behind me. For a few seconds he remained motionless. I spoke his name but there was no reply. Then he began to move a little, he smacked his lips, his eyes shifted to the table between us, he seemed to see a cup of coffee and a small metal vase of flowers; he must have, because he picked up the cup and drank from it. I spoke to him again and again he did not reply. He touched the vase. I asked him what was going on, and he did not reply, his face had no expression. He did not look at me. Now, he rose to his feet and I was nervous; I did not know what to expect. I called his name and he did not reply. When would this end? Now he turned around and walked slowly to the door. I got up and called him again. He stopped, he looked at me, and some expression returned to his face — he looked perplexed. I called him again, and he said, “What?”

For a brief period, which seemed like ages, this man suffered from an impairment of consciousness. Neurologically speaking, he had an absence seizure followed by an absence automatism, two among the many manifestations of epilepsy…

If such PCIFs are observed in humans both with and without consciousness, then perhaps the case for treating them as indicative of consciousness in other taxa is weaker than one might think:80 e.g. are fishes conscious of their behavior, or are they continuously “sleepwalking”?81

In the table below, a cell is left blank if I didn’t take the time to investigate, or in some cases even think about, what its value should be, or if I investigated briefly but couldn’t find a clear value for the cell. A cell’s value is “n/a” when a PCIF is not applicable to that taxon, and it is “unavailable” when I’m fairly confident the relevant data has not (as of December 2016) been collected. In some cases, data are not available for my taxon of choice, but I guess or estimate the value of that cell from data available for a related taxon (e.g. a closely related species), and in cases where this leaves me with substantial uncertainty about the appropriate value for that cell, I indicate my extra uncertainty with a question mark, an “approximately” tilde symbol (“~”) for scalar data, or both. To be clear: a question mark does not necessarily indicate that domain experts are uncertain about the appropriate value for that cell of the table; it merely means that I am substantially uncertain, given the very few sources I happened to skim. Sources and reasoning for the value in each cell are given in the footnote immediately following each row’s PCIF.

The values of the cells in this table have not been vetted by any domain experts. In many cases, I populated a cell with a value drawn from a single study, without reading the study carefully or trying hard to ensure I was interpreting it correctly. Moreover, I suspect many of the studies used to populate the cells in this table would not hold up under deeper scrutiny or upon a rigorous replication attempt (see below). Hence, the contents of this table should be interpreted as a set of tentative estimates and guesses, collected hastily by a non-expert.

Because the table doesn’t fit on the page, it must be scrolled horizontally and vertically to view all of its contents.

POTENTIALLY CONSCIOUSNESS-INDICATING FEATURE HUMAN CHIMPANZEE COW CHICKEN RAINBOW TROUT GAZAMI CRAB COMMON FRUIT FLY E. COLI FUNCTION SOMETIMES EXECUTED NON-CONSCIOUSLY IN HUMANS? HUMAN ENTERIC NERVOUS SYSTEM
Last common ancestor with humans (Mya)82 n/a 6.7 96.5 311.9 453.3 796.6 796.6 4290 n/a n/a
Category: Neurobiological features
Adult average brain mass (g)83 1509 385 480.5 3.5 0.2 n/a n/a
Neurons in brain (millions)84 86060 unavailable unavailable ~221 unavailable unavailable 0.12 n/a n/a 400
Neurons in pallium (millions)85 16340 unavailable unavailable ~60.7 unavailable n/a n/a n/a n/a n/a
Encephalization quotient86 7.6 2.35 n/a n/a n/a
Has a neocortex87 Yes Yes Yes No No n/a n/a n/a
Has a central nervous system88 Yes Yes Yes Yes Yes Yes Yes No n/a n/a
Category: Nociceptive features
Has nociceptors89 Yes Yes Yes Yes Yes Yes? Yes Yes? n/a Yes?
Has neural nociceptors90 Yes Yes Yes Yes Yes Yes? Yes No n/a Yes?
Nociceptive reflexes91 Yes Yes Yes Yes Yes Yes? Yes? Yes? Yes
Physiological responses to nociception or handling92 Yes Yes Yes Yes Yes n/a n/a
Long-term alteration in behavior to avoid noxious stimuli93 Yes
Taste aversion learning94 Yes
Protective behavior (e.g. wound guarding, limping, rubbing, licking)95 Yes Yes Yes? Yes? Yes?
Nociceptive reflexes or avoidant behaviors reduced by analgesics96 Yes Yes Yes Yes Yes
Self-administration of analgesia97 Yes Yes Yes
Will pay a cost to access analgesia98 Yes
Selective attention to noxious stimuli over other concurrent events99 Yes Yes
Pain-relief learning100 Yes Yes
Category: Other behavioral/cognitive features
Reports details of conscious experiences to scientists101 Yes No No No No No No No No No
Cross-species measures of general cognitive ability102
Plastic behavior103 Yes
Detour behaviors104 Yes Yes
Play behaviors105 Yes Yes Yes Yes? Yes? n/a
Grief behaviors106 Yes
Expertise107 Yes
Goal-directed behavior108 Yes
Mirror self-recognition109 Yes Yes unavailable? unavailable? unavailable? unavailable? unavailable? n/a n/a
Mental time travel110 Yes
Distinct sleep/wake states111 Yes
Advanced social politics112 Yes
Uncertainty monitoring113 Yes probably? unavailable? unavailable? unavailable? unavailable? unavailable?
Intentional deception114 Yes
Teaching others115
Abstract language capabilities116 Yes
Intentional agency117 Yes
Understands pointing at distant objects118 Yes
Non-associative learning119 Yes
Tool use120 Yes
Can spontaneously plan for future days without reference to current motivational state121 Yes unavailable? unavailable? unavailable? unavailable? unavailable?
Can take into account another’s spatial perspective122 Yes Yes? unavailable? unavailable? unavailable? unavailable? n/a n/a
Theory of mind123 Yes

A fuller examination of the PCIFs approach, which I don’t conduct here, would involve (1) explaining these PCIFs in some detail, (2) cataloging and explaining the strength of the evidence for their presence or absence (or scalar value) for a wide variety of taxa, (3) arguing for some set of “weights” representing how strongly each of these PCIFs indicate consciousness and why, with some PCIFs perhaps being assigned ~0 weight, and (4) arguing for some resulting substantive conclusions about the likely distribution of consciousness.
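
As a very rough illustration of step (3), the sketch below (in Python) combines a set of hand-chosen PCIF “weights” with presence/absence judgments to produce a crude per-taxon score, skipping blank cells. All of the weights and cell values in the sketch are placeholders I made up for illustration, not values drawn from the table above, and the scoring rule is only one of many that could be defended.

```python
# A minimal sketch of step (3): weighted aggregation of PCIF judgments.
# Weights and cell values below are illustrative placeholders only.

weights = {
    "has_neural_nociceptors": 0.2,      # hypothetical weight under some guiding theory
    "self_administers_analgesia": 0.6,
    "mirror_self_recognition": 0.8,
    "nociceptive_reflexes": 0.1,        # a weight of 0 would discard this row entirely
}

observations = {
    # True = PCIF present, False = absent, None = blank cell (not investigated).
    "taxon_a": {
        "has_neural_nociceptors": True,
        "self_administers_analgesia": True,
        "mirror_self_recognition": False,
        "nociceptive_reflexes": True,
    },
    "taxon_b": {
        "has_neural_nociceptors": True,
        "self_administers_analgesia": None,
        "mirror_self_recognition": None,
        "nociceptive_reflexes": True,
    },
}

def pcif_score(taxon: str) -> float:
    """Weighted fraction of the (non-blank) PCIFs that are present for this taxon."""
    filled = [(w, observations[taxon][pcif])
              for pcif, w in weights.items()
              if observations[taxon].get(pcif) is not None]
    total_weight = sum(w for w, _ in filled)
    if total_weight == 0:
        return 0.0
    return sum(w for w, present in filled if present) / total_weight

for taxon in observations:
    print(taxon, round(pcif_score(taxon), 2))
# taxon_a -> 0.53, taxon_b -> 1.0 (with these made-up inputs)
```

Even this toy version makes it obvious how sensitive the output is to the chosen weights, which is just the “guiding theory” problem discussed above; a more serious version would also need to handle scalar PCIFs and my question-mark and tilde annotations.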

3.2.3 My overall thoughts on PCIF arguments

Given that my table of PCIFs and taxa is so incomplete, not much can be concluded from it concerning the distribution question. However, my investigation into analogy-driven arguments, and my incomplete attempt to construct my own table of analogies, left me with some impressions I will now share (but not defend).

First, I think that analogy-driven arguments about the distribution of consciousness typically draw from far too narrow a range of taxa and PCIFs. In particular, it seems to me that analogy-driven arguments, as they are typically used, do not take seriously enough the following points:

  1. Many commonly-used PCIFs are executed both with and without conscious awareness in humans (e.g. at different times), and are thus not particularly compelling evidence for the presence of consciousness in non-humans (without further argument).124
  2. Many commonly-used PCIFs are possessed by biological subsystems which are typically thought to be non-conscious, for example the enteric nervous system and the spinal cord.125
  3. Many commonly-used PCIFs are possessed by simple, short computer programs, or in other cases by more complicated programs in widespread use (such as Microsoft Windows or policy gradients). Yet, these programs are typically thought to be non-conscious, even by functionalists.126
  4. Many commonly-used PCIFs are possessed by plants and bacteria and other very “simple” organisms, which are typically thought to be non-conscious.127 For example, a neuron-less slime mold can store memories, transfer learned behaviors to conspecifics, escape traps, and solve mazes.128
  5. Analogy-driven arguments typically make use of a very short list of PCIFs, and a very short list of taxa. Including more taxa and PCIFs would, I think, give a more balanced picture of the situation.

Second, I think analogy-driven arguments about consciousness too rarely stress the general point that “functionally similar behavior, such as communicating, recognizing neighbors, or way finding, may be accomplished in different ways by different kinds of animals.”129 This holds true for software as well130 — consider the many different algorithms that can be used to sort information, or implement a shared memory system, or make complex decisions,131 or learn from data.132 Clearly, many behavioral PCIFs can be accomplished by many different means, and for any given behavioral PCIF, it may be the case that it is achieved with the help of conscious awareness in some cases, and without conscious awareness in other cases.

Third, analogy-driven arguments typically do not (in my opinion) take seriously enough the possibility of hidden qualia (i.e. qualia inaccessible to introspection; see Appendix H), which has substantial implications for how one should weight the evidential importance of various PCIFs against each other. For example, one might argue that the presence of feature A should be seen as providing stronger evidence of consciousness than the presence of feature B does, because in humans we only observe feature A with conscious accompaniments, but we do sometimes (in humans) observe feature B without conscious accompaniments. But if hidden qualia exist, then perhaps our observations of B “without conscious accompaniments” are mistaken, and these are merely observations of B without conscious accompaniments accessible to introspection.

Fourth, analogy-driven arguments typically do not (in my opinion) take seriously enough the possibility that the ethology literature would suffer from its own “replication crisis” if rigorous attempts at mass replication were undertaken (see here).

3.3 Searching for necessary or sufficient conditions

How else might we learn something about the distribution question, without putting much weight on any single, specific theory of consciousness?

One possibility is to argue that some structure or capacity is likely necessary for consciousness — without relying much on any particular theory of consciousness — and then show that this structure or capacity is present for some taxa and not others. This wouldn’t necessarily prove which taxa are conscious, but it would tell us something about which ones aren’t.

Another possibility is to argue that some structure or capacity is likely sufficient for consciousness, and then show that this structure or capacity is present for some taxa and not others. This wouldn’t say much about which taxa aren’t conscious, but it would tell us something about which ones are.

(Technically, all potential necessary or sufficient conditions are just PCIFs, but with a different “strength” to their indication of consciousness.133 I discuss them separately in this report mainly for organizational reasons.)
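
To make the logical structure of these two strategies explicit, here is a schematic rendering (with F standing for the candidate structure or capacity, and x ranging over systems or taxa):

```latex
% F as a necessary condition for consciousness:
%   lacking F rules consciousness out; possessing F settles nothing by itself.
\mathrm{Conscious}(x) \rightarrow F(x)
  \qquad\text{equivalently}\qquad
  \neg F(x) \rightarrow \neg\,\mathrm{Conscious}(x)

% F as a sufficient condition for consciousness:
%   possessing F rules consciousness in; lacking F settles nothing by itself.
F(x) \rightarrow \mathrm{Conscious}(x)
```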

Of course, we’d want such necessary or sufficient conditions to be “substantive.” For example, I think there’s a pretty strong case that information processing of some sort is a necessary condition for consciousness, but this doesn’t tell me much about the distribution of consciousness: even bacteria process information. I also think there’s a pretty strong case that “human neurobiology plus detailed self-report of conscious experience” should be seen as sufficient evidence for consciousness, but again this doesn’t tell me anything novel or interesting about the distribution question.

I assume that at this stage of scientific progress we cannot definitively prove a “substantive” necessary or sufficient condition for consciousness, but can we make a “moderately strong argument” for some such necessary or sufficient condition?

Below I consider the case for just one proposed necessary or sufficient condition for consciousness.134 There are other candidates I could have investigated,135 but decided not to at this time.

3.3.1 Is a cortex required for consciousness?

One commonly-proposed necessary condition for phenomenal consciousness is possession of a cortex, or sometimes possession of a neocortex, or possession of a specific part of the neocortex such as the association cortex. Collectively, I’ll refer to these as “cortex-required views” (CRVs). Below, I report my findings about the credibility of CRVs.

Even sources arguing against CRVs often acknowledge that, for many years, it has been commonly believed by cognitive scientists and medical doctors that the cortex is the organ of consciousness in humans,136 though it’s not clear whether they would have also endorsed the much stronger claim that a cortex is required for consciousness in general. However, some experts have recently lost confidence that the cortex is required for consciousness (even just in humans), for several reasons (which I discuss below).137

One caveat about the claims above, and throughout the rest of this section, is that different authors appeal to slightly different definitions of “consciousness,” and so it is not always the case that the authors I cite or quote explicitly argued for or against the view that a cortex is required for “consciousness” as defined above. Still, these arguments are sometimes used by others to make claims about the dependence or non-dependence of consciousness (as defined above) on a cortex, and certainly the arguments could easily be adapted to make such claims.

In this section, I describe some of the evidence used to argue for and against a variety of cortex-required views. As with the table of PCIFs and taxa above, please keep in mind that I am not an expert on the topics reviewed below, and my own understanding of these topics is based on a quick and shallow reading of various overview books and articles, plus a small number of primary studies.138

3.3.1.1 Arguments for cortex-required views

For decades, much (but not all) of the medical literature and the bioethics literature more-or-less assumed one or another CRV, at least in the case of humans, without much argument.139 In those sources which argue for a CRV,140 I typically see two types of arguments:

  1. For multiple types of cognitive processing (visual processing, emotional processing, etc.), we have neuroimaging evidence and other evidence showing that we are consciously aware of activity occurring in (some regions of) the cortex, but we are not aware of activity occurring outside (those regions of) the cortex.
  2. In cases where (certain kinds of) cortical operations are destroyed or severely disrupted, conscious experience seems to be abolished. When those cortical operations are restored, conscious experience returns.

According to James Rose141 and some other proponents of one or another CRV, the case for CRVs about consciousness comes not from any one line of evidence, but from several converging lines of evidence of types (1) and (2), all of which somewhat-independently suggest that conscious processing must be subserved by certain regions of the cortex, whereas unconscious processing can be subserved by other regions. If this is true, this could provide a very suggestive case in favor of some kind of CRV about consciousness. (Though, this suggestive case would still be undercut somewhat by the possibility of hidden qualia.)

Unfortunately, I did not have the time to survey several different lines of evidence to check whether they converged in favor of some kind of CRV. Instead, I examine below just one of these lines of evidence — concerning conscious and unconscious vision — in order to illustrate how the case for a CRV could be constructed, if other lines of evidence showed a similar pattern of results.

3.3.1.2 Unconscious vision

Probably the dominant142 (but still contested) theory of human visual processing holds that most human visual processing occurs in two largely (but not entirely) separate streams of processing. According to this theory, the ventral stream, also known as “vision for perception,” serves to recognize and identify objects and people, and typically leads to conscious visual experience. The dorsal stream, also known as “vision for action,” serves to locate objects precisely and interact with them, but is not part of conscious experience.143 Below, I summarize what this theory says, but I don’t summarize the evidence in favor of the theory. I summarize that evidence in Appendix C.

These two streams are thought to be supported by different regions of the cortex, as shown below:

[Figure: Two streams of visual processing. Image from Wikimedia Commons, Creative Commons license.]

In primates, most visual information from the retina passes through the lateral geniculate nucleus (LGN) in the thalamus on its way to the primary visual cortex (V1) in the occipital lobe at the back of the skull.144 From there, visual information is passed to two separate streams of processing. The ventral stream leads into the inferotemporal cortex in the temporal lobe, while the dorsal stream leads into the posterior parietal cortex in the parietal lobe. (The dorsal stream also receives substantial input from several subcortical structures in addition to its inputs from V1, whereas the ventral stream depends almost entirely on inputs from V1.145)

To illustrate how these systems are thought to interact, consider an analogy to the remote control of a robot in a distant or hostile environment (e.g. Mars):146

In tele-assistance, a human operator identifies and “flags” the goal object, such as an interesting rock on the surface of Mars, and then uses a symbolic language to communicate with a semi-autonomous robot that actually picks up the rock.

A robot working with tele-assistance is much more flexible than a completely autonomous robot… Autonomous robots work well in situations such as an automobile assembly line, where the tasks they have to perform are highly constrained and well specified… But autonomous robots… [cannot] cope with events that its programmers [have] not anticipated…

At present, the only way to make sure that the robot does the right thing in unforeseen circumstances is to have a human operator somewhere in the loop. One way to do this is to have the movements or instructions of the human operator… simply reproduced in a one-to-one fashion by the robot… [but this setup] cannot cope well with sudden changes in scale (on the video monitor) or with a significant delay between the communicated action and feedback from that action [as with a Mars robot]. This is where tele-assistance comes into its own.

In tele-assistance the human operator doesn’t have to worry about the real metrics of the workspace or the timing of the movements made by the robot; instead, the human operator has the job of identifying a goal and specifying an action toward that goal in general terms. Once this information is communicated to the semi-autonomous robot, the robot can use its on-board range finders and other sensing devices to work out the required movements for achieving the specified goal. In short, tele-assistance combines the flexibility of tele-operation with the precision of autonomous robotic control.

…[By analogy,] the perceptual systems in the ventral stream, along with their associated memory and other higher-level cognitive systems in the brain, do a job rather like that of the human operator in tele-assistance. They identify different objects in the scene, using a representational system that is rich and detailed but not metrically precise. When a particular goal object has been flagged, dedicated visuomotor networks in the dorsal stream, in conjunction with output systems elsewhere in the brain… are activated to perform the desired motor act. In other words, dorsal stream networks, with their precise egocentric coding of the location, size, orientation, and shape of the goal object, are like the robotic component of tele-assistance. Both systems have to work together in the production of purposive behavior — one system to help select the goal object from the visual array, the other to carry out the required metrical computations for the goal-directed action.


If something like this account is true — and it might not be; see Appendix C — then it could be argued to fit with a certain kind of CRV, according to which some parts of the cortex — those which include the ventral stream but not the dorsal stream — are required for conscious experience (at least in humans).147

3.3.1.3 Suggested other lines of evidence for CRVs

On its own, this theory of conscious and unconscious vision is not very suggestive, but if several different types of cognitive processing tell a similar story — with all of them seeming to depend on certain areas of the cortex for conscious processing, with unconscious processing occurring elsewhere in the brain — then this could add up to a suggestive argument for some kind of CRV.

Here are some other bodies of evidence that could (but very well might not) turn out to collectively suggest some sort of CRV about consciousness:

  • Preliminary evidence suggests there may be multiple processing streams for other sense modalities, too, but I haven’t checked whether this evidence is compatible with CRVs.148
  • There is definitely “unconscious pain” (technically, unconscious nociception), but I haven’t checked whether the evidence is CRVs-compatible. (See my linked sources on this in Appendix D.)
  • There are both conscious and unconscious aspects to our emotional responses, but I haven’t checked whether the relevant evidence is CRVs-compatible. (See Appendix Z.4.)
  • Likewise, there are both conscious and unconscious aspects of (human) learning and memory,149 but I haven’t checked whether the relevant evidence is CRVs-compatible.
  • According to Laureys (2005), patients in a persistent vegetative state (PVS), who are presumed to be unconscious, show greatly reduced activity in the associative cortices, and also show disrupted cortico-cortical and thalamo-cortical connectivity. Laureys also says that recovery from PVS is accompanied by restored connectivity of some of these thalamo-cortical pathways.150
  • The mechanism by which general anesthetics abolish consciousness in humans isn’t well-understood, but at least one live hypothesis is that (at least some) general anesthetics abolish consciousness primarily by disrupting cortical functioning. If true, perhaps this account would lend some support to some CRVs.151
  • Coma states, in which consciousness is typically assumed to be absent, seem to be especially associated with extensive cortical damage.152

3.3.1.4 Overall thoughts on arguments for CRVs

I have not taken the time to assess the case for CRVs about consciousness. I can see how such a case could be made, if multiple lines of evidence about a variety of cognitive functions aligned with the suggestive evidence concerning the neural substrates of conscious vs. unconscious vision. On the other hand, my guess is that if I investigated these additional lines of evidence, I would find the following:

  1. I expect I would find that the evidence base on these other topics is less well-developed than the evidence base concerning conscious vs. unconscious vision, since vision neuroscience seems to be the most “developed” area within cognitive neuroscience.
  2. I expect I would find that the evidence concerning which areas of the brain subserve specifically conscious processing of each type would be unclear, and subject to considerable expert debate.
  3. I expect I would find that the underlying studies often suffer from the weaknesses described in Appendix Z.8.

And, as I mentioned earlier, the possibility of hidden qualia undermines the strength of any pro-CRVs argument one could make from empirical evidence about which processes are conscious vs. unconscious, since the “unconscious” processes might actually be conscious, but in a way that is not accessible to introspection.

Overall, then, my sense is that the case for CRVs about consciousness is currently weak or at least equivocal, though I can imagine how the case could turn out to be quite suggestive in the future, after much more evidence is collected in several different subdomains of neurology and cognitive neuroscience.

3.3.1.5 Arguments against cortex-required views

One influential paper, Merker (2007), pointed to several pieces of evidence that seem, to some people, to argue against CRVs. One piece of evidence is the seemingly-conscious behavior of hydranencephalic children,153 whose cerebral hemispheres are almost entirely missing and replaced by cerebrospinal fluid filling that part of the skull:

These children are not only awake and often alert, but show responsiveness to their surroundings in the form of emotional or orienting reactions to environmental events…, most readily to sounds, but also to salient visual stimuli… They express pleasure by smiling and laughter, and aversion by “fussing,” arching of the back and crying (in many gradations), their faces being animated by these emotional states. A familiar adult can employ this responsiveness to build up play sequences predictably progressing from smiling, through giggling, to laughter and great excitement on the part of the child. The children respond differentially to the voice and initiatives of familiars, and show preferences for certain situations and stimuli over others, such as a specific familiar toy, tune, or video program, and apparently can even come to expect their regular presence in the course of recurrent daily routines.

…some of these children may even take behavioral initiatives within the severe limitations of their motor disabilities, in the form of instrumental behaviors such as making noise by kicking trinkets hanging in a special frame constructed for the purpose (“little room”), or activating favorite toys by switches, presumably based upon associative learning of the connection between actions and their effects… The children are, moreover, subject to the seizures of absence epilepsy. Parents recognize these lapses of accessibility in their children, commenting on them in terms such as “she is off talking with the angels,” and parents have no trouble recognizing when their child “is back.”

In a later survey of 108 primary caregivers of hydranencephalic children (Aleman & Merker 2014), 94% of respondents said they thought their child could feel pain, and 88% said their child takes turns with the caregiver during play activities.

However, these findings are not a certain refutation of CRVs, for at least three reasons. First, hydranencephalic children cannot provide verbal reports of conscious experiences (if they have any). Second, it is typically the case that hydranencephaly allows for small portions of the cortex to develop, which might subserve conscious experience. Third, there is the matter of plasticity: perhaps consciousness normally requires certain regions of the cortex, but in cases of hydranencephaly, other regions are able to support conscious functions. Nevertheless, these observations of hydranencephalic children are suggestive to many people that CRVs cannot be right.

Another line of evidence against CRVs comes from isolated case studies in which conscious experience remains despite extensive cortical damage.154 But once again these cases are not a definitive refutation of CRVs, because “extensive cortical damage” is not the same as “complete destruction of the cortex,” and also because of the issue of plasticity mentioned above.

Or, consider the case of Mike the headless chicken. On September 10, 1945, a Colorado farmer named Lloyd Olsen decapitated a chicken named “Mike.” The axe removed most of Mike’s head, but left intact the jugular vein, most of the brain stem, and one ear. Mike got back up and began to strut around as normal. He survived another 18 months, being fed with milk and water from an eyedropper, as well as small amounts of grit (to help with digestion) and small grains of corn, dropped straight into the exposed esophagus. Mike’s behavior was reported to be basically normal, albeit without sight. For example, according to various reports, he tried to crow (which made a gurgling sound), he could hear and respond to other chickens, and he tried to preen himself (which didn’t accomplish much without a beak). He was taken on tour, photographed for dozens of magazines and newspapers, and examined by researchers at the University of Utah.155

For those who endorse a CRV, Mike could be seen as providing further evidence that a wide variety of behaviors can be produced without any conscious experience. For those who reject CRVs, Mike could be seen as evidence that the brain stem alone can be sufficient for consciousness.

Another problem remains. Even if it could be proved that particular cortical structures are required to produce conscious experiences in humans, this wouldn’t prove that other animals can’t be phenomenally conscious via other brain structures. For example, it might be the case that once the cortex evolved in mammals, some functions critical to consciousness “migrated” from subcortical structures to cortical ones.156 To answer this question, we’d need to have a highly developed theory of how consciousness works in general, and not just evidence about its necessary substrates in humans.

Several authors have summarized additional arguments against CRVs,157 but I don’t find any of them to be even moderately conclusive. I do, however, think all this is sufficient to conclude that the case for CRVs is unconvincing. Hence, I don’t think there is even a “moderately strong” case for the cortex as a necessary condition for phenomenal consciousness (in humans and animals). But, I could imagine the case becoming stronger (or weaker) with further research.

3.4 Big-picture considerations that pull toward or away from “consciousness is rare”

Given that (1) I lack a satisfying theory of consciousness, (2) I don’t know which PCIFs are actually consciousness-indicating, and (3) I haven’t found any convincing and substantive necessary or sufficient conditions for consciousness, my views about the distribution of consciousness at this point seem to be quite sensitive to how much weight I assign to various “big picture considerations.” I explain four of these below.

Putting substantial weight on some of these considerations pulls me toward a “consciousness is rare” conclusion, whereas putting more weight on other considerations pulls me toward a “consciousness is extensive” conclusion.

3.4.1 Consciousness inessentialism

How did artificial intelligence (AI) researchers build machines that outperform many or most humans (and in some cases all humans) at intelligent tasks such as playing Go or DOOM, driving cars, reading lips, and translating texts between languages? They did not do it by figuring out how consciousness works, even though consciousness might be required for how we do those things. In my experience, most AI researchers don’t think they’ll need to understand consciousness to successfully automate other impressive feats of human intelligence, either,158 and that fits with my intuitions as well (though I won’t argue the point here).

That said, AI scientists might produce consciousness as a side effect of trying to automate certain intelligent behaviors, without first understanding how consciousness works, just as AI researchers at Google DeepMind produced a game-playing AI that learned to exploit the “tunneling” strategy in Breakout!, even though the AI programmers didn’t know about that strategy themselves, and didn’t specifically write the AI to use it. Perhaps it is even the case that certain intelligent behaviors can only be achieved with the participation of conscious experience, even if the designers don’t need to understand consciousness themselves to produce a machine capable of exhibiting those intelligent behaviors.

My own intuitions lean the other way, though. I think it’s plausible that, “for any intelligent activity i, performed in any cognitive domain d, even if we do i with conscious accompaniments, i can in principle be done without these conscious accompaniments.” Flanagan (1992) called this view “conscious inessentialism,”159 but I think it is more properly called “consciousness inessentialism.”160

Defined this way, consciousness inessentialism is not the same as epiphenomenalism about consciousness, nor does it require that one think philosophical zombies are empirically possible. (Indeed, I reject both those views.) Instead, consciousness inessentialism merely requires that it be possible in principle for a system to generate the same input-output behavior as a human (or some other conscious system), without that system being conscious.

To illustrate this view, imagine replacing a human brain with a giant lookup table:

A Giant Lookup Table… is when you implement a function as a giant table of inputs and outputs, usually to save on runtime computation. If my program needs to know the multiplicative product of two inputs between 1 and 100, I can write a multiplication algorithm that computes each time the function is called, or I can precompute a Giant Lookup Table with 10,000 entries and two indices. There are times when you do want to do this, though not for multiplication — times when you’re going to reuse the function a lot and it doesn’t have many possible inputs; or when clock cycles are cheap while you’re initializing, but very expensive while executing.

Giant Lookup Tables [GLUTs] get very large, very fast. A GLUT of all possible [twenty-remark] conversations with ten words per remark, using only 850-word Basic English, would require 7.6 * 10^585 entries.

Replacing a human brain with a Giant Lookup Table of all possible sense inputs and motor outputs (relative to some fine-grained digitization scheme) would require an unreasonably large amount of memory storage. But “in principle”… it could be done.

The GLUT is not a [philosophical] zombie… because it is microphysically dissimilar to a human.

A GLUT of a human brain is not physically possible because it is too large to fit inside the observable universe, let alone inside a human skull, but it illustrates the idea of consciousness inessentialism: if it were possible, for example via a hypercomputer, a GLUT would exhibit all the same behavior as a human — including talking about consciousness, writing articles about consciousness, and so on — without (I claim) being conscious.
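
To make the quoted example concrete, here is a minimal sketch (in Python) of a multiplication GLUT for inputs between 1 and 100, next to an ordinary computed version. The two functions have identical input-output behavior over that domain while doing very different things internally, which is the property the thought experiment relies on.

```python
# Computed implementation: does the arithmetic on every call.
def multiply_computed(a: int, b: int) -> int:
    return a * b

# Giant Lookup Table implementation: precompute all 10,000 answers once,
# then answer each query by table lookup alone, with no arithmetic at query time.
GLUT = {(a, b): a * b for a in range(1, 101) for b in range(1, 101)}

def multiply_glut(a: int, b: int) -> int:
    return GLUT[(a, b)]

# Identical input-output behavior over the domain, different internal processing.
assert all(multiply_computed(a, b) == multiply_glut(a, b)
           for a in range(1, 101) for b in range(1, 101))
```

As the quoted passage notes, a brain-scale analogue of such a table would be absurdly large; the sketch is only meant to show that matching behavior does not pin down matching internal processing.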

Can we imagine a physically possible system that would exhibit all human input-output behavior without consciousness? Unfortunately, answering that question seems to depend on knowing how consciousness works. But, I think the answer might very well turn out to be “yes,” largely because, as every programmer knows, there are almost always many, many ways to write any given computer program (defined in terms of its input-output behavior), and those different ways of writing the program typically use different internal sequences of information processing and different internal representations. In most contexts, what separates a “good” programmer from a mediocre one is not that the good programmer can find a way to write a program satisfying some needed input-output behavior while the mediocre programmer cannot; rather, what separates them is that the good programmer can write the needed program using particular sequences of information processing and internal representations that (1) are easy to understand and debug, (2) are modular and thus easy to modify and extend, (3) are especially computationally efficient, and so on.

Similarly, if consciousness is instantiated by some kinds of sequences of information processing and internal representations but not others, then it seems likely to me that there are many cognitive algorithms that could give rise to my input-output behavior without the particular sequences of information processing and internal representations that instantiate consciousness. (Again, remember that as with the GLUT example, this does not imply epiphenomenalism, nor the physical possibility of zombies.)

For example, suppose (for the sake of illustration) that consciousness is only instantiated in a human brain if, among other necessary conditions, some module A shares information I with some module B in I’s “natural” form. Afterward, module B performs additional computations on I, and passes along the result to module C, which eventually leads to verbal reports and stored memories of conscious experience. But now, suppose that my brain is rewired such that module A encrypts I before passing it to B, and B knows how to perform the requisite computations on I via fully homomorphic encryption, but B doesn’t know how to decrypt the encrypted version of I. Next, B passes the result to module C which, as a result of the aforementioned rewiring, does know how to decrypt the encrypted version of I, and passes it along (in unencrypted form) so that it eventually results in verbal reports and stored memories of conscious experience. In this situation (with the hypothesized “rewiring”), the same input-output behavior as before is always observed, even my verbal reports about conscious experience, but consciousness is never instantiated inside my brain, because module B never sees information I in its natural form.161
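
The hypothesized rewiring is hard to illustrate with real fully homomorphic encryption, but a toy stand-in can convey the structure. In the sketch below (in Python), module B sums values it only ever sees in an encoded form it cannot decode, and module C recovers the result in its natural form. The encoding (multiplication by a secret constant) is not real encryption, let alone fully homomorphic encryption, and nothing here is a claim about how brains work; it is purely a structural illustration.

```python
# A toy stand-in for the A -> B -> C pipeline described above. The "encryption"
# (multiplying by a secret constant) is NOT real encryption; it is only
# additively homomorphic and trivially breakable. It is used solely to show the
# structure: B computes on information it never sees in its natural form, and
# only C can decode the result.

SECRET_KEY = 7919  # known to modules A and C, but not to module B

def module_a_encrypt(value: int) -> int:
    # Module A encodes information I before passing it along.
    return value * SECRET_KEY

def module_b_compute(encoded_values):
    # Module B performs a computation (here, a sum) directly on the encoded
    # values, without ever decoding them.
    return sum(encoded_values)

def module_c_decrypt(encoded_result: int) -> int:
    # Module C decodes the result and passes it on in its natural form.
    return encoded_result // SECRET_KEY

inputs = [3, 14, 27]
encoded = [module_a_encrypt(v) for v in inputs]
result = module_c_decrypt(module_b_compute(encoded))
assert result == sum(inputs)  # same input-output behavior as an unencoded pipeline
```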

Of course, it seems unlikely in the extreme that the human brain implements fully homomorphic encryption — this is just an illustration of the general principle that there are many ways to compute a quite sophisticated behavioral function, and it’s plausible that not all of those methods also compute an internal cognitive function that is sufficient for consciousness.

Unfortunately, it’s still unclear whether there is any system of a reasonable scale that would replicate my behavior without any conscious experience. That seems to depend in part on whether the computations necessary for human consciousness are highly specific (akin to those of a specific, small-scale device driver but not other device drivers), or whether consciousness is a result of relatively broad, general kinds of information processing (a la global workspace theory). In the former case, one might imagine relatively small (but highly unlikely to have evolved) tweaks to my brain that would result in identical behavior without conscious experience. In the latter case, this is more difficult to imagine, at least without vastly more computational resources, e.g. the computational resources required to execute a large fraction of my brain’s total information processing under fully homomorphic encryption.

Moreover, even if I’m right about consciousness inessentialism, I also think it’s quite plausible that, as a matter of contingent fact, many animals do have conscious experiences (of some sort) accompanying some or all of their sophisticated behaviors. In fact, at least for the animals with relatively similar brains to ours (primates, and probably all mammals), it seems more reasonable than not to assume they do have conscious experiences (at least, before consulting additional evidence), simply because we have conscious experiences, we share a not-so-distant common ancestor with those animals,162 and their brains seem similar to ours in many ways (see Appendix E).

Still, if you find consciousness inessentialism as plausible as I do, then you can’t take for granted that if an animal exhibits certain sophisticated behaviors, it must be conscious.163 On the other hand, if you find consciousness inessentialism highly implausible, then perhaps at least some sophisticated behaviors should be taken (by you) to be very strong evidence of consciousness. In this way, putting more weight on consciousness inessentialism should shift one’s view in the direction of “consciousness might be rare,” whereas giving little credence to consciousness inessentialism should shift one’s view in the direction of “consciousness might be widespread.”

3.4.2 The complexity of consciousness

How complex is consciousness? That is, how many “components” are needed, and how precisely must they be organized, for a conscious experience to be instantiated? The simpler consciousness is, the more extensive it should be (all else equal), for the same reason that both “unicellular life” and “multicellular life” are rarer than “life,” and for the same reason that instances of both “Microsoft Windows” and “Mac OS” are rarer than instances of “personal computer operating systems.”

To illustrate this point, I’ll very briefly survey some families of theories of consciousness which differ in how complex they take consciousness to be.

Panpsychism posits that consciousness is a fundamental feature of reality, e.g. a fundamental property in physics.164 This, of course, is as simple (and therefore ubiquitous) as consciousness could possibly be.

Compared to panpsychism, first-order representationalism (FOR) posits a substantially more complex account of consciousness. Above, I described an example FOR theory, Michael Tye’s PANIC theory. On Tye’s theory of consciousness (and on other FOR theories), consciousness is a much more specific, complicated, and rare sort of thing than it is on a panpsychist view. If conscious states are states with PANIC, then we are unlikely to find them in carbon dioxide molecules, stars, and rocks,165 as the panpsychist claims we should. Nevertheless, the PANIC theory seems to imply that consciousness might be relatively extensive within the animal kingdom. When Tye applies his own theory to the distribution question, he concludes that even some insects are clearly conscious.166 Personally, I think a PANIC theory of consciousness also seems to imply that some webcams and many common software programs are conscious, though I suspect Tye disagrees that his theory has that implication.

Meanwhile, higher-order approaches to consciousness typically posit an even more complex account of consciousness, relative to FOR theories like PANIC. Carruthers (2016) explains:167

According to first-order views, phenomenal consciousness consists in analog or fine-grained contents that are available to the first-order processes that guide thought and action. So a phenomenally-conscious percept of red, for example, consists in a state with the analog content red which is tokened in such a way as to feed into thoughts about red, or into actions that are in one way or another guided by redness…

The main motivation behind higher-order theories of consciousness… derives from the belief that all (or at least most) mental-state types admit of both conscious and unconscious varieties. Almost everyone now accepts, for example, …that beliefs and desires can be activated unconsciously. (Think, here, of the way in which problems can apparently become resolved during sleep, or while one’s attention is directed to other tasks. Notice, too, that appeals to unconscious intentional states are now routine in cognitive science.) And then if we ask what makes the difference between a conscious and an unconscious mental state, one natural answer is that conscious states are states that we are aware of… That is to say, these are states that are the objects of some sort of higher-order representation…

As a further example, consider the case of unconscious vision, discussed here. Visual processing in the dorsal stream seems to satisfy something very close to Tye’s PANIC criteria,168 and yet these processes are unconscious (as far as anyone can tell). Hence the suggestion that more is required — specifically, that some “higher-order” processing is required. For example, perhaps it’s the case that for visual processing to be conscious, some circuits of the brain need to represent parts of that processing as being attended-to by the self, or something like that.

It’s easy to see, then, why higher-order theories will tend to be more complex than FOR theories. Basically, higher-order theories tend to assume FOR-style information processing, but they say that some additional processing is required in order for consciousness to occur. If higher-order theories are right, then (all else equal) we should expect consciousness to be rarer than if first-order theories are right.

What about illusionist theories? As of today, most illusionist theories seem to be at least as complex as higher-order theories tend to be.169 But perhaps in the future illusionists will put forward compelling illusionist theories which do not imply a particularly complex account of consciousness.170

So, how complex will consciousness turn out to be?

Much of the academic debate over the complexity of consciousness and the distribution question has taken place in the context of the debate between first-order and higher-order approaches to consciousness, and experts seem to agree that higher-order theories imply a less-extensive distribution of consciousness than first-order theories do. If I assume this framing for the debate, I generally find myself more sympathetic with higher-order theories (for the usual reason summarized by Carruthers above), though I think there are some reasons to take a first-order (or at least “lower-order”171) approach as a serious possibility (see Appendix H).

However, I think the first-order / higher-order dichotomy is a very limiting way to argue about theories of consciousness, the complexity of consciousness, and the distribution question. I would much rather see these debates transition to being debates about proposed (and at least partially coded) cognitive architectures — architectures which don’t neatly fit into the first-order / higher-order dichotomy. (I say more about this here.)

One final comment on the likely complexity of consciousness is that, as far as I can tell, early scientific progress (outside physics) tends to lead to increasingly complex models of the phenomena under study. If this pattern is real, and holds true for the study of consciousness, then perhaps future accounts of consciousness will tend to be more complex than the accounts we have come up with thus far. (For more on this, see Appendix Z.9.)

3.4.3 We continue to find that many sophisticated behaviors are more extensive than we once thought

My third “big-picture consideration” is motivated by the following fact: turn to almost any chapter of a recent textbook on animal cognition,172 check the publication years of the cited primary studies, and you’ll find an account that could be summarized like this: “A few decades ago, we didn’t know that [some animal taxon] exhibited [some sophisticated behavior], suggesting they may have [some sophisticated cognitive capacity]. Today, we have observed multiple examples.” In this sense, at least, it seems true that “research constantly moves forward, and the tendency of research is to extend the number of animals that might be able to suffer, not decrease it.”173

Consider, for example, these (relatively) recent reported discoveries:174

  • Several forms of tool use and tool manufacture by insects and other invertebrates175
  • Complex food preparation by a wild dolphin176
  • The famous feats of Alex the parrot
  • Fish using transitive inference to learn their social rank177
  • Fairly advanced puzzle-solving by New Caledonian crows178
  • Western scrub-jays planning for future days without reference to their current motivational states179

Are these observations of sophisticated animal behavior trustworthy? In many cases, I have my doubts. Studies of animal behavior often involve very small sample sizes, no controls (or poorly constructed controls), and inadequate reporting. Many studies fail to replicate.180 In general, the study of animal behavior seems to suffer from many of the weaknesses of scientific methodology that I summarize (for other fields) in Appendix Z.8.181 On the other hand, the trend in the reported sophistication of animal behaviors seems clear. Can it all be a mirage?

I suspect not, for at least two reasons.

First, one skeptical explanation of the trend described above might be the following: “People who decide to devote their careers to ethology are more likely to be people who are intuitively empathic toward (and think highly of) animals, and they’re just ‘finding’ what they want to find, and the reason for the trend is just that it takes time for a small number of scientists to get around to running the right sorts of experiments with an expanding set of species.” The part about “it takes time” is almost certainly true whether the field is generally biased or not, but what can we say about the likelihood of the proposed bias itself?

One piece of evidence is this: for most of the 20th century, ethologists were generally reluctant to attribute sophisticated cognitive capacities to animals, in part due to the dominating influence182 of Morgan’s Canon of 1894, which states that “In no case is an animal activity to be interpreted in terms of higher psychological processes if it can be fairly interpreted in terms of processes which stand lower in the scale of psychological evolution and development.” Or as Breed (2017) puts it: “Do not over-credit animals with human-like capacities, look for the simplest possible explanations for animal behavior.” According to Breed, “It really has been only in the last 20 years that consideration of animal cognition, thoughts and feelings has gained substantial scientific credibility.”183

Given this history, it doesn’t seem that those who devote their careers to the study of animal behavior are in general heavily biased toward ‘finding’ more sophisticated cognitive capacities in animals than those animals actually possess.184 If anything, my quick read of the history of the field is more consistent with a story according to which ethologists have generally been too biased against the possibility of sophisticated animal capacities, and are only recently overcoming that initial bias.

A second reason I suspect the trend of discovering sophisticated capacities in an ever-widening set of species is not entirely a mirage is that even the most skeptical ethologists seem to accept the general trend. For example, consider Clive Wynne, a comparative psychologist at Arizona State University. Wynne avoids talking of animal “thought” or “intelligence,” refers to Morgan’s Canon as “the most awesome weapon in animal psychology,” remains agnostic about whether animal behavior is driven by internal representations of the world, thinks that not even chimpanzees have been shown (yet) to have a theory of mind, does not think mirror self-recognition demonstrates the possession of a self-concept, does not think teaching of conspecifics has yet been demonstrated in apes, and does not think language-trained apes like Kanzi have demonstrated grammatical competence.185 And yet, his textbook on animal cognition (Wynne & Udell 2013) exhibits the same trend as the other textbooks do, and he seems to more-or-less accept the reported evidence concerning a variety of relatively sophisticated cognitive capacities, for example: the ability to count, in Alex the Parrot; the ability to form concepts of individual persons in a natural environment, in northern mockingbirds; the ability of pigeons to discriminate paintings by which school of art produced them (the Monet school vs. the Picasso school); the ability of some birds to modify and/or use tools to retrieve food, without training; the ability of several species to perform transitive inference; the ability of several species to follow human pointing; the teaching of conspecifics by meerkats; and the dog Chaser’s 1000+ word vocabulary of both nouns and verbs.186

Of course, it’s possible that even relatively skeptical ethologists like Wynne are still not skeptical enough. Indeed, I suspect this is generally true, given that the ethology literature seems to suffer from the same problems as medical and social science literatures do (see Appendix Z.8), but there is not yet as much discussion of these problems and how to solve them (in ethology) as there now is in medicine and the social sciences.187 Even still, I suspect a large fraction of past findings (concerning sophisticated animal behavior) — at least, the findings which have persuaded even relatively skeptical ethologists such as Wynne — would be mostly upheld by rigorous replication attempts. I don’t know which findings would be retained, and that makes it difficult for me to fill out my table of PCIFs with much confidence, but I suspect the broad trend would survive, even if (say) 60% of past findings accepted by Wynne failed to replicate when using more rigorous study designs.

If I’m right, and the general trend is real, then I have every reason to think the trend will continue: the more we look, the more we’ll find that a wide variety of animals, including many simple animals, engages in fairly sophisticated behaviors. The question is: Exactly which behaviors will eventually be observed in which taxa, and how indicative of consciousness are those behaviors?

3.4.4 Rampant anthropomorphism

My fourth “big-picture consideration” is this: we humans are nearly-incorrigible anthropomorphizers. We seem to be hard-wired to attribute human-like cognitive traits and emotions to non-humans, including animals, robots, chatbots, inanimate objects, and even simple geometric shapes. Indeed, after extensive study of the behavior of unicellular organisms, the microbiologist H.S. Jennings was convinced that (e.g.) if an amoeba were large enough that humans came into regular contact with it, we would assume it is conscious for the same reasons we instinctively assume a dog is conscious.188 As cognitive scientist Dan Sperber put it, “Attribution of mental states is to humans as echolocation is to the bat.”189

Of course, a general warning about anthropomorphism is no substitute for reading through a great many examples of (false) anthropomorphisms, from Clever Hans onward, which you can find in the sources I list in a footnote.190

Here, I’ll give but one example of flawed anthropomorphism. My sense is that many people, when imagining what it must be like for an animal to undergo some experience, imagine that the animal’s subjective experience is similar to their own, minus various kinds of “sophisticated reasoning,” such as long-term planning, a stream of thoughts in a syntactically advanced language, and occasional use of explicit logical reasoning about the expected results of different possible actions one could take. However, studies of animal behavior and neurology tend to suggest the differences between human and animal experiences are much more profound than this. Consider, for example, studies of “interocular transfer.” Dennett gives an example:191

What is it like to be a rabbit? Well you may think that it’s obvious that rabbits have an inner life that’s something like ours. Well, it turns out that if you put a patch over a rabbit’s left eye and train it in a particular circumstance to be (say) afraid of something, and then you move the patch to the right eye, so that… the very same circumstance that it has been trained to be afraid of [is now] coming in the other eye, you have a naive rabbit [i.e. the rabbit isn’t afraid of the stimulus it had previously learned to be afraid of], because in the rabbit brain the connections that are standard in our brains just aren’t there, there isn’t that unification. What is it like to be which rabbit? The rabbit on the left, or the rabbit on the right? The disunity in a rabbit’s brain is stunning when you think about it….

On the basis of many decades of such counterintuitive studies of animal behavior, I suspect that if there is “something it’s like” to be a rabbit, it is not “roughly like my own subjective experience, minus various kinds of sophisticated reasoning.”192

The biologist and animal welfare advocate Marian Dawkins has expressed something close to my view on anthropomorphism, in her book Why Animals Matter (2012):

Anthropomorphic interpretations may be the first ones to spring to mind and they may, for all we know, be correct. But there are usually other explanations, often many of them, and the real problem with anthropomorphism is that it discourages, or even disparages, a more rigorous exploration of these other explanations. Rampant anthropomorphism threatens the very basis of ethology by substituting anecdotes, loose analogies, and an ‘I just know what the animal is thinking so don’t bother me with science’ attitude to animal behaviour.

…

We need all the scepticism we can muster, precisely because we are all so susceptible to the temptation to anthropomorphize. If we don’t resist this temptation, we risk ending up being seriously wrong.

My guess is that most people anthropomorphize animals far too quickly — including, by attributing consciousness to them193 — and as such, a proper undercutting of these anthropomorphic tendencies should pull one’s views about the distribution of consciousness toward a “consciousness is rare” conclusion, relative to where one’s views were before.

4 Summary of my current thinking about the distribution question

4.1 High-level summary

Below is a high-level summary of my current thinking about the distribution-of-consciousness question (with each point numbered for ease of reference):

  1. Given that we don’t yet have a compelling theory of consciousness, and given that just about any behavior194 could (as far as I know) be accomplished with or without consciousness (consciousness inessentialism), it seems to me that we can’t know which potentially consciousness-indicating features (PCIFs) are actually consciousness-indicating,195 except insofar as we continue to get evidence about how consciousness works from the best source of evidence about consciousness we have: human self-report.
  2. Unfortunately, as far as I can tell, studies of human consciousness haven’t yet confidently identified any particular “substantive” PCIFs as necessary for consciousness, sufficient for consciousness, or strongly indicative of consciousness.
  3. Still, there are some limits to my uncertainty about the distribution question, for reasons I give below.
  4. As far as we know,196 the vast majority of human cognitive processing is unconscious, including a large amount of fairly complex, “sophisticated” processing. This suggests that consciousness is the result of some particular kinds of information processing, not just any information processing.
  5. Assuming a relatively complex account of consciousness, I find it intuitively hard to imagine how (e.g.) the 302 neurons of C. elegans could support cognitive algorithms which instantiate consciousness. However, it is more intuitive to me that the ~100,000 neurons of the Gazami crab might support cognitive algorithms which instantiate consciousness. But I can also imagine it being the case that not even a chimpanzee happens to have the right organization of cognitive processing to have conscious experiences.
  6. Given the uncertainties involved, it is hard for me to justify assigning a “probability of consciousness” lower than 5% to any creature with a neuron count at least a couple orders of magnitude larger than that of C. elegans, and it is hard for me to justify assigning a “probability of consciousness” higher than 95% to any non-human species, including chimpanzees. Indeed, I think I can make a weakly plausible case for (e.g.) Gazami crab consciousness, and I think I can make a weakly plausible case for chimpanzee non-consciousness.
  7. When introspecting about how I was intuitively assigning “probabilities of consciousness” (between 5% and 95%) to various species within (say) the “Gazami crabs to chimpanzees” range, it seemed that the four most important factors influencing my “wild guess” probabilities were:
    1. evolutionary distance from humans (years since last common ancestor),
    2. neuroanatomical similarity with humans (see Appendix E),
    3. apparent cognitive-behavioral “sophistication” (advanced social politics, mirror self-recognition, abstract language capabilities, and some other PCIFs197), and
    4. total “processing power” (neurons, and maybe especially pallial neurons198).
  8. But then, maybe I’m confused about consciousness at a fairly basic level, and consciousness isn’t at all complicated (see Appendix H), as a number of scholars of consciousness currently think. I should give some weight to such views, nearly all of which would imply higher probabilities of consciousness for most animal taxa than more complex accounts of consciousness typically do.

I should say a bit more about the four factors mentioned in (7). Each of these factors provides very weak evidence concerning the distribution question, and can be thought of as providing one component of a four-factor “theory-agnostic estimation process” for the presence of consciousness in some animal.199

The reasoning behind the first two factors is this: Given that I know very little about consciousness beyond the fact that humans have it and that it is implemented by information processing in brains,200 creatures that are more similar to humans, especially in their brains, are (all else equal) more likely to be conscious.

The reasoning behind the third factor is twofold. First: in humans, consciousness seems to be especially (but not exclusively) associated with some of our most “sophisticated” behaviors, such as problem-solving and long-term planning. (For example, we have many cases of apparently unconscious simple nocifensive behaviors, but I am not aware of any cases of unconscious long-term logical planning.) Second, suppose we give each extant theory of consciousness a small bit of consideration. Some theories assume that consciousness requires only some very basic supporting functions (e.g. some neural information processing, a simple body schema, and some sensory inputs), whereas others assume that consciousness requires a fuller suite of supporting functions (e.g. a more complex self-model, long-term memory, and executive control over attentional mechanisms). As a result, the total number of theories which predict consciousness in an animal that exhibits both simple and “sophisticated” behaviors is much greater than the number of theories which predict consciousness in an animal that exhibits only simple behaviors.

The reasoning behind the fourth factor is just that a brain with more total processing power is (all else equal) more likely to be performing a greater variety of computations (some of which might be conscious), and is also more likely to be conscious if consciousness depends on a brain passing some threshold of repeated, recursive, or “integrated” computations.

Here is a table showing how the animals I ranked compare on these factors (according to my own quick, non-expert judgments):

| | Evolutionary distance from humans | Neuroanatomical similarity with humans (see Appendix E) | Apparent cognitive-behavioral sophistication (see PCIFs table) | Total processing power (neurons) |
| --- | --- | --- | --- | --- |
| Humans (for comparison) | 0 | ∞ | Very high | 86 billion201 |
| Chimpanzees | 6.7 Mya | High | High | ~28 billion??202 |
| Cows | 96.5 Mya | Moderate/high | Low | ~10 billion??203 |
| Chickens | 311.9 Mya | Low/moderate | Low | ~221 million |
| Rainbow trout | 453.3 Mya | Low | Low | ~12 million??204 |
| Common fruit flies | 796.6 Mya | Very low | Very low | 0.12 million |
| Gazami crabs | 796.6 Mya | Very low | Very low | 0.1 million??205 |

But let me be clear about my process: I did not decide on some particular combination rule for these four factors, assign values to each factor for each species, and then compute a resulting probability of consciousness for each taxon. Instead, I used my intuitions to generate my probabilities, then reflected on what factors seemed to be affecting my intuitive probabilities, and then filled out this table. However, once I created this table the first time, I continued to reflect on how much I think such weak sources of evidence should be affecting my probabilities, and my probabilities shifted around a bit as a result.
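
To make the flavor of such a four-factor process more concrete, here is a toy sketch (in Python) of what one crude formalization might look like. To be clear, this is not the process I used: as just explained, I generated my probabilities intuitively and only reflected on the factors afterward. The factor scores, weights, and logistic combination rule below are invented purely for illustration, and the numbers it prints are not my stated probabilities.

```python
# Toy illustration only: NOT the process used to generate the probabilities in this report.
# All factor scores, weights, and the combination rule are made up for illustration.
import math

# Hypothetical 0-1 scores for each of the four factors, per taxon
# (higher = more human-like, more "sophisticated," or more processing power).
factor_scores = {
    "chimpanzee":  {"evo_proximity": 0.95, "neuro_similarity": 0.85, "sophistication": 0.80, "processing": 0.70},
    "chicken":     {"evo_proximity": 0.40, "neuro_similarity": 0.35, "sophistication": 0.30, "processing": 0.25},
    "gazami_crab": {"evo_proximity": 0.05, "neuro_similarity": 0.05, "sophistication": 0.05, "processing": 0.02},
}

# Made-up weights; a serious attempt would need to argue for these.
weights = {"evo_proximity": 1.0, "neuro_similarity": 1.5, "sophistication": 1.5, "processing": 1.0}
bias = -3.0  # shifts the curve so that uniformly low scores map well below 50%

def toy_probability(scores):
    """Combine the four factor scores with a logistic squash into a pseudo-probability."""
    z = bias + sum(weights[f] * scores[f] for f in weights)
    return 1.0 / (1.0 + math.exp(-z))

for taxon, scores in factor_scores.items():
    print(f"{taxon}: {toy_probability(scores):.0%}")
```

Even this toy version makes the assumptions explicit: the scores, the weights, and the combination rule would all need to be argued for, which I do not attempt here or elsewhere in this report.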

Given the uncertainties involved, and given how ad-hoc and unjustified the reasoning process described in this section is, and given that consciousness is likely a “fuzzy” concept, it might seem irresponsible or downright silly to say “There’s an X% chance that chickens are ‘conscious,’ a Y% chance that rainbow trout are ‘conscious,’ and a Z% chance that the Tesla Autopilot algorithms are ‘conscious.’”

Nevertheless, I will make some such statements in the next section, for the following reasons:

  • As subjective Bayesians would point out, my ongoing decisions imply that I already implicitly assign (something like) probabilities to consciousness-related or moral patienthood-related statements. I treat rocks differently than fishes, and fishes differently than humans. Also, there are some bets on this topic I would take, and some I would not. For example, given a suitably specified arbiter (e.g. a well-conducted poll of relevant leading scientists, taken 40 years from now), if someone wanted to bet me, at 100-to-1 odds, that no fishes are “conscious” (as determined by a plurality of relevant leading scientists, 40 years from now), I would take the bet — meaning I think there’s better than a 1-in-100 chance that scientists will conclude at least one species of fish is conscious. (The arithmetic behind this is spelled out briefly after this list.)
  • Even if my probabilities have no principled justification, and even if they aren’t straightforwardly action-guiding (see below), putting “made-up” numbers on my beliefs makes it easier for others to productively disagree with my conclusions, and argue against them.
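
To spell out the arithmetic behind the fish bet above, assume for concreteness that accepting the bet means I stake 1 unit to win 100 units if the arbiter rules that at least one fish species is conscious (the stakes are my assumption; the offer above leaves them implicit). Writing p for my probability of that ruling, accepting has positive expected value only when:

```latex
% Worked example. Assumption (not stated above): accepting the bet means staking
% 1 unit to win 100 units if the arbiter rules at least one fish species conscious.
% Let p be my probability of that ruling.
\[
\text{EV}(\text{accept}) = 100\,p - (1 - p) > 0
\quad\Longleftrightarrow\quad
p > \tfrac{1}{101} \approx 1\%.
\]
```

So accepting 100-to-1 odds commits me to thinking the chance is better than roughly 1 in 100, which is all the claim above relies on.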

4.2 My current probabilities

Below, I list some of my current probabilities for the possession of consciousness by normally-functioning adult members of several different animal taxa, and also for the possession of consciousness by an example AI program: DeepMind’s AlphaGo.

I always assigned a lower probability to “consciousness as loosely defined by example above” than I did to “consciousness of a sort I intuitively morally care about.” I suspect the latter (given my moral intuitions) will end up being a slightly (but perhaps not hugely) broader concept than the former, because the former is defined with reference to the human example even though it is typically meant to apply substantially beyond it.

| | Probability of consciousness as loosely defined by example above | Probability of consciousness of a sort I intuitively morally care about |
| --- | --- | --- |
| Humans | >99% | >99% |
| Chimpanzees | 85% | 90% |
| Cows | 75% | 80% |
| Chickens | 70% | 80% |
| Rainbow trout | 60% | 70% |
| Common fruit flies | 10% | 25% |
| Gazami crabs | 7% | 20% |
| AlphaGo206 | <5% | <5% |

Unfortunately, I don’t have much reason to believe my judgments about consciousness are well-calibrated (such that statements I make with 70% confidence turn out to be correct roughly 70% of the time, etc.).207 But then, neither does anybody else, as far as I know.

Please keep in mind that I don’t think this report argues for my stated probabilities. Rather, this report surveys the kinds of evidence and argument that have been brought to bear on the distribution question, reports some of my impressions about those bodies of evidence and argument, and then reports what my own intuitive probabilities seem to be at this time. Below, I try — to a limited degree — to explore why I self-report these probabilities rather than others, but of course, I have limited introspective access to the reasons why my brain has produced these probabilities rather than others.208 I assume the evidence and argument I’ve surveyed here has a substantial impact on my current probabilities, but I do not think my brain even remotely approximates an ideal Bayesian integrator of evidence (at this scale, anyway), and I do not think my brief and shallow survey of such a wide range of complicated evidence (from fields in which I have little to no expertise) argues for the probabilities I’ve given here. Successfully arguing for any set of probabilities about the distribution of consciousness would, I think, require a much larger effort than I have undertaken here.

Also, remember that whichever of these taxa turn out to actually be “conscious,” they could vary by several orders of magnitude in moral weight. In particular, I suspect the arthropods on this list, if they are conscious, might be several orders of magnitude lower in moral weight (given my moral judgments) than (e.g.) most mammals, given the factors listed briefly in Appendix Z.7. (But that is just a hunch; I haven’t yet thought about moral weight much at all.)

4.3 Why these probabilities?

It is difficult to justify, or even to explain, why I give these probabilities and not others, beyond what I’ve already said above. My hope is that the rest of this report gives some insight into why I report these probabilities, but there is no clear weighted combination rule for synthesizing the many different kinds of argument and evidence I survey above, let alone the many considerations that are affecting my judgments but which I did not have time to explain in this report, and (in some cases) that I don’t even know about myself. Nevertheless, I offer a few additional explanatory comments below.

My guess is that to most consciousness-interested laypeople, the most surprising facts about the probabilities I state above will be that my probability of chimpanzee consciousness is so low, and that my probability of Gazami crab consciousness is so high. In a sense, these choices may simply reflect my view that, as Dawkins (2012) put it, “consciousness is harder [to understand] than you think”209 — i.e., that I’m unusually uncertain about my attributions of consciousness, which pulls the probability of consciousness for a wide range of taxa closer to some kind of intuitive “total ignorance prior” around 50%.210

What might experts in animal consciousness think of my probabilities? My guess is that most of them would think that my probabilities are too low, at least for the mammalian taxa, and probably for all the animal taxa I listed except for the arthropods. If that’s right, then my guess is that our disagreements are largely explained by (1) my greater uncertainty about ~all attributions of consciousness, and (2) selection effects on the field of animal consciousness studies. (If you don’t think it’s likely that many animals are conscious, you’re unlikely to devote a large fraction of your career to studying the topic!)211

I should say a bit more about why I might be less confident in “~all attributions of consciousness” than most experts in consciousness are. In part, this may be a result of the fact that, in my experience, I seem to be more skeptical of published scientific findings than most working scientists and philosophers are. Hence, whereas some people are convinced by (for example) the large, diverse, and cohesive body of evidence for global neuronal workspace theory assembled in Dehaene (2014), I read a book like that and think “Based on my experience, I’d guess that many of the cited studies would fail to hold up under attempted replication, or even under close scrutiny of the methods used, and I’m not sure how much that would affect the overall conclusions.”212 I could be wrong, of course, and I haven’t scrutinized the studies cited in Dehaene’s book myself; I’m just making a prediction based on the base rate for how often studies of various kinds fail to “hold up” upon closer inspection (by me), or upon attempted replication. I don’t defend my general skepticism of published studies here, but I list some example sources of my skepticism in Appendix Z.8.

In any case, heightened skepticism of published studies — e.g. studies offered as support for some theory of consciousness, or for the presence of some cognitive or behavioral feature in some animal taxon — will tend to pull one’s views closer to a “total ignorance” prior, relative to the views of someone who takes the published studies more seriously.

What about AlphaGo? My very low probability for AlphaGo consciousness is obviously not informed by most of the reasoning that informs my probabilities for animal species. AlphaGo has no evolutionary continuity with humans, it has no neuroanatomical similarity with humans (except for AlphaGo’s “neurons,” which are similar to human neurons only in a very abstract way), and its level of “cognitive-behavioral sophistication” is essentially “none” except for the very narrow task at which it is specifically programmed to excel (playing Go). Also, unlike with animal brains, I can trace, to a large extent, what AlphaGo is doing, and I don’t think it does anything that looks to me like it could instantiate consciousness (e.g. on an illusionist account of consciousness). Nevertheless, I feel I must admit some non-negligible probability that AlphaGo is conscious, given how many scholars of consciousness endorse views that seem to imply AlphaGo is conscious (see below). Though even if AlphaGo is conscious, it might have negligible moral weight.

4.4 Acting on my probabilities

Should one take action based on such made-up, poorly-justified probabilities? I’m genuinely unsure. There are many different kinds of uncertainty, and I’m not sure how to act given uncertainty of this kind.213 (We hope to write more about this issue in the future.)

4.5 How my mind changed during this investigation

First, a note on how my mind did not change during this investigation. By the time I began this investigation, I had already found persuasive my four key assumptions about the nature of consciousness: physicalism, functionalism, illusionism, and fuzziness. During this investigation I studied the arguments for and against these views more deeply than I had in the past, and came away more convinced of them than I was before. Perhaps that is because the arguments for these views are stronger than the arguments against them, or perhaps it is because I am roughly just as subject to confirmation bias as nearly all people seem to be (including those who, like me, know about confirmation bias and actively try to mitigate it).214 In any case: as you consider how to update your own views based on this report, keep in mind that I began this investigation as a physicalist functionalist illusionist who thought consciousness was likely a very fuzzy concept.

How did my mind change during this investigation? First, during the first few months of this investigation, I raised my probability that a very wide range of animals might be conscious. However, this had more to do with a “negative” discovery than a “positive” one, in the following sense: Before I began this investigation, I hadn’t studied consciousness much, and I held out some hope that there would turn out to be compelling reasons to “draw lines” at certain points in phylogeny, for example between animals which do and don’t have a cortex, and that I could justify a relatively sharp drop in probability of consciousness for species falling “below” those lines. But, as mentioned above, I eventually lost hope that there would (at this time) be compelling arguments for drawing any such lines in phylogeny (short of having a nervous system at all). Hence, my probability of a species being conscious now drops gradually as the values of my “four factors” decrease,215 with no particularly “sharp” drops in probability among creatures with a nervous system.

A few months into the investigation, I began to elicit my own intuitive probabilities about the possession of consciousness by several different animal taxa. I did this to get a sense of how my opinions were changing during the investigation, and perhaps also to harness a single-person “wisdom of crowds” effect (though, I don’t think this worked very well).216 Between July and October of 2016, my probabilities changed very little (see footnote for details217). Then, in January 2017, I finally got around to investigating the arguments for “hidden qualia” (and thus for the plausibility of relatively simple accounts of consciousness; see Appendix H), and this moved my probabilities upward a bit, especially for “consciousness of a sort I intuitively morally care about.”

There are some other things on which my views shifted noticeably as a result of this investigation:

  • During the investigation, I became less optimistic that philosophical arguments of the traditional analytic kind will contribute much to our understanding of the distribution question on the present margin. I see more promise in scientific work — such as the scientific work which would feed into my four-factor “theory-agnostic estimation process” described above, that which could contribute toward progress on theories of consciousness (see Appendix B), and that which can provide the raw data that can be used in arguments about whether specific taxa are conscious (such as those in Tye 2016, chs. 5-9).218 I also see some promise in certain kinds of “non-traditional philosophical work,” such as computational modeling of theories of consciousness, and direct collaborations between philosophers and scientists so that some scientific work can target philosophically important hypotheses as directly as possible.219 Some philosophers are likely well-positioned to do some of this work, regardless of how well it resembles “traditional” philosophical argumentation.
  • During the investigation, it became clear to me that I think too much professional effort is being spent on different schools of thought arguing with each other, and not enough effort spent on schools of thought ignoring each other and making as much progress as they can on their own assumptions to see what those assumptions can lead to. The latter practice seems necessary in order to have much hope of refining one’s views on the central question of this report (“Which beings should we consider to be moral patients?”), and seems neglected relative to the former practice. For example, I would like to see more books and articles similar to Prinz (2012), Dehaene (2014), and Kammerer (2016).220
  • When I began this investigation, I felt fundamentally confused about consciousness in a way that I did not feel confused about many other classically confusing phenomena, such as free will or quantum mechanics. I couldn’t grok how any set of cognitive algorithms could ever “add up to” the phenomenality of phenomenal consciousness, though I assumed, via a “system 2 override” of my dualistic intuitions, that somehow, some set of cognitive algorithms must add up to phenomenal consciousness. Now, having spent so much time trying to both solve and dissolve the perplexities of consciousness, I no longer feel confused about them in that way. Of course, I still don’t know which cognitive systems are conscious, and I don’t know which cognitive-behavioral evidence is most indicative of consciousness, and so on — but the puzzle of consciousness now feels to me more like the puzzle of how different cognitive systems achieve different sorts of long-term hierarchical planning, or the puzzle of how different cognitive systems achieve different sorts of metacognition (see here). This loss of confusion might be mistaken, of course; perhaps I ought to feel more confused about consciousness than I now do!

4.6 Some outputs from this investigation

The first key output from the investigation is my stated set of probabilities, but — as mentioned above — I’m not sure they’re of much value for decision-making at this point.

Another key output of this investigation is a partial map of which activities might give me greater clarity about the distribution of consciousness (see the next section).

A third key output from this investigation is that we decided (months ago) to begin investigating possible grants targeting fish welfare. This is largely due to my failure to find any compelling reason to “draw lines” in phylogeny (see previous section). As such, I could find little justification for suggesting that there is a knowably large difference between the probability of chicken consciousness and the probability of fish consciousness. Furthermore, humans harm and kill many, many more fishes than chickens, and some fish welfare interventions appear to be relatively cheap. (We’ll write more about this later.)

Of course, this decision to investigate possible fish welfare grants could later be shown to have been unwise, even if the Open Philanthropy Project assumes my personal probabilities of consciousness in different taxa, and even if those probabilities don’t change. For example, I have yet to examine other potential criteria for moral patienthood besides consciousness, and I have not yet examined the question of moral weight (see above). The question of moral weight, especially, could eventually undermine the case for fish welfare grants, even if the case for chicken welfare grants remains robust. Nevertheless, and consistent with our strategies of hits-based giving and worldview diversification, we decided to seek opportunities to benefit fishes in case they should be considered moral patients with non-negligible weight.

5 Potential future investigations

5.1 Things I considered doing, but didn’t, due to time constraints

There are many things I considered doing to reduce my own uncertainty about the likely distribution of morally-relevant consciousness, but which I ended up not doing, due to time constraints. I may do some of these things in the future.221 In no particular order:

  • I’d like to speak to consciousness experts about, and think through more thoroughly, which potentially fundable projects seem as though they’d shed the most light on the likely distribution of morally-relevant consciousness.
  • I’d like to get more feedback on this report from long-time “consciousness experts” of various kinds. (So far, the only long-time “consciousness expert” from whom I’ve gotten extensive feedback is David Chalmers.)
  • I’d like to think through more carefully whether my four-factor “theory-agnostic estimation process” described above makes sense given my current state of ignorance, get help from some ethologists and comparative neuroanatomists to improve the “accuracy” of my ratings for “neuroanatomical similarity with humans” and “apparent cognitive-behavioral sophistication,” and explore what my updated factors and ratings suggest about the distribution question.
  • As mentioned elsewhere, I’d like to work with a more experienced programmer to sketch a toy program that I think might be conscious if elaborated, coded fully, and run. Then, I’d like to adjust the details of its programming so that it more closely matches my own first-person data222 and the data gained from others’ self-reports of conscious experience (e.g. in experiments and in brain damage cases), and then check how my intuitions about the program’s moral patienthood respond to various tweaks to its design. I would especially like to think more carefully about algorithms that might instantiate conscious “pain” or “pleasure,” and how they might be dissociated from behavior. We have begun to collaborate with a programmer on such a project, but we’re not sure how much effort we will put into it at this time.
  • I’d like to collect others’ moral intuitions, and their explanations of those intuitions, with respect to many cases I have also considered, possibly including different versions of the MESH: Hero program described here (or something like it).
  • I’d like to check my moral intuitions against many more cases — including those proposed by philosophers,223 and further extensions of the MESH: Hero exercise I started elsewhere — and, when making my moral judgments about each case and each version of the program, expend more effort to more closely approximate the “extreme effort” version of my process for making moral judgments than I did for this report.
  • I’d like to research and make the best case I can for insect consciousness, and also research and make the best case I can for chimpanzee non-consciousness, so as to test my intuition that weakly plausible cases can be made for both hypotheses.224
  • I’d like to more closely examine current popular theories (including computational models) of consciousness, and write down what I do and don’t find to be satisfying about them, in a much more thorough and less hand-waving way than I do in Appendix B. In particular, I’d like to evaluate Dehaene (2014) more closely, as it seems to have been convincing to a decent number of theorists who previously endorsed other theories, e.g. see Carruthers (2017).
  • I’d like to more closely study the potential and limits of current methods for studying consciousness in humans, e.g. the psychometric validity of different self-report schemes,225 interpretations of neuroimaging data,226 and the feasibility of different strategies for making progress toward a satisfying theory of consciousness via triangulation of data coming from “the phenomenological reports of patients, psychological testing at the cognitive/behavioral level, and neurophysiological and neuroanatomical findings.”227
  • I’d like to make the exposition of my views on consciousness and moral patienthood more thoroughly-argued and better-explained, so that others can more easily and productively engage with them.
  • I’d like to more thoroughly investigate the current state of the arguments about what the human unconscious can and can’t do.
  • I’d like to expand on my “big picture considerations” and think through more carefully and thoroughly what I should think about them, and what they imply about the distribution question, and which other “big picture considerations” seem most important (that I haven’t already listed).
  • I’d like to more closely examine the arguments concerning “hidden qualia” (see Appendix H).
  • I’d like to more carefully examine current theories about how consciousness evolved.
  • I’d like to think more about what my intuitions about consciousness suggest about consciousness during early human development, current AI systems, and other potential moral patients besides non-human animals, since this report mostly focused on animals.
  • I’d like to study arguments and evidence about the unity of consciousness more closely.228
  • I’d like to study the arguments for and against illusionism more closely, and consider in more depth how illusionism and other approaches should affect my views on the distribution question.
  • I’d like to think more about “valenced” experience (e.g. pain and pleasure), and how it might interact with “basic” consciousness and behavior.
  • I’d like to get a better sense of the likely robustness / reproducibility of empirical work on human consciousness, given the general concerns I outline in Appendix Z.8.

5.2 Projects that others could conduct

During this investigation, I came to think of progress on the “distribution of morally relevant consciousness” question as occurring on three “fronts”:

  1. Progress on theories of consciousness: If we can arrive at a convincing theory of consciousness, or at least a convincing theory of human consciousness, then we can apply that theory to the distribution question. This is the most obvious way forward, and the way science usually works.
  2. Progress on our best theory-agnostic guess about the distribution of consciousness: Assuming we are several decades away from having a convincing theory of consciousness, what should our “best theory-agnostic guess” about the distribution question be in the meantime? Should it be derived from something like a better-developed version of the four-factor approach I described above? Which other factors should be added, and what is our best guess for the value of each variable in that model? Etc.
  3. Progress on our moral judgments about moral patienthood: How do different judges’ moral intuitions respond to different first-person and third-person cases of possible moral patienthood? Do those intuitions change when people temporarily adopt my approach, after engaging in some training for forecasting accuracy? Do we know enough about how moral intuitions vary over time and in different contexts to say much about which moral intuitions we should see as “legitimate,” and which ones we shouldn’t, when thinking about which beings are moral patients? Etc.

Below are some projects that seem like they’d be useful, organized by the “front” each one is advancing.

5.2.1 Projects related to theories of consciousness

  1. Personally, I’m most optimistic about illusionist theories of consciousness, so I think it could be especially useful for illusionists to gather and discuss how to develop their theories further, especially in collaboration with computer programmers, along the lines described here.
  2. Of course, it could also be useful to engage in similar projects to better develop other types of theories.
  3. It could be useful to produce a large reference work on “the explananda of human consciousness.” (“Explananda” means “things to be explained.”) Each chapter would summarize what is currently known about (self-reported) human conscious experience under various conditions. For example there could be chapters on auto-activation deficit, various sensory agnosias, lucid dreams, absence seizures, pain asymbolia, blindsight, split-brain patients, locked-in syndrome, masking studies on healthy subjects, and many other “natural” or experimentally manipulated conditions. Ideally, each chapter would be co-authored by multiple subject-matter experts, including experts who disagree about the interpretation of the primary studies, and would survey expert disagreement about how to interpret those studies. It might also be best if each chapter explained many of the primary studies in some detail, with frank acknowledgment of their design limitations. This reference work could be updated every 5-10 years, and (I hope) would make it much easier for consciousness theorists to understand the full body of evidence that a successful theory of consciousness should be consistent with, and ideally explain.
  4. It could be useful for a “neutral” party (not an advocate of one of the major theories of consciousness) to summarize each major existing theory of consciousness in a fair but critical way, list the predictions they seem to make (according to the theory as stated, not necessarily according to each theory’s advocates, and with a focus on predictions for which the “ground truth” is not already known but could be tested by future studies), and critically examine how well those predictions match the available data plus data produced by novel experiments. They could also argue for some list of key consciousness explananda, and critically examine how thoroughly and precisely each major theory explains those explananda. Ideally, the author(s) of this synthesis would collaborate with both advocates and critics of these theories to help ensure they are interpreting the theories, and the relevant evidence, accurately.
  5. Given my general study quality concerns (see Appendix Z.8), it could be useful to try to improve study quality standards for the types of studies that are used to support theories of (human) consciousness, for example by organizing a conference attended both by experimentalists studying human consciousness and by experts in study robustness / replicability.

For a higher-level overview of scientific work that can contribute to the development of more satisfying theories of consciousness, see Chalmers.

5.2.2 Projects related to theory-agnostic guesses about the distribution of consciousness

  1. It could be helpful for researchers to collect counts of neurons (especially pallial neurons) for a much wider variety of species, since “processing power” is probably the cheapest of the four factors contributing to my “theory-agnostic estimation process” to collect additional data on — except perhaps for “evolutionary distance from humans,” which is already widely measured. (Note: I’d rather not measure brain masses alone, because neuronal scaling rules vary widely across different taxa.229)
  2. It could be useful for a group of neuroscientists, ethologists, and other relevant experts to collaborate on a large reference book that collects data about a long list of PCIFs in a wide variety of taxa, organized similarly to how Shumaker et al. (2011) organizes different types of animal tool behavior by taxon.230 Ideally, the book would explain how each PCIF and taxon was chosen and defined, fairly characterize any ongoing expert debates about whether those PCIFs should have been chosen and how they should be defined, and also fairly characterize ongoing expert debates about the absence, presence, or scalar value of each PCIF in each taxon. Besides its contribution to “theory-agnostic guesses” about the distribution question, such a book would also make it easier to construct and critique theories of consciousness, by gathering lots of relevant data across disparate fields into one place.
  3. After project (2) is completed, it could be useful for several different consciousness experts to make their own extended arguments about which PCIFs should be considered most strongly consciousness-indicating, and what those conclusions imply about the distribution question.
  4. It could be helpful for someone to write a detailed analysis of the case for and against 3-5 potential (non-obvious and substantive) necessary or sufficient conditions for consciousness, along the lines of (a more thorough version of) my analysis of the case for and against “cortex-required views” above. Two additional potential necessary conditions that could be examined in this way are (1) language and (2) a bilaterally symmetric nervous system.
  5. It could be useful for someone to conduct a high-response-rate survey of a wide variety of “consciousness experts,” asking a variety of questions about phenomenal consciousness, consciousness-derived moral patienthood, and their guesses about the distribution of each.231

5.2.3 Projects related to moral judgments about moral patienthood

  1. It could be useful for several people to more-or-less independently try the “extreme effort” version of my process for making moral judgments, and publish the detailed results of this exercise for dozens of variations on dozens of cases. Ideally, each report would include a summary table of the author’s moral judgments with respect to each variation of each case, as in Beckstead (2013), chs. 4 & 5.
  2. It could be useful for a programmer to do something similar to my incomplete MESH: Hero exercise here, but with a new program written from scratch, and with many more (increasingly complicated) versions of it coded, and with the source code of every version released publicly. Then, the next step could be to gather various “consciousness experts” and moral philosophers at a conference and, over the course of a couple days, have the programmer walk them through how each (progressively more complex) version of the program works, answering questions as needed, with the participants taking a silent electronic survey after each version is explained, so that the consciousness experts and moral philosophers can indicate for each version of the program whether they think it is “conscious,” whether they consider it a moral patient (assuming functionalism), and why. All survey responses could then be published (after being properly anonymized, as desired) and analyzed in various ways. After the conference participants have had several months to digest these results, a follow-up conference could feature public debates about whether specific versions of the program are moral patients or not, and why — again with the programmer present to answer any questions about exactly how the program works. In the event that no non-panpsychist participants think any version of the program is conscious or a moral patient (assuming functionalism), the project could shift to a focus on collecting detailed reasons and intuitions about why no versions of the program are conscious or have moral status, and what changes would be required to (maybe) make some version of the program that is conscious or has moral status.

5.2.4 Additional thoughts on useful projects

One project that seems useful but doesn’t fit into the above categorization is further work addressing the triviality objection to functionalism232 — which in my view may be the most compelling objection to physicalist functionalism I’ve seen — e.g. perhaps via computational complexity theory, as Aaronson (2013) suggests.233

In addition to the specific projects listed above, basic “field-building” work is likely also valuable.234 We’ll make faster progress on the likely distribution of phenomenal consciousness if there are a greater number of skilled researchers devoted to the problem than there are today. So far, the topic has been fairly neglected, though several recent books on the topic235 may begin to help change that. On the “theory of consciousness” front, illusionist approaches seem especially neglected relative to how promising they seem (to me) to be. Efforts on the distribution question and illusionist approaches to consciousness could be expanded via workshops, conferences, post-doctoral positions, etc.

There are also many projects that I would likely suggest as high-priority if I knew more than I do now. I share Daniel Dennett’s intuition that perhaps the most promising path forward on the distribution question is to devise a theory focused on human consciousness — because humans are the taxon for which we can get the strongest evidence about consciousness and its character (self-report) — and then “look and see which features of that account apply to animals, and why.”236 According to that approach, much of the most important work to be done on the distribution of consciousness will take the form of consciousness-related experiments conducted on humans. However, I’m not sure which specific studies I’d most like to see conducted, because I haven’t yet taken the time to deeply familiarize myself with the latest studies and methods of human consciousness research.

It also seems likely that we need fundamental breakthroughs in “tools and techniques” to make truly substantial progress in understanding the mechanisms of consciousness. Consciousness is very likely a phenomenon to be explained at the level of neural networks and the information processes they instantiate, but our current tools are not equipped to probe that level effectively.237 As such, much of the important progress that could be made in the study of consciousness would come not from consciousness-specific work, but from the development of new tools and techniques that are useful for understanding the brain at the level of information processing in neural networks.238

Many of the projects I’ve suggested above are quite difficult. Some of them would require steady, dedicated work from a moderately large team of experts over the course of many years. But, it seems to me the problem of consciousness is worth a lot of work, especially if you share my intuition that it may be the most important criterion for moral patienthood. A good theory of human consciousness could help us understand which animals and computer programs we should morally care about, and what we can do to benefit them. Without such knowledge, it is difficult for altruists to target their limited resources efficiently.

6 Appendices

See here for a brief description of each appendix.

6.1 Appendix A. Elaborating my moral intuitions

In this appendix, I describe my process for making moral judgments, and then report the outputs of that process for some particular cases, so as to further explain “where I’m coming from” on the topic of consciousness and moral patienthood.

6.1.1 Which kinds of consciousness-related processes do I morally care about?

Given my metaethical approach, when I make a “moral judgment” about something (e.g. about which kinds of beings are moral patients), I don’t conceive of myself as perceiving an objective moral truth, or coming to know an objective moral truth via a series of arguments. Nor do I conceive of myself as merely expressing my moral feelings as they stand today. Rather, I conceive of myself as making a conditional forecast about what my values would be if I underwent a certain “idealization” or “extrapolation” procedure (coming to know more true facts, having more time to consider moral arguments, etc.).

This metaethical approach begins to look a bit like something worthy of being called “moral realism” if you are optimistic that all members of a certain broad class of moral reasoners would converge on roughly the same values if all of them underwent a similar extrapolation procedure (one that was designed “sensibly” rather than designed merely to ensure convergence).239 I think there would be some values convergence among moral reasoners, but not enough for me to expect that, say, everyone who reads this report within 5 years of its publication would, upon completing a “sensible” extrapolation procedure, converge on roughly the same values.240

Hence, in sharing my intuitions about moral patients below, I see no way to escape the limitation that they are merely my moral judgments. Nevertheless, I suspect many readers will feel that they have similar but not identical moral intuitions. Moreover, as mentioned earlier, I think that sharing my intuitions about moral patients is an important part of being clear about “where I’m coming from” on consciousness, especially since my moral intuitions no doubt affect my preliminary guesses about the distribution of consciousness even if I do not explicitly refer to my moral intuitions in justifying those guesses.

6.1.2 The “extreme effort” version of my process for making moral judgments

To provide more detail on my (ideal) process for making moral judgments, I provide below a description of the “extreme effort” version of my process for making moral judgments. However, I should note that I very rarely engage all the time-consuming cognitive operations described below when making moral judgments, and I did not engage all of them when making the moral judgments reported in this appendix. Rather, I made those moral judgments after running a small subset of the processes described below — whichever processes intuitively seemed, in the moment and for each given case, as though they were likely to quickly and noticeably improve my approximation of the “extreme effort” process described below.

I expect most readers will want to skip to the next subsection, and not bother to read the bullet-points summary of my “extreme effort” process for making moral judgments described below. Nevertheless, here it is:

  • I try to make the scenario I’m aiming to forecast as concrete as possible, so that my brain is able to treat it as a genuine forecasting challenge, akin to participating in a prediction market or forecasting tournament, rather than as a fantasy about which my brain feels “allowed” to make up whatever story feels nice, or signals my values to others, or achieves something else that isn’t forecasting accuracy.241 In my case, I concretize the extrapolation procedure as one involving a large population of copies of me who learn many true facts, consider many moral arguments, and undergo various other experiences, and then collectively advise me about what I should value and why.242
  • However, I also try to make forecasts I can actually check for accuracy, e.g. about what my moral judgment about various cases will be 2 months in the future.
  • When making these forecasts, I try to draw on the best research I’ve seen concerning how to make accurate estimates and forecasts. For example I try to “think like a fox, not like a hedgehog,” and I’ve engaged in several hours of probability calibration training, and some amount of forecasting training.243
  • Clearly, my current moral intuitions serve as one important source of evidence about what my extrapolated values might be. However, recent findings in moral psychology and related fields lead me to assign more evidential weight to some moral intuitions than to others. More generally, I interpret my current moral intuitions as data generated partly by my moral principles and partly by various “error processes” (e.g. a hard-wired disgust reaction to spiders, which I don’t endorse upon reflection). Doing so allows me to make use of some standard lessons from statistical curve-fitting when thinking about how much evidential weight to assign to particular moral intuitions.244 (A toy sketch of this curve-fitting analogy appears after this list.)
  • As part of forecasting what my extrapolated values might be, I like to consider different processes and contexts that could generate alternate moral intuitions in moral reasoners both similar and dissimilar to my current self, and consider how I feel about the “legitimacy” of those mechanisms as producers of moral intuitions. For example I ask myself questions such as “How might I feel about that practice if I was born into a world for which it was already commonplace?” and “How might I feel about that case if my built-in (and largely unconscious) processes for associative learning and imitative learning had been exposed to different life histories than my own?” and “How might I feel about that case if I had been born in a different century, or a different country, or with a greater propensity for clinical depression?” and “How might a moral reasoner on another planet feel about that case if it belonged to a more strongly r-selected species (compared to humans) but had roughly human-like general reasoning ability?”245
  • Observable patterns in how people’s values change (seemingly) in response to components of my proposed extrapolation procedure (learning more facts, considering moral arguments, etc.) serve as another source of evidence about my extrapolated values. For example, the correlation between aggregate human knowledge and our “expanding circle of moral concern” (Singer 2011) might (very weakly) suggest that, if I continued to learn more true facts, my circle of moral concern would continue to expand. Unfortunately, such correlations are badly confounded, and might not provide much evidence at all with respect to my extrapolated values.246
  • Personal facts about how my own values have evolved as I’ve learned more, considered moral arguments, and so on, serve as yet another source of evidence about my extrapolated values. Of course, these relations are likely confounded as well, and need to be interpreted with care.247
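
To illustrate the curve-fitting analogy mentioned above: if I treat each moral intuition as a noisy observation of my underlying moral principles, and I suspect some intuitions are generated largely by “error processes,” then I can give those intuitions less weight when inferring the underlying pattern, just as one down-weights unreliable observations when fitting a curve. The sketch below (in Python) is purely illustrative; the cases, intuition strengths, and reliability weights are all invented.

```python
# Purely illustrative: intuitions treated as noisy observations of an underlying "principle."
# Points suspected to come mostly from error processes (e.g. a raw disgust reaction)
# get lower weights, so they pull the fitted curve around less.
import numpy as np

# Hypothetical "cases" (x) and the strength of my intuitive reaction to each (y).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 0.9, 2.1, 2.9, 9.0, 5.1])   # the 9.0 looks like an outlier

# Guessed reliability of each intuition (1 = trust fully, 0.1 = mostly error process).
reliability = np.array([1.0, 1.0, 1.0, 1.0, 0.1, 1.0])

# np.polyfit's `w` argument weights each point; unreliable intuitions count for less.
unweighted = np.polyfit(x, y, deg=1)
weighted = np.polyfit(x, y, deg=1, w=reliability)

print("unweighted fit (slope, intercept):", unweighted)
print("weighted fit   (slope, intercept):", weighted)
```

The point is only that the analogy has a standard technical counterpart, not that my intuitions actually come with numerical reliabilities attached.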

6.1.3 My moral judgments about some particular cases

6.1.3.1 My moral judgments about some first-person cases

What, then, do my moral intuitions say about some specific cases? I’ll start with some “first-person” cases that involve my own internal experiences. The next section will discuss some “third-person” cases, which I can only judge “from the outside,” by guessing about what those algorithms might feel like “from the inside.”

The starting point for my moral intuitions is my own phenomenal experience. The reason I don’t want others to suffer is that I know what it feels like when I cut my hand, or when I feel sad, or when my goals are thwarted, and I don’t want others to have experiences like that. Likewise, the reason I want others to flourish is that I know what it feels like when I taste chocolate ice cream, or when I feel euphoric, or when I achieve my goals, and I do want others to have experiences like that.

What if I am injured, or my goals are thwarted, but I don’t have a subjective experience of that? Earlier, I gave the example of injuring myself while playing sports, but not noticing my injury (and its attendant pain) until 5 seconds after the injury occurred, when I exited my flow state. Had such a moment been caught on video, I suspect the video would show that I had been unconsciously favoring my hurt ankle while I continued to chase after the ball, even before I realized I was injured, and before I experienced any pain. So, what if a fish’s experience of nociception is like my “experience” of nociception before exiting the flow state?248 If that’s how it works, then I’m not sure I care about such fish “experiences,” for the same reason I don’t care about my own “experience” of nociception before I exited the flow state. (Of course, I care about the conscious pain that came after, and I care about the conscious experience of sadness at having to sit out the rest of the game as a result of my injury, but I don’t think I care about whatever nociception-related “experience” I had during the 5 seconds before I exited the flow state.)

Next, what if I was conscious, but there was no positive or negative “valence” to any part of my conscious experience? Suppose I was consciously aware of nociceptive signals, but they didn’t bother me at all, as pain asymbolics report.249 Suppose I was similarly aware of sensations that would normally be “positive,” but I didn’t experience them as either positive or negative, but rather experienced them as I experience neutral touch, for example how it feels when my fingers tap away at my keyboard as I write this sentence. Moreover, suppose I had goals, and I had the conscious experience of making plans that I predict would achieve those goals, and I consciously knew when I had achieved or not-achieved those goals, but I didn’t emotionally care whether I achieved them or not, I didn’t feel any happiness or disappointment upon achieving or not-achieving them, and so on. Would I consider such a conscious existence to have moral value? Here again, I’m unsure, but my guess is that I wouldn’t consider such conscious existence to have moral value. If fishes are conscious, but the character of their conscious experience is like this, then I’m not sure I care about fishes. (Keep in mind this is just an illustration: if fishes are conscious at all, then my guess is that they experience at least some nociception as unpleasant pain rather than as an unbothersome signal like the pain asymbolic does.)

This last example is similar to a thought experiment invented by Peter Carruthers, which I consider next.

6.1.3.2 The Phenumb thought experiment

Carruthers (1999) presents an interesting intuition pump concerning consciousness and moral patienthood:

Let us imagine, then, an example of a conscious, language-using, agent — I call him ‘Phenumb’ — who is unusual only in that satisfactions and frustrations of his conscious desires take place without the normal sorts of distinctive phenomenology. So when he achieves a goal he does not experience any warm glow of success, or any feelings of satisfaction. And when he believes that he has failed to achieve a goal, he does not experience any pangs of regret or feelings of depression. Nevertheless, Phenumb has the full range of attitudes characteristic of conscious desire-achievement and desire-frustration. So when Phenumb achieves a goal he often comes to have the conscious belief that his desire has been satisfied, and he knows that the desire itself has been extinguished; moreover, he often believes (and asserts) that it was worthwhile for him to attempt to achieve that goal, and that the goal was a valuable one to have obtained. Similarly, when Phenumb fails to achieve a goal he often comes to believe that his desire has been frustrated, while he knows that the desire itself continues to exist (now in the form of a wish); and he often believes (and asserts) that it would have been worthwhile to achieve that goal, and that something valuable to him has now failed to come about.

Notice that Phenumb is not (or need not be) a zombie. That is, he need not be entirely lacking in phenomenal consciousness. On the contrary, his visual, auditory, and other experiences can have just the same phenomenological richness as our own; and his pains, too, can have felt qualities. What he lacks are just the phenomenal feelings associated with the satisfaction and frustration of desire. Perhaps this is because he is unable to perceive the effects of changed adrenaline levels on his nervous system, or something of the sort.

Is Phenumb an appropriate object of moral concern? I think it is obvious that he is. While it may be hard to imagine what it is like to be Phenumb, we have no difficulty identifying his goals and values, or in determining which of his projects are most important to him — after all, we can ask him! When Phenumb has been struggling to achieve a goal and fails, it seems appropriate to feel sympathy: not for what he now feels — since by hypothesis he feels nothing, or nothing relevant to sympathy — but rather for the intentional state which he now occupies, of dissatisfied desire. Similarly, when Phenumb is engaged in some project which he cannot complete alone, and begs our help, it seems appropriate that we should feel some impulse to assist him: not in order that he might experience any feeling of satisfaction — for we know by hypothesis that he will feel none — but simply that he might achieve a goal which is of importance to him. What the example reveals is that the psychological harmfulness of desire-frustration has nothing (or not much — see the next paragraph) to do with phenomenology, and everything (or almost everything) to do with thwarted agency.

The qualifications just expressed are necessary, because feelings of satisfaction are themselves often welcomed, and feelings of dissatisfaction are themselves usually unwanted. Since the feelings associated with desire-frustration are themselves usually unpleasant, there will, so to speak, be more desire-frustration taking place in a normal person than in Phenumb in any given case. For the normal person will have had frustrated both their world-directed desire and their desire for the absence of unpleasant feelings of dissatisfaction. But it remains true that the most basic, most fundamental, way in which desire-frustration is bad for, or harmful to, the agent has nothing to do with phenomenology.

My initial intuitions agree with Carruthers, but upon reflection, I lean toward thinking that Phenumb is not a moral patient (at least, not via the character of his consciousness), so long as he does not have any sort of “valenced” or “affective” experiences. (Phenumb might, of course, be a moral patient via other criteria.)

Carruthers suggests a reason why some people (like me) might have a different moral intuition about this case than he does:

What emerges from the discussions of this paper is that we may easily fall prey to a cognitive illusion when considering the question of the harmfulness to an agent of non-conscious frustrations of desire. In fact, it is essentially the same cognitive illusion which makes it difficult for people to accept an account of mental-state consciousness which withholds conscious mental states from non-human animals. In both cases the illusion arises because we cannot consciously imagine a mental state which is unconscious and lacking any phenomenology. When we imagine the mental states of non-human animals we are necessarily led to imagine states which are phenomenological; this leads us to assert… that if non-human animals have any mental states at all…, then their mental states must be phenomenological ones. In the same way, when we try to allow the thought of non-phenomenological frustrations of desire to engage our sympathy we initially fail, precisely because any state which we can imagine, to form the content of the sympathy, is necessarily phenomenological; this leads us… to assert that if non-human animals do have only non-conscious mental states, then their states must be lacking in moral significance.

In both cases what goes wrong is that we mistake what is an essential feature of (conscious) imagination for something else — an essential feature of its objects, in the one case (hence claiming that animal mental states must be phenomenological); or for a necessary condition of the appropriateness of activities which normally employ imagination, in the other case (hence claiming that sympathy for non-conscious frustrations is necessarily inappropriate). Once these illusions have been eradicated, we see that there is nothing to stand in the way of the belief that the mental states of non-human animals are non-conscious ones, lacking in phenomenology. And we see that this conclusion is perfectly consistent with according full moral standing to the [non-conscious, according to Carruthers] sufferings and disappointments of non-human animals.

It is interesting to consider the similarities between Carruthers’ fictional Phenumb and the real-life cases of auto-activation deficit (AAD) described in Appendix G. These patients are (as far as we can tell) phenomenally conscious like normal humans are, but — at least during the period of time when their AAD symptoms are most acute — they report having approximately no affect or motivation about anything. For example, one patient “spent many days doing nothing, without initiative or motivation, but without getting bored. The patient described this state as ‘a blank in my mind’” (Laplane et al. 1984).

Several case reports (see Appendix G) describe AAD patients as being capable of playing games if prompted to do so. Suppose we could observe an AAD patient named Joan, an avid chess player. Next, suppose we prompted her to play a game of chess, waited until some point in the midgame, and then asked her why she had made her latest move. To pick a dramatic example, suppose her latest move was to take the opponent’s Queen with her Rook. Given the case reports I’ve read, it sounds as though Joan might very well be able (like Phenumb) to explain why her latest move was instrumentally useful for the goal of checkmating the opponent’s King. Moreover, she might be able to explain that, of course, her goal at the moment is to checkmate the opponent’s King, because that is the win condition for a chess game. But, if asked if she felt (to use Carruthers’ phrase) “a warm glow of success” as a result of taking the opponent’s Queen, it sounds (from the case reports) as though Joan would say she did not feel any such thing.250

Or, suppose Joan had her Queen taken by the opponent’s Rook. If asked, perhaps she could report that this event reduced her chances of checkmating the opponent’s King, and that her goal (for the moment) was still to checkmate the opponent’s King. But, based on the AAD case reports I’ve seen, it seems that she would probably report that she felt no affective pang of disappointment or regret at the fact that her Queen had just been captured. Has anything morally negative happened to Joan?251 My intuitions say “no,” but perhaps Carruthers’ intuitions would say “yes.”

So as to more closely match Joan’s characteristics to Phenumb’s, we might also stipulate that Joan is a pain asymbolic (Grahek 2007) and also, let’s say, a “pleasure asymbolic.” Further, let’s stipulate that we can be absolutely certain Joan cannot recover from her conditions of AAD, pain asymbolia, and pleasure asymbolia. Is there now a moral good realized when Joan, say, wins a chess game or accomplishes some other goal? Part of me wants to say “Yes, of course! She has goals and aversions, and she can talk to you about them.” But upon further reflection, I’m not sure I should endorse those empathic impulses in the very strange case of Joan, and I’m not so sure I should think that moral good or harm is realized when Joan’s goals are realized or frustrated — putting aside her earlier experiences, including whatever events led to her AAD, pain asymbolia, and pleasure asymbolia.

6.1.3.3 My moral judgments about some third-person cases

It is difficult to state my moral intuitions about whether specific (brained) animals are moral patients or not, because I don’t know what their brains are doing. Neuroscientists know many things about how individual neurons work, and they are starting to learn a few things about how certain small populations of neurons work, and they can make some observations about how the brain works at the “macro” scale (e.g. via fMRI), but they don’t yet know which particular algorithms brains use to accomplish their tasks.252

Hence, it is easier to state my moral intuitions about computer programs, especially when I have access to their source code, or at least have a rough sense of how they were coded. (As a functionalist, I believe that the right kind of computer program would be conscious, regardless of whether it was implemented via a brain or brain-like structure or implemented some other way.) In the course of reporting some of my moral intuitions, I will also try to illustrate the problematic vagueness of psychological terms (more on this below).

For example, consider the short program below, written in Python (version 3).253 My hope is that even non-programmers will be able to understand what the code below does, especially with the help of my comments. (In Python code, any text following a # symbol is a “comment,” which means it is there to be read by human readers of the source code, and is completely ignored by the interpreter or compiler program that translates the human-readable source code into bytecode for the computer to run. Thus, comments do not affect how the program runs.)


# increment_my_pain.py
my_pain = 0   # Create a variable called my_pain, store the value 0 in it.
while True:   # Execute the code below, in a continuous loop, forever.
  my_pain += 1    # Increment my_pain by 1.
  print("My current pain level is " + str(my_pain) + ".")   # Print current value of my_pain.

 

If you compile and run this source code, it will continuously increment the value of my_pain by 1, and print the value of my_pain to the screen after each increment, like this:

My current pain level is 1.
My current pain level is 2.
My current pain level is 3.

…and so on, until you kill the process, the computer runs out of memory and crashes, or the process hits some safeguard built into the browser or operating system from which you are running the program.

My moral intuitions are such that I do not care about this program, at all. This program does not experience pain. It does not “experience” anything. There is nothing that it “feels like” to be this program, running on my computer. It has no “phenomenal consciousness.”

To further illustrate why I don’t care about this program, consider running the following program instead:

# increment_my_pleasure.py
my_pleasure = 0
while True:
  my_pleasure += 1
  print("My current pleasure level is " + str(my_pleasure) + ".")

Is the moral value of this program any different than that of increment_my_pain.py? I think not. The compiler doesn’t know what English speakers mean when we use the strings of letters “pleasure” and “pain.” In fact, if I didn’t hard-code the words “pleasure” and “pain” into the variable names and printed strings of each program, the compiler would transform increment_my_pain.py and increment_my_pleasure.py into the exact same bytecode, which would run exactly the same on the same virtual machine.254
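
To check this claim concretely, one can ask Python itself. Below is a minimal sketch (mine, not part of the original programs) that uses the standard-library dis module to disassemble two loops analogous to the ones above; apart from the differing names and string constants, the resulting opcode sequences are identical.

# compare_bytecode.py
import dis   # Standard-library disassembler for CPython bytecode.

# Two loops that differ only in their variable names and printed strings.
pain_src = 'my_pain = 0\nwhile True:\n    my_pain += 1\n    print("pain", my_pain)\n'
pleasure_src = 'my_pleasure = 0\nwhile True:\n    my_pleasure += 1\n    print("pleasure", my_pleasure)\n'

dis.dis(compile(pain_src, "<pain>", "exec"))          # Disassemble the "pain" version.
dis.dis(compile(pleasure_src, "<pleasure>", "exec"))  # Disassemble the "pleasure" version.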

The same points hold true for a similar program using a nonsensical variable name:

# increment_flibbertygibbets.py
flibbertygibbets = 0
while True:
  flibbertygibbets += 1
  print("My current count of flibbertygibbets is " + str(flibbertygibbets) + ".")

While this simple illustration is fairly uninformative and (I hope) uncontroversial, I do think that testing one’s moral intuitions against snippets of source code — or against existing programs for which one has some idea of how they work — is a useful way to make progress on the questions of moral patienthood.255 Most discussions of the criteria for moral patienthood use vague psychological language such as “goals” or “experience,” which can be interpreted in many different ways. In contrast, computer code is precise.

To illustrate how problematic vague psychological language can be when discussing theories of consciousness and moral patienthood, I consider below how some computer programs could be said to qualify as conscious on some (perhaps not very charitable) interpretations of vague terms like “goals.”256 (Hereafter on this point, I’ll just say “moral patienthood,” since it is a common view, and the one temporarily assumed for this report, that consciousness is sufficient for moral patienthood.)

I don’t know whether advocates of these theories would agree that the programs I point to below satisfy their verbal description of their favorite theory. My guess is that in most cases, they wouldn’t think these programs are conscious. But, it’s hard to know for sure, and theories of consciousness could be clarified by pointing to existing programs or snippets of code that do and don’t satisfy various components of these theories.257 Such an exercise would provide a clearer account of theories of consciousness than is possible using vague terms such as “goal” and “self-modeling.”

Consider, for example, the algorithm controlling Mario in the animation below:258

 

[Animation: Mario A* search. Captured by the author from a video by Robin Baumgarten.]

 

In the full video, Mario dodges bullets, avoids falling into pits, runs toward the goal at the end of the level, stomps on the heads of some enemies but “knows” to avoid doing so for other enemies (e.g. ones with spiky shells), kills other enemies by throwing fireballs at them, intelligently “chooses” between many possible paths through the level (indicated by the red lines), and more. Very sophisticated behavior! And yet it is all a consequence of a very simple search algorithm called A* search.

I won’t explain how the A* search algorithm works, but if you take the time to examine it — see Wikipedia’s article on A* search for a general explanation, or GitHub for the source code of this Mario-playing implementation — I suspect you’ll be left with the same intuition I have: that the algorithm controlling Mario has no conscious experience, and is not a moral patient.259 And yet, this Mario-controlling algorithm arguably exhibits many of the features that are often considered to be strong indicators of consciousness.
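
For readers who want a concrete feel for the kind of algorithm involved, here is a minimal, self-contained A* sketch for finding a path on a small grid. It is my own illustration, not the Mario-playing implementation, and all the names in it are mine; the Mario controller applies the same basic idea to the game’s simulated future states rather than to grid tiles.

# a_star_sketch.py
import heapq

def a_star(grid, start, goal):
    """Find a shortest path on a 2D grid of 0s (free tiles) and 1s (walls)."""
    def h(cell):  # Manhattan-distance heuristic: an optimistic estimate of remaining cost.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # Entries: (estimated total cost, cost so far, cell, path).
    visited = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)  # Expand the most promising node first.
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nxt in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if 0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0]) and grid[nxt[0]][nxt[1]] == 0:
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # No path exists.

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # The only route: around the wall via the right-hand column.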

But these are just isolated cases, and there is a more systematic way we can examine our intuitions about moral patients, and explore the problematic vagueness of psychological terms, using computer code — or at least, using a rough description of code that we are confident experienced programmers could figure out how to write. We can start with a simple program, and then gradually add new features to the code, and consult our moral intuitions at each step along the way. That is the exercise I begin (but don’t finish) in the next section. Along the way, it will probably become clearer why I have a “fuzzy” view about consciousness. The next section probably also helps to illustrate what I find unsatisfying about all current theories of consciousness, a topic I discuss in more detail in Appendix B.

6.1.3.4 My moral judgments, illustrated with the help of a simple game

 

[Animated screenshot: MESH: Hero demonstration. Captured by the author while playing MESH: Hero.]

Many years ago, I discovered a series of top-down puzzle games called MESH: Hero. To get to the exit of each tile-based level, you must navigate the Hero character through the level, picking up items (e.g. keys), using those items (e.g. to open doors), avoiding obstacles and enemies (e.g. fire), and interacting with objects (e.g. pushing a slanted mirror in front of a laser so that the laser beam is redirected and burns through an obstacle for you). Each time the player moves the Hero character by one tile, everything else in the game “progresses” one step, too — for example, enemies move forward one step. (See animated screenshot.)

I wrote the code to add some additional interactive objects to the game,260 so I have some idea of how the game works at a source-code level. To illustrate, I’ll describe what happens when the Hero is standing on a tile that is within the blast zone of the Bomb when it explodes. First, a message is passed to check whether the Hero object has a Shield in its inventory. If it does, nothing happens. If the Hero object does not have a Shield, then the Hero object is removed from the level and a new HeroDead object — which looks like the Hero lying down beneath a gravestone — is placed on the same tile.
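
In rough code terms, the logic I just described looks something like the following sketch. This is my own reconstruction for illustration, not the game’s actual source; the class and function names are assumptions.

# bomb_blast_sketch.py

class Hero:
    def __init__(self, inventory=None):
        self.inventory = inventory or []   # e.g. ["Key", "Shield"]

class HeroDead:
    pass                                   # Looks like the Hero lying beneath a gravestone.

def on_bomb_blast(tiles, tile):
    """Handle the Hero standing on a tile inside a Bomb's blast zone."""
    hero = tiles[tile]
    if "Shield" in hero.inventory:
        return                             # Shield in inventory: nothing happens.
    tiles[tile] = HeroDead()               # Otherwise the Hero is replaced by a HeroDead object.

tiles = {(3, 4): Hero(inventory=["Key"])}  # A toy "level": tile coordinates mapped to objects.
on_bomb_blast(tiles, (3, 4))
print(type(tiles[(3, 4)]).__name__)        # Prints "HeroDead".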

Did anything morally bad happen there? I think clearly not, for reasons pretty similar to why I don’t morally care about increment_my_pain.py. But, we can use this simplified setup to talk concretely — including with executable source code, if we want — about what we do and don’t intuitively morally care about.

In MESH: Hero, some enemies’ movements can be predicted using (roughly) Daniel Dennett’s “physical stance” (or perhaps his “design stance”). For example, at each time step (when the player moves), the Creeper — the pink object moving about in the animated screenshot — works like this: (1) If there is no obstacle one tile to the left, move one tile to the left, now facing that direction; (2) if there’s an obstacle to the left but no obstacle one tile straight ahead, move one tile straight ahead; (3) if there are obstacles to the left and straight ahead, but no obstacle one tile to the right, move one tile to the right and face that direction; (4) if there are obstacles to the left, straight ahead, and to the right, but not behind, move one space backward, now facing that direction; (5) if there are obstacles on all sides, do nothing.261 Now: is the Creeper a moral patient? I think not.
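
Spelled out as code, the Creeper rule might look roughly like the sketch below. This is my own reconstruction, not the game’s source; in particular, the left/right rotation convention assumes screen coordinates with y increasing downward.

# creeper_step.py

def creeper_step(x, y, facing, blocked):
    """Return the Creeper's new (x, y, facing) after one time step."""
    dx, dy = facing                                 # Facing is a unit vector, e.g. (0, -1) for "up".
    left = (dy, -dx)                                # One quarter-turn counterclockwise of facing.
    right = (-dy, dx)                               # One quarter-turn clockwise of facing.
    behind = (-dx, -dy)
    for d in (left, facing, right, behind):         # Preference order from rules (1)-(4) above.
        nx, ny = x + d[0], y + d[1]
        if not blocked(nx, ny):
            return nx, ny, d                        # Move one tile and face that direction.
    return x, y, facing                             # Rule (5): obstacles on all sides, do nothing.

# Example: a Creeper facing "up" with a wall on its left moves straight ahead.
print(creeper_step(5, 5, (0, -1), blocked=lambda x, y: (x, y) == (4, 5)))  # -> (5, 4, (0, -1))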

Some other enemies can be predicted using (roughly) Dennett’s “intentional stance.” For example the Worm, in action, looks as though it wants to get to the Hero. (The Worm is the purple moving object in the animated screenshot.) At each time step, the Worm retrieves the current X/Y coordinates of itself and the Hero (in the level’s grid of tiles), then moves one tile closer to the Hero, so long as there isn’t an obstacle in the way. For example, let’s designate columns with letters and rows with numbers, and say that the Worm is on G5 and the Hero is on E3. In this case, the Worm will be facing diagonally toward the Hero, and will try to move to F4 (diagonal moves are allowed). But if there is an obstacle on F4, it will instead try to move one tile “right and forward” (to G4). But if there’s also an obstacle on G4, it will try to move “left and forward” (to F5). And if there are obstacles on all those tiles, it will stay put. Given that the Worm could be said to have a “goal” — to reach the same tile as the Hero — is the Worm a moral patient? My moral judgment is “no.”
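
A rough sketch of the Worm rule, using the column/row coordinates from the example above (columns as x, rows as y), appears below. Again, this is my own reconstruction rather than the game’s source, and the fallback ordering is simply the one the example describes.

# worm_step.py

def sign(n):
    return (n > 0) - (n < 0)

def worm_step(worm, hero, blocked):
    """Return the Worm's new (col, row) after one time step."""
    wx, wy = worm
    hx, hy = hero
    dx, dy = sign(hx - wx), sign(hy - wy)          # One-tile step "toward the Hero".
    candidates = [(wx + dx, wy + dy)]              # First choice: the direct (possibly diagonal) move.
    if dx and dy:                                  # Facing diagonally, as in the G5/E3 example:
        candidates.append((wx, wy + dy))           # ...then "right and forward" (G4 in the example),
        candidates.append((wx + dx, wy))           # ...then "left and forward" (F5 in the example).
    for cell in candidates:
        if not blocked(cell):
            return cell
    return worm                                    # All candidate tiles blocked: stay put.

# The example from the text: Worm on G5 = (7, 5), Hero on E3 = (5, 3), with F4 = (6, 4) blocked.
print(worm_step((7, 5), (5, 3), blocked=lambda cell: cell == (6, 4)))  # -> (7, 4), i.e. G4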

I imagine you have these same intuitions. Now, let’s imagine adding new features to the game, and consider at each step whether our moral intuitions change.262

1. Planning Hero: Imagine the Hero object is programmed to find its own path through the levels. This could essentially work the same way a chess-playing computer does: the Hero object would be programmed with knowledge of how all the objects in the game work, and then it would search all possible “paths” the game would take — including e.g. picking up keys and using them on doors, how each Worm would move in response to each of the Hero’s possible moves, and so on — and find at least one path to the Exit.263 The program could use A* search, or alpha-beta pruning, perhaps with some heuristic improvements. Alternately, the program could use a belief-desire-intention (BDI) architecture to enable its planning.264 Pathfinding or BDI-based versions of the Hero object would be even more tempting to interpret using Dennett’s “intentional stance” than the Worm is. Now is the Hero object a moral patient? (I think not.)

Does this version of the program satisfy any popular accounts of consciousness or moral patienthood? Again, it depends on how we interpret vague psychological terms. For example, Peter Carruthers argues that a being can be morally harmed by “the known or believed frustration of first-order desires,” and he is explicit that this does not require phenomenal consciousness.265 If the Hero object has an explicitly-programmed goal to reach the Exit object, and its (non-conscious) first-order desire to achieve this goal is frustrated (e.g. by obstacle or enemy objects), and the Hero object’s BDI architecture stores the fact that this desire was frustrated as one of its “beliefs,” has the Hero object been harmed in a morally relevant way? I would guess Carruthers thinks the answer is “no,” but why? Why wouldn’t the algorithm I’ve described count as having a belief that its first-order desire was frustrated? How would the program need to be different in order for it to have such a belief?

One might also wonder whether the Hero object in this version of the program satisfies (some interpretations of) the core Kantian criterion for moral patienthood, that of rational agency.266 Given that this Hero object is capable of its own means-end reasoning, is it thus (to some Kantians) an “end in itself,” whose dignity must be respected? Again, I would guess the answer is “no,” but why? What counts as “rational agency,” if not the means-end reasoning of the Hero object described above? What computer program would count as exhibiting “rational agency,” if any?

2. Partially observable environment: Suppose the Hero still uses a pathfinding algorithm to decide its next move, except that instead of having access to the current location and state of every object in the level, it only has access to the location and state of every object “within the Hero’s direct line of sight” — that is, not on the other side of a wall or some other opaque object, relative to the Hero’s position. Now the environment is only “partially observable.” In cases where a path to the Exit is not findable via the objects the Hero can “see,” the Hero object will systematically explore the space (via its modified pathfinding algorithm) until its built-up “knowledge” of the level is complete enough for its pathfinding algorithm to find a path to the Exit. Is the Hero object now a moral patient?

3. Non-discrete movement and collision detection: Suppose that objects in the game “progress” not whenever the Hero moves, but once per second. (The Hero also has one opportunity to move per second.) Moreover, when objects move, they do not “jump” discretely from one tile to the next, but instead their location changes “continuously” (i.e. one pixel at a time; think of a pixel as the smallest possible area in a theory of physics that quantizes area, such as loop quantum gravity) from the center of one tile to the center of the next tile. Let’s say tiles are 1000×1000 pixels (it’s now a very high-resolution game), and since objects move at one tile-width per second, that means they move one pixel per millisecond (ms). Now, instead of objects interacting by checking (at each time step) whether they are located on the same tile as another object, there is instead a collision detection algorithm run by every object to check whether another object has at least one pixel overlapping with one of its own pixels. Each object checks a ten-pixel-deep layer of pixels running around its outermost edge (let’s call this layer the “skin” of each object), each millisecond. So e.g. if the Hero’s collision detection algorithm detects that a pixel on the Hero’s “face” is overlapping with a pixel of a Worm, then the Hero object is removed from the level and replaced with the HeroDead object immediately, without waiting until both the Hero and the Worm have completed their moves to the center of the same tile. Is the Hero object now a moral patient? (I still think not.)

4. Nociception and nociceptive reflexes: Now, suppose we give the Hero object nociceptors. That is: 1/100th of the pixels in the Hero’s “skin” layer are designated as “nociceptors.” Once per ms, the Hero’s CheckNociception() function checks those pixels for collisions with the pixels of other objects, and if it detects such a “collision,” it runs the NociceptiveReflex() function, which moves the Hero “away” from that collision at a speed of 1 pixel per 0.5ms. By “away,” I mean that, for example, if the collision happened in a pre-defined region of the Hero’s skin layer that is sensibly called the “top-right” region, the Hero moves toward the center of the tile that is one tile down and left from the tile that the center of the Hero is currently within. Naturally, the Hero might fail to move in this direction because it detects an obstacle on that tile, in which case it will stay put. Or there might be a Worm or other enemy on that tile. In any case, another new function executed by the Hero object, CheckInjury(), runs a collision check for all pixels “inside” (closer to the center than) the skin layer, and if there are any such collisions detected, the Hero object is replaced with HeroDead. Is the Hero object a moral patient now? (My moral judgment remains “no.”)

5. Health meter: Next, we give the Hero object an integer-type variable called SelfHealth, which initializes at 1000. When it reaches 0, the Hero object is replaced with the HeroDead object. Each collision detection in the Hero’s skin layer reduces the SelfHealth variable by 1, and each collision detection “inside” the Hero’s skin layer reduces the SelfHealth variable by 5. Now is the Hero a moral patient? (I still think “no.”)

6. Nociception sent to a brain: Now, a new sub-object of the Hero, called Brain, is the object that can call the NociceptiveReflex() function. It also runs its own collision detection for a 50×50 box of pixels (the “brain pixels”) in the middle of the Hero’s “head,” and if it detects collisions with other “external” objects (e.g. a Worm) there, SelfHealth immediately goes to 0. Moreover, rather than a single Hero-wide CheckNociception() function checking for pixel collisions at each of the pixels designated “nociceptors,” each nociceptor is instead defined in the game as its own object, and it runs its own collision detection function. If a nociceptor detects a collision, it creates a new object called NociceptiveSignal, which thereafter moves at a speed of 1 pixel per 0.1ms toward the nearest of the “brain pixels.” If the Brain object’s CheckNociception() function detects a collision between a “brain pixel” and a NociceptiveSignal object (instead of with an “external” object like a Worm), then it executes the NociceptiveReflex() function, using data stored in the NociceptiveSignal object to determine which edge of the Hero to move “away” from. (A rough code sketch of this machinery appears just after this list.) Is the Hero object, finally, a moral patient?
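
Here is the rough sketch promised above, pulling together the machinery from steps 4–6: nociceptor collisions produce NociceptiveSignal objects, the Brain triggers the reflex, and a SelfHealth meter is decremented. It is a deliberately simplified illustration of the design I described, not runnable game code for MESH: Hero, and every name and number in it is an assumption of mine.

# hero_nociception_sketch.py

class NociceptiveSignal:
    def __init__(self, region):
        self.region = region                 # Which "skin" region was touched, e.g. "top-right".

class Hero:
    def __init__(self):
        self.self_health = 1000              # Step 5: replaced with HeroDead when this reaches 0.
        self.position = (5, 5)               # Tile coordinates (x increasing right, y increasing down).

class Brain:
    # Step 6: only the Brain may trigger the reflex, in response to arriving signals.
    REFLEX_OFFSETS = {"top-right": (-1, +1), "top-left": (+1, +1),
                      "bottom-right": (-1, -1), "bottom-left": (+1, -1)}

    def check_nociception(self, hero, arriving_signals, blocked):
        for signal in arriving_signals:
            hero.self_health -= 1            # Step 5: each skin-layer collision costs 1 health.
            self.nociceptive_reflex(hero, signal, blocked)

    def nociceptive_reflex(self, hero, signal, blocked):
        dx, dy = self.REFLEX_OFFSETS[signal.region]
        target = (hero.position[0] + dx, hero.position[1] + dy)
        if not blocked(target):              # Step 4: move one tile "away" from the contact, unless obstructed.
            hero.position = target

hero = Hero()
Brain().check_nociception(hero, [NociceptiveSignal("top-right")], blocked=lambda tile: False)
print(hero.position, hero.self_health)       # -> (4, 6) 999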

By now, the program I’m describing seems like it might satisfy several of the criteria that Braithwaite (2010) uses to argue that fishes are conscious, including the presence of nociceptors, the transmission of nociceptive signals to a brain for central processing, the ability to use mental representations, a rudimentary form of self-modeling (e.g. via the SelfHealth variable, and via making plans to navigate the Hero object to the Exit while avoiding events that would cause the Hero object to be replaced with HeroDead), and so on.267 And yet, I don’t think this version of the Hero object is conscious, and I’d guess that Braithwaite would agree. But if this isn’t what Braithwaite means by “nociception,” “mental representations,” and so on, then what does she mean? What program would satisfy one or more of her indicators of consciousness?

I think this exercise can be continued, in tiny steps, until we’ve described a sophisticated 2D Hero object that seems to exhibit many commonly-endorsed criteria for (or indicators of) moral patienthood or consciousness.268 Moreover, such sophisticated Hero objects could not just be described, but (I claim) programmed and run. And yet, when I carry out that exercise (in my head), I typically do not end up having the intuition that any of those versions of the MESH: Hero code — especially those described above — are conscious, or moral patients.

There are, however, two kinds of situations, encountered when continuing this exercise in my head, in which I begin to worry that the program I’m imagining might be a phenomenally conscious moral patient if it was coded and run.

First, I begin to worry about the Hero object’s moral patienthood when the program I’m imagining gets so complicated that I can no longer trace what it’s doing, e.g. if I control the Hero agent using a very large deep reinforcement learning agent that has learned to navigate the game world via millions of play-throughs using only raw pixel data, or if I control the Hero object using a complicated candidate solution discovered via an evolutionary algorithm.269

Second, I begin to worry about the Hero object’s moral patienthood when it begins to look like the details of my own phenomenal experience might be pretty fully captured by how the program I’m imagining works, and thus I start to worry it might be a moral patient precisely because I can trace what it’s doing. My approach assumes that phenomenal consciousness is how a certain kind of algorithm feels “from the inside,”270 and, after some thought, I was able to piece together (in my head) a very rough sketch of a program that, from the outside, looks to me like it might, with some elaboration and careful design beyond what I was able to sketch in my head, feel (from the inside) something like my own phenomenal experience feels to me. (Obviously, this conclusion is very speculative, and I don’t give it much weight, and I don’t make use of it in the rest of this report, but it is also quite different from my earlier state of understanding, under which no theory or algorithm I had read about or considered seemed to me like it might even come close to feeling from the inside like my own phenomenal experience feels to me.)

Unfortunately, it would require a very long report for me to explore and then explain what I think such a program looks like (given my intuitions), so for this report all I’ve done is point to some of the key inspirations for my intuitive, half-baked “maybe-conscious” program (my “GDAK” account described here). In the future, I hope to describe this program in some detail, and then show how my moral intuitions respond to various design tweaks, but we decided this exercise fell beyond the scope of this initial report on moral patienthood.

In any case, I hope I have explained at least a few things about how my moral intuitions work with respect to moral patienthood and consciousness, so that my readers have some sense of “where I’m coming from.”

6.2 Appendix B. Toward a more satisfying theory of consciousness

In this appendix, I describe some popular theories of (human) consciousness, explain the central reason why I find them unsatisfying, and conclude with some thoughts about how a more satisfying theory of consciousness could be constructed. (See also my comments above about Michael Tye’s PANIC theory.)

In short, I think even the most compelling extant theories of consciousness are, in the words of Cohen & Dennett (2011):

…merely the beginning, rather than the end, of the study of consciousness. There is still much work to be done…

Neuroscientist Michael Graziano states the issue more vividly (and less charitably):271

I was in the audience watching a magic show. Per protocol a lady was standing in a tall wooden box, her smiling head sticking out of the top, while the magician stabbed swords through the middle.

A man sitting next to me whispered to his son, “Jimmy, how do you think they do that?”

The boy must have been about six or seven. Refusing to be impressed, he hissed back, “It’s obvious, Dad.”

“Really?” his father said. “You figured it out? What’s the trick?”

“The magician makes it happen that way,” the boy said.

Graziano’s point is that “the magician makes it happen” is not much of an explanation. There is still much work to be done. Current theories of consciousness take a few steps toward explaining the details of our conscious experience, but at some point they end up saying “and then [such-and-such brain process] makes consciousness happen.” And I want to say: “Well, that might be right, but how do those processes make consciousness happen?”272 Or in some cases, a theory of consciousness might not make any attempt to explain some important feature of consciousness, not even at the level of “[such-and-such brain process] makes it happen.”

As I said earlier, I think a successful explanation of consciousness would show how the details of some theory predict, with a fair amount of precision, the explananda of consciousness — i.e., the specific features of consciousness that we know about from our own phenomenal experience and from (reliable, validated) cases of self-reported conscious experience (e.g. in experiments, or in brain lesion studies).

Current theories of consciousness, I think, do not “go far enough” — i.e., they don’t explain enough consciousness explananda, with enough precision — to be compelling (yet).273 Below, I elaborate this issue with respect to three popular theories of consciousness (for illustrative purposes): temporal binding theory, integrated information theory, and global workspace theory.274

It’s possible this “doesn’t go far enough” complaint would be largely accepted by the leading proponents of these theories, because (I would guess) none of them think they have described a “final” theory of consciousness, and (I would guess) all of them would admit there are many details yet to be filled in. This is, after all, a normal way to make progress in science: propose a simple model, use the model to make novel predictions, test those predictions, revise the model in response to experimental results, and so on. Nevertheless, in some cases the leading proponents of these theories write as though they have already put forward a near-final theory of consciousness, and I hope to illustrate below why I think we have “a long way to go,” even if these theories are “on the right track,” and then explain how I think we can do better (with a lot of hard work).

6.2.1 Temporal binding theory

Of the modern theories of consciousness, the first one Graziano (2013) complains about (ch. 1) is Francis Crick and Christof Koch’s temporal binding theory:

[Crick and Koch] suggested that when the electrical signals in the brain oscillate they cause consciousness. The idea… goes something like this: the brain is composed of neurons that pass information among each other. Information is more efficiently linked from one neuron to another, and more efficiently maintained over short periods of time, if the electrical signals of neurons oscillate in synchrony. Therefore, consciousness might be caused by the electrical activity of many neurons oscillating together.

This theory has some plausibility. Maybe neuronal oscillations are a precondition for consciousness. But note that… the hypothesis is not truly an explanation of consciousness. It identifies a magician. Like the Hippocratic account, “The brain does it” (which is probably true)… this modern theory stipulates that “the oscillations in the brain do it.” We still don’t know how. Suppose that neuronal oscillations do actually enhance the reliability of information processing. That is impressive and on recent evidence apparently likely to be true. But by what logic does that enhanced information processing cause the inner experience? Why an inner feeling? Why should information in the brain — no matter how much its signal strength is boosted, improved, maintained, or integrated from brain site to brain site — become associated with any subjective experience at all? Why is it not just information without the add-on of awareness?

I should note that Graziano is too harsh, here. Crick & Koch (“C&K”) make more of an effort to connect the details of their model to the explananda of consciousness than Graziano suggests. There is more to C&K’s account than just “the oscillations in the brain do it.”275 But, in the end, I agree with Graziano that C&K do not “go far enough” with their theory to make it satisfying. As Graziano says elsewhere:

…the theory provides no mechanism that connects neuronal oscillations in the brain to a person being able to say, “Hey, I have a conscious experience!” You couldn’t give the theory to an engineer and have her understand, even in the foggiest way, how one thing leads to the other.

I think this is a good test for theories of consciousness: If you described your theory of consciousness to a team of software engineers, machine learning experts, and roboticists, would they have a good idea of how they might, with several years of work, build a robot that functions according to your theory? And would you expect it to be phenomenally conscious, and (additionally stipulating some reasonable mechanism for forming beliefs or reports) to believe or report itself to have phenomenal consciousness for reasons that are fundamentally traceable to the fact that it is phenomenally conscious?

For a similar attitude toward theories of consciousness, see also the (illusionist-friendly) introductory paragraph of Molyneux (2012):

…Instead of attempting to solve what appears unsolvable, an alternative reaction is to investigate why the problem seems so hard. In this way, Minsky (1965) hoped, we might at least explain why we are confused. Since a good way to explain something is often to build it, a good way to understand our confusion [about consciousness] may be to build a robot that thinks the way we do… I hope to show how, by attempting to build a smart self-reflective machine with intelligence comparable to our own, a robot with its own hard problem, one that resembles the problem of consciousness, may emerge.

6.2.2 Integrated information theory

Another popular theory of consciousness is Integrated Information Theory (IIT), according to which consciousness is equal to a measure of integrated information denoted Φ (“phi”). Oizumi et al. (2014) explains the basics:

Integrated information theory (IIT) approaches the relationship between consciousness and its physical substrate by first identifying the fundamental properties of experience itself: existence, composition, information, integration, and exclusion. IIT then postulates that the physical substrate of consciousness must satisfy these very properties. We develop a detailed mathematical framework in which composition, information, integration, and exclusion are defined precisely and made operational. This allows us to establish to what extent simple systems of mechanisms, such as logic gates or neuron-like elements, can form complexes that can account for the fundamental properties of consciousness. Based on this principled approach, we show that IIT can explain many known facts about consciousness and the brain, leads to specific predictions, and allows us to infer, at least in principle, both the quantity and quality of consciousness for systems whose causal structure is known. For example, we show that some simple systems can be minimally conscious, some complicated systems can be unconscious, and two different systems can be functionally equivalent, yet one is conscious and the other one is not.

I won’t explain IIT any further; see other sources for more detail.276 Instead, let me jump straight to my reservations about IIT.277

I have many objections to IIT, for example that it predicts enormous quantities of consciousness in simple systems for which we have no evidence of consciousness.278 But here, I want to focus on the issue that runs throughout this section: IIT does not predict many consciousness explananda with much precision.

Graziano provides the following example:279

[One way to test IIT] would be to test whether human consciousness fades when integration in the brain is reduced. Tononi emphasizes the case of anesthesia. As a person is anesthetized, integration among the many parts of the brain slowly decreases, and so does consciousness… But even without doing the experiment, we already know what the result must be. As the brain degrades in its function, so does the integration among its various parts and so does the intensity of awareness. But so do most other functions. Even many unconscious processes in the brain depend on integration of information, and will degrade as integration deteriorates.

The underlying difficulty here is… the generality of integrated information. Integrated information is so pervasive and so necessary for almost all complex functions in the brain that the theory is essentially unfalsifiable. Whatever consciousness may be, it depends in some manner on integrated information and decreases as integration in the brain is compromised.

In other words, IIT doesn’t do much to explain why some brain processes are conscious and others are not, since all of them involve integrated information. Indeed, as far as I can tell, IIT proponents think that a great many brain processes typically thought of as paradigm cases of unconscious cognitive processing are in fact conscious, but we are unaware of this.280 In principle, I agree that a well-confirmed theory could make surprising predictions about things we can’t observe (yet, or possibly ever), and that if the theory is well-enough supported then we should take those predictions quite seriously, but I don’t think IIT is so well-confirmed yet. In the meantime, IIT seems unsatisfying to the extent that it fails to predict some fairly important explananda of consciousness, for example that some highly “integrated” cognitive processing is, as far as we know, unconscious.

Moreover, Graziano says, IIT doesn’t do much to explain the reportability of consciousness (in any detail281):

The only objective, physically measurable truth we have about consciousness is that we can, at least sometimes, report that we have it. I can say, “The apple is green,” like a well-regulated wavelength detector, providing no evidence of consciousness; but I can also claim, “I am sentient; I have a conscious experience of green.”

…The integrated information [theory]… is silent on how we get from being conscious to being able to report, “I have a conscious experience.” Yet any serious theory of consciousness must explain the one objective fact that we have about consciousness: that we can, in principle, at least sometimes, report that we have it.

In discussion with colleagues, I have heard the following argument… The brain has highly integrated information. Highly integrated information is (so the theory goes) consciousness. Problem solved. Why do we need a special mechanism to inform the brain about something that it already has? The integrated information is already in there; therefore, the brain should be able to report that it has it.

…[But] the brain contains a lot of items that it can’t report. The brain contains synapses, but nobody can introspect and say, “Yup, those serotonin synapses are particularly itchy today.” The brain regulates the flow of blood through itself, but nobody has cognitive access to that process either. For a brain to be able to report on something, the relevant item can’t merely be present in the brain but must be encoded as information in the form of neural signals that can ultimately inform the speech circuitry.

The integrated information theory of consciousness does not explain how the brain, possessing integrated information (and, therefore, by hypothesis, consciousness), encodes the fact that it has consciousness, so that consciousness can be explicitly acknowledged and reported. One would be able to report, “The apple is green,” like a well-calibrated spectral analysis machine… One would be able to report a great range of information that is indeed integrated. The information is all of a type that a sophisticated visual processing computer, attached to a camera, could decode and report. But there is no proposed mechanism for the brain to arrive at the conclusion, “Hey, green is a conscious experience.” How does the presence of conscious experience get turned into a report?

To get around this difficulty and save the integrated information theory, we would have to postulate that the integrated information that makes up consciousness includes not just information that depicts the apple but also information that depicts what a conscious experience is, what awareness itself is, what it means to experience. The two chunks of information would need to be linked. Then the system would be able to report that it has a conscious experience of the apple…

These examples illustrate (but don’t exhaust) the ways in which IIT doesn’t predict the explananda of consciousness in as much detail as I’d like.

What about global workspace theory?

6.2.3 Global workspace theory

One particularly well-articulated theory of consciousness is Bernard Baars’ Global Workspace Theory (GWT), including variants such as Stanislas Dehaene’s Global Neuronal Workspace Theory (Dehaene 2014), and GWT’s implementation in the LIDA cognitive architecture (Franklin et al. 2012).282

Weisberg (2014), ch. 6, explains the basics of GWT succinctly:283

Perhaps the best developed empirical theory of consciousness is the global workspace view (Baars 1988; 1997). The basic idea is that conscious states are defined by their “promiscuous accessibility,” by being available to the mind in ways that nonconscious states are not. If a state is nonconscious, you just can’t do that much with it. It will operate automatically along relatively fixed lines. However, if the state is conscious, it connects with the rest of our mental lives, allowing for the generation of far more complex behavior. The global workspace (GWS) idea takes this initial insight and develops a psychological theory – one pitched at the level of cognitive science, involving a high-level decomposition of the mind into functional units. The view has also been connected to a range of data in neuroscience, bolstering its plausibility…

So, how does the theory go? First, the GWS view stresses the idea that much of our mental processing occurs modularly. Modules are relatively isolated, “encapsulated” mechanisms devoted to solving limited, “domain-specific” problems. Modules work largely independently from each other and they are not open to “cross talk” coming from outside their focus of operation. A prime example is how the early vision system works to create the 3-D array we consciously experience. Our early-vision modules automatically take cues from the environment and deliver rapid output concerning what’s in front of us. For example, some modules detect edges, some the intersection of lines or “vertices,” some subtle differences in binocular vision, and so on. To work most efficiently, these modules employ built-in assumptions about what we’re likely to see. In this way, they can quickly take an ambiguous cue and deliver a reliable output about what we’re seeing. But this increase in speed leads to the possibility of error when the situation is not as the visual system assumes. In the Müller-Lyer illusion [see right], two lines of the same length look unequal because of either inward- or outward-facing “points” on the end of the lines. And even if we know they’re the same length, because we’ve seen these dang lines hundreds of times, we still consciously see them as unequal. This is because the process of detecting the lines takes the vertices where the points attach to the lines as cues about depth. In the real world, when we see such vertices, we can reliably use them to tell us what’s closer to what. But the Müller-Lyer illusion uses this fact to trick early vision into seeing things incorrectly. The process is modular because it works automatically and it’s immune to correction from our conscious beliefs about the lines.

Modularity is held to be a widespread phenomenon in the mind. Just how widespread is a matter of considerable debate, but most researchers would accept that at least some processes are modular, and early perceptual processes are the best candidates. The idea of the GWS is that the workspace allows us to connect and integrate knowledge from a number of modular systems. This gives us much more flexible control of what we do. And this cross-modular integration would be especially useful to a mind more and more overloaded with modular processes. Hence, we get an evolutionary rationale for the development of a GWS: when modular processing becomes too unwieldy and when the complexity of the tasks we must perform increases, there will be advantages to having a cross-modular GWS.

Items in the global workspace are like things posted on a message board or a public blog. All interested parties can access the information there and act accordingly. They can also alter the info by adding their own input to the workspace. The GWS is also closely connected to short-term working memory. Things held in the workspace can activate working memory, allowing us to keep conscious percepts in mind as we work on problems. Also, the GWS is deeply intertwined with attention. We can activate attention to focus on specific items in the network. But attention can also influence what gets into the workspace in the first place. Things in the network can exert a global “top-down” influence on the rest of the mind, allowing for coordination and control that couldn’t be achieved by modules in isolation. To return to a functionalist way of putting things, if a system does what the GWS does, then the items in that system are conscious. That’s what consciousness amounts to [according to GWT].

[To sum up:] Much mental activity is nonconscious, occurring in low-level modules. However, when modular information is “taken up” by the GWS, it becomes available to a wide range of mental systems, allowing for flexible top-down control. This is the functional mark of consciousness.

I won’t explain GWT any further here; see other sources for more detail.284 Instead, I jump once again to the primary issue that runs throughout this section,285 this time applied to GWT.

To be concrete, I’ll address Dehaene’s neurobiological version of GWT. What, exactly, is a conscious state, according to Dehaene?286

…a conscious state is encoded by the stable activation, for a few tenths of a second, of a subset of active workspace neurons. These neurons are distributed in many brain areas, and they all code for different facets of the same mental representation. Becoming aware of the Mona Lisa involves the joint activation of millions of neurons that care about objects, fragments of meaning, and memories.

During conscious access, thanks to the workspace neurons’ long axons, all these neurons exchange reciprocal messages, in a massively parallel attempt to achieve a coherent and synchronous interpretation. Conscious perception is complete when they converge.

Perhaps Dehaene is right that a conscious state results from the stable activation of workspace neurons that collectively code for all the different facets of that state, which occurs when the messages being passed by these neurons “converge.” But I still want to know: how does merely pooling information into a global workspace, allowing that information to be accessed by diverse cognitive modules, result in a phenomenal experience? Why should this make the brain insist that it is “conscious” of some things and not others? Why does this result in the intuition of an explanatory gap (the “hard problem”)? And so on.

6.2.4 What a more satisfying theory of consciousness could look like

I could make similar comments about many other theories of consciousness, for example the theories which lean heavily on prediction error minimization (Hohwy 2012; Clark 2013), recurrent processing (Lamme 2010), higher-order representations (Carruthers 2016), and “multiple drafts” (Dennett 1991). In all these cases, my concern is not so much that they are wrong (though they may be), but instead that they don’t “go far enough.”

In fact, I think it’s plausible that several of these theories say something important about how various brain functions work, including brain functions that are critical to conscious experience (in humans, at least). Indeed, on my view, it is quite plausibly the case that consciousness depends on integrated information and higher-order representations.287 And it would not surprise me if human consciousness also depends on prediction error minimization, recurrent processing, “multiple drafts,” and a global workspace. The problem is just that none of these ideas, or even all of these ideas combined, seem sufficient to explain, with a decent amount of precision, most of the key features of consciousness we know about.

Graziano’s own “attention schema theory” (described in a footnote288) has this problem, too, but (in my opinion) it “goes further” than most theories do (though not by much).289 In fact, it does so in part by assuming that integrated information, higher-order representations, a global workspace, and some features of Dennett’s “multiple drafts” account do play a role in consciousness, and then Graziano adds some details to that foundation, to construct a theory of consciousness which (in my opinion) explains the explananda of consciousness a bit more thoroughly, and with a bit more precision, than any of those earlier theories do on their own.

Note that I likely have this opinion of Graziano’s theory largely because it offers an (illusionist) explanation of our dualist intuitions, and our dualist intuitions constitute one explanandum of consciousness that, as far as I can tell, the theories I briefly surveyed above (temporal binding theory, IIT, GWT) don’t do much to explain.

Furthermore, I can think of ways to supplement Graziano’s theory with additional details that explain some additional consciousness explananda beyond what Graziano’s theory (as currently stated) can explain. For example, Graziano doesn’t say much about the ineffability of qualia, but I think a generalization of Gary Drescher’s “qualia as gensyms” account,290 plus the usual points about how the fine-grained details of our percepts “overflow” the concepts we might use to describe those percepts,291 explain that explanandum pretty well, and could be added to Graziano’s account. Graziano also doesn’t explain why we have the conviction that qualia cannot be “just” brain processes and nothing more, but intuitively it seems to me that an inference algorithm inspired by Armstrong (1968) might explain that conviction pretty well.292 But why do we find it so hard to even make sense of the hypothesis of illusionism about consciousness, even though we don’t have trouble understanding how other kinds of illusions could be illusions? Perhaps an algorithm inspired by Kammerer (2016) could instantiate this feature of human consciousness.293

And so on. I take this to be the sort of work that Marinsek & Gazzaniga (2016) call for in response to Frankish (2016b)’s defense of illusionism about consciousness:

One major limitation of [illusionism as described by Frankish] is that it does not offer any mechanisms for how the illusion of phenomenal feelings works. As anyone who has seen a magic trick knows, it’s quite easy to say that the trick is an illusion and not the result of magical forces. It is much, much harder to explain how the illusion was created. Illusionism can be a useful theory if mechanisms are put forth that explain how the brain creates an illusion of phenomenal feelings…

…phenomenal consciousness may not be the product of one grand illusion. Instead, phenomenal consciousness may be the result of multiple ‘modular illusions’. That is, different phenomenal feelings may arise from the limitations or distortions of different cognitive modules or networks… Illusionism therefore may not have to account for one grand illusion, but for many ‘modular illusions’ that each have their own neural mechanisms.

If I were a career consciousness theorist, I think this is how I would try to make progress toward a theory of consciousness, given my current intuitions about what is most likely to be successful:

  1. First, I’d write some “toy programs” that instantiate some of the key aspects of a Graziano / Drescher / Armstrong / Kammerer (GDAK) account of consciousness.294
  2. If step (1) seemed productive, I’d consider taking on the more ambitious project of working with a team of software engineers and machine learning experts to code a GDAK-inspired cognitive architecture295 for controlling an agent in a simple virtual 3D world. We’d share the source code, and we’d write an explanation of how we think it explains, with some precision, many of the key explananda of consciousness.
  3. We’d think about which features of our own everyday internal experiences, including our felt confusions about consciousness, don’t yet seem to be captured by the cognitive architecture we’ve coded, and we’d try to find ways to add those features to the cognitive architecture, and then explain how we think our additions to the cognitive architecture capture those additional features of consciousness.
  4. We’d do the same thing for additional consciousness explananda drawn not from our own internal experiences, but from (reliable, validated) self-reports from others, e.g. from experimental studies and from brain lesion cases.296
  5. We’d invite others to explain why they don’t think this cognitive architecture captures the explananda we claim it captures, and which of the most important remaining explananda are still not captured by the architecture, and we’d try to modify and extend the cognitive architecture accordingly, and then explain why we think those modifications are successful.
  6. We’d use the latest version of the cognitive architecture to make novel predictions about what human subjects will self-report under various experimental conditions if their consciousness is similar in the right ways to our cognitive architecture, and then test those predictions, and modify the cognitive architecture in response to the experimental results.
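
To make step (1) more concrete, here is a minimal, hypothetical sketch (in Python) of the kind of “toy program” I have in mind. It is loosely inspired by the attention-schema component of a GDAK-style account: the agent’s introspective reports are computed from a simplified model of its own attention that omits mechanistic detail. Every class, name, and detail below is my own illustrative assumption, not anything proposed by Graziano, Drescher, Armstrong, or Kammerer.

```python
# A toy "attention schema" agent. The underlying attention process selects
# the most salient stimulus; the schema summarizes *that* something is
# attended but omits *how*, so introspective reports (computed from the
# schema alone) describe awareness without any mechanistic detail, a crude
# analogue of the dualist intuitions an illusionist account tries to explain.
# Everything here is an illustrative assumption, not a published model.

from dataclasses import dataclass


@dataclass
class Stimulus:
    name: str
    salience: float  # bottom-up signal strength


class ToyAttentionSchemaAgent:
    def __init__(self):
        self.attended = None  # output of the underlying attention process
        self.schema = {}      # the agent's simplified model of that process

    def attend(self, stimuli):
        """Underlying process: competitive selection by salience."""
        self.attended = max(stimuli, key=lambda s: s.salience)
        # The schema records that something is attended, but not how:
        # no saliences, no competition, no neurons.
        self.schema = {
            "aware_of": self.attended.name,
            "has_mechanistic_parts": False,  # the schema omits its own implementation
        }
        return self.attended

    def introspect(self, question):
        """Introspective reports are computed from the schema alone."""
        if question == "what are you aware of?":
            return f"I am aware of {self.schema['aware_of']}."
        if question == "is your awareness a physical process?":
            # Because the schema contains no mechanistic information, the
            # agent reports its awareness as something over and above
            # physical parts.
            return "It doesn't seem like a mere physical process to me."
        return "I don't know."


if __name__ == "__main__":
    agent = ToyAttentionSchemaAgent()
    agent.attend([Stimulus("a red circle", 0.9), Stimulus("a gray square", 0.3)])
    print(agent.introspect("what are you aware of?"))
    print(agent.introspect("is your awareness a physical process?"))
```

Run as a script, the agent reports awareness of the most salient stimulus and denies that its “awareness” is a mere physical process, because the schema it introspects on contains no mechanistic information. Obviously this captures almost nothing of the explananda discussed above; the point is only to illustrate the general shape of the exercise.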

One caveat to all this is that I’m not sure the cognitive architecture could ever actually be run, since some parts of the code would have to be left as “black boxes” that we don’t know how to code. Coding a virtual agent that really acted like a conscious human, including in its generated speech about qualia, might be an AI-complete problem. However, the hope would be that the incomplete parts of the code wouldn’t be specific to consciousness, but would instead concern other capacities, such as general-purpose learning. As a result, the predictions generated from the cognitive architecture couldn’t be directly computed, but would instead need to be argued for, as in usual scientific practice.297

Perhaps this process sounds like a lot of work. Surely, it is. But it does not seem impossible. In fact, it is not too dissimilar from the process Bernard Baars, Stan Franklin, and others have used to implement global workspace theory in the LIDA cognitive architecture.

6.3 Appendix C. Evidence concerning unconscious vision

In this appendix, I summarize much of the evidence cited in favor of the theory that human visual processing occurs in multiple streams, only one of which leads to conscious visual experience, as described briefly in an earlier section. To simplify the exposition, I present here only the positive case for this theory, even though there is also substantial evidence that challenges the theory (see below), and thus I think we should only assign it (or something like it) moderate credence.

My primary source for most of what follows is Goodale & Milner (2013).298 (Hereafter, I refer to Goodale & Milner as “G&M,” and I refer to their 2013 book as “G&M-13.”)

6.3.1 Multiple vision systems in simpler animals

First, consider “vision for action” in organisms much simpler than humans:299

A single-cell organism like the Euglena, which lives in ponds and uses light as a source of energy, changes its pattern of swimming according to the different levels of illumination it encounters in its watery world. Such behavior keeps Euglena in regions of the pond where an important resource, sunlight, is available. But although this behavior is controlled by light, no one would seriously argue that the Euglena “sees” the light or that it has some sort of internal model of the outside world. The simplest and most obvious way to understand this behavior is that it works as a simple reflex, translating light levels into changes in the rate and direction of swimming. Of course, a mechanism of this sort, although activated by light, is far less complicated than the visual systems of multicellular organisms. But even in complex organisms like vertebrates, many aspects of vision can be understood entirely as systems for controlling movement, without reference to perceptual experience or to any general-purpose representation of the outside world.

Vertebrates have a broad range of different visually guided behaviors. What is surprising is that these different patterns of activity are governed by quite independent visual control systems. The neurobiologist, David Ingle, for example, showed during the 1970s that when frogs catch prey they use a quite separate visuomotor “module” from the one that guides them around visual obstacles blocking their path [Ingle (1973)]. These modules run on parallel tracks from the eye right through the brain to the motor output systems that execute the behavior. Ingle demonstrated the existence of these modules by taking advantage of the fact that nerves… in the frog’s brain, unlike those in the mammalian brain, can regenerate new connections when damaged. In his experiments, he was able to “rewire” the visuomotor module for prey catching by first removing a structure called the optic tectum on one side. The optic nerves that brought information from the eye to the optic tectum on the damaged side of the brain were severed by this surgery. A few weeks later, however, the cut nerves re-grew, but finding their normal destination missing, crossed back over and connected with the remaining optic tectum on the other side of the brain. As a result, when these “rewired” frogs were later tested with artificial prey objects, they turned and snapped their tongue to catch the prey — but in the opposite direction… This “mirror-image” behavior reflected the fact that the prey-catching system in these frogs was now wired up the wrong way around.

But this did not mean that their entire visual world was reversed. When Ingle tested the same frogs’ ability to jump around a barrier blocking their route, their movements remained quite normal, even when the edge of the barrier was located in the same part of space where they made prey-catching errors… It was as though the frogs saw the world correctly when skirting around a barrier, but saw the world mirror-imaged when snapping at prey. In fact, Ingle discovered that the optic nerves were still hooked up normally to a separate “obstacle avoidance module” in a part of the brain quite separate from the optic tectum. This part of the brain, which sits just in front of optic tectum, is called the pretectum. Ingle was subsequently able to selectively rewire the pretectum itself in another group of frogs. These animals jumped right into an obstacle placed in front of them instead of avoiding it, yet still continued to show normal prey catching.

So what did these rewired frogs “see”? There is no sensible answer to this. The question only makes sense if you believe that the brain has a single visual representation of the outside world that governs all of an animal’s behavior. Ingle’s experiments reveal that this cannot possibly be true. Once you accept that there are separate visuomotor modules in the brain of the frog, the puzzle disappears. We now know that there are at least five separate visuomotor modules in the brains of frogs and toads, each looking after a different kind of visually guided behavior and each having distinct input and output pathways. Obviously the outputs of these different modules have to be coordinated, but in no sense are they all guided by a single visual representation of the world residing somewhere in the frog’s brain.

The same kind of visuomotor “modularity” exists in mammals as well. Evidence for this can be seen even in the anatomy of the visual system. …[The neurons] in the retina send information (via the optic nerve) directly to a number of different sites in the brain. Each of these brain structures in turn gives rise to a distinctive set of outgoing connections. The existence of these separate input–output lines in the mammalian brain suggests that they may each be responsible for controlling a different kind of behavior — in much the same way as they are in the frog. The mammalian brain is more complex than that of the frog, but the same principles of modularity still seem to apply. In rats and gerbils, for example, orientation movements of the head and eyes toward morsels of food are governed by brain circuits that are quite separate from those dealing with obstacles that need to be avoided while the animal is running around. In fact, each of these brain circuits in the mammal shares a common ancestor with the circuits we have already mentioned in frogs and toads. For example, the circuit controlling orientation movements of the head and eyes in rats and gerbils involves the optic tectum (or “superior colliculus” as it is called in mammals), the same structure in the frog that controls turning and snapping the tongue at flies.

The fact that each part of the animal’s behavioral repertoire has its own separate visual control system refutes the common assumption that all behavior is controlled by a single, general-purpose representation of the visual world. Instead, it seems, vision evolved, not as a single system that allowed organisms to “see” the world, but as an expanding collection of relatively independent visuomotor modules.

According to G&M, at least, “vision for action” systems seem to be primary in most animals, while “vision for perception” systems are either absent entirely or much less developed than what we observe in primates:300

…vision in vertebrates evolved in response to the demands of motor output, not for perceptual experience. Even with the evolution of the cerebral cortex this remained true, and in mammals such as rodents the major emphasis of cortical visual processing still appears to be on the control of navigation, prey catching, obstacle avoidance, and predator detection [Dean (1990)]. It is probably not until the evolution of the primates, at a late stage of phylogenetic history, that we see the arrival on the scene of fully developed mechanisms for perceptual representation. The transformations of visual input required for perception would often be quite different from those required for the control of action. They evolved, we assume, as mediators between identifiable visual patterns and flexible responses to those patterns based on higher cognitive processing.

6.3.2 Two vision systems in primates

Given the evidence for multiple, largely independent vision systems in simpler animals, it should be no surprise that primates, too, have multiple, largely independent vision systems.

The direct evidence for two (mostly) functionally and anatomically distinct vision systems in the primate brain — one serving “vision for action” and the other serving “vision for perception” — comes from several sources, including:

  1. Lesion studies in humans and monkeys.
  2. Dissociation studies in healthy humans and monkeys.
  3. Single-neuron recordings, mostly in monkeys.
  4. Brain imaging studies.
  5. Studies which induce “temporary lesions” via transcranial magnetic stimulation (TMS).

Below, I summarize some of this evidence.

6.3.3 Visual form agnosia in Dee Fletcher

Let’s start with G&M’s most famous lesion patient, Dee Fletcher.301 In February 1988, Dee collapsed into a coma as a result of carbon monoxide poisoning caused by an improperly vented water heater in her home. Fortunately, her partner Carlo soon arrived home and rushed her to the hospital.

After a few days of recovery, it became clear that Dee’s vision was impaired. She could see colors and surface textures (e.g. the tiny hairs on someone’s hand), but she couldn’t recognize shapes, objects, or people unless (1) she could identify them via another sense (e.g. hearing someone’s voice, or touching a hand), or unless (2) she could guess the object or person’s identity with color and surface texture information alone, for example if a close friend visited her while wearing a distinctively blue sweater.

Gabor grating
Image © Oxford University Press.302 Used by permission of Oxford University Press.

This was confirmed in formal testing. For example, she performed just as well “as a normally-sighted person in detecting a circular ‘Gabor’ patch of closely spaced fine lines on a background that had the same average brightness” (see right), but she had no idea whether the lines were horizontal or vertical. Hence, it wasn’t that her vision was a blur. She could see detail. She just couldn’t see edges and outlines that would allow her to identify shapes, objects, and people.

When G&M showed Dee a flashlight made of shiny metal and red plastic, she said: “It’s made of aluminium. It’s got red plastic on it. Is it some sort of kitchen utensil?” Given that she could see only the object’s surface colors and texture, not its shape, this was a sensible guess, since many kitchen tools are made of metal and plastic. As soon as G&M placed the flashlight in her hand, she immediately recognized it as a flashlight.303

Dee often had trouble separating an object from the background. According to her, objects seemed to “run into each other,” such that “two adjacent objects of similar color, such as a knife and fork, will often look to her like a single entity.”

 

edges of objects
Illustration by the Open Philanthropy Project.304

G&M showed Dee shapes whose edges were defined in four different ways: by color contrast, by differences in luminance, by differences in texture, and by way of some dots remaining still while others moved (see left). In none of these cases was she able to reliably detect objects or shapes, though she could report the colors accurately.

G&M also tested Dee on “Efron shapes,” a series of rectangles that differ in shape but not in total surface area. For each round of the test, Dee was shown a pair of these shapes and asked to say whether they were the same or different. G&M-13 reports:

 

When we used any of the three rectangles that were most similar to the square, she performed at chance level. She sometimes even made mistakes when we used the most elongated rectangle, despite taking a long time to decide. Under each rectangle [in the image below] is the number of correct judgments (out of 20) that Dee made in a test run with that particular rectangle.

 

Efron shapes
Illustration by the Open Philanthropy Project.305

 

Dee’s problem is not that she struggles to verbally name shapes or objects, nor is it a deficit in remembering what common objects look like. G&M-13 reports:

Dee has great difficulties in copying drawings of common objects or geometric shapes [see image below]. Some brain-damaged patients who are unable to identify pictures of objects can still slavishly copy what they see, line by line, and produce something recognizable. But Dee can’t even pick out the individual edges and contours that make up a picture in order to copy them. Presumably, unlike those other patients, Dee’s problem is not one of interpreting a picture that she sees clearly — her problem is that she can’t see the shapes in the picture to start with.

 

Dee model, copy, memory
Image © Oxford University Press.306 Used by permission of Oxford University Press.

Dee couldn’t recognize any of the drawings in the left-most column above. When she tried to copy those objects (middle column), she could incorporate some elements of the drawing (such as the small dots representing text), but her overall copies were unrecognizable. However, when asked to draw objects from memories she formed before her accident (right-most column), she did just fine, except that when she lifted her pencil, she sometimes put it back down in the wrong place (presumably because she couldn’t see shapes and edges even as she was drawing them). When she was later shown the objects she had drawn from memory, she couldn’t identify them.

Dee’s ability to draw objects from memory suggests that she can see things “in her mind’s eye” just fine. So do her correct responses to queries like this: “Think of the capital letter D; now imagine that it has been rotated flat-side down; now put it on top of the capital letter V; what does it look like?” Most people say “an ice cream cone,” and so does Dee.

Dee also still dreams normally:

[Dee] still sometimes reports experiencing a full visual world in her dreams, as rich in people, objects, and scenes as her dreams used to be before the accident. Waking up from dreams like this, especially in the early years, was a depressing experience for her. Remembering her dream as she gazed around [her now edgeless, shapeless, object-less] bedroom, she was cruelly reminded of the visual world she had lost.

However, despite her severe deficits in identifying shapes, objects, and people, Dee displayed a nearly normal ability to walk around in her environment and use her hands to pick things up and interact with them. G&M report the moment they realized just how striking the difference was between Dee’s ability to recognize objects and her ability to interact with them:307

[In the summer of 1988] we were showing [Dee] various everyday objects to see whether she could recognize them, without allowing her to feel what they were. When we held up a pencil, we were not surprised that she couldn’t tell us what it was, even though she could tell us it was yellow. In fact, she had no idea whether we were holding it horizontally or vertically. But then something quite extraordinary happened. Before we knew it, Dee had reached out and taken the pencil, presumably to examine it more closely… After a few moments, it dawned on us what an amazing event we had just witnessed. By performing this simple everyday act she had revealed a side to her vision which, until that moment, we had never suspected was there. Dee’s movements had been quick and perfectly coordinated, showing none of the clumsiness or fumbling that one might have expected in someone whose vision was as poor as hers. To have grasped the pencil in this skillful way, she must have turned her wrist “in flight” so that her fingers and thumb were well positioned in readiness for grasping the pencil — just like a fully sighted person. Yet it was no fluke: when we took the pencil back and asked her to do it again, she always grabbed it perfectly, no matter whether we held the pencil horizontally, vertically, or obliquely.

How could Dee do this? She had to be using vision; a blind person couldn’t have grabbed the pencil so effortlessly. But she couldn’t have been using her conscious visual experience, either, as her conscious visual experience didn’t include any information about the rotation of the pencil or its exact shape.

 

matching and posting
Image © Oxford University Press.308 Used by permission of Oxford University Press.

G&M soon put this difference to a more formal test. They built a simple mailbox-like slot that could be rotated to any angle (while Dee closed her eyes), and then they gave Dee a thin card to “post” into the slot. When asked to “post” the card, she had no difficulty. However, when she was asked to merely turn the card so that it matched the orientation of the slot, without reaching toward the slot, she performed no better than chance.309 She couldn’t consciously see the orientation of the slot, but nevertheless when posting the card into the slot, she had no trouble rotating the card properly so that it went into the slot. The diagrams on the right310 show Dee’s performance relative to healthy control subjects, with the “correct” orientation always shown as vertical even though the slot was rotated to many different orientations. Video showed that when posting the card, Dee rotated it well before reaching the slot — clearly, a visually-guided behavior, even if it wasn’t guided by conscious vision.

 

maximum grip aperture
Illustration by the Open Philanthropy Project.311

G&M also tested Dee’s grasping movements. When a healthy subject is asked to reach out and grab an object on a table, they open their fingers and thumb as soon as their hand leaves the table. About 75% of the way to the object, the gap between fingers and thumb is as wide as it gets — the “maximum grip aperture” (MGA). Thereafter, they begin to close their fingers and thumb so that a good grasp is achieved (see right). The MGA is always larger than the width of the target object, but the two are related: the bigger the object, the bigger the MGA.

G&M tested Dee’s grasping behavior using some 3D wooden blocks they called “Efron blocks,” because they were modeled after the Efron shapes (again, with the same surface area but different dimensions). As expected, her grasping motions showed the same mid-flight grip scaling as those of healthy controls, and she grasped the Efron blocks just as smoothly as anyone else. She performed just fine regardless of the orientation of the Efron blocks, and she effortlessly rotated her wrist to grasp them width-wise rather than length-wise (just like healthy subjects).312 She did this despite the fact that she performed very poorly when asked to distinguish the blocks when they were presented as pairs, and despite the fact that she could not show G&M how wide each block was by looking at it and then using her fingers and thumb to indicate its width. When asked to estimate, with her thumb and forefinger, the width of a familiar object stored in her memory, such as a golf ball, she did fine.

G&M also tested Dee on “Blake shapes,” a set of pebble-like objects that are smooth and rounded but irregular in shape, and thus are stably grasped at some points but not others. Again, Dee could reach out and grasp these objects just as well as healthy controls, even though she was unable to say whether pairs of the Blake shapes were the same or different.

G&M also tested Dee’s ability to navigate obstacles. They visited a laboratory in which obstacles of various heights could be placed along a path, and sophisticated equipment could precisely measure the adjustments people made to their gait to step over the obstacles. Once again, Dee performed just like healthy subjects, stepping confidently over the obstacles without tripping, just barely clearing them (again, like healthy subjects). However, when asked to estimate the height of these obstacles, she performed terribly.

In short, as G&M-13 puts it:

The most amazing thing about Dee is that she is able to use visual properties of objects such as their orientation, size, and shape, to guide a range of skilled actions — despite having no conscious awareness of those same visual properties. This… indicates that some parts of the brain (which we have good reason to believe are badly damaged in Dee) play a critical role in giving us visual awareness of the world while other parts (relatively undamaged in her) are more concerned with the immediate visual control of skilled actions.

Dee’s condition is now known as “visual form agnosia” (an inability to see “forms” or shapes), and a few other cases besides Dee’s have been reported.313

6.3.4 Optic ataxia

The case of Dee Fletcher raises the question: are there patients with the “opposite” condition, such that they can recognize shapes, objects, and people just fine, but have difficulty with visually-guided behavior, such as when grasping and manipulating objects?

Indeed there are:314

The Hungarian neurologist Rudolph Bálint was the first to document a patient with this kind of problem, in 1909. The patient was a middle-aged man who suffered a massive stroke to both sides of the brain in a region called the parietal lobe… He could recognize objects and people, and could read a newspaper. He did tend to ignore objects on his left side and had some difficulty moving his eyes from one object to another. But his big problem was not a failure to recognize objects, but rather an inability to reach out and pick them up. Instead of reaching directly toward an object, he would grope in its general direction much like a blind man, often missing it by a few inches. Unlike a blind man, however, he could see the object perfectly well — he just couldn’t guide his hand toward it. Bálint coined the term “optic ataxia”… to refer to this problem in visually guided reaching.

Bálint’s first thought was that this difficulty in reaching toward objects might be due to a general failure in his patient to locate where the objects were in his field of vision. But it turned out that the patient showed the problem only when he used his right hand. When he used his left hand to reach for the same object, his reaches were pretty accurate. This means that there could not have been a generalized problem in seeing where something was. The patient’s visual processing of spatial location per se was not impaired. After further testing, Bálint discovered that the man’s reaching difficulty was not a purely motor problem either — some kind of generalized difficulty in moving his right arm correctly. He deduced this from asking the patient to point to different parts of his own body using his right hand with his eyes closed: there was no problem.

…It was not until the 1980s that research on patients with optic ataxia was kick-started again, mostly by Marc Jeannerod and his group in Lyon, France. In one landmark study, his colleagues Marie-Thérèse Perenin and Alain Vighetto made detailed video recordings of a sizeable group of patients with optic ataxia performing a number of different visuomotor tests… Like Bálint, they observed that although their patients couldn’t accurately point to the targets, they were able to give pretty accurate verbal reports of where those same objects were located. Also like Bálint, Perenin and Vighetto demonstrated that the patients had no difficulty in directing hand movements toward different parts of their own body. Subsequent work in their laboratory went on to show that the reaching and pointing errors made by many patients with optic ataxia are most severe when they are not looking directly at the target. But even when pointing at a target in the center of the visual field, the patients still make bigger errors than normal people do, albeit now on the order of millimeters rather than centimeters. In short, Perenin and Vighetto’s research confirms Bálint’s original conclusion: optic ataxia is a deficit in visually guided reaching, not a general deficit in spatial vision.

Patients with optic ataxia also have difficulty avoiding collisions with obstacles as they reach for an object. For example, neuroscientist Robert McIntosh designed a test in which subjects are asked to reach from a fixed starting point to a strip 25 cm away, between two vertical rubber cylinders. The location of the cylinders is varied, and healthy control subjects always vary their reach trajectory so as to stay well clear of the cylinders. In contrast, optic ataxia patients do not vary their reach trajectory in response to where the cylinders are located, and thus often come close to knocking them over as they reach for the strip at the back of the table.

However, the failure of patients with optic ataxia to adjust their reach trajectory in response to the location of the cylinders is not due to a failure to (consciously) see where the cylinders are. When asked to point to the midpoint between the two cylinders, patients with optic ataxia are just as accurate as healthy controls.

G&M were also able to run this test on a patient whose optic ataxia affects only one hand. Morris Harvey has damage in his left parietal lobe, which means that his optic ataxia affects only his right hand, and only when he reaches toward objects in his right visual field. How did Morris perform on the cylinders task? When reaching with his left hand, his reach trajectory was the same as that of healthy subjects, adjusted to maximally avoid the cylinders. But when reaching with his right hand, he studiously avoided the cylinder on the left but took no account of the cylinder on the right.

(In contrast to those with optic ataxia, Dee Fletcher avoided the cylinders as normal when reaching out to the strip at the back of the table, but she performed poorly when asked to point to the midpoint between the two cylinders.)

Some optic ataxia patients also have trouble changing their reach trajectory mid-flight:

Our French colleagues Laure Pisella and Yves Rossetti had [optic ataxia patient] Irène make a series of reaches to touch a small LED target. From time to time, however, the target would unpredictably shift leftwards or rightwards at the very instant Irène’s hand started to move toward it. Healthy volunteers doing this task had no problem in making the necessary in-flight corrections to their reach, and in fact they adjusted their reaches seamlessly as if their movements were on “automatic pilot,” particularly when under time pressure to move quickly. Yet Irène found these changes in target location frustratingly impossible to deal with. It was as if she no longer had that automatic pilot. To put it another way, Irène’s reaches seemed to be entirely predetermined at the outset of the movement, and remained impervious to unexpected changes in the position of the target, even though she could see them clearly enough and knew they might occur. On occasions when the target moved, she found herself reaching first to its original location, and only then shifting her finger to the new location.

How do patients with optic ataxia perform on the mail slot task described in the previous section? Just as you’d expect:

…[Perenin and Vighetto] examined the ability of their [optic ataxia] patients to reach out and pass their hand through an open slot cut in a disk, which could be positioned at different orientations at random… Remarkably, not only did the patients tend to make the expected spatial errors, in which their hand missed the slot altogether, but they also made orientation errors, in which the hand would approach the slot at the wrong angle. Yet most of these same patients could easily tell one orientation of the slot from another when asked to do so. So again we see a familiar story unfolding. The failure of the patients to rotate their hand as they reached out to pass it through a slot was not due to a difficulty in perceiving the orientation of the slot — the problem was visuomotor in nature, not perceptual. (Of course when their hand made contact with the disk they could correct themselves using touch, and then pass their hand through the slot. In other words the deficit was restricted to the modality of sight, and did not extend to touch.)

What about the measures of grasping movements described in the previous section? Again, patients with optic ataxia perform just as you’d expect:

Instead of first opening the hand during the early part of the reach, and then gradually closing it as it moved toward the target object, the optic ataxia patient would keep the hand widely opened throughout the movement, much as a person would do if reaching blindfolded toward the object… Jeannerod and his colleagues were the first to carry out systematic tests with Anne Thiérry, the optic ataxia patient we described earlier in this chapter. They used similar matching and grasping tasks to those we had used earlier with Dee… Anne was found to show poor scaling of her grip when reaching for objects of different sizes, while remaining well able to demonstrate the sizes of the objects by use of her forefinger and thumb. Again, the pattern of deficits and spared abilities in Anne and the pattern in Dee complement each other perfectly.

Next, what about Blake shapes? Again, the optic ataxia patient’s performance seems to be the mirror image of Dee Fletcher’s:

Although [Ruth Vickers’] symptoms had cleared to some degree by the time we saw her, it was obvious that she still had severe optic ataxia. She could not reach with any degree of accuracy to objects that she could see but was not looking at directly. She could, however, reach reasonably accurately to objects directly in her line of sight.

Nevertheless, the reaches Ruth made to pick up an object she was looking at, although spatially accurate, were far from normal. Like Anne Thiérry, she would open her hand wide as she reached out, no matter how big or small the objects were, showing none of the grip scaling seen in healthy people… Yet despite this, when asked to show us how big she thought the object was using her finger and thumb, she performed quite creditably, again just like Anne. And she could describe most of the objects and pictures we showed her without any difficulty. In fact, although her strokes had left her unable to control a pencil or pen very well, she could draw quite recognizable copies of pictures she was shown… In other words, Ruth’s visual experience of the world seemed pretty intact, and she could readily convey to us what she saw — in complete contrast to Dee Fletcher.

Because Ruth could distinguish between many different shapes and patterns, we did not expect her to have much difficulty with the smooth pebble-like shapes we had tested Dee with earlier. We were right — when she was presented with a pair of “Blake shapes” she could generally tell us whether or not the two shapes were the same. Although she sometimes made mistakes, particularly when two identical shapes were presented in different orientations, her performance was much better than Dee’s. When it came to picking up the shapes, however, the opposite was the case. Ruth had real problems. Instead of gripping the Blake shapes at stable “grasp points,” she positioned her finger and thumb almost at random… This inevitably meant that after her fingers contacted the pebble she had to correct her grip by means of touch — if she did not, the pebble would often slip from her grasp. In other words, although some part of Ruth’s brain could code the shape of these objects to inform her visual experience, her hand was unable to use such shape information to guide its actions.

6.3.5 Lesions in monkeys

These lesion studies in humans provide suggestive evidence for two different streams of visual processing, one of which (the “vision for action” system) seems to be unconscious. Now we turn to the evidence from lesion studies in monkeys, which, I was surprised to learn, goes back to the 1860s:315

During the 1860s, [neurologist David Ferrier] removed what we now call the dorsal stream in a monkey, and discovered that the animal would misreach and fumble for food items set out in front of it. In a similar vein, recent work by Mitchell Glickstein in England has shown that small lesions in the dorsal stream can make a monkey unable to pry food morsels out of narrow slots set at different orientations. The monkey is far from blind, but it cannot use vision to insert its finger and thumb at the right angle to get the food. It eventually does it by touch, but its initial efforts, under visual guidance, fail. Yet the same monkey has no difficulty in telling apart different visual patterns, including lines of different orientation. These observations, and a host of others, have demonstrated that dorsal-stream damage in the monkey results in very similar patterns of disabilities and spared abilities to those we saw in [patients with optic ataxia]. In other words, monkeys with dorsal-stream lesions show major problems in vision for action but evidently not in vision for perception.

In direct contrast, Heinrich Klüver and Paul Bucy, working at the University of Chicago in the 1930s, found that monkeys with lesions of the temporal lobes, including most of what we now know as the ventral stream, did not have any visuomotor problems at all, but did have difficulties in recognizing familiar objects, and in learning to distinguish between new ones. Klüver and Bucy referred to these problems as symptoms of “visual agnosia,” and indeed they do look very like the problems that Dee Fletcher has. Moreover, like Dee, these monkeys with ventral-stream lesions had no problem using their vision to pick up small objects. The influential neuroscientist, Karl Pribram, once noted that monkeys with ventral-stream lesions that had been trained for months to no avail to distinguish between simple visual patterns, would sit in their cages snatching flies out of the air with great dexterity. Mitchell Glickstein recently confirmed that such monkeys do indeed retain excellent visuomotor skills. He found that monkeys with ventral-stream damage had no problem at all using their finger and thumb to retrieve food items embedded in narrow slots — quite unlike his monkeys with dorsal-stream lesions.

Such studies in monkeys are widely thought to be informative for our understanding of human neuroscience, given the many similarities between human brains and monkey brains.

6.3.6 Dissociation studies in healthy subjects

G&M hypothesize that the dorsal and ventral streams of visual processing use different frames of reference, in part due to computational constraints:316

When we perceive the size, location, orientation, and geometry of an object, we implicitly do so in relation to other objects in the scene we are looking at. In contrast, when we reach out to grab that same object, our brain needs to focus on the object itself and its relationship to us — most particularly, to our hand — without taking account of the… scene in which the object is embedded. To put it a different way, perception uses a scene-based frame of reference while the visual control of action uses egocentric frames of reference.

…The use of scene-based metrics means that the brain can construct this representation in great detail without having to compute the absolute size, distance, and geometry of each object in the scene. To register the absolute metrics of the entire scene would in fact be computationally impossible, given the rapidity with which the pattern of light changes on our retina. It is far more economical for perception to compute just the relational metrics of the scene, and even these computations do not generally need to be precise. It is this reliance on scene-based frames of reference that lets us watch the same scene unfold on a small television or on a gigantic movie screen without being confused by the differences in scale.

…But… scene-based metrics are the very opposite of what you need when you act upon the world. It is not enough to know that an object you wish to pick up is bigger or closer than a neighboring object. To program your reach and scale your grasp, your brain needs to compute the size and distance of the object in relation to your hand. It needs to use absolute metrics set within an egocentric frame of reference. It would be a nuisance, and potentially disastrous, if the illusions of size or distance that are a normal part of [scene and object] perception were to intrude into the visual control of your movements.

If this account is right, it suggests a way to test for dorsal-ventral dissociation even in healthy subjects, since object and scene recognition should be subject to certain kinds of visual illusions that visually-guided action is not.

virtual workbench
Image © MIT Press.317 Used by permission of MIT Press.

 

One way to test for this dissociation is to use virtual reality displays. In one study, Hu & Goodale (2000) used a virtual reality display to show healthy subjects a series of 3D images of target blocks (marked with a red spot), each of which was displayed along with another “virtual” block that was either 10% wider or narrower than the target block. These blocks were shown for a half second or less, and then the subject was asked to either (1) reach out and grab the target block using their thumb and index finger, or to (2) indicate the size of the target block using their thumb and index finger, but not reach out toward it. To ensure a “real” (not pantomimed) grasping motion, a physical but unseen block was placed exactly where the virtual target block appeared to be (see right).

The point of having two (virtual) blocks of slightly different sizes was to induce a “size-contrast effect,” akin to the effect observed when a person you normally think of as tall stands next to a professional basketball player and suddenly seems shorter than usual. Hu & Goodale’s expectation was that this size-contrast effect would affect the subject’s (ventral) perception of the target block, and thus their attempt to indicate its size with their thumb and index finger (without reaching for it), but would not affect their (dorsally-guided) attempt to grasp the target block.

And this is just what happened. When the target block was paired with a larger companion block, subjects consistently judged it to be smaller than when the same target block was paired with a smaller companion block. But when subjects reached out to grasp the target block, they opened their thumb and index finger to an identical degree no matter which companion block appeared. In other words, the size-contrast effect affected the subjects’ perception of the target block, but didn’t affect their physical interaction with the target block.
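
To caricature the logic of this result (this is not Hu & Goodale’s analysis; the numbers and the simple “contrast” adjustment below are illustrative assumptions of mine): a scene-based, relational estimate of the target’s width gets nudged by the companion block, while an egocentric, absolute computation of the kind thought to drive grasp scaling ignores the companion entirely.

```python
# Toy caricature of the perception/action dissociation in a size-contrast
# setting. perceived_width_mm is fooled by the companion object; the grip
# aperture, computed from the target's absolute width alone, is not.
# All parameter values are illustrative assumptions.

def perceived_width_mm(target_mm: float, companion_mm: float,
                       contrast_gain: float = 0.15) -> float:
    """Scene-based estimate: the target is judged partly relative to its
    neighbor, so a larger companion shrinks the judged size (and vice versa)."""
    return target_mm * (1 - contrast_gain * (companion_mm - target_mm) / target_mm)

def grip_aperture_mm(target_mm: float, safety_margin_mm: float = 15.0) -> float:
    """Egocentric computation for action: scaled to the target's absolute
    width (plus a margin), ignoring the rest of the scene."""
    return target_mm + safety_margin_mm

target = 50.0
for companion in (45.0, 55.0):  # 10% narrower vs. 10% wider companion
    print(f"companion {companion} mm -> "
          f"judged {perceived_width_mm(target, companion):.1f} mm, "
          f"grip aperture {grip_aperture_mm(target):.1f} mm")
```

The judged width shifts with the companion while the grip aperture stays fixed, which is the qualitative pattern Hu & Goodale report (the real data, of course, are noisier, and the real computations far more complex).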

Another proposed difference between the dorsal and ventral streams is that the dorsal system should operate only in real-time, whereas the ventral stream interacts with short- and long-term memory to help guide decision-making and action over a longer period of time. Consistent with this hypothesis, the subjects’ grip size calibration was affected by the size-contrast illusion when a delay was inserted between viewing the (virtual) blocks and reaching toward the target block:

When the students [subjects] had to wait for five seconds before picking up the target object that they had just seen, the scaling of their grasp now fell prey to the influence of the companion block. Just as they did when they made perceptual judgments, they opened their hand wider when the target block was accompanied by a small block than when it was accompanied by a large block. This intrusion of the size-contrast effect into grip scaling after a delay is exactly what we had predicted. Since the dedicated visuomotor systems in the dorsal stream operate only in real time, the introduction of a delay disrupts their function. Therefore when a delay is introduced, the calibration of the grasp has to depend on a memory derived from perceptual processing in the ventral stream, and becomes subject to the same size-contrast illusions that perception is prone to.

 

Ebbinghaus illusion
Illustration by the Open Philanthropy Project.318

In another experiment, Haffenden & Goodale (1998) tested for a ventral-dorsal dissociation using the well-known Ebbinghaus illusion, which also makes use of size-contrast effects. In one version, two physically identical circles are perceived as being of different sizes (top right). In another version, two physically different circles are perceived as being of identical size (bottom right).

To test for an action-perception dissociation, Haffenden & Goodale placed some flat disks atop the classic Ebbinghaus backgrounds, and then asked subjects to either “match” (indicate the size of) or “grasp” (reach out to grab) the target disk. As expected, subjects’ “match” attempts were affected by the visual illusion, but their “grasp” attempts were not.

Several experiments with other visual illusions have demonstrated similar results. Furthermore, just as G&M’s theory predicts, both perception and visuomotor control are affected if the illusion used is one that results from early visual processing (before the dorsal-ventral split).319

6.3.7 Single-neuron recordings

Further evidence for the “two streams” hypothesis comes from single-neuron recordings in monkeys:320

The 1980 Nobel laureates David Hubel and Torsten Wiesel… found that neurons in primary visual cortex [V1] would [fire] every time a visual edge or line was shown to the eye, so long as it was shown at the right orientation and in the right location within the field of view. The small area of the retina where a visual stimulus can activate a given neuron is called the neuron’s “receptive field.” Hubel and Wiesel discovered, in other words, that these neurons are “encoding” the orientation and position of particular edges that make up a visual scene out there in the world. Different neurons prefer (or are “tuned” to) different orientations of edges… Other neurons are tuned for the colors of objects, and still others code the direction in which an object is moving…

…The 1960s and early 1970s heralded great advances in single-cell recording as investigators pushed well beyond the early visual areas, out into the dorsal and ventral streams. It soon became apparent that neurons in the two streams coded the visual world very differently…

To be more specific (while still oversimplifying): neurons in the ventral stream tend to respond to fairly complex visual patterns (e.g. entire objects, or even specific faces), but many of them don’t “care” much about details such as the angle of the object, its lighting conditions, or how far the object is from the eye: just the sort of behavior you’d expect from neurons in a pathway that specializes in perceiving objects. In contrast, neurons in the dorsal stream typically seem to code for more action-specific features, for example the motion of objects or small differences in their orientation, and they often fire only when a monkey acts on a visual target, for example by reaching out to it or tracking its motion with its eyes.321
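
For readers unfamiliar with the notion of “tuning” used above, here is a standard textbook-style caricature of an orientation-tuned V1-like cell: firing rate peaks at the cell’s preferred edge orientation and falls off for other orientations. The parameter values are illustrative, not taken from any particular study.

```python
# Circular-Gaussian orientation tuning for a single model cell. Orientation
# is treated as circular with period 180 degrees (an edge at 0 degrees is
# the same edge as at 180 degrees). Parameter values are illustrative.

import math

def firing_rate_hz(stimulus_deg: float, preferred_deg: float = 45.0,
                   peak_hz: float = 60.0, width_deg: float = 20.0) -> float:
    delta = (stimulus_deg - preferred_deg + 90.0) % 180.0 - 90.0  # wrapped difference
    return peak_hz * math.exp(-(delta ** 2) / (2 * width_deg ** 2))

for theta in (0, 45, 90, 135):
    print(f"edge at {theta} deg -> {firing_rate_hz(theta):.1f} Hz")
```

A population of such cells with different preferred orientations (plus analogous cells tuned to color, motion direction, and so on) is the sense in which early visual neurons “encode” features of the visual scene.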

What about single-neuron recordings in humans? Such studies are still rare for ethical reasons,322 but their findings are illuminating. For example, some cells just downstream of the inferotemporal cortex (in the ventral stream), in the medial temporal lobe (MTL), have been found to respond selectively to specific individuals. In one patient, a particular neuron responded most strongly to pictures (from any angle) of either Jennifer Aniston or Lisa Kudrow, both actresses on Friends. Another cell responded to any picture of actress Halle Berry, even when she was masked as Catwoman (a character she played), and also to the written words “HALLE BERRY,” but not to pictures of other people, or to other written names.323 Unfortunately, I don’t know whether single-neuron recordings have been made in the human dorsal stream.

6.3.8 Challenges

There is additional evidence for this “two streams” account of visual processing, for example from fMRI studies,324 but I won’t describe that evidence here (see G&M-13). Instead, I’d like to briefly mention some challenges for the two streams theory:325

  • Many of the relevant primary studies can be interpreted to support alternate hypotheses.326
  • Several studies suggest that the division of labor between the dorsal and ventral streams is not clear-cut: e.g. some neurons in the dorsal stream seem to subserve object recognition, and some neurons in the ventral stream seem to subserve visually-guided motor control.327
  • Personally, I would not be surprised if some of the neuroimaging studies used to argue in favor of G&M’s view could be undermined by a careful examination of interpretive complications328 and statistical errors329 — though, this worry is not unique to imaging studies of conscious and unconscious vision (see Appendix Z.8).

Considering all the evidence I’ve studied or skimmed, my impression is that something like G&M’s “two streams” account of visual processing has a good chance of being true (with many complications), but also has a good chance of being quite mistaken.

If something like the “two streams” account is right, then it could provide some evidence in favor of certain kinds of “cortex-required views” about consciousness, especially if we observe other kinds of cognitive processing having both conscious and unconscious components, with the conscious components being computed by the same broad regions of the brain that compute conscious vision.

6.4 Appendix D. Some clarifications on nociception and pain

In this appendix, I clarify how I use nociception-related and pain-related terms in this report, and provide my sources for the judgments I made in two rows of my table of PCIFs: those for “Has nociceptors” and “Has neural nociceptors.”

As I use the terms, nociception is the encoding and processing of noxious stimuli, where a noxious stimulus is an actually or potentially body-damaging event (either external or internal, e.g. cutaneous or visceral). A body-damaging event can be chemical (e.g. a strong acid), mechanical (e.g. pinching), or thermal (e.g. excessive heat). A sensory receptor that responds only or preferentially to noxious stimuli is a nociceptor. Not all noxious stimuli are successfully detected by nociceptors, e.g. when no nociceptor is located at the location of a noxious stimulus’ contact with the body. Those noxious stimuli that are detected are called nociceptive stimuli.

These definitions are identical to those of the International Association for the Study of Pain (IASP) — see Loeser & Treede (2008) — except that I have dropped the word “neural,” and replaced the phrase “tissue-damaging” with “body-damaging.” I made both these modifications because I want to use nociception-related terms in the context of a wide variety of cognitive systems, including e.g. personal computers and robots, which in some cases have nociception-specific sensory receptors but not “neurons” in the usual sense of that word, and which are more typically said to have “bodies” than “tissues.” Also, in accordance with many other definitions (e.g. Ringkamp et al. 2013), I have clarified that nociceptors can in some cases respond to both noxious and non-noxious stimuli, but must respond preferentially to noxious stimuli to be counted as “nociceptors.”

Some nociceptors respond to only one kind of noxious stimuli, while other (“polymodal”) nociceptors respond to multiple kinds of noxious stimuli. Some polymodal nociceptors are dedicated to noxious stimuli only, whereas other polymodal nociceptors — called wide dynamic range (WDR) neurons — respond to both noxious and non-noxious stimuli. (See e.g. Derbyshire 2014; Gebhart & Schmidt 2013, p. 4266; Walters 1996, p.97.)
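
As a compact restatement of these definitions, here is a toy classifier. Operationalizing a “preferential” response as a simple comparison of response strengths is my own illustrative simplification, not part of the IASP definitions.

```python
# Toy restatement of the receptor taxonomy described above: nociceptor,
# polymodal nociceptor, and wide dynamic range (WDR) receptor. Response
# strengths are in arbitrary units; the threshold logic is illustrative.

from dataclasses import dataclass, field

@dataclass
class Receptor:
    # Response strength to each kind of noxious stimulus the receptor
    # detects, e.g. {"thermal": 0.9, "mechanical": 0.7}.
    noxious_responses: dict = field(default_factory=dict)
    # Response strength to non-noxious stimuli (0.0 if unresponsive).
    non_noxious_response: float = 0.0

def is_nociceptor(r: Receptor) -> bool:
    """Responds only or preferentially to noxious stimuli."""
    return max(r.noxious_responses.values(), default=0.0) > r.non_noxious_response

def is_polymodal(r: Receptor) -> bool:
    """A nociceptor responsive to more than one kind of noxious stimulus."""
    return is_nociceptor(r) and sum(v > 0 for v in r.noxious_responses.values()) > 1

def is_wide_dynamic_range(r: Receptor) -> bool:
    """Polymodal and also responsive to non-noxious stimuli."""
    return is_polymodal(r) and r.non_noxious_response > 0
```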

Pain, in contrast to mere nociception, is an unpleasant conscious experience associated with actual or potential body damage (or akin to unpleasant experiences associated with noxious stimuli). The IASP’s definition of pain (Loeser & Treede 2008) is “an unpleasant sensory and emotional experience associated with actual or potential tissue damage or described in terms of such damage.” I have dropped the phrase about description because I want to talk about pain in cognitive systems that may or may not be able to describe their pain to others. Following Price (1999), ch. 1, I have also added the parenthetical phrase in my definition above, so as to capture pain that “feels like” the unpleasant experiences normally associated (in humans, at least) with actual or potential body damage, even if those experiences are not in fact associated with such actual or potential damage, as with some cases of neuropathic pain, and perhaps also as with some cases of psychologically-created experiences of pain, e.g. when a subject is hallucinating or dreaming a painful experience. But I keep my definition simpler than that of e.g. Sneddon (2009), which adds that animals in pain should “quickly learn to avoid the noxious stimulus and demonstrate sustained changes in behaviour that have protective function to reduce further injury and pain, prevent the injury from recurring, and promote healing and recovery.” Whether such phenomena are indicative of pain or just nociception is an empirical question, and I do not wish to burden my definition of pain with such assumptions.

Nociception can occur without pain, and pain can occur without nociception. Loeser & Treede (2008) provide examples: “after local anesthesia of the mandibular nerve for dental procedures, there is peripheral nociception without pain, whereas in a patient with thalamic pain [a kind of neuropathic pain resulting from stroke], there is pain without peripheral nociception.” Rose et al. (2014) provide another example of nociception without pain: “carpal tunnel surgery is sometimes performed in awake patients following axillary local anesthetic injection, which blocks conduction in axons passing from receptors in the hand and arm to the spinal cord. Consequently, the patient can watch the surgery but feel nothing, in spite of intense nociceptor activation.” See p. 127 of Le Neindre et al. (2017) for a table summarizing several types of nociception-related processing that (in humans) are varyingly conscious or unconscious. (But, my usual caveats about the possibility of hidden qualia apply.)

The ability to detect and react to noxious stimuli is a basic adaptive capability, and thus nociceptors are found in humans (Purves et al. 2011) and many other species (Sneddon et al. 2014; Smith & Lewin 2009), including fruit flies (Im & Galko 2012), nematode worms (Wittenberg & Baumeister 1999), and bacteria (Paoni et al. 1981). Not all of these nociceptors are neural nociceptors, however.

Different species have evolved different nociceptors, presumably because they are exposed to different noxious stimuli, and because a stimulus that is noxious for one species might not be noxious for another (Smith & Lewin 2009). For example, the threshold for noxious heat in one species of trout is ~33 °C or perhaps ~25 °C (Ashley et al. 2007), while in chickens it is ~49 °C (Gentle et al. 2001), and it may be higher still in the Pompeii worms that live near hydrothermal vents and regularly experience temperatures above 80 °C (Cary et al. 1998). Another example: while most mammalian species have acid-detecting nociceptors — as do many species in other phyla, including e.g. the leech H. medicinalis (Pastor et al. 1996) — African naked mole-rats do not (Park et al. 2008; Smith et al. 2011).

Non-neural nociceptors are also built into many personal computers (PCs). For example, many PCs contain a sensor which detects whether the computer’s central processing unit (CPU) is becoming dangerously hot, such that if it is, a fan can be signalled to turn on, cooling the CPU (Mueller 2013, chapter 3, section: “Processor Cooling”). Some robots are equipped with sensors that detect both noxious and non-noxious stimuli (Dahl et al. 2011; Kühn & Haddadin 2017), somewhat analogous to the previously-mentioned WDR neurons in humans.
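
To make the PC example concrete, here is a minimal sketch of the kind of thermal-protection loop such a sensor might feed into. The sensor path, threshold, and fan interface are illustrative assumptions (real machines handle this in firmware and drivers, and the details vary by platform); the point is just that the sensor responds preferentially to a potentially damaging condition and triggers a protective response, which is the sense in which I count it as a non-neural nociceptor.

```python
# Sketch of a CPU thermal-protection loop. The temperature sensor plays the
# role of a non-neural "nociceptor": it detects a potentially damaging
# condition (excess heat) and triggers a protective response (the fan).
# The file path, threshold, and fan interface are illustrative assumptions.

import time

DANGER_THRESHOLD_C = 85.0  # illustrative; real thresholds are vendor-specific

def read_cpu_temperature_c() -> float:
    """Hypothetical sensor read. Many Linux systems expose something similar
    under /sys/class/thermal, but the exact path varies by machine."""
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

def set_fan(on: bool) -> None:
    """Stand-in for a platform-specific fan-control call."""
    print("fan ->", "ON" if on else "OFF")

def thermal_protection_loop(poll_seconds: float = 1.0) -> None:
    while True:
        temp = read_cpu_temperature_c()
        set_fan(temp >= DANGER_THRESHOLD_C)  # act only on the "noxious" condition
        time.sleep(poll_seconds)
```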

Some readers may be skeptical of (non-neural) nociception in bacteria. For an overview of bacterial signal transduction and subsequent chemotaxis (movement in response to chemical stimuli), see Wadhams & Armitage (2004); Wuichet & Zhulin (2010); Sourjik & Wingreen (2012). In the literature on bacterial chemotaxis, a noxious stimulus is typically called a “repellent,” and the behavioral response away from noxious chemical stimuli is sometimes called “negative chemotaxis.” Example papers describing negative chemotaxis in bacteria include Shioi et al. (1987); Yamamoto et al. (1990); Kaempf & Greenberg (1990); Ohga et al. (1993); Khan et al. (1995); Liu & Fridovich (1996); Karmakar et al. (2015). For a more general account of how a single-celled organism can engage in fairly sophisticated computation, see Bray (2011).

As for nociceptors in the human enteric nervous system, Wood (2011) writes: “Presence in the gastrointestinal tract of pain receptors (nociceptors) equivalent to those connected with C-fibers and A-δ fibers elsewhere in the body is likely…”

My sources for the presence or absence of neural nociceptors in multiple taxa are Purves et al. (2011), Sneddon et al. (2014), Smith & Lewin (2009), and Wood (2011). My source for neural nociceptors in the common fruit fly is Terada et al. (2016).

Because my investigation at one point paid special attention to similarities between rainbow trout and chickens, I collected additional specific sources for those taxa, listed in a footnote.330

Neural nociceptors in humans come in many types. The varieties of human nociceptors are described succinctly by Basbaum et al. (2009):

The cell bodies of nociceptors are located in the dorsal root ganglia (DRG) for the body and the trigeminal ganglion for the face, and have both a peripheral and central axonal branch that innervates their target organ and the spinal cord, respectively… There are two major classes of nociceptors… The first includes medium diameter myelinated [insulated] (Aδ) afferents [“afferent” means “nerve fiber of a sensory neuron”] that mediate acute, well-localized “first” or fast pain. These myelinated afferents differ considerably from the larger diameter and rapidly conducting Aβ fibers that respond to innocuous mechanical stimulation (i.e., light touch). The second class of nociceptor includes small diameter unmyelinated “C” fibers that convey poorly localized, “second” or slow pain.

Electrophysiological studies have further subdivided Aδ nociceptors into two main classes. Type I (HTM: high-threshold mechanical nociceptors) respond to both mechanical and chemical stimuli but have relatively high heat thresholds (>50 °C). If, however, the heat stimulus is maintained, these afferents will respond at lower temperatures. And most importantly, they will sensitize (i.e., the heat or mechanical threshold will drop) in the setting of tissue injury. Type II Aδ nociceptors have a much lower heat threshold, but a very high mechanical threshold. Activity of this afferent almost certainly mediates the “first” acute pain response to noxious heat… By contrast, the type I fiber likely mediates the first pain provoked by pinprick and other intense mechanical stimuli.

The unmyelinated C fibers are also heterogeneous. Like the myelinated afferents, most C fibers are polymodal, that is, they include a population that is both heat and mechanically sensitive (CMHs)… Of particular interest are the heat-responsive, but mechanically insensitive, unmyelinated afferents (so-called silent nociceptors) that develop mechanical sensitivity only in the setting of injury… These afferents are more responsive to chemical stimuli (capsaicin or histamine) compared to the CMHs and probably come into play when the chemical milieu of inflammation alters their properties. Subsets of these afferents are also responsive to a variety of itch-producing [stimuli]. It is worth noting that not all C fibers are nociceptors. Some respond to cooling, and [others]… appear to mediate pleasant touch…

The variety of nociceptors across the entire animal kingdom is of course much broader.

6.5 Appendix E. Some clarifications on “neuroanatomical similarity”

In this appendix, I clarify how I estimated the values for “neuroanatomical similarity” when applying my “theory-agnostic estimation process” described above.

As far as I know, there is no widely used measure of neuroanatomical similarity. Moreover, hypotheses about structural homologies are often highly uncertain, and revised over time.

I am not remotely an expert in comparative neuroanatomy; my very rough ratings for “neuroanatomical similarity with humans” are drawn from my cursory understanding of the field, and would likely be disputed by specialists. For a brief overview of the major “similarity” factors I’m considering, see e.g. Powers (2014).331

To illustrate some of the similarities and differences I have in mind, I present below a table of the animal taxa I rated for “neuroanatomical similarity with humans” above, shown alongside my rating of neuroanatomical similarity and some (but not all) of the factors that influenced that rating.332 Note that the “processing power” measures (e.g. brain mass, neuron counts, or neuronal scaling rules) are excluded here, because they are captured by a separate column in my previous table.


| | MY RATING | BILATERAL OR RADIAL SYMMETRY? | GANGLIA OR BRAIN? | MIDBRAIN? | RETICULAR FORMATION? | DIENCEPHALON? | TELENCEPHALON? | NEOCORTEX? | DORSOLATERAL PREFRONTAL CORTEX? | EXTREME HEMISPHERIC SPECIALIZATION? |
|---|---|---|---|---|---|---|---|---|---|---|
| Humans (for comparison) | | Bilateral | Brain | Yes | Yes | Yes | Yes, via evagination | Yes, disproportionately large | Yes | Yes333 |
| Chimpanzees | High | Bilateral | Brain | Yes | Yes | Yes | Yes, via evagination | Yes, disproportionately large | Yes | No |
| Cows | Moderate/high | Bilateral | Brain | Yes | Yes | Yes | Yes, via evagination | Yes | Debated, but probably not334 | No |
| Chickens | Low/moderate | Bilateral | Brain | Yes | Yes | Yes | Yes, via evagination | Debated, but “not really”335 | No | No |
| Rainbow trout | Low | Bilateral | Brain | Yes | Yes | Yes | Yes, via eversion336 | No | No | No |
| Gazami crabs | Very low | Bilateral | Ganglia | No | No | No | No | No | No | No |
| Common fruit flies | Very low | Bilateral | Ganglia | No | No | No | No | No | No | No |

6.6 Appendix F. Illusionism and its implications

In this appendix, I elaborate on my earlier brief explanation of illusionism, and say more about its implications for my tentative conclusions in this report.

6.6.1 What I mean by “illusionism”

Frankish (2016b) explains illusionism this way:

Suppose we encounter something that seems anomalous, in the sense of being radically inexplicable within our established scientific worldview. Psychokinesis is an example. We would have, broadly speaking, three options. First, we could accept that the phenomenon is real and explore the implications of its existence, proposing major revisions or extensions to our science… In the case of psychokinesis, we might posit previously unknown psychic forces and embark on a major revision of physics to accommodate them. Second, we could argue that, although the phenomenon is real, it is not in fact anomalous and can be explained within current science. Thus, we would accept that people really can move things with their unaided minds but argue that this ability depends on known forces, such as electromagnetism. Third, we could argue that the phenomenon is illusory and set about investigating how the illusion is produced. Thus, we might argue that people who seem to have psychokinetic powers are employing some trick to make it seem as if they are mentally influencing objects.

The first two options are realist ones: we accept that there is a real phenomenon of the kind there appears to be and seek to explain it. Theorizing may involve some modest reconceptualization of the phenomenon, but the aim is to provide a theory that broadly vindicates our pre-theoretical conception of it. The third position is an illusionist one: we deny that the phenomenon is real and focus on explaining the appearance of it. The options also differ in explanatory strategy. The first is radical, involving major theoretical revision and innovation, whereas the second and third are conservative, involving only the application of existing theoretical resources.

Turn now to consciousness. Conscious experience has a subjective aspect; we say it is like something to see colours, hear sounds, smell odours, and so on. Such talk is widely construed to mean that conscious experiences have introspectable qualitative properties, or ‘feels’, which determine what it is like to undergo them. Various terms are used for these putative properties. I shall use ‘phenomenal properties’, and, for variation, ‘phenomenal feels’ and ‘phenomenal character’, and I shall say that experiences with such properties are phenomenally conscious… Now, phenomenal properties seem anomalous. They are sometimes characterized as simple, ineffable, intrinsic, private, and immediately apprehended, and many theorists argue that they are distinct from all physical properties, inaccessible to third-person science, and inexplicable in physical terms… Again, there are three broad options.

First, there is radical realism, which treats phenomenal consciousness as real and inexplicable without radical theoretical innovation. In this camp I group dualists, neutral monists, mysterians, and those who appeal to new physics… Second, there is conservative realism, which accepts the reality of phenomenal consciousness but seeks to explain it in physical terms, using the resources of contemporary cognitive science or modest extensions of it. Most physicalist theories fall within this camp, including the various forms of representational theory. Both radical and conservative realists accept that there is something real and genuinely qualitative picked out by talk of the phenomenal properties of experience, and they adopt this as their explanandum. That is, both address [Chalmers’] hard problem.

The third option is illusionism. This shares radical realism’s emphasis on the anomalousness of phenomenal consciousness and conservative realism’s rejection of radical theoretical innovation. It reconciles these commitments by treating phenomenal properties as illusory. Illusionists deny that experiences have phenomenal properties and focus on explaining why they seem to have them. They typically allow that we are introspectively aware of our sensory states but argue that this awareness is partial and distorted, leading us to misrepresent the states as having phenomenal properties… Whatever the details, they must explain the content of the relevant states in broadly functional terms, and the challenge is to provide an account that explains how real and vivid phenomenal consciousness seems. This is the illusion problem.

Illusionism comes in many varieties. Here is an example of what Frankish calls “weak illusionism,” from Carruthers (2000), pp. 93-94:

What would it take for [an explanatory theory] of phenomenal consciousness to succeed? What are the desiderata for a successful theory? I suggest that the theory would need to explain, or explain away, those aspects of phenomenal consciousness which seem most puzzling and distinctive, of which there are five:

  1. Phenomenally conscious states have a subjective dimension; they have feel; there is something which it is like to undergo them.
  2. The properties involved in phenomenal consciousness seem to their subjects to be intrinsic and non-relationally individuated.
  3. The properties distinctive of phenomenal consciousness can seem to their subjects to be ineffable or indescribable.
  4. Those properties can seem in some way private to their possessors.
  5. It can seem to subjects that we have infallible (as opposed to merely privileged) knowledge of phenomenally conscious properties.

Note that only (1) is expressed categorically, as a claim about the actual nature of phenomenal consciousness. The other strands are expressed in terms of ‘seemings’, or what the possessors of phenomenally conscious mental states may be inclined to think about the nature of those states. This is because (1) is definitive of the very idea of phenomenal consciousness… whereas (2) to (5), when construed categorically, are the claims concerning phenomenal consciousness which raise particular problems for physicalist and functionalist conceptions of the mind…

Aspect (1) therefore needs to be explained in any successful account of phenomenal consciousness; whereas (2) to (5) – when transposed into categorical claims about the nature of phenomenal consciousness – should be explained away. If we can explain (2) to (5) in a way which involves no commitment to the truth of the things people are inclined to think about phenomenal consciousness, then we can be qualia irrealists (in the strong sense of ‘qualia’…). But if we can explain (1), then we can maintain that we are, nevertheless, naturalistic realists concerning phenomenal consciousness itself.

Frankish calls Carruthers’ theory an example of “weak illusionism” because, while Carruthers suggests that several key features of consciousness are illusions, he seems to accept that perhaps the most central feature of consciousness — its “subjective,” “something it’s like” nature — is real, and needs to be “explained” rather than “explained away.”337 In contrast, Frankish stipulates, a “strong illusionist” would say that even the subjective, qualitative, “what it’s like”-ness of consciousness is an illusion.

My own view is probably best described as a variant of “strong illusionism,” and hereafter I will (like Frankish) use “illusionism” to mean “strong illusionism,” unless otherwise specified. (As Frankish argues, weak illusionism may collapse into strong illusionism anyway.)

However, unlike Frankish, I avoid saying things like “phenomenal consciousness is an illusion” or “phenomenal properties are illusory,” because whereas Frankish defines “phenomenal consciousness” and “phenomenal properties” in a particular philosophical way, I’m instead taking Schwitzgebel’s approach of defining these terms by example (see above).338 On this way of talking, phenomenal consciousness is real, and so are phenomenal properties, and there’s “something it’s like” to be me, and probably there’s “something it’s like” to be a chimpanzee, and probably there isn’t “something it’s like” to be a chess-playing computer, and these “phenomenal properties” and this “something it’s like”-ness aren’t what they seem to be when we introspect about them, and they don’t have the properties that many philosophers have assumed they must have, and that is the sense in which these features of consciousness are “illusory.”

Frankish sounds like he would likely accept this way of talking, too, so long as we have some way to distinguish what the phenomenal realist means by phenomenality-related terms and what the illusionist means by them.339 Below and elsewhere in this report, I use terms like “consciousness” and “qualia” and “phenomenal properties” to refer to the kinds of experiences defined by example above, which both “realists” and “illusionists” agree exist. To refer to the special kinds of qualia and phenomenal properties that “realists” think exist (and that illusionists deny), I use phrases such as “the realist’s notion of qualia.”

Frankish (2016b) and the other papers in the same journal issue do a good job of explaining the arguments for and against illusionism, and I won’t repeat them here. I will, however, make some further effort to explain what illusionism is, since the idea can be difficult to wrap one’s head around, and also these issues are hard to talk about clearly.340 After that, I’ll make some brief comments about what implications illusionism seems to have for the distribution question and for my moral intuitions about moral patienthood.

6.6.2 Other cognitive illusions

First, it’s worth calling to mind some cognitive illusions about other things, which can help to set the stage for understanding how some “more central” features of consciousness might also be illusory.

 

[Figure: the “tabletops” illusion. Image © Oxford University Press.341 Used by permission of Oxford University Press.]

Consider the “tabletops” illusion shown above. Would you believe that these two tabletops are exactly the same shape? When I first saw this illusion, I couldn’t make my brain believe they were the same shape no matter what I tried. Clearly, the table on the right is longer and narrower than the one on the left. To test this, I cut a piece of paper to be the same shape as the first tabletop, and then I moved and rotated it to cover the second tabletop. Sure enough, they were the same shape! After putting away the piece of paper, my brain still cannot perceive them as the same shape. But, with help from the piece of paper, I can convince myself they are the same shape, even though I may never be able to perceive them as the same shape.

This example illustrates a lesson that will be useful later: we can know something is an illusion even though our direct perception remains as fooled as ever, and even though we do not know how the illusion is produced.342

Of course, our brains don’t merely trick us about particular objects or stimuli. Other cognitive illusions affect us continuously, from birth to death, and some of these illusions have only been discovered quite recently. For example, the human eye’s natural blind spot — mentioned above — wasn’t discovered until the 1660s.343 Or, consider the fact that your entire visual field seems to be “in color,” but in fact you have greatly diminished color perception in the periphery of your visual field, such that you cannot distinguish green and red objects at about 40° eccentricity (away from the center of your visual field), depending on the size of the objects.344 As far as I know, this basic fact about our daily visual experience, which is very easy to test, wasn’t discovered until the 19th century.345

Next, consider a class of pathological illusions, in which patients are either convinced they have a disability they don’t have, or convinced they don’t have a disability they do have. For example, patients with Anton’s syndrome are blind, but they don’t think they are blind:346

Patients with Anton’s syndrome… cannot count fingers or discriminate objects, shapes, or colors… Some patients with Anton’s syndrome cannot even correctly tell if the room lights are on or off. Despite being profoundly… blind, patients typically deny having any visual difficulty. They confabulate responses such that they guess how many fingers the examiner is holding up or whether the lights are on or off. When confronted with their errors, they often make excuses such as “The lights are too dim” or “I don’t have my glasses.”

Or, consider a patient with inverse Anton’s syndrome:347

Although denying visual perception, [the patient] correctly named objects, colors, and famous faces, recognized facial emotions, and read various types of single words with greater than 50% accuracy when presented in the upper right visual field. Upon confrontation regarding his apparent visual abilities, the patient continued to deny visual perceptual awareness… [and] alternatively replied “I feel it,” “I feel like something is there,” “it clicks,” or “I feel it in my mind.”

A patient can even be wrong about the most profound disability of all, death. From a case report on a patient called “JK”:348

When she was most ill, JK claimed that she was dead. She also denied the existence of her mother, and believed that her (JK’s) body was going to explode. On one occasion JK described herself as consisting of mere fresh air and on another she said that she was “just a voice and if that goes I won’t be anything … if my voice goes I will be lost and I won’t know where I have gone”…

…[JK felt] guilty about having claimed social security benefits (to which she was fully entitled) on the grounds that she was dead while she was claiming…

…Her subjective experience of eating was similarly unreal; she felt as though she were “just placing food in the atmosphere”, rather than into her body…

We wanted to know whether the fact that JK had thoughts and feelings (however abnormal) struck her as being inconsistent with her belief that she was dead. We therefore asked her, during the period when she claimed to be dead, whether she could feel her heart beat, whether she could feel hot or cold, and whether she could feel when her bladder was full. She said she could. We suggested that such feelings surely represented evidence that she was not dead, but alive. JK said that since she had such feelings even though she was dead, they clearly did not represent evidence that she was alive. She said she recognised that this was a difficult concept for us to grasp and one which was equally difficult for her to explain, partly because the experience was unique to her and partly because she did not fully understand it herself.

We then asked JK whether she thought we would be able to feel our hearts beat, to feel hunger, and so on if we were dead. JK said that we wouldn’t, and repeated that this experience was unique to her; no one else had ever experienced what she was going through. However, she eventually agreed that it “might be possible”. Hence, JK recognised the logical inconsistency between someone’s being dead and yet remaining able to feel and think, but thought that she was none the less in this state.

What is it like to be a patient with one of these illusions?349 In at least some such cases, a plausible interpretation seems to be that when the patient introspects about whether they are blind or dead, they seem to just know, “directly,” that they are dead, or blind, or not blind, and this feeling of “knowing” trumps the evidence presented to them by the examiner. (Perhaps if these patients had been trained as philosophers, they would claim they had “direct acquaintance”350 with the fact that they were dead, or blind, or not blind.)

In any case, it’s clear that we can be subject to profound illusions about the external world, about ourselves, and about our own capacities. But can we be wrong about our own subjective experience? When perceiving the tabletops illusion above, we are wrong about the shape of the tabletops, but presumably we are right about what our subjective experience of the tabletops is like — right? Isn’t it the case that “where consciousness is concerned, the existence of the appearance is the reality”?351

In fact, I think we are very often wrong about our own subjective experiences.352 To get a sense for why I think so, try this experiment: close your eyes, picture in as much detail as you can the front of your house or apartment building from across the street, and ask a friend to read you these questions one at a time (pausing for several seconds between each question, so you have a chance to think about the answer):353

How much of the scene can you vividly visualize at once? Can you keep the image of the chimney vividly in mind at the same time that you vividly imagine your front door, or how does the image of the chimney fade as you begin to think about the door? How much detail does your image have? How stable is it? If you can’t visually imagine the entire front of your house in rich detail all at once, what happens to the aspects of the image that are relatively less detailed? If the chimney is still experienced as part of the imagery when your imagemaking energies are focused on the front door, how exactly is it experienced? Does it have determinate shape, determinate color? In general, do the objects in your image have color before you think to assign color to them, or do some of the colors remain indeterminate, at least for a while…? If there is indeterminacy of color, how is that indeterminacy experienced? As gray? Does your visual image have depth in the same way that your sensory experience does… or is your imagery somehow flatter…? …Do you experience the image as located somewhere in egocentric space – inside your head, or before your eyes, or in front of your forehead – or does it make no sense to attempt to assign it a position in this way?

When questioned in this way, I suspect many people will quickly become quite uncertain about the subjective character of their own conscious experience of imagining the front of their house or apartment building.

Here is another exercise from Dennett (1991), ch. 4:

…would you be prepared to bet on the following propositions? (I made up at least one of them.)

  1. You can experience a patch that is red and green all over at the same time — a patch that is both colors (not mixed) at once.
  2. If you look at a yellow circle on a blue background (in good light), and the luminance or brightness of the yellow and blue are then adjusted to be equal, the boundary between the yellow and blue disappears.
  3. There is a sound, sometimes called the auditory barber pole, which seems to keep on rising in pitch forever, without ever getting any higher.
  4. There is an herb an overdose of which makes you incapable of understanding spoken sentences in your native language. Until the effect wears off, your hearing is unimpaired, with no fuzziness or added noise, but the words you hear sound to you like an entirely foreign language, even though you somehow know they aren’t.
  5. If you are blindfolded, and a vibrator is applied to a point on your arm while you touch your nose, you will feel your nose growing like Pinocchio’s; if the vibrator is moved to another point, you will then have the eerie feeling of pushing your nose inside out, with your index finger coming to rest somewhere inside your skull.

Do you know which one Dennett fabricated? I reveal the answer in a footnote.354

To try additional exercises of this sort, see Schwitzgebel (2011).355

6.6.3 Where do the illusionist and the realist disagree?

In sum, the illusions to which we are susceptible are deep and pervasive.356 But even given all this, what could it mean for the realist’s notion of the “what it’s like”-ness of conscious experience to be an illusion? What is it, exactly, that the realist and the weak illusionist both think exists, but which the strong illusionist does not?357

This question is difficult to answer clearly because, after all, the realist (and perhaps the weak illusionist) is claiming that (their notion of) “phenomenal property” is sui generis, and rather unlike anything else we understand.

More centrally, it is not clear what the realist and the weak illusionist think is left to explain once the strong illusionist has explained (or explained away) the apparent privacy, ineffability, subjectivity, and intrinsicness of qualia. Frankish (2012a)358 makes this point by distinguishing three (soda-inspired) notions of “qualia” that have been discussed by philosophers:

  • Classic qualia: “Introspective qualitative properties of experience that are intrinsic (see footnote359), ineffable, and subjective.” (Classic qualia are widely thought to be incompatible with physicalism.)
  • Diet qualia: “The phenomenal characters (subjective feels, what-it-is-likenesses, etc.) of experience.”
  • Zero qualia: “The properties of experiences that dispose us to judge that experiences have introspectable qualitative properties that are intrinsic, ineffable, and subjective.”

Which of these notions of qualia should be the core “explanandum” (thing to be explained) of consciousness? In my readings, the most popular option these days seems to be diet qualia, since assuming classic qualia as the explanandum seems rather presumptuous, and would seem to beg the question against the physicalist. Diet qualia, in contrast, appears to be a theory-neutral explanandum of consciousness. One can take diet qualia to be the core explanandum of consciousness, and then argue that the best explanation of that explanandum is that classic qualia exist, or argue for a physicalist account of diet qualia, or argue something else.

Frankish, however, argues that the notion of diet qualia, when we look at it carefully, turns out to have no distinctive content beyond that of zero qualia. Frankish asks: if an experience could have zero qualia without having diet qualia,

…what exactly would be missing? Well, a phenomenal character, a subjective feel, a what-it-is-likeness. But what is that supposed to be, if not some intrinsic, ineffable, and subjective qualitative property? This is the crux of the matter. I can see how the properties that dispose us to judge that our experiences have classic qualia might not be intrinsic, ineffable, and subjective, but I find it much harder to understand how a phenomenal character itself might not be. What could a phenomenal character be, if not a classic quale? How could a phenomenal residue remain when intrinsicality, ineffability, and subjectivity have been stripped away?

The worry can be put another way. There are competing pressures on the concept of diet qualia. On the one hand, it needs to be weak enough to distinguish it from that of classic qualia, so that functional or representational theories of consciousness are not ruled out a priori. On the other hand, it needs to be strong enough to distinguish it from the concept of zero qualia, so that belief in diet qualia counts as realism about phenomenal consciousness. My suggestion is that there is no coherent concept that fits this bill. In short, I understand what classic qualia are, and I understand what zero qualia are, but I do not understand what diet qualia are; I suspect the concept has no distinctive content.

So what might the “something it’s like”-ness of diet qualia be, if it is more than zero qualia and less than classic qualia? Frankish surveys several possible answers, and finds all of them wanting. He concludes that, as far as he can tell, “there is no viable ‘diet’ notion of qualia which is stronger than that of zero qualia yet weaker than that of classic qualia and which picks out a theory-neutral explanandum [of consciousness].”

Frankish then complains about “the diet/zero shuffle”:

I have argued that the notion of diet qualia has no distinctive content. If there are no classic qualia, then all that needs explaining (as far as ‘what-it-is-likeness’ goes) are zero qualia. This is not a popular view, but it is one that is tacitly reflected in the practice of philosophers who offer reductive accounts of consciousness. Typically, these accounts involve a three-stage process. First, diet qualia are introduced as a neutral explanandum. Second, diet qualia are identified with some natural, usually relational, property of experience, such as possession of a form of non-conceptual intentional content or availability to higher-order thinking. Third, this identification is defended by arguing that we would be disposed to judge that experiences with this property have intrinsic, ineffable, and subjective qualitative properties. In the end, diet qualia are not explained at all but simply identified with some other feature, and what actually get explained are zero qualia. I shall call this the diet/zero shuffle.

If Frankish is right, the upshot is that there is no coherent “what it’s like”-ness that needs explaining above and beyond zero qualia, unless non-physicalists can argue convincingly for the existence of classic qualia. And if zero qualia are all that need to be explained, then the strong illusionist still has lots of work to do, but it looks like a much less mysterious kind of work than one might have previously thought, and perhaps fairly similar to the kind of work that remains to explain human memory, attentional systems, and the types of cognitive illusions described above.

Now, supposing (strong) illusionism is right, what are the implications for the distribution question, and for my intuitions about which properties of consciousness are important for moral patienthood? I discussed the former question here, and I discuss the latter question next.

6.6.4 Illusionism and moral patienthood

What are the implications of illusionism for my intuitions about moral patienthood? In one sense, there might not be any.360 After all, my intuitions about (e.g.) the badness of conscious pain and the goodness of conscious pleasure were never dependent on the “reality” of specific features of consciousness that the illusionist thinks are illusory. Rather, my moral intuitions work more like the example I gave earlier: I sprain my ankle while playing soccer, don’t notice it for 5 seconds, and then feel a “rush of pain” suddenly “flood” my conscious experience, and I think “Gosh, well, whatever this is, I sure hope nothing like it happens to fish!” And then I reflect on what was happening prior to my conscious experience of the pain, and I think “But if that is all that happens when a fish is physically injured, then I’m not sure I care.” And so on. (For more on how my moral intuitions work, see Appendix A.)

But if we had a better-developed understanding of how consciousness works, this could of course have important implications for my intuitions about moral patienthood. Perhaps a satisfactory illusionist theory of consciousness developed 20 years from now will show that some of the core illusions of human consciousness are irrelevant to what I morally care about, such that animals without those particular illusions of human consciousness still have a morally relevant sort of consciousness. This is, indeed, the sort of reasoning that leads me to assign a higher probability that various animal taxa will have “consciousness of a sort I morally care about” than “consciousness as defined by example above.” But getting further clarity about this must await more clarity about how consciousness works.

Personally, my hunch is that the cognitive algorithms which produce (e.g.) the illusion of an explanatory gap, and other “core” illusions of (human) consciousness, are going to be pretty important to what I find morally important about consciousness. In other words, my hunch is that if we could remove those particular illusion-producing algorithms from how a human mind works, then all “pain” might be to that person like the “first 5 seconds of unnoticed nociception” from my sprained-ankle example.361 But this is just a hunch, and my hunch could easily be wrong, and I could imagine ending up with a different hunch if I spent ~20 hrs trying to extract where my intuitions on this are coming from, and I could also imagine myself having different moral intuitions about the sprained-ankle example and other issues if I took the time to do something closer to my “extreme effort” process.

Also, it could even be that a brain similar to mine, except that it lacked particular illusions, might actually be a “moral superpatient,” i.e. a moral patient with especially high moral weight. For example, suppose a brain like mine was modified such that its introspective processes had much greater access to, and understanding of, the many other cognitive processes going on within the brain, such that its experiences seemed much less “ineffable.” Arguably, such a being would have especially morally weighty experiences.

6.7 Appendix G. Consciousness and fuzziness

In this appendix, I elaborate on my earlier comments about the “fuzziness” of consciousness.

6.7.1 Fuzziness and moral patienthood

First, how does a “fuzzy” view of consciousness (both between and within individuals) interact with the question of moral patienthood?

The question of which beings should be considered moral patients by virtue of their phenomenal consciousness requires the collaboration of two kinds of reasoning:

  1. Scientific explanation: How does consciousness work? How can it vary? How can we measure it?
  2. Moral judgment: Which kinds of processes — in this case, related to consciousness — are sufficient for moral patienthood?

In theory, a completed scientific explanation of consciousness could reveal a relatively clear dividing line between the conscious and the not-conscious. The discovery of a clear dividing line could allow for relatively straightforward moral judgments about which beings are moral patients (via their phenomenal consciousness). But if we discover (as I expect we will) that there is no clear dividing line between the conscious and the not-conscious — i.e. that consciousness is “fuzzy” — this could make our consciousness-derived moral judgments quite difficult.362 To illustrate this point, I’ll tell a story about two fictional pre-scientific groups of people, each united by a common “sacred” value.

The first group, the water-lovers, have a sacred value for water. They don’t know what water is made of or how it works, but they have positive and negative (or probably negative) examples of it. The clear liquid in lakes is water; the clear liquid that falls from the sky sometimes is water; the discolored liquid in a pond is probably water with extra stuff in it, but they’re not absolutely certain; the red liquid that comes out of a stab wound probably isn’t water, but it might be water with some impurities, like the water in a pond. The water-lovers also have some observations and ideas about how water seems to work. It can quench thirst; it slips and slides over the body; it can’t be walked on; if it gets very cold it turns to ice; it seems to eventually float away into the air if placed in a pot with a fire beneath it; if snow falls on one’s hand it seems to transform into water; etc.

Then, some scientists invent modern chemistry and discover that water is H₂O, that it can be separated into hydrogen and oxygen via electrolysis, that it has particular boiling and melting points, that the red liquid that flows from a stab wound is partly water but partly other things, and so on. In this case, the water-lovers have little trouble translating their sacred value for “water” — defined in a pre-scientific ontology — into a value for certain things in the ontology of the real world discovered by science. The resolution to this “cross-ontology value translation” challenge is fairly straightforward: they value H₂O.363

A few of the water-lovers conclude that what scientists have really shown is that water doesn’t exist — only H₂O exists. They suffer an existential crisis and become nihilists. But most water-lovers carry on as before, assigning sacred value to water in lakes, rain that falls from the sky, and so on. A few of the group’s intellectuals devote themselves to the task of making judgments about edge cases, such as whether “heavy water” (²H₂O) should be thought to have sacred value.

Another group of people at the time are the life-lovers, who have a sacred value for life.364 They don’t know how life works, but they have many positive and probably-negative examples of it, and they have some observations and ideas about how it seems to work. Humans are alive, up to the point where they stop responding even upon being dunked in water. Animals are also alive, up to the point where they stop responding even when poked with a knife. Plants are alive, but they move much more slowly than animals do. The distinction between living and non-living things is thought to be that living things are possessed by a “life force,” or perhaps by some kind of supernatural soul, which allows living things — unlike non-living things — to grow and reproduce and respond to the environment.

Then, some scientists invent modern biology. They discover that the ill-defined set of processes previously gestured at with terms like “life” is fully explained by the mechanistic activity of atoms and molecules, with no role for a separate life force or soul. This set of processes includes the development and maintenance of a cellular structure, homeostasis, metabolism, reproduction, growth, a wide variety of mechanisms for responding to stimuli, and more. A beginner’s introduction to how these processes commonly work is not captured by a simple chemical formula, but by an 800-page textbook.

Moreover, the exact set of processes at work, and the parameters of those processes, vary widely across systems. Virtually all “living systems” depend on photosynthesis-derived materials, but some don’t. Most survive in a relatively narrow range of temperatures, chemical environments, radiation levels, and gravity strengths, but many thrive in more extreme environments. Most actively regulate internal variables (e.g. temperature) within a relatively narrow range and die if they fail to do so, but some can instead enter a potentially years-long state of non-activity, from which they can later be revived.365 Most age, but some don’t. They range from ~0.5 µm to 2.8 km in length, and from mere weeks to thousands of years in life span.366 There are many edge cases, such as viruses, parasites, and bacterial spores.367

All this leaves the life-lovers with quite a quandary. How should they preserve their sacred value for “life” — defined in a pre-scientific ontology — as a value for some but not all of these innumerable varieties of life-related processes discovered by science? How should they resolve their cross-ontology value translation problem?

Some conclude they never should have cared about living things in the first place. Others conclude that any carbon-based thing which grows and reproduces should have sacred value — but one consequence of this is that they assign sacred value to certain kinds of crystals,368 and some life-lovers find this a bit nutty. Others assign sacred value only to systems which satisfy a longer list of life-related criteria that includes “autonomous reproduction,” but this excludes the cuckoo yellowjacket wasp and many other animals, which again seems strange to many other life-lovers. The life-lovers experience a series of schisms over which life-related processes have sacred value.

Like many physicalists,369 I expect the scientific explanation of phenomenal consciousness to look less like the scientific explanation of water and more like the scientific explanation of life. That is, I don’t expect there to be a clear dividing line between the conscious and the non-conscious. And as scientists continue to decompose consciousness into its component processes and reveal their great variety, I expect people to come to radically different moral judgments about which kinds of consciousness-related processes are moral patients and which are not. I suspect this will be true even for people who started out with very similar values (as defined in a first-person ontology, or as defined in the inchoate scientific ontologies available to us in 2017).370

I don’t have a knock-down argument for the fuzzy view, but consider: does it seem likely there was a single gene mutation in phylogeny such that earlier creatures had no conscious experience at all, while carriers of the mutation do have some conscious experience? Does it seem likely there is some moment in the development of a human fetus or infant before which it has no conscious experience at all, and after which it does have some conscious experience? Is there a clear dividing line between what is and isn’t alive, or between software that does and doesn’t implement some form of “attention,” “memory,” or “self-modeling”? Is “consciousness” likely to be as simple as electrons, crystals, and water, or is it more likely to be a more complex set of interacting processes like “life” or “vision” or “language use” or “face-recognition,” which even in fairly “minimal” form can vary along many different dimensions such that there is no obvious answer as to whether some things should count as belonging to one of these classes or not, except by convention? (I continue this line of thinking below.)

As I mentioned above, one consequence of holding a fuzzy view of consciousness is that it can be hard to give a meaningful response to questions like “How likely do you think it is that chickens / fishes / fruit flies are conscious?” Or as Dennett (1995) puts it,

Wondering whether it is “probable” that all mammals have [consciousness] thus begins to look like wondering whether or not any birds are wise or reptiles have gumption: a case of overworking a term from folk psychology that has [lost] its utility along with its hard edges.

I considered multiple options for how to proceed given this difficulty,371 and in the end I decided to take the following approach for this report: I temporarily set my moral judgments aside, and investigated the likely distribution of “consciousness” (as defined above), while acknowledging the extreme fuzziness of the concept. Then, at the end, I brought my moral judgments back into play, and explored what my empirical findings might imply given my moral judgments.

If you’re interested, I explain a few of my moral intuitions in Appendix A. I suspect these intuitions affect this report substantially, because they probably affect my intuitions about which beings are “conscious,” even when I try to pursue that investigation with reference to consciousness as defined above rather than with reference to “types of consciousness I intuitively morally care about.”372

6.7.2 Fuzziness and Darwin

Here, I’d like to say a bit more about why I expect consciousness to be fuzzy.

In part, this is because I think we should expect the vast majority of biological concepts — including concepts defined in terms of biological cognition (as consciousness is defined above, even if it can also be applied to non-biological systems) — to be at least somewhat “fuzzy.”

One key reason for this, I think, is Darwin. Dennett (2016b) explains:

Ever since Socrates pioneered the demand to know what all Fs have in common, in virtue of which they are Fs, the ideal of clear, sharp boundaries has been one of the founding principles of philosophy. Plato’s forms begat Aristotle’s essences, which begat a host of ways of asking for necessary and sufficient conditions, which begat natural kinds, which begat difference-makers and other ways of tidying up the borders of all the sets of things in the world. When Darwin came along with the revolutionary discovery that the sets of living things were not eternal, hard-edged, in-or-out classes but historical populations with fuzzy boundaries… the main reactions of philosophers were to either ignore this hard-to-deny fact or treat it as a challenge: Now how should we impose our cookie-cutter set theory on this vague and meandering portion of reality?

“Define your terms!” is a frequent preamble to discussions in philosophy, and in some quarters it counts as Step One in all serious investigations. It is not hard to see why. The techniques of argumentation inaugurated by Socrates and Plato and first systematized by Aristotle are not just intuitively satisfying… but demonstrably powerful tools of discovery… Euclid’s plane geometry was the first parade case, with its crisp isolation of definitions and axioms, inference rules, and theorems. If only all topics could be tamed as thoroughly as Euclid had tamed geometry! The hope of distilling everything down to the purity of Euclid has motivated many philosophical enterprises over the years, different attempts to euclidify all the topics and thereby impose classical logic on the world. These attempts continue to this day and have often proceeded as if Darwin never existed…

…

An argument that exposes the impact of Darwinian thinking is David Sanford’s (1975) nice “proof” that there aren’t any mammals:

  1. Every mammal has a mammal for a mother.
  2. If there have been any mammals at all, there have been only a finite number of mammals.
  3. But if there has been even one mammal, then by (1), there have been an infinity of mammals, which contradicts (2), so there can’t have been any mammals. It’s a contradiction in terms.

Because we know perfectly well that there are mammals, we take this argument seriously only as a challenge to discover what fallacy is lurking within it. And we know, in a general way, what has to give: if you go back far enough in the family tree of any mammal, you will eventually get to the therapsids, those strange, extinct bridge species between the reptiles and the mammals… A gradual transition occurred over millions of years from clear reptiles to clear mammals, with a lot of intermediaries filling in the gaps. What should we do about drawing the lines across this spectrum of gradual change? Can we identify a mammal, the Prime Mammal, that didn’t have a mammal for a mother, thus negating premise (1)? On what grounds? Whatever the grounds are, they will compete with the grounds we could use to support the verdict that that animal was not a mammal – after all, its mother was a therapsid. What could be a better test of therapsid-hood than that? Suppose that we list ten major differences used to distinguish therapsids from mammals and declare that having five or more of the mammal marks makes an animal a mammal. Aside from being arbitrary – why ten instead of six or twenty, and shouldn’t they be ordered in importance? – any such dividing line will generate lots of unwanted verdicts because during the long, long period of transition between obvious therapsids and obvious mammals there will be plenty of instances in which mammals (by our five + rule) mated with therapsids (fewer than five mammal marks) and had offspring that were therapsids born of mammals, mammals born of therapsids born of mammals, and so forth! …What should we do? We should quell our desire to draw lines. We can live with the quite unshocking and unmysterious fact that, you see, there were all these gradual changes that accumulated over many millions of years and eventually produced undeniable mammals.

The insistence that there must be a Prime Mammal, even if we can never know when and where it existed, is an example of hysterical realism. It invites us to reflect that if we just knew enough, we’d see – we’d have to see – that there is a special property of mammal-hood – the essence of mammal-hood – that defines mammals once and for all. To deny that there is such an essence, philosophers sometimes say, is to confuse metaphysics with epistemology: the study of what there (really) is with the study of what we can know about what there is. I reply that there may be occasions when thinkers do go off the rails by confusing a metaphysical question with a (merely) epistemological question, but this must be shown, not just asserted. In this instance, the charge of confusing metaphysics with epistemology is just a question-begging way of clinging to one’s crypto essentialism in the face of difficulties.

…

In particular, the demand for essences with sharp boundaries blinds thinkers to the prospect of gradualist theories of complex phenomena, such as life, intentions, natural selection itself, moral responsibility, and consciousness.

If you hold that there can be no borderline cases of being alive (such as, perhaps, viruses or even viroids or motor proteins), you are more than halfway to élan vital before you start thinking about it. If no proper part of a bacterium, say, is alive, what “truth maker” gets added that tips the balance in favor of the bacterium’s being alive? The three more or less standard candidates are having a metabolism, the capacity to reproduce, and a protective membrane, but since each of these phenomena, in turn, has apparent borderline cases, the need for an arbitrary cutoff doesn’t evaporate. And if single-celled “organisms” (if they deserve to be called that!) aren’t alive, how could two single-celled entities yoked together with no other ingredients be alive? And if not two, what would be special about a three-cell coalition? And so forth.

Of course, “fuzziness” is not limited to biological and biology-descended concepts. Mathematical concepts and some concepts used in fundamental physics have sharp boundaries, but the further from these domains we travel, the less sharply defined our concepts tend to be.373

6.7.3 Fuzziness and auto-activation deficit

Finally, here is one more “intuition pump” (Dennett 2013) in favor of a “fuzzy” view about consciousness.

Consider a form of akinetic mutism known variously as “athymhormia,” “psychic akinesia,” or “auto-activation deficit.” Leys & Henon (2013) explain:

Patients with akinetic mutism appear alert or at least wakeful, because their eyes are open and they have active gaze movements. They are mute and immobile, but they are able to follow the observer or moving objects with their eyes, to whisper a few monosyllables, and to have slow feeble voluntary movements under repetitive stimuli. The patients can answer questions, but otherwise never voluntarily start speaking. In extreme circumstances such as noxious stimuli, they can become agitated and even say appropriate words. This neuropsychological syndrome occurs despite the lack of obvious alteration of sensory motor functions. This syndrome results in impaired abilities in communicating and initiating motor activities.

…Akinetic mutism is due to disruption of reticulo-thalamo-frontal and extrathalamic reticulo-frontal afferent pathways…

…A discrepancy may be present between hetero- and auto-activation, the patient having an almost normal behavior under external stimuli. This peculiar form of akinetic mutism has been reported as [athymhormia], “loss of psychic self-activation” or “pure psychic akinesia”…

Below, I quote some case reports of auto-activation deficit (AAD), and then explain why I think they (weakly) suggest a fuzzy view of consciousness.

Case report #1: A 35-year-old woman

From ch. 4 of Heilman & Satz (1983), by Damasio & Van Hoesen, on pp. 96-99:

J is a 35-year-old woman…

On the night of admission she was riding in a car driven by her husband and talking normally, when she suddenly slumped forward, interrupted her conversation, and developed weakness of the right leg and foot. On arrival at the hospital she was alert but speechless…

…There was a complete absence of spontaneous speech. The patient lay in bed quietly with an alert expression and followed the examiner with the eyes. From the standpoint of affect her facial expression could be best described as neutral. She gave no reply to the questions posed to her, but seemed perplexed by this incapacity. However, the patient did not appear frustrated… She never attempted to mouth words… She made no attempt to supplement her verbal defect with the use of gesture language. In striking contrast to the lack of spontaneous speech, the patient was able, from the time of admission, to repeat words and sentences slowly, but without delay in initiation. The ease in repetition was not accompanied by echolalia [unsolicited repetition of vocalizations made by someone else], and the articulation and melody of repeated speech were normal. The patient also gave evidence of good aural comprehension of language by means of nodding behavior… Performance on the Token Test was intact and she performed normally on a test of reading comprehension…

Spontaneous and syntactically organized utterances to nurses and relatives appeared in the second week postonset, in relation to immediate needs only. She was at this point barely able to carry a telephone conversation using mostly one- and two-word expressions. At 3 weeks she was able to talk in short simple but complete sentences, uttered slowly… Entirely normal articulation was observed at all times…

On reevaluation, 1 month later, the patient was remarkably recovered. She had considerable insight into the acute period of the illness and was able to give precious testimony about her experiences then. Asked if she ever suffered anguish for being apparently unable to communicate she answered negatively. There was no anxiety, she reported. She didn’t talk because she had “nothing to say.” Her mind was “empty.” Nothing “mattered.” She apparently was able to follow our conversations even during the early period of the illness, but felt no “will” to reply to our questions. In the period after discharge she continued to note a feeling of tranquility and relative lack of concern…

Case report #2: A 61-year-old clerk

Bogousslavsky et al. (1991) explain:

A 61-year old clerk with known [irregular heartbeat] was admitted because of “confusion” and [a drooping eyelid]…

According to his family, [following a stroke] he had become “passive” and had lost any emotional concern. He was [drowsy] and was orientated in time and place…, but remained apathetic and did not speak spontaneously. He moved very little unless asked to do so, only to go to the bathroom three or four times a day. He would sit down at the table to eat only when asked by the nurses or family and would stop eating after a few seconds unless repeatedly stimulated. During the day, he would stay in bed or in an armchair unless asked to go for a walk. He did not react to unusual situations in his room, such as Grand Mal seizures in another patient. He did not read the newspapers and did not watch the television. This behaviour contrasted with preserved motor and speech abilities when he was directly stimulated by another person: with constant activation, he was able to move and walk normally, he could play cards, answer questions, and read a test and comment on it thereafter; however, these activities would stop immediately if the external stimulation disappeared. He did not show imitation and utilization behaviour [imitation behavior occurs when patients imitate an examiner’s behavior without being instructed to do so; utilization behavior occurs when patients try to grab and use everyday objects presented to them, without being instructed to do so], and could inhibit socially inadequate acts, even when asked to perform them by an examiner (shouting in the room, undressing during daytime); however, he did not react emotionally to such orders. Also, he showed no emotional concern [about] his illness, though he did not deny it, and he remained indifferent when he had visitors or received gifts. He did not smile, laugh or cry. He never mentioned his previous activities and, when asked about his job, he answered he had no project to go back to work. When asked about his private thoughts, he just said “that’s all right”, “I think of nothing,” “I don’t want anything.” Because his motor and mental abilities seemed normal when stimulated by another person, his family and friends wondered whether he was really ill or was inactive on purpose, to annoy them. They complained he had become “a larva.”

Formal neuropsychological examination using a standard battery of tests was performed 3, 10, 25 and 60 days after stroke, including naming, repetition, comprehension, writing, reading, facial recognition, visuospatial recognition, topographic orientation on maps, drawing, copy of the Rey-Osterrieth figure, which were normal. No memory dysfunction was found: the patient could evoke remote and recent events of his past, visual… and verbal… learning, delayed reproduction of the Rey-Osterrieth figure showed normal results for age. Only minor disturbances were found on “frontal lobe tests”… His symbolic understanding of proverbs was preserved. The patient could cross out 20 lines distributed evenly on a sheet of paper; with no left- or right-side preference. [Editor’s note: see the paper for sources describing these tests.]

The patient was discharged unchanged two months after stroke to a chronic care institution, because his family could not cope with his behavioural disturbances, though they recognized that his intellect was spared.

I quote from five additional AAD case studies in a footnote.374

Also, Laplane & Dubois (2001) summarize findings from several other cases that I have not read because they were published in French. For example, citing the case study by Damasio & Van Hoesen plus three French papers containing additional case studies, they write:

It is surprising that subjects who are cognitively unimpaired can remain inactive for hours without complaining of boredom. Their mind is “empty, a total blank,” they say. In the most typical cases, they have no thoughts and no projections in the future. Although purely subjective, this feeling of emptiness seems to be a reliable symptom, since it has been reported in almost the same terms by numerous patients.

What can we learn from these case studies? One lesson, I think, is that phenomena we might have previously thought were inseparable are, in fact, separable, as Watt & Pincus (2004) argue. They suggest that “milder versions” of akinetic mutism, such as cases in which patients “respond to verbal inquiry” (as in the cases above), “appear to offer evidence of the independence of consciousness from an emotional bedrock, and that the former can exist without the latter.” They also offer a slightly different interpretation, according to which

lesser versions of [akinetic mutism]… may allow some phenomenal content, while the more severe versions… may show a virtual “emptying out” of consciousness. In these cases, events may be virtually meaningless and simply don’t matter anymore… [In more severe cases] patients [might] live in a kind of strange, virtually unfathomable netherworld close to the border of a persistent vegetative state.

They go on to say that their summary of disorders of consciousness “emphasizes their graded, progressive nature and eschews an all-or-nothing conceptualization. While intuitively appealing, an all-or-nothing picture of consciousness provides a limited basis for heuristic empirical study of the underpinnings of consciousness from a neural systems point of view, as compared to a graded or hierarchical one that emphasizes the core functional envelopes of emotion, intention, and attention.”

For a similar illustration involving various pain phenomena rather than AAD, see Corns (2014).

6.8 Appendix H. First-order views, higher-order views, and hidden qualia

In this appendix, I describe what I see as some of the most important arguments in the debate between first-order and higher-order theorists, which (in the present literature) is perhaps the most common way for theorists to argue about the complexity of consciousness and the distribution question (as mentioned above). See here for an explanation of what distinguishes first-order and higher-order views about consciousness.

Let’s start with the case for a relatively complex account of consciousness. In short, a great deal of cognitive processing that seems to satisfy (some) first-order accounts of consciousness — e.g. the processing which enables the blindsight subject to correctly guess the orientation of lines in her blind spot, or which occurs in the dorsal stream of the human visual system,375 or which occurs in “smart” webcams or in Microsoft Windows — is, as far as we know, unconscious. This is the basic argument that consciousness must be more complex than first-order theorists suggest, whether or not “higher-order” theories, as narrowly defined in e.g. Block (2011), are correct. What can the first-order theorist say in reply?

Most responses I’ve seen seem unpersuasive to me.376 However, there is at least one reply that seems (to me) to have substantial merit, though it does not settle the issue.377

This reply says that there may be “hidden qualia” (perhaps including even “hidden conscious subjects”), in the sense that there may be conscious experiences — in the human brain and perhaps in other minds — that are not accessible to introspection, verbal report, and so on. If so, then this would undermine the basic argument (outlined above) that consciousness must be more complex than first-order theorists propose. Perhaps (e.g.) the dorsal stream is conscious, but its conscious experiences simply are not accessible to “my” introspective processes and the memory and verbal reporting modules that are hooked up to “my” introspective processes.

This “hidden qualia” view certainly seems coherent to me.378 The problem, of course, is that (by definition) we can’t get any introspective evidence of hidden qualia, and without a stronger model of how consciousness works, we can’t use third-person methods to detect hidden qualia, either. Nevertheless, given what little we know so far about consciousness, there are some (inconclusive) reasons to think that hidden qualia may exist.379

6.8.1 Block’s overflow argument

First, consider the arguments in Block (2007b):

No one would suppose that activation of the fusiform face area all by itself is sufficient for face-experience. I have never heard anyone advocate the view that if a fusiform face area were kept alive in a bottle, that activation of it would determine face-experience – or any experience at all… The total neural basis of a state with phenomenal character C is itself sufficient for the instantiation of C. The core neural basis of a state with phenomenal character C is the part of the total neural basis that distinguishes states with C from states with other phenomenal characters or phenomenal contents, for example the experience as of a face from the experience as of a house… So activation of the fusiform face area is a candidate for the core neural basis – not the total neural basis – for experience as of a face…

Here is the illustration I have been leading up to. There is a type of brain injury which causes a syndrome known as visuo-spatial extinction. If the patient sees a single object on either side, the patient can identify it, but if there are objects on both sides, the patient can identify only the one on the right and claims not to see the one on the left… With competition from the right, the subject cannot attend to the left. However… when [a patient named] G.K. claims not to see a face on the left, his fusiform face area (on the right, fed strongly by the left side of space) lights up almost as much as when he reports seeing the face… Should we conclude that [a] G.K. has face experience that – because of lack of attention – he does not know about? Or that [b] the fusiform face area is not the whole of the core neural basis for the experience, as of a face? Or that [c] activation of the fusiform face area is the core neural basis for the experience as of a face but that some other aspect of the total neural basis is missing? How are we to answer these questions, given that all these possibilities predict the same thing: no face report?

Block argues that option [a] is often what’s happening inside our brains (whether or not it is what’s happening for G.K. and face experiences in particular). In other words, he thinks there are genuine phenomenal experiences going on inside our heads to which we simply don’t have cognitive access, because the capacity/bandwidth of the cognitive access mechanisms is more limited than the capacity/bandwidth of the phenomenality mechanisms — that is, that “phenomenality overflows access.”

Clark & Kiverstein (2007) summarize Block’s “overflow argument” like this:

The psychological data seem to show that subjects can see much more than working memory enables them to report. Thus, in the Landman et al. (2003) experiments, for instance, subjects show a capacity to identify the orientation of only four rectangles from a group of eight. Yet they typically report having seen the specific orientation of all eight rectangles. Working memory here seems to set a limit on the number of items available for conceptualization and hence report.

Work in neuroscience then suggests that unattended representations, forming parts of strong-but-still-losing clusters of activation in the back of the head, can be almost as strong as the clusters that win, are attended, and hence get to trigger the kinds of frontal activity involved in general broadcasting (broadcasting to the “global workspace”). But whereas Dehaene et al. (2006) treat the contents of such close-seconds as preconscious, because even in principle (given their de facto isolation from winning frontal coalitions) they are unreportable, Block urges us to treat them as phenomenally conscious, arguing that “the claim that they are not conscious on the sole ground of unreportability simply assumes metaphysical correlationism”… That is to say, it simply assumes what Block seeks to question – that is, that the kind of functional poise that grounds actual or potential report is part of what constitutes phenomenology. Contrary to this way of thinking, Block argues that by treating the just-losing coalitions as supporting phenomenally conscious (but in principle unreportable) experiences, we explain the psychological results in a way that meshes with the neuroscience.

The argument from mesh (which is a form of inference to the best explanation) thus takes as its starting point the assertion that the only grounds we have for treating the just-losing back-of-the-head coalitions as non-conscious is the unreportability of the putative experiences.

Block’s arguments don’t settle the issue, of course. As the numerous replies to Block (2007b) in that same journal issue point out, there are a great many models which fit the data Block describes (plus other data reported by others). I haven’t evaluated these experimental data and these models in enough detail to have a strong opinion about the strength of Block’s argument, but at a glance it seems to deserve at least some weight, at least in our current state of ignorance about how consciousness works.380

6.8.2 Split-brain patients

Second, consider the famous studies of split-brain patients, conducted by Michael Gazzaniga and others, which have often been argued to provide evidence of at least two separate conscious subjects in a single human brain, with each one (potentially) lacking introspective access to the other.

Gazzaniga himself originally interpreted his split-brain studies to indicate two separate streams of consciousness (Gazzaniga & LeDoux 1978), but later (Gazzaniga 1992) rejected the “double-consciousness” view, and suggested instead that consciousness is computed by the left hemisphere. Later (Gazzaniga 2002), he “conceded that the right hemisphere might be conscious to some degree, but the left hemisphere has a qualitatively different kind of consciousness, which far exceeds what’s found in the right.”381 Most recently, in 2016, Gazzaniga wrote that “there is ample evidence suggesting that the two hemispheres possess independent streams of consciousness following split-brain surgery.”382

Glancing at the literature myself, the evidence seems unclear. For example, as far as I know, we do not have detailed verbal reports of conscious experience from both hemispheres, even though three different split-brain patients have been able to learn to engage in some limited verbal communication from the right (and not just the left) hemisphere.383 I don’t know whether we lack this evidence because those patients just weren’t asked the relevant questions (e.g. they weren’t asked to describe their experience), or because their right-hemisphere verbal communication is too deficient (as with the “verbal communication” of the tiny portion of those with hydranencephaly that can utter word-ish vocalizations at all). See also the arguments back and forth in Schechter (2012) and Pinto et al. (2017).

Also note that if the evidence ends up supporting the view that there are (at least) two streams of consciousness in split-brain patients, this would seem to undermine the “basic argument” for relatively complex theories of consciousness (outlined above) less than e.g. Block’s overflow argument does: a “double consciousness” view might show only that each hemisphere has the resources to support a (still quite complex) stream of consciousness when the two hemispheres are disconnected, rather than suggesting that the human brain may support a multitude of (potentially quite simple) streams of conscious processing.

6.8.3 Other cases of hemisphere disconnection

As Blackmon (2016) points out, in addition to split-brain patients there are also patients who have undergone a variety of “hemisphere disconnection” procedures:

surgical hemisphere disconnections are distinct from the more familiar “split-brain” phenomenon in which both hemispheres, despite having their connection via the corpus callosum severed, are connected with the rest of the brain as well as with the body via functioning sensory and motor pathways. Split-brain patients have two functioning hemispheres which receive sensory data and send motor commands; hemispherectomy patients do not.

Hemisphere disconnection procedures include:

  • Anatomical hemispherectomy, in which an entire hemisphere is surgically removed from the cranium.
  • Functional hemispherectomy, in which part of a hemisphere is removed and the remainder is left inside the cranium but disconnected from the rest of the brain.
  • Hemispherotomy, in which a hemisphere is disconnected entirely from the rest of the brain but left entirely in the cranium.
  • The Wada test, which (successively) anesthetizes each hemisphere while the other hemisphere remains awake, in order to test how well each hemisphere does with memory, language, etc. without the help of the other hemisphere. We can think of this as a “temporary” hemisphere disconnection.

Like split-brain patients, patients undergoing these procedures seem to remain conscious, recover quickly, hold regular jobs, describe their experiences on online forums, and so on.

Studies of hemisphere disconnection provide data which complement the consciousness-relevant data we have from studies of split-brain patients. For example, hemisphere disconnection studies provide unambiguous data about the capabilities of the surviving hemisphere, whereas there is some ambiguity on this matter from split-brain studies, since interhemispheric cortical transfer of information remains possible in split-brain patients (via the superior colliculus), and the outputs of each hemisphere in split-brain patients might be integrated elsewhere in the brain (e.g. in the cerebellum).

I haven’t examined the hemisphere disconnection literature. But as with split-brain research, it seems as though it might (upon further examination) do some work to undermine the “basic argument” for relatively complex theories of consciousness outlined above.

6.8.4 Shiller’s arguments

Shiller (2016) makes two additional arguments for the plausibility of hidden qualia actually existing.384

His exceptionality argument goes like this: our other senses didn’t evolve to detect and make use of all potentially available information (e.g. about very small things, far away things, very high-pitched sounds, or ultraviolet light), because doing so would be more costly (evolutionarily) than it’s worth; probably, then, our introspective powers also haven’t evolved to access all the qualia and other processes going on in our heads. Unless introspection is exceptional (in a certain sense) among our senses, we should expect there to be qualia (and other cognitive processes) that our introspection just doesn’t have access to.

Next, Shiller’s argument from varieties goes like this:

The exceptionality argument centered on the likely limitations of introspection. The second argument that I will present focuses on the variety of kinds of hidden qualia that we might possibly have. Since it is independently plausible that we have many different varieties of qualia, it is more than merely plausible that we have at least one variety of hidden qualia. I have in mind a probabilistic argument: if we judge that there is a not too small probability that we have hidden qualia of each kind, then (given their independence) we are committed to thinking that it is fairly probable that we have at least one kind of hidden qualia. I will briefly describe five kinds of hidden qualia and present a few considerations for thinking that we might have hidden qualia of each kind…

At a glance, these two arguments seem to have some force, especially the first one. But I haven’t spent much time evaluating them in detail.
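
To make the probabilistic step in the argument from varieties concrete, here is a toy calculation. The five probabilities below are made-up numbers of my own (not Shiller’s), used only to illustrate how modest per-kind probabilities can combine, under independence, into a fairly high probability that at least one kind of hidden qualia exists.

```python
# Toy calculation (made-up probabilities, for illustration only) of the
# probabilistic step in Shiller's argument from varieties: if each of several
# candidate kinds of hidden qualia independently has a modest probability of
# existing, the probability that at least one kind exists can still be high.

p_each_kind = [0.2, 0.2, 0.15, 0.1, 0.1]  # hypothetical probabilities for five kinds

p_none = 1.0
for p in p_each_kind:
    p_none *= (1.0 - p)  # probability that this particular kind does NOT exist

print(f"P(at least one kind of hidden qualia exists) = {1.0 - p_none:.2f}")
# With these numbers: 1 - (0.8 * 0.8 * 0.85 * 0.9 * 0.9) ~ 0.56
```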

Of course, even if these and other arguments related to the complexity of consciousness turned out to strongly favor either a “relatively simple” or a “relatively complex” account of consciousness as defined by example above, one could still argue that “morally-relevant consciousness” can be more or less simple than “consciousness as defined by example above.”

6.9 Appendix Z. Miscellaneous elaborations and clarifications

This appendix collects a variety of less-important sub-appendices.

6.9.1 Appendix Z.1. Some theories of consciousness

Below, I list some theories of consciousness that I either considered briefly or read about in some depth.

The items in the table below overlap each other, they are not all theories of the exact same bundle of phenomena, and several of them are families of theories rather than individual theories. In some cases, the author(s) of a theory might not explicitly endorse both physicalism and functionalism about consciousness, but it seems to me their theories could be adapted in some way to become physicalist functionalist theories.

Obviously, this list is not comprehensive.

In no particular order:

THEORY | SOME EXPOSITORY REFERENCES | EXAMPLE CRITIQUES
Global workspace theories | Baars et al. (2013); Shanahan (2010), ch. 4; Dehaene (2014); Dehaene et al. (2014); Shevlin (2016) | Lau & Rosenthal (2011); Prinz (2012), ch. 1; Block (2014); Pitts et al. (2014); Kemmerer (2015)385
Temporal binding theory | Crick & Koch (1990) | Prinz (2012), ch. 1
Daniel Dennett’s multiple drafts / user-illusion theory | Dennett (1991, 2005, 2017)386 | Block (1993) and the critiques cited in Dennett (1993a, 1993b, 1994)
First-order representational theories | Kirk (1994, 2007, 2017); Dretske (1995); Tye (1995); Carruthers (2005), ch. 3 | Carruthers (2000); Prinz (2012), ch. 1; Gennaro (2011), ch. 3
Higher-order theories | Lycan (1996); Carruthers (2000); Rosenthal (2006, 2009); Lau & Rosenthal (2011); Carruthers (2016) | Prinz (2012), ch. 1; Block (2011); Carruthers (2017)
AIR theory | Prinz (2012)387 | Lee (2015); Mole (2013); Barrett (2014)
Thomas Metzinger’s theory | Metzinger (2003, 2010) | Graham & Kennedy (2004)
Integrated information theory | Oizumi et al. (2014); Tononi (2015) | Cerullo (2015); Thagard & Stewart (2014); Graziano (2013), ch. 11
Antonio Damasio’s theory | Damasio (2010) | Prinz (2012), ch. 1

Other theories I looked at briefly include prediction error minimization (Hohwy 2012), Nicholas Humphrey’s theory (Humphrey 2011), sensorimotor theory (O’Regan & Noe 2001; O’Regan 2011, 2012), the geometric theory (Fekete et al. 2016), semantic pointer competition (Thagard & Stewart 2014), the radical plasticity thesis (Cleeremans 2011), Björn Merker’s theory (Merker 2005, 2007), Markkula’s narrative behavior theory (Markkula 2015), Derek Denton’s theory (Denton 2006), attention schema theory (Graziano 2013; Webb & Graziano 2015; Graziano & Webb 2017), Julian Jaynes’ theory (Jaynes 1976; Kuijsten 2008), Gary Drescher’s theory (Drescher 2006, ch. 2; notes from my conversation with Gary Drescher), and Orch-OR theory (Hameroff & Penrose 2014).

What, exactly, must a theory of phenomenal consciousness explain? Example lists of desiderata include Van Gulick (1995), pp. 93-95 of Carruthers (2000), chapter 3 of Metzinger (2003), table 1 of Seth & Baars (2005), Aleksander (2007), chapter 1 of Prinz (2012), much of Baars (1988), and section 4.3 of Shevlin (2016).

For some other thoughts on theories of consciousness, see Appendix B.

6.9.2 Appendix Z.2. Some varieties of conscious experience

In the table below, I list some varieties of conscious experience, illustrating the diversity of conscious states for which we have some evidence about the subjective qualities of the phenomena from human verbal self-report.

PHENOMENON | EXAMPLE SOURCES
Akinetic mutism, especially a form called “athymhormia,” “psychic akinesia,” or “auto-activation deficit” | Laplane & Dubois (2001); Leys & Henon (2013); Leu-Semenescu et al. (2013); Davies & Levy (2016); Klein (2017a)
Memory disorders, e.g. anterograde amnesia and persistent déjà vu | O’Connor et al. (2007); Markowitsch (2008); Dittrich (2016)
Sensory agnosias and similar, including e.g. face blindness and simultanagnosia | Humphreys (1999); Farah (2004); Husain (2008); Coslett & Lie (2008); Behrmann & Nishimura (2010); Coslett (2011); Barton (2011); Zihl (2013); Chechlacz & Humphreys (2014); Remmer (2015)
Synesthesia | Ward (2013); Banissy et al. (2014); Deroy & Spence (2016); Brogaard (2016)
Hypnosis | Shor & Orne (1965); Pekala & Kumar (2000); Jamieson (2007); Nash & Barnier (2008); Cardeña et al. (2013)
Phantom limbs and other phantom experiences | Katz (2000); Halligan (2002); Ramachandran & Brang (2009); Andreotti et al. (2014)
Dreaming, especially e.g. RBD, dreams within dreams, sleepwalking, and other unusual dreaming phenomena | Hobson et al. (2000); Carey (2001); sec. 4.2.5 of Metzinger (2003); Domhoff (2007); Boeve (2010); Mahowald (2011); Buzzi (2011); Chen et al. (2013); Schenck (2015); Siclari et al. (2017)
Daydreaming | Giambra (2000); Smallwood (2015); Dorsch (2015); Konishi & Smallwood (2016)
Lucid dreaming | LaBerge & DeGracia (2000); Metzinger (2013); Stumbrys et al. (2014); Voss & Hobson (2015)
Sensory substitution | Bach-y-Rita & Kercel (2003); Lenay et al. (2003); Collignon et al. (2011); Maidenbaum et al. (2014); Stiles & Shimojo (2015)
Body ownership illusions, e.g. the rubber hand illusion and out-of-body experiences | Banissy et al. (2009); Blanke & Metzinger (2009); Tsakiris (2010); Blanke et al. (2015); Kilteni et al. (2015)
Psychedelic experiences | Newcombe & Johnson (1999); Shanon (2002); chs. 5-7 in volume 2 of Cardeña & Winkelman (2011); PsychonautWiki (2016)
Mystical experience | McNamara & Butler (2013); Wulff (2014)
Absence seizures | Panayiotopoulos (2008); Arzimanoglou & Ostrowsky-Coste (2010); Tenney & Glauser (2013); Gotman & Kostopoulos (2013); Luijtelaar et al. (2014)
Pain asymbolia and (maybe) other sensory/affective dissociations | Grahek (2007); Shriver (2014); Bain (2014); Klein (2015)
Delirium | Bhat & Rockwood (2007); Eeles et al. (2013)
Fluent aphasia, a.k.a. “Wernicke’s aphasia” | Edwards (2005); Marshall (2010); video example from Tactus Therapy
Hemisphere disconnection, including split-brain procedures and (congenital) agenesis of the corpus callosum | Gazzaniga & LeDoux (1978); Gazzaniga (2000); ch. 5 of Boller & Grafman (2000);388 Paul et al. (2007); de Ribaupierre and Delalande (2008); Bayne (2008); Schechter (2012); Blackmon (2016)
Dissociative identity disorder | Dorahy et al. (2014); Lynn et al. (2014)
Hallucinations | Aleman & Larøi (2008); Blom (2010); Luhrmann (2011); McCarthy-Jones (2012); Blom & Sommer (2012); Jardri et al. (2013); Collerton et al. (2015)
Craniopagus twins, especially Krista and Tatiana Hogan | Dominus (2011); Squair (2012); Pyke (2014); Langland-Hassan (2015); MacQueen (2015)389
Blindsight and related deficits, e.g. “numb-sense” and “deaf-hearing” | Weiskrantz (1997); Holt (2003); Cowey (2004); Weiskrantz (2007); Cowey (2010); Overgaard (2011)
Delusional misidentification syndromes, e.g. Cotard delusion | Young (2008); Debruyne (2009); Hirstein (2010); Politis & Loane (2012); Blom (2014); Langdon et al. (2014); Klein & Hirachan (2014); Walsh et al. (2015)
Aphantasia | Zeman et al. (2015); Vito & Bartolomeo (2016)
Anesthesia awareness | Mashour & LaRock (2008); Mashour (2009)
Anosognosia | Heilman (1991); Vuilleumier (2004); Vallar & Ronchi (2006); Prigatano (2010); Moro et al. (2011); Chen et al. (2014)
Locked-in syndrome | Wikipedia’s List of people with locked-in syndrome; Bauer et al. (1979); Pistorius (2013); Kyselo & Paolo (2015)390
Terminal lucidity391 | Nahm et al. (2012)
Sight restoration following long-term congenital blindness | Ostrovsky et al. (2006); Held et al. (2011); Degenaar & Lokhorst (2014)

These and other phenomena which reveal the variety of human conscious experience are collected in Kunzendorf & Wallace (2000), Vaitl et al. (2005), Bayne et al. (2009), Windt (2011) pp. 238-244, Cardeña & Winkelman (2011), Cardeña et al. (2014), Giacino et al. (2014), Bayne & Hohwy (2016), Laureys et al. (2015), Kriegel (2015), ch. 4 of Gennaro (2016), chs. 3-19 of Perry et al. (2002), Part III of Schneider & Velmans (2017), and other sources.

Phenomena not included in the table above, because we don’t (to my knowledge) have human verbal self-report of what it is like to subjectively experience them, include:

  • Possible consciousness detected via neuroimaging of vegetative state patients: see e.g. Owen (2013); Chaudhary et al. (2017); Klein (2017a).
  • Hydranencephaly: According to Aleman & Merker (2014), a small number of persons with hydranencephaly seem to use words meaningfully, and a small (perhaps overlapping) number also survive into their teenage or later years. However, I doubt there are any cases in which hydranencephalics can verbally report some details of their subjective experiences (if they have any).392

6.9.3 Appendix Z.3. Challenging dualist intuitions

I suspect that our dualist intuitions are the biggest barrier to embracing a physicalist, functionalist, illusionist theory of consciousness, which might otherwise be, as Dennett (2016a) puts it, “the obvious default theory of consciousness.”

Some sources that might be especially useful here include Dennett (1991, 2017), chapter 2 of Drescher (2006), Metzinger (2010), and Graziano (2013).

I think it can also be helpful to think about what kind of scientific progress might be needed for one to grok, at a “gut level,” how phenomenal consciousness could be “just” a set of physical processes and nothing more — even though the feeling of understanding is not necessarily strong evidence of accurate understanding (Trout 2007, 2016).

In my experience, the subjective feeling of understanding often comes when I can “see” (visualize) how a system of processes I already understand could “add up to” the system I’m trying to understand. Eliezer Yudkowsky illustrates this phenomenon with two examples, heat and socks:

On a high level, we can see heat melting ice and flowing from hotter objects to cooler objects. We can, by imagination, see how vibrating particles could actually constitute heat rather than causing a mysterious extra ‘heat’ property to be present. Vibrations might flow from fast-vibrating objects to slow-vibrating objects via the particles bumping into each other and transmitting their speed. Water molecules vibrating quickly enough in an ice cube might break whatever bonds were holding them together in a solid object.

…

For an even more transparent reductionist identity, consider, “You’re not really wearing socks, there are no socks, there’s only a bunch of threads woven together that looks like a sock.” Your visual cortex can represent this identity directly, so it feels immediately [obvious] that the sock just is the collection of threads; when you imagine sock-shaped woven threads, you automatically feel your visual model recognizing a sock.

…

The gap between mind and brain is larger than the gap between heat and vibration, which is why humanity understood heat as disordered kinetic energy long before anyone had any idea how ‘playing chess’ could be decomposed into non-mental simpler parts [as when chess-playing computers were invented].

Similarly, Dennett (1986) relates the following:

Sherry Turkle… talks about the reactions small children have to computer toys when they open them up and look inside. What they see is just an absurd little chip and a battery and that’s all. They are baffled at how that could possibly do what they have just seen the toy do. Interestingly, she says they look at the situation, scratch their heads for a while, and then they typically say very knowingly: “It’s the battery!” (A grown-up version of the same fallacy is committed by the philosopher John Searle, 1980, when he, arriving at a similar predicament, says: “It’s the mysterious causal powers of the brain that explain consciousness.”) Suddenly facing the absurdly large gap between what we know from the inside about consciousness and what we see if we take off the top of somebody’s skull and look in can provoke such desperate reactions. When we look at a human brain and try to think of it as the seat of all that mental activity, we see something that is just as incomprehensible as the microchip is to the child when she considers it to be the seat of all the fascinating activity that she knows so well as the behavior of the simple toy.

Unfortunately, current theories of consciousness aren’t yet detailed enough for most or all of us to visualize how the processes they posit could “add up to” subjective experience. (See also Appendix B.)

When I feel as though no amount of functional cognitive processing could ever “add up to” the phenomenality of phenomenal consciousness, I try to remind myself that some ancient biologists probably felt the same way when they tried to imagine how the interaction of inanimate, non-living parts could ever “add up to” living systems. I also remind myself of Edgar Allan Poe, in 1836, failing to see how mechanical parts could ever “add up to” the intelligent play of chess, despite his awareness of Charles Babbage’s Analytical Engine:

Arithmetical or algebraical calculations are, from their very nature, fixed and determinate. Certain data being given, certain results necessarily and inevitably follow… But the case is widely different with the Chess-Player [i.e. von Kempelen’s “Mechanical Turk”]. With him there is no determinate progression. No one move in chess necessarily follows upon any one other. From no particular disposition of the [chess pieces] at one period of a game can we predicate their disposition at a different period… Now even granting that the movements of the Automaton Chess-Player were in themselves determinate, they would be necessarily interrupted and disarranged by the indeterminate will of his antagonist. There is then no analogy whatever between the operations of the Chess-Player, and those of the calculating machine of Mr. Babbage… It is quite certain that the operations of the Automaton are regulated by mind, and by nothing else. Indeed this matter is susceptible of a mathematical demonstration, a priori.393

So, if you cannot now intuitively grok how phenomenal consciousness could be “just” a set of physical processes and nothing more, perhaps you can nevertheless take it to be a lesson of history that, once the mechanisms of consciousness are much more thoroughly understood and described than they are now, you may then be able to “see” how they add up to phenomenal consciousness, just as you can now see how some computer hardware and software can add up to intelligent chess play. (If you’re not familiar with how computers play chess, see e.g. Levy & Newborn 1991.)

Of course, nothing I’ve said in this section engages the arguments that have been put forward for why a physicalist, functionalist explanation of consciousness may not be forthcoming (e.g. Chalmers 1997, 2010). I don’t discuss those arguments in this report (see above);394 the purpose of this appendix is merely to give the reader a sense of why I distrust my dualistic intuitions about consciousness, and point to some recommended readings (see above).

6.9.4 Appendix Z.4. Brief comments on unconscious emotions

Previously, I mentioned that research on unconscious emotions might lend support to some “cortex-required views” about consciousness. I did not investigate this literature thoroughly, but I make some brief comments on unconscious emotions below.

In common parlance, typical emotion words such as “fear” and “desire” and “excitement” refer to a particular kind of conscious experience in addition to a set of physiological and behavioral responses. However, most scientific studies of “emotion” do not measure (self-reported) conscious experience, but instead measure only physiological or behavioral responses. This is obviously true for studies of animal emotion, since animals cannot report conscious experiences. But it is also true of many studies of human emotion.395 Hence, most scientific studies of “emotion,” especially in animals, don’t necessarily assume that “emotions” must involve conscious experiences.

Thus, in this report, I use “emotion” to refer to certain kinds of cognitive processing, physiological responses, and behavioral responses, which might or might not also involve conscious experience. But, in keeping with everyday usage, I reserve the word “feelings,” and terms for specific emotions such as “fear,” for emotional responses that do involve certain kinds of conscious experiences. In contrast, when referring only to cognitive processing, physiological responses, and behavioral responses — e.g. as examined in animal studies — I avoid consciousness-implying terms like “fear” in favor of more neutral terms like “threat response.”396

Under this terminology, an “unconscious emotion” is an emotional response — involving certain kinds of cognitive processing, physiological response, and/or behavioral response — without an accompanying conscious experience of that emotion. This is analogous to the above discussion of “unconscious vision,” which involves certain kinds of cognitive processing and behavioral response without any conscious experience of that visual processing.

If humans exhibit genuinely unconscious emotions, and the conscious experience of emotion seems to depend on neural circuits in certain cortical structures, this could lend some further support to some “cortex-required” views.

So, what is the current state of the evidence? As far as I can tell, the existence of genuinely unconscious emotions remains a matter of considerable debate.397 Moreover, my sense is that the neuroscience of emotion is less well-developed than the neuroscience of vision (likely because vision is easier to study).

Below, I make some brief remarks related to just one example of “unconscious emotion”: namely, unconscious “pleasure.”

Perhaps the leading theory in the neuroscience of pleasure is the liking/wanting theory developed by Kent Berridge and others. In short, this theory claims that (Berridge & Kringelbach 2016):

Affective neuroscience studies have further indicated that even the simplest pleasant experience, such as a mere sensory reward, is actually a more complex set of processes containing several psychological components, each with distinguishable neurobiological mechanisms… These include, in particular, distinct components of reward wanting versus reward liking (as well as reward learning), and each psychological component has both conscious and nonconscious subcomponents. Liking is the actual pleasure component or hedonic impact of a reward; wanting is the motivation for reward; and learning includes the associations, representations, and predictions about future rewards based on past experiences.

We distinguish between the conscious and nonconscious aspects of these subcomponents because both exist in people… At the potentially nonconscious level, we use quotation marks to indicate that we are describing objective, behavioral, or neural measures of these underlying brain processes. As such, “liking” reactions result from activity in identifiable brain systems that paint hedonic value on a sensation such as sweetness, and produce observable affective reactions in the brain and in behavior such as facial expressions. Similarly, “wanting” includes incentive salience or motivational processes within reward that mirror hedonic “liking” and make stimuli into motivationally attractive incentives. “Wanting” helps spur and guide motivated behavior, when incentive salience is attributed to stimulus representations by the mesolimbic brain systems. Finally, “learning” includes a wide range of processes linked to implicit knowledge as well as associative conditioning, such as basic Pavlovian and instrumental associations.

…By themselves, core “liking” and “wanting” processes can occur nonconsciously, even in normal people.

Berridge & Kringelbach (2015) summarizes the evidence on unconscious vs. conscious “liking” and “wanting”:

…in humans, the [unconscious and conscious] forms of hedonic reaction can be independently measured. For example, objective hedonic “liking” reactions can sometimes occur alone and unconsciously in ordinary people without any subjective pleasure feeling at all, at least in particular situations (e.g., evoked by subliminally brief or mild affective stimuli)… Unconscious “liking” reactions still effectively change goal-directed human behavior, though those changes may remain undetected or be misinterpreted even by the person who has them… More commonly, “liking” reactions occur together with conscious feelings of liking and provide a hedonic signal input to cognitive ratings and subjective feelings. However, dissociations between [unconscious and conscious] hedonic reaction[s] can still sometimes occur in normal people due to the susceptibility of subjective ratings of liking to cognitive distortions by framing effects, or as a consequence of theories concocted by people to explain how they think they should feel… For example, framing effects can cause two people exposed to the same stimulus to report different subjective ratings, if one of them had a wider range of previously experienced hedonic intensities (e.g., pains of childbirth or severe injury)… In short, there is a difference between how people feel and report subjectively versus how they objectively respond with neural or behavioral affective reactions. Subjective ratings are not always more accurate about hedonic impact than objective hedonic reactions and the latter can be measured independently of the former.

I have not read the primary studies cited in these review articles, but my sense is that there is much less evidence concerning unconscious pleasure than there is concerning unconscious vision.

6.9.5 Appendix Z.5. The lack of consensus in consciousness studies

Here I use a small set of examples to illustrate the lack of consensus about several different aspects of consciousness.

Note that while I have made some effort to ensure that the sources cited below are “talking about the same thing” — phenomenal consciousness, as opposed to e.g. self-consciousness or the capacity for distinct waking/sleeping states — there is (perhaps unavoidably) much ambiguous language in the consciousness literature, and thus no doubt some of the apparent diversity of opinion results from experts talking about different things rather than having different views about “the same thing.” For example, some of the discussions below may have been intended as accounts of a limited class of phenomenal consciousness (e.g. human phenomenal consciousness), rather than as accounts of all types of phenomenal consciousness.

Example disagreements in consciousness studies:

Is consciousness physical? The largest survey of professional philosophers I’ve seen — the PhilPapers Survey, conducted in late 2009 (results; paper) — found that among “Target Faculty” (i.e. “all regular faculty members in 99 leading departments of philosophy”), 56.5% of respondents accepted or leaned toward physicalism about the mind, 27.1% of respondents accepted or leaned toward non-physicalism about the mind, and 16.4% of respondents gave an “Other” response.

What is the extent of consciousness in the natural world / when did it evolve? For overviews, see Velmans (2012), Swan (2013), and Godfrey-Smith (2017) (from Andrews & Beck 2017). Some specific possibilities include:

  • Consciousness precedes the emergence of living systems (Chalmers 2015; Howe 2015).
  • Consciousness evolved at least as early as some single-celled organisms (Baluška & Mancuso 2014; Braun 2015; and for context see Lyon 2015 and Bray 2011).
  • Consciousness evolved at least as early as some plants (Nagel 1997; Smith 2016).
  • Consciousness evolved ~520 million years ago, during or just before the Cambrian explosion (Feinberg & Mallatt 2016; Graziano 2014; Ginsburg & Jablonka 2007).
  • Consciousness evolved at least as early as some insects (Barron & Klein 2016; maybe also Merker 2005).
  • Consciousness evolved after fishes but before birds and mammals (Cabanac et al. 2009; Rose et al. 2014; Key 2016).
  • Consciousness evolved very late, in humans and perhaps in some apes (Macphail 2000; Carruthers 2000; Dennett 1995).

Was consciousness selected for, or is it a spandrel? For an overview of the debate, see Robinson et al. (2015).

What is the functional role of consciousness? Metzinger (2010), p. 55, summarizes some of the proposed options:

Today, we have a long list of potential candidate functions of consciousness: Among them are the emergence of intrinsically motivating states, the enhancement of social coordination, a strategy for improving the internal selection and resource allocation in brains that got too complex to regulate themselves, the modification and interrogation of goal hierarchies and long-term plans, retrieval of episodes from long-term memory, construction of storable representations, flexibility and sophistication of behavioral control, mind reading and behavior prediction in social interaction, conflict resolution and troubleshooting, creating a densely integrated representation of reality as a whole, setting a context, learning in a single step, and so on.

Which theory of consciousness is most likely correct? Lists, taxonomies, and collections of theories of consciousness include chapter 3 of Carruthers (2005), Katz (2013), Cavanna & Nani (2014), Sun & Franklin (2007), McGovern & Baars (2007), Van Gulick (2014), chs. 10-11 of Revonsuo (2009), and Part IV of Schneider & Velmans (2017). Some introductory texts on consciousness also capably survey the leading theories, e.g. Weisberg (2014).

Finally, to illustrate the methodological diversity of consciousness studies, compare:

  • David Chalmers’ metaphysical thought experiments (Chalmers 1997, 2010).
  • The largely science-driven arguments of Dennett (1991), Metzinger (2003), Merker (2005), Prinz (2012), Dehaene (2014), and Tye (2016), chs. 5-9.
  • The paradigms for probing human consciousness discussed in sources such as Miller (2015), Overgaard (2015), Goodale & Milner (2013), Laureys et al. (2015), and Breitmeyer & Ogmen (2006).
  • Consciousness-relevant investigations in comparative ethology and comparative neuroanatomy, such as can be found in Shettleworth (2009), Vonk & Shackelford (2012), Pearce (2008), Wynne & Udell (2013), and Herculano-Houzel (2016).
  • Standish (2013)’s anthropic argument against ant consciousness.

Obviously, this list of examples is far from exhaustive.

6.9.6 Appendix Z.6. Against hasty eliminativism

Earlier, I argued against “hasty” eliminativism. Here, I develop this line of thinking a bit further.

I agree with a great deal of Irvine (2013)’s survey of the scientific and measurement difficulties currently obstructing progress in consciousness studies, but whereas Irvine is ready to embrace eliminativism about consciousness, I prefer to wait to see how our concept of consciousness evolves in response to empirical discoveries, and decide later whether we want to modify the concept or toss it in the trash next to “phlogiston.”

One reason to resist quick elimination of “consciousness” is articulated nicely by Flanagan (1992), pp. 23-24:

Two formidable arguments for [eliminating] consciousness involve attempts to secure an analogy between the concept of consciousness and some other concept in intellectual ruin. It has been suggested that the concept of consciousness is like the concept of phlogiston or the concept of karma. One shouldn’t think in terms of such concepts as phlogiston or karma. And it would be a philosophical embarrassment to try to develop a positive theory of karma or phlogiston. There simply are no such phenomena for there to be theories about. Let me explain why the analogies with karma and phlogiston do not work to cast doubt on the existence of consciousness or on the usefulness of the concept.

…There is no single orthodox concept of consciousness. Currently afloat in intellectual space are several different conceptions of consciousness, many of them largely inchoate. The Oxford English Dictionary lists 11 senses for ‘conscious’ and 6 for ‘consciousness’. The meanings cover self-intimation, being awake, phenomenal feel, awareness of one’s acts, intentions, awareness of external objects, and knowing something with others. The picture of consciousness as a unified faculty has no special linguistic privilege, and none of the meanings of either ‘conscious’ or ‘consciousness’ wear any metaphysical commitment to immaterialism on its sleeve. The concept of consciousness is neither unitary nor well regimented, at least not yet.

This makes the situation of the concept of consciousness in the late twentieth century very different from that of the concept of phlogiston in the 1770s. I don’t know if ordinary folk had any views whatsoever about phlogiston at that time. But the concept was completely controlled by the community of scientists who proposed it in the late 1600s and who characterized phlogiston as a colorless, odorless, and weightless “spirit” that is released rapidly in burning and slowly in rusting. Once the spirit is fully released, we are left with the “true material,” a pile of ash or rust particles.

…if I am right that the concept of consciousness is simply not owned by any authoritative meaning-determining group in the way the concept of phlogiston was owned by the phlogiston theorists, then it will be harder to isolate any single canonical concept of consciousness that has recently come undone or is in the process of coming undone, and thus that deserves the same tough treatment that the concept of phlogiston received.

For additional elaboration of (something like) my reasons for resisting quick eliminativism about consciousness (among other concepts), see e.g. Chang (2011), Arabatzis (2011), Ludwig (2014), Ludwig (2015), and Taylor & Vickers (2016). In particular, I want to highlight my agreement with Ludwig (2014) that

Scientific ontologies are constantly changing through the introduction of new entities and the elimination of old entities that have become obsolete… The ubiquity of elimination controversies in the human sciences raises the general but rarely discussed… question [of when] scientists should eliminate an entity from their ontology. Typically, elimination controversies focus on one specific entity and consider other cases of ontological elimination only briefly through analogies to obsolete entities in the history of science such as the élan vital, ether, phlogiston, phrenological organs, or even witchcraft… I want to argue that this situation is unfortunate as it often leads to the implicit use of an oversimplified “phlogiston model” of ontological elimination… that proves inadequate for many debates in the human sciences… [I propose] a more complex model that interprets ontological elimination as typically located on gradual scale between criticism of empirical assumptions and conceptual choices…

Another way to express my attitude on the matter is to express sympathy for the view that Baars & McGovern (1993) attribute to “average cognitive psychologists”:

…if we were to compel the average cognitive psychologists to give an opinion on the matter, we would no doubt hear something like… “Yes, of course, we will ultimately be able to explain human behavior and experience in neural terms. In the meantime, it is useful, and perfectly good science, to work at a higher level of analysis, in which we postulate mental events such as thoughts, percepts, goals, and the like.”

6.9.7 Appendix Z.7. Some candidate dimensions of moral concern

In this appendix I briefly remark on some candidate dimensions of moral concern that could be combined to estimate the relative “moral weight” of a species or other taxon of cognitive system, as mentioned briefly above.

Many commonly-discussed candidate dimensions of moral concern are captured by theories of well-being. Crisp (2013) organizes philosophical theories of well-being into three categories: hedonistic theories according to which well-being is the presence of pleasure and the absence of pain, desire theories according to which well-being is getting what one wants, and objective list theories according to which well-being is the presence or absence of certain objective characteristics (potentially including hedonistic and desire-related characteristics). Fletcher (2015) offers a different categorization, including chapters on hedonistic theories, perfectionistic theories, desire-fulfillment theories, objective list theories, hybrid theories, subject-sensitive theories, and eudaimonistic theories. In the social sciences, human objective well-being is often measured using variables such as education, health status, personal security, income, and political freedom (see e.g. OECD 2014), while human subjective well-being is typically conceived of in terms of life satisfaction, hedonic affect, and eudaimonia (psychological “flourishing”): see e.g. OECD (2013). For another overview of several approaches to well-being, see the chapters in Part II of Adler & Fleurbaey (2016).398

Some candidate dimensions of moral concern captured by these theories of well-being — e.g. pain and pleasure — are widely thought to vary in their moral importance depending on other parameters such as “intensity” and duration.399 With respect to duration, there is also a question about objective vs. subjective duration.400

Another set of candidate dimensions of moral concern involves various ways in which consciousness can be “unified” or “disunified.” Following Bayne (2010)’s taxonomy (ch. 1), we might consider the moral relevance of “subject unity,” “representational unity,” and “phenomenal unity” — each of which has a “synchronic” (momentary) and “diachronic” (across time) aspect (see footnote401). For example, Daniel Dennett seems to appeal to some kinds of disunity when explaining why he doesn’t worry much about the conscious experiences of most animals (if they have any).402

In a cost-benefit framework, one’s estimates concerning the moral weight of various taxa are likely more important than one’s estimated probabilities of the moral patienthood of those taxa. This is because, for the range of possible moral patients of most interest to us, it seems very hard to justify probabilities of moral patienthood much lower than 1% or much higher than 99%. In contrast, it seems quite plausible that the moral weights of different sorts of beings could differ by several orders of magnitude. Unfortunately, estimates of moral weight are trickier to make than, and in many senses depend upon, one’s estimates concerning moral patienthood.
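
To illustrate this point about relative importance, here is a toy expected-value calculation. All numbers are hypothetical and chosen only for illustration; they are not estimates from this report.

```python
# Toy illustration (hypothetical numbers): in an expected-value framework, the
# expected moral weight assigned to one individual of a taxon is roughly
#   P(moral patienthood) * (moral weight conditional on patienthood).
# Probabilities of patienthood are hard to push below ~1% or above ~99%
# (a factor of ~100), while conditional moral weights could plausibly differ
# by several orders of magnitude, so the weight term tends to dominate.

candidate_taxa = {
    # taxon: (P(patienthood), moral weight if a patient) -- made-up numbers
    "taxon A": (0.95, 1.0),
    "taxon B": (0.50, 0.01),
    "taxon C": (0.05, 0.0001),
}

for taxon, (p, weight) in candidate_taxa.items():
    print(f"{taxon}: expected moral weight = {p * weight:.6f}")

# The outputs (0.95, 0.005, 0.000005) span roughly five orders of magnitude,
# driven far more by the assumed weights than by the assumed probabilities.
```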

6.9.8 Appendix Z.8. Some reasons for my default skepticism of published studies

In several places throughout this report, I’ve mentioned my relatively high degree of skepticism about the expected reliability of consciousness-related scientific studies (e.g. on animal behavior, or self-reported human consciousness) that I have cited but not personally examined beyond a cursory look. Here, I provide some examples of published findings or personal research experiences that have led me to become unusually skeptical about the likely robustness of most published scientific studies (that I haven’t examined personally):

  1. Nosek et al. (2015) conducted high-powered replications of 100 studies from three top-ranked psychology journals, and found that “the mean effect size (r) of the replication effects… was half the magnitude of the mean effect size of the original effects,” 97% of the original studies had significant results (P < .05) whereas only 36% of replications had significant results, and only “39% of effects were subjectively rated to have replicated the original result.” Attempts to downplay the significance of this result have not been persuasive (to me).
  2. This “replication crisis” is not limited to psychology.403 Similarly discouraging results have been observed across many studies in cancer biology (Prinz et al. 2011; Begley & Ellis 2012; Kaiser 2017), economics (Hou et al. 2017; Camerer et al. 2016; Duvendack et al. 2015; Hubbard & Vetter 1992), genetics (Hirschhorn et al. 2002; Siontis et al. 2010; Benjamin et al. 2012; Farrell et al. 2015), marketing (Hubbard & Armstrong 1994), forecasting (Evanschitzky & Armstrong 2010), and other fields. For comments on how the replication crisis and some of the specific issues listed below may affect neuroimaging studies in particular, see e.g. Poldrack et al. (2017); Vul & Pashler (2017); Uttal (2012). For a popular introduction to the replication crisis in biomedicine, see Harris (2017).
  3. Similar results seem to often hold beyond the domain of strict replications. For example, a systematic review (Ioannidis 2005a) of highly-cited, top-journal clinical research studies published from 1990-2003 found that, among the 34 studies which could be compared against a later study (or meta-analysis) of the same question with a comparable or larger sample size or a better-controlled design, 41% were either contradicted by subsequent studies or had found noticeably stronger effects than subsequent studies did. (The other 59% were supported by the later studies.)
  4. Or, consider the case of “medical reversal,” in which a current “standard of care” medical therapy, which has already undergone enough testing to be widely provided — sometimes to millions of patients and at the cost of billions of dollars — is found (usually via a large randomized controlled trial) to actually be inferior to a lesser or prior standard of care (e.g. not effective at all, and sometimes even harmful). In a systematic review of published articles testing a current standard of care, 40.2% of these studies resulted in medical reversal (Prasad et al. 2013; Prasad & Cifu 2015).
  5. Many (most?) published studies suffer from low power. It is commonly thought (and taught) that low power is a problem primarily because it can lead to a failure to detect true effects, resulting in a waste of research funding. But low power also has two arguably more insidious consequences: it increases the probability that a reported positive finding is a false positive (a toy calculation after this list illustrates why), and it exaggerates the size of measured true effects (Button et al. 2013). A systematic review of power issues in 10,000 papers in psychology and cognitive neuroscience suggested that the rate of false positives for this literature likely exceeds 50%. The problem seems to be especially bad in cognitive neuroscience (Szucs & Ioannidis 2017), with obvious implications for the scientific study of consciousness.
  6. Even merely re-analyzing the data from a study can often produce a different result. For example, in a systematic review of reanalyses of data from randomized clinical trials, 35% of the re-analyses “led to interpretations different from that of the original article” (Ebrahim et al. 2014).
  7. In my experience, when I read review articles in the life and social sciences, and then examine the primary evidence myself, I almost always come away with a lower opinion of the strength of the reported evidence than is suggested in the relevant review articles. For example, compare the review articles cited in my report on behavioral treatments for insomnia to my own conclusions in that report.
  8. Given the strong incentives to publish positive results, and the many available methods for doing so even in the absence of “true” positive results (via “researcher degrees of freedom”404), some simulations405 suggest that we should expect many (perhaps most) published research findings to be false. Moreover, researchers do seem to make extensive use of these researcher degrees of freedom: for example, in a systematic review of 241 fMRI studies, there were “nearly as many unique analysis pipelines as there were studies in the sample” (Carp 2012).
  9. In my experience, most top-journal primary studies and meta-analyses in the life and social sciences (that I read closely) turn out to rely heavily on statistical techniques that are inappropriate given the design of the study and/or the nature of the underlying data. Here is one example: in the life sciences and social sciences, the most common algorithm for quantitative synthesis of effect sizes from multiple primary studies is probably the DerSimonian-Laird (DL) algorithm (DerSimonian & Laird 1986), even though (a) it was never shown (in simulations) to be appropriate for use in most of the situations for which it is used406 (e.g. when primary studies vary greatly in sample size, or when primary studies are few and small and heterogeneous), (b) better-performing algorithms are available (e.g. IntHout et al. 2014), and (c) even the authors of the algorithm seem to concede it is not appropriate for establishing the statistical significance of summary effects (DerSimonian & Laird 2015, p. 142). Here is a second example: when studying the literature on subjective well-being, I learned (from Cranford et al. 2006) that studies using ecological momentary assessment (EMA) had, for decades, typically used statistical tests appropriate for studying between-person differences, whereas EMA focuses on within-person changes. A long-time leader in the field confirmed this in conversation.407
  10. In line with my personal experience, systematic reviews of the appropriateness of statistical tests used in published papers find high rates of straightforward statistical errors, for example inappropriate use of parametric tests, failure to account for multiple comparisons,408 the use of “fail-safe N” to protect against publication bias, and other problems (Assmann et al. 2000; Scales et al. 2005; Whittingham et al. 2006; Heene 2010; Button et al. 2013; Nuijten et al. 2016; Eklund et al. 2016; Westfall & Yarkoni 2016).409
  11. In surveys, a substantial fraction of researchers from a variety of fields admit to engaging in “questionable research practices” (QRPs) that may undermine the validity of their published results (Martinson et al. 2005; John et al. 2012; Necker 2014; Agnoli et al. 2017).410 This general finding is bolstered by a variety of analytic attempts to estimate the rate of QRPs in various fields, for example by examining how reported findings change from the dissertation version of a study to the published article version (Mazzola & Deuling 2013), or by directly inspecting published correlation matrices (Bosco et al. 2015). For an overall review, see Banks et al. (2016).
  12. Huge swaths of literature in the social sciences depend on self-report measures that have not been validated using the standards typically required (Reise & Revicki 2015; Millsap 2011) for measures used in (e.g.) high-stakes testing or patient-reported outcomes in health care. Also, systematic comparisons of self-reported data against data collected using “gold standard” objective measures (e.g. administrative data) suggest that self-report measures across a variety of fields result in substantial measurement error (see the sources in this footnote).
  13. Failure to share study data is widespread,411 and (in psychology, at least) predicts less-robust results (Wicherts et al. 2011). I would guess the same is true in many other fields.
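
To make item 5 concrete, here is a minimal simulation sketch (my own illustration, not taken from any of the cited papers) of a literature made up of small two-group studies. Every parameter below (sample size, true effect size, the share of hypotheses that are true, the number of studies) is an assumption chosen only for illustration, but the qualitative pattern matches the two consequences described above: many “significant” findings are false positives, and the true effects that do reach significance look considerably larger than they really are.

```python
# Minimal simulation sketch (illustrative parameters only) of an underpowered
# literature: many small two-group studies, only some of which study a real effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 20_000    # assumed number of simulated studies
n_per_group = 20      # assumed (small) per-group sample size, i.e. low power
true_d = 0.3          # assumed true standardized effect size when an effect exists
prop_true = 0.2       # assumed share of studied hypotheses that are true

sig_effects, sig_is_true = [], []
for _ in range(n_studies):
    effect_exists = rng.random() < prop_true
    d = true_d if effect_exists else 0.0
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(d, 1.0, n_per_group)
    if stats.ttest_ind(b, a).pvalue < 0.05:  # reported as a "positive" finding
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        sig_effects.append((b.mean() - a.mean()) / pooled_sd)  # observed Cohen's d
        sig_is_true.append(effect_exists)

sig_effects = np.array(sig_effects)
sig_is_true = np.array(sig_is_true)
print("share of significant findings that are true positives:",
      round(sig_is_true.mean(), 2))
print("mean observed |d| among significant true effects:",
      round(np.abs(sig_effects[sig_is_true]).mean(), 2), "(true d = 0.3)")
```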
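
Similarly, for item 8, here is a toy simulation (again with made-up parameters) of a single researcher degree of freedom: measuring several outcomes and reporting whichever one happens to cross p < 0.05, even though there is no true effect on any of them.

```python
# Toy simulation (made-up parameters) of one "researcher degree of freedom":
# when the null is true for every outcome, testing several outcomes and
# reporting only the best one inflates the false-positive rate well above 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_per_group, n_outcomes = 5_000, 30, 5

false_positives = 0
for _ in range(n_studies):
    # Both groups are drawn from the same distribution on every outcome.
    group_a = rng.normal(size=(n_per_group, n_outcomes))
    group_b = rng.normal(size=(n_per_group, n_outcomes))
    p_values = [stats.ttest_ind(group_a[:, j], group_b[:, j]).pvalue
                for j in range(n_outcomes)]
    if min(p_values) < 0.05:  # report only the outcome that "worked"
        false_positives += 1

print("nominal false-positive rate: 0.05")
print("rate with outcome-picking:", round(false_positives / n_studies, 3))
```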
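
Finally, because item 9 turns on what the DerSimonian-Laird procedure actually computes, here is a minimal sketch of the standard DL random-effects calculation described in DerSimonian & Laird (1986). The function name and the per-study effect estimates and variances below are mine and purely illustrative; the sketch is not an endorsement of the method over the better-performing alternatives mentioned above (e.g. IntHout et al. 2014).

```python
# Minimal sketch of the DerSimonian-Laird (1986) random-effects meta-analysis
# estimator. All inputs below are made up for illustration.
import numpy as np

def dersimonian_laird(effects, variances):
    """Return the pooled effect, its standard error, and the DL estimate of tau^2."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                              # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)      # fixed-effect pooled estimate
    q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # DL between-study variance estimate
    w_star = 1.0 / (v + tau2)                # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical per-study effect estimates and their sampling variances.
effects = [0.10, 0.35, -0.05, 0.50, 0.20]
variances = [0.04, 0.09, 0.02, 0.16, 0.05]
pooled, se, tau2 = dersimonian_laird(effects, variances)
print(f"pooled effect = {pooled:.3f}, SE = {se:.3f}, tau^2 = {tau2:.3f}")
```

The key quantity is the between-study variance estimate tau^2; when primary studies are few, small, or heterogeneous, this moment-based estimate (and hence the apparent precision and significance of the summary effect) can be unreliable, which is roughly the concern raised in item 9.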

Of course, failed replications and flawed methods don’t prove the original reported results are false, merely that they are not adequately supported. But that is just what I mean when I say that, prior to examining them myself, I expect most published studies not to “hold up” under attempted replication or close scrutiny.

Unfortunately, these points also undermine my ability to fully trust the studies I’ve cited above about the trustworthiness of published studies, only a few of which I’ve personally examined “somewhat closely” (i.e. for more than 30 minutes). Obviously, such “meta-research” is not immune from replication failures and flawed methodology.

To illustrate: when I first began to study the rate at which non-randomized studies are confirmed or contradicted by later (and presumably more trustworthy) randomized controlled trials, one of the first studies I came across was Young & Karr (2011). After reading the study, I excitedly tweeted it, along with the following comment: “Reminder: very few correlations from epidemiological studies hold up in RCTs — in this study, an impressive 0/52.” However, Young & Karr (2011) itself suffers multiple flaws. For example, the selection of primary studies was not systematic, and the authors provide no detail on how the studies they examined were chosen. Moreover, as I sought out additional reviews of this issue, I found that every large-scale review of this question that was conducted in a systematic, transparently-reported way found very different results than Young & Karr did (e.g. Deeks et al. 2003; Odgaard-Jensen et al. 2011; Anglemyer et al. 2014412). Today, I do not trust Young & Karr’s result.413

6.9.9 Appendix Z.9. Early scientific progress tends to lead to more complicated models of phenomena

I remarked above that, as far as I can tell, early scientific progress on a then-mysterious phenomenon tends to make our models of that phenomenon increasingly complicated (except in fundamental physics, and I strongly doubt that consciousness is a feature of fundamental physics414). If this pattern holds true for the scientific study of consciousness, too, then consciousness could turn out to be a relatively complicated phenomenon, and this would push me in the direction of thinking that consciousness can be found in fewer systems than if consciousness turned out to be simple.

Consider the case of recognizing oneself in the mirror. From the inside, it feels as though this is a fairly simple process: I see myself in the mirror, and I immediately recognize that it’s me. When I introspect about it, I don’t detect any “steps,” or any series of complicated manipulations of data. Moreover, before the age of computers and algorithms, anyone trying to construct a theory of mirror self-recognition wouldn’t have known enough about how it might work to propose any complicated theory of it. But of course we now know — both from the neuroscientific study of vision and from attempts to build machine vision systems that can recognize objects (including themselves)415 — that mirror self-recognition is actually a highly specific and complex computational process, one that very few computational systems can execute. Indeed, while chimpanzees seem to recognize themselves in mirrors, even gorillas generally do not.416

That said, complexity doesn’t necessarily imply rarity. Consider again the case of “life,” which turned out to be both more complicated and more extensive than we had once supposed.417

Back when life was fairly mysterious to us, any specific proposed theory of life was almost unavoidably simple. When we know so little about something that it is “mysterious” to us, we often don’t even know enough detail to propose a specific and complicated model, so we either propose specific and simple models, or we say “it’s probably complex, but I can’t propose any specific theory of that complexity at this time.”

So, for example, in the case of life, Xavier Bichat proposed in 1801 that living things are distinguished by certain “vital properties,” which he described as fundamental properties of the universe alongside gravity.418 Some of Bichat’s contemporaries instead argued that organic and inorganic processes differed in complexity rather than in fundamental properties or substances, but in my understanding they couldn’t propose specific, complicated theories of life — all they could say was that life would probably turn out to be complicated,419 as I am saying for consciousness.

Of course, early scientific progress on life revealed that life is, in fact, fairly complex: it is made of many parts, interacting in particular, complicated ways that were impossible to predict in detail prior to their discovery.

This pattern — that early scientific progress tends to lead to more complicated models of phenomena — seems especially true of behavioral and cognitive phenomena. In early theorizing about such phenomena, some researchers proposed specific and relatively simple models of a given phenomenon, while others said “It’s probably complicated, but I don’t know enough detail to specify a complicated model,” and as far as I know, these phenomena have always turned out to be more complicated than any simple, specific model we could propose early on. (Of course, the early, simple, false models were often useful for guiding scientific progress,420 but that is different from saying that the early, simple models have tended to be correct.)

If this story is right — and I’m not sure it is, as I’m not a historian of science — then it should not be a surprise that most currently proposed, highly specific theories of consciousness are quite simple,421 and thus imply a relatively extensive distribution of consciousness. That is what early specific models always look like, and if we consider the history of scientific progress on other behavioral and cognitive mechanisms, consciousness seems likely to turn out to be a good deal more complicated than our early specific models suggest.

Of course, as the example of life shows, we might discover that consciousness is highly complex and yet still surprisingly extensive.422 Or perhaps it will be complex and rare (like mirror self-recognition).

6.9.10 Appendix Z.10. Recommended readings

In this appendix, I provide links to some “especially recommended readings” related to a few of the major topics covered in this report. Many topics covered in this report are not listed below because I haven’t yet found introductory sources on those topics that I “especially recommend” — for example, this is the case for moral patienthood in general, PCIF arguments in general, cortex-required views in general, and the question of moral weight.

IF YOU WANT TO READ…: I RECOMMEND…
…a brief introduction to consciousness studies, focused on metaphysical debates and contemporary theories of consciousness: Weisberg, Consciousness (2014)
…a detailed, reasonably theory-neutral discussion of the distribution question, across many animal taxa, which comes to different conclusions than I have: Tye, Tense Bees and Shell-Shocked Crabs (2016)
…a book on consciousness I admire despite disagreeing with it at a fairly fundamental level: Chalmers, The Conscious Mind (1997)
…a detailed effort to (properly, in my view) undermine commonly held, “default” intuitions about consciousness: Dennett, Consciousness Explained (1991)
…the basic case for illusionism about consciousness: Frankish, “Quining diet qualia” (2012) and then “Illusionism as a theory of consciousness” (2016)
…an introduction to animal cognition and behavior: Shettleworth, Cognition, Evolution, and Behavior, 2nd edition (2009) or Wynne & Udell, Animal Cognition, 2nd edition (2013)
…a short introduction to some basic issues related to the evolution of consciousness: Godfrey-Smith, “The evolution of consciousness in phylogenetic context” (2017)
…an introduction to unconscious vision: Goodale & Milner, Sight Unseen, 2nd edition (2013)

7 Sources

DOCUMENT SOURCE
Aaronson (2013) Source (archive)
Aaronson (2014a) Source
Aaronson (2014b) Source
Achen & Bartels (2006) Source (archive)
Ackerman (2016) Source (archive)
Adamo et al. (2009) Source (archive)
Adler & Fleurbaey (2016) Source (archive)
Agnoli et al. (2017) Source (archive)
Alanen (2003) Source (archive)
Alberts et al. (2016) Source
Aleksander (2007) Source (archive)
Aleksander (2017) Source (archive)
Aleman & Larøi (2008) Source
Aleman & Merker (2014) Source (archive)
Alkire et al. (2008) Source (archive)
Allen (2013) Source (archive)
Allen et al. (2009) Source (archive)
Allen-Hermanson (2008) Source (archive)
Allen-Hermanson (2016) Source (archive)
Almond (2008) Source
Althaus (2003) Source (archive)
Amazon Web Services Source (archive)
Amazon, “The Official Mike the Headless Chicken Book” Source (archive)
Amting et al. (2010) Source (archive)
Anderson (2004) Source (archive)
Anderson & Gallup Jr. (2015) Source (archive)
Andreotti et al. (2014) Source (archive)
Andrews & Beck (2017) Source (archive)
Anglemyer et al. (2014) Source (archive)
Anselme & Robinson (2016) Source (archive)
Antony (2008) Source (archive)
Arabatzis (2011) Source (archive)
Arbital, “Executable philosophy” Source
Arbital, “Extrapolated volition (normative moral theory)” Source
Arbital, “Ontology identification problem” Source
Arbital, “Rescuing the utility function” Source
Arizona State University, Department of Psychology, Clive Wynne Source (archive)
Armknecht et al. (2015) Source (archive)
Armstrong (1968) Source (archive)
Aronyosi (2013) Source (archive)
Arrabales (2010) Source (archive)
Arzimanoglou & Ostrowsky-Coste (2010) Source (archive)
Ashley et al. (2007) Source (archive)
Assael et al. (2016) Source (archive)
Assmann et al. (2000) Source (archive)
Baars (1988) Source (archive)
Baars & McGovern (1993) Source (archive)
Baars et al. (2003) Source (archive)
Baars et al. (2013) Source (archive)
Bach-y-Rita & Kercel (2003) Source (archive)
Bain (2014) Source (archive)
Baird (1905) Source (archive)
Baker (2016) Source (archive)
Bakker (2017) Source (archive)
Balcombe (2006) Source (archive)
Balcombe (2016) Source (archive)
Ballarin et al. (2016) Source (archive)
Baluška & Mancuso (2014) Source (archive)
Banissy et al. (2009) Source (archive)
Banissy et al. (2014) Source (archive)
Banks et al. (2016) Source (archive)
Bargh & Morsella (2010) Source (archive)
Barnow & Greenberg (2014) Source (archive)
Baron-Cohen (1995) Source (archive)
Barrett (2014) Source
Barrett (2017) Source (archive)
Barrett et al. (2005) Source (archive)
Barron & Klein (2016) Source (archive)
Bartlett & Youngner (1988) Source (archive)
Barton (2011) Source (archive)
Basbaum et al. (2009) Source (archive)
Basl (2014) Source (archive)
Bateson (1991) Source (archive)
Bauer et al. (1979) Source (archive)
Baumgarten (2013) Source (archive)
Bayne (2008) Source (archive)
Bayne (2010) Source (archive)
Bayne (2013) Source (archive)
Bayne & Hohwy (2016) Source (archive)
Bayne et al. (2009) Source (archive)
BBC, “The chicken that lived for 18 months without a head” Source (archive)
Beaney (2014) Source (archive)
Beauchamp & Childress (2012) Source (archive)
Beauchamp & Frey (2011) Source (archive)
Bechtel & Richardson (1998) Source (archive)
Beckstead (2013) Source (archive)
Begley & Ellis (2012) Source (archive)
Behavioural Ecology Research Group at the University of Oxford, “Tool Manufacture” Source (archive)
Behrmann & Nishimura (2010) Source (archive)
Bender et al. (2008) Source (archive)
Benjamin et al. (2012) Source (archive)
Bennett & Hill (2014) Source (archive)
Bermudez (2014) Source (archive)
Bernstein (1998) Source (archive)
Berridge & Kringelbach (2015) Source (archive)
Berridge & Kringelbach (2016) Source (archive)
Berridge & Winkielman (2003) Source (archive)
Beshkar (2008) Source (archive)
Best et al. (2008) Source (archive)
Bhandari & Wagner (2006) Source (archive)
Bhat & Rockwood (2007) Source (archive)
Bicchieri & Mercier (2014) Source (archive)
Bird (2011) Source
Biro & Stamps (2015) Source (archive)
Bishop (2004) Source (archive)
Bishop (2015) Source (archive)
Bishop & Trout (2004) Source (archive)
Bishop & Trout (2008) Source (archive)
Blackmon (2013) Source (archive)
Blackmon (2016) Source (archive)
Blackmore (2016) Source (archive)
Blanke & Metzinger (2009) Source (archive)
Blanke et al. (2015) Source (archive)
Block (1978) Source (archive)
Block (1993) Source (archive)
Block (1995) Source (archive)
Block (2007a) Source (archive)
Block (2007b) Source (archive)
Block (2011) Source (archive)
Block (2014) Source (archive)
Blom (2010) Source (archive)
Blom (2014) Source (archive)
Blom & Sommer (2012) Source (archive)
Bloom (2000) Source (archive)
Boeve (2010) Source (archive)
Bogosian (2016) Source (archive)
Bogousslavsky et al. (1991) Source (archive)
Boller & Grafman (2000) Source (archive)
Boly et al. (2013) Source (archive)
Borries et al. (2016) Source (archive)
Bosco et al. (2015) Source (archive)
Bostrom (2006) Source (archive)
Bostrom & Yudkowsky (2014) Source (archive)
Botha & Everaert (2013) Source (archive)
Bound et al. (2001) Source (archive)
Bourget & Chalmers (2013) Source (archive)
Braithwaite (2010) Source (archive)
Braun (2015) Source (archive)
Bray (2011) Source (archive)
Breed (2017) Source (archive)
Breitmeyer & Ogmen (2006) Source (archive)
Brennan & Lo (2015) Source (archive)
Bridgeman (1992) Source (archive)
Briscoe & Schwenkler (2015) Source (archive)
Brogaard (2016) Source (archive)
Brook & Raymont (2017) Source
Brown et al. (2011) Source (archive)
Bryant et al. (2014) Source (archive)
Buchanan & Powell (2016) Source (archive)
Bunge (1980) Source (archive)
Burghardt (2005) Source (archive)
Burkart et al. (forthcoming) Source (archive)
Burkeman (2015) Source (archive)
Button et al. (2013) Source (archive)
Buzzi (2011) Source (archive)
Bykvist (2017) Source (archive)
Cabanac et al. (2009) Source (archive)
Caetano & Aisenberg (2014) Source (archive)
Camerer et al. (2016) Source (archive)
Campbell (2002) Source (archive)
Campbell (2013) Source (archive)
Cardeña & Winkelman (2011) Source (archive)
Cardeña et al. (2013) Source (archive)
Cardeña et al. (2014) Source (archive)
Cardoso-Leite & Gorea (2010) Source (archive)
Carey (2001) Source
Carp (2012) Source (archive)
Carroll (2016) Source (archive)
Carruthers (1989) Source (archive)
Carruthers (1992) Source (archive)
Carruthers (1996) Source (archive)
Carruthers (1999) Source (archive)
Carruthers (2000) Source (archive)
Carruthers (2002) Source (archive)
Carruthers (2004) Source (archive)
Carruthers (2005) Source (archive)
Carruthers (2011) Source (archive)
Carruthers (2016) Source
Carruthers (2017) Source (archive)
Carruthers & Schier (2017) Source (archive)
Cary et al. (1998) Source (archive)
Castelvecchi (2016) Source (archive)
Cavanna & Nani (2014) Source (archive)
Center for Deliberative Democracy, “What is Deliberative Polling?” Source
Cerullo (2015) Source (archive)
Chafe (1996) Source (archive)
Chalmers (1990) Source (archive)
Chalmers (1995) Source (archive)
Chalmers (1996) Source (archive)
Chalmers (1997) Source (archive)
Chalmers (2003) Source (archive)
Chalmers (2004) Source (archive)
Chalmers (2009) Source (archive)
Chalmers (2010) Source (archive)
Chalmers (2011) Source (archive)
Chalmers (2012) Source (archive)
Chalmers (2015) Source (archive)
Chan (2009) Source
Chang (2004) Source (archive)
Chang (2011) Source (archive)
Chang (2012) Source (archive)
Chang & Li (2015) Source (archive)
Chappell (2010) Source (archive)
Chaudhary et al. (2017) Source (archive)
Chechlacz & Humphreys (2014) Source (archive)
Cheke & Clayton (2010) Source (archive)
Chen et al. (2013) Source (archive)
Chen et al. (2014) Source (archive)
Chen et al. (2016) Source (archive)
Cheng (2016) Source (archive)
Chervova et al. (1994) Source
Chrisley & Sloman (2016) Source (archive)
Christen et al. (2014) Source (archive)
Christensen et al. (2012) Source (archive)
Churchland (1988) Source (archive)
Clark (1996) Source (archive)
Clark (2001) Source (archive)
Clark (2009) Source (archive)
Clark (2013) Source (archive)
Clark & Kiverstein (2007) Source (archive)
Clarke and Harris (2001) Source (archive)
Cleeremans (2011) Source (archive)
Clegg (2001) Source (archive)
Cloney (1982) Source (archive)
Cohen & Dennett (2011) Source (archive)
Collerton et al. (2015) Source (archive)
Collignon et al. (2011) Source (archive)
Copeland (1996) Source (archive)
Corns (2014) Source (archive)
Coslett (2011) Source
Coslett & Lie (2008) Source
Cowey (2004) Source (archive)
Cowey (2010) Source (archive)
Crane & Piantanida (1983) Source (archive)
Cranford & Smith (1987) Source (archive)
Cranford et al. (2006) Source (archive)
Crew (2014) Source (archive)
Crick & Koch (1990) Source (archive)
Crick & Koch (1998) Source (archive)
Crisp (2013) Source
Crisp & Pummer (2016) Source
Crook & Walters (2011) Source (archive)
Dahl et al. (2011) Source (archive)
Dahlsgaard et al. (2005) Source (archive)
Damasio (1999) Source (archive)
Damasio (2010) Source (archive)
Damasio & Carvalho (2013) Source (archive)
Damasio & Van Hoesen (1983) Source
Damasio et al. (2013) Source (archive)
Daniels (2016) Source
Daswani & Leike (2015) Source (archive)
Davies & Levy (2016) Source (archive)
Davies et al. (2006) Source (archive)
Dawkins (2012) Source (archive)
Dawkins (2015) Source (archive)
Dawkins (2017) Source (archive)
de Blanc (2011) Source (archive)
de Gelder et al. (2002) Source (archive)
de Ribaupierre and Delalande (2008) Source (archive)
de Waal (1992) Source (archive)
de Waal (2001) Source
de Waal (2007) Source (archive)
de Waal (2016) Source (archive)
de Waal et al. (2014) Source (archive)
Dean (1990) Source
Deaner et al. (2007) Source (archive)
Debruyne (2009) Source (archive)
Deeks et al. (2003) Source (archive)
Degenaar & Lokhorst (2014) Source
DeGrazia (2016) Source
Dehaene (2014) Source (archive)
Dehaene et al. (1998) Source (archive)
Dehaene et al. (2014) Source (archive)
Dennett (1986) Source (archive)
Dennett (1988) Source (archive)
Dennett (1991) Source (archive)
Dennett (1993a) Source (archive)
Dennett (1993b) Source (archive)
Dennett (1994) Source (archive)
Dennett (1995) Source (archive)
Dennett (2005) Source (archive)
Dennett (2007) Source (archive)
Dennett (2013) Source (archive)
Dennett (2016a) Source (archive)
Dennett (2016b) Source (archive)
Dennett (2017) Source (archive)
Denton (2006) Source (archive)
Derbyshire (2014) Source (archive)
Deroy & Spence (2016) Source (archive)
DerSimonian & Laird (1986) Source (archive)
DerSimonian & Laird (2015) Source (archive)
Devor et al. (2014) Source (archive)
Dewsbury (1984) Source (archive)
Di Virgilio & Clarke (1997) Source (archive)
Dickinson (2011) Source (archive)
Dijkerman & De Haan (2007) Source (archive)
Dillon (2014) Source
Dittrich (2016) Source (archive)
Domhoff (2007) Source (archive)
Dominus (2011) Source (archive)
Donaldson & Grant-Vallone (2002) Source (archive)
Dorahy et al. (2014) Source (archive)
Doris (2010) Source (archive)
Dorsch (2015) Source (archive)
Dragoi (2016) Source (archive)
Drescher (2006) Source (archive)
Dretske (1995) Source (archive)
Droege & Braithwaite (2015) Source (archive)
Dugas-Ford et al. (2012) Source (archive)
Dugatkin (2013) Source (archive)
Duvendack et al. (2015) Source (archive)
Dyer (1994) Source (archive)
Ebrahim et al. (2014) Source (archive)
Edelman (1990) Source (archive)
Edelman (2008) Source (archive)
Edwards (2005) Source (archive)
Eeles et al. (2013) Source (archive)
Egan (2012a) Source (archive)
Egan (2012b) Source (archive)
Egger (1978) Source (archive)
Egger et al. (2014) Source (archive)
Eklund et al. (2016) Source (archive)
Emery (2016) Source (archive)
Engelborghs et al. (2000) Source (archive)
Epley (2011) Source (archive)
Epley et al. (2007) Source
Evanschitzky & Armstrong (2010) Source (archive)
Everett Kaser Software, “MESH” Source (archive)
Executable Philosophy Source (archive)
Farah (2004) Source (archive)
Farrell et al. (2015) Source (archive)
Favre (2011) Source (archive)
Fayers & Machin (2016) Source (archive)
Fazekas & Overgaard (2016) Source (archive)
Feinberg & Mallatt (2016) Source (archive)
Feinstein et al. (2016) Source (archive)
Fekete et al. (2016) Source (archive)
Fernandes et al. (2014) Source (archive)
Fernandez-Ballesteros & Botella (2007) Source (archive)
Feuillet et al. (2007) Source (archive)
Fidler et al. (2017) Source (archive)
Fiedler & Schwarz (2015) Source (archive)
Finn et al. (2009) Source (archive)
Finnerup and Jensen (2004) Source (archive)
Fireman et al. (2003) Source (archive)
Fischer & Collins (2015) Source (archive)
Fitch (2010) Source (archive)
FiveThirtyEight, “Hack Your Way To Scientific Glory” Source (archive)
Flanagan (1992) Source (archive)
Flanagan (2016) Source (archive)
Fleming (2014) Source (archive)
Fleming et al. (2007) Source (archive)
Fletcher (2015) Source (archive)
Foreign and Commonwealth Office London, “Consolidated Texts of the EU Treaties as Amended by the Treaty of Lisbon” Source (archive)
Frankish (2005) Source (archive)
Frankish (2012a) Source (archive)
Frankish (2012b) Source (archive)
Frankish (2016a) Source (archive)
Frankish (2016b) Source (archive)
Frankish (2016c) Source (archive)
Frankish & Dennett (2004) Source (archive)
Franklin et al. (2012) Source (archive)
Franklin et al. (2016) Source (archive)
Frasch et al. (2016) Source (archive)
Freud et al. (2016) Source (archive)
Friedman (2005) Source (archive)
Friedman-Hill et al. (1995) Source (archive)
Fruita Colorado, “Fruita Community Center” Source (archive)
Furness et al. (2014) Source (archive)
Gagliano et al. (2016) Source (archive)
Gallup Jr. et al. (2011) Source (archive)
Gamez (2008) Source (archive)
Gangopadhyay et al. (2010) Source (archive)
Garamszegi (2016) Source (archive)
Garfield (2016) Source (archive)
Gazzaniga (1992) Source (archive)
Gazzaniga (2000) Source (archive)
Gazzaniga (2002) Source (archive)
Gazzaniga & Campbell (2015) Source (archive)
Gazzaniga & LeDoux (1978) Source (archive)
Gebhart & Schmidt (2013) Source (archive)
Gelman (2016) Source (archive)
Gelman & Geurts (2017) Source (archive)
Gelman & Loken (2013) Source
Gelman & Loken (2014) Source
Gennaro (2011) Source (archive)
Gennaro (2016) Source (archive)
Gentle (2011) Source (archive)
Gentle et al. (2001) Source (archive)
Gerber et al. (2014) Source (archive)
Giacino et al. (2014) Source (archive)
Giambra (2000) Source
Gigerenzer (2007) Source (archive)
Gili et al. (2013) Source (archive)
Ginsburg & Jablonka (2007) Source
Godfrey-Smith (2009) Source (archive)
Godfrey-Smith (2016a) Source (archive)
Godfrey-Smith (2016b) Source (archive)
Godfrey-Smith (2017) Source
Goff (2017a) Source (archive)
Goff (2017b) Source (archive)
Goodale & Ganel (2016) Source (archive)
Goodale & Milner (2013) Source (archive)
Goodale et al. (2001) Source (archive)
Goodfellow et al. (2016) Source (archive)
Goodwin (2015) Source (archive)
Gorber et al. (2007) Source (archive)
Gorber et al. (2009) Source (archive)
Gorea (2015) Source (archive)
Gotman & Kostopoulos (2013) Source (archive)
Graham (2001) Source (archive)
Graham & Burghardt (2010) Source (archive)
Graham & Kennedy (2004) Source (archive)
Graham et al. (2013) Source (archive)
Grahek (2007) Source (archive)
Gray (2004) Source (archive)
Gray & Wegner (2009) Source (archive)
Graziano (2013) Source (archive)
Graziano (2014) Source (archive)
Graziano (2016a) Source (archive)
Graziano (2016b) Source (archive)
Graziano & Webb (2017) Source (archive)
Greaves & Ord (2016) Source
Green & Wikler (1980) Source (archive)
Greenwald (2012) Source (archive)
Grim (2004) Source (archive)
Grimes (1996) Source (archive)
Grisdale (2010) Source (archive)
Grosenick et al. (2007) Source (archive)
Groves et al. (2009) Source (archive)
Grzybowski & Aydin (2007) Source (archive)
Guthrie (1993) Source (archive)
Güzeldere et al. (2000) Source (archive)
Gyulai et al. (1996) Source (archive)
Haffendon & Goodale (1998) Source (archive)
Hajdin (1994) Source
Hájek (2011) Source
Hall (2007) Source (archive)
Halligan (2002) Source (archive)
Hameroff & Penrose (2014) Source (archive)
Hancock (2002) Source (archive)
Hankins (2012) Source (archive)
Hansen et al. (2009) Source (archive)
Hanson (2002) Source (archive)
Hanson (2016) Source (archive)
Hardin (1988) Source (archive)
Harman (1990) Source (archive)
Harman & Lepore (2014) Source (archive)
Harris (2017) Source (archive)
Hartmann et al. (1991) Source (archive)
Hassin (2013) Source (archive)
Hatzimoysis (2007) Source (archive)
HealthMeasures Source (archive)
Healy et al. (2013) Source (archive)
Heene (2010) Source (archive)
Heider (2000) Source (archive)
Heil (2013) Source (archive)
Heilman (1991) Source (archive)
Heilman & Satz (1983) Source (archive)
Held & Špinka (2011) Source (archive)
Held et al. (2011) Source (archive)
Helen Yetter-Chappell, “Published Papers” Source (archive)
Helton (2005) Source (archive)
Herbet et al. (2014) Source (archive)
Herculano-Houzel (2011) Source (archive)
Herculano-Houzel (2016) Source (archive)
Herculano-Houzel (2017a) Source (archive)
Herculano-Houzel (2017b) Source (archive)
Herculano-Houzel & Kaas (2011) Source (archive)
Hernandez-Orallo (2017) Source (archive)
Herndon et al. (1999) Source (archive)
Herzog et al. (2007) Source (archive)
Hesse et al. (2011) Source (archive)
Hirschhorn et al. (2002) Source (archive)
Hirstein (2010) Source (archive)
Hirstein (2012) Source (archive)
Hobson et al. (2000) Source (archive)
Hofree & Winkielman (2012) Source (archive)
Hohwy (2012) Source (archive)
Holt (2003) Source (archive)
Horowitz (2010) Source (archive)
Hou et al. (2017) Source (archive)
Howe (2015) Source (archive)
Howick & Mebius (2015) Source (archive)
Hu & Goodale (2000) Source (archive)
Hubbard & Armstrong (1994) Source (archive)
Hubbard & Vetter (1992) Source (archive)
Hudetz (2012) Source (archive)
Huemer (2016) Source (archive)
Humphrey (2011) Source (archive)
Humphreys (1999) Source (archive)
Humphreys & Riddoch (2013) Source (archive)
Hurlbert (2009) Source (archive)
Hurlburt & Schwitzgebel (2007) Source (archive)
Husain (2008) Source (archive)
Hutson (2012) Source (archive)
Hylton (2014) Source (archive)
Ihle et al. (2017) Source (archive)
Im & Galko (2012) Source (archive)
Ingle (1973) Source (archive)
Inglehart & Welzel (2010) Source (archive)
IntHout et al. (2014) Source (archive)
Introspection and Consciousness, Oxford University Press Source (archive)
Ioannidis (2005a) Source (archive)
Ioannidis (2005b) Source (archive)
Irvine (2013) Source (archive)
Jack & Robbins (2012) Source (archive)
Jackendoff (2007) Source (archive)
Jackson (1998) Source (archive)
Jackson & Lorber (1984) Source (archive)
James et al. (2009) Source (archive)
Jamieson (2007) Source (archive)
Jardri et al. (2013) Source (archive)
Jarvis et al. (2005) Source (archive)
Jaworska & Tannenbaum (2013) Source
Jaynes (1976) Source (archive)
Jaynes (2003) Source (archive)
Jelbert et al. (2014) Source (archive)
Jennings (1906) Source
Jennions & Møller (2003) Source (archive)
Jet Brains, “Download PyCharm” Source (archive)
Jiang et al. (2016) Source (archive)
John et al. (2012) Source (archive)
Johnson (1993) Source (archive)
Jonas & Kording (2017) Source (archive)
Jørgensen et al. (2016) Source (archive)
Jøsang (2016) Source (archive)
Journal of Consciousness Studies, Volume 18, Number 1 (2011) Source (archive)
Journal of Consciousness Studies, Volume 23, Numbers 11-12 (2016) Source (archive)
Joyce (2005) Source (archive)
Kaas (2009) Source (archive)
Kabadayi et al. (2016) Source (archive)
Kaempf & Greenberg (1990) Source (archive)
Kagan (2016) Source (archive)
Kaiser (2017) Source (archive)
Kaminski (2016) Source (archive)
Kammerer (2016) Source (archive)
Kapur et al. (1994) Source (archive)
Kardish et al. (2015) Source (archive)
Karmakar et al. (2015) Source (archive)
Karp (2016) Source (archive)
Karpathy (2016) Source (archive)
Katz (2000) Source (archive)
Katz (2013) Source (archive)
Keijzer (2012) Source (archive)
Keltner et al. (2013) Source (archive)
Kemmerer (2015) Source (archive)
Kent Berridge Affective Neuroscience & Biopsychology Lab Source (archive)
Key (2015) Source (archive)
Key (2016) Source (archive)
Khan et al. (1995) Source (archive)
Kihlstrom (2013) Source (archive)
Kilteni et al. (2015) Source (archive)
Kim (2010) Source (archive)
Kim et al. (2014) Source (archive)
King (2013) Source (archive)
King (2016a) Source (archive)
King (2016b) Source (archive)
Kirk (1994) Source (archive)
Kirk (2007) Source (archive)
Kirk (2015) Source (archive)
Kirk (2017) Source (archive)
Kitano (2007) Source (archive)
Klein (2010) Source (archive)
Klein (2015) Source (archive)
Klein (2017a) Source (archive)
Klein (2017b) Source (archive)
Klein & Barron (2016) Source (archive)
Klein & Hirachan (2014) Source (archive)
Klein & Hohwy (2015) Source (archive)
Koch (2004) Source (archive)
Koch et al. (2016) Source (archive)
Konishi & Smallwood (2016) Source (archive)
Kotseruba et al. (2016) Source (archive)
Kowalski et al. (2012) Source (archive)
Kravitz et al. (2011) Source (archive)
Kriegel (2015) Source (archive)
Kubovy & Pomerantz (1981) Source (archive)
Kuhn (2012) Source (archive)
Kühn & Haddadin (2017) Source (archive)
Kuijsten (2008) Source (archive)
Kuncel et al. (2005) Source (archive)
Kunzendorf & Wallace (2000) Source (archive)
Kyselo & Paolo (2015) Source (archive)
LaBerge & DeGracia (2000) Source (archive)
LaChat (1996) Source (archive)
Lackner (1988) Source (archive)
Ladner et al. (2016) Source (archive)
Lagercrantz (2016) Source (archive)
Lambert & Kinsley (2004) Source (archive)
Lamme (2010) Source (archive)
Lample & Chaplot (2016) Source (archive)
Langdon et al. (2014) Source (archive)
Langland-Hassan (2015) Source (archive)
Långsjö et al. (2012) Source (archive)
Laplane & Dubois (2001) Source (archive)
Laplane et al. (1984) Source (archive)
Lau & Rosenthal (2011) Source (archive)
Laureys (2005a) Source (archive)
Laureys (2005b) Source (archive)
Laureys et al. (2015) Source (archive)
Le Neindre et al. (2009) Source (archive)
Le Neindre et al. (2017) Source (archive)
Lecours (1998) Source (archive)
LeDoux (2015) Source (archive)
Lee (2014) Source (archive)
Lee (2015) Source (archive)
Leek & Jager (2017) Source (archive)
Leisman & Koch (2009) Source (archive)
Lenay et al. (2003) Source (archive)
Lessells & Boag (1987) Source (archive)
Leu-Semenescu et al. (2013) Source (archive)
Levin (2013) Source
Levy & Newborn (1991) Source (archive)
Lewin (1980) Source (archive)
Lewis (2001) Source (archive)
Lewis (2013) Source (archive)
Leys & Henon (2013) Source (archive)
Liao (2016) Source (archive)
Lieberman (2013) Source (archive)
Life, “Headless Rooster: Beheaded chicken lives normally after freak decapitation by ax” Source (archive)
Lin (2015) Source (archive)
Lin et al. (2006) Source (archive)
Liu & Fridovich (1996) Source (archive)
Liu & Schubert (2010) Source
Lockhart (2000) Source (archive)
Loeser & Treede (2008) Source (archive)
Lomber & Malhotra (2008) Source (archive)
Loosemore (2012) Source (archive)
Loukola et al. (2017) Source (archive)
Low (2012) Source
Ludwig (2014) Source (archive)
Ludwig (2015) Source (archive)
Luhrmann (2011) Source (archive)
Lui et al. (2011) Source (archive)
Luijtelaar et al. (2014) Source (archive)
Luke Muehlhauser, “Other Writings” Source (archive)
Lurz (2009) Source (archive)
Lycan (1996) Source (archive)
Lynn et al. (2014) Source (archive)
Lyon (2015) Source (archive)
MacAskill (2014) Source (archive)
Macchi et al. (2016) Source (archive)
Machado (2007) Source (archive)
Mackie & Burighel (2005) Source (archive)
MacLean et al. (2014) Source (archive)
Macphail (1987) Source (archive)
Macphail (1998) Source (archive)
Macphail (2000) Source (archive)
MacQueen (2015) Source (archive)
Maginnis (2006) Source (archive)
Mahowald (2011) Source (archive)
Maidenbaum et al. (2014) Source (archive)
Maley & Piccinini (2013) Source (archive)
Mallatt & Feinberg (2016) Source (archive)
Mallinson (2016) Source
Mandik (2013) Source (archive)
Marblestone et al. (2016) Source (archive)
Marino (2017a) Source (archive)
Marino (2017b) Source (archive)
Marinsek & Gazzaniga (2016) Source (archive)
Markkula (2015) Source (archive)
Markowitsch (2008) Source (archive)
Marshall (2010) Source (archive)
Marshall (2014) Source (archive)
Marshall (2016) Source (archive)
Martinson et al. (2005) Source (archive)
Mashour (2009) Source (archive)
Mashour & Alkire (2013) Source (archive)
Mashour & LaRock (2008) Source (archive)
Matheny & Chan (2005) Source (archive)
Matthews & Dresner (2016) Source (archive)
Mautner (2009) Source (archive)
Mazzola & Deuling (2013) Source (archive)
McCarthy-Jones (2012) Source (archive)
McDermott (2001) Source (archive)
McDermott (2007) Source (archive)
McGinn (2004) Source (archive)
McGovern & Baars (2007) Source (archive)
McLaughlin et al. (2009) Source (archive)
McNamara & Butler (2013) Source (archive)
Menzel & Fischer (2011) Source (archive)
Merker (2005) Source (archive)
Merker (2007) Source (archive)
Merker (2013) Source (archive)
Merker (2016) Source (archive)
Metzinger (2003) Source (archive)
Metzinger (2010) Source (archive)
Metzinger (2013) Source (archive)
Meyer et al. (2009) Source (archive)
Michael Bach, “Visual Phenomena & Optical Illusions: 132 of them” Source (archive)
Mike the Headless Chicken, “History” Source (archive)
Miklósi & Soproni (2006) Source (archive)
Miller (2000) Source (archive)
Miller (2013) Source (archive)
Miller (2015) Source (archive)
Millsap (2011) Source (archive)
Milner & Goodale (2006) Source (archive)
Minds and Machines, Volume 4, Issue 4 (1994) Source
MIT Press Source (archive)
Mitchell (2005) Source (archive)
Mole (2013) Source (archive)
Möller (2016) Source (archive)
Molyneux (2012) Source (archive)
Moro et al. (2011) Source (archive)
Morris (2011) Source (archive)
Morris (2015) Source (archive)
Muehlhauser (2010) Source (archive)
Muehlhauser (2011) Source (archive)
Muehlhauser (2015) Source (archive)
Muehlhauser & Williamson (2013) Source (archive)
Muehlhauser, Animal consciousness elicitation survey, 2016 Source (archive)
Muehlhauser, Animal consciousness self-elicitations chart, 2016 Source
Muehlhauser, Animal consciousness self-elicitations spreadsheet, 2016 Source
Mueller (2013) Source (archive)
Munevar (2012) Source (archive)
Nagel (1974) Source (archive)
Nagel (1997) Source (archive)
Nahm et al. (2012) Source (archive)
Nakagawa & Parker (2015) Source (archive)
Nakagawa & Santos (2012) Source (archive)
Nakagawa et al. (2017) Source (archive)
Nash & Barnier (2008) Source (archive)
Necker (2014) Source (archive)
Newcombe & Johnson (1999) Source (archive)
Newell & Shanks (2014) Source (archive)
Newson (2007) Source (archive)
Ng (1995) Source (archive)
Nichols & Stich (2003) Source (archive)
Nicol (2015) Source (archive)
Nissen et al. (2016) Source (archive)
Norberg (2016) Source (archive)
Norwood & Lusk (2011) Source (archive)
Nosek et al. (2015) Source (archive)
Nuijten et al. (2016) Source (archive)
O’Connor et al. (2007) Source (archive)
O’Neill (2015) Source (archive)
O’Regan (2011) Source (archive)
O’Regan (2012) Source (archive)
O’Regan & Noe (2001) Source (archive)
Odgaard-Jensen et al. (2011) Source (archive)
OECD (2013) Source (archive)
Oesterheld (2016) Source (archive)
Ohga et al. (1993) Source
Oizumi et al. (2014) Source (archive)
Olkowicz et al. (2016) Source (archive)
Olmstead & Kuhlmeier (2015) Source (archive)
Ord (2006) Source (archive)
Ord (2015) Source (archive)
Ortega (2005) Source
Ostrovsky et al. (2006) Source (archive)
Osvath & Karvonen (2012) Source (archive)
Our non-verbatim summary of a conversation with Aaron Sloman, July 3, 2016 Source
Our non-verbatim summary of a conversation with Brian Tomasik, October 6, 2016 Source
Our non-verbatim summary of a conversation with Carl Shulman, August 19, 2016 Source
Our non-verbatim summary of a conversation with David Chalmers, May 20, 2016 Source
Our non-verbatim summary of a conversation with Derek Shiller, January 24, 2017 Source
Our non-verbatim summary of a conversation with Gary Drescher, July 18, 2016 Source
Our non-verbatim summary of a conversation with James Rose, November 18, 2016 Source
Our non-verbatim summary of a conversation with Joel Hektner, December 17, 2015 Source
Our non-verbatim summary of a conversation with Keith Frankish, January 24, 2017 Source
Our non-verbatim summary of a conversation with Michael Tye, August 24, 2016 Source
Overgaard (2011) Source (archive)
Overgaard (2015) Source (archive)
Owen (2013) Source (archive)
Owen et al. (2002) Source (archive)
Oxford University Press Source (archive)
Palazzo et al. (2013) Source (archive)
Panayiotopoulos (2008) Source (archive)
Panksepp (2008) Source
Paoni et al. (1981) Source (archive)
Papineau (1993) Source (archive)
Papineau (2002) Source (archive)
Papineau (2003) Source (archive)
Papineau (2009) Source (archive)
Park et al. (2008) Source (archive)
Parker (2003) Source (archive)
Parker et al. (2016) Source (archive)
Pärnpuu (2016) Source (archive)
Parvizi & Damasio (2001) Source (archive)
Pastor et al. (1996) Source (archive)
Paul et al. (2007) Source (archive)
Pearce (2008) Source (archive)
Pearce (2013) Source (archive)
Pekala & Kumar (2000) Source (archive)
Penry & Dreifuss (1969) Source (archive)
Pereboom (2011) Source (archive)
Perry (2009) Source (archive)
Perry et al. (2002) Source (archive)
Pessoa & Weerd (2003) Source (archive)
PETRL, “An Interview with Eric Schwitzgebel and Mara Garza” Source (archive)
Philippi et al. (2012) Source (archive)
Phillips (2014) Source (archive)
Phillips (2017a) Source (archive)
Phillips (2017b) Source (archive)
Philomel Records, “Diana Deutsch’s Audio Illusions” Source (archive)
Philosophy, et cetera, “The Cartesian Theatre” Source (archive)
PhilPapers, “The PhilPapers Surveys” Source (archive)
Pinker (2007) Source (archive)
Pinker (2011) Source (archive)
Pinto et al. (2017) Source (archive)
Pistorius (2013) Source (archive)
Pitts et al. (2014) Source (archive)
Place (1956) Source (archive)
PNAS, “Rights and Permissions” Source (archive)
Poe (2014) Source (archive)
Pokahr et al. (2005) Source (archive)
Poldrack et al. (2017) Source (archive)
Polger (2017) Source (archive)
Polger & Shapiro (2016) Source (archive)
Politis & Loane (2012) Source (archive)
Post (2004) Source (archive)
Powers (2014) Source (archive)
Prasad & Cifu (2015) Source (archive)
Prasad et al. (2013) Source (archive)
Preston, “Analytic Philosophy” Source (archive)
Preti (2007) Source (archive)
Preti (2011) Source (archive)
Price (1999) Source
Prigatano (2010) Source (archive)
Prince et al. (2008) Source (archive)
Prinz (2007) Source (archive)
Prinz (2012) Source (archive)
Prinz (2015a) Source (archive)
Prinz (2015b) Source (archive)
Prinz (2016) Source (archive)
Prinz et al. (2011) Source (archive)
PsychonautWiki, “Psychedelic” Source (archive)
Puccetti (1998) Source (archive)
Purves et al. (2011) Source (archive)
Putnam (1988) Source (archive)
Pyke (2014) Source (archive)
Qadri & Cook (2015) Source (archive)
Quian Quiroga (2012) Source (archive)
Quora, “What is the most intelligent thing a non-human animal has done?” Source
Raby et al. (2007) Source (archive)
Rachels (1990) Source (archive)
Rachels (2004) Source (archive)
Ramachandran & Brang (2009) Source (archive)
Random.org Source (archive)
Rao & Gershon (2016) Source (archive)
Reggia (2013) Source (archive)
Rehkämper et al. (2003) Source (archive)
Reilly & Schachtman (2008) Source (archive)
Reiner et al. (2004) Source (archive)
Reinhart et al. (2015) Source (archive)
Reise & Revicki (2015) Source (archive)
Remmer (2015) Source (archive)
Remy & Watanabe (1993) Source
Revonsuo (2009) Source (archive)
Rey (1983) Source (archive)
Rey (1992) Source (archive)
Rey (1995) Source (archive)
Rey (2007) Source (archive)
Rey (2015) Source (archive)
Rey (2016) Source (archive)
Rey et al. (2014) Source (archive)
Rial et al. (2008) Source (archive)
Ricciardelli (1993) Source (archive)
Rich (1997) Source (archive)
Riley & Freeman (2004) Source (archive)
Ringkamp et al. (2013) Source
Rinner et al. (2015) Source (archive)
Ritchie (2017) Source (archive)
Robinson (2015) Source
Robinson et al. (2015) Source (archive)
Rodd (1990) Source (archive)
Roelofs (2016) Source (archive)
Rolls (2013) Source (archive)
Romeijn & Roy (2014) Source (archive)
Rosati (1995) Source (archive)
Rose (2002) Source (archive)
Rose (2016) Source (archive)
Rose & Dietrich (2009) Source (archive)
Rose et al. (2014) Source (archive)
Rosenthal (1990) Source
Rosenthal (2006) Source (archive)
Rosenthal (2009) Source (archive)
Rossano (2003) Source (archive)
Roth & Dickie (2005) Source (archive)
Rowlands (2001) Source (archive)
Rusanen & Lappi (2016) Source (archive)
Russell & Norvig (2009) Source (archive)
Rutiku et al. (2015) Source (archive)
Rutledge et al. (2014) Source (archive)
Ryder (1996) Source (archive)
Sachs (2011) Source (archive)
Sachs (2015) Source (archive)
Safina (2015) Source (archive)
Sandberg (2014) Source (archive)
Sangiao-Alvarellos et al. (2004) Source (archive)
Sanz et al. (2013) Source (archive)
Sapontzis (2004) Source (archive)
Sato & Aoki (2006) Source (archive)
Sayre-McCord (2012) Source (archive)
Scales et al. (2005) Source (archive)
Schechter (2012) Source (archive)
Schechter (2014) Source
Schenck (2015) Source (archive)
Schenk & McIntosh (2009) Source (archive)
Schiff (2010) Source (archive)
Schiller & Tehovnik (2015) Source (archive)
Schneider & Velmans (2017) Source (archive)
Schubert & Masters (1991) Source (archive)
Schulze-Makuch & Irwin (2008) Source (archive)
Schwarz et al. (2008) Source (archive)
Schwitzgebel (2007a) Source (archive)
Schwitzgebel (2007b) Source (archive)
Schwitzgebel (2008) Source (archive)
Schwitzgebel (2011) Source (archive)
Schwitzgebel (2012) Source (archive)
Schwitzgebel (2015) Source (archive)
Schwitzgebel (2016) Source (archive)
Schwitzgebel & Garza (2015) Source (archive)
Seager & Allen-Hermanson (2010) Source
Searle (1992) Source (archive)
Searle (1997) Source (archive)
Searle (2002) Source (archive)
Sellars (1962) Source (archive)
Seth & Baars (2005) Source (archive)
Seth et al. (2006) Source (archive)
Shagrir (2012) Source (archive)
Shanahan (2010) Source (archive)
Shanon (2002) Source (archive)
Shapiro & Todorovic (2017) Source (archive)
Shepard (1964) Source (archive)
Shepard (1990) Source (archive)
Shepherd (2015) Source (archive)
Shepherd & Levy (forthcoming) Source (archive)
Shermer (2015) Source (archive)
Shettleworth (2009) Source (archive)
Shevlin (2016) Source (archive)
Shiller (2016) Source (archive)
Shioi et al. (1987) Source (archive)
Shor & Orne (1965) Source (archive)
Shriver (2014) Source (archive)
Shulman (2015) Source (archive)
Shumaker et al. (2011) Source (archive)
Siclari et al. (2017) Source (archive)
Siegel (2008) Source (archive)
Simmons et al. (2011) Source (archive)
Simon (2016) Source (archive)
Simonsohn et al. (2015) Source (archive)
Singer (2011) Source (archive)
Sinnott-Armstrong (2016) Source (archive)
Sinnott-Armstrong & Miller (2007) Source (archive)
Siontis et al. (2010) Source (archive)
Sittler-Adamczewski (2017) Source (archive)
Slate Star Codex, “Devoodooifying Psychology” Source (archive)
Smaldino & McElreath (2016) Source (archive)
Smallwood (2015) Source (archive)
Smart (1959) Source (archive)
Smart (2006) Source (archive)
Smart (2007) Source (archive)
Smith (1988) Source (archive)
Smith (2011) Source (archive)
Smith (2016a) Source (archive)
Smith (2016b) Source (archive)
Smith & Boyd (1991) Source (archive)
Smith & Lewin (2009) Source (archive)
Smith & Washburn (2005) Source (archive)
Smith et al. (2011) Source (archive)
Sneddon (2002) Source (archive)
Sneddon (2009) Source (archive)
Sneddon (2015) Source (archive)
Sneddon et al. (2003) Source (archive)
Sneddon et al. (2014) Source (archive)
Snowden et al. (2012) Source (archive)
Soares (2015) Source (archive)
Soares (2016) Source (archive)
Sobel (1999) Source (archive)
Sobel (2001) Source (archive)
Sobel (2017) Source (archive)
Software Engineering Stack Exchange, “Is Python Interpreted or Compiled?” Source (archive)
Sourjik & Wingreen (2012) Source (archive)
Spillmann and Werner (1990) Source (archive)
Squair (2012) Source (archive)
Stalans (2012) Source (archive)
Stamenov (1997) Source (archive)
Standish (2013) Source (archive)
Stanovich (2004) Source (archive)
Stanovich (2013) Source (archive)
Stanovich et al. (2016) Source (archive)
Starr et al. (2009) Source (archive)
Steegen et al. (2014) Source (archive)
Steele-Russell (1994) Source (archive)
Steele-Russell et al. (1979) Source (archive)
Sterzer (2013) Source (archive)
Stiles & Shimojo (2015) Source (archive)
Stone et al. (1999) Source (archive)
Stone et al. (2007) Source (archive)
Strausfeld (2012) Source (archive)
Strayer & Hummon (2001) Source (archive)
Streiner & Norman (2008) Source (archive)
Stumbrys et al. (2014) Source (archive)
Subitzky (2003) Source (archive)
Sun & Franklin (2007) Source (archive)
Sunstein (2005) Source (archive)
Sutton et al. (1980) Source (archive)
Suziedelyte & Johar (2013) Source (archive)
Swan (2013) Source (archive)
Swanton (1996) Source (archive)
Sytsma (2014) Source (archive)
Szucs & Ioannidis (2017) Source (archive)
Taborsky (2010) Source (archive)
Takeno (2012) Source
Tamietto & de Gelder (2010) Source (archive)
Tartaglia (2013) Source (archive)
Taurek (1977) Source (archive)
Taylor & Vickers (2016) Source (archive)
Tendal et al. (2011) Source (archive)
Tenney & Glauser (2013) Source (archive)
Terada et al. (2016) Source (archive)
Tesla, “Autopilot” Source (archive)
Thagard (1992) Source (archive)
Thagard (1999) Source (archive)
Thagard (2008) Source (archive)
Thagard & Stewart (2014) Source (archive)
The Official Mike the Headless Chicken Book, “Home Page” Source
The Open University, “Thought and Experience: Track 14” Source (archive)
Theiner (2014) Source (archive)
Thomas & Frankenberg (2002) Source (archive)
Thompson (1993) Source (archive)
Thompson (2009) Source (archive)
TimeTree Source (archive)
TimeTree, “Human versus Bantam” Source (archive)
TimeTree, “Human versus Bovine” Source (archive)
TimeTree, “Human versus Chimpanzee” Source (archive)
TimeTree, “Human versus E. coli” Source (archive)
TimeTree, “Human versus Fruit Fly” Source (archive)
TimeTree, “Human versus Japanese Blue Crab” Source (archive)
TimeTree, “Human versus Rainbow Trout” Source (archive)
Tittle (2004) Source (archive)
Togelius et al. (2010) Source (archive)
Tolman (1932) Source (archive)
Tomasik (2014a) Source (archive)
Tomasik (2014b) Source (archive)
Tomasik (2014c) Source (archive)
Tomasik (2014d) Source (archive)
Tomasik (2015a) Source (archive)
Tomasik (2015b) Source (archive)
Tomasik (2016a) Source (archive)
Tomasik’s hard-problem agent (switched to Python 3 syntax and more thoroughly commented) by Luke Muehlhauser Source
Tononi (2004) Source (archive)
Tononi (2014) Source
Tononi (2015) Source (archive)
Tononi & Koch (2015) Source (archive)
Tononi et al. (2015a) Source (archive)
Tononi et al. (2015b) Source (archive)
Tononi et al. (2016) Source (archive)
Trestman (2012) Source (archive)
Trevarthen & Reddy (2017) Source (archive)
Trewavas (2005) Source (archive)
Trout (2007) Source (archive)
Trout (2014) Source (archive)
Trout (2016) Source (archive)
Truog & Fackler (1992) Source (archive)
Tsakiris (2010) Source (archive)
Turner (2013) Source (archive)
Tutorials Point, “Execute Python-3 Online” Source (archive)
Tuyttens et al. (2016) Source (archive)
Tye (1995) Source (archive)
Tye (2000) Source (archive)
Tye (2009a) Source (archive)
Tye (2009b) Source (archive)
Tye (2015) Source
Tye (2016) Source (archive)
Unger (1988) Source (archive)
University of Toronto, “Deep Learning in Computer Vision: Winter 2016” Source (archive)
Urquiza-Haas & Kotrschal (2015) Source (archive)
Uttal (2011) Source (archive)
Uttal (2012) Source (archive)
Uttal (2015) Source (archive)
Uttal (2016) Source (archive)
Uttal & Campbell (2012) Source (archive)
Vaitl et al. (2005) Source (archive)
Vallar & Ronchi (2006) Source (archive)
Vallortigara (2000) Source (archive)
van Duijn et al. (2006) Source (archive)
Van Gulick (1995) Source (archive)
Van Gulick (2009) Source (archive)
Van Gulick (2014) Source
van Wilgenburg & Elgar (2013) Source (archive)
van Zanden et al. (2014) Source (archive)
Vanpaemel et al. (2015) Source (archive)
Varner (2012) Source (archive)
Veatch (1975) Source (archive)
Velmans (2012) Source (archive)
Verschure et al. (2014) Source (archive)
Vimal (2009) Source (archive)
Višak (2013) Source (archive)
Vito & Bartolomeo (2016) Source (archive)
Vonk & Shackelford (2012) Source (archive)
Voss & Hobson (2015) Source (archive)
Vuilleumier (2004) Source (archive)
Vul & Pashler (2008) Source (archive)
Vul & Pashler (2017) Source (archive)
Wadhams & Armitage (2004) Source (archive)
Waisman et al. (2014) Source (archive)
Waller et al. (2013) Source (archive)
Walsh et al. (2015) Source (archive)
Walters (1996) Source (archive)
Walters et al. (1983) Source (archive)
Ward (2011) Source (archive)
Ward (2013) Source (archive)
Warren (1997) Source (archive)
Wasserman et al. (2012) Source (archive)
Watt & Pincus (2004) Source (archive)
Waytz et al. (2012) Source (archive)
Webb & Graziano (2015) Source (archive)
Weisberg (2011) Source (archive)
Weisberg (2014) Source (archive)
Weiskrantz (1997) Source (archive)
Weiskrantz (2007) Source (archive)
Weiskrantz (2008) Source (archive)
Westfall & Yarkoni (2016) Source (archive)
Wetlesen (1999) Source (archive)
White (1991) Source (archive)
Whitehead & Rendell (2014) Source (archive)
Whittingham et al. (2006) Source (archive)
Wicherts et al. (2011) Source (archive)
Wicherts et al. (2016) Source (archive)
Wikimedia Commons, “File:1424 Visual Streams.jpg” Source (archive)
Wikipedia, “A* search algorithm” Source (archive)
Wikipedia, “AI-complete” Source (archive)
Wikipedia, “Alex (parrot)” Source (archive)
Wikipedia, “Alpha–beta pruning” Source (archive)
Wikipedia, “AlphaGo versus Lee Sedol” Source (archive)
Wikipedia, “AlphaGo” Source (archive)
Wikipedia, “Analytical Engine” Source (archive)
Wikipedia, “Anatomically modern human” Source (archive)
Wikipedia, “Anencephaly” Source (archive)
Wikipedia, “Anthropomorphism” Source (archive)
Wikipedia, “Antonie van Leeuwenhoek” Source (archive)
Wikipedia, “Application-specific integrated circuit” Source (archive)
Wikipedia, “Autonomous car” Source (archive)
Wikipedia, “Autotomy” Source (archive)
Wikipedia, “Belief–desire–intention software model” Source (archive)
Wikipedia, “Biological immortality” Source (archive)
Wikipedia, “Björn Merker” Source (archive)
Wikipedia, “Blind spot (vision)” Source (archive)
Wikipedia, “Caenorhabditis elegans” Source (archive)
Wikipedia, “Cattle” Source (archive)
Wikipedia, “Chaser (dog)” Source (archive)
Wikipedia, “Chicken” Source (archive)
Wikipedia, “Chimpanzee” Source (archive)
Wikipedia, “Clever Hans” Source (archive)
Wikipedia, “Collision detection” Source (archive)
Wikipedia, “David Marr (neuroscientist): Levels of analysis” Source (archive)
Wikipedia, “Device driver” Source (archive)
Wikipedia, “Drosophila melanogaster” Source (archive)
Wikipedia, “Élan vital” Source (archive)
Wikipedia, “Enteric nervous system” Source (archive)
Wikipedia, “Escherichia coli” Source (archive)
Wikipedia, “Evidence-based medicine” Source (archive)
Wikipedia, “Evolutionary algorithm” Source (archive)
Wikipedia, “Extremophile” Source (archive)
Wikipedia, “False awakening” Source (archive)
Wikipedia, “Flow (psychology)” Source (archive)
Wikipedia, “Global workspace theory (GWT)” Source (archive)
Wikipedia, “Homomorphic encryption: Fully homomorphic encryption” Source (archive)
Wikipedia, “Hydrocephalus” Source (archive)
Wikipedia, “Integrated development environment” Source (archive)
Wikipedia, “Intentional stance: Dennett’s three levels” Source (archive)
Wikipedia, “Knowledge argument” Source (archive)
Wikipedia, “Life: Biology” Source (archive)
Wikipedia, “Lisp (programming language)” Source (archive)
Wikipedia, “List of animal welfare groups” Source (archive)
Wikipedia, “List of animals by number of neurons” Source (archive)
Wikipedia, “List of longest-living organisms” Source (archive)
Wikipedia, “List of people with locked-in syndrome” Source (archive)
Wikipedia, “Loop quantum gravity” Source (archive)
Wikipedia, “Microsoft Windows” Source (archive)
Wikipedia, “Mike the Headless Chicken” Source (archive)
Wikipedia, “Mirror test” Source (archive)
Wikipedia, “Neuropathic pain” Source (archive)
Wikipedia, “Obligate parasite” Source (archive)
Wikipedia, “Pareto principle” Source (archive)
Wikipedia, “Particle horizon” Source (archive)
Wikipedia, “Pathfinding” Source (archive)
Wikipedia, “Pelagibacter ubique” Source (archive)
Wikipedia, “Persistent vegetative state” Source (archive)
Wikipedia, “Person-affecting view” Source (archive)
Wikipedia, “Philosophical Investigations” Source (archive)
Wikipedia, “Phlogiston theory” Source (archive)
Wikipedia, “Portunus trituberculatus” Source (archive)
Wikipedia, “Python (programming language)” Source (archive)
Wikipedia, “r/K selection theory” Source (archive)
Wikipedia, “Rainbow trout” Source (archive)
Wikipedia, “Rapid eye movement sleep behavior disorder” Source (archive)
Wikipedia, “Shared memory” Source (archive)
Wikipedia, “Sorting algorithm” Source (archive)
Wikipedia, “Split-brain” Source (archive)
Wikipedia, “Standard Model” Source (archive)
Wikipedia, “Symbolic artificial intelligence” Source (archive)
Wikipedia, “Vespula austriaca” Source (archive)
Wikipedia, “Von Neumann architecture” Source (archive)
Wilczek (2008) Source (archive)
Wilson (2004) Source (archive)
Wilson (2016) Source (archive)
Wimsatt (1976) Source (archive)
Wimsatt (2007) Source (archive)
Windt (2011) Source (archive)
Windt et al. (2016) Source (archive)
Winkielman & Berridge (2004) Source (archive)
Wise (2003) Source (archive)
Wittenberg & Baumeister (1999) Source (archive)
Wolpert (2011) Source (archive)
Wood (2011) Source (archive)
Wooldridge (1963) Source (archive)
Wooldridge (2000) Source (archive)
WorldAnimal.net, “Animal Protection in World Constitutions” Source (archive)
Wright et al. (2012) Source (archive)
Wuichet & Zhulin (2010) Source (archive)
Wulff (2014) Source
Wynne (2004) Source (archive)
Wynne & Udell (2013) Source (archive)
Xiao & Güntürkün (2009) Source (archive)
Yamamoto et al. (1990) Source (archive)
Yong (2016) Source (archive)
Yong (2017) Source (archive)
Young (2008) Source (archive)
Young (2012) Source (archive)
Young & Karr (2011) Source (archive)
Young & Leafhead (1996) Source
YouTube, “Computer teaches itself to play games – BBC News” Source
YouTube, “Crab amputates his own claw” Source
YouTube, “Fluent Aphasia (Wernicke’s Aphasia)” Source
YouTube, “Infinite Mario AI – Long Level” Source
YouTube, “Interview with Jesse Prinz” Source
Yudkowsky (2007) Source (archive)
Yudkowsky (2008a) Source (archive)
Yudkowsky (2008b) Source (archive)
Yudkowsky (2008c) Source (archive)
Yudkowsky (2008d) Source (archive)
Zeman et al. (2015) Source (archive)
Zhao (2016) Source (archive)
Zihl (2013) Source (archive)

1.On experts, see my comments here.

On advocacy groups, see my short history of animal advocacy here, including the sources listed in my annotated bibliography. See also Wikipedia’s List of animal welfare groups.

On government agencies, including intergovernmental agencies and resolutions, here are some example texts that arguably assume the moral patienthood of at least some animals:

Article 13 of Title II of the European Union’s Lisbon Treaty, agreed to by all EU member states, reads: “In formulating and implementing the Union’s agriculture, fisheries, transport, internal market, research and technological development and space policies, the Union and the Member States shall, since animals are sentient beings, pay full regard to the welfare requirements of animals, while respecting the legislative or administrative provisions and customs of the Member States relating in particular to religious rites, cultural traditions and regional heritage.”

Many national constitutions include provisions related to the protection of animals, some of which seem to imply some moral status for some animals (see WorldAnimal.net’s Animal Protection in World Constitutions).

Wise (2003) provides a brief overview of animal protection laws in a variety of jurisdictions, many of which seem to assume some moral status for animals. Textbooks on the topic (which I haven’t read) include Waisman et al. (2014), Frasch et al. (2016), Favre (2011), and Karp (2016).

2.For example see Appendix Z.5.

3.Related terms include “moral status,” “moral standing,” “moral considerability,” “personhood,” “moral subject” (this is fairly rare, but see e.g. Wetlesen 1999), and “member of the moral community.” Sometimes these terms are used more-or-less interchangeably, and sometimes they are not.

My preference for the terms “moral patient” and “moral patienthood” (see e.g. Gray & Wegner 2009; Bernstein 1998) is a pragmatic one. “Moral patient” is more succinct than “being with moral status,” “being with moral standing,” “being worthy of moral consideration,” and “member of the moral community.” It is less succinct and common than “person,” but “person” comes with fairly strong connotations of properties such as (1) having a temporally extended narrative about one’s life, (2) having a fairly sophisticated kind of agency in the world, (3) being human or human-like, (4) being able to participate in a community of other moral agents, and so on. Moreover, it is often used to mean some or all of those things denotatively, though in some other cases it is operationally defined as “being with moral status.”

One strike against “moral patient” is that it comes with a problematic connotation of helplessness, but not a terribly strong one, I don’t think. I presume “moral patient” gets its connotations by association with the concept of a medical patient, and while medical patients are primarily in a role of receiving help (or harm) from medical professionals (the intended connotation), they are not in most cases helpless (an unintended connotation).

For reviews of these related terms and concepts, see Newson (2007), Hancock (2002), Jaworska & Tannenbaum (2013), Morris (2011), ch. 3 of Beauchamp & Childress (2012), Kagan (2016), and the entry on “Moral Status” by James W. Walters on pp. 1855-1864 of Post (2004).

Some authors define “moral patient” such that a moral patient cannot also be a moral agent (e.g. Rodd 1990, p. 241), but that is not how I use the term.

Some authors use the term “moral patiency” instead of “moral patienthood.”

For some historical background on the term “moral patient,” see Hajdin (1994), p. 180.

4.The relevant studies, naturally enough, are those testing the effectiveness of behavioral treatments on sleep quality! And, assuming the perspective of evidence-based medicine, it’s fairly clear which of those studies are most informative: it’s the randomized controlled trials, especially those with large numbers of subjects, well-validated outcome measures, long-term follow-up, and other properties which improve the internal and external validity of a study.

5.Thus I have no doubt gotten some things wrong, and said some things that are silly, and I hope readers will call my attention to whatever errors I have made.

6.If you want to read a series of arguments about the likely distribution of conscious experience, see e.g. Tye (2016). Also see notes from my conversation with Michael Tye. I disagree with Tye in many places, but his book is perhaps the most thorough theory-neutral “argumentative” book on the distribution question I’ve seen. For a review of Tye’s book, see Klein (2017b).

7.Hence, each sub-investigation reported below was cut short long before I “completed” it, to save time (following something like the 80/20 rule). Thus, I can only share initial tentative conclusions based on a variety of partially-completed inquiries.

8.The full report is roughly 140,000 words.

9.For more details on what I mean by “naturalistic” vs. “rationalistic,” see the introductory chapter in Fischer & Collins (2015).

Examples might be more helpful, though. Example works in the “rationalistic” tradition are Chalmers (1996) and Jackson (1998). Example works in the “naturalistic” tradition are Wimsatt (2007) and Smith (2016).

Another way to indicate the tradition of my thinking is to identify myself with (what is often called) “Quinean naturalism,” after Willard van Orman Quine (Hylton 2014; Harman & Lepore 2014). Even better would be to coin the term “Dennettian naturalism” and identify myself with it, due especially (but not exclusively) to the way that Daniel Dennett updated and transformed Quinean naturalism with his greater emphasis on the impacts of both Darwin and the computer on philosophy, two lines of thinking that have greatly impacted my own philosophical thinking. (This doesn’t mean I agree with Dennett on everything about consciousness, of course.)

10.By convention, ichthyologists use “fishes” when referring to multiple fish species, and “fish” when referring to multiple individuals of a single fish species.

11.Below is an incomplete list of alternative approaches for how to think about what should count as “good.” These alternative approaches have their own merits, and in some cases might lead us to make different grantmaking choices if we adopted them in place of our current framing (about moral patients, and dimensions of moral concern). I have not considered these alternatives in detail yet, but I have tried to at least expose myself to a wide range of viewpoints.

Rachels (2004) writes:

There is no characteristic, or reasonably small set of characteristics, that sets some creatures apart from others as meriting respectful treatment. That is the wrong way to think about the relation between an individual’s characteristics and how he or she may be treated. Instead we have an array of characteristics and an array of treatments, with each characteristic relevant to justifying some types of treatment but not others. If an individual possesses a particular characteristic (such as the ability to feel pain), then we may have a duty to treat it in a certain way (not to torture it), even if that same individual does not possess other characteristics (such as autonomy) that would mandate other sorts of treatment (refraining from coercion).

We could spin these observations into a theory of moral standing that would compete with the other theories. Our theory would start like this: There is no such thing as moral standing simpliciter. Rather, moral standing is always moral standing with respect to some particular mode of treatment. A sentient being has moral standing with respect to not being tortured. A self-conscious being has moral standing with respect to not being humiliated. An autonomous being has moral standing with respect to not being coerced. And so on…

It would do no harm, however, and it might be helpful for clarity’s sake, to drop the notion of [moral] “standing” altogether and replace it with a simpler conception. We could just say that the fact that doing so-and-so would cause pain to someone (to any individual) is a reason not to do it. The fact that doing so-and-so would humiliate someone (any individual) is a reason not to do it. And so on. Sentience and self-consciousness fit into the picture like this: Someone’s sentience and someone’s self-consciousness are facts about them that explain why they are susceptible to the evils of pain and humiliation.

We would then see our subject as part of the theory of reasons for action. We would distinguish three elements: what is done to the individual; the reason for doing it or not doing it, which connects the action to some benefit or harm to the individual; and the pertinent facts about the individual that help to explain why he or she is susceptible to that particular benefit or harm…

So, part of our theory of reasons for action would go like this: We always have reason not to do harm. If treating an individual in a certain way harms him or her, that is a reason not to do it. The fact that he or she is autonomous, or self-conscious, or sentient simply helps to explain why he or she is susceptible to particular kinds of harms.

A related but not-identical point is made by Bostrom & Yudkowsky (2014):

Alternatively, one might deny that moral status comes in degrees. Instead, one might hold that certain beings have more significant interests than other beings. Thus, for instance, one could claim that it is better to save a human than to save a bird, not because the human has higher moral status, but because the human has a more significant interest in having her life saved than does the bird in having its life saved.

For additional related arguments against “moral status talk,” see Sachs (2011).

Another approach is to think about moral status in the context of the concept of personhood, which might or might not be sufficient for a being to have moral status. For an overview of such approaches, see Newson (2007).

Substantially different approaches to thinking about what is “good” include deontological approaches, virtue ethics approaches, capabilities approaches, contractualist approaches, and more. For example essays on how these approaches can be applied to concerns about animals, see e.g. Sachs (2015) and the chapters in Part II of Beauchamp & Frey (2011).

See also Crisp & Pummer (2016) on “effective justice” and “effective altruism,” which is relevant due to our substantial ties to the effective altruism community (see e.g. this blog post).

12.That is, we must act under “moral uncertainty.” See e.g. Bykvist (2017); MacAskill (2014); Bogosian (2016); Lockhart (2000); Möller (2016); Greaves & Ord (2016).

See also our blog post on worldview diversification.

13.For example, if you think fishes are moral patients but have very low “intensity of valenced subjective experience” compared to chickens, and you consider “intensity of valenced subjective experience” to be a very important (i.e., heavily-weighted) dimension of moral concern, then you might still prioritize chicken welfare interventions over fish welfare interventions, even if you think that fish and chickens have roughly the same probability of being moral patients at all.
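
To make this concrete, here is a toy calculation. Every number and dimension name below is hypothetical, chosen only to illustrate the structure of the reasoning, not an estimate I endorse:

```python
# Toy calculation only; every number and dimension name below is hypothetical.
# "Expected moral weight" is modeled here as:
#   P(moral patienthood) * weighted sum of scores on dimensions of moral concern.

def toy_moral_weight(p_patienthood, scores, weights):
    """Combine a patienthood probability with weighted dimension scores."""
    return p_patienthood * sum(scores[d] * weights[d] for d in scores)

# Suppose "intensity of valenced subjective experience" is weighted much more
# heavily than some other dimension of moral concern.
weights = {"intensity_of_valenced_experience": 0.9, "other_dimension": 0.1}

chicken = toy_moral_weight(
    0.8, {"intensity_of_valenced_experience": 0.7, "other_dimension": 0.5}, weights)
fish = toy_moral_weight(
    0.8, {"intensity_of_valenced_experience": 0.1, "other_dimension": 0.5}, weights)

# Equal patienthood probabilities (0.8 each), but the chicken's score on the
# heavily-weighted intensity dimension dominates the comparison.
print(chicken, fish)  # approximately 0.544 vs. 0.112
```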

Of course, even after we’ve come to tentative conclusions about a taxon’s “moral weight,” there remain various second-order effects of welfare interventions to evaluate, among other considerations. Example sources on second-order effects and other considerations, with respect to animal welfare in particular, include Matheny & Chan (2005), Norwood & Lusk (2011), Višak (2013), Shulman (2015), Sittler-Adamczewski (2017), and Bostrom (2006).

There are also those who argue that the numbers of moral patients helped or harmed shouldn’t matter, e.g. Taurek (1977), but I won’t discuss that issue here.

14.Though, there may not be a sharp dividing line between beings which are moral patients and beings which are not; see my later comments on the likely-fuzzy line between conscious and non-conscious beings.

15.Introductions to metaethics include Sayre-McCord (2012), Miller (2013), and Prinz (2015).

Personally, I think it’s pretty clear that different people use moral language in different ways, and often a single person uses moral language in different ways at different times. This view is sometimes called “meta-ethical pluralism” (Wright et al. 2012).

16.I don’t claim this is necessarily how other people use moral language, though.

17.For more on conceptual analysis, see Beaney (2014); King (2016).

18.Though, some panpsychists would say a rock is conscious, and thus a moral patient if one also assumes consciousness is sufficient for moral patienthood.

19.See e.g. Lagercrantz (2016); Trevarthen & Reddy (2017); Macphail (1998), pp. 163-173.

20.See e.g. Basl (2014); Schwitzgebel & Garza (2015); Bostrom & Yudkowsky (2014); Sandberg (2014); Reggia (2013); Gamez (2008); Aleksander (2017).

21.See e.g. Rinner et al. (2015).

22.See e.g. Tomasik (2015b).

23.See Young (2012).

For a detailed discussion of whether the autonomic nervous system (of which the enteric nervous system is one part) satisfies various theories and criteria of consciousness, see Ryder (1996). For example, Ryder examines Daniel Dennett’s theory about which processes are sufficient for various kinds of consciousness, and argues that those processes occur in the autonomic nervous system (ANS). Ryder reports:

In conversation, after I pointed out some of the complexities of ANS operation, [Dennett] suggested to me that the ANS would have approximately the same degree of consciousness as someone blind and deaf since birth.

If Dennett is right about that, then my moral intuitions suggest that I should consider the ANS a moral patient, for the same reasons I morally care about the subjective experiences of a human born blind and deaf. Others’ moral intuitions may vary.

24.See e.g. Gazzaniga & LeDoux (1978), ch. 7; Gazzaniga (1992), pp. 121-137; Schechter (2012); Blackmon (2016); Marinsek & Gazzaniga (2016); Pinto et al. (2017).

25.On ecosystems as moral patients, see e.g. Brennan & Lo (2015) and Johnson (1993). On companies as moral patients, see e.g. Graham (2001). On nations as potentially conscious, and thus moral patients under some views, see e.g. Schwitzgebel (2015).

26.On personhood, see e.g. Newson (2007). On interests, see e.g. Jaworska & Tannenbaum (2013).

27.For discussions of the relevance of phenomenal consciousness to moral patienthood, see e.g. Shepherd & Levy (forthcoming).

28.See section 4.1 of Jaworska & Tannenbaum (2013). Also see Wasserman et al. (2012).

29.See section 4.2 of Jaworska & Tannenbaum (2013).

30.See section 4.3 of Jaworska & Tannenbaum (2013).

31.See references in sections 4.4 and 4.5 of Jaworska & Tannenbaum (2013) and the discussion of deep ecology in Brennan & Lo (2015).

32.Jaworska & Tannenbaum (2013):

Accounts differ on what it is about the individual that grounds or confers moral status and to what degree, with implications for which beings do or do not have moral status and for their comparative status… For each account discussed, one could hold either a threshold or scalar conception of moral status, though the former is more commonly found in the literature… According to the threshold conception, as it is usually discussed, if capacity C grounds FMS [full moral status], then any being that has C, regardless of how well it can exercise this capacity, has as much moral status as any other being that has C and this status is full. If C is not only sufficient but necessary for FMS, then all beings lacking C would not have FMS, though the threshold conception would nevertheless leave it open whether having some other feature (e.g., parts of C or something lesser but akin to C) might ground lesser degrees of moral status. In contrast, a scalar conception of moral status would hold that if capacity C grounds moral status, then any being who has C has some status; the better it can exercise this capacity, the higher its degree of moral status… [Or] instead of focusing on how well capacity C is exercised, the views could instead focus on the number of relevant capacities a being has. A threshold view might specify some number n of the relevant capacities as both necessary and sufficient for FMS. A scalar conception would hold, on the other hand, that a being with n+1 capacities would have a higher moral status than one with merely n capacities.
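
As a toy contrast between the threshold and scalar conceptions quoted above: the capacities listed and the numbers produced below are hypothetical, and nothing in this report depends on this particular formalization.

```python
# Hypothetical formalization of the threshold vs. scalar conceptions quoted above.
# A being is represented simply by the set of relevant capacities it has.

RELEVANT_CAPACITIES = {"sentience", "self-consciousness", "autonomy"}  # hypothetical list

def threshold_moral_status(capacities, n=2):
    """Threshold view (capacity-count variant): full moral status iff the being
    has at least n relevant capacities; otherwise none, in this toy version."""
    return 1.0 if len(capacities & RELEVANT_CAPACITIES) >= n else 0.0

def scalar_moral_status(capacities):
    """Scalar view: degree of moral status rises with the number of relevant capacities."""
    return len(capacities & RELEVANT_CAPACITIES) / len(RELEVANT_CAPACITIES)

being_a = {"sentience"}
being_b = {"sentience", "self-consciousness"}

print(threshold_moral_status(being_a), threshold_moral_status(being_b))  # 0.0 1.0
print(scalar_moral_status(being_a), scalar_moral_status(being_b))        # ~0.33 ~0.67
```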

33.Some authors distinguish “unconscious” from “non-conscious,” but I use these terms interchangeably.

34.The findings in this literature are relatively new and under substantial debate. For reviews, see Sytsma (2014); Goodwin (2015); Jack & Robbins (2012).

35.For other examples of self-amputation behaviors, see Wikipedia’s article on autotomy; Maginnis (2006); Fleming et al. (2007).

36.In this report I use “subjective experience” as a synonym for “phenomenal consciousness.” Some authors use the term “experience” more broadly, though. For example, Carruthers (1992), pp. 170-171 writes:

Suppose that Abbie is driving her car over a route she knows well, her conscious attention wholly abstracted from her surroundings. Perhaps she is thinking deeply about some aspect of her work, or fantasising about her next summer holiday, to the extent of being unaware of what she is doing on the road. Suddenly she ‘comes to,’ returning her attention to the task in hand with a startled realisation that she has not the faintest idea what she has been doing or seeing for some minutes past. Yet there is a clear sense in which she must have been seeing, or she would have crashed the car. Her passenger sitting next to her may correctly report that she had seen a vehicle double-parked by the side of the road, for example, since she deftly steered the car around it. But she was not aware of seeing that obstacle, either at the time or later in memory.

Another example: when washing up dishes I generally put on music to help pass the time. If it is a piece that I love particularly well I may become totally absorbed, ceasing to be conscious of what I am doing at the sink. Yet someone observing me position a glass neatly on the rack to dry between two coffee mugs would correctly say that I must have seen that those mugs were already there, or I should not have placed the glass where I did. Yet I was not aware of seeing those mugs, or of placing the glass between them. At the time I was swept up in the Finale of Schubert’s Arpeggione Sonata, and if asked even a moment later I should not have been able to recall what I had been looking at.

Let us call such experiences non-conscious ones. What does it feel like to be the subject of a non-conscious experience? It feels like nothing. It does not feel like anything to have a non-conscious visual experience as of a vehicle parked at the side of the road, or as of two coffee mugs placed on a draining rack — precisely because to have such an experience is not to be conscious of it. Only conscious experiences have a distinctive phenomenology, a distinctive feel. Non-conscious experiences are ones that may help to control behavior without being felt by the conscious subject.

[These points] are already sufficient to show that it is wrong to identify the question [of] whether a creature has experiences with the question [of] whether there is something it feels like to be that thing. For there is a class — perhaps a large class — of non-conscious experiences that have no phenomenology.

In contrast to Carruthers’ usage of “experience,” I shall in this report only use the phrase “subjective experience” to refer to what Carruthers calls “conscious experiences.”

37.I do not, however, assume (like Block) that “phenomenal consciousness” must be “distinct from any cognitive, intentional, or functional property.” In Weisberg (2011)’s terms, I intend a “moderate” rather than “zealous” reading of the phrase “phenomenal consciousness.”

For a list of meanings attributed to the term “consciousness,” see Vimal (2009).

38.Schwitzgebel is hardly the first to propose such a definition for “consciousness.” As Schwitzgebel notes:

Definition by example is a common approach among recent phenomenal realists. I interpret Searle (1992, p. 83), Block (1995/2007, p. 166–8), and Chalmers (1996, p. 4) as aiming to define phenomenal consciousness by a mix of synonymy and appeal to example…

See also e.g. Shiller (2016):

I intend for the content of the concept [of “qualia”] to be fixed by its prototypical examples. Qualia are whatever kind of mental qualities we associate with experiences of redness, pain, satisfaction, and déjà vu. This approach leaves the veracity of our intuitive assumptions open to investigation.

Note that Schwitzgebel’s additional “wonderfulness” criterion also seems useful in a definition of consciousness. I skip discussing it here merely for brevity.

39.For example, see Chalmers’ “catalog of conscious experiences” on pp. 4-9 of Chalmers (1996), and also Perry (2009).

40.Another contested negative example is this: one might be tempted to say that in a binocular rivalry task (see e.g. Sterzer 2013), the image the subject self-reports as having experienced (at a given time) provides a positive example of consciousness, and the other image that was not consciously experienced despite being processed to some extent by the brain provides a negative example.

Yet another contested negative example is illustrated by an online exchange between Scott Aaronson and Giulio Tononi, discussed in Cerullo (2015). Tononi has developed the Integrated Information Theory (IIT) of consciousness, according to which consciousness is equal to a measure of integrated information denoted Φ (“phi”). Aaronson (2014a), in reply to Tononi’s theory, argued that IIT “unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly ‘conscious’ at all.” To illustrate this, Aaronson defined a particular kind of expander graph that, according to IIT, has enormous amounts of consciousness, despite not doing anything remotely intelligent, nor exhibiting any features we typically think of as “conscious.” Tononi agreed that certain kinds of simple systems could generate arbitrarily large values of Φ, but disagreed with Aaronson that we should credit our intuitions that such mathematical objects cannot be enormously more conscious than humans (Tononi 2014).

Another contested negative example is dreamless sleep, which is often given as a paradigm case of a state during which one has no phenomenal experience. However, this example has recently been contested (Windt et al. 2016).

The “distracted driver” case seems to be another contested example. For me, it is most natural to say the stimuli to which the distracted driver is responding (e.g. to keep the car in the correct lane), but which she has no memory of consciously experiencing and which she cannot report (because her conscious attention was focused on her cell phone call), were not consciously experienced (as far as we know). But Tye (2016) seems instead to count the distracted driver’s processing of stimuli (that she can’t remember or report experiencing) as an example of conscious experience (pp. 14-15):

Take, for example, the visual experiences of the distracted driver as she drives her car down the road. She is concentrating hard on other matters (the phone call she is answering about the overdue rent; the coffee in her right hand, etc.), so her visual experiences are unconscious. But her experiences exist alright. How else does she keep the car on the road?…

…Sometimes when we say that a mental state is conscious, we mean that it is, in itself, an inherently conscious state. At other times, when we say that a mental state is conscious, we have in mind the subject’s attitude toward the state. We mean that the subject of the mental state is conscious of it or conscious that it is occurring. The latter consciousness is a species of what is sometimes called “creature consciousness.” The [view I’ve articulated] has it that, in the first sense, a mental state is conscious (conscious1) if and only if it is an experience. In the latter sense, a mental state is conscious (conscious2) if and only if another conscious1 state, for example, a conscious thought, is directed upon it. [My view] holds that this higher-order conscious state (in being a conscious1 state) is itself an experience, for example, the experience of thinking of the first-order state or thinking that the first-order state is occurring.

The visual experiences of the distracted driver are unconscious in that she is not conscious of them. Nor is she conscious that they are occurring. So she lacks creature consciousness with respect to certain mental states that are themselves conscious1. Being distracted, she is not conscious of what those experiences are like. There is no inconsistency here. The [distracted driver] objection conflates high-order consciousness with first-order consciousness. Experiences are first-order conscious states on which second-order conscious states may or may not be directed.

A similar example is provided by Block (1995):

…suppose you are engaged in intense conversation when suddenly at noon you realize that right outside your window there is — and has been for some time — a deafening pneumatic drill digging up the street. You were aware of the noise all along, but only at noon are you consciously aware of it. That is, you were [phenomenally conscious] of the noise all along…

But here, I am instead inclined to say that, in this hypothetical scenario, I was not phenomenally conscious of the pneumatic drill. Or, perhaps somewhere in my brain there was a phenomenally conscious experience of the pneumatic drill, but I don’t yet have any (introspective) evidence of that.

41.Compare to the situation in physics, where strongly-confirmed theories give us strong reasons to believe in the presence of structures we cannot ever directly observe, such as the presence of particles beyond our particle horizon.

42.Or, if we want to make things more complicated, we could construct a spectrum from clear positive examples to clear negative examples, similar to Baars (1988), Figure 1.1 (p. 12).

43.In particular, as I have already hinted, my Schwitzgebel-inspired definition of consciousness may have some trouble distinguishing “phenomenal consciousness” from “access consciousness” (in roughly the sense of Block 1995), if indeed the two can be distinguished. In any case, I expect our consciousness-related definitions to evolve and become more useful as we learn more.

44.For examples, see the sources relating to the evolution of scientific concepts listed in Appendix Z.6 and this footnote. See also e.g. Wimsatt (2007), especially ch. 6.

45.I borrow this example from McDermott (2001), pp. 25-26:

Suppose one had demanded of Van Loewenhook and his contemporaries that they provide a similar sort of definition for the concept of life and its subconcepts, such as respiration and reproduction. It would have been a complete waste of time, because what Van Loewenhook wanted to know, and what we are now figuring out, is how life works. We know there are borderline cases, such as viruses, but we don’t care exactly where the border lies, because our understanding encompasses both sides. The only progress we have made in defining “life” is to realize that it doesn’t need to be defined. Similarly, what we want to know about minds is how they work.

46.In the context of consciousness, some philosophers use “physicalism” to refer to “identity theory” (Smart 2007), but that isn’t how I use the term.

In brief, I assume physicalism about consciousness for two major reasons.

First, it seems to me that physicalism is supported by much stronger evidence than can be assembled by the human intuitions used to argue against it. To illustrate: which of these do you think has greater evidential justification?

  1. “The universe is a mathematically simple low-level unified causal process with no non-natural elements or attachments.” (This phrasing is from Eliezer Yudkowsky’s “Executable philosophy.”)
  2. My intuitive judgments about anti-physicalist thought experiments involving zombies, “Mary the super-scientist,” etc.

I think the evidence for (1) is overwhelming at this point, and the trust we should put in (2) is pretty weak. For that reason, I’m comfortable betting that the mystery of consciousness, like every mystery ever solved before it, will eventually be resolved (if it is ever resolved) by understanding some set of physical processes better than we do today — and that the poorly-understood stuff we’re trying to point at with words like “consciousness” will turn out to be constituted by some set of physical processes. (Here, I mean “physical processes” in a broad sense that includes, e.g. water conceived of as H2O.)

Second, the assumption of physicalism has been enormously productive in the past, and has generally seemed ever more reasonable as evidence has accumulated about any given phenomenon. On this, see the Appendix of Papineau (2002), Papineau (2009), and just about any history of any science. (For arguments against this fairly standard view, see e.g. section 1.1 of Goff 2017a.)

47.See Chalmers (2003)’s explanation of different types of physicalism/materialism. Note that most physicalists in philosophy appear to be “type B” materialists (see footnote 2 of Yetter-Chappell’s unpublished draft paper “Dissolving Type-B Physicalism” listed here).

Technically, I can see a case for classifying my view as either “type Q materialism” or “type C materialism” (i.e. “physicalism, fingers crossed” as Tye 2000, p. 22 puts it), but if so, then I’m the sort of type Q or type C materialist (about consciousness) for whom the usual pros and cons of type A materialism arise in more-or-less the same form (rather than, say, the usual pros and cons of type B materialism).

Either way, I should clarify that unlike some type A materialists, I don’t think that merely explaining verbal reports and beliefs is all that needs to be explained. Even if external-to-me scientists could explain my beliefs and verbal reports, I would still want to additionally explain why it feels like something to be me. (See also Chalmers 2010, pp. 52-58.) I just think that in the end, explaining certain functions will explain everything there is to explain, and hence I am probably best described as a type A materialist.

48.Here is a fuller explanation of functionalism, from Mandik (2013), p. 110:

Two key ideas that functionalists have appealed to in developing their position are the ideas of a functional kind and of a multiply realizable kind. A kind is a grouping of things or entities, usually grouped in terms of one or more features common to members of the group. Examples of kinds include cats, diamonds, planets, and mousetraps. To illustrate the idea of a multiply realizable kind, let us draw a contrast between diamonds, which are not multiply realizable, and mousetraps, which are. What makes something a diamond? First off, a diamond has to be made out of carbon. Anything superficially resembling a diamond that is not made out of carbon is not a genuine diamond. Crystals of zirconium dioxide superficially resemble diamonds, but are composed of the chemical elements zirconium and oxygen. Further, the carbon atoms that compose diamonds need to be arranged in a certain way (tetrahedral lattices). Carbon atoms not so arranged make up coal and graphite, not diamonds.

Diamonds may be physically realized in only one way — with tetrahedral lattices of carbon atoms. Thus they are not multiply realizable. Contrast this with mousetraps, which are multiply realizable. There are many ways to make a mousetrap. Some involve metal spring-loaded killing bars mounted on wooden platforms. Others involve a strong sticky glue applied to a flat surface on which the mouse gets stuck. There is no particular chemical element that is necessary for making a mousetrap.

Mousetraps help to illustrate not just the idea of multiply realizable kinds, but also the idea of functional kinds. Functional kinds are defined by what they do, and are so named because they are defined by the function they perform. Mousetraps perform the function of restraining or killing mice… As long as a system is able to achieve its defining function, it is largely irrelevant which physical stuff it happens to be realized by.

Mandik continues (pp. 110-111) with another important point about functionalism and consciousness:

Much of the contemporary enthusiasm for functionalism stems from enthusiasm about analogies drawn between minds and computers… Computers are clearly both functional kinds and multiply realizable kinds. What makes something a computer is what it does — it computes…

All sorts of materials can be deployed to construct computers. Computers have been built from transistors and other electronic components. Others have been built from mechanical components such as cams and gears. A computer that plays tic-tac-toe has even been constructed out of Tinkertoys!

…a [computer] program is not identical to the activity of a particular computer. If brains made out of brainy stuff can just as well give rise to a mind as an electronic computer made out of non-brainy stuff, then perhaps [many functionalists suggest] the solution to the mind–body problem is to think of the mind as the software that is running on the hardware of the brain.

I should clarify, however, that even those functionalists who speak of mind (including consciousness) as a type of computation don’t necessarily think a human mind is a “traditional” computer program (e.g. a GOFAI program) running on a brain-implemented Von Neumann architecture, nor do they necessarily think that all relevant information processing occurs at the scale of neurons or larger — see e.g. Edelman (2008), chs. 1-4.

I use computational language regularly in this report because I think it helps to clarify what I mean, but I have not studied the philosophy of computation (Turner 2013) much, and I don’t mean to assume any particular narrow conception of what does and doesn’t count as “computation” or a “computer program.” My assumption for this report is just functionalism, broadly defined.

There are different types of functionalism (about consciousness), of course (Block 1978; Van Gulick 2009; ch. 9 of Prinz 2012; Maley & Piccinini 2013).
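
To put the idea of a multiply realizable functional kind in computational terms, here is a minimal sketch of my own (not Mandik's): membership in the kind depends only on performing the defining function, not on what the member is “made of.”

```python
# Minimal sketch of a functional kind: anything that performs the defining
# function counts as a member of the kind, however it happens to be realized.

from typing import Protocol

class MouseTrap(Protocol):
    """A functional kind, defined by what it does rather than what it is made of."""
    def restrain_mouse(self) -> bool: ...

class SpringTrap:
    """One realization: a spring-loaded killing bar on a wooden platform."""
    def restrain_mouse(self) -> bool:
        return True  # the mechanism's details are irrelevant to kind membership

class GlueTrap:
    """A physically very different realization of the same functional kind."""
    def restrain_mouse(self) -> bool:
        return True

def counts_as_mousetrap(trap: MouseTrap) -> bool:
    # Only the function matters; the "material" (the class internals) does not.
    return trap.restrain_mouse()

print(counts_as_mousetrap(SpringTrap()), counts_as_mousetrap(GlueTrap()))  # True True
```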

49.For a recent review of the many parallels between contemporary neuroscience and machine learning research, see Marblestone et al. (2016).

50.Many arguments for functionalism are in fact arguments against the plausibility of its alternatives (Polger 2017, sec. 4). Two especially famous arguments for functionalism are Chalmers’ “fading qualia” and “dancing qualia” thought experiments, given in ch. 7 of Chalmers (1996). Note that Chalmers himself argues for a nonreductive version of (what he calls) “organizational invariance,” but if one combines organizational invariance with physicalism, the result is the sort of functionalism I endorse.

51.And indeed, careful experiments have taught us much about how this illusion is produced. See e.g. Pessoa & Weerd (2003).

52.Unlike physicalism, functionalism, and illusionism, “fuzziness” is not a standard term.

In this report I refer to animals and other potential moral patients as distinctly identifiable individuals, but this is largely for convenience of communication. In the limit of scientific understanding, I suspect group minds (see e.g. Schwitzgebel 2015; Roelofs 2016; Langland-Hassan 2015, sec. 4; Theiner 2014) and other structures for morally relevant cognitive processing (e.g. perhaps the “utilitronium” of Pearce 2013) will challenge our notions of morally relevant individual identity. (See also Chappell 2010 on valuing individuals vs. parts of individuals.)

53.Similarly, there is no clear dividing line between systems which do and don’t implement various forms of “attention,” “memory,” “self-modeling,” and so on.

54.Intra-individual fuzziness is thus similar to Dennett’s view that it is often indeterminate whether a given cognitive process was “conscious” or not (Dennett 1991, chs 5-6; Dennett 2005, ch. 4).

55.Dennett (1995).

56.I think it’s possible that information processes can “become conscious” and “exit consciousness” in as clear-cut a way as information in a digital computer “enters RAM” and “is cleared from RAM,” but in most cases I doubt there will turn out to be such a clear dividing line between which processes are and aren’t “conscious.”

57.This may not be the case for some uncommon forms of panpsychism, such as Brian Tomasik’s version of panpsychism, which I find it helpful to think of as “panpsychism about consciousness as an uninformative special case of pan-everythingism about everything” (see notes from my conversation with Brian Tomasik).

58.As also mentioned in Appendix Z.5, the 2009 PhilPapers Survey found that among “Target Faculty,” 56.5% of respondents accepted or leaned toward physicalism about the mind, 27.1% of respondents accepted or leaned toward non-physicalism about the mind, and 16.4% of respondents gave an “Other” response.

The PhilPapers Survey did not ask about functionalism directly. But it seems to be widely understood that the vast majority of philosophers of mind, though not all of them, are functionalists. Some example quotes are given below, in chronological order.

Block (1978):

The functionalist approach to the philosophy of mind is increasingly popular; indeed, it may now be dominant (Armstrong, 1968; Block & Fodor, 1972; Field, 1975; Fodor, 1965, 1968a; Grice, 1975; Harman, 1973; Lewis, 1971, 1972; Locke, 1968; Lycan, 1974; Nelson, 1969, 1975; Putnam, 1966, 1967, 1970, 1975a; Pitcher, 1971; Sellars, 1968; Shoemaker, 1975; Smart, 1971; Wiggins, 1975).

Churchland (1988), ch. 2:

As this book is written, functionalism is probably the most widely held theory of mind among philosophers, cognitive psychologists, and artificial intelligence researchers.

Ryder (1996):

Most theories of consciousness are functional theories, as functionalism is something of a “received view” among materialists.

Macphail (1998), p. 213:

The idea that consciousness is a product of functional organization lies at the heart of what is now the most widely held materialist account of the mind-body problem — so widely held that it has been claimed [by Searle (1992), p. 7] that functionalism now constitutes a virtual orthodoxy among psychologists and philosophers.

Gray (2004), p. vii, says “Today’s dominant view is functionalism,” though interestingly Gray defines functionalism as “the doctrine that states of consciousness can be identified with sets of functional (input-output) relationships that hold between a behaving organism and the environment in which it behaves,” which is closer to how I might define behaviorism. In my sense of the term, functionalism need not be defined with respect to input-output relationships that hold between a behaving organism and its environment — see e.g. my comments on consciousness inessentialism.

Kim (2010), ch. 5:

In 1967 Hilary Putnam published a paper… [that] ushered in functionalism, which has since been a highly influential — arguably the dominant — position on the nature of mind.

Mandik (2013), p. 122:

Functionalism is the most popular current position on the mind–body problem…

Reggia (2013):

It is probably the case that the vast majority of individuals investigating the philosophical and scientific basis of consciousness today, including those developing computer models of consciousness, are functionalists…

Heil (2013), p. 87:

These days functionalism dominates the landscape in the philosophy of mind, in cognitive science, and in psychology… When basic tenets of functionalism are put to non-philosophers, the response is, often enough, “Well, that’s obvious, isn’t it?”

Nevertheless, functionalism is debated heavily within philosophy. Those arguments are well-covered elsewhere: see e.g. chapter 6 of Weisberg (2014); Block (2007a); Levin (2013); Tye (2015); Polger & Shapiro (2016).

Note that some of the theories I consider “functionalist” are sometimes called “eliminativist.” See my comments on eliminativism in Appendix Z.6.

59.I’m not aware of surveys indicating how common illusionist approaches are, though Frankish (2016a) remarks that:

The topic of this special issue is the view that phenomenal consciousness (in the philosophers’ sense) is an illusion — a view I call illusionism. This view is not a new one: the first wave of identity theorists favoured it, and it currently has powerful and eloquent defenders, including Daniel Dennett, Nicholas Humphrey, Derk Pereboom, and Georges Rey. However, it is widely regarded as a marginal position, and there is no sustained interdisciplinary research programme devoted to developing, testing, and applying illusionist ideas. I think the time is ripe for such a programme. For a quarter of a century at least, the dominant physicalist approach to consciousness has been a realist one. Phenomenal properties, it is said, are physical, or physically realized, but their physical nature is not revealed to us by the concepts we apply to them in introspection. This strategy is looking tired, however. Its weaknesses are becoming evident…, and some of its leading advocates have now abandoned it. It is doubtful that phenomenal realism can be bought so cheaply, and physicalists may have to accept that it is out of their price range. Perhaps phenomenal concepts don’t simply fail to represent their objects as physical but misrepresent them as phenomenal, and phenomenality is an introspective illusion…

60.Dennett (2016a).

61.Classic sources and contemporary overviews include Chalmers (1996), Carruthers (2000), Frankish (2005), Weisberg (2014), Carruthers & Schier (2017), several chapters of Schneider & Velmans (2017), and several chapters of McLaughlin et al. (2009). On illusionism, see Volume 23, Numbers 11-12 of the Journal of Consciousness Studies.

62.Bechtel & Richardson (1998):

Vitalists hold that living organisms are fundamentally different from non-living entities because they contain some non-physical element or are governed by different principles than are inanimate things. In its simplest form, vitalism holds that living entities contain some fluid, or a distinctive ‘spirit’. In more sophisticated forms, the vital spirit becomes a substance infusing bodies and giving life to them; or vitalism becomes the view that there is a distinctive organization among living things. Vitalist positions can be traced back to antiquity. Aristotle’s explanations of biological phenomena are sometimes thought of as vitalistic, though this is problematic. In the third century BC, the Greek anatomist Galen held that vital spirits are necessary for life. Vitalism is best understood, however, in the context of the emergence of modern science during the sixteenth and seventeenth centuries. Mechanistic explanations of natural phenomena were extended to biological systems by Descartes and his successors. Descartes maintained that animals, and the human body, are ‘automata’, mechanical devices differing from artificial devices only in their degree of complexity. Vitalism developed as a contrast to this mechanistic view. Over the next three centuries, numerous figures opposed the extension of Cartesian mechanism to biology, arguing that matter could not explain movement, perception, development or life. Vitalism has fallen out of favour, though it had advocates even into the twentieth century. The most notable is Hans Driesch (1867-1941), an eminent embryologist, who explained the life of an organism in terms of the presence of an entelechy, a substantial entity controlling organic processes. Likewise, the French philosopher Henri Bergson (1874-1948) posited an élan vital to overcome the resistance of inert matter in the formation of living bodies.

Thagard (2008), slightly reformatted:

Theological explanations of life are found in the creation stories of many cultures, including the Judeo-Christian tradition’s book of Genesis… Other cultures worldwide have different accounts of how one or more deities brought the earth and the living things on it into existence. These stories predate by centuries attempts to understand the world scientifically, which may only have begun with the thought of the Greek philosopher-scientist Thales around 600 B.C. The stories do not attempt to tie theological explanations to details of observations of the nature of life…

Unlike theological explanations, qualitative accounts do not invoke supernatural entities, but instead attempt to explain the world in terms of natural properties. For example, in the 18th century, heat and temperature were explained by the presence in objects of a qualitative element called caloric: the more caloric, the more heat. A mechanical theory of heat as motion of molecules only arose in the 19th century. Just as caloric was invoked as a substance to explain heat, qualitative explanations of life can be given by invoking a special kind of substance that inhabits living things. Aristotle, for example, believed that animals and plants have a principle of life (psuche) that initiates and guides reproductive, metabolic, growth, and other capacities (Grene & Depew, 2004).

In the 19th century, qualitative explanations of life became popular in the form of vitalism, according to which living things contain some distinctive force or fluid or spirit that makes them alive (Bechtel & Richardson, 1998). Scientists and philosophers such as Bichat, Magendie, Liebig, and Bergson postulated that there must be some sort of vital force that enables organisms to develop and maintain themselves. Vitalism developed as an opponent to the materialistic view, originating with the Greek atomists and developed by Descartes and his successors, that living things are like machines in that they can be explained purely in terms of the operation of their parts. Unlike natural theology, vitalism does not explicitly employ divine intervention in its explanation of life, but for vitalists such as Bergson there was no doubt that God was the origin of vital force.

Contrast the theological and vitalist explanation patterns.

Theological explanation pattern: Why does an organism have a given property that makes it alive? Because God designed the organism to have that property.

Vitalist explanation pattern: Why does an organism have a given property that makes it alive? Because the organism contains a vital force that gives it that property.

63.It might also be helpful to consider historical cases of conceptual revision in which it was commonly thought that some view could be ruled out a priori, but in fact that view is now the mainstream scientific view on the topic, e.g. as arguably occurred for the relativity of space and time, or for the idea of individually identifiable particles.

64.Many sources employ a mix of both strategies. Example sources that seem to primarily use an “apply a theory” strategy include Carruthers (1989), Dennett (1995), ch. 8 of Tye (2000), Merker (2005), and Barron & Klein (2016). Example sources that seem to primarily use a “potentially consciousness-indicating features” strategy include ch. 4 of Smith & Boyd (1991), Bateson (1991), Beshkar (2008), Le Neindre et al. (2009); Braithwaite (2010), Varner (2012), Sneddon et al. (2014), Tye (2016), Le Neindre et al. (2017), and perhaps Arrabales (2010). Note that most of my examples of the second kind aim to assess the likelihood of a taxon’s capacity for conscious pain in particular. But of course a capacity for conscious pain presumes a capacity for consciousness.

65.I’m not aware of a poll that asked this question of consciousness researchers, but I provide a few supporting sources below. Obviously this is not sufficient to prove that my impression is true. Instead my aim in this footnote is to point to a few example sources that left me with my current impressions.

Naturally, mysterians agree that no currently proposed theory of consciousness is clearly promising. See e.g. Pinker (2007); McGinn (2004); Rowlands (2001).

Many of those who write about the methodological difficulties of consciousness science also seem to share my general impression, e.g. Irvine (2013), the authors of several chapters in Miller (2015), and the authors of several chapters in Overgaard (2015).

As another example, here is a passage from a review article on recent progress in consciousness science, written by several leaders in the field (Boly et al. 2013):

In order to consolidate the results of many relevant experiments that have been conducted within a single conceptual framework, theories of consciousness must become more precise and generate experimentally testable predictions. Accomplishing this requires both additional conceptual work from theorists and greater knowledge of brain architecture and neural computations relevant to consciousness, in order to guide and constraint theory development… Overall, theoretical developments will help move from simple correlation between neural events and conscious level and content, toward causal and explanatory accounts that show how specific neural mechanisms give rise to specific aspects or dimensions of conscious phenomenology…

Valerie Hardcastle, in chapter 12 of Sinnott-Armstrong (2016), is especially blunt:

I shall begin by stating what I believe to be obvious: We do not know what consciousness is…

…we do not have a good definition for consciousness, we do not know what the relevant psychological attributes of consciousness are, and we have no idea what the neural correlates for consciousness are either. We are not clear on what is sufficient for consciousness, and we at best have an incomplete list of what is necessary. We do not understand the relationship between alertness and awareness, if there is one, nor do we understand the connection between cognitive processing and consciousness, if there is one. At best, we can point to some things that some people believe index some aspects of consciousness. But by the standards of contemporary science and medicine, that is not pointing to very much at all.

See also Katz (2013) and Burkeman (2015).

66.To test this hypothesis, one could enumerate theories of consciousness matching some criteria, and then check the year of “first peer-reviewed defense of the theory” (or similar) for each theory. I have not done this, but my impression is that most consciousness researchers would agree that theories of consciousness have proliferated greatly over the last couple decades.

I’ll quote just one example, from Shevlin (2016), p. 191: “The last two decades have witnessed an explosion in the variety of theories of consciousness…”

67.Björn Merker expressed this point to me, in an August 2016 email, this way (quoted with permission):

Consciousness theory currently labors in what is obviously a pre-paradigmatic stage of development. In this typically protracted prehistory of a science, competing schools in a nascent field find themselves in disagreement over fundamentals.

As described [by Thomas Kuhn], at this stage in the history of a science each competing school builds its system from its own first principles, occasionally metaphysical, in reliance on a rich array of observations and arguments but without criteria for assessing their relative significance either within or across schools. None of the schools is therefore able to take its fundamentals for granted, and each is forced to constantly reiterate a complex system of facts and interpretations, essentially “from scratch.” Argument tends to be interminable when even first principles are in dispute.

Kuhn’s description of the pre-paradigmatic stage of a nascent science applies rather literally to the current state of consciousness theory. It features a disparate array of competing proposals regarding the nature of consciousness, its scope, and genesis, with no agreement on first principles underlying analysis and interpretation. Thus, at one extreme consciousness is seriously proposed to be an intrinsic property of this universe itself on a par with mass, charge and space-time [David Chalmers], and at the other it is construed as a function or product of human language [Euan Macphail]. A field in which such diversity of fundamental commitments regarding its very subject matter can be taken seriously obviously has not yet arrived at the shared paradigm within which normal science, in Kuhn’s sense, proceeds to solve puzzles.

For Kuhn’s account of the “pre-paradigmatic stage of development,” see chapter 2 of Kuhn (2012). (The first edition of Kuhn’s book was published in 1962.)

Or, here is a contemporary summary, from Bird (2011):

Kuhn describes an immature science, in what he sometimes calls its ‘pre-paradigm’ period, as lacking consensus. Competing schools of thought possess differing procedures, theories, even metaphysical presuppositions. Consequently there is little opportunity for collective progress. Even localized progress by a particular school is made difficult, since much intellectual energy is put into arguing over the fundamentals with other schools instead of developing a research tradition. However, progress is not impossible, and one school may make a breakthrough whereby the shared problems of the competing schools are solved in a particularly impressive fashion. This success draws away adherents from the other schools, and a widespread consensus is formed around the new puzzle-solutions.

On the state of consciousness studies, see also e.g. Metzinger (2003), p. 116:

…there is yet no single, unified and paradigmatic theory of consciousness in existence which could serve as an object for constructive criticism and as a backdrop against which new attempts could be formulated. Consciousness research is still in a preparadigmatic stage.

But this assessment is not universal. Bill Faw, in his entry “Consciousness, modern scientific study of” on pp. 182-188 of Bayne et al. (2009), writes:

To use Kuhn’s term, we might think of the period 1980–94 as representing the transition from the pre-paradigm stage of consciousness science to a normal science stage… The second half of the period — from 1994 to 2008 — constitutes what we might think of as an early phase of normal consciousness science.

68.I also found that it was often difficult for me to understand what, exactly, the authors of these theories are claiming, what evidence they think would falsify their theories, which aspects of their theories were intended as claims about consciousness in humans (or primates) rather than as claims about consciousness in general, and which aspects of their theories were intended as claims about scientific explanation as opposed to expressions about which types of processes they intuitively morally value.

69.See Tye (2000), chs. 3-8, especially section 3.4. For some updates to Tye’s theory, see Tye (2009a). For an overview of FOR theories in general, see Weisberg (2014), ch. 7. For a brief introduction to representational theories of mind, see Tye (2009b).

70.For more on current debates about representation in the philosophy of mind, see e.g. Rey (2015).

71.Tye (2000), p. 62.

72.Tye (2000), pp. 62-63.

73.Here is Dennett’s account (Dennett 1994) of a prediction he made, using his theory of consciousness, which had not been tested at the time:

On the last page of Consciousness Explained, I described an experiment with eye-trackers that had not been done and predicted the result. The experiment has since been done, by John Grimes at the Beckmann Institute in Champaign Urbana [Grimes 1996], and the results were much more powerful than I had dared hope. I had inserted lots of safety nets (I was worried about luminance boundaries and the like — an entirely gratuitous worry as it turns out). Grimes showed subjects high-resolution color photographs on a computer screen and told the subjects to study them carefully, since they would be tested on the details. (The subjects were hence highly motivated, like Betsy, to notice, detect, discriminate, or judge whatever it was they were seeing.) They were also told that there might be a change in the pictures while they were studying them (for ten seconds each). If they ever saw (yes, “saw,” the ordinary word) a change, they were to press the button in front of them — even if they could not say (or judge, or discriminate) what the change was. So the subjects were even alerted to be on the lookout for sudden changes. Then when the experiment began, an eye-tracker monitored their eye movements and during a randomly chosen saccade changed some large and obvious feature in each picture. (Some people think I must be saying that this feature was changed, and then changed back, during the saccade. No. The change was accomplished during the saccade, and the picture remained changed thereafter.) Did the subjects press the button, indicating they had seen a change? Usually not; it depended on how large the change was. Grimes, like me, had expected the effect to be rather weak, so he began with minor, discreet changes in the background. Nobody ever pressed the button, so he began getting more and more outrageous. For instance, in a picture of two cowboys sitting on a bench, Grimes exchanged their heads during the saccade and still, most subjects didn’t press the button! In an aerial photograph of a bright blue crater lake, the lake suddenly turned jet black — and half the subjects were oblivious to the change, in spite of the fact that this is a portrait of the lake. (What about the half that did notice the change? They had apparently done what Betsy did when she saw the thimble in the epistemic sense: noted, judged, identified, the lake as blue.)

What does this show? It shows that your brain doesn’t bother keeping a record of what was flitting across your retinas (or your visual cortex), even for the fraction of a second that elapses from one saccade to the next. So little record is kept that if a major change is made during a saccade — during the changing of the guards, you might say — the difference between the scene thereafter and the scene a fraction of a second earlier, though immense, is typically not just unidentifiable; it is undetectable. The earlier information is just about as evanescent as the image on the wall in the camera obscura. Only details that were epistemically seen trigger the alarm when they are subsequently changed. If we follow Dretske’s usage, however, we must nevertheless insist that, for whatever it is worth, the changes in the before and after scenes were not just visible to you; you saw them, though of course you yourself are utterly clueless about what the changes were, or even that there were changes.

Dennett proposes several potential experiments in Appendix B of Consciousness Explained. The final proposed experiment, which Dennett refers to in the passage above, is “the colored checkerboard”:

The colored checkerboard: An experiment designed to show how little is in the “plenum of the visual field.” Subjects are given a task of visual identification or interpretation that requires multiple saccades of a moving scene: they watch animated black-and-white figures shown against the background of a randomly colored checkerboard. The checks are relatively large — for example, the CRT is divided into a 12×18 array of colored squares randomly filled in with different colors. (The colors are randomly chosen so that the pattern has no significance for the visual task superimposed on the background.) There should be luminence differences between the squares, so there is no Liebmann effect, and for each square there should be prepared an isoluminent alternative color: a color which, if switched with the color currently filling the square, would not create radically different luminence boundaries at the edges (this is to keep the luminence-edge detectors quiet). Now suppose that during saccades (as detected by eyetracker) colors in the checkerboard are switched; onlookers would notice one or more squares changing color several times a second. Prediction: There will be conditions under which subjects will be completely oblivious to the fact that large portions of “the background” are being abruptly changed in color. Why? Because the parafoveal visual system is primarily an alarm system, composed of sentries designed to call for saccades when change is noticed; such a system would not bother keeping track of insignificant colors between fixations, and hence would have nothing left over with which to compare the new color. (This depends, of course, on how “fast the film is” in the regions responding to parafoveal color; there may be a sluggish refractory period that will undo the effect I predict.)

I have not consulted the paper by John Grimes to determine whether Dennett’s interpretation of it, in light of his original proposed experiment, is fair.
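
For readers who find the protocol easier to follow as code, the saccade-contingent color swap Dennett describes can be sketched roughly as follows. This is only an illustrative sketch: the grid size matches Dennett's example, but the eye-tracker interface, the velocity threshold, and all function names are hypothetical placeholders of my own, not taken from Dennett, Grimes, or any real eye-tracking library.

```python
# Illustrative sketch only: hypothetical helper names and threshold, not from the
# cited studies or any real eye-tracking API.
import random

GRID_ROWS, GRID_COLS = 12, 18          # Dennett's example checkerboard dimensions
SACCADE_VELOCITY_THRESHOLD = 30.0      # deg/s; hypothetical cutoff for "eye in flight"

def in_saccade(gaze_velocity_deg_per_s: float) -> bool:
    """Crudely classify the current eye-tracker sample as mid-saccade."""
    return gaze_velocity_deg_per_s > SACCADE_VELOCITY_THRESHOLD

def swap_one_square(board, isoluminant_alternative):
    """Replace one randomly chosen square's color with its prepared isoluminant
    alternative, so the swap creates no new luminance edges for edge detectors."""
    r = random.randrange(GRID_ROWS)
    c = random.randrange(GRID_COLS)
    board[r][c] = isoluminant_alternative[r][c]

# Display loop (schematic): each frame, read gaze velocity from the eye tracker;
# if the eye is mid-saccade, swap a square and redraw the checkerboard. The
# prediction is that subjects remain oblivious to swaps made only during saccades.
```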

74.See also Mitchell (2005).

75.For a more detailed discussion of the argument by analogy from my own consciousness to that of other humans, see Tye (2016), ch. 4.

76.Pages 114-115.

77.For example, if I wanted to argue in favor of fish consciousness, I could present a table like this:

POTENTIALLY CONSCIOUSNESS-INDICATING FEATURE | TRUE OF A HUMAN? | TRUE OF A FISH?
1. Forms and uses mental representations | Yes | Yes
2. Associates a current mental state with a memory | Yes | Yes
3. Can process emotions with a certain part of the brain | Yes | Yes
4. Can alter its view of an aversive situation depending on context | Yes | Yes
5. Can consider possible actions and ponder their consequences | Yes | Yes

But if I wanted to nudge you toward thinking that higher primates are conscious and fishes are not, and I knew that you were already inclined to think laptops aren’t conscious, I could instead present the following table:

POTENTIALLY CONSCIOUSNESS-INDICATING FEATURE | TRUE OF A HUMAN? | TRUE OF A CHIMPANZEE? | TRUE OF A FISH? | TRUE OF A LAPTOP?
1. Forms and uses mental representations | Yes | Yes | Yes | Yes
2. Associates a current mental state with a memory | Yes | Yes | Yes | Yes
3. Can process emotions with a certain part of the brain | Yes | Yes | Yes | No
4. Can alter its view of an aversive situation depending on context | Yes | Yes | Yes | Yes
5. Can consider possible actions and ponder their consequences | Yes | Yes | Yes | Yes
6. Has a neocortex | Yes | Yes | No | No
7. Passes the mirror self-recognition test | Yes | Yes | No | No
8. Engages in complex social politics | Yes | Yes | No | No

Note that my first example table is adapted from Braithwaite (2010)’s summary (at the end of chapter 4) of her case for fish consciousness, but it should not be attributed to her, since Braithwaite’s argument is more nuanced than what I’ve put in my example PCIFs table.

Braithwaite summarizes her case for fish consciousness in the paragraph below. I’ve added the numbered PCIFs from my example table to illustrate the similarities:

So pulling the different threads together, fish really do appear to possess key traits associated with consciousness. Their ability to form and use mental representations indicates fish have some degree of access consciousness [PCIF #1]. They can consider a current mental state and associate it with a memory [PCIF #2]. Having an area of the brain specifically associated with processing emotion [PCIF #3] and evidence that they alter their view of an aversive situation depending on context [PCIF #4] suggests that fish have some form of phenomenal consciousness: they are sentient. This leaves monitoring and self consciousness, which I argue is in part what the eel and the grouper are doing: considering their actions and pondering the consequences [PCIF #5]. The grouper is clearly deciding it has no chance to get the prey itself and so swims off to get the eel. The eel is deciding that an easy meal is on offer. On balance then, fish have a capacity for some forms of consciousness, and so I conclude that they therefore have the mental capacity to feel pain. I suspect that what they experience will be different and simpler than the experiences we associate with pain and suffering, but I see no evidence to deny them these abilities, and quite a bit which argues that they will suffer from noxious stimuli.

In the second example table, I compare fishes and laptops according to these PCIFs (plus a few others), but note that Braithwaite explicitly denies that laptops have consciousness (Droege & Braithwaite 2015).
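
To make the cherry-picking mechanic above concrete, here is a toy sketch showing how the two example tables can be generated from a single shared set of PCIF judgments simply by choosing which PCIFs (rows) and taxa (columns) to display. The data layout and values below are my own illustration, not an actual dataset used in this report.

```python
# Toy illustration: one shared set of PCIF judgments, two different "framings"
# produced simply by choosing which rows (PCIFs) and columns (taxa) to display.
JUDGMENTS = {
    "Forms and uses mental representations":                  {"human": "Yes", "chimpanzee": "Yes", "fish": "Yes", "laptop": "Yes"},
    "Associates a current mental state with a memory":        {"human": "Yes", "chimpanzee": "Yes", "fish": "Yes", "laptop": "Yes"},
    "Can process emotions with a certain part of the brain":  {"human": "Yes", "chimpanzee": "Yes", "fish": "Yes", "laptop": "No"},
    "Has a neocortex":                                         {"human": "Yes", "chimpanzee": "Yes", "fish": "No",  "laptop": "No"},
    "Passes the mirror self-recognition test":                 {"human": "Yes", "chimpanzee": "Yes", "fish": "No",  "laptop": "No"},
}

def render(pcifs, taxa):
    """Print a table restricted to the chosen PCIFs (rows) and taxa (columns)."""
    print("PCIF | " + " | ".join(taxa))
    for pcif in pcifs:
        print(pcif + " | " + " | ".join(JUDGMENTS[pcif][t] for t in taxa))

# "Pro-fish" framing: only PCIFs that fish share with humans, and no laptop column.
render(list(JUDGMENTS)[:3], ["human", "fish"])

# "Fish-skeptical" framing: add neocortex/mirror-test rows and a laptop column.
render(list(JUDGMENTS), ["human", "chimpanzee", "fish", "laptop"])
```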

78.Each row in my PCIFs table is meant to report on the status of that row’s PCIF in normally-functioning, adult members of each taxon, except where that designation has no meaning.

Details on why I chose each taxon:

TAXON | WHY INCLUDE THIS TAXON?
Human (Homo sapiens sapiens) | For comparison.
Chimpanzee (Pan troglodytes) | For comparison: it’s the non-human species I’m most confident is conscious, given how closely related it is to humans, and how sophisticated its behavior and cognition appear to be.
Cow (Bos taurus) | I wanted to include a heavily-consumed mammal that seems much less cognitively sophisticated than a chimpanzee.
Chicken (Gallus gallus domesticus) | I wanted to include a bird species, and the chicken is the most heavily-consumed.
Rainbow trout (Oncorhynchus mykiss) | I wanted to include a fish species. Among those fish species that are heavily-consumed, the rainbow trout is one of the most well-studied with respect to PCIFs, especially pain-related PCIFs. I could have chosen a subspecies, but unfortunately many studies of Oncorhynchus mykiss do not specify which subspecies was examined.
Gazami crab (Portunus trituberculatus) | I wanted to include a decapod species. Among decapods, some shrimp and crayfish species might be harvested in greater numbers than any crab species, but crabs have thus far been more thoroughly studied with respect to their likelihood of consciousness, largely due to the work of Robert W. Elwood. The Gazami crab is one of the most heavily-consumed species of crab.
Common fruit fly (Drosophila melanogaster) | I wanted to include an insect species, and the common fruit fly is perhaps the most-studied insect.
E. coli | I wanted to include a single-celled species of bacteria for comparison purposes, and E. coli is among the most-studied species of bacteria.
Function sometimes executed non-consciously in humans? | For comparison; see explanation here.
Adult human enteric nervous system (ENS) | For comparison; I wanted to include a biological sub-system that most people think of as non-conscious.

My explanation for choosing each PCIF in the table is given in the footnote which appears in the first cell of each row of the table, except in cases where the reason for a PCIF’s inclusion seems sufficiently obvious to me (e.g. brain mass), or in some cases where I did not take the time to say anything at all about a PCIF beyond listing it.

79.Damasio (1999), p. 6. Absence seizures involving automatisms are called “complex” absence seizures, and are more common than “simple” absence seizures (i.e. without automatisms). For more on absence automatisms, see e.g. Penry & Dreifuss (1969); Arzimanoglou & Ostrowsky-Coste (2010).

80.LeDoux (2015), ch. 6, makes the point this way:

One strategy used to explore consciousness in animals assumes that if an organism can solve complex problems behaviorally, it has complex mental capacities and therefore mental state consciousness. But this approach conflates cognitive capacities with consciousness, which we’ve seen are not the same. Animals are not, as Descartes characterized them, simple beast machines that only react reflexively to the world. They use internal (cognitive) processing of external events to help them pursue goals, make decisions, and solve problems. But because the human brain can often carry out these same tasks nonconsciously, the mere existence of such cognitive capacities in animals can’t be used as evidence that consciousness was involved.

Similarly, here is Tononi & Koch (2015):

…the lessons learnt from studying the behavioural… and neuronal correlates of consciousness in people must make us cautious about inferring its presence in creatures very different from us, no matter how sophisticated their behaviour and how complicated their brain. Humans can perform complex behaviours—recognizing whether a scene is congruous or incongruous, controlling the size, orientation and strength of how one’s finger should grip an object, doing simple arithmetic, detecting the meaning of words or rapid keyboard typing—in a seemingly non-conscious manner [61–66]. When a bee navigates a maze, does it do so like when we consciously deliberate whether to turn right or left, or rather like when we type on a keyboard?

Dawkins (2015) puts it this way:

There are several reasons [to be cautious about inferences from behavior or physiology to consciousness]. First, we know from our own experience that the three components of human emotion (autonomic/behavioral/cognitive) do not necessarily correlate with each other (Oatley & Jenkins, 1996). Sometimes, for example, strong subjective emotions occur with no obvious autonomic changes, as when someone experiences a rapid switch from excitement to fear on a roller coaster. This does not mean that the change in emotional experience has no physiological basis. It just means that it is probably due to a subtle change in brain state rather than the obvious autonomic changes that are what are usually referred to as physiological (autonomic) measures of emotion…

Second, there is increasing evidence that much more human behavior than we had realized takes place without consciousness at all. Many complex tasks in humans, such as driving a car, playing a musical instrument, or even breathing can be carried out either consciously or unconsciously (Blackmore, 2012; Paul, Harding, & Mendl, 2005; Rolls, 2014; Weiskrantz, 2003). Some human patients with certain sorts of brain damage can successfully reach out and touch objects in front of them but then say they are not conscious of having seen them at all (Weiskrantz, 2003). They are simultaneously blind (as far as their verbal reports go) but also sighted (unconsciously guided reaching). For much of what we humans do there appears to be multiple routes to the same behavior, only some of which reach consciousness (Rolls, 2014). But if the same action (e.g., breathing or touching an object) can occur in humans through either an unconscious or conscious pathway, the argument that if the behavior of another animal is similar to that of a human, that animal must be conscious (der Waal, 2005) is seriously weakened. An animal could be doing the same behavior as a human using his or her unconscious circuits (McPhail, 1998). Unconscious mechanisms explain much more of human behavior than previously thought and may also underlie much animal behavior (Shettleworth, 2010b). Many of the more complex aspects of animal behavior, such as corvid re-caching, that had previously thought to involve awareness can be mimicked by relatively simple computer programs without a theory of mind (van der Vaart, Verbrugge, & Hemelrijk, 2012). In fact, a recent trend in comparative psychology has been away from emphasizing the complexity of animal behavior and toward emphasizing the simplicity of human behavior (Shettleworth, 2010b).

Humans can even have unconscious emotions and changes of emotional state that they are completely unaware of (Morris, Ohman, & Dolan, 1998; Berridge & Winkielman, 2003; Sato & Aoki, 2006). This has important implications for our interpretation of animal emotions, because if we can have unconscious emotions, then the fact that animals behave ‘like us’ says much less about their consciousness or otherwise than we might think (Dawkins, 2001b, 2012).

81.For this column, I distinguish “conscious” and “non-conscious” processes in the normal way they are discussed in the psychological and neuroscientific literature, and thus I temporarily set aside the possibility of “hidden qualia” (see Appendix H). However, in the full analysis, this possibility must be considered: it is possible that many of the cognitive processes normally described by psychologists and neuroscientists as “unconscious” actually instantiate phenomenally conscious experience, but not for the “self” who can report experiences to an external observer.

82.I included “millions of years since last common ancestor with humans” as a PCIF because it is a relatively theory-agnostic measure of “similarity to humans.”

My source for “years since last common ancestor with humans” was the website TimeTree, which compiles estimates from a variety of published sources. Here are the specific pages from which I drew my numbers (in March 2017): chimpanzees, cows, chickens, rainbow trout, gazami crab, common fruit fly, E. coli.

83.My sources of brain mass estimates are: Olkowicz et al. (2016) for humans, Herndon et al. (1999) for chimpanzees, Ballarin et al. (2016) for cows, and Sangiao-Alvarellos et al. (2004) for rainbow trout. For chickens I computed the average brain mass across the 80 domestic chickens (from 8 breeds) summarized in table 1 of Rehkämper et al. (2003).

84.For humans, see Olkowicz et al. (2016). For chickens I just used Olkowicz et al. (2016)’s estimate for the red junglefowl, which is the same species but a different subspecies from the domestic chicken. For the human ENS, I used the midpoint of Furness et al. (2014)’s estimate of “200-600 million neurons.”

As far as I know, no one has yet counted the number of neurons in the brains of chimpanzees, cows, rainbow trout, or gazami crabs. My source for an estimate of neurons in the brain of the common fruit fly is Strausfeld (2012), p. 80.

85.For humans, see Olkowicz et al. (2016). For chickens I just used Olkowicz et al. (2016)’s estimate for the red junglefowl, which is the same species but a different subspecies from the domestic chicken.

As far as I know, no one has yet counted the number of pallial neurons in chimpanzees, cows, and rainbow trout.

86.It’s my impression that encephalization quotient is quickly falling out of favor as an important predictor of higher cognitive capacities — see e.g. Herculano-Houzel (2011, 2016), Deaner et al. (2007), and MacLean et al. (2014). Nevertheless, I include here the numbers collected in table 1 of Roth & Dicke (2005). Where that table lists a range, I used the midpoint of that range.
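
For readers unfamiliar with the measure, an encephalization quotient is the ratio of an animal’s observed brain mass to the brain mass expected for a typical animal of its body mass. Below is a minimal sketch using Jerison’s classic allometric baseline for mammals (expected brain mass roughly 0.12 × body mass^(2/3), both in grams); the constant and exponent differ across authors, and none of these numbers are taken from the table cited above.

```python
# Illustrative sketch of the conventional EQ calculation (Jerison-style baseline).
# The constant 0.12 and exponent 2/3 are one common choice, not the only one.
def encephalization_quotient(brain_mass_g: float, body_mass_g: float,
                             k: float = 0.12, exponent: float = 2 / 3) -> float:
    """EQ = observed brain mass / brain mass expected for the animal's body size."""
    expected_brain_mass_g = k * body_mass_g ** exponent
    return brain_mass_g / expected_brain_mass_g

def range_midpoint(low: float, high: float) -> float:
    """Where a source reports a range of EQ values, take its midpoint."""
    return (low + high) / 2.0

# Rough example with approximate human values (~1,350 g brain, ~65 kg body):
# encephalization_quotient(1350, 65_000) is on the order of 7.
```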

87.Typically, all mammals are considered to have a neocortex (Lui et al. 2011), but there is some terminological debate about whether any non-mammals should be considered to have a “neocortex” (e.g. see Jarvis et al. 2005; Reiner et al. 2004).

For examples of arguments that a neocortex (or some structure performing similar functions) might be required for consciousness, see my later section on that debate.

88.See e.g. Herculano-Houzel (2017).

89.See Appendix D.

90.See Appendix D.

91.By nociceptive reflexes I mean “movement away from noxious stimuli.” In general, see Sneddon et al. (2014). On rainbow trout in particular, see e.g. Chervova et al. (1994).

On nociceptive reflexes without conscious experience, see e.g. Crook & Walters (2011):

Nociceptive reflexes and nociceptive plasticity can occur without conscious, emotional experience because these responses are expressed not only in the simplest animals but also in reduced preparations, such as spinalized animals [Clarke and Harris (2001); Egger (1978)] and snail ganglia [Walters et al. (1983)]. Similarly, in human patients nociceptive reflexes can occur without conscious awareness below a level of complete spinal transection [Finnerup and Jensen (2004)].

92.By “physiological responses” I have in mind Sneddon et al. (2014)’s “one or a combination of the following: change in respiration, heart rate or hormonal levels (e.g. cortisol in some vertebrates).” That paper and Sneddon (2015) are my central sources for the values I put in the cells of this row. For debate about the interpretation of some physiological responses to nociception in fishes, see Rose et al. (2014).

93.This PCIF is ill-defined and I did not investigate it, but see e.g. the discussions in Sneddon et al. (2014).

94.I have not investigated this PCIF, but some potentially relevant sources include Parker (2003); Riley & Freeman (2004); Reilly & Schachtman (2008).

95.My central source for this PCIF is Sneddon et al. (2014).

96.My central source for this PCIF is Sneddon et al. (2014).

97.My central source for this PCIF is Sneddon et al. (2014).

98.One source for this PCIF is Sneddon et al. (2014).

99.I did not investigate this PCIF. For fishes, see Sneddon et al. (2014).

100.My central source for this PCIF is Gerber et al. (2014).

101.This is usually taken to be the most important PCIF, but it is typically thought to be prone to many false negatives: i.e. there are likely systems that are conscious but simply do not have the faculties needed to describe their conscious experiences to human scientists.

I should note that one might argue that some monkeys “report” some “detail” about their conscious experience in binocular rivalry studies, as Bayne (2010), p. 97, mentions:

…the science of consciousness draws on data from creatures whose ability to produce any kind of reports is questionable.

In an influential set of experiments designed to identify the neural correlates of visual consciousness, Logothetis and colleagues examined the neural responses of rhesus monkeys to binocular rivalry… The monkeys were first trained to press bars in response to various images — horizontal and vertical gratings, for example — and then presented with rivalrous stimuli. As expected, their responses closely modelled those of human observers to the same stimuli. The question that concerns us here is not what this research tells us about the neural correlates of visual experience, but what we should say about the monkeys’ button-presses. Logothetis and colleagues describe the monkeys as reporting their mental states, but I would want to resist this interpretation. It seems to me that there is little reason to suppose that the monkeys were producing reports of any kind let alone introspective reports. Arguably, to report that such-and-such is the case one has to conceive of… one’s behaviour as likely to bring about a particular belief in the mind of one’s audience — indeed, as likely to bring this belief about in virtue of the fact that one’s audience appreciates that one’s behavior carries the relevant informational content — and I know of no good reason to believe that the monkeys conceived of their button-presses in these terms.

Does it follow that we have no grounds for thinking that the monkeys were experiencing binocular rivalry? Not at all; in fact, I think the monkeys’ button-presses qualify as very good evidence for the claim that they had rivalrous experiences. However, their button-presses constitute evidence of consciousness not because they were reports of any kind but because they were intentional actions…

102.My sense is that these are not yet well-developed enough to serve as a quantitative PCIF, but they may become useful for that purpose within a decade or two. For reviews, see Burkart et al. (forthcoming); Hernandez-Orallo (2017); Kabadayi et al. (2016); Fernandes et al. (2014). For an argument that there are, in a certain sense, no between-species differences in “intelligence,” see Macphail (1987).

103.Ng (1995) makes the case for plastic behavior as a strong indicator of consciousness, and points to Bunge (1980), p. 45 for a definition of “plasticity”:

The ability of the [central nervous system] to change either its composition or its organization (structure), and consequently some of its functions (activities), even in the presence of a (roughly) constant environment, is called plasticity (cf. Paillard, 1976). Plasticity seems to be characteristic of the associative cerebral cortex from birth to senility, to the point that this system has been characterized as “the organ capable of forming new functional organs”… In psychological terms, plasticity is the ability to learn and unlearn. From a monistic perspective learning is activating neural systems not previously engaged in the task in question, presumably by establishing or reinforcing certain synaptic connections.

104.Rial et al. (2008) propose detour behaviors as a PCIF:

The detour behaviour represents the ability of an animal to reach a goal by moving round an interposed obstacle with temporal loss of sensorial contact… The acquisition of “object constancy” in the human child, i.e., the ability to understand that an object temporally hidden is the same after being retrieved, has received considerable attention… Similarly, the detour behaviour requires the maintenance of a memory of the location of a disappeared object, that is, an internal representation of the environment and the production of a “mental” experiment as the animal should construct a complex motor trajectory in advance to the final behavioural performance… Looking at comparative and phylogenetic studies on the detour behaviour, numerous examples have been described in mammals. In birds, it has been convincingly demonstrated in chickens, quails and in herring gulls, but not in canaries [Vallortigara 2000]…

I did not check whether the detour behavior has been observed in other animals besides humans and chickens.

105.Rial et al. (2008) propose play behaviors as a PCIF:

…play shows several traits indicative of consciousness. Besides of being an onerous activity, play seems to be always pleasant. The only explanation for the play paradox lies in considering that the expenditure of energy must have a wide variation in hedonic value, from rather unpleasant to extremely pleasurable, that is, it shows a wide range of alliesthesia. An animal confronted with the possibility of playing should rank the costs and the benefits of each alternative and its final decision will aim at maximizing pleasure. Therefore, the presence of play should be a sign of consciousness.

Overview sources on animal play include Burghardt (2005); Balcombe (2006), ch. 4; Graham & Burghardt (2010); Held & Špinka (2011).

According to Graham & Burghardt (2010), “play is well-developed in primates, rodents, carnivorans, ungulates, elephants, and cetaceans,” and according to figure 1 has been observed in several other taxa as well, including birds and ray-finned fishes. I have filled in the cells of this row accordingly.

106.I have not investigated this PCIF, but some potentially relevant sources include King (2013); King (2016) and the replies in that issue of Animal Sentience; Preti (2007, 2011).

107.See e.g. Rossano (2003); Helton (2005).

108.For overviews, see e.g. Verschure et al. (2014); Dickinson (2011); Trestman (2012).

Also, it is perhaps worth combating a pervasive anecdote used to suggest that insect behavior is rigid rather than adaptive and (at least sometimes) goal-directed. I refer to what Keijzer (2012) calls “the sphex story”:

The Sphex story is an anecdote about a female digger wasp that at first sight seems to act quite intelligently, but subsequently is shown to be a mere automaton that can be made to repeat herself endlessly. Dennett and Hofstadter made this story well known and widely influential within the cognitive sciences, where it is regularly used as evidence that insect behavior is highly rigid…

Here is the version [of the anecdote] that became a classic of cognitive science…: “When the time comes for egg laying, the wasp Sphex builds a burrow for the purpose and seeks a cricket which she stings in such a way as to paralyze but not kill it. She drags the cricket into the burrow, lays her eggs alongside, closes the burrow, then flies away, never to return. In due course, the eggs hatch and the wasp grubs feed off the paralyzed cricket, which has not decayed, having been kept in the wasp equivalent of a deep freeze. To the human mind, such an elaborately organized and seemingly purposeful routine conveys a convincing flavor of logic and thoughtfulness—until more details are examined. For example, the wasp’s routine is to bring the paralyzed cricket to the burrow, leave it on the threshold, go inside to see that all is well, emerge, and then drag the cricket in. If, while the wasp is inside making her preliminary inspection, the cricket is moved a few inches away, the wasp, on emerging from the burrow, will bring the cricket back to the threshold, but not inside, and will then repeat the preparatory procedure of entering the burrow to see that everything is all right. If again the cricket is removed a few inches while the wasp is inside, once again the wasp will move the cricket up to the threshold and re-enter the burrow for a final check. The wasp never thinks of pulling the cricket straight in. On one occasion this procedure was repeated forty times, always with the same result.” [Wooldridge (1963), pp. 82–83.]

The message is clear and simple. Behavior that seems to be strikingly intelligent is actually the result of a straightforward mechanical setup that involves a strict and rigid sequencing of environmental triggers to regulate the several steps involved. The insect is not at all aware of what it is doing and its internal processes are in this sense very different from the characteristics of human cognition. Hofstadter even coined the term ‘sphexish’ to refer to such an unknowing and mechanical form of “seeming intelligence,” and set it as “totally opposite to what we feel we are all about, particularly when we talk about our own consciousness” (1985, p. 529). Dennett (1984) used this notion to refer to the possibility that we might be sphexish ourselves, only less obviously so, and investigated possible implications for free will. The general idea here is that, if this rigidity of behavior is true for insects as a fundamental property that can be uncovered under the right circumstances, then the same should apply to the more complex but not intrinsically different case of human beings.

…[But] looking at this history, there are several striking features. First and foremost, digger wasps very often do not repeat themselves endlessly when the cricket test is done. After a few trials many wasps take the cricket into their burrow without the visit. Second, in certain cases there are ecological and practical reasons for repeating the visit. Third, the cricket test focuses on an extremely minor component of digger wasp behavior, which has since its discovery been completely swamped by many other findings that provide a very different general picture of the mind of the digger wasp.

…

[One example is] a wonderfully sophisticated and extensive report on the cricket test [that] derives from a five year study done by Jane Brockmann (1985). The cricket test was only one aspect of this study, discussed under the name of “prey-retrieval behavior.” First she discusses six natural reasons why the prey of Sphex ichneumoneus may be missing when the wasp reappears from the nest. Subsequently she describes the results of the cricket test performed systematically on 31 wasps. For each wasp, she used 15 different places for repositioning the prey, positioned at four different distances (2, 4, 6, and 8 cm) from the entrance, spread in four right-angled directions, the 16th position being the place where the wasp left her prey herself. Brockmann placed the prey at each of the 15 non-standard positions in random order, and then finished by placing it in the normal position, from which the wasp always drew it in. Twelve wasps came to the end of the full procedure, repeating the visit fifteen times. Ten wasps drew the prey in from another position, breaking the loop. Of the remainder, five gave up searching for their missing prey, while four did not finish for other reasons. In a retest with fourteen wasps, four wasps remained stuck in their loop, while five broke out of it (Brockmann, 1985, pp. 639–641). In her discussion, where she also takes into account many other findings concerning the provisioning behavior of the great golden digger, Brockmann says: “Although the behavior generally follows one scheme, there are many situations that arise and the wasps behave in an adaptive manner towards each… . The fixity of repeatedly repositioning and re-entering the nest is almost certainly an adaptive response to prey that can easily become lodged in the nest if pulled in backwards.” (1985, p. 651)

And as a final concluding remark: “The adaptable provisioning behavior of Sphex ichneumoneus would be surprising to anyone who viewed insect behavior as stereotyped and fixed. The versatility of individuals extended to all phases of their behavior, from the habitats in which they hunted, to the types of prey captured, to the behavior used in getting the prey into the brood cell. Where responses show stereotypy, such as in repeated prey retrievals, there is an obvious, adaptive explanation. I suspect that long-term studies of known individuals in other species of insects would similarly reveal the same kind of adaptive behavioral versatility.” (Brockmann, 1985, p. 652)

See also e.g. Strausfeld (2012), pp. 307-308, and Mallinson (2016).

109.Descriptions of this test can be found on Wikipedia’s mirror test article, in Anderson & Gallup Jr. (2015), and especially in Gallup Jr. et al. (2011). For an example claim of mirror self-recognition in a robot, see Takeno (2012). For a more recent journalistic overview, see Yong (2017).

Note that body self-recognition and “conceptual” self-recognition might be quite different functions, and thus evidence for the presence of one might not be strong evidence for the presence of the other, as explained by Lieberman (2013), pp. 185-186:

For forty years we have taken mirror self-recognition as a decisive sign of self-awareness in others, but the truth is more complicated. In Cartesian terms, this test focuses on the recognition of our body as our body…

…In an fMRI study, participants were shown adjectives, such as polite and talkative. For some of the trials, participants had to judge whether the adjective described George W. Bush, who was the U.S. president at the time. On other trials, participants had to judge whether the adjectives described themselves. The critical analysis examined whether there were any regions of the brain that were more active when people judged the applicability of an adjective to themselves as opposed to George Bush. There were only two regions of the brain whose activity followed this pattern.

Just as in the mirror self-recognition studies, there was activity in the prefrontal cortex and parietal cortex. But unlike the mirror self-recognition studies, these activations were present in the medial prefrontal cortex (MPFC) and the precuneus — on the midline of the brain where the two hemispheres meet, rather than on the lateral surface of the brain near the skull… In other words, recognizing yourself in the mirror and thinking about yourself conceptually rely on very different neural circuits. Seeing yourself and knowing yourself are two different things…

…this distinction clarifies what the mirror self-recognition test tells us about the animals that can pass it. Chimps, dolphins, and elephants all have some sense of their corporeal identity, that the body they see in the mirror is their body. However, the fMRI data suggests that passing this test does not imply that these animals engage in self-reflection the same way that we do, reflecting on whether we possess a particular personality trait or wondering what will become of us in ten years. It does not imply that these animals reflect on the wisdom of their past decisions. And it certainly does not imply that these animals come to have a conceptual sense of self through introspective contemplation.

Relatedly, Wynne & Udell (2013), pp. 176-177, note:

Is an individual’s performance on a mirror-guided task a good measure of whether that individual is self-aware? Imagine a good friend of yours suffers a stroke. This stroke leaves her intellectually unaffected except in one respect – she can no longer recognize herself in a mirror. You have to help her comb her hair and apply lipstick because she is unable to do these things herself, but other than that, there are no symptoms to her syndrome. Her mental faculties are unaffected, and you have no reason whatever to doubt her sense of self, yet she can no longer pass the traditional mark test described above. While a strict interpretation of mirror self-recognition test performance might result in the conclusion that your friend does not display evidence of being self-aware, this is likely untrue.

In fact, there is a known syndrome, prosopagnosia, which leads to an inability to recognize familiar faces. Severe prosopagnosics are unable to recognize themselves in a mirror or in pictures, but nobody has suggested that they lack a healthy self-concept…

…So there exists a neurological syndrome that can lead to failures on mirror self-recognition without any diminution of self-concept. Conversely, there are also syndromes that can lead to a disrupted self-concept without any effect on the ability to recognize oneself in a mirror. For example, autistic individuals are characterized as severely lacking in the ability to see themselves as others view them and in the ability to put themselves imaginatively into the situation of others. This lack of self-concept is measured in tests of the understanding of other people’s intentions and thoughts. Although autistics’ self-concept can be severely limited, for some the ability to recognize themselves in a mirror is quite normal. Many autistic children can use mirrors to inspect their bodies and to pass the mark test just as typically developing children do (Dawson & McKissick, 1984), although in some autistic individuals who do ultimately show mirror self-recognition, this ability takes longer to develop (Ferrari & Matthews, 1983).

Thus, there are people who are unable to recognize themselves in mirrors but whose self-concept is unaffected (prosopagnosics), and there are other people who are impaired in their self-concept but well able to recognize themselves in mirrors (autistic children). Consequently, the mirror test cannot be considered a foolproof test of an animal’s (or human’s) self-concept. In so far as self-recognition in a mirror demonstrates anything, it shows that an animal has what we might call an ‘own-body’ concept – it is able to differentiate between itself and the rest of the world.

Another caveat about the mirror test is the following (Wynne & Udell 2013, p. 174):

…in controlled studies that have compared the rate at which the ape touches the mark on its forehead with a mirror present with the rate of mark touching when the mirror is absent, the differences in rates with and without a mirror are not as great as the typical summary of this research implies. It is not the case that chimpanzees never touch the dye mark in the absence of the mirror and touch it energetically as soon as the mirror is introduced. In one of the few studies to report the frequency with which chimps touched their dye marks, it was reported that on average chimps touched their marks 2.5 times in 30 minutes in the absence of a mirror and only 3.9 times in 30 minutes with a mirror (Povinelli et al., 1993).

110.E.g. see Cheke & Clayton (2010).

111.For the relation between sleep and phenomenal consciousness in animals, see e.g. the discussion in Allen (2013), pp. 30-32. On the distribution of sleep across the animal kingdom, see Siegel (2008).

112.E.g. see de Waal (2007); Schubert & Masters (1991).

113.My primary source for this PCIF is Smith & Washburn (2005). I concluded that chimpanzees “probably?” exhibit uncertainty monitoring, because uncertainty monitoring has been observed in rhesus monkeys.

114.E.g. see de Waal (1992); Osvath & Karvonen (2012).

115.For example see Loukola et al. (2017).

116.I have not investigated this PCIF, but some potentially relevant sources include Botha & Everaert (2013); Fitch (2010); Anderson (2004).

117.Here, I have in mind the arguments of Bayne (2013) concerning agency as a mark of phenomenal consciousness.

118.E.g. see Miklósi & Soproni (2006).

119.E.g. see Best et al. (2008).

120.There are many kinds of tool use, and it’s unclear which kinds are most indicative of phenomenal consciousness. In the foreword to Shumaker et al. (2011), Gordon M. Burghardt succinctly illustrates the diversity of animal tool use:

Ground squirrels kick sand into the faces of venomous snakes to deter attacks. Ant lions engage in a similar behavior in their sand pits to incapacitate prey. Degus (small rodents) use rakes to access food, an ability shared with many birds and non-human primates. Some mice set out markers to aid in finding their way home. Birds use small food items to bait fish, but crocodiles have turned the tables, using fish to attract birds, which they then attack. New Caledonian crows sometimes travel with a toolkit of proven implements for probing for food (including lizards in crevices). Crabs use all sorts of objects, animate and inanimate, to affix to themselves or to the shells they inhabit, for camouflage against predators. Apes are able to use tools of all kinds in both captivity and the wild. Through observation and practice they crack open nuts, apply herbal medications, open locks and doors, use sticks to stir liquids, saw wood, and even dig with a shovel. In fact, while tools are mostly used in foraging for food, they also are employed in many other contexts, such as to deter predators, facilitate courtship and copulation, mark territories, and intimidate competitors of their own species.

In the book’s Introduction, the authors further illustrate the difficulty of deciding what should and shouldn’t count as “tool use” by listing 53 observed animal behaviors that different definitions classify differently. After surveying the strengths and weaknesses of several proposed definitions, they opt for the following definition of tool use:

Our present definition of tool use is: The external employment of an unattached or manipulable attached environmental object to alter more efficiently the form, position, or condition of another object, another organism, or the user itself, when the user holds and directly manipulates the tool during or prior to use and is responsible for the proper and effective orientation of the tool.

Perhaps more useful is table 1.1, in which the authors describe 26 “modes” of tool use and manufacture. Below are just a few example rows, quoted directly from table 1.1:

NAME OF USE MODE | FUNCTION | COMMENTS
Throw | Create or augment signal value of social display; amplify mechanical force; extend user’s reach | Propel an object through open space. Can be aimed or unaimed. The object is propelled by the user’s own energy.
Prop and Climb, Balance and Climb, Bridge, Reposition | Extend user’s reach by expanding accessible three-dimensional space; bodily comfort | Prop and Climb: Place and stabilize an elongate object vertically or diagonally against another object or surface, and then move up or climb up the object. Distal end of propped object touches the other object or surface. Stable. Balance and Climb: Place an elongate object vertically and then move up or climb up the object. The distal end of the balanced object does not touch another object or surface. Unstable. Bridge: Place an elongate object or organism over water or open space such that each end rests on a surface on opposite sides of the water or spatial gap. User locomotes on the subject. Stable. Reposition: Relocate and climb on an object or organism. Includes rafting (placing a buoyant object on water to support user’s weight).
Symbolize | Abstract or represent reality | Carry, keep, or trade an object that represents another object, another organism, or a psychological state.
Detach | Structural modification of an object or an existing tool by the user or a conspecific so that the object/tool serves, or serves more effectively, as a tool | Remove the eventual tool from a fixed connection to the substrate or another object.
Add, Combine | As above [for ‘Detach’] | Join or connect two or more objects to make one tool that is held or directly manipulated in its entirety during its eventual use.

Most of the rest of the book, then, catalogues published observations of these various modes of tool use, organized by taxa such as “insects,” “crustaceans,” “fish,” “birds,” “rodents,” “cetaceans,” “old world monkeys,” “gibbons,” “chimpanzees,” etc. In table 7.1, the authors organize observed cases of tool use by mode and taxon.

In the end, I decided not to choose one or more modes of tool use for inclusion in my table of PCIFs and taxa, but future creators of similar tables might want to. For example, perhaps “Symbolize” is a particularly consciousness-informative mode of animal tool use.

Another useful recent source on animal tool use is Sanz et al. (2013).

121.What I have in mind is the kind of planning for the future exhibited by western scrub-jays in Raby et al. (2007):

Knowledge of and planning for the future is a complex skill that is considered by many to be uniquely human. We are not born with it; children develop a sense of the future at around the age of two and some planning ability by only the age of four to five. According to the Bischof-Köhler hypothesis, only humans can dissociate themselves from their current motivation and take action for future needs: other animals are incapable of anticipating future needs, and any future-oriented behaviours they exhibit are either fixed action patterns or cued by their current motivational state. The experiments described here test whether a member of the corvid family, the western scrub-jay (Aphelocoma californica), plans for the future. We show that the jays make provision for a future need, both by preferentially caching food in a place in which they have learned that they will be hungry the following morning and by differentially storing a particular food in a place in which that type of food will not be available the next morning. Previous studies have shown that, in accord with the Bischof-Köhler hypothesis, rats and pigeons may solve tasks by encoding the future but only over very short time scales. Although some primates and corvids take actions now that are based on their future consequences, these have not been shown to be selected with reference to future motivational states, or without extensive reinforcement of the anticipatory act. The results described here suggest that the jays can spontaneously plan for tomorrow without reference to their current motivational state, thereby challenging the idea that this is a uniquely human ability.

122.I have not investigated this PCIF, but see section 9 of David DeGrazia’s “Self-awareness in animals,” which is chapter 11 in Lurz (2009).

123.I have not investigated this PCIF, but see e.g. Kaminski (2016).

124.For discussions and debates about the relatively sophisticated behavior controlled by unconscious (or at least unconscious-to-us) processes in humans, see e.g. appendix 3 of Shevlin (2016); Prinz (2015); Bargh & Morsella (2010); Gigerenzer (2007); ch. 15 of Macchi et al. (2016); Hassin (2013); Newell & Shanks (2014); Goodale & Milner (2013); Weiskrantz (2008); Shepherd (2015); Kihlstrom (2013); de Gelder et al. (2002). See also my section on cortex-required views and the appendices it links to.

However, we must be careful not to exaggerate the powers of the unconscious human mind. For example, several results in this area have fared poorly in psychology’s “replication crisis.” (See also Appendix Z.8.)

For an example argument against the relevance of one PCIF (associationist learning) to consciousness, by way of pointing out that it occurs in humans unconsciously, see Macphail (1998), pp. 154-155:

If our unconscious learning system is in fact the human version of a basic associative learning system common to animals and humans, what is implied for animal consciousness?

The major implication is that, since the associative system works efficiently in humans without being subject to conscious monitoring, the fact that associative learning proceeds efficiently in animals can provide no evidence that animals are conscious. In other words, the obvious intelligence of animals does not imply that they are conscious… It is perfectly conceivable, therefore, that the non-verbal thought of animals is not a conscious process.

125.For an example discussion involving learning by the rat spinal cord, see Allen et al. (2009). On the enteric nervous system, see Young (2012), Wood (2011), and Rao & Gershon (2016). On the autonomic nervous system more generally, see Ryder (1996).

126.See e.g. my example extensions to MESH: Hero here, Herzog et al. (2007), and Table 1 of Liu & Schubert (2010).

127.On bacteria, see van Duijn et al. (2006); Bray (2011); Lyon (2015). On plants, see Trewavas (2005); Smith (2016); Gagliano et al. (2016).

Wynne (2004), p. 58, raises this objection to Donald Griffin, one of the earliest prominent scientific proponents of animal consciousness:

…Harvard zoologist Donald Griffin argues… that the communicative system of honeybees is possible evidence for consciousness. Griffin also argues that commentators have overemphasized the inflexibility of mother wasp behavior and that we should not too hastily deny the possibility of consciousness in wasps. But if communication is a true sign of consciousness, and inflexibility of behavior does not disqualify an individual from being considered conscious, should we not consider as equally likely the possibility that the plants that send signals to wasps might be conscious? They communicate — which Griffin views as positive evidence for consciousness — and, on the example of mother wasps, the inflexibility of the rest of the plants’ behavior should not disqualify them from being considered conscious. To me this is a reductio ad absurdum of Griffin’s position. Of course we can’t consider plants conscious. (If you don’t agree that plants can’t be conscious, then your concept of what it means to be conscious is just so different from mine that there is little point our continuing to discuss the matter.) We should not be misled into thinking that every example of complex behavior is proof of consciousness; complex behavior can arise from simple mechanical processes.

128.See Yong (2016).

129.This quote is from Shettleworth (2009), p. 5, which cites Dyer (1994) as an example.

130.Dawkins (2012) gives the following example:

Security cameras are sensitive to movement and respond appropriately by switching on a light, sounding an alarm, or even ringing a police station, but most of us don’t worry too much about whether or not they are conscious, despite our tendency to describe them in anthropomorphic terms (‘Don’t do that or the camera will think you are an intruder’).

We know that what a surveillance camera does is very simple. It can detect movement and it can then respond in a totally automated way to raise the alarm and even to summon the police. We also know that if we looked out of the window and saw a strange man running across the lawn at night brandishing a gun, we would perform a similar task of alerting the police but we would do it in a completely different, conscious way. The end result is the same, but with a different way of getting there. One is the totally unconscious activation of a phone line, the other has the full panoply of conscious recognition of the presence of an intruder, followed by the experience of fear at what he might do, and then the conscious action of telephoning the police and explaining to them rationally what is happening.

This simple example shows why identifying where there is consciousness is so difficult. There is clearly a spectrum of mechanisms for producing a similar outcome that has security cameras at one end and ourselves peering into the night at the other. Where on this spectrum are we to put, say, slugs? Fish? Chimpanzees? Plants? The fact that so many of the attributes of consciousness, such as the ability to respond to stimuli and choose an appropriate action, can be mimicked by relatively simple machines shows that it is not necessary to feel or experience anything in order to have adaptive, appropriate behaviour. A few simple sensors, a bit of programming, and an electrically powered output of a sort we are all familiar with and you can do a lot of routine, everyday behaviour. Consciousness just isn’t necessary.

131.See e.g. ch. 17 of Russell & Norvig (2009).

132.See e.g. Goodfellow et al. (2016).

133.Intuitively, we might rate PCIFs on a “strength of indication” scale from -1 to 1, such that:

  • A score of -1 means that if a system exhibits that PCIF, then the system is definitely not conscious.
  • A score of 0 means that if a system exhibits that PCIF, this doesn’t indicate the presence or absence of consciousness in that system at all.
  • A score of 1 means that if a system exhibits that PCIF, then the system is definitely conscious.

Using this rating system, a true “sufficient” condition of consciousness could be scored as 1, or nearly that high. A property found to be totally irrelevant to whether or not a system is conscious, such as whether the most common English term for it includes the letter g, could be scored as 0. A true necessary condition of consciousness might be scored across a wide range of values, depending on the degree to which it is also a sufficient condition of consciousness. But the inverse of a true necessary condition of consciousness would be scored as -1.

This scoring system would have to be extended to accommodate PCIFs that are scalar rather than binary, such as number of neurons. In such cases, the strength of indication would (perhaps) typically be a monotonic but probably non-linear function: more neurons is always more consciousness-indicating, all else equal, but strength of indication changes more between 100 neurons and 1,000,100 neurons than it does between 100 billion neurons and 100 billion plus one million neurons, even though the difference is one million neurons in both cases.
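
As a toy illustration of such a monotonic-but-saturating mapping for one scalar PCIF (neuron count), consider the sketch below. The logistic form, the log scale, and the parameter values are merely illustrative choices, not something relied on elsewhere in this report; since having more neurons presumably never counts against consciousness, the output stays in the 0-to-1 half of the scale.

```python
# Toy illustration only: a monotonic, saturating map from neuron count to the
# 0..1 portion of the "strength of indication" scale. Parameters are arbitrary.
import math

def strength_of_indication(neuron_count: float,
                           midpoint: float = 1e6,
                           steepness: float = 1.0) -> float:
    """Logistic curve applied to log10(neuron count): monotonic but saturating."""
    if neuron_count <= 0:
        return 0.0
    x = math.log10(neuron_count) - math.log10(midpoint)
    return 1.0 / (1.0 + math.exp(-steepness * x))

# Adding one million neurons moves the score a lot at the low end...
print(strength_of_indication(100), strength_of_indication(1_000_100))
# ...but hardly at all at the high end, even though the absolute difference
# in neuron count is the same one million in both comparisons.
print(strength_of_indication(100e9), strength_of_indication(100e9 + 1e6))
```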

134.See also notes from my conversation with David Chalmers.

135.For example, language of a certain sort is sometimes argued to be a necessary condition for phenomenal consciousness. See, for example, the sources cited in the endnote at the end of this passage from ch. 6 of LeDoux (2015):

As part of our daily lives we use language to label and describe our perceptions, memories, thoughts, beliefs, desires, and feelings. As we’ve seen, this capacity to talk about our inner states makes it relatively easy for us to study human consciousness scientifically. But the contribution of language goes far beyond simply providing a tool for assessing consciousness. Language, Daniel Dennett says, lays down tracks on which thoughts can travel. Many other philosophers of mind and scientists have argued for a strong relation between language and consciousness.

The sources cited in the endnote are: Alanen (2003); Bloom (2000); Bridgeman (1992); Carruthers (1996, 2002); Chafe (1996); Clark (1996); Edelman (1990); Fireman et al. (2003); Jackendoff (2007); Lecours (1998); Macphail (1998, 2000); Ricciardelli (1993); Rosenthal (1990); Searle (2002); Stamenov (1997); Subitzky (2003); Wittgenstein (1953); and also a source that is cited as “Sekhar, A.C. ‘Language and Consciousness.’ Indian Journal of Psychology (1948) 23: 79–84.”

Macphail (1998), in particular, might (in my view) be the strongest published case for (human-like) language as a necessary condition for phenomenal consciousness.

136.For example, Merker (2007):

Few [cognitive scientists] or neuroscientists would today object to the assertion that “cortex is the organ of consciousness.”

And Devor et al. (2014):

…there has never been much doubt among neuroscientists or neurologists that the neural process that constitutes pain perception, as well as other forms of conscious experience, occurs in the cerebral cortex…

The dogma of a cortical seat of pain and consciousness is convenient. First, until recently, the presence of a ‘flat’ EEG (a marker of lost cortical function) was a key criterion for ‘brain death’ and a diagnostic basis for the ethical harvesting of vital organs for transplantation … conventional reasoning in the neurological community arrives at the conclusion that pain experience is absent, and indeed impossible, in the PVS [persistent vegetative state] patient [because] it is assumed that all conscious perception, including pain, resides in the cortex, and in the PVS patient cortical function is absent, [therefore] it follows that the PVS patient is incapable of feeling pain.

Similarly, Mallatt & Feinberg (2016), who argue for an ancient origin of consciousness, say:

Most investigators have for centuries located consciousness in the cerebral cortex. To this day, the dominant paradigm in consciousness studies is that primary consciousness of mapped mental images in mammals comes from the cerebral cortex or from interactions between the cortex and the thalamus, not from the superior colliculus/tectum as Merker claims. We agree with the dominant paradigm for mammals because so many of the neural correlates of mammalian exteroceptive consciousness are in this corticothalamic system (Koch, Massimini, Boly, & Tononi, 2016). Medical neuroimaging and brain-lesion studies strongly support cortical consciousness when the results are interpreted in the most direct and straightforward way: damage to the cortex leads to loss of some sensory consciousness (Boly et al., 2013; Feinberg, 2009). Destruction of the visual, occipital, cortex causes blindness in primates. In his argument for tectal instead of cortical consciousness, Merker (2007) said that these loss phenomena are more complex than they appear, that the cortical damage actually inhibits the conscious role of the superior colliculus, etc. However, his interpretation is less parsimonious and therefore it requires extraordinary counterevidence to be believed. The indirect counterevidence that Merker provided — on the “Sprague effect” (p. 67) and on the cortex projecting to the superior colliculus (p. 76) — does not seem definitive enough to topple the dominant view of cortical consciousness in mammals.

Tye (2016), who likewise argues against CRVs, notes that (pp. 79-80):

The claim that in humans pain and other experiences require a neocortex is widely accepted. For example, the American Academy of Neurology asserts (1989):

Neurologically, being awake but unaware is the result of a functioning brainstem and the total loss of cerebral cortical functioning… Pain and suffering are attributes of consciousness requiring cerebral cortical functioning.

The Medical Task Force on Anencephaly (1990) says much the same thing in connection with congenital cases:

Infants with anencephaly, lacking functioning cerebral cortex are permanently unconscious… The suffering associated with noxious stimuli (pain) is a cerebral interpretation of the stimuli; therefore, infants with anencephaly presumably cannot suffer. (pp. 671-672)

137. For some interesting historical context, see Thompson (1993). Ward (2011) also covers some of the history very briefly:

The search for the neural correlates of consciousness (NCC) has been intense and productive in the past two decades since Crick and Koch (1990) focused attention on this project… Recent work has emphasized the importance of the thalamo-cortical system of the brain in generating conscious awareness. Within this system three possibilities for the critical brain activity most closely associated with consciousness have been proposed: it occurs primarily within the cortex (e.g., Crick & Koch, 2003; Romijn, 2002), it occurs in the entire system of thalamo-cortical loops (e.g., Edelman & Tononi, 2000; John, 2001, 2002; Llinás, Ribary, Contreras, & Pedroarena, 1998), it occurs primarily within the thalamus (e.g., Penfield, 1975)… In this paper I discuss the third possibility… that the neural activity most closely associated with primary consciousness occurs primarily in the thalamus…

The proposal that the thalamus is a particularly important locus in the brain involved in generating consciousness also is not new. As Newman (1995) pointed out, views of the locus of the neural correlate of conscious awareness have oscillated around three foci for many years: the cerebral cortex, the reticular activating system, and the thalamus. Penfield (e.g., 1975, p. 19) was perhaps the most extreme proponent of the subcortical view, asserting that “The indispensable substratum of consciousness lies outside the cerebral cortex, probably in the diencephalon (the higher brainstem).” The thalamus is a major component of the diencephalon (it also includes the epithalamus and the hypothalamus). Penfield’s ideas were founded upon several major evidential bases: (1) the results of stimulating the brain, especially the temporal lobe, with low-intensity direct current electricity; (2) the results of surgery for epilepsy in which various chunks of brain, especially temporal lobe, were removed; and (3) the structural and functional anatomy of the brain that was known in the early 1970s. More recently, several authors have argued that the thalamic reticular nucleus (TRN) plays a role in consciousness by modulating the local 40-Hz oscillations observed in various parts of the brain via its inhibitory inputs to the dorsal thalamic nuclei (Min, 2010; Newman, 1995). Indeed, abolishing inhibitory interactions among the neurons of the TRN dramatically increases absence-epilepsy-like, low-frequency synchronous oscillations in the dorsal thalamic nuclei (Huntsman, Porcello, Homanics, DeLorey, & Huguenard, 1999), indicating that such inhibition might play a major role in preventing the neural hyper-synchrony that characterizes epilepsy and its accompanying state of unconsciousness. Moreover, the TRN has been implicated in producing the unconscious state seen in absence epilepsy, presumably playing the role of strongly inhibiting thalamic neuron activity under the influence of cortical excitation (e.g., Steriade, 2005).

Another vociferous recent proponent of localizing the critical NCC for the state of consciousness in the thalamus has been Joseph Bogen, the surgeon who developed the split brain operation… Bogen argued that the intralaminar nuclei of the thalamus form the essential substrate of the state of phenomenal consciousness. His argument is complex, but critical to it is the fact that there are only two places in the CNS where very small bilateral lesions (those involving less than 1 g of neural tissue) abolish the state of consciousness: in the mesencephalic reticular formation and in the intralaminar nuclei of the thalamus. Moreover, the intralaminar nuclei are connected to much of the rest of the brain through diffuse reciprocal connections, making this a candidate for a central clearinghouse or modulator of cortical and subcortical activity.

Another proponent of a subcortical locus for a substrate of phenomenal awareness is Merker (2007). He updated Penfield and Jasper’s (1954) centrencephalic system proposal and reviewed extensive evidence that the top of the brainstem, and the superior colliculus in particular, forms a system that integrates motivation, the sensory world, and body capabilities to accomplish goal-directed action. His arguments rely on the extensive convergence of inputs from pretty much the entire brain into this region, including to and from parts of the thalamus. Indeed, the thalamus plays an important integrative role in his theory, although the theory does not specify it to be the substrate of consciousness… Among other important facts discussed by Merker (2007) is the remarkable observation that in over 750 operations to cure epilepsy, during which the patient was not anesthetized, Penfield and Jasper (1954) never once observed even an interruption in the continuity, let alone cessation, of a patient’s consciousness as they removed large chunks of cortex, sometimes even an entire hemisphere. Interestingly, in his later book Penfield (1975) identified the diencephalon with the “highest brain mechanism” that is directly responsible for consciousness. The part investigated thoroughly by Merker, on the other hand, was termed by Penfield “the brain’s computer” and was said to be responsible for sensory-motor integration, as updated and extended by Merker (2007). Both mechanisms acting together were thought to be necessary to explain human behavior because the diencephalon has privileged access to frontal and temporal areas of the cortex, whereas the older system just below this area at the roof of the brainstem has privileged access to sensory and motor mechanisms. It is thus likely that the entire upper brain stem plays a crucial role in human behavior…

138. Also as above, I wouldn’t be surprised if many of the studies cited in this section turn out to not “hold up” very well upon closer scrutiny; see Appendix Z.8.

139. One example is Veatch (1975), who identifies conscious experience with the neocortex but doesn’t cite any evidence in support of this connection. Similarly, Bartlett & Youngner (1988) and Rich (1997) repeatedly assert that consciousness requires cortical function, but they don’t argue for, or cite evidential support for, that assertion.

Another example is Green & Wikler (1980), though they do qualify their statement with “arguably”:

…it does not follow from our argument that all humans lacking the substrate of consciousness are dead. Anencephalic infants are lacking at birth the cortical material necessary for the development of cognitive functioning and, arguably, consciousness. Still, due to possession of a functioning brain stem, they may have spontaneous breathing and heartbeat, and a good suck.

Even the more recent DeGrazia (2016) seems to assume a CRV, at least for humans (and again, without argument), e.g. in statements such as “…the cerebrum, the primary vehicle of conscious awareness…” and “in a permanent (irreversible) vegetative state (PVS), while the higher brain is extensively damaged, causing irretrievable loss of consciousness, the brainstem is largely intact…” and “Consider… anencephalic infants, who are born without cerebral hemispheres and never have the capacity for consciousness…”

Note also that “many… court decisions have relied on the presumption that consciousness is permanently lost in the persistent vegetative state, and have assumed that physicians can reliably make this diagnosis [Cranford & Smith (1987); Smith (1988)]” (Truog & Fackler 1992).

Another example is Tononi et al. (2016), which seems to assume that at least human consciousness requires cortical function:

Consciousness depends on the integrity of certain brain regions and the particular content of an experience depends on the activity of neurons in parts of the cerebral cortex. However, despite increasingly refined clinical and experimental studies, a proper understanding of the relationship between consciousness and the brain has yet to be established. For example, it is not known why the cortex supports consciousness when the cerebellum does not, despite having four times as many neurons, or why consciousness fades during deep sleep while the cerebral cortex remains active. There are also many other difficult questions about consciousness. Are patients with a functional island of cortex surrounded by widespread damage conscious, and if so, of what?…

140. Rose (2002):

Extensive evidence demonstrates that our capacity for conscious awareness of our experiences and of our own existence depends on the functions of this expansive, specialized neocortex. This evidence has come from diverse sources such as clinical neuropsychology (Kolb and Whishaw, 1995), neurology (Young et al., 1998; Laureys et al., 1999, 2000a-c), neurosurgery (Kihlstrom et al., 1999), functional brain imaging (Dolan, 2000; Laureys et al., 1999, 2000a-c), electrophysiology (Libet, 1999) and cognitive neuroscience (Guzeldere et al., 2000; Merikle and Daneman, 2000; Preuss, 2000). A strong case has been made that it is mainly those cortical regions that have achieved such massive expansion in humans that are most centrally involved in the production of consciousness (Edelman and Tononi, 2000; Laureys et al., 1999, 2000a-c).

…The evidence that the neocortex is critical for conscious awareness applies to both types of consciousness [“primary” consciousness and “higher-order” consciousness]. Evidence showing that neocortex is the foundation for consciousness also has led to an equally important conclusion: that we are unaware of the perpetual neural activity that is confined to subcortical regions of the central nervous system, including cerebral regions beneath the neocortex as well as the brainstem and spinal cord (Dolan, 2000; Güzeldere et al., 2000; Jouvet, 1969; Kihlstrom et al., 1999; Treede et al., 1999).

…From the clinical perspective, primary consciousness is defined by: (1) sustained awareness of the environment in a way that is appropriate and meaningful, (2) ability to immediately follow commands to perform novel actions, and (3) exhibiting verbal or nonverbal communication indicating awareness of the ongoing interaction… Thus, reflexive or other stereotyped responses to sensory stimuli are excluded by this definition. Primary consciousness appears to depend greatly on the functional integrity of several cortical regions of the cerebral hemispheres especially the “association areas” of the frontal, temporal, and parietal lobes (Laureys et al., 1999, 2000a-c). Primary consciousness also requires the operation of subcortical support systems such as the brainstem reticular formation and the thalamus that enable a working condition of the cortex. However, in the absence of cortical operations, activity limited to these subcortical systems cannot generate consciousness (Kandel et al., 2000; Laureys et al., 1999, 2000a; Young et al., 1998). Wakefulness is not evidence of consciousness because it can exist in situations where consciousness is absent (Laureys et al., 2000a-c). Dysfunction of the more lateral or posterior cortical regions does not eliminate primary consciousness unless this dysfunction is very anatomically extensive (Young et al., 1998).

…Diverse, converging lines of evidence have shown that consciousness is a product of an activated state in a broad, distributed expanse of neocortex. Most critical are regions of “association” or homotypical cortex (Laureys et al., 1999, 2000a-c; Mountcastle, 1998), which are not specialized for sensory or motor function and which comprise the vast majority of human neocortex. In fact, activity confined to regions of sensory (heterotypical) cortex is inadequate for consciousness (Koch and Crick, 2000; Lamme and Roelfsema, 2000; Laureys et al., 2000a,b; Libet, 1997; Rees et al., 2000).

About a decade later, Rose et al. (2014) added:

The neural basis of consciousness was reviewed and applied to the problem of fish pain by Rose (2002)… Subsequent research has further substantiated and refined the fundamental principles identified earlier, that, the existence of all the previously described forms of consciousness [primary consciousness and higher-order consciousness] depends on neocortex, particularly frontoparietal ‘association’ cortex in distinction from primary or secondary sensory or motor cortex (Laureys and Boly 2007; Amting et al. 2010; Vanhaudenhuyse et al. 2012). Primary consciousness also requires supporting operation of subcortical systems including (i) the brainstem reticular formation to enable a working condition of the cortex and (ii) interactions between the cortex and thalamus as well as cortex and basal ganglia structures (Edelman and Tononi 2000; Laureys et al. 1999, 2000a,b,c)… Human neocortex, the six-layered cortex that is unique to mammals, has specialized functional regions of sensory and motor processing, but activity confined to these regions is insufficient for consciousness (Koch and Crick 2000; Lamme and Roelfsma 2000; Laureys et al. 2000a,b; Rees et al. 2000). Although neocortex is usually identified as the critical substrate for consciousness, a critical role for some regions of mesocortex, particularly the cingulate gyrus, is well established. Mesocortical structures have fewer than six layers, but like neocortex, are unique to mammalian brains and highly interconnected with neocortex. The cingulate gyrus, in concert with neocortex, is particularly important for conscious awareness of the emotional aspect of pain (Vogt et al. 2003), other dimensions of emotional feelings (Amting et al. 2010) and self-awareness (Vanhaudenhuyse et al. 2012).

Building on these earlier articles by Rose, Key (2015) argues:

What is so unique about the cortex that enables inner mental states? First, the cortex is parcellated into discrete anatomically structures or cortical areas that process information related to specific functions. It is estimated that there are about 200 cortical areas in humans (Kaas 2012). For instance, the cortical visual system consists of over a dozen distinct regions with diverse subfunctions that are strongly interconnected by reciprocal axon pathways. One of the defining features of these subregions is that they become simultaneously active. Both recurrent activity and binding of neural activity across cortical regions are believed to be essential prerequisites for the subjective experience of vision (Sillito et al. 2006; Pollen 2011; Koivisto and Silvanto 2012). It has been shown that when neural processing of recurrent signalling from higher cortical regions entering the V1 visual cortex is perturbed by transcranial magnetic stimulation, the subjective awareness of a visual stimulus is disrupted (Koivisto et al. 2010, 2011; Jacobs et al. 2012; Railo and Koivisto 2012; Avanzini et al. 2013).

The subregionalisation of the neocortex also allows the formation of spatial maps of the sensory world, such as those associated with the representations of the surface of the body or the visual field. These topographical maps are important for the multiscale processing of sensory information (Kaas 1997; Thivierge and Marcus 2007). Variation in the size of the maps alters the sensitivity of responses to stimuli while spatial segregation of neurons responding to selective parts of a stimulus allows for finer perceptual discrimination. Painful and non-painful somatosensory stimuli are topographically mapped to overlying regions in the primary somatosensory cortex (SI) in humans (Mancini et al. 2012). These results are consistent with the known point-to-point topography from the body surface to SI (called somatotopy) that underlies spatial acuity. However, by using high resolution mapping in the squirrel monkey SI (sub-millimetre level) it was revealed that there were slight differences in the localisation of different somatosensory modalities (Chen et al. 2001). This slight physical separation of cortical neurons responding to different peripheral stimuli suggests that differences in the subjective quality of somatosensory sensations may arise as early as in SI. Somatotopic maps for painful stimuli are also present in the human SII and insular cortices (Baumgartner et al. 2010). Interestingly, different qualities of painful stimuli (such as heat and pinprick) are more distinctly mapped topographically to different regions of SII and the insular cortex than in SI. Similarly, painful and non-painful stimuli are mapped to separate regions in human SII (Torquati et al. 2005). This separation of cortical processing of heat and tactile stimuli within different cortical areas has also been observed in non-human primates (Chen et al. 2011). These multiple neural maps suggests that SII and the insular cortex play important roles in discriminating differences in the subjective quality of somatosensory stimuli, particularly painful from non-painful (Tommerdahl et al. 1996; Baumgartner et al. 2010; Chen et al. 2011; Mazzola et al. 2012). This idea is supported by evidence from direct electrical stimulation of discrete areas in the human insular cortex (Afif et al. 2010).

Second, the cortex is a laminated structure that enables the efficient processing and integration of different types of neural information by unique subpopulations of neurons (Schubert et al. 2007; Maier et al. 2010; Larkum 2013). Lamination appears to facilitate complex wiring patterns during development. If two populations of neurons were randomly distributed within a specific brain region and incoming axons were required to synapse with only one subpopulation, then those axons would need to rely on stochastic and hence error-prone searching to complete wiring. On the other hand, when similar neurons are partitioned together in a single lamina then a small set of molecular cues is able to guide axons with high precision to their appropriate post-synaptic target. Two principal afferent inputs (from the neocortex itself, and the thalamus) enter the neocortex and separately innervate distinct layers (Nieuwenhuys 1994). The main thalamic fibres terminate densely in layer IV (called the granular layer) while the neocortical fibres innervate different pyramidal neurons in layers I–III (supragranular layers) (Opris 2013). By selectively ablating Pax6, a developmentally significant patterning gene, in the cortex of mice it is possible to disrupt the laminar organisation of this structure (Tuoc et al. 2009). This altered cortical layering causes neurological deficits that are similar to those observed in humans with Pax6 haploinsufficiency (Tuoc et al. 2009) and provides strong experimental evidence of the importance of lamination to cortical function. A number of human brain disorders involve defects in cortical lamination that are detrimental to brain function (Guerrini et al. 2008; Guerrini and Parrini 2010; Bozzi et al. 2012).

Third, lamination facilitates the economical establishment of microcircuitry between neurons processing different properties of the stimulus. A vertical canonical microcircuit is established which leads to the emergence of functionally interconnected columns and minicolumns of neurons (Mountcastle 1997). For example, a hexagonal column in the primate somatosensory cortex is about 400 µm in width and contains populations of neurons that respond to the same stimulus (e.g. light touch or joint stimulation) arising from a specific topographical zone of the body. Columns can be associated with processing information related to a specific function (e.g. “visual tracking” and “arm reach” columns in the parietal cortex; Kass 2012). Each column itself consists of minicolumns (80–100 neurons) that are ~30–50 µm in diameter and interconnected by short-range horizontal processes (Buxhoeveden and Casanova 2002). While columns are most clearly distinguished in the sensory and motor cortices of primates, minicolumns appear to be ubiquitous in all animals with a neocortex (Kaas 2012). Minicolumns have a small receptive field within the larger receptive field of the column. The correlated activity in the fine-scale networks of minicolumns produces concentrated bursts of neural activity that may enable the cortex to transmit signals in the face of background noise (Ohiorhenuan et al. 2010). The function of the cortex seems to depend on the ability of canonical circuitry within the minicolumns to rapidly switch from feedforward to feedback processing between layers. During learned tasks in responses to cues in the awake monkey, information flows from layer 4 to layer 2/3 and then down to layer 5 in a feedforward loop in the temporal neocortex (Takeuchi et al. 2011; Bastos et al. 2012). This is followed shortly afterwards by a feedback loop from layer 5 to layer 2/3. Correlated firing of layer 2/3 and layer 5 neurons in minicolumns occurs during decision making in the monkey prefrontal cortex, an area responsible for executive control in primates (Opris et al. 2012). The accuracy of error-prone tasks was increased when layer 5 neurons were artificially stimulated by activity recorded during successful task execution. These results provide evidence for the role of the minicolumn as the fundamental processing unit of the neocortex associated with higher order behaviour (Bastos et al. 2012; Opris et al. 2012).

In summary, the unique morphology of the mammalian cortex facilitates multiscale processing of sensory information. Initially there is course scaling at the level of gross anatomical cortical regions specialising, for example, in processing of visual or somatosensory information. Some of these regions are then topographically mapped in order to preserve spatial relationships and facilitate selective processing of specific sensory features. Importantly, to preserve the holistic quality of a sensory stimulus, these subregions are strongly interconnected via axon pathways that create synchronized re-entrant loops of neural activity. Cortical regions are laminated which supports finer scale sensitivity in the processing of specific features. Finally, canonical microcircuits (minicolumns) bridge across layers to enhance signal contrast (Casanova 2010). Local connectivity between minicolumns enables the lowest level of stimulus binding that contributes to the holistic nature of the stimulus (Buxhoeveden and Casanova 2002).

I propose that only animals possessing the above neuroanatomical features (i.e. discrete cortical sensory regions, topographical maps, multiple cortical layers, columns/minicolumns and strong local and long-range interconnections), or their functionally analogous counterparts, have the necessary morphological prerequisites for experiencing subjective inner mental states such as pain.

For much more on this, see Key (2016) and the many replies to it in the same issue of Animal Sentience. See also the brief article “An Argument in Defense of Fishing” by Michael LaChat on pp. 20-21 in volume 21, issue 7 (1996) of Fisheries.

An earlier defense of a CRV (at least, for the human case) was mounted by Puccetti (1998):

If… the neocortical surface is itself selectively destroyed… that is sufficient to obliterate all conscious functions.

The midbrain is of course just the top of the brain stem, where the superior and inferior colliculi trigger orientating reflexes related, respectively, to sources of visual and auditory stimuli: such reflexive responses do not require conscious mediation, as we all know from finding ourselves turned towards an abrupt movement in the peripheral visual field, or in the direction of a sudden sound, before such stimuli register in consciousness. If a brain structure does its job unconsciously, then there is no reason to think its integrity in a comatose patient is evidence of residual conscious functions. Similarly with the cerebellum, which pre-orchestrates complex bodily movements, and under therapeutic electrode stimulation does not yield clear sensations [3]. The cerebellum probably also stores learned subroutines of behavior, like swimming or typing: precisely the kinds of things you do better when not concentrating on them.

…[Douglas N. Walton’s] statement [that “the pupillary reflex could, for all we know, indicate some presence of feeling or sensation even if the higher cognitive faculties are absent”]… reeks of superstition. As we all know, when the doctor flashes his penlight on the eye, we do not feel the pupil contract, then expand again when he turns the light off. If not, then why in the world does Walton suppose that a deeply comatose patient feels anything in the same testing situation? The whole point of evolving reflexes like this, especially in large brained animals that do little peripheral but lots of central information processing, is to shunt quick-response mechanisms away from the cerebrum so that the animal can make appropriate initial responses to stimuli before registering them consciously. If one could keep an excised human eye alive in vitro and provoke the pupillary reflex, the way slices of rat hippocampus have been stimulated to threshold for neuronal excitation, would Walton argue that the isolated eye might feel something as its pupil contracts?

…One thing I feel reasonably confident in stating is that sensations are not experienced without recruitment of populations of neurons in the grey matter on the cerebral cortical surface. And it is easy to see why this is so: the phylogenetic novelty of neocortex is due to brain expansion in primates beginning about 50 million years ago to accommodate increasing intelligence, for where else could new cell layers appear but on the outer surface of the brain [9]? That being the case, sensation migrated there as well, and although deeper structures certainly contribute complexly to the sentient input, this is not transduced as sensation until, at a minimum, some 10^4 neurons are provoked to discharge on the surface of at least one cerebral hemisphere at the same time [16]. It is also plain why the contribution of subcortical mechanisms to this input does not itself implicate conscious perception. If it did, we would have sensations in seriatum: a baseball leaving the pitcher’s hand would be seen as arriving by the hitter several times in succession as neural impulses course from retina to optic chiasm to geniculate body through the optic radiation to primary visual cortex in the occipital lobe. From an evolutionary viewpoint, that would be a recipe for disaster.

…What Walton is doing is confusing the normally necessary contribution of subcortical mechanisms to sensation with the sufficient condition of neocortical functions. In the case of the primary visual system in man this is indisputable: destruction of Brodmann’s area 17 alone, say by shrapnel wounds, brings permanent total blindness [7]; whereas a peripherally blind person with intact visual cortex can be induced to experience visual sensations by direct electrode stimulation of that grey matter [2].

…

[Another of Walton’s points] alludes to findings by Lober (reported in [12]), that some people recovered from infantile hydrocephaly, thus growing up with severely reduced cerebral hemispheres, can nevertheless function well: an example being that of a university student, IQ 126, who gained first class honors in mathematics. This Walton takes to be evidence that the neocortex is neither the sole seat of consciousness nor, perhaps, crucial to the return of conscious functions.

One wants to scream aloud a commonplace of clinical psychopathology: When neural plasticity enters the picture, all bets are off! The neural plasticity of the infant brain allows a lot less than the normal quantity of grey matter to take over a wide range of functions that are usually diffused in greater brain space. This is strikingly and uncontroversially demonstrated in complete hemispherectomy for infantile hemiplegia, where control of the whole body (except for distal finger movements in the arm contralateral to the missing half brain) is found in adulthood [1]. Furthermore, as Epstein has said (quoted in [12]), hydrocephalus is principally a disease of the white matter of the brain (the cerebral ventricles, swelled by overproduction of cerebrospinal fluid, disrupt the axons of association fibers around them). It is precisely the sparing of nerve cells in the grey matter, even in severe cases of hydrocephalus, that explains the retention of conscious functions and high-performance IQs.

Tye (2016) argues against CRVs, but he presents some of the case for CRVs this way (pp. 78-79):

In humans, in standard cases, the sensory aspect of pain is generated by activity in the primary and secondary somatosensory cortices of the parietal lobe (SI and SII). The unpleasantness of pain — what we might call its “felt badness” — is closely tied to activity in the anterior cingulate cortex (ACC). Human functional imaging studies show that there is a significant correlation between the felt badness of pain and ACC activation… Further, when subjects feel pain from a stimulus that is held constant and a hypnotic suggestion is used to increase or decrease subjective unpleasantness, a correlation is found in regional cerebral blood flow in ACC but not in the primary somatosensory cortex… Also, patients with ACC lesions say that their pain is “less bothersome.”

If regions of SI and SII are damaged but not ACC, what results is an unpleasant sensation that is not pain. For example, a laser was used to deliver a thermal stimulus to a fifty-seven-year-old man with most of his SI damaged as a result of a stroke… When the stimulus significantly above normal threshold on his right hand was delivered to his left hand, the man reported an unpleasant sensation, but he denied that it was a case of pain. [Tye’s footnote here says: “Asked to classify his sensation from a list of terms that included ‘hot,’ ‘burning,’ and ‘pain,’ the patient picked none.”]

It appears, then, that the painfulness of pain in humans is based on activity in two different neural regions: the somatosensory cortex (comprised of SI and SII) and ACC.

Some animals, such as fish, lack a neocortex. So they lack these regions. This neurophysiological difference, it might be said, makes a crucial difference. A related thought is that the causal story for animals lacking a neocortex that lies behind this behavior… cannot be reconciled with the story for humans. So we aren’t entitled to infer a common cause [i.e. consciousness], even given common behavior. The neurophysiological difference between the nonhuman animals and humans defeats the explanation of behavior that [appeals] to pain.

Baars et al. (2003) is focused on the human case, and reviews several lines of evidence suggesting that frontoparietal association cortex could be especially critical for consciousness. Their summary reads:

…several lines of evidence suggest that [frontoparietal association areas] could have a special relationship with consciousness, even though they do not support the contents of sensory experience. (i) Conscious stimulation in the waking state leads to frontoparietal activation, but unconscious input does not; (ii) in unconscious states, sensory stimulation activates only sensory cortex, but not frontoparietal regions; (iii) the conscious resting state shows high frontoparietal metabolism compared with outward-directed cognitive tasks; and (iv) four causally very different unconscious states show marked metabolic decrements in the same areas.

141. See notes from my conversation with James Rose.

142. For example, Freud et al. (2016), despite being critics of the view, write:

The cortical visual system is almost universally thought to be segregated into two anatomically and functionally distinct pathways: a ventral occipitotemporal pathway that subserves object perception, and a dorsal occipitoparietal pathway that subserves object localization and visually guided action.

Similarly, another paper critical of the view, Cardoso-Leite & Gorea (2010), describe Goodale & Milner’s “two streams” account as “the most widespread account” of “how a physical stimulus can lead to a motor response, with or without an accompanying conscious experience.”

143. This finding directly contradicts our intuitive “assumption of experience-based control” (Clark 2001).

144. Some additional visual information travels not through the LGN but instead to the superior colliculus, then to the pulvinar, and then joins the dorsal stream of visual processing in the posterior parietal cortex. See Snowden et al. (2012), ch. 11. In addition to these two pathways, there are several other retinal projections that are less well-studied (see Milner & Goodale 2006, section 1.1).

145. Milner & Goodale (2006), p. 67:

…both the dorsal and ventral streams diverge anatomically from the primary visual cortex, but, as we noted in Chapter 2, the dorsal stream also has substantial inputs from several subcortical visual structures in addition to the input from V1… In contrast, the ventral stream appears to depend on V1 almost entirely for its visual inputs.

146. This quote is from Goodale & Milner (2013), ch. 9.

Campbell (2002), pp. 55-56, offers a different analogy, that of a heat-seeking missile:

…conscious attention is what defines the target of processing for the visuomotor system, and thereby ensures that the object you intend to act on is the very same as the object with which the visuomotor system becomes engaged… This whole procedure may work even though your experience of the location of the object is not particularly accurate. If you see a penny in a mirror without realizing that you are seeing it in a mirror, you may use its apparent location in verifying that it is brown, and [in grasping] your hand correctly to pick it up, even though your experience of its location is not actually correct. There is an obvious analogy with the behaviour of a heat-seeking missile. Once the thing is launched, it sets the parameters for action on its target in its own way; but to have it reach the target you want, you have to have it roughly pointed in the right direction before it begins, so that it has actually locked on to the intended object.

147. Of course, vision neuroscience could still support some kind of CRV even if this “two streams” account turns out to be false. For example, even if the “two streams” account is wrong, it could still be the case that all subcortical visual processing is unconscious. E.g., the pupillary light reflex is controlled via subcortical visual processing, and we are not consciously aware of our own pupil dilation (Dragoi 2016).

Also, there are several other lines of evidence (besides those reviewed in Appendix C) that could be used to argue for unconscious vision of various kinds, supported by different parts of the brain. For example see the literatures on backward masking, binocular rivalry, and blindsight, which I don’t review here. On backward masking, see Breitmeyer & Ogmen (2006). On binocular rivalry, see Miller (2015). On blindsight, see Cowey (2004).

148. Goodale & Milner (2013), ch. 7, summarize some of this evidence briefly:

The somatosensory (touch) system seems to have an organization remarkably similar to that of the visual system, with two distinct representations of the body existing in the brain, one for the guidance of action and the other for perception and memory. This model, proposed by [Dijkerman & De Haan (2007)], is supported not only by neuroscience evidence, but by the fact that there are somatosensory illusions that fool our perception without fooling actions based on the same form of bodily sensation. Dijkerman and De Haan call the perceptual representation of the body (which is vulnerable to illusions) our “body image” and the metrical representation that guides action our “body schema.”

One illusion to which the body image is prone is the so-called “rubber hand illusion.” This illusion is induced by having a person rest one of their arms on a table, but hidden from sight, with an artificial arm made of rubber lying alongside it. The experimenter proceeds to stroke the person’s arm while simultaneously stroking the artificial arm in full view. The result is that the person gains a strong impression that the stroking sensations are located not where their real arm really is, but as if shifted in space to where the artificial arm is lying. Yet when tested in a reaching task with the affected arm, they do not make erroneously long or short movements based on where they wrongly sense the starting point of their arm to be. Instead, their actions are guided on the basis of the true location of the arm, independently of their perceptions—presumably on the basis of the veridical body schema.

…There are even reports of brain-damaged individuals who have a somatosensory equivalent of blindsight — Yves Rossetti calls this phenomenon “numbsense.” Such patients are able to locate a touch on an arm which has completely lost the conscious sense of touch, by pointing with the other arm while blindfolded. Rossetti reports that his patient was amazed that she could do this. Even more dramatically…, Rossetti found that this numbsense ability evaporated to chance guessing when the patient was asked to delay two to three seconds before making her pointing response. It may be then that while the body schema may survive brain damage that disables the body image, it is constantly reinventing itself, with each reinvention having only a transient lifetime before being lost or replaced. Just like the dorsal visual stream.

…There is recent evidence to suggest that the auditory system too may be divided into perception and action pathways. Steve Lomber… has recently shown [Lomber & Malhotra (2008)] that when one region of auditory cortex in the cat is temporarily inactivated by local cooling, the cat has no problem turning its head and body toward the sounds but cannot recognize differences in the patterns of those sounds, whereas when another quite separate area is cooled the cat can tell the sound patterns apart but can no longer turn towards them. These findings in the auditory system — and the work discussed earlier on the organization of the somatosensory system — suggest that a division of labor between perceiving objects and acting on them could be a ubiquitous feature of sensory systems in the mammalian cerebral cortex.

149. E.g. see Kihlstrom (2013).

150. Laureys (2005):

Voxel-based statistical analyses have sought to identify regions showing metabolic dysfunction in the vegetative state as compared with the conscious resting state in healthy controls. These studies have identified a metabolic dysfunction, not in one brain region but in a wide frontoparietal network encompassing the polymodal associative cortices…

…Awareness seems not to be exclusively related to activity in the frontoparietal network, but equally important is the relation of awareness to the functional connectivity within this network, and with the thalami. ‘Functional disconnections’ in long-range cortico–cortical (between latero-frontal and midline-posterior areas) and cortico–thalamo–cortical (between non-specific thalamic nuclei and lateral and medial frontal cortices) pathways have been identified in the vegetative state [6,9]. Moreover, recovery is accompanied by a functional restoration of the frontoparietal network [7] and some of its cortico–thalamo–cortical connections [9]. In addition to measuring resting brain function and connectivity, recent neuroimaging studies have identified brain areas that still show activation during external stimulation in vegetative patients.

151. See e.g. pp. 421-425 of Tononi et al. (2015a).

152. See e.g. pp. 425-426 of Tononi et al. (2015a).

153. One might wonder why Merker does not discuss cases of a related condition, hydrocephalus, such as the famous patient described by John Lorber as “a young student… who has an IQ of 126 [and who] gained a first-class honors degree in mathematics” and yet who “has virtually no brain” (Lewin 1980). See also Jackson & Lorber (1984)’s report of one patient with a brain volume 56% of normal, yet possessing “a first class degree in mathematics and… [an] IQ of 130” (this might be the same patient). (For another relatively dramatic case, see Feuillet et al. 2007.)

Merker describes hydrocephalus as a “far more benign” condition than hydranencephaly, because “cortical tissue is compressed by enlarging ventricles but is present in anatomically distorted form [Sutton et al. (1980)].” Merker seems to be saying that hydranencephaly can result in far more dramatic loss of cortical tissue than hydrocephalus does, and from the literature I’ve skimmed, that seems to be correct.

154. Damasio et al. (2013); Feinstein et al. (2016); Starr et al. (2009); Philippi et al. (2012). If one considers some PVS patients to be “reporting” conscious experience as detected by neuroimaging alone (Owen 2013; Klein 2017a), these cases might serve as additional examples of this phenomenon.

155. For these and other details on Mike, see the following sources:

  • Teri Thomas’ The Official Mike the Headless Chicken Book (2000). The book was previously available online here, and has an Amazon page (with no copies available when I checked) here. I was able to order the book, in September 2016, by calling the Fruita Community Center at 970-858-0360.
  • The story “Headless Rooster: Beheaded chicken lives normally after freak decapitation by ax” on pp. 53-54 of the Oct 22, 1945 issue of Life magazine.
  • Lambert & Kinsley (2004), pp. 83-84.
  • The history page on the website for the annual Mike the Headless Chicken Festival in Fruita, Colorado.
  • Crew (2014).
  • A 2015 BBC Magazine article, “The chicken that lived for 18 months without a head.”
  • The 2001 PBS documentary The Natural History of the Chicken, directed by Mark Lewis. The section on Mike, including interviews with some of the people who witnessed Mike post-decapitation, appears from 34:14-41:30.

Note that while some popular sources report that Mike was killed specifically to serve as that night’s dinner, Lloyd Olsen’s great-grandson Troy Waters disputes this (unimportant) detail, claiming instead that Lloyd and his wife “were actually slaughtering, oh, 40 or 50 of them that day” (Thomas 2000, p. 1).

Here are a few additional details from Thomas (2000):

Q: Did Mike ever try to mate after he was beheaded?

A: No. Mike was only about four and a half months old when he was beheaded, and that’s not old enough for roosters to mate. Usually they have to be about a year old for that. When Mike was beheaded, although his body continued to develop and gain weight, he didn’t mature in the traditional sense…

Q: Did Mike suffer from pain or discomfort?

A: Officers from several humane societies examined Mike on several occasions, and declared him to be free from suffering. According to one account, he had lost the part of his brain that would have caused him to feel pain. That seems to have been verified by the fact that his owners had to tape up his feet to keep him from instinctively scratching his neck where his head would have been. The sharp spurs on his feet would have damaged his exposed neck if they hadn’t been taped up.

However… Troy Waters says that post-decapitation Mike was more docile than most chickens, and that most of the time he just lay in his straw-filled apple box. Lloyd and Hope had to prod him to get him to flap his wings and walk around for the tourists. He may have been depressed… or just in a fowl mood. [p. 21]

…

According to numerous accounts, Mike’s domeless existence was studied extensively by scientists and students at the University of Utah back when he was alive…

However, apparently that research is lost…, and no one at the University of Utah was able to find anything about it, even after extensive searches done of their archives at the request of the Fruita Chamber of Commerce last year, and PBS this year (2000). By the time I called, the librarians had heard all about the story of Mike, and they explained to me that the University of Utah had changed the structure of its life sciences department since the 1940s… Any records of Mike were simply lost, if they ever existed in the first place. [p. 60]

I located only one other source on consciousness which mentions Mike the headless chicken: Leisman & Koch (2009).

156. Mallatt & Feinberg (2016):

Our own solution to the problem raised by the convincing evidence for cortical consciousness is that the consciousness shifted from the tectum to the enlarging cerebral cortex when mammals evolved from their reptile-like ancestors.

157. See especially Merker (2007), including the response commentaries, and also Devor et al. (2014) and Merker (2016). Barron & Klein (2016) are especially succinct in their case against CRVs:

There is now considerable evidence that, in humans, subjective experience can exist in the absence of self-reflexive consciousness, and that the two are supported by different neural structures. Midbrain structures, rather than cortex, seem to be especially important. [Merker (2005, 2007), Parvizi & Damasio (2001), Damasio & Carvalho (2013), and Mashour & Alkire (2013)] have all argued that the integrated structures of the vertebrate midbrain are sufficient to support the capacity for subjective experience.

[Merker (2007)] notes that subjective experience is remarkably sensitive to damage to midbrain structures. Conversely, there is evidence of preserved consciousness even in patients who lack a cortex [Merker (2005)]. Further, although cortical damage can have profound effects on the contents of consciousness, damage to any portion of the cortex alone can spare the basic capacity for subjective experience [Damasio et al. (2013); Philippi et al. (2012); Herbet et al. (2014); Kapur et al. (1994); Friedman-Hill et al. (1995); Damasio & Van Hoesen (1983)]. Cortical damage alone can have profound effects on the contents of consciousness, but even massive cortical damage seems to spare subjective experience itself [Merker (2007); Damasio et al. (2013); Philippi et al. (2012)]. Indeed, there is evidence of residual conscious awareness in patients with severe cortical damage who are otherwise unresponsive to the world, suggesting that preserved subcortical structures may continue to support subjective experience [Owen et al. (2002); Klein & Hohwy (2015)]. Although the mechanism of anesthetic action is still debated [Hudetz (2012)], there is increasing evidence that the effect of anesthetics depends on the disconnection of cortical circuits from subcortical structures rather than on their direct cortical activity [Alkire et al. (2008); Gili et al. (2013)]. Anesthetics [Gyulai et al. (1996)] or electrical stimulation [Herbet et al. (2014)], which affect cortical midline structures without affecting subcortical structures, do not abolish consciousness; they instead produce unresponsive but conscious dreamlike states. Conversely, emergence from anesthesia [Mashour & Alkire (2013); Långsjö et al. (2012)] and coma/vegetative state [Schiff (2010)] are predicted by the reengagement of subcortical structures and reintegration of those structures with cortical circuits. Other authors have noted the powerful subcortical effect of drugs, endogenous peptides, and direct stimulation on primitive motivational states [Panksepp (2008); Mashour & Alkire (2013); Denton (2006)].

In sum, there is good evidence that subcortical structures underlie the basic capacity for subjective experience in humans. This is not to say that the cortex is unimportant for conscious experience, of course. Rather, the proposal is that subcortical structures support the basic capacity for experience, the detailed contents of which might be elaborated by or otherwise depend upon cortical structures [Merker (2013)].

For a reply to Barron & Klein on these points, see Allen-Hermanson (2016). For a counter-reply, and additional clarifications of Klein & Barron’s case against CRVs, see Klein & Barron (2016).

Machado (2007), ch. 3, provides another fairly succinct case against CRVs, in the context of his discussion of the neurological grounds for pronouncing a medical patient “dead.” In the excerpt below, I have left out Machado’s many citations, to avoid cluttering the text:

Any full account of death should include three distinct elements: the definition of death, the criterion (anatomical substratum) of brain death, and the tests to prove that the criterion has been satisfied. Undoubtedly, the term ‘criterion’ for referring to the anatomical substratum introduces confusion in this discussion, because protocols of tests (clinical and instrumental) for brain diagnosis are called ‘diagnostic criteria’ or ‘sets of diagnostic criteria’. Therefore, I will use the term ‘anatomical substratum’ instead of criterion.

During the last decades, three main brain-oriented formulations of death have been discussed: whole brain, brainstem death and higher brain standards… The whole brain criterion refers to the irreversible cessation of all intracranial structure functions. It has been accepted by society mainly for practical reasons…

The brainstem standard was adopted in several Commonwealth countries. Pallis emphasized that the capacity for consciousness and respiration are two hallmarks of life of the human being, and that brainstem death predicts an inescapable asystole. However, a physiopathological review of consciousness generation will provide a basis for not accepting Pallis’ definition of death…

The higher brain formulation springs largely from consideration of the persistent vegetative state (PVS), and has been mainly defended by philosophers. The higher brain theorists have defined human death as the ‘the loss of consciousness’ (definition), related to the irreversible destruction of the neocortex (anatomical substratum).

I will demonstrate in this chapter that consciousness does not bear a simple one-to-one relationship with higher or lower brain structures and that, consequently, the higher brain view is wrong, because the definition (consciousness) does not harmonize with the anatomical substratum (neocortex)…

…Two physiological components control conscious behavior: arousal and awareness… Arousal represents a group of behavioral changes that occurs when a person awakens from sleep or transits to a state of alertness… Awareness, also known as content of consciousness, represents the sum of cognitive and affective mental functions, and denotes the knowledge of one’s existence, and the recognition of the internal and external worlds…

In summary, a human being’s state of consciousness reflects both his or her level of arousal that depends on subcortical arousal-energizing systems and, the sum of the cognitive, affective, and other higher brain functions (content of consciousness or awareness), related to “complex physical and psychologic mechanisms by which limbic systems and the cerebrum enrich and individualize human consciousness.” Therefore, I will use the term arousal when referring to those subcortical arousal-energizing systems, and awareness, to denote the sum of those complex brain functions, related to limbic and cerebrum levels…

…Awareness is thought to be dependent upon the functional integrity of the cerebral cortex and its subcortical connections; each of its many parts are located, to some extent, in anatomically defined regions of the brain…

…Shewmon has discussed some examples of clear participation of subcortical structures in awareness. Experimental animals with complete decortication have been shown to be capable of complex interactions with the environment, which is evidence of some awareness. In lesions of the somatosensory cortex an evident loss of tactile, vibration and joint position sense is observed; nonetheless, conscious experience of pain and temperature is preserved, mediated by subcortical structures, probably the thalamus. This author also commented that two hydranencephalic patients (“prenatal destruction of the cerebral hemispheres with intact skull and scalp”) unquestionably manifested conscious behavior. These two cases are examples of the brainstem “plasticity” in newborns. Clinical and experimental evidence convincingly suggests that the brainstem of newborns is potentially capable of much more complex integrative functioning. This includes some functions commonly considered to be cortical, even in animals. Based on these subjects, the potential presence of some primitive form of awareness in anencephalics, and the possibility of subjective feeling of pain, has been suggested. Thus, according to Shewmon “the human brainstem and diencephalon, in the absence of cerebral cortex, can mediate consciousness and purposeful interaction with the environment.”

…PVS [persistent vegetative state] provides an anatomic-functional model in which arousal is preserved and awareness is apparently lacking. Therefore, it has been suggested that both components of consciousness [i.e., arousal and awareness] “are mediated by distinct anatomic, neurochemical and/or physiological systems.” Nonetheless, the potential plasticity of the brain has demonstrated that subcortical structures could mediate awareness, even with the complete absence of the cerebral cortex. Cases that have undergone hemispherectomy have shown clear signs of neuroplasticity. Austin and Grant reported 3 cases that had undergone total hemispherectomy (comprising cortex, white matter and basal ganglia), who continued speaking and were aware of their environment during the operation, done under local anesthesia…

Thus, awareness is not only related to the function of the neocortex (although it is of primary importance), but also to complex physical and psychological mechanisms, due to the interrelation of the ARAS [ascending reticular activating system], limbic system, and the cerebrum…

…we cannot simply differentiate and locate arousal as a function of the ARAS, and awareness as a function of the cerebral cortex. Substantial interconnections among the brainstem, subcortical structures and the neocortex, are essential for subserving and integrating both components of human consciousness.

The above considerations lead one to conclude that there is no single anatomical place of the brain “necessary and sufficient for consciousness.”

…

Can we deny the existence of internal awareness in PVS, because these patients apparently seem to be disconnected from the external world? The subjective dimension of awareness is philosophically impossible to test, but physiologically it seems conceivable that subjective awareness might continue. Karen Ann Quinlan’s brain showed severe damage of the thalamus, with the cerebral hemispheres relatively spared, and other authors have reported similar findings…

…Thus, in PVS cases it is impossible to deny a possible preservation of internal awareness. According to the neuropathological pattern, either subcortical structures could provide internal awareness, or some remaining activating pathways projecting to the cerebral cortex without relaying through the thalamus could stimulate the cerebral cortex. As consciousness is based on anatomy and physiology throughout the brain, it is impossible to classify a PVS case as dead. The brain is severely damaged, but not fully and irreversibly destroyed.

In ch. 7, Machado adds:

The perceptions of pain and suffering are conscious experiences; unconsciousness, by definition, precludes these experiences. The Multi-Society Task Force on PVS concluded that PVS patients are unconscious, and they “cannot experience pain and suffering.” However, it is important to argue that the pain response in newborns does not involve the cerebral cortex, which is one of the primary loci of damage in PVS. It logically follows that PVS patients have the same potential to experience pain and suffering as newborns. This argument also implies that brute animals cannot experience pain since they are not self-conscious. Howsepian considered this to be “at best counterintuitive and at worst patently false.”

158.I haven’t seen a poll on this question; this is just the sense I get from reading AI papers and talking to AI researchers, especially those in the AI subfield of deep learning.

159.Flanagan (1992), p. 5. Note that Flanagan rejects consciousness inessentialism (p. 6).

160.I’m not the only one to prefer this term. E.g. Rose & Dietrich (2009) write that “[‘conscious inessentialism’] should probably be ‘consciousness inessentialism’ since it is a thesis about consciousness…”

161.This is not an argument against functionalism. In the example given here, the original version of my brain and the post-rewiring version of my brain are different functions (at the level that, by hypothesis, matters for consciousness), even though they result in the same input-output behavior at the level of my global behavior (modulo some extra energy expenditure by the version of my brain that has to undertake the additional computational work of encrypting and decrypting I, and performing computations on an encrypted form of I using fully homomorphic encryption).

For more on fully homomorphic encryption, see Armknecht et al. (2015).

Note that fully homomorphic encryption is possible within a machine learning context. See e.g. Jiang et al. (2016).
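
For readers unfamiliar with the idea of computing on encrypted data, here is a minimal Python sketch of the general phenomenon the thought experiment above relies on. It uses textbook RSA, which is only multiplicatively homomorphic (and insecure at these toy parameter sizes); a fully homomorphic scheme of the kind surveyed by Armknecht et al. (2015) additionally supports addition on ciphertexts, and hence arbitrary computations. The specific numbers and function names below are illustrative choices, not anything drawn from the cited sources.

    # Toy illustration of homomorphic computation: textbook RSA is multiplicatively
    # homomorphic, so multiplying two ciphertexts yields a ciphertext of the product
    # of the two plaintexts. (Insecure toy parameters, for illustration only.)
    p, q = 61, 53                       # small primes
    n = p * q                           # public modulus (3233)
    e = 17                              # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

    def encrypt(m: int) -> int:
        return pow(m, e, n)

    def decrypt(c: int) -> int:
        return pow(c, d, n)

    c1, c2 = encrypt(7), encrypt(3)
    # The multiplication is carried out on the ciphertexts alone; only the final
    # result is decrypted, yet it equals the product of the original plaintexts.
    assert decrypt((c1 * c2) % n) == 7 * 3

The analogous (but far more demanding) requirement in the thought experiment above is that arbitrary computations on an encrypted form of I be possible without decryption, which is what fully homomorphic encryption provides.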

162.As Rachels (1990), p. 131, put it:

Descartes’s view [about the strong difference between human and animal minds] was extreme, even for his own time, and despite its wide influence most thinkers did not share it. Nevertheless, it was a possible view then, in a way that it is not possible now. The reason Descartes’s view of animals is not possible today – the reason his view seems so obviously wrong to us – is that between him and us came Darwin. Once we see the other animals as our kin, we have little choice but to see their condition as analogous to our own. Darwin stressed that, in an important sense, their nervous systems, their behaviours, their cries, are our nervous systems, our behaviours, and our cries, with only a little modification. They are our common property because we inherited them from the same ancestors. Not knowing this, Descartes was free to postulate far greater differences between humans and non-humans than is possible for us.

I think this statement of the point is much too strong, though. Yes, we must take seriously the fact that our brains are on a continuum with those of other animals, but this fact alone cannot tell us which specific cognitive functions we share with specific other species, and which ones we do not. One could make the same argument as Rachels does for mirror self-recognition, and this argument would fail to correctly predict that chimpanzees exhibit mirror self-recognition and gorillas do not (Anderson & Gallup Jr. 2015). Darwin alone cannot answer the specific questions of comparative psychology — that’s what the fields of comparative psychology and ethology are for.

163.Compare to Dennett (2017)’s extended discussion of “competence without comprehension.” One could make a similar (but not identical) case for “competence without consciousness.”

164.See e.g. Seager & Allen-Hermanson (2010); Chalmers (2015); ch. 4 of Weisberg (2014); Goff (2017b).

165.Barring concerns about “triviality,” that is.

166.Tye (2000), ch. 8:

States with PANIC are nonconceptual states that track certain features, internal or external, under optimal conditions (and thereby represent those features). They are also states that stand ready and available to make a direct difference to beliefs and desires. It follows that creatures that are incapable of reasoning, of changing their behavior in light of assessments they make, based upon information provided to them by sensory stimulation of one sort or another, are not phenomenally conscious…

Consider, to begin with, the case of plants. There are many different sorts of plant behavior… [but] the behavior of plants is inflexible. It is genetically determined and, therefore, not modifiable by learning… [Plants] neither acquire beliefs and change them in light of things that happen to them nor do they have any desires… [Plants exhibit] no goal-directed behavior, no purpose, nothing that is the result of any learning, no desire for [e.g.] water. Plants, then, are not subject to any PANIC states… [and thus are] not phenomenally conscious.

…What about caterpillars? …Different kinds of caterpillars show different sorts of behavior upon hatching… Some, for example, eat the shells of the eggs from which they emerge; others crawl away from their cells immediately. But there is no clear reason to suppose that caterpillars are anything more than stimulus-response devices. They have a very limited range of behaviors available to them, each of which is automatically triggered at the appropriate time by the appropriate stimulus. Consider, for example, their sensitivity to light. Caterpillars have two eyes, one on each side of the head. Given equal light on both eyes, they move straight ahead. But given more light on one of the eyes, [they turn] that side of the body toward the direction of most intense light, which is why caterpillars climb trees all the way to the top; the light there is strongest. Shift the light to the bottom of the tree, and the caterpillar will go down, not up, as it usually does, even if it means starving to death. Remove one of its eyes, and it will travel in a circle without ever changing its route.

Once one is made aware of these facts, there seems no more reason intuitively to attribute phenomenal consciousness to a caterpillar on the basis of how it moves than to an automatic door. The latter responds in a fixed, mechanical way to the presence of pressure on a plate in the floor or ground in front of it, just as the former responds mechanically to the presence of light. No learning, no variation in behavior with changed circumstances, no reasoned assessment occurs… Caterpillars, then, do not support states with PANIC any more than plants do.

…I come finally to the case of honey bees. There are many examples of sophisticated honey bee behavior. Bee colonies take on odors, primarily as a result of the food contained in the hives. These odors, which vary from hive to hive, are absorbed by the fur on the bees, and guards, placed at the entrance to the hive, learn to use it to check whether incoming bees are intruders or members of the colony. Scouts fly out from the hive each spring in search of a cavity suitable for a new hive. They use the sun as their main guide but they also rely upon landmarks. Upon returning, they dance to show bees in the hive what they have discovered. Their dance requires them to remember how the sun moves relative to the positions of the landmarks, enabling them to communicate the position of the cavity correctly. Recruit bees must learn what the dancers are telling them. This demands that they form some sort of cognitive map involving the landmarks. Scouts back from their trips attend to the dances of other scouts and then go out again to visit the different cavities. With their later return, they dance again. Eventually, the dances agree and the colony moves as one to the chosen spot…

Of course, some of this is preprogrammed. Bees choose neither to dance nor how to navigate; these activities are instinctive. But, equally clearly, in the above examples, the bees learn and use facts about their environments as they go along…

…[Another] example is provided by an experiment in which bees were shown some sugar solution on a plate near the hive. Then every five minutes or so, the plate was moved away so that the distance from the hive increased by one quarter. Initially, with the plate only four inches away, it was moved just one inch. But later when the food was four hundred feet away, the plate was moved another 100 feet. Amazingly, the bees caught on to this procedure and began to anticipate where the sugar would be next by flying there and waiting for the plate to arrive!

There seems to be ample evidence, then, that honey bees make decisions about how to behave in response to how things look, taste, and smell. They use the information their senses give them to identify things, to find their way around, to survive. They learn what to do in many cases as the situation demands. Their behavior is sometimes flexible and goal-driven. They are, therefore, the subjects of states with PANIC… [and thus] are phenomenally conscious…

167.Carruthers was, for decades, a leading defender of higher-order approaches to consciousness, but has recently (Carruthers 2017) recanted, and now defends a first-order view.

168.Technically, if Tye’s “poised” criterion requires that conscious contents be poised to affect “belief” or “desire” (rather than behavior more generally), then dorsal stream processing might not satisfy Tye’s PANIC theory, but a very similar first-order theory would be satisfied.

169.For example:

  • Dennett’s illusionist theory of consciousness makes use not just of (e.g.) global broadcasting, but also of culturally learned memes (Dennett 2017). Dennett seems to suggest that as a consequence, consciousness might be limited to humans, or perhaps apes.
  • Graziano’s illusionist theory (Graziano 2013) assumes not just integrated information, higher-order representations, and a “global workspace,” but also an internal model of one’s attentional processes, an internal model of the self, and a set of ways in which these models must interact in order to instantiate a conscious experience.

However, it’s not clear to me that illusionist theories of consciousness necessarily imply that consciousness is complex; this could be an historical accident. (See also notes from my conversation with Keith Frankish.)

Frankish (2016b) lists several more (strong) illusionist theories in footnote 2, reproduced below with links to the cited papers:

Defenders of illusionist positions (under various names) include Dennett (1988; 1991; 2005), Hall (2007), Humphrey (2011), Pereboom (2011), Rey (1992; 1995; 2007), and Tartaglia (2013). As Tartaglia notes, Place and Smart also denied the existence of phenomenal properties, which Place described as ‘mythological’ (Place, 1956, p. 49; Smart, 1959, p. 151).

Arguably, one might also cite Chen et al. (2016), Chrisley & Sloman (2016), Kammerer (2016), and perhaps some ancient Buddhist philosophers (Garfield 2016).

170.For example, Carruthers (2017) is a somewhat illusionist account, and might (in the end) be relatively simple:

…what Carruthers & Veillet (2011) proposed is that phenomenal consciousness can be operationalized as whatever gives rise to the “hard problems” of consciousness… That is, a given type of content can qualify as phenomenally conscious if and only if it seems ineffable, one can seemingly imagine zombie characters who lack it, one can imagine what-Mary-didn’t-know scenarios for it, and so on. For the very notion of phenomenal consciousness seems constitutively tied to these issues. If there is a kind of state or a kind of content for which none of these problems arise, then what would be the point of describing it as phenomenally conscious nonetheless? And conversely, if there is a novel type of content not previously considered in this context for which hard-problem thought-experiments can readily be generated, then that would surely be sufficient to qualify it as phenomenally conscious.

Once phenomenal consciousness is operationalized as whatever gives rise to hard-problem thought-experiments, however, it should be obvious that the initial challenge to first-order representationalism collapses. The reason why nonconceptual contents made available to central thought processes are phenomenally conscious, whereas those that are not so available are not, is simply that without thought one cannot have a thought-experiment. Only those nonconceptual contents available [via global broadcasting] to central thought are ones that will seem to slip through one’s fingers when one attempts to describe them (that is, be ineffable), only they can give rise to inversion and zombie thought-experiments, and so on. This is because those thought-experiments depend on a distinctively first-personal way of thinking of the experiences in question. This is possible if the experiences thought about are themselves available to the systems that generate and entertain such thoughts, but not otherwise. Experiences that are used for online guidance of action, for example, cannot give rise to zombie thought-experiments for the simple reason that they are not available for us to think about in a first-person way, as this experience or something of the sort. They can only be thought about third-personally, as the experience that guides my hand when I grasp the cup, or whatever.

There is simply no need, then, to propose that dual higher-order / first-order nonconceptual contents are necessary in order for globally broadcast experiences to acquire a subjective dimension and be like something to undergo. Once possession of such a dimension / possession of phenomenal consciousness is operationalized as whatever gives rise to hard-problem thought-experiments, then the mere fact of global broadcasting provides the required explanation. For it is nonconceptual content made available to central thought processes, and which is thus available to be thought about in a distinctively first-personal way, that grounds those thought-experiments.

If we suppose that this explanation on behalf of the first-order theorist is correct, however, then what should be said about phenomenally conscious experience in nonhuman animals? Presumably no animals have the conceptual resources to engage in hard-problem-type thought-experiments. (Indeed, the same may be true of many humans.) Does that mean that their experiences aren’t phenomenally conscious ones? Surely not. For giving rise to hard-problem thought-experiments is not supposed to be constitutive of phenomenal consciousness. Rather, it provides a theory-neutral way of delimiting the class of phenomenally conscious states in ourselves: roughly, phenomenally conscious states are the ones that are especially philosophically challenging or puzzling. Instead (according to first-order representationalism), what constitutes phenomenal consciousness is being a globally broadcast nonconceptual state. And there is plenty of reason to think that many species of animal (perhaps all vertebrates) have states of that general kind…

Seen from this perspective, indeed, there isn’t any deep issue about the phenomenally conscious status of animal experience. Once we have established that an animal has a similar cognitive architecture to ourselves, with globally broadcast nonconceptual states that are made available to a range of different belief-forming, affect-generating, and executive decision-making systems, then there is simply no further question whether its experiences are really like something for the animal, or whether its experiences genuinely possess a subjective — felt — dimension. For there is no further property that needs to be added in order to render an experience phenomenally conscious. All that needs to be shown is that the animal possesses states of the same kind that we identify as phenomenally conscious (that is, which give rise to hard-problem thought-experiments) in ourselves.

Indeed, from this perspective it also emerges that there isn’t really a deep divide between creatures capable of phenomenal consciousness and ones that aren’t. For instance, we know that bees have structured belief-like states that guide them in the service of multiple goals, informed by perceptual input from a number of different sense-modalities… So they seem to possess simple minds… But suppose it turns out that bees nevertheless lack globally broadcast perceptual states. This might be because different types of perceptual content are made available only to specific decision-making systems, for example. Perhaps no perceptual states are broadcast to most such systems simultaneously. In which case they lack phenomenal consciousness, according to an account that identifies the latter with globally broadcast nonconceptual content. But so what? This doesn’t mean that bees are all “dark on the inside” or anything of the sort. Nor does it mean that there is any point in phylogeny when some special type of experience (one that is intrinsically like something to undergo) appears on the scene. Indeed, the question of when, precisely, phenomenal consciousness emerged in phylogeny makes no sense, from this perspective.

All that can be said is that there are a variety of kinds of nonconceptual perceptual state across creatures, some of which are available to inform more systems and some of which are available to inform fewer. These states thus differ in their functional roles, and some of these roles are more similar than others to the states in ourselves that give rise to hard-problem thought-experiments, that is all. Nothing special, or magical, or especially significant happened in evolution when global-broadcasting architectures first emerged on the scene. It was just more of the same, but somewhat differently organized.

171.By “lower-order,” I have in mind accounts of consciousness that are less complex than typical higher-order accounts, but which are more complex than typical first-order accounts.

172.For example Shettleworth (2009), Vonk & Shackelford (2012), Pearce (2008), Wynne & Udell (2013), Dugatkin (2013), Olmstead & Kuhlmeier (2015), Cheng (2016), or Menzel & Fischer (2011).

See also task-specific and taxa-specific sources on animal cognition, such as:

  • Shumaker et al. (2011) and Sanz et al. (2013) on tool use
  • Whitehead & Rendell (2014) on culture among dolphins and whales
  • Brown et al. (2011) and Balcombe (2016) on fish cognition and behavior
  • Emery (2016) and Ackerman (2016) on birds
  • Marino (2017) on cetaceans

(Some of these are popular sources rather than academic sources.)

173.From James Rachels’ chapter “The Basic Argument for Vegetarianism” in Sapontzis (2004), p. 78.

174.See also this Quora thread: What is the most intelligent thing a non-human animal has done?

175.See Shumaker et al. (2011), ch. 2, for an overview. Below are some examples. Note that the type of animal tool behavior is capitalized, e.g. Baiting and Inserting.

McMahan… described Baiting of prey by the neotropical assassin bug (Salyavata variegata), which uses previously captured termite carcasses to capture additional termites. By holding and shaking the carcass over the nest’s entrance hole, the assassin bug lures a termite into attempting to retrieve the carcass for its own consumption. Once a termite grasps the lure, the assassin bug slowly pulls the termite out of the nest entrance toward itself. Once the termite is within reach, the assassin bug quickly kills and eats its new victim. McMahan… reported that the assassin bug, if not disturbed, will repeat this process an average of seven or eight times. The author once saw an assassin bug capture thirty-one termites in this manner over the course of three hours.

…

Some female digger wasps Insert small twigs into nest burrows they have closed with soil and Probe with them… This behavior may settle and pack the soil and provide the female with sensory information regarding the adequacy of the closure.

…

Boxer crabs, also known as “pom-pom crabs”… Detach small anemones from the substrate and Brandish or Wave one in each cheliped [claw]… The crab moves with its chelipeds extended and waving. If the crab is mechanically disturbed, the chelipeds with the anemones are directed at the source of irritation. This behavior would presumably facilitate the discharge of stinging nematocysts by the anemones toward the threat. However, the crab’s use of anemones is not limited to protection or defense. If food is placed near the oral disc of the anemone, the crab immediately seizes the food with its anterior ambulatory appendages. Thus any food ensnared by the anemone in its own tentacles is apt to be appropriated by the crab. The crab also removes debris adhering to the body of the anemone and ingests the edible bits.

…

Finn, Tregenza, and Norman (2009) reported defensive tool use by the veined octopus (Amphioctopus marginatus). The octopuses frequently carried coconut shell halves and, when threatened, assembled them into a shelter by aligning the two halves of the coconut and hiding inside. These authors argued that the behavior is significant, as the octopuses carry the shells for future use as a shelter, despite the immediate energetic and locomotor costs. During travel, the octopus carries the shells under its body, in a form of locomotion termed “stilt walking,” which the researchers described as “ungainly and clearly less efficient than unencumbered locomotion”…

176.Finn et al. (2009).

177.Grosenick et al. (2007).

178.See e.g. Jelbert et al. (2014) and the resources on this page by the Behavioural Ecology Research Group at the University of Oxford.

179.See this earlier footnote.

180.The study of mirror self-recognition provides some examples of failed replication (Anderson & Gallup Jr. 2015):

…the results of any study must be independently replicated by other scientists in order for the findings to be considered reliable. The demonstration of mirror self-recognition in chimpanzees, orangutans and humans has been replicated many times by different investigators all over the world… In contrast, the track record for claims of self-recognition in other species has not been encouraging. Single published reports of mirror self-recognition in one elephant that failed on a re-test (Plotnik et al. 2006), one dolphin (Reiss and Marino 2001), and two magpies (Prior et al. 2008) have yet to be replicated. Indeed, recent evidence with other corvids suggests that apparent instances of mirror self-recognition by magpies may be an artifact of tactile cues (Soler et al. 2014). And in the case of cottontop tamarins (Hauser et al. 1995) an attempt to replicate the original positive results completely failed (Hauser et al. 2001).

A famous study that is still widely discussed, but was in fact overturned several decades ago, is B.F. Skinner’s “superstition in pigeons” experiment. Here is Wynne & Udell (2013), pp. 105-106:

The view that contiguity [in time and space] between behavior and reward was the key element in animals’ learning about instrumental conditioning led to an interesting controversy. Skinner (1948/1972) performed an experiment in which hungry pigeons caged individually were given food at regular intervals of time (every 15 seconds). They did not have to do anything to get this food. Skinner left the pigeons alone for a while and then came back to see what had happened. What Skinner claimed to find was that each pigeon was doing something different in the moment just before food was delivered.

One bird was conditioned to turn counterclockwise about the cage, making two or three turns between [rewards]. Another repeatedly thrust its head into one of the upper corners of the cage. A third developed a ‘tossing’ response, as if placing its head beneath an invisible bar and lifting it repeatedly. (Skinner, 1948/1972)

Skinner argued that the following development had taken place. At first, each pigeon had been quietly minding its own business, preening, walking around, or whatever. Then, unexpectedly, food was delivered. The closeness together in time (contiguity) between the action the pigeon had been performing and the delivery of food led the pigeon to react as if it had been rewarded for that action and therefore to repeat that action more often in the future. In other words, Skinner was arguing that contiguity alone was a complete explanation of how animals learn about instrumental conditioning.

This account stood until, in 1971, John Staddon and Virginia Simmelhag repeated Skinner’s experiment but this time with much more detailed recording of what the pigeons were actually doing. Staddon and Simmelhag did not find their pigeons engaging in different, more or less randomly chosen behaviors after a period of regular food deliveries. Rather, just before each food delivery all the pigeons tended to do the same thing – peck at the food magazine. In other words, the pigeons had learned to expect food at regular intervals and therefore, each time food came around, performed actions that were appropriate to trying to get the food… Certainly, contiguity is important in instrumental conditioning, just as it is in Pavlovian conditioning, but it is not the whole story. Animals do not learn what the consequences of their actions are just on the basis of what happens immediately after they perform some action – another factor is necessary.

Part of that something else is contingency – the reliable dependency of one event on another. If a pigeon, or other animal, is going to learn that something it did had a particular consequence in the world around it, then the consequence must reliably follow on the action – a behavior-consequence contingency must exist. The simplest experimental demonstrations of this fact are those in which reward is not delivered for every appropriate response. This breaks up the contingency between behavior and its consequence and makes it more difficult for an animal to learn about the outcomes of its actions. The contingency between behavior and consequence can also be disturbed by giving an animal ‘free’ rewards – rewards that do not depend on performance of any particular action… Under this condition animals are much less likely to continue performing the intended response. Contingency, however, is once again not the whole story. Much contemporary research in instrumental conditioning is aimed at understanding just how animals come to learn about the relationship between their actions and the consequences of those actions.

Or consider a case involving honeybee navigation, recounted in Wynne (2004), ch. 2:

Foraging honeybees flying to and from the hive use landmarks, such as rows of trees, hills, or other salient features, to find their way. As they fly out, bees commit these scenes to memory and later use them to guide themselves on future trips. This has been demonstrated by experiments in which bees were captured and then released in the vicinity of familiar landmarks but out of sight of the hive or foraging site. Experiments with a movable landmark (a car) point to a similar conclusion.

James Gould, an expert in honeybee habits at Princeton University, claimed that bees use landmarks in a very complex way. Gould proposed that honeybees not only recognize how landmarks look when they are heading to or from a foraging site but integrate their representations of these landmarks into a proper map. A map is something much more than just a set of landmarks with information about which way you should head as you pass by them. We can call the use of landmarks without a map a route sketch: a set of directions relating to different landmarks. I can find my way to an unfamiliar spot if I am given information like “turn right at the lights by the vet,” “head straight past the football field,” and so on. This is a route sketch. If I get lost, however, or want to approach this unfamiliar spot from a different direction, then this route-sketch method is completely ineffective. If I have a proper map, on the other hand, I should be able to find my way even if I take a wrong turning, and I can find my way no matter which direction I am coming from. A map is a far superior instrument to a route sketch, but it is also far more complex. Is it really possible for the sand-grain-sized brain of the bee to contain a map?

The most direct way to test if a person or a honeybee really has a mental map or is just relying on landmarks is to first let them set off on their travels; then capture them, blindfold them (or, if they are bees, put them in a darkened container like a match box); and remove them to another place; then release them to see if they can figure out which way to head. This is just what Gould did with his bees, and he found that after being captured and displaced by the experimenters, the bees were able to correct their course for their goal when released. Consequently, Gould concluded that the bees were using mental maps. For several years, other researchers attempted to replicate Gould’s results without success. Finally the explanation for these disparate results seems to have been found.

Imagine traveling in a car using a route sketch to find your way somewhere. If you are displaced from your route and have to find your way back, you will, in general, be completely lost. However, if your route sketch includes things like “head for the mountains,” or “keep the Eiffel Tower on your left,” you may well be able to find your way even after quite large displacements, because such major objects may still be visible. It seems that this was the situation for Gould’s displaced bees. From their displaced positions they were still able to see some relevant landmarks.

To better test whether honeybees have mental maps, one must displace them to a position where landmarks are definitely not visible. A sometime collaborator of Gould’s, Fred Dyer, now at Michigan State University, displaced bees while they were out foraging. He compared their behavior when they were displaced so that the original landmarks were still visible with their behavior when they were displaced so that they could no longer see any landmarks. It was only when the landmarks were still visible that the bees successfully continued in the direction of the feeder they had been heading for before the experimenters displaced them. Removed from the area where the familiar landmarks were visible, the bees were completely lost.

Thus, pending further studies, it seems that bees, though they remember landmarks and how to fly round them, do not possess a true map for navigation.

Here is another example, this time involving the study of object permanence in dogs, recounted in Wynne & Udell (2013), pp. 50-52:

Testing for object permanence can be straightforward. One test is simply to make a desired object disappear from view and ask whether the subject searches for it in the spot where it was placed… A task of this type is known as a ‘visible displacement’. There is no trick – everything that happens to the object is clearly visible to the subject (adult human, child, or animal). Somewhat more complex is the ‘invisible displacement’ task. In this case, a desired object is first placed in a container, which is then taken behind a screen, out of the subject’s sight, where the object is removed from the container. Finally, the empty container is shown to the subject, who is then free to search for the desired object. An individual capable of object permanence will recognize that if the object is no longer in the container, then it must have been removed while it was behind the screen. Consequently, this subject will search for the object behind the screen. Children can solve the simpler task, the visible displacement, at around 12 months of age. Only children above about 18 months are able to solve invisible displacement problems.

…

Gagnon and Dore (1994) reported that puppies were unsuccessful on the invisible displacement task but that full-grown dogs were the only animals outside the great ape clade to succeed on an invisible displacement problem. However, Emma Collier-Baker and colleagues (2004) noticed a problem with Gagnon and Dore’s experimental design. Gagnon and Dore had been in the habit of leaving the device used to move the toy from box to box next to the last box it had visited. This was the box that the toy had been left behind. When Collier-Baker’s group left the container in various different locations, the dogs had no idea where to find the toy. Thus, it seemed clear that the dogs had not really understood the invisible displacement.

181.For a discussion of some of these issues in the context of studies of animal behavior (or in some cases, ecology more broadly), see e.g. Fidler et al. (2017); Ihle et al. (2017); Nakagawa et al. (2017); Tuyttens et al. (2016); Borries et al. (2016); Garamszegi (2016); Parker et al. (2016); the section on “sample sizes and confounding factors” in Biro & Stamps (2015); Nakagawa & Parker (2015); Kardish et al. (2015); Caetano & Aisenberg (2014); van Wilgenburg & Elgar (2013); Waller et al. (2013); Nakagawa & Santos (2012); Taborsky (2010); James et al. (2009); Hurlbert (2009); Jennions & Møller (2003); Lessells & Boag (1987).

182.For example, Dewsbury (1984) called Morgan’s Canon “perhaps, the most quoted statement in the history of comparative psychology” (p. 187), and de Waal (2001) called it “the perhaps most quoted statement in all of psychology” (p. 67). See also Breed (2017), ch. 11.

183.See Breed (2017), ch. 11.

For another account of this shift in cultural norms, de Waal (2016) relates (in the prologue):

For most of the last century, science was overly cautious and skeptical about the intelligence of animals. Attributing intentions and emotions to animals was seen as naïve “folk” nonsense. We, the scientists, knew better! We never went in for any of this “my dog is jealous” stuff, or “my cat knows what she wants,” let alone anything more complicated, such as that animals might reflect on the past or feel one another’s pain. Students of animal behavior either didn’t care about cognition or actively opposed the whole notion. Most didn’t want to touch the topic with a ten-foot pole. Fortunately, there were exceptions — and I will make sure to dwell on those, since I love the history of my field — but the two dominant schools of thought viewed animals as either stimulus-response machines out to obtain rewards and avoid punishment or as robots genetically endowed with useful instincts. While each school fought the other and deemed it too narrow, they shared a fundamentally mechanistic outlook: there was no need to worry about the internal lives of animals, and anyone who did was anthropomorphic, romantic, or unscientific.

Did we have to go through this bleak period? In earlier days, the thinking was noticeably more liberal. Charles Darwin wrote extensively about human and animal emotions, and many a scientist in the nineteenth century was eager to find higher intelligence in animals. It remains a mystery why these efforts were temporarily suspended, and why we voluntarily hung a millstone around the neck of biology… But times are changing. Everyone must have noticed the avalanche of knowledge emerging over the last few decades, diffused rapidly over the Internet. Almost every week there is a new finding regarding sophisticated animal cognition, often with compelling videos to back it up. We hear that rats may regret their own decisions, that crows manufacture tools, that octopuses recognize human faces, and that special neurons allow monkeys to learn from each other’s mistakes. We speak openly about culture in animals and about their empathy and friendships. Nothing is off limits anymore, not even the rationality that was once considered humanity’s trademark.

See also Shettleworth’s short history of attitudes toward anthropomorphism within ethology, which I excerpt in this footnote.

184.Though of course, this may have become true during the last 20 years, for reasons unknown to me.

185.For Wynne’s views on animal “thought” and “intelligence,” as well as internal mental representations, see Wynne & Udell (2013), pp. 3-4:

I am [not] convinced that the term ‘thought’ can be made to do useful work for a modern animal psychology… to me ‘thought’ implies language. Though there are certainly mental operations that do not involve language, I would not consider these as thought. Furthermore, my assessment of the attempts to teach language to apes is that this enterprise has been unsuccessful (see Chapter 12). Consequently, I do not believe that animals think in the way we mean that word when we apply it to members of our own species. It is in my view unhelpful anthropocentrism, verging on anthropomorphism, to talk of thought in nonhumans. Talking about ‘thinking’ animals tricks us into believing we understand animal mental life better than we really do…

‘Intelligence’ is another problematic term for animal psychology. Even in our own species, arguments rage over what constitutes intelligence, where it comes from, and how to measure it. But even if we accept that, for humans, there exists a package of problem-solving skills that can be effectively measured and labeled ‘intelligence’, it is not clear what this means for nonhumans. Talk of intelligence is often linked to attempts to form a single scale of intelligence, with some species at the top (humans, inevitably) and others at the bottom (marsupials, perhaps, or insects). Modern comparative psychologists take the more Darwinian view that each species has its own problems to solve and has therefore evolved its own skills to solve them. All the species that we see around us today have evolved for exactly the same length of time, and each has adapted to a unique niche. As we shall see in the chapters that follow, different niches make different demands on their occupants, and these include different cognitive demands.

…Finally, what about ‘cognition’ – what does that term mean? In this text, ‘animal cognition’ simply means the full richness and complexity of animal behavior. To many psychologists, ‘cognition’ implies behavior driven by internal representations of the world. I prefer to remain agnostic on the question of internal representations because I believe that seeking simple parsimonious explanations is central to the scientific approach. That means that I look for ways of explaining complex behavior in terms of the simplest possible behavioral rules. This is particularly apparent in Chapter 6 of the present text, where I consider animal reasoning. But even if it can be demonstrated that a particular, apparently complex, behavior can be explained with simple behavioral rules, I still believe it is reasonable to call that behavior cognitive. Therefore, ‘cognition’ will not be defined here in a way that demands the involvement of internal representations.

For Wynne’s reference to Morgan’s Canon as “the most awesome weapon in animal psychology,” see Wynne & Udell (2013), p. 14.

On chimpanzee theory of mind, see Wynne & Udell (2013), pp. 190-197.

On mirror self-recognition, see the quote here.

On teaching of conspecifics by apes, see Wynne & Udell (2013), pp. 225-226.

On the grammatic competence of language-trained apes, see Wynne & Udell (2013), pp. 289-291:

Overall, the evidence for grammatical structure in the spontaneous utterances of any of the language-trained apes is very slight. As we have seen, most of the spontaneous statements made by these animals have been extremely short (typically less than two signs long), leaving little room for grammatical development. Some preferred patterns of word order have been found in these studies. For example… Patterson (1978) reported that Koko usually put the sign ‘more’ in first position in most of the utterances that contained it. However, syntax is not a set of rules about where specific words or signs appear in a sentence but about the ordering of different types of words. Imagine if I placed the word ‘more’ at the beginning of every sentence that contains it. More would that be grammatical? Obviously not. In English it is usual to place subject, action, and object in that order. Consequently, in this simple sentence structure, the word ‘more’ would belong at the beginning of a sentence only if it refers to the subject of the sentence (‘More water flooded into houses’). If I wanted ‘more’ to refer to the object of the sentence, then it would belong toward the end (‘Water flooded into more houses’). A habit for putting one word or another at some position in a sentence is not itself grammatical.

By the early 1980s the great excitement generated among those interested in the possibilities of language in other species by Washoe, Lana, Nim, and the other apes in language training had given way to disappointment as the problems with these animals’ performances became apparent. It was around this time that a new species entered the ape language arena… Kanzi [a bonobo] was trained by Duane Rumbaugh and Sue Savage-Rumbaugh to express himself with Yerkish symbols, but they communicate with him in spoken English… it was found that Kanzi quickly picked up how to use the Yerkish keyboard to indicate things that he wanted, and he could also ‘name’ things he was shown by pressing the correct symbol…

Sue Savage-Rumbaugh and colleagues (1993) carried out on Kanzi the most thorough tests of grammatical comprehension of any of the ape language studies. They tested Kanzi with 310 different sentences; for example, ‘Would you please carry the straw?’ Of these 310, Kanzi responded correctly to 298. First off, Savage-Rumbaugh and colleagues acknowledged that there was no evidence to suggest that words such as ‘would’, ‘please’, ‘the’, and so on, carried any meaning for Kanzi. They maintained, however, that Kanzi did understand the syntax in the remaining words. That Kanzi could understand, to take an example that was not used in the study, that ‘Dog bites man’ means something different from ‘Man bites dog.’ The problem with this interpretation is that very few of the commands on which Kanzi was tested were at all ambiguous as to what could be done to what. Consider the example given above, ‘Would you please carry the straw?’ Kanzi could carry a straw, but could a straw carry Kanzi? Certainly not. Kanzi correctly responded to ‘Grab Jeannine’ and ‘Give the trash to Jeannine.’ In the first, Jeannine is the object of the sentence (the thing grabbed); in the second she is the subject (the person to whom the trash should be given): Did Kanzi understand this grammatical distinction when he responded correctly to these two commands? The most likely answer is no. In the first example, the verb ‘grab’ does not permit any other interpretation than to grab the thing named. In the second example, Kanzi could hardly give Jeannine to the trash instead of the trash to Jeannine!

Kanzi was presented with just 21 pairs of commands that could form a test of grammatical comprehension; the pairs of instructions were offered in two different forms, such as ‘Make the [toy] doggie bite the [toy] snake,’ and ‘Make the [toy] snake bite the [toy] doggie.’ Although Savage-Rumbaugh and colleagues reported that Kanzi got 57 percent of these items correct, a reanalysis of the original results indicated that Savage-Rumbaugh and colleagues’ grading of Kanzi’s responses had been extremely generous (Wynne, 2004). Rescored to eliminate overinterpretation, Kanzi actually achieved fewer than 30 percent correct responses to these critical commands that demand grammatical comprehension for their correct completion.

Interestingly, Kanzi’s spontaneous utterances remain extremely short: The vast majority of them (94%) are just a single sign…

To summarize, the claims of their advocates notwithstanding, the ape language projects have generated very little evidence of linguistic comprehension or production. The labeling of the signs produced by chimpanzees and gorillas with English words has obscured a great deal more than it has uncovered. Convincing evidence for even rudimentary features of what could be considered language understanding, such as displaced reference (using a sign to refer to an object that is not present), is extremely scarce. As for syntax and grammar, the typical ape’s one- or two-word utterance hardly offers much scope for grammatical prowess, and very little has been observed.

186.On Alex the Parrot’s counting ability, see Wynne & Udell (2013), pp. 78-79:

In order to count, it is not just necessary to recognize that five items are more than four items (relative number). It is not even enough to recognize that every group of five items has something in common with every other group of five items (absolute number), whatever those items may be. Counting also means recognizing at least two further qualities of numbers:

1. Tagging: a certain number name or tag goes with a certain quantity of items. In English the name ‘one’ or symbol ‘1’ stands for a single item. ‘Two’ or ‘2’ goes with a pair of items, and so on. These tags must always be applied in the same order.

2. Cardinality: the tag applied to the last item of a set is the name for the number of items in that set. Thus, as I tag the pens on my desk, I call the first one ‘one’, the next one ‘two’, and the last one ‘three’. ‘Three’ is consequently the correct name for the number of pens on my desk.

…Some of the strongest evidence for counting comes from an African gray parrot called Alex, trained in a rather original way by Irene Pepperberg… Alex was trained to respond verbally in English to questions presented to him verbally by an experimenter. He would be presented a tray of several objects and asked ‘What’s this?’ or ‘How many?’ In tests with novel objects Alex was able to correctly identify the number of items in groups of up to six objects with an accuracy of around 80 percent. Even mixed groups of more than one type of object were not an insurmountable problem for Alex, though his accuracy suffered a little. Although only small numbers were tested, Pepperberg took care to ensure that the objects did not fall into characteristic patterns on the tray. Alex was also able to answer questions such as ‘How many purple wood?’ when presented with a tray containing pieces of purple wood along with orange wooden items, purple pieces of chalk, and orange chalk all intermixed.

On the formation of person-specific concepts in northern mockingbirds, see Wynne & Udell (2013), p. 49:

Most studies on concept formation in animals have had to be carried out in the laboratory in order to fully control the stimuli that are presented to the subjects. But in one particularly creative study Douglas Levey and colleagues (2009) succeeded in demonstrating that northern mockingbirds on a university campus could form a concept of particular people. Mockingbirds learned to assess the threat posed by different individuals. People who threatened a nest on four successive days were much more likely to be intercepted and squawked at than a novel person who had never interfered with the nest before.

On pigeons discriminating different schools of art, see Wynne & Udell (2013), p. 41:

In a study by Shigeru Watanabe and his colleagues (Watanabe et al., 1995), pigeons were even able to discriminate paintings by Monet and Picasso. The pigeons also correctly identified novel paintings by these two artists. In the first study of the categorization of schools of art by a nonhuman subject, paintings by Cezanne and Renoir were spontaneously categorized as belonging to the Monet school, while paintings by Braque and Matisse were categorized as belonging to the Picasso school.

On tool modification and/or use by some birds, see Wynne & Udell (2013), pp. 119 & 127-129:

During participation in a study to assess preference between a hooked or straight wire in a food-retrieval task, [a crow named] Betty found herself with only a straight wire after the male crow in the experiment removed the more effective hooked version. Not to be deterred, Betty spontaneously began to bend the end of the straight wire into a hook, which she then used to successfully retrieve the food reward. To explore this behavior further, a year later Alex Weir, Jackie Chappell, and Alex Kacelnik (2002) gave Betty and her male companion ten trials where food was placed in a small bucket lowered into a pipe. For each trial, the crows were provided with only a straight metal wire. While the male crow did manage to retrieve the food using the straight wire during one trial, he never bent the wire and was generally unsuccessful at the task. Betty, on the other hand, quickly bent the wire into a hook (often in less than 10 seconds), which allowed her to once again retrieve the food from the tube with relative ease. In total, Betty displayed this behavior on nine out of the ten trials, suggesting that at least some crows are capable of novel tool construction even from materials not found in their natural environment.

…

In [one of Aesop’s fables], a crow dying of thirst encounters a pitcher with a small amount of water located at the bottom, just out of the crow’s reach. In a moment of insight the crow begins to drop nearby rocks in the pitcher, raising the level of the water and saving his own life. Given the growing number of reports of tool use and other complex behaviors in crows and other related birds, some scientists questioned whether there might be truth to this old tale. The first study in this line of investigation was conducted by Christopher Bird and Nathan Emery (2009). Four adult hand-raised rooks, a species of corvid that is not known for tool use in the wild, were individually presented with a clear plastic tube holding a waxworm – a preferred food item – that was floating on a small amount of water located at the bottom. As in Aesop’s tale the birds could not reach the water or the worm floating upon it. While all subjects had been exposed to tubes of this type before and had encountered gravel stones in their home enclosure, none of the subjects had engaged in a task that required them to add rocks to such a tube previously. Nonetheless, when stones were presented, all four rooks began to drop the rocks in the tube, raising the water level to secure their floating food reward… The rooks also demonstrated sensitivity to the number of stones required to raise the water level to an appropriate height. When this study was later replicated with New Caledonian crows (Taylor et al., 2011), the crows did not spontaneously make use of the stones to adjust the water level. Instead the stones had to be initially set up on a platform so that they would accidentally fall into the water when the bird attempted to reach for the worm. After observing the outcome of several accidental stone drops (a rise in the water level and, in some cases, a reachable worm), all five crows learned to systematically place the stones in the water for themselves. While not all corvids demonstrate the same level of spontaneous insight in this situation as Aesop’s thirsty crow, there may nonetheless be some truth to the age-old tale after all.

Examples of both tool use and insightful behavior suggest that animals can sometimes take pieces of experience and put them together in novel ways to obtain desired ends – this is a form of reasoning. When making such claims, however, we must be careful to demonstrate that the animal subject really is using insight and not relying on trial and error learning that took place before or during the experiment. With tool use we likewise need to be sure that the animal is showing a flexible exploitation of the tool that shows an ability to reason and is not simply instinctive or habitual behavior.

On transitive inference, see Wynne & Udell (2013), pp. 130-133.

On pointing, see Wynne & Udell (2013), p. 185:

A stronger emphasis on environmental context and the socialization history of animals participating in human-guided object-choice tasks has led to a growing body of studies that demonstrate that this skill is not limited to dogs and primates (Figure 8.5). Captive and hand-reared dolphins (Pack & Herman, 2004), fur seals (Scheumann & Call, 2004), wolves (Udell et al., 2008), bats (Hall et al., 2011), and horses (Maros et al., 2008) have all demonstrated the capacity to follow human points to a target among others, and the list of successful species is growing. It is possible that not all species will prove to be proficient on human-guided object-choice tasks.

On teaching of conspecifics by meerkats, see Wynne & Udell (2013), pp. 222-224:

…in 1992 Tim Caro and Marc Hauser suggested three functional criteria for teaching:

1. The teacher must modify its behavior, specifically in the presence of naive individuals (but not knowledgeable individuals) to facilitate learning.
2. The teacher should not immediately benefit from the change in its own behavior. Instead the modification of behavior should benefit the learner alone (sometimes at the immediate cost of the teacher). However, long-term payoff in the form of young that can feed themselves or now contribute to the group in some other way may still occur.
3. The pupil must learn a behavior it would not have otherwise learned, or it must learn the skill faster or earlier in life than it otherwise would have without the aid of the teacher.

Taken together, these criteria serve as a new, more inclusive, definition of teaching behavior. This definition is now considered the standard by most scientists working with nonhuman species…

…As early as the 1960s, meerkats had been identified as another species that assists its young in the capture of prey in the wild (Ewer, 1963). Recently, however, a series of observations and experiments by Alex Thornton and Katherine McAuliffe (2006) have made it possible to go beyond the anecdotal evidence and scientifically evaluate whether these actions meet the criteria for teaching. When meerkat pups are around four weeks old, they begin to follow groups of adult foragers into the hunt, producing begging calls that prompt the adults to surrender their captured prey. However, the meerkat diet includes scorpions – which can produce a dangerous, if not deadly, sting when handled incorrectly – making it critical that young members of the group master proper handling of this creature early on. Adult meerkats have risen to this challenge. Instead of presenting whole living prey to pups, adults first present only dead prey to their offspring and then, as the pups get older, graduate them to live scorpions with stingers removed. This allows the pups to practice capturing scorpions without risking harm; it also allows for easy retrieval and re-presentation of the scorpion by adults should the pup lose the prey during the learning process. According to Thornton and McAuliffe’s (2006) data, these actions do indeed allow the pups to learn the scorpion capture technique more quickly than they would on their own; the multistep approach was also more effective than providing pups only with dead scorpions. Furthermore, adults are only willing to help young individuals in this manner. Adults can identify the age of pups by the quality of their begging calls, and a playback experiment confirmed that adults modify their behavior in response to the age-specific calls that are being produced. Lastly, adults experience short-term costs (in terms of time, energy, and sacrifice of personal food) to the benefit of the learners; although clear long-term benefits (preserving the lives and health of offspring and increasing the rate at which they achieve foraging autonomy) are shared by all. As a result, this study was one of the first to demonstrate that prey capture assistance could indeed meet the criterion for teaching in a nonhuman species.

On Chaser’s vocabulary, see Wynne & Udell (2013), pp. 282-284:

John Pilley and Alliston Reid recently reported on a border collie, Chaser, who knows the names of over 1,000 different toys as well as a small number of verbs (Pilley & Reid, 2011)…

One factor that differentiates Chaser from any other dog that has been reported to have a substantial vocabulary is that John Pilley, one of the scientists responsible for the study, trained Chaser himself. Thus the method of training is fully known (Chaser was mainly rewarded for collecting a named toy by being allowed to chase that toy), and Pilley and Reid reported on Chaser’s rate of learning… Chaser’s vocabulary grew at a very steady rate over the course of the three years that she was in training. There is nothing to suggest that Chaser was reaching the limit of the number of words she could learn when the study was terminated after 36 months.

Chaser’s knowledge of vocabulary was tested periodically throughout her three years of training. A group of objects selected from all of those she had been trained on was placed on the floor, and Chaser was instructed to collect them one by one by name only. In routine tests during training, Chaser was required to collect one particular toy from eight different mixtures of toys. To collect the toy once from a group of eight might occur by chance one time in eight. To collect the correct toy from eight different groups of eight would be expected to occur by chance only one time in more than 16 million. Each month Chaser was also tested with five groups of 20 objects. In these tests Chaser and her trainer could not see each other as she went in another room to retrieve the named item. To be considered ‘learned’, Chaser had to get every single test with that object correct.

In addition to a large vocabulary, Chaser’s training and testing also led to several further interesting results. The first was that, in addition to many nouns, Chaser was also taught three different verbs: take, paw, and nose. ‘Take’ indicated that Chaser should pick up an object; ‘paw’ and ‘nose’ meant that she should touch the object with the named part of her body. Pilley and Reid tested Chaser with three objects, each of which she was instructed to either take, paw, or nose. The order of commands was fully randomized over 14 trials. Chaser got every single one of them correct. This indicates two things. The first is that the object names really functioned as nouns for Chaser. Prior studies of animals with large vocabularies of object names had been criticized for not being able to demonstrate that the object names were truly nouns and not commands to fetch the object. Second, this experiment showed that Chaser could differentiate verbs from nouns.

Pilley and Reid also investigated whether Chaser had any comprehension for common nouns – words such as ‘ball’ and ‘chair’, which can refer to many different unique objects. Among the many items that Chaser had learned names for were many different balls and Frisbees. The remaining objects in Chaser’s vocabulary were all toys. Using the same methods as in the original training, Pilley and Reid taught Chaser that each of these objects could also be referred to as balls, Frisbees, or toys. In tests, Chaser successfully discriminated the three classes of objects from each other, but she also discriminated between toys and non-toys: items around the house that she was not allowed to play with. This categorization is particularly interesting because objects that are toys are very diverse, and of course objects that are not toys have little if anything in common with each other beyond the fact that Chaser was not allowed to play with them.
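
As a quick check on the chance-success arithmetic in the quoted passage, here is a minimal sketch in Python. The "one toy from eight groups of eight" setup is taken directly from the quote; treating the eight tests as independent guesses is the simplifying assumption.

```python
from fractions import Fraction

# Chance of fetching the one named toy from a group of 8 by guessing.
p_single = Fraction(1, 8)

# Chance of doing so on all 8 independent groups of 8 by guessing alone.
p_all_eight = p_single ** 8

print(p_all_eight)         # 1/16777216
print(float(p_all_eight))  # ~6e-08, i.e. roughly one chance in 16.8 million
```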

187.Ihle et al. (2017) is a good step forward, and may spur further discussion of these issues within ethology. For earlier, less wide-ranging discussions of related issues in the ethology literature, see the sources I cite in this footnote.

188.Jennings (1906), pp. 335-337:

All that we have said thus far in the present chapter is independent of the question whether there exist in the lower organisms such subjective accompaniments of behavior as we find in ourselves, and which we call consciousness. We have asked merely whether there exist in the lower organisms objective phenomena of a character similar to what we find in the behavior of man. To this question we have been compelled to give an affirmative answer. So far as objective evidence goes, there is no difference in kind, but a complete continuity between the behavior of lower and of higher organisms.

Has this any bearing on the question of the existence of consciousness in lower animals? It is clear that objective evidence cannot give a demonstration either of the existence or of the non-existence of consciousness, for consciousness is precisely that which cannot be perceived objectively. No statement concerning consciousness in animals is open to verification or refutation by observation and experiment. There are no processes in the behavior of organisms that are not as readily conceivable without supposing them to be accompanied by consciousness as with it.

But the question is sometimes proposed: Is the behavior of lower organisms of the character which we should “naturally” expect and appreciate if they did have conscious states, of undifferentiated character, and acted under similar conscious states in a parallel way to man? Or is their behavior of such a character that it does not suggest to the observer the existence of consciousness?

If one thinks these questions through for such an organism as Paramecium, with all its limitations of sensitiveness and movement, it appears to the writer that an affirmative answer must be given to the first of the above questions, and a negative one to the second. Suppose that this animal were conscious to such an extent as its limitations seem to permit. Suppose that it could feel a certain degree of pain when injured; that it received certain sensations from alkali, others from acids, others from solid bodies, etc., — would it not be natural for it to act as it does? That is, can we not, through our consciousness, appreciate its drawing away from things that hurt it, its trial of the environment when the conditions are bad, its attempting to move forward in various directions, till it finds one where the conditions are not bad, and the like? To the writer it seems that we can; that Paramecium in this behavior makes such an impression that one involuntarily recognizes it as a little subject acting in ways analogous to our own. Still stronger, perhaps, is this impression when observing an Amoeba obtaining food… The writer is thoroughly convinced, after long study of the behavior of this organism, that if Amoeba were a large animal, so as to come within the everyday experience of human beings, its behavior would at once call forth the attribution to it of states of pleasure and pain, of hunger, desire, and the like, on precisely the same basis as we attribute these things to the dog. This natural recognition is exactly what Munsterberg (1900) has emphasized as the test of a subject. In conducting objective investigations we train ourselves to suppress this impression, but thorough investigation tends to restore it stronger than at first.

Of a character somewhat similar to that last mentioned is another test that has been proposed as a basis for deciding as to the consciousness of animals. This is the satisfactoriness or usefulness of the concept of consciousness in the given case. We do not usually attribute consciousness to a stone, because this would not assist us in understanding or controlling the behavior of the stone. Practically indeed it would lead us much astray in dealing with such an object. On the other hand, we usually do attribute consciousness to the dog, because this is useful; it enables us practically to appreciate, foresee, and control its actions much more readily than we could otherwise do so. If Amoeba were so large as to come within our everyday ken, I believe it beyond question that we should find similar attribution to it of certain states of consciousness a practical assistance in foreseeing and controlling its behavior. Amoeba is a beast of prey, and gives the impression of being controlled by the same elemental impulses as higher beasts of prey. If it were as large as a whale, it is quite conceivable that occasions might arise when the attribution to it of the elemental states of consciousness might save the unsophisticated human being from the destruction that would result from the lack of such attribution. In such a case, then, the attribution of consciousness would be satisfactory and useful. In a small way this is still true for the investigator who wishes to appreciate and predict the behavior of Amoeba under his microscope.

But such impressions and suggestions of course do not demonstrate the existence of consciousness in lower organisms. Any belief on this matter can be held without conflict with the objective facts. All that experiment and observation can do is to show us whether the behavior of lower organisms is objectively similar to the behavior that in man is accompanied by consciousness. If this question is answered in the affirmative, as the facts seem to require, and if we further hold, as is commonly held, that man and the lower organisms are subdivisions of the same substance, then it may perhaps be said that objective investigation is as favorable to the view of the general distribution of consciousness throughout animals as it could well be. But the problem as to the actual existence of consciousness outside of the self is an indeterminate one; no increase of objective knowledge can ever solve it. Opinions on this subject must then be largely dominated by general philosophical considerations, drawn from other fields.

189.From a paper presented at a conference on Darwin and the Human Sciences, at the London School of Economics (1993). Quoted in Baron-Cohen (1995), p. 4.

190.On anthropomorphism in general, see Hutson (2012), ch. 6; Guthrie (1993); Horowitz (2010); Epley et al. (2007); Epley (2011); Waytz et al. (2012); Urquiza-Haas & Kotrschal (2015).

Shettleworth (2009), ch. 1, provides a summary of the debates over anthropomorphism in the study of animal cognition and behavior:

Crows [in Davis, California] crack walnuts by dropping them from heights of 5–10 meters or more onto sidewalks, roads, and parking lots. Occasionally they drop walnuts in front of approaching cars, as if using the cars to crush the nuts for them. Do crows intentionally use cars as nutcrackers? Some of the citizens of Davis, as well as some professional biologists (Maple 1974, in Cristol et al. 1997) were convinced that they do, at least until a team of young biologists at UC Davis put this anecdote to the test (Cristol et al. 1997). They reasoned that if crows were using cars as tools, the birds would be more likely to drop nuts onto the road when cars were coming than when the road was empty. Furthermore, if a crow was standing in the road with an uncracked walnut as a car approached, it should leave the nut in the road to be crushed rather than carry it away.

Cristol and his collaborators watched crows feeding on walnuts and recorded how likely the birds were to leave an uncracked walnut in the road when cars were approaching and when the road was empty. They found no support for the notion that crows were using automobiles as nutcrackers…

…The people in Davis and elsewhere (Nihei 1995; Caffrey 2001) who saw nutcracking as an expression of clever crows’ ability to reason and plan were engaging in an anthropomorphism that is common even among professional students of animal behavior (…Kennedy 1992; Wynne 2007a, 2007b). As we will see, such thinking can be a fertile source of ideas, but research often reveals that simple processes apparently quite unlike explicit reasoning are doing surprisingly complex jobs…

…

…some of Darwin’s [early] supporters… set out to collect anecdotes appearing to prove animals could think and solve problems the way people do. Their approach was not just anthropocentric but frankly anthropomorphic, explaining animals’ apparently clever problem solving in terms of human-like thinking and reasoning. But as we have seen in the case of the nutcracking crows, just because an animal’s behavior looks to the casual observer like what a person would do in a similar-appearing situation does not mean it can be explained in the same way. Such reasoning based on analogy between humans and other animals must be tested with experiments that take into account alternative hypotheses (Heyes 2008).

Fortunately for progress in understanding animal cognition, critics of extreme anthropomorphism were not slow to appear. E.L. Thorndike’s (1911/1970) pioneering experiments on how animals solve simple physical problems showed that gradual learning by trial and error was more common than human-like insight and planning (Galef 1998). C. Lloyd Morgan also observed animals in a systematic way but is now best known for stating a principle commonly taken as forbidding unsupported anthropomorphism. What Morgan (1894) called his Canon states, “In no case may we interpret an action as the outcome of the exercise of a higher psychical faculty, if it can be interpreted as the outcome of the exercise of one which stands lower in the psychological scale.” Morgan’s Canon is clearly not without problems (Sober 2005). What is the “psychological scale”? Don’t “higher” and “lower” assume the phylogenetic scale? In contemporary practice “lower” usually means associative learning, that is, classical and instrumental conditioning or untrained species-specific responses. “Higher” is reasoning, planning, insight, in short any cognitive process other than associative learning.

For an example of how Morgan’s Canon might be applied today, suppose… that crows had been found to drop nuts in front of cars more than on the empty road. An obvious “simple” explanation is that they had been reinforced more often when dropping a nut when a car was coming than when the road was empty and thereby had learned to discriminate these two situations. A “higher,” anthropomorphic, explanation might be that having seen fallen nuts crushed by cars the insightful crows reasoned that they could drop the nuts themselves. The contrast between these explanations suggests a straightforward test: observe naive crows to see if the discrimination between approaching cars and empty roads develops gradually (supporting the “simple” explanation) or appears suddenly, without any previous trial and error (supporting the “higher” explanation). Unfortunately, competing explanations do not always make such readily discriminable predictions about observable behavior. Even when they do, experiments designed to pit them against each other may not yield clear results. Then agnosticism may be the most defensible policy (Sober 2005).

In practice, the field of comparative cognition as it has developed in the past 30–40 years has a very strong bias in favor of “simple” mechanisms (Sober 2001; Wasserman and Zentall 2006a). The burden of proof is generally on anyone wishing to explain behavior in terms of processes other than associative learning and/or species-typical perceptual and response biases. To many, anthropomorphism is a dirty word in scientific study of animal cognition (Mitchell 2005; Wynne 2007a, 2007b). But dismissing anthropomorphism altogether is not necessarily the best way forward. “Anthropodenial” (de Waal 1999) may also be a sin. After all, if other species share common ancestors with us, then we share an a priori unspecifiable number of biological processes with any species one cares to name. Thus in some ways, as Morgan apparently thought (Sober 2005), the simplest account of any behavior is arguably the anthropomorphic one, that behavior analogous to ours is the product of a similar cognitive process. Note, however, that “simple” has shifted here from the cognitive process to the explanation (Karin-D’Arcy 2005), from “simpler for them” to “simpler for us” (Heyes 1998).

Where do these considerations leave Morgan’s Canon? A reasonable modern interpretation of the Canon (Sober 2005) is that a bias in favor of simple associative explanations is justified because basic conditioning mechanisms are widespread in the animal kingdom, having been found in every animal, from worms and fruitflies to primates, in which they have been sought (Papini 2008). Thus they may be evolutionarily very old, present in species ancestral to all present-day animals and reflecting adaptations to universal causal regularities in the world and/or fundamental properties of neural circuits. As species diverged, other mechanisms may have become available on some branches of the evolutionary tree, and it might be said to be the job of comparative psychologists to understand their distribution (Papini 2002).

But for such a project to make sense, it must be clear what is meant by associative explanations and what their limits are. Associative learning… is basically the learning that results from experiencing contingencies, or predictive relationships, between events. At the theoretical level, such experience in Pavlovian (stimulus-stimulus) or instrumental (response-stimulus) conditioning has traditionally been thought of as strengthening excitatory or inhibitory connections between event representations. Thus one might say that any cognitive performance that does not result from experience of contingencies between events and/or cannot be explained in terms of excitatory and/or inhibitory connections is nonassociative.

Path integration… is one example: an animal moving in a winding path from home implicitly integrates distance and direction information into a vector leading straight home. As another, on one view of conditioning… the flow of events in time is encoded as such and computed on to compare rates of food presentation during a signal and in its absence. Other nonassociative cognitive processes which might be (but rarely if ever have been) demonstrated in nonhumans include imitation, that is, storing a representation of an actor’s behavior and later reproducing the behavior; insight; and any kind of reasoning or higher-order representations or computations on event representations. As we will see throughout the book, discriminating nonassociative “higher” processes from associative ones is seldom straightforward, in part because the learning resulting from associative procedures may have subtle and interesting cognitive content. In any case, the goal of comparative research should be understanding the cognitive mechanisms underlying animal behavior in their full variety and complexity rather than partitioning them into rational or nonassociative vs. associative (Papineau and Heyes 2006).

In conclusion, neither blanket anthropomorphism nor complete anthropodenial is the answer (Mitchell 2005). Evolutionary continuity justifies anthropomorphism as a source of hypotheses. When it comes to comparing human cognition with that of other species, it is most likely that — just as with our genes and other physical characters — we will find some processes shared with many other species, some with only a few, and some that are uniquely human. One of the most exciting aspects of contemporary research on comparative cognition is the increasing detail and subtlety in our picture of how other species’ minds are both like and not like ours.
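
To make the quoted description of path integration concrete, here is a minimal sketch (my own illustration, not a model from Shettleworth; the outbound legs are invented numbers): summing the distance-and-direction legs of a winding outbound trip yields a single vector pointing straight back home.

```python
import math

# Invented outbound path: each leg is (distance, heading in degrees).
outbound_legs = [(3.0, 0.0), (2.0, 90.0), (4.0, 45.0), (1.5, 180.0)]

# "Integrate" the legs into a single accumulated displacement from home.
dx = sum(d * math.cos(math.radians(h)) for d, h in outbound_legs)
dy = sum(d * math.sin(math.radians(h)) for d, h in outbound_legs)

# The homing vector is just the negation of that accumulated displacement.
home_distance = math.hypot(dx, dy)
home_heading = math.degrees(math.atan2(-dy, -dx)) % 360.0

print(f"to return home: head {home_heading:.1f} degrees for {home_distance:.1f} units")
```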

191.In a 2004 interview conducted by Keith Frankish (audio; transcript), Dennett said:

Animals are of course awake, they can feel pain, and they can experience pleasure, but they can’t, I think, …dwell on things the way we can. They can’t shift their attention the way we can. They can’t reflect on things the way we can. That sort of recursive, reflective mulling over and letting one thing remind you of something else and so forth, and being able to control that to some degree — that’s what animals (I think) can’t do: not chimpanzees, not dolphins, not dogs… their consciousness is so disunified, so fragmented, so impoverished compared to ours, that to call them conscious is almost certainly to misimagine their circumstances.

…I like to ask people, “What is it like to be a brace of oxen, a pair of oxen yoked together?” And they say “Well, it’s not like anything, of course. I mean it’s like something to be one ox, and it’s like something to be the other ox — the left and the one on the right — but it’s not like anything to be a brace of oxen, because they aren’t unified in the right way.” Well, but you’d be amazed [at] the extent of which many animals are like a brace of oxen, [at] how much disunity is possible in a mammalian nervous system. It’s this further unification which is the fruits of the Joycean machine [Dennett’s term for the “virtual machine” that he thinks enables human consciousness; see Dennett (1991)] that gets installed on us.

What is it like to be an ant colony? Well, it’s not like anything to be an ant colony, even if it’s like something to be an individual ant, so people think. Well, stop and think. A brain is composed of billions of neurons, each one of those is a lot stupider than an ant. They happen to be enclosed in a skull and their inter-communications are rich but of the same sort that is possible between one ant and another. Now if we opened up somebody’s head and we found inside, not neurons but millions of little ants, maybe we would say, “Oh gosh, maybe it’s not like anything to be this person.” Well, an ant colony can exhibit a lot of the same unified behaviour, a lot of the same protracted projects… that an organism inside a skin can exhibit.

Now if you think it’s pretty obvious that an ant colony is not something that is itself conscious… then you should be at least willing to entertain the hypothesis that a bird is just as unconscious as an ant colony is. Now I’m deliberately setting the bar high, forcing the burden of proof onto those who say, it’s just obvious that, say, other mammals (at least) are conscious the way we are. I say, “No, it’s not obvious, prove it.” And the more we learn about specific organisms — that’s why you have to do the science — the more we find out that a lot of things that are obvious to philosophers in the armchair are just false.

What is it like to be a rabbit? Well you may think that it’s obvious that rabbits have an inner life that’s something like ours. Well it turns out that if you put a patch over a rabbit’s left eye and train it in a particular circumstance to be (say) afraid of something, and then you move the patch to the right eye, so that the very same signal, the very same circumstance that it has been trained to be afraid of, now is coming in the other eye, you have a naive rabbit, because in the rabbit brain the connections that are standard in our brains just aren’t there, there isn’t that unification. What is it like to be which rabbit? The rabbit on the left, or the rabbit on the right? The disunity in a rabbit’s brain is stunning when you think about it, and you just haven’t tested many species to see just how disunified they can be. The answer is they can be quite disunified.

See also his earlier Dennett (1995).

I asked Dennett (via email) which rabbit study he was referring to in the quote above, and he pointed me to a research abstract that I have not read: Ian Steele-Russell’s “The absence of interhemispheric communication in the rabbit” in the Society for Neuroscience’s Abstracts, Volume 20 (1994): 414.11. (This abstract is in Part 2 of Volume 20; Part 1 runs through 383.20).

For reviews of similar and other related studies of interhemispheric transfer, in a variety of species, see e.g. Steele-Russell et al. (1979).

Studies of interocular transfer in birds are especially interesting, as they seem to reveal substantial differences between bird cognition and mammal cognition (Qadri & Cook 2015). Here is one example reported in Remy & Watanabe (1993):

…an ubiquitous and particularly striking result in many [interocular transfer] studies [in birds] is the “mirror image reversal” effect… Thus, when pigeons were trained [in one eye] on a mirror-image discrimination (e.g., 45° vs. 135° oblique lines) and then exposed to the stimuli with the untrained eye, they preferred the previously [unrewarded] stimulus…

See also e.g. Ortega (2005) and Xiao & Güntürkün (2009).

I have not examined these studies closely. No doubt there are a variety of ways to interpret the data they report.

192.In other words, I disagree with the perspective of e.g. Safina (2015), who writes:

So, do other animals have human emotions? Yes, they do. Do humans have animal emotions? Yes; they’re largely the same.

For another argument against Safina’s perspective, though not one I entirely agree with, see Barrett (2017), ch. 12.

193.Carruthers (1999) offers the following account, which I would guess is true of many incidents of attributing consciousness to animals:

[My view that most animals lack phenomenal consciousness] is highly controversial, of course… It also conflicts with a powerful common-sense intuition to the contrary. But I suggest that this intuition may well be illusory, and can easily be explained away. For notice that one important strategy we often adopt when attributing mental states to a subject, is to try imagining the world from the subject’s point of view, to see how things then seem. But when we do that, what we inevitably get are imaginings of conscious perceptions and thoughts, and of experiences with phenomenal feels to them. So, of course we naturally assume that the experiences of a cat will be like something, once we have got to the point of accepting (correctly, in my view) that the cat does have experiences. But this may merely reflect the fact that imaginings of perceptual states are always imaginings of conscious perceptual states, that is all. It may go no deeper than the fact that we have no idea how to imagine a non-conscious perception.

194.The major possible exception, at least for reasonable-scale systems, is human-style verbal self-report of conscious experience.

195.By “actually consciousness indicating” I mean “knowably providing substantial evidence that the creature possessing that feature is phenomenally conscious.”

196.But, see Appendix H.

197.Other “sophisticated” PCIFs from my table of PCIFs might include, for example, intentional deception, teaching others, some forms of tool behavior, spontaneously planning for future days without reference to current motivational state, taking into account another’s spatial perspective, play behaviors, and grief behaviors. Unfortunately, my ratings for “apparent cognitive-behavioral sophistication” below draw from much more information than I took the time to record in my table of PCIFs and taxa.

198.For arguments about why the absolute number of pallial neurons might be especially indicative of “higher” cognitive functions (and thus perhaps consciousness), see Herculano-Houzel (2016, 2017).

199.Technically, this four-factor approach isn’t entirely theory-agnostic, but it is relatively theory-agnostic.

200.Here and in many other locations in this report, I should really say “brains or ganglia or perhaps entire nervous systems,” but instead I just say “brains” for brevity. For more on this, see Aranyosi (2013).

201.See Olkowicz et al. (2016).

202.As far as I know, the number of neurons in a chimpanzee brain has not been counted, but Herculano-Houzel & Kaas (2011), table 4, estimates the number at 27.9 billion neurons.

An earlier version of this report gave an estimate of 6 billion neurons in the chimpanzee brain, based on a misinterpretation of Herculano-Houzel (2016). My thanks to reader Avi Norowitz for drawing my attention to this error and pointing me to the better estimate in Herculano-Houzel & Kaas (2011). This discovery also increased my estimate for the number of neurons in a cow’s brain, since that estimate was based on the chimpanzee estimate. I have not updated my overall probabilities of chimpanzee consciousness or cow consciousness in response to this error correction, simply because there are so many sources of uncertainty in my estimates that a correction of this magnitude for a single variable (neuron count) doesn’t change my overall estimates much, and adjusting them by 1-3 percentage points would suggest a misleading degree of precision about something as uncertain as consciousness.

203.As far as I know, the number of neurons in a cow brain has not been counted, but Herculano-Houzel (2016), p. 75, says that, given the neuronal scaling rules discovered to hold for primates vs. other mammals, “the chimpanzee can be expected to have at least twice as many neurons as a cow.” Given the estimate I use in this table for the number of neurons in a chimpanzee brain, I here make a very rough guess of 10 billion neurons in the cow brain.

204.This very rough guess was derived by comparing the average brain mass of chickens and rainbow trout, and by (unrealistically) assuming that chicken brains and rainbow trout brains follow the same neuronal scaling rule.

205.I couldn’t locate the number of neurons for a Gazami crab, so I simply guessed it might be similar to the number of neurons in a lobster. Wikipedia’s list of animals by number of neurons says a lobster has 100,000 neurons. After I found this number, I saw that Tye (2016), p. 156, also claims that “Crabs have… around a hundred thousand [neurons]”, though Tye doesn’t provide a source for this claim.

206.At the time of the 2016 match with Lee Sedol.

207.After substantial calibration training, I appear to be well-calibrated about trivia questions, but I have no systematic quantitative evidence that I am well-calibrated about anything else. For more on the concept of probability calibration, see our blog post Efforts to Improve the Accuracy of Our Judgments and Forecasts and the sources cited therein.

208.There is a vast literature on the limits of introspection. See e.g. Wilson (2004); Carruthers (2011).

209.Page 44.

210.In Bayesian statistics, an “ignorance prior” is a probability distribution for which equal probability is assigned to all possibilities — i.e., when one is still “ignorant” of all (or nearly all) the relevant evidence, before the probability distribution is updated by the observation of some Bayesian evidence. See e.g. Jaynes (2003), ch. 12.
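
As a minimal illustration (my own toy example, not from Jaynes): assign equal probability to every hypothesis, then let a single piece of evidence update that distribution via Bayes’ rule. The hypotheses and likelihood values below are invented placeholders.

```python
# Four mutually exclusive hypotheses, with an "ignorance prior" over them.
hypotheses = ["H1", "H2", "H3", "H4"]
prior = {h: 1 / len(hypotheses) for h in hypotheses}

# Invented likelihoods P(evidence | hypothesis) for a single observation.
likelihood = {"H1": 0.8, "H2": 0.4, "H3": 0.1, "H4": 0.1}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {h: prior[h] * likelihood[h] for h in hypotheses}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # mass shifts toward hypotheses that fit the evidence better
```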

211.Panpsychists, of course, will think that all my probabilities are too low. This difference of opinion likely traces back to the “metaphysical” debates about consciousness that I mostly skipped over in this report (but, see here).

212.I also have reservations about the lack of precision and comprehensiveness with which GNWT, as currently formulated, explains the explananda of consciousness (see Appendix B).

213.We’ve wrestled with related issues before, but do not feel satisfied. See Why we can’t take expected value estimates literally (even when they’re unbiased), Modeling Extreme Model Uncertainty, Sequence thinking vs. cluster thinking, and section 2 of Technical and Philosophical Questions That Might Affect Our Grantmaking. Various formal models have been proposed for acting under various kinds of “radical” uncertainty — e.g. Jaynes (2003, ch. 18) and Jøsang (2016) — but I haven’t studied them closely enough to endorse any of them in particular. See also the papers cited in Romeijn & Roy (2014).

214.See e.g. the literature on the “sophistication effect,” as described in Yudkowsky’s “Knowing About Biases Can Hurt People” and papers such as Achen & Bartels (2006), and also the literature on differences between intelligence and rationality (e.g. Stanovich et al. 2016, though also see Ritchie 2017).

215.Or increase, in the case of “years since last common ancestor with humans.”

216.This is sometimes called “the crowd within” effect. See Vul & Pashler (2008); Steegen et al. (2014).

217.I elicited my probabilities 6 times, via a Google Form (screenshot; spreadsheet of results; chart of results). The form walked me through these steps:

  1. First, it led me through two anti-anchoring exercises meant to minimize the effect of my earlier estimates on my current estimate (see below).
  2. Second, the form asked me to give my probability of consciousness (of a sort that I would morally care about, given my current moral judgments) for each of the following animal taxa: chimpanzees, cows, chickens, rainbow trout, gazami crabs, and common fruit flies.

The anti-anchoring exercises the form prompted me to engage in worked as follows:

First, it prompted me to invent a written justification for a randomly-selected probability of consciousness in a randomly-chosen animal taxon. (Randomization was done using random.org.)

Second, the form prompted me to “take a deep breath,” and then name a musical artist I hadn’t heard for a while but would like to hear again soon. (The purpose of this was merely to force me to think for a while about something else, in the hopes that this would accomplish some amount of anti-anchoring.)
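
For what it’s worth, here is a minimal sketch of how repeated elicitations like these could be aggregated, in the spirit of the “crowd within” effect mentioned above. The numbers are placeholders rather than my actual responses, and simple averaging is just one of many possible aggregation rules.

```python
import statistics

# Placeholder probabilities from six separate elicitations for two example taxa.
# (Invented numbers; the real responses are in the linked spreadsheet.)
elicited = {
    "chickens": [0.70, 0.65, 0.75, 0.80, 0.70, 0.72],
    "common fruit flies": [0.10, 0.15, 0.05, 0.12, 0.08, 0.10],
}

# One simple aggregation rule: the mean across elicitations for each taxon.
aggregated = {taxon: statistics.mean(probs) for taxon, probs in elicited.items()}

print(aggregated)
```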

218.Here’s an intuition pump: Did philosophical argumentation in the 17th and 18th centuries contribute much to the improvement of our understanding of “life,” or was such progress made almost exclusively by scientific means? See also Baars & McGovern (1993).

This said, it’s not the case that I see no role for philosophical argument. Rather, I think that on the present margin, “scientific” work is needed more urgently than “philosophical” work (though, the line between the two is fuzzy). I could see my intuition about this changing on a different margin. For example, it may be that substantial philosophical work will be invaluable once we have collected more scientific data than we have now.

219.For example, it could be informative for philosophers of consciousness to collaborate with neurologists in interviewing patients suffering from auto-activation deficit (see here) about the details of their conscious experiences at different stages in the progression of their symptoms, in a way that might test different hypotheses about consciousness. It could also be informative for philosophers to collaborate with scientists in the design of experiments aimed at empirically distinguishing the hypotheses put forward in Block (2007b) (and commentaries) to explain the results of the experiments described therein.

220.After writing this paragraph, I discovered that Jesse Prinz expressed a similar view in a 2014 interview with the Moscow Center for Consciousness Studies, starting around 54:27 in the published recording:

[In The Conscious Brain] I dedicate very little time to the debate between different metaphysical positions [on consciousness]. So rather than arguing for materialism, I presuppose it. Where most of what’s been written about consciousness by philosophers in recent decades has focused on the metaphysical debate, I spend just one chapter on it, and that’s not a very thorough discussion of the literature, and in that sense I ignored it, and intentionally. I felt that while there are very profound difficult problems… there are all these other problems that are extremely exciting and important, and we’ve dedicated so much time to these metaphysical problems that we’ve neglected the others…

221.This list does not include projects outside the scope of “What is the likely distribution of morally-relevant phenomenal consciousness?”, for example projects related to other criteria for moral patienthood, or projects related specifically to moral weight.

222.Contra Dennett, I take my “first-person data” to be part of what needs to be explained. My reasons for favoring this view over Dennett’s “heterophenomenology” are basically the same as Chalmers’ in Chalmers (2010), pp. 52-58.

223.Variations on Searle’s Chinese Room, Block’s Chinese Nation, etc. as well as more recent hypothetical minds such as the computational “Mary” from Yetter-Chappell’s unpublished draft paper “Dissolving Type-B Physicalism” (listed here).

224.I’m not satisfied with any of the cases for insect consciousness or chimpanzee non-consciousness that I’ve seen so far, so I can’t just link to sources that I think make a good case for either, but I think I could piece together a good case for either from a variety of arguments and evidence I’ve come across via disparate sources.

225.In some fields, self-report is considered so unreliable that it is avoided — but in consciousness studies, it’s the best we’ve got!

I listed some sources relevant to the reliability of self-report measures in my earlier report on behavioral treatments for insomnia:

Sources that provide theoretical considerations and non-systematic evidence in favor of substantial a priori concern about the accuracy of self-report measures include Stone et al. (1999); Groves et al. (2009), especially section 7.3; Stalans (2012); Schwarz et al. (2008). Broad (and in some cases, systematic) empirical reviews (or unusually large-scale primary studies) comparing self-report measures to “gold standard” objective measures include Bryant et al. (2014); Gorber et al. (2007); Prince et al. (2008); Bhandari & Wagner (2006); Gorber et al. (2009); Adamo et al. (2009); Kowalski et al. (2012); Kuncel et al. (2005); Bound et al. (2001); Meyer et al. (2009); Barnow & Greenberg (2014). Finally, one cherry-picked primary study I found disheartening with regard to the accuracy of self-report was Suziedelyte & Johar (2013). Please keep in mind that this is only a preliminary list of sources: I have not evaluated any of them closely, they may be unrepresentative of the literature on self-report as a whole, and I can imagine having a different impression of the typical accuracy of self-report measures if and when I complete [a separate investigation] on the accuracy of self-report measures… For those interested in the topic, I list some additional general sources I found useful, again without comment or argument at this time: Stone et al. (2007); Smith (2011); Fernandez-Ballesteros & Botella (2007); Donaldson & Grant-Vallone (2002); Thomas & Frankenberg (2002); Chan (2009); Streiner & Norman (2008); Fayers & Machin (2016), ch. 19.

226.See e.g. Klein (2010).

227.The quoted phrase is taken from Güzeldere et al. (2000):

Many [researchers] have labored to develop various constructive explanatory accounts of consciousness. This group can be characterized by a common ontological denominator, say, a commitment to a materialist/naturalist framework; but here, too, we find differences of opinion. Consciousness is explained in terms of causal/functional roles (Lewis, 1966; Lycan, 1987), representational properties (Dretske, 1995; Tye, 1995), emergent biological properties (Flanagan, 1992; Searle, 1992), higher-order mental states (Armstrong, 1980; Rosenthal, 1997), or computer-related metaphors (Dennett, 1991), or a combination of these.

Within this latter group, we find a certain group of philosophers characterized by an emerging commitment to a particular methodological strategy, a commitment shared by some psychologists and neuroscientists interested in explaining the nature and function of our subjective experiences. Research into the nature and function of consciousness has made some recent advances, especially in the field of cognitive neuroscience, on the basis of a triangulation of data coming from the phenomenological reports of patients, psychological testing at the cognitive/behavioral level, and neurophysiological and neuroanatomical findings. Churchland (1986) calls this strategy for studying the mind the “co-evolutionary strategy”; and Shallice (1988), Dennett (1978, 1991), and Flanagan (1985, 1991, 1992) each promote co-evolutionary methodologies that attempt to bring into equilibrium the phenomenological, the psychological, and the neurobiological in understanding the mind.

228.See e.g. Bayne (2010); Bennett & Hill (2014); Brook & Raymont (2017).

229.See Olkowicz et al. (2016).

230.For more detail, see this footnote.

231.I haven’t seen many surveys of experts on consciousness, but I’ll list some related sources. McDermott (2007) reports the results of an informal survey of Fellows of the American Association for Artificial Intelligence from 2003. Miller (2000) didn’t conduct a survey, but claimed that “Almost every member of the American Philosophical Association would agree that all mammals are conscious, and that all conscious experience is of some moral significance.”

Some authors point to the Cambridge Declaration on Consciousness (2012) as evidence that there is now a scientific consensus that:

The neural substrates of emotions do not appear to be confined to cortical structures… Systems associated with affect are concentrated in subcortical regions where neural homologies [between humans and animals] abound… The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.

However:

  • The document reads more like a political document than a scientific document. (See e.g. this commentary.)
  • As far as I can tell, the declaration was signed by a small number of people, perhaps about 15 people, and thus hardly demonstrates a “scientific consensus.”
  • Several of the signers of the declaration have since written scientific papers that seem to treat cortex-required views as a live possibility, e.g. Koch et al. (2016) and Laureys et al. (2015), p. 427.

232.When put into computational terms, the “triviality objection” is explained by Searle (1992), pp. 208-209, in this way:

…the wall behind my back is right now implementing the Wordstar program, because there is some pattern of molecule movements that is isomorphic with the formal structure of Wordstar. But if the wall is implementing Wordstar, if it is a big enough wall it is implementing any program, including any program implemented in the brain.

Here is an alternate description of the problem, from Drescher (2006), ch. 2:

…any message can be construed as a substitution-cipher encoding of any other message (of the same length), simply by contriving the appropriate key. The contrivance is easy: just align the corresponding letters of the two messages and for each pair of aligned letters, choose as the corresponding key-number however many alphabet steps are needed to get from the “unencrypted” letter to the “encrypted” letter…

Returning to the subject at hand, the point… is that construing physical events (such as brain activity) as representations (of external things such as flowers, and of other brain events themselves) leaves room for… mischievously creative interpretations…

…For example, we could pick up a random rock and construe it as playing a game of chess. We’d already need to have a detailed description of the series of internal states that a real chess-playing computer goes through in the course of a particular game. Then, we’d just point to as many atoms in the rock as we need and we’d stipulate, for each atom at each moment, that its state at that moment represents a particular constituent state of the chess-playing computer. That is, we’d build a translation table with entries like the following:

If the rock’s atoms numbered 458,620,198,259,728 through 458,620,198,570,954 at time t are in such-and-such state (the state they were in fact in at t), that represents the chess-playing computer’s transistor number 11,252,664,293 being in thus-and-such state at t (the state it was in fact in at t).

Of course, there’d be no uniformity to our interpretation scheme — the same state, exhibited by different atoms, would have an entirely different “meaning” in each case. Even the same state of the same atom would “mean” entirely different things at different times… And of course, we have no hope of writing down every entry of the mapping table — it’s just too huge. Nonetheless, we can speak of the interpretation scheme that the hypothetical table implements…

Here, though, is a problem regarding [consciousness]. The problem is that we could likewise contrive a joke interpretation scheme according to which a rock is conscious. As with the joke chess-machine interpretation, we could (in principle) devise this scheme by taking a conscious entity — say, me — and recording all the states in its brain over a period of several minutes. We then map some portion of the rock onto some part of the brain. And we contrive a mapping function that, at each next moment, just asserts by fiat that the state of a given portion of the rock (the specific placement of individual atoms there, say) represents the next recorded state of the corresponding brain portion’s state. Under this joke interpretation scheme, the rock undergoes the same sequence of conscious (and unconscious) thoughts over the next few minutes as I did during the few recorded minutes. Or, we could in principle create a different mapping table that attributes to the rock a series of thoughts and feelings all its own.

Of course, [Dennett’s] intentional-stance test easily disqualifies such joke interpretations from being taken seriously (just as with the joke chess-machine interpretation). Still, there is a reason this joke interpretation poses a problem. The intentional stance only tells us what representation (if any) an external observer has reason to ascribe to an object of interest. And as noted above, a certain practicality follows from an intentional-stance-supported interpretation: it lets us predict that the correspondence in question would or will continue to exist under a reasonable range of circumstances. In contrast, it is of no more practical value to contrivedly ascribe a consciousness-implementing set of representations to a rock than it is to contrivedly ascribe a chess-implementing set of representations. In that sense, a rock is clearly no more engaged in conscious experience than it is engaged in chess playing.

But if we are asking whether an object — be it a person or a computer or a rock — is conscious, we are asking (at least in large measure) about the object’s own point of view, about how (or whether at all) the object feels to itself, regardless of any external observer’s perspective or the practical merits thereof. If… consciousness is just a particular kind of representational process, then why isn’t it the case that at least as far as the rock itself is concerned, the rock possesses the same stream of consciousness as I do, by virtue of the (albeit impractical) joke interpretation?

On this topic, see also Putnam (1988), especially the appendix; volume 4, issue 4 of Minds and Machines (1994); Bishop (2004); Copeland (1996); Almond (2008); Godfrey-Smith (2009); Shagrir (2012); Chalmers (2011) and the replies cited in Chalmers (2012); Egan (2012); section 6 of Aaronson (2013); Blackmon (2013); Tomasik (2015a); Matthews & Dresner (2016).

233.Aaronson (2013):

…consider a waterfall (though any other physical system with a large enough state space would do as well)… say, Niagara Falls. Being governed by laws of physics, the waterfall implements some mapping f from a set of possible initial states to a set of possible final states. If we accept that the laws of physics are reversible, then f must also be injective. Now suppose we restrict attention to some finite subset S of possible initial states, with |S| = n. Then f is just a one-to-one mapping from S to some output set T = f(S) with |T| = n. The “crucial observation” is now this: given any permutation σ from the set of integers {1, … , n} to itself, there is some way to label the elements of S and T by integers in {1, … , n}, such that we can interpret f as implementing σ. For example, if we let S = {s1, … , sn} and f(si) = ti, then it suffices to label the initial state si by i and the final state ti by σ(i). But the permutation σ could have any “semantics” we like: it might represent a program for playing chess, or factoring integers, or simulating a different waterfall. Therefore “mere computation” cannot give rise to semantic meaning.

…To my mind… perhaps the easiest way to demolish the waterfall argument is through computational complexity considerations.

Indeed, suppose we actually wanted to use a waterfall to help us calculate chess moves. How would we do that? In complexity terms, what we want is a reduction from the chess problem to the waterfall-simulation problem. That is, we want an efficient algorithm that somehow encodes a chess position P into an initial state sp ∈ S of the waterfall, in such a way that a good move from P can be read out efficiently from the waterfall’s corresponding final state, f(sp) ∈ T. But what would such an algorithm look like? We cannot say for sure — certainly not without detailed knowledge about f (i.e., the physics of waterfalls), as well as the means by which the S and T elements are encoded as binary strings. But for any reasonable choice, it seems overwhelmingly likely that any reduction algorithm would just solve the chess problem itself, without using the waterfall in an essential way at all! A bit more precisely, I conjecture that, given any chess-playing algorithm A that accesses a “waterfall oracle” W, there is an equally good chess-playing algorithm A′, with similar time and space requirements, that does not access W. If this conjecture holds, then it gives us a perfectly observer-independent way to formalize our intuition that the “semantics” of waterfalls have nothing to do with chess.

…

In my view, there is an important lesson here for debates about computationalism. Suppose we want to claim, for example, that a computation that plays chess is “equivalent” to some other computation that simulates a waterfall. Then our claim is only non-vacuous if it’s possible to exhibit the equivalence (i.e., give the reductions) within a model of computation that isn’t itself powerful enough to solve the chess or waterfall problems.
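
To make the labeling trick in the quoted passage concrete, here is a minimal sketch that follows Aaronson’s construction, with a toy injective map standing in for the waterfall’s dynamics: for any permutation σ, we can choose labels for the inputs and outputs so that f “implements” σ.

```python
import random

n = 5
S = [f"s{i}" for i in range(1, n + 1)]   # possible initial states
T = [f"t{i}" for i in range(1, n + 1)]   # final states; f(si) = ti is a toy injective map
f = dict(zip(S, T))

# Any permutation sigma of {1, ..., n} we would like f to "implement".
sigma = list(range(1, n + 1))
random.shuffle(sigma)

# As in the quote: label initial state si by i, and final state ti by sigma(i).
input_label = {S[i]: i + 1 for i in range(n)}
output_label = {T[i]: sigma[i] for i in range(n)}

# Under these labels, applying f to the state labeled i yields the state labeled sigma(i).
for s in S:
    i = input_label[s]
    assert output_label[f[s]] == sigma[i - 1]

print("relabeled, f implements sigma =", sigma)
```

As the rest of the quote argues, the computational work here is done by whoever constructs the labels (the reduction), not by f itself.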

234.See also the final section of the notes from my conversation with David Chalmers.

235.For example Tye (2016), Godfrey-Smith (2016a), and Dennett (2017). (For a condensed account of some of the key theoretical ideas in Godfrey-Smith 2016a, see Godfrey-Smith 2016b.)

236.Dennett (1995), p. 700. The full quote is:

Lockwood says “probably” all birds are conscious, but maybe some of them — or even all of them — are rather like sleepwalkers! Or what about the idea that there could be unconscious pains (and that animal pain, though real, and — yes — morally important, was unconscious pain)? Maybe there is a certain amount of generous-minded delusion (which I once called the Beatrix Potter syndrome) in our bland mutual assurance that as Lockwood puts it, “Pace Descartes, consciousness, thus construed, isn’t remotely, on this planet, the monopoly of human beings.”

How, though, could we ever explore these “maybes”? We could do so in a constructive, anchored way by first devising a theory that concentrated exclusively on human consciousness — the one variety about which we will brook no “maybes” or “probablys” — and then look and see which features of that account apply to which animals, and why. There is plenty of work to do…

237.This is a common view, but the point is made especially by Uttal (2011, 2015, 2016). See also this episode of The Brain Science Podcast.

238.As Greenwald (2012) put it, “There is nothing so theoretical as a good method.”

239.This general approach sometimes goes by names such as “ideal advisor theory” or, arguably, “reflective equilibrium.” Diverse sources explicating various extrapolation procedures (or fragments of extrapolation procedures) include: Rosati (1995); Daniels (2016); Campbell (2013); chapter 9 of Miller (2013); Muehlhauser & Williamson (2013); Trout (2014); Yudkowsky’s “Extrapolated volition (normative moral theory)” (2016); Baker (2016); Stanovich (2004), pp. 224-275; Stanovich (2013).

On the prospects for values convergence, see e.g. Sobel (1999); Döring and Andersen’s 2009 unpublished manuscript “Rationality, Convergence and Objectivity”; Swanton (1996); the sources listed in footnote 19 of Egan (2012); section 5.1 of Sobel (2001); Dahlsgaard et al. (2005); Pinker (2011); Bicchieri & Mercier (2014); Shermer (2015); Huemer (2016); Norberg (2016); Sobel (2017).

240.I am even more skeptical, of course, that visiting aliens or future artificial intelligence systems capable of comprehending this report would, upon completing such an extrapolation procedure, converge on the same values.

241.For more on forecasting accuracy, see this blog post. My use of research on the psychological predictors of forecasting accuracy for the purposes of doing moral philosophy is one example of my support for the use of “ameliorative psychology” in philosophical practice — see e.g. Bishop & Trout (2004, 2008).

242.Specifically, the scenario I try to imagine (and make conditional forecasts about) looks something like this:

  1. In the distant future, I am non-destructively “uploaded.” In other words, my brain and some supporting cells are scanned (non-destructively) at a fine enough spatial and chemical resolution that, when this scan is combined with accurate models of how different cell types carry out their information-processing functions, one can create an executable computer model of my brain that matches my biological brain’s input-output behavior almost exactly. This whole brain emulation (“em”) is then connected to a virtual world: computed inputs are fed to the em’s (now virtual) signal transduction neurons for sight, sound, etc., and computed outputs from the em’s virtual arm movements, speech, etc. are received by the virtual world, which computes appropriate changes to the virtual world in response. (I don’t think anything remotely like this will ever happen, but as far as I know it is a physically possible world that can be described in some detail; for one attempt, see Hanson 2016.) Given functionalism, this “em” has the same memories, personality, and conscious experience that I have, though it experiences quite a shock when it awakens to a virtual world that might look and feel somewhat different from the “real” world.
  2. This initial em is copied thousands of times. Some of the copies interact inside the same virtual world, other copies are placed inside isolated virtual worlds.
  3. Then, these ems spend a very long time (a) collecting and generating arguments and evidence about morality and related topics, (b) undergoing various experiences, in varying orders, and reflecting on those experiences, (c) dialoguing with ems sourced from other biological humans who have different values than I do, and perhaps with sophisticated chat-bots meant to simulate the plausible reasoning of other types of people (from the past, or from other worlds) who were not available to be uploaded, and so on. They are able to do these things for a very long time because they and their virtual worlds are run at speeds thousands of times faster than my biological brain runs, allowing subjective eons to pass in mere months of “objective” time.
  4. Finally, at some time, the ems dialogue with each other about which values seem “best,” they engage in moral trade (Ord 2015), and they try to explain to me what values they think I should have and why. In the end, I am not forced to accept any of the values they then hold (collectively or individually), but I am able to come to much better-informed moral judgments than I could have without their input.

For more context on this sort of values extrapolation procedure, see Muehlhauser & Williamson (2013).

243.For more on forecasting “best practices,” see this blog post.

244.Following Hanson (2002) and ch. 2 of Beckstead (2013), I consider my moral intuitions in the context of Bayesian curve-fitting. To explain, I’ll quote Beckstead (2013) at some length:

Curve fitting is a problem frequently discussed in the philosophy of science. In the standard presentation, a scientist is given some data points, usually with an independent variable and a dependent variable, and is asked to predict the values of the dependent variable given other values of the independent variable. Typically, the data points are observations, such as “measured height” on a scale or “reported income” on a survey, rather than true values, such as height or income. Thus, in making predictions about additional data points, the scientist has to account for the possibility of error in the observations. By an error process I mean anything that makes the observed values of the data points differ from their true values. Error processes could arise from a faulty scale, failures of memory on the part of survey participants, bias on the part of the experimenter, or any number of other sources. While some treatments of this problem focus on predicting observations (such as measured height), I’m going to focus on predicting the true values (such as true height).

…For any consistent data set, it is possible to construct a curve that fits the data exactly… If the scientist chooses one of these polynomial curves for predictive purposes, the result will usually be overfitting, and the scientist will make worse predictions than he would have if he had chosen a curve that did not fit the data as well, but had other virtues, such as a straight line. On the other hand, always going with the simplest curve and giving no weight to the data leads to underfitting…

I intend to carry over our thinking about curve fitting in science to reflective equilibrium in moral philosophy, so I should note immediately that curve fitting is not limited to the case of two variables. When we must understand relationships between multiple variables, we can turn to multiple-dimensional spaces and fit planes (or hyperplanes) to our data points. Different axes might correspond to different considerations which seem relevant (such as total well-being, equality, number of people, fairness, etc.), and another axis could correspond to the value of the alternative, which we can assume is a function of the relevant considerations. Direct Bayesian updating on such data points would be impractical, but the philosophical issues will not be affected by these difficulties.

…On a Bayesian approach to this problem, the scientist would consider a number of different hypotheses about the relationship between the two variables, including both hypotheses about the phenomena (the relationship between X and Y) and hypotheses about the error process (the relationship between observed values of Y and true values of Y) that produces the observations…

…Lessons from the Bayesian approach to curve fitting apply to moral philosophy. Our moral intuitions are the data, and there are error processes that make our moral intuitions deviate from the truth. The complete moral theories under consideration are the hypotheses about the phenomena. (Here, I use “theory” broadly to include any complete set of possibilities about the moral truth. My use of the word “theory” does not assume that the truth about morality is simple, systematic, and neat rather than complex, circumstantial, and messy.) If we expect the error processes to be widespread and significant, we must rely on our priors more. If we expect the error processes to be, in addition, biased and correlated, then we will have to rely significantly on our priors even when we have a lot of intuitive data.

Beckstead then summarizes the framework with the following table (p. 32):

| | SCIENCE | MORAL PHILOSOPHY |
| --- | --- | --- |
| Hypotheses about phenomena | Different trajectories of a ball that has been dropped | Moral theories (specific versions of utilitarianism, Kantianism, contractualism, pluralistic deontology, etc.) |
| Hypotheses about error processes | Our position measurements are accurate on average, and are within 1 inch 95% of the time (with normally distributed error) | Different hypotheses about the causes of error in historical cases; cognitive and moral biases; different hypotheses about the biases that cause inconsistent judgments in important philosophical cases |
| Observations | Recorded position of a ball at different times recorded with a certain clock | Intuitions about particular cases or general principles, and any other relevant observations |
| Background theory | The ball never bounces higher than the height it started at. The ball always moves along a continuous trajectory. | Meta-ethical or normative background theory (or theories) |
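
Beckstead’s curve-fitting analogy is easy to make concrete in code. Here is a minimal sketch of my own (not from Beckstead 2013; the data-generating process and noise level are arbitrary choices made purely for illustration) showing how a high-degree polynomial that fits noisy observations almost exactly will typically predict the underlying true values worse than a simple straight line:

```python
# Minimal sketch (mine, not from Beckstead 2013): overfitting vs. a simpler model
# when the observations are corrupted by an "error process."
import numpy as np

rng = np.random.default_rng(0)

# True relationship: y = 2x + 1, observed with noise (the error process).
x_train = np.linspace(0, 10, 12)
y_obs = 2 * x_train + 1 + rng.normal(0, 3, size=x_train.size)

# Held-out inputs, used to test how well each fitted curve recovers the *true* values.
x_test = np.linspace(0.5, 9.5, 50)
y_true_test = 2 * x_test + 1

for degree in (1, 9):  # a straight line vs. a polynomial that hugs the noisy data
    coeffs = np.polyfit(x_train, y_obs, degree)
    rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_true_test) ** 2))
    print(f"degree {degree}: error against true values = {rmse:.2f}")

# Typically the degree-9 fit matches the noisy observations far more closely, yet
# predicts the true values worse than the straight line: it has fit the noise.
```

On Beckstead’s picture, the analogue of the degree-9 polynomial is a moral theory contorted to accommodate every intuition, error-laden intuitions included.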

245.For more on this, see my conversation with Carl Shulman, O’Neill (2015), the literature on the evolution of moral values (e.g. de Waal et al. 2014; Sinnott-Armstrong & Miller 2007; Joyce 2005), the literature on moral psychology more generally (e.g. Graham et al. 2013; Doris 2010; Liao 2016; Christen et al. 2014; Sunstein 2005), the literature on how moral values vary between cultures and eras (e.g. see Flanagan 2016; Inglehart & Welzel 2010; Pinker 2011; Morris 2015; Friedman 2005; Prinz 2007, pp. 187-195), and the literature on moral thought experiments (e.g. Tittle 2004, ch. 7). See also Wilson (2016)’s comments on internal and external validity in ethical thought experiments, and Bakker (2017) on “alien philosophy.”

I do not read much fiction, but I suspect that some types of fiction — e.g. historical fiction, fantasy, and science fiction — can help readers to temporarily transport themselves into fully-realized alternate realities, in which readers can test how their moral intuitions differ when they are temporarily “lost” in an alternate world.

246.There are many sources which discuss how people’s values seem to change along with (and perhaps in response to) components of my proposed extrapolation procedure, such as learning more facts, reasoning through more moral arguments, and dialoguing with others who have different values. See e.g. Inglehart & Welzel (2010), Pinker (2011), Shermer (2015), and Buchanan & Powell (2016). See also the literatures on “enlightened preferences” (Althaus 2003, chs. 4-6) and on “deliberative polling.”

247.For example, as I’ve learned more, considered more moral arguments, and dialogued more with people who don’t share my values, my moral values have become more “secular-rational” and “self-expressive” (Inglehart & Welzel 2010), more geographically global, more extensive (e.g. throughout more of the animal kingdom), less person-affecting, and subject to greater moral uncertainty (Bykvist 2017).

248.Or, as Allen-Hermanson (2008) might put it: what if fishes are “natural zombies,” or “naturally blindsighted” about all their sensory and internal states?

249.See Grahek (2007).

250.It would be interesting to test my hypothesis on several subjects with AAD, for example the 13 patients of Leu-Semenescu et al. (2013), if they are still alive.

251.Not counting her earlier transition from neurotypical function to a condition of AAD, the moral significance of which is outside the scope of this point.

252.In other words, neuroscientists don’t yet know much about what David Marr called the “algorithmic level” (Wikipedia).

Here is the explanation of Marr’s levels of analysis from Bermudez (2014), p. 47:

Marr distinguishes three different levels for analyzing cognitive systems. The highest is the computational level. Here cognitive scientists analyze in very general terms the particular type of task that the system performs…

The guiding assumption here is that cognition is ultimately to be understood in terms of information processing, so that the job of individual cognitive systems is to transform one kind of information (say, the information coming into a cognitive system through its sensory systems) into another type of information (say, information about what type of objects there might be in the organism’s immediate environment). A computational analysis identifies the information with which the cognitive system has to begin (the input to that system) and the information with which it needs to end up (the output from that system).

The next level down is what Marr calls the algorithmic level. The algorithmic level tells us how the cognitive system actually solves the specific information-processing task identified at the computational level. It tells us how the input information is transformed into the output information. It does this by giving algorithms that effect that transformation. An algorithmic level explanation takes the form of specifying detailed sets of information-processing instructions that will explain how, for example, information from the sensory systems about the distribution of light in the visual field is transformed into a representation of the three-dimensional environment around the perceiver.

In contrast, the principal task at the implementational level is to find a physical realization for the algorithm – that is to say, to identify physical structures that will realize the representational states over which the algorithm is defined and to find mechanisms at the neural level that can properly be described as computing the algorithm in question.

For a nice illustration of some reasons why it’s so difficult for neuroscientists to study brain function at the algorithmic level given current tools, see Jonas & Kording (2017).

253.To run my Python code without needing to install any software, you can use an online Python sandbox such as this one from Tutorials Point. Or, you can install a full-featured Python IDE such as PyCharm (the Community edition is free).

254.Python implementations vary; see here.

255.In general, I think lots of philosophical discussion and argument should be conducted using short and long snippets of source code, to improve the clarity and concreteness of those discussions. Steven Phillips calls this approach to philosophy “executable philosophy” (see also this incomplete draft of his thoughts on the subject). See also Yudkowsky’s “Executable Philosophy,” which makes a similar recommendation about philosophical practice, alongside several other recommendations. (As far as I know, Phillips and Yudkowsky use this term independently of each other.)

One example of this approach being used in philosophy of consciousness is Brian Tomasik’s “A Simple Program to Illustrate the Hard Problem of Consciousness.” To make his “simple program” more comprehensible to people who are not Python programmers, I added extensive comments to his code (and bumped the syntax to Python Version 3): see here.
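
To give a flavor of what such executable snippets can look like, here is a minimal sketch of my own (it is not Tomasik’s program, and not a theory I endorse): a few lines of Python implementing a crude “damage signal plus self-report” loop, which the reader can then interrogate by asking which lines, if any, are supposed to correspond to felt experience.

```python
# A minimal "executable philosophy" sketch (illustrative only; not Tomasik's program,
# and not a theory I endorse): a crude agent that registers damage and reports on it.

class TinyAgent:
    def __init__(self):
        self.damage_signal = 0.0   # crude stand-in for nociception

    def sense(self, stimulus_intensity):
        # "Transduction": record how intense the damaging stimulus was.
        self.damage_signal = stimulus_intensity

    def report(self):
        # "Self-report": the agent describes its own internal variable.
        if self.damage_signal > 0.5:
            return "That hurts a lot."
        elif self.damage_signal > 0.0:
            return "That hurts a little."
        return "I feel fine."

agent = TinyAgent()
agent.sense(0.9)
print(agent.report())  # -> "That hurts a lot."

# The philosophical exercise: which line of this program, if any, would have to be
# different for there to be "something it is like" to be this agent? Spelling out
# one's answer in terms of concrete code changes is the point of the exercise.
```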

For related but not identical ideas about philosophical methodology, see discussions on computational explanations and computational models in philosophy, e.g. Grim (2004); Rusanen & Lappi (2016).

256.This “short program argument” is a generalization of Herzog et al. (2007)’s “small network argument.” It is also similar to some remarks in Rey (1983):

…it seems to me to be entirely feasible… to render an existing computing machine intentional by providing it with a program that would include the following:

1. The alphabet, formation, and transformation rules for quantified modal logic (the system’s “language of thought”).

2. The axioms for your favorite inductive logic and/or abductive system of hypotheses, with a “reasonable” function for selecting among them on the basis of given input.

3. The axioms of your favorite decision theory, and some set of basic preferences.

4. Mechanical inputs, via sensory transducers, for Clauses 2 and 3.

5. Mechanical connections that permit the machine to realize its outputs (e.g., its “most preferred” basic act descriptions).

Any computer that functioned according to such a program would, I submit, realize significant Rational Regularities, complete with intensionality. Notice, for example, that it would be entirely appropriate — and probably unavoidable — for us to explain and predict its behavior and internal states on the basis of those regularities. It would be entirely reasonable, that is to say, for us to adopt toward it what Dennett (1971) has called the “intentional stance.”

…However clever a machine programmed with Clauses 1-5 might become, counting thereby as a thinking thing, surely it would not also count thereby as conscious. The program is just far too trivial. Moreover, we are already familiar with systems satisfying at least Clauses 1-5 that we also emphatically deny are conscious: there are all those unconscious neurotic systems postulated in so many of us by Freud, and all those surprisingly intelligent, but still unconscious, subsystems for perception and language postulated in us by contemporary cognitive psychology. (Some evidence of the cognitive richness of unconscious processing is provided by the interesting review of such material in Nisbett & Wilson, 1977, but especially by such psycholinguistic experiments as that by Lackner & Garrett, 1973, in which subliminal linguistic material provided to one ear biased subjects in their understanding of ambiguous sentences provided to the other ear.) In all of these cases we are, I submit, quite reasonably led to ascribe beliefs, preferences, and sometimes highly elaborate thought processes to a system on the basis of the Rational Regularities, despite the fact that the systems involved are often not the least bit “conscious” of any such mental activity at all. It is impossible to imagine these psychological theories getting anywhere without the ascription of unconscious content — and it is equally difficult to imagine any animals getting anywhere without the exploitation of it. Whatever consciousness will turn out to be, it will pretty certainly need to be distinguished from the thought processes we ascribe on the basis of the rational regularities.

How easily this point can be forgotten, neglected, or missed altogether is evidenced by the sorts of proposals about the nature of consciousness one finds in some of the recent psychobiological literature. The following seem to be representative:

Consciousness is usually defined by the ability: (1) to appreciate sensory information; (2) to react critically to it with thoughts or movements; (3) to permit the accumulation of memory traces. (Moruzzi, 1966)

Perceptions, memories, anticipatory organization, a combination of these factors into learning — all imply rudimentary consciousness. (Knapp, 1976)

Modern views… regard human conscious activity as consisting of a number of major components. These include the reception and processing (recoding) of information, with the selection of its most important elements and retention of the experience thus gained in the memory; enunciation of the task or formulation of an intention, with the preservation of the corresponding modes of activity, the creation of a pattern or model of the required action, and production of the appropriate program (plan) to control the selection of necessary actions; and finally the comparison of the results of the action with the original intention … with correction of the mistakes made. (Luria, 1978)

Consciousness is a process in which information about multiple individual modalities of sensation and perception is combined into a unified, multidimensional representation of the state of the system and its environment and is integrated with information about memories and the needs of the organism, generating emotional reactions and programs of behavior to adjust the organism to its environment. (John, 1976)

What I find astonishing about such proposals is that they are all more-or-less satisfiable by almost any information-processing system, for precisely what modern computational machinery is designed to do is to receive, process, unify, and retain information; create (or “call”) patterns, models, and subroutines to control its activity; and, by all means, to compare the results of its action with its original intention in order to adjust its behavior to its environment. This latter process is exactly what the “feedback” that Wiener (1954) built into his homing rocket was for! Certainly, most of the descriptions in these proposals are satisfied by any recent game-playing program (see, e.g., Berliner, 1980). And if it’s genuine “modalities,” “thoughts,” “intentions,” “perceptions,” or “representations” that are wanted, then, as I’ve argued, supplementing the program with Clauses 1-5 will suffice, but without rendering anything a whit more conscious.

White (1991), ch. 6, summed up Rey’s point like so:

…a survey of recent characterizations of consciousness by philosophers and psychologists reveals that most or all characterizations would be satisfied by information-processing devices that either exist now or would be trivial extensions of devices that exist.

See also Rey (1995; 2016).

257.Even if they are not functionalists, they could still clarify their views by saying “If I were a functionalist, then such-and-such computer program exhibits the kind of functional behavior and cognitive processing that I think would be sufficient for moral patienthood.”

258.For more details on the algorithm whose behavior is shown in the video, see section VI.A of Togelius et al. (2010).

259.However, if you do think this algorithm is a moral patient — because it seems to have goals and aversions, is capable of planning (its path through the level), and so on — and you are some kind of utilitarian, then this may have some surprising implications. For example, suppose you think that this Mario-controlling algorithm is a moral patient, but only has a tiny fraction of the “moral weight” that a human has, such that when Mario reaches the goal at the end of the level, that has about 1/1000th as much positive moral value as when you consume a single spoonful of ice cream. In that case, it might still be the case that, given your moral intuitions, the most morally valuable thing you could do per dollar is to run this Mario-controlling algorithm (or some other algorithm, better-optimized for positive moral value) trillions of times a day using rented cloud computation from e.g. Amazon Web Services.
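
To make the arithmetic in this hypothetical concrete, here is a back-of-envelope sketch in which every number, aside from the 1/1000 moral-weight ratio stipulated above, is a made-up placeholder rather than an estimate I endorse:

```python
# Back-of-envelope sketch with made-up numbers (all values below are hypothetical
# placeholders, not estimates I endorse).

ice_cream_spoonful_value = 1.0          # arbitrary unit of positive value
mario_level_completion_value = ice_cream_spoonful_value / 1000   # the 1/1000 assumption above

runs_per_dollar = 100_000               # hypothetical: level completions purchasable per $1 of cloud compute
value_per_dollar_mario = runs_per_dollar * mario_level_completion_value

spoonfuls_per_dollar = 20               # hypothetical: spoonfuls of ice cream purchasable per $1
value_per_dollar_ice_cream = spoonfuls_per_dollar * ice_cream_spoonful_value

print(value_per_dollar_mario)       # 100.0 "spoonful units" of value per dollar
print(value_per_dollar_ice_cream)   # 20.0 "spoonful units" of value per dollar

# Under these made-up numbers, running the algorithm at scale beats buying ice cream
# by a factor of 5 per dollar, which is the "surprising implication" gestured at above.
```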

Similarly, if you (unlike me) have the intuition that today’s reinforcement learning algorithms are moral patients, there are practical code modifications that could be made today to reduce the risk that these (very common) algorithms are instantiating negative phenomenal experiences: see Tomasik (2014), p. 17.

260.Search for “SUPERHRO” on this page.

261.I might be mis-remembering the details of these algorithms, but these details don’t matter much to my illustration.

262.The closest analogue of this exercise I’ve seen elsewhere is Rey (1983), though I discovered that article after writing a first draft of this section.

263.We also have to assume the game is deterministic, i.e. that its behavior does not depend on random number generation. Off the top of my head, I can’t recall whether this is true for MESH: Hero.

264.Implementers of BDI systems can make a wide variety of choices about how to implement “beliefs,” “desires,” and “intentions,” how these relate to one another, and how they determine an agent’s actions. For example, Davies et al. (2006), Palazzo et al. (2013), and Kim et al. (2014) make different choices about these things. Still, if an agent roughly fits the BDI architecture, it’s more likely that Dennett’s “intentional stance” could be used to interpret or predict its actions.

As an example, Daniel Dewey (a Program Officer for the Open Philanthropy Project) describes the Davies et al. (2006) BDI system with the following table:

| SOURCES | BELIEFS | DESIRES | INTENTIONS |
| --- | --- | --- | --- |
| Davies et al. (2006); Pokahr et al. (2005) | Set of facts stored in an object-oriented style. | “Goals” = “concrete, momentary desires of an agent.” May be used to make plans or not depending on the agent’s believed context. Goals may be to execute certain actions, to reach certain states of the world, to reach certain internal states (e.g. the agent learns particular things), etc. May include “subgoals” created by plans to achieve other goals. | A library of plan templates, added to a list of active plans when certain goals are active and certain beliefs are held. Plans are procedures that include templated external actions (e.g. moving the agent) and internal actions (manipulating beliefs, creating subgoals). |

On BDI architectures in general, see e.g. Wikipedia and Wooldridge (2000).
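
To make the belief-desire-intention pattern more concrete, here is a minimal sketch of my own of a BDI-style agent loop. It follows only the generic pattern; it is not the Davies et al. (2006) or Pokahr et al. (2005) system, and every name in it is made up for illustration.

```python
# Minimal BDI-style agent sketch (generic pattern only; not the Davies et al. 2006
# or Pokahr et al. 2005 system; all names here are made up for illustration).

# Beliefs: the agent's current factual picture of the world.
beliefs = {"door_open": False, "holding_key": True}

# Desires: goal states the agent would like to bring about.
desires = [{"door_open": True}]

# Plan library: templates saying which action sequence achieves which goal,
# given which beliefs (the plan's "context condition").
plan_library = [
    {
        "achieves": {"door_open": True},
        "context": {"holding_key": True},
        "actions": ["walk_to_door", "unlock_door", "open_door"],
    },
]

def deliberate(beliefs, desires, plan_library):
    """Pick an unachieved goal and adopt a plan whose context condition holds."""
    for goal in desires:
        if all(beliefs.get(k) == v for k, v in goal.items()):
            continue  # this goal is already satisfied
        for plan in plan_library:
            context_holds = all(beliefs.get(k) == v for k, v in plan["context"].items())
            if plan["achieves"] == goal and context_holds:
                return plan["actions"]  # the adopted plan becomes an "intention"
    return []

intentions = deliberate(beliefs, desires, plan_library)
print(intentions)  # -> ['walk_to_door', 'unlock_door', 'open_door']
```

Even this toy version invites intentional-stance description (“it wants the door open, believes it is holding the key, and intends to unlock it”), which illustrates the point above about roughly-BDI agents and Dennett’s intentional stance.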

265.Carruthers (1999):

The conclusion C1 [that “The mental states of non-human animals lack phenomenal feels”]… generates, quite naturally, a further question…

Question 1: Given C1, ought we to conclude that sympathy (and other moral attitudes) towards the sufferings and disappointments of non-human animals is inappropriate?

In my [Carruthers (1992)], chapter 8, I argued tentatively for a positive answer to this question. But I am now not so sure. Indeed, the main burden of this paper is to demonstrate that there is a powerful case for answering Q1 in the negative…

…

I propose… to defend the following claim:

A6: Only subjective frustrations or thwartings of desire count as psychological harms, and are appropriate objects of sympathetic concern.

However, the sense of ‘subjective’ in A6 need not be… that of possessing phenomenological properties. Rather, the sense can be that of being believed in by the subject. On this account, a desire counts as being subjectively frustrated, in the relevant sense, if the subject believes that it has been frustrated, or believes that the desired state of affairs has not (and/or will not) come about. Then there would be nothing to stop a phenomenology-less frustration of desire from counting as subjective, and from constituting an appropriate object of moral concern. So we have a question:

Q2: Which is the appropriate notion of subjective to render A6 true? — (a) possessing phenomenology? or (b) being believed in by the subject?

If the answer to Q2 is (a), then animal frustrations and pains, in lacking phenomenology by C1, will not be appropriate objects of sympathy or concern. This would then require us to answer Q1 in the affirmative, and animals would, necessarily, be beyond the moral pale. However, if the answer to Q2 is (b), then there will be nothing in C1 and A6 together to rule out the appropriateness of moral concern for animals; and we shall then have answered Q1 in the negative.

It is important to see that desire-frustration can be characterised in a purely first-order way, without introducing into the account any higher-order belief concerning the existence of that desire… So, suppose that an animal has a strong desire to eat, and that this desire is now activated; suppose, too, that the animal is aware that it is not now eating; then that seems sufficient for its desire to be subjectively frustrated, despite the fact that the animal may be incapable of higher-order belief.

…

So putting A6 and Q2 together, in effect, we have the question:

Q3: What is bad or harmful, from the point of view of a sympathetic observer, about the frustration or thwarting of desire? — (a) the phenomenology associated with desire frustration? or (b) the fact of learning that the object of desire has not been achieved?

…

If my assumptions… are granted, then the main point is (at least tentatively) established: the most basic form of psychological harm, from the perspective of a sympathetic observer, consists in the known or believed frustration of first-order desires (which need not require that agents have knowledge that they have those desires — just knowledge of what states of affairs have come about). That is to say, the answer to Q3 is (b). So the proper object of sympathy, when we sympathise with what has happened to an agent, is the known (or believed) frustration of first-order desire. And it follows, then (given A1 and A2), that the non-conscious desires of non-human animals are at least possible, or appropriate, objects of moral sympathy and concern. (Whether they should then be objects of such concern is a further distinctively moral question, to be answered by considerations pertaining to ethical theory rather than to philosophical psychology.) And it emerges that the complete absence of phenomenology from the lives of most non-human animals, derived in C1, is of little or no direct relevance to ethics.

Carruthers (2004) develops this line of thinking further.

Several others have advocated views which might be interpreted as according moral patienthood to animals on account of their having preferences that can be satisfied or frustrated, regardless of whether they are also conscious. See e.g. Dawkins (2012), chs. 7-9, and the endorsement of that account by Rose (2016).

266.For example Jaworska & Tannenbaum (2013) write:

Historically, the most famous [account of moral status grounded in intellectual capacities] was given by Kant, according to whom autonomy, the capacity to set ends via practical reasoning, must be respected… and grounds the dignity of all rational beings… Beings without reason may be treated as a mere means…

Similarly, Dillon (2014) writes:

The most influential position on [the topic of respect for persons] is found in the moral philosophy of Immanuel Kant… Indeed, most contemporary discussions of respect for persons explicitly claim to rely on, develop, or challenge some aspect of Kant’s ethics. Central to Kant’s ethical theory is the claim that all persons are owed respect just because they are persons, that is, free rational beings. To be a person is to have a status and worth that is unlike that of any other kind of being: it is to be an end in itself with dignity. And the only response that is appropriate to such a being is respect. Respect (that is, moral recognition respect) is the acknowledgment in attitude and conduct of the dignity of persons as ends in themselves. Respect for such beings is not only appropriate but also morally and unconditionally required: the status and worth of persons is such that they must always be respected…

267.This isn’t really a rebuttal of Braithwaite’s argument for fish consciousness: it is easy to find details of her account that are not technically satisfied by the program I sketched here, even for the specific features I’ve listed above. Rather, I sketched the program and pointed to Braithwaite’s account merely to illustrate the more general point I make in the next paragraph.

In any case, here is Braithwaite’s summary of her case for fish consciousness, from Braithwaite (2010), ch. 4:

So pulling the different threads together, fish really do appear to possess key traits associated with consciousness. Their ability to form and use mental representations indicates fish have some degree of access consciousness. They can consider a current mental state and associate it with a memory. Having an area of the brain specifically associated with processing emotion and evidence that they alter their view of an aversive situation depending on context suggests that fish have some form of phenomenal consciousness: they are sentient. This leaves monitoring and self consciousness, which I argue is in part what the eel and the grouper are doing: considering their actions and pondering the consequences. The grouper is clearly deciding it has no chance to get the prey itself and so swims off to get the eel. The eel is deciding that an easy meal is on offer. On balance then, fish have a capacity for some forms of consciousness, and so I conclude that they therefore have the mental capacity to feel pain.

Braithwaite doesn’t mention nociceptors or the transmission of nociceptive signalling for central processing in this quote, but it’s clear from earlier sections of the book that these two features of fish neurobiology are critical to her confidence in conscious fish pain.

This version of MESH: Hero might also satisfy the criteria for having “interests” of the sort that Johnson (1993) argues are sufficient for moral status. Note that unlike some authors defending the moral status of plants and ecosystems, Johnson is explicit that his account might accord moral status to certain kinds of machines (pp. 145-146).

268.A much more satisfying, but also more costly to write, version of this exercise would involve doing the following:

  1. Collect several dozen functionalist theories of consciousness and moral patienthood.
  2. Summarize their key features.
  3. Think of a basic program design that would allow you to chart an efficient course through as many of these theories of consciousness and moral patienthood as possible, merely by adding 1-5 new “features” for each updated version of the program.
  4. Write the code for each version of that program.
  5. Briefly describe each version of the program in order, and after each program version description, quote the theory or theories of consciousness or moral patienthood that now seem to be satisfied by the program.

269.I don’t worry about just any “large deep reinforcement learning agent” or “complicated candidate solution,” of course. E.g. I might start to worry if I can’t trace what the system is doing and it exhibits some highly sophisticated behavior that matches human conscious behavior in certain ways. On the other hand, I might not worry if a strong argument can be made that a given (large and complicated) system is essentially doing a very high-dimensional variant on linear regression, and is not engaging in e.g. dynamic control of memory and attention subsystems, as is the case for some deep learning agent architectures (see e.g. Marblestone et al. 2016).

270.I owe this phrase to Yudkowsky, “How an Algorithm Feels from Inside.”

271.Graziano (2013), ch. 1.

272.Or, to be more accurate: How do such-and-such brain processes instantiate consciousness?

273.One way to think about this is from the perspective of “inference to the best explanation” or “explanationism,” according to which theories are judged by how well they perform on a list of common-sense “explanatory virtues.” Years ago, I collected the following list of commonly-endorsed explanatory virtues from philosophical defenders of inference to the best explanation:

  1. Testability: better explanations render specific predictions that can be falsified or corroborated.
  2. Scope (aka “comprehensiveness” or “consilience”): better explanations explain more types of phenomena.
  3. Precision: better explanations explain phenomena with greater precision.
  4. Simplicity: better explanations make use of fewer claims, especially fewer as yet unsupported claims (“lack of ad-hoc-ness”).
  5. Mechanism: better explanations provide more information about underlying mechanisms.
  6. Unification: better explanations unify apparently disparate phenomena (also sometimes called “consilience”).
  7. Predictive novelty: better explanations don’t just “retrodict” what we already know, but predict things we observe only after they are predicted.
  8. Analogy (aka “fit with background knowledge”): better explanations generally fit with what we already know with some certainty.
  9. Past explanatory success: better explanations fit within a tradition or trend with past explanatory success (e.g. astronomy, not astrology).

On this framework, a more precise way to state my core complaint about current theories of consciousness is that they are lacking in precision and scope.

(Of course, they may be lacking in other explanatory virtues, too.)

274.One can compare my exercise to section 4 (“Some case studies”) from Chalmers (1995):

In the last few years, a number of works have addressed the problems of consciousness within the framework of cognitive science and neuroscience. This might suggest that the foregoing analysis is faulty, but in fact a close examination of the relevant work only lends the analysis further support. When we investigate just which aspects of consciousness these studies are aimed at and which aspects they end up explaining, we find that the ultimate target of explanation is always one of the easy problems. I illustrate this with two representative examples.

After explaining Crick & Koch’s temporal binding theory, Chalmers says:

The details of how this binding might be achieved are still poorly understood, but suppose that they can be worked out. What might the resulting theory explain? Clearly it might explain the binding of information, and perhaps it might yield a more general account of the integration of information in the brain. Crick and Koch also suggest that these oscillations activate the mechanisms of working memory, so that there may be an account of this and perhaps other forms of memory in the distance. The theory might eventually lead to a general account of how perceived information is bound and stored in memory for use by later processing.

Such a theory would be valuable, but it would tell us nothing about why the relevant contents are experienced. Crick and Koch suggest that these oscillations are the neural correlates of experience. This claim is arguable — does not binding also take place in the processing of unconscious information? — but even if it is accepted, the explanatory question remains: why do the oscillations give rise to experience? The only basis for an explanatory connection is the role they play in binding and storage, but the question of why binding and storage should themselves be accompanied by experience is never addressed. If we do not know why binding and storage should give rise to experience, telling a story about the oscillations cannot help us. Conversely, if we knew why binding and storage gave rise to experience, the neurophysiological details would be just the icing on the cake. Crick and Koch’s theory gains its purchase by assuming a connection between binding and experience and so can do nothing to explain that link.

Chalmers then elaborates a similar complaint about some other theories.

One difference between Chalmers’ complaint and mine is that I am an illusionist about consciousness, and thus “replace the hard problem with the illusion problem” (Frankish 2016b). From that perspective, one way to view my complaint is that the theories of consciousness surveyed here fail to explain the illusions of conscious experience.

But really, my complaint is more general than this, and not dependent on illusionism in particular. If later I decide that (say) I am a Prinz-style realist about consciousness (Prinz 2016), then my core complaint will remain: simply, these theories do not explain enough consciousness explananda, with enough precision, to be satisfying.

275.See Crick & Koch (1990, 1998).

Note that they later abandoned their early theory of consciousness. Koch (2004), p. 46, writes:

Today, Francis [Crick] and I no longer think that synchronized firing is a sufficient condition for the [neural correlates of consciousness]. A functional role more in line with the data is that synchronization assists a nascent coalition in its competition with other nascent coalitions. As explained in Chapter 9, this occurs when you attend to an object or event. A neuronal substrate of this bias could be synchronized firing in certain frequency bands… Once a coalition has established itself as a winner and you are conscious of the associated attributes, the coalition may be able to maintain itself without the assistance of synchrony, at least for a time. Thus, one might expect synchronized oscillations to occur in the early stages of perception, but not necessarily in later ones.

276.See Tononi (2004); Oizumi et al. (2014); Tononi (2015); Tononi et al. (2015b). I have not read these sources in full.

277.I am hardly an expert on IIT, so my criticisms could be misguided, but even if they are, I hope they will help to illustrate how I think about theories of consciousness.

278.For example, IIT predicts enormous quantities of consciousness in the “trivially simple network” of Seth et al. (2006) and the expander graph of Aaronson (2014a).

Tononi (2014) replied to the latter example by confirming that Aaronson’s expander graph would be enormously conscious according to IIT, but then saying that we shouldn’t trust our intuitions that the expander graph isn’t enormously conscious:

[Aaronson’s] main point that certain systems that are simple — in the sense that they are easy to describe — could have large values of PHI, [is correct]… Resorting to expander graphs is actually overkill. This is because systems that are even simpler to describe than expander graphs, for example a 2D lattice of identical logic gates (a “grid”) could also achieve very large values of PHI. So things are “even worse” for IIT… [Aaronson also] argues that some systems with high PHI may not only have a structure that is simple to describe, but they may perform computations that are also just as simple to describe, such as parity checks. In fact, the situation for IIT is actually “worse”, since it allows for a large 2D grid to be conscious even if it were doing nothing, with all gates switched off at a fixed point. Thus, if IIT can be invalidated by an expander graph doing not much at all, it can be invalidated all the more by a mere grid doing absolutely nothing…

…However, [Aaronson’s] “commonsense” intuition that such simple systems cannot possibly be conscious is wrong and should be revised.

…it can be dangerous to rely too much on one’s pre-theoretical intuitions, however strong they may seem. Examples in science are numerous, starting with the strong intuitions people once had that the earth must be still and the sun must revolve around it, or that the earth cannot be round because otherwise we would fall off. Concerning consciousness, the reliability of pre-theoretical intuitions is even worse, because different people often hold radically different ones…

The responses of Aaronson (2014b) to Tononi are roughly the same ones I would give (though see also the additional exchanges between David Chalmers and Scott Aaronson on that post), so I won’t repeat them here.
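
For concreteness, here is a minimal sketch of my own (not Tononi’s or Aaronson’s construction) of the kind of “grid doing absolutely nothing” mentioned in the quote above: a 2D lattice of identical gates, each updating as the parity of its neighbors, started with all gates off so that it sits at a fixed point forever. I make no attempt to compute the system’s PHI here; the point is only how simple such a system is to specify.

```python
# Minimal sketch (not Tononi's or Aaronson's construction): a 2D lattice of identical
# parity (XOR) gates, all switched off, sitting at a fixed point. PHI is not computed.
import numpy as np

SIZE = 16
grid = np.zeros((SIZE, SIZE), dtype=int)  # all gates off

def step(state):
    """Each gate becomes the parity (XOR) of its four neighbors, with wrap-around."""
    neighbor_sum = (
        np.roll(state, 1, axis=0) + np.roll(state, -1, axis=0)
        + np.roll(state, 1, axis=1) + np.roll(state, -1, axis=1)
    )
    return neighbor_sum % 2

for _ in range(10):
    grid = step(grid)

print(int(grid.sum()))  # -> 0: nothing ever happens
```

On my reading of Tononi’s reply above, a large enough lattice of roughly this kind is the sort of system IIT would credit with a very large PHI, which is exactly the kind of prediction I find hard to accept.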

279.Graziano (2013), ch. 11.

In Graziano (2016), he makes his case against IIT this way:

[One] popular explanation of consciousness is the integrated information theory. Actually, there are several different theories that fit into this same general category. They share the underlying idea that consciousness is caused by linking together large amounts of information. It’s one thing to process a few disconnected scraps of information. But when information is connected into vast brain-spanning webs, then, according to the proposal, subjective consciousness emerges.

I can’t deny that information is integrated in the brain on a massive scale. Vast networks of information play a role in many brain functions. If you could de-integrate the information in the brain, a lot of basic functions would fail, probably including consciousness. And yet, as a specific explanation of consciousness, this one is definitely a phlegm theory.

Again, it flatters intuition. Most people have an intuition about consciousness as an integrated whole. Your various impressions and thoughts are somehow rolled together into a single inner you. That’s the impression we get, anyway.

You see this same trope in science fiction: If you bundle enough information into a computer, creating a big enough connected mass of data, it’ll wake up and start to act conscious, like Skynet. This appeal to our latent biases has given the integrated information theory tremendous currency. It’s compelling to many respected figures in the field of neuroscience, and is one of the most popular current theories.

And yet it doesn’t actually explain anything. What exactly is the mechanism that leads from integrated information in the brain to a person who ups and claims, “Hey, I have a conscious experience of all that integrated information!” There isn’t one.

If you point a wavelength detector at the sky, it will compute that the sky is blue. If you build a machine that integrates the blueness of the sky with a lot of other information – the fact that the blue stuff is a sky, that it’s above the earth, that it extends so far here and so far there – if the machine integrates a massive amount of information about that sky – what makes the machine claim that it has a subjective awareness of blue? Why doesn’t it just have a bunch of integrated information, without the subjective awareness? The integration theory doesn’t even try to explain. It flatters our intuitions while explaining nothing.

Some scholars retreat to the position that consciousness must be a primary property of information that cannot be explained. If information is present, so is a primordial, conscious experience of it. The more information that is integrated together, the richer the conscious experience. This type of thinking leads straight to a mystical theory called panpsychism, the claim that everything in the universe is conscious, each in its own way, since everything contains at least some information. Rocks, trees, rivers, stars. This theory is the ultimate in phlegm theories. It has enormous intuitive appeal to people who are prone to project consciousness onto the objects around them, but it explains absolutely nothing. One must simply accept consciousness as an elemental property and abandon all hope of understanding it.

280.E.g. see the section on “multiple complexes” in Tononi et al. (2016).

281.Proponents of IIT do say some things about IIT and reportability (e.g. see Tononi et al. 2016), but if they’ve said anything about how IIT specifically predicts the specific features of conscious self-report we observe, then I have been unable to understand what that account is, in what I’ve read about IIT so far.

282.For an introduction to LIDA, see Franklin et al. (2016).

283.Note that the Müller-Lyer illusion image included below does not appear in the quoted passage of Weisberg (2014); I added it for convenience.

284.Baars (1988); Baars et al. (2013); Shanahan (2010), ch. 4; Dehaene (2014); Dehaene et al. (2014); Shevlin (2016); Franklin et al. (2012). I have not read these sources in full.

285.As with IIT, I am hardly an expert on GWT, so my criticisms could be misguided, but even if they are, I hope they will help to illustrate how I think about theories of consciousness.

286.Quoted text is from Dehaene (2014), ch. 5:

When we say that we are aware of a certain piece of information, what we mean is just this: the information has entered into a specific storage area that makes it available to the rest of the brain. Among the millions of mental representations that constantly criss-cross our brains in an unconscious manner, one is selected because of its relevance to our present goals. Consciousness makes it globally available to all our high-level decision systems. We possess a mental router, an evolved architecture for extracting relevant information and dispatching it. The psychologist Bernard Baars calls it a “global workspace”: an internal system, detached from the outside world, that allows us to freely entertain our private mental images and to spread them across the mind’s vast array of specialized processors (figure 24).

Figure 24, borrowed from Dehaene et al. (1998), is:

[Figure 24: GNWT.png. Image © National Academy of Sciences, but does not require permission for noncommercial use.]

Dehaene continues:

According to this theory, consciousness is just brain-wide information sharing. Whatever we become conscious of, we can hold it in our mind long after the corresponding stimulation has disappeared from the outside world. That’s because our brain has brought it into the workspace, which maintains it independently of the time and place at which we first perceived it. As a result, we may use it in whatever way we please. In particular, we can dispatch it to our language processors and name it; this is why the capacity to report is a key feature of a conscious state. But we can also store it in long-term memory or use it for our future plans, whatever they are…

…

Like the psychologist Bernard Baars, I believe that consciousness reduces to what the workspace does: it makes relevant information globally accessible and flexibly broadcasts it to a variety of brain systems…

Flexible information sharing requires a specific neuronal architecture to link the many distant and specialized regions of the cortex into a coherent whole. Can we identify such a structure inside our brains? …Unlike the dense mosaic of cells that make up our skin, the brain comprises enormously elongated cells: neurons. With their long axon, neurons possess the property, unique among cells, of measuring up to meters in size. A single neuron in the motor cortex may send its axon to extraordinarily distant regions of the spinal cord, in order to command specific muscles. Most interestingly, …long-distance projection cells are quite dense in the cortex… From their locations in the cortex, nerve cells shaped like pyramids often send their axons all the way to the back of the brain or to the other hemisphere…

Importantly, not all brain areas are equally well connected. Sensory regions, such as the primary visual area V1, tend to be choosy and to establish only a small set of connections, primarily with their neighbors. Early visual regions are arranged in a coarse hierarchy: area V1 speaks primarily to V2, which in turns speaks to V3 and V4, and so on. As a result, early visual operations are functionally encapsulated: visual neurons initially receive only a small fraction of the retinal input and process it in relative isolation, without any “awareness” of the overall picture.

In the higher association areas of the cortex, however, connectivity loses its local nearest-neighbor or point-to-point character, thus breaking the modularity of cognitive operations. Neurons with long-distance axons are most abundant in the prefrontal cortex… This region connects to many other sites in the inferior parietal lobe, the middle and anterior temporal lobe, and the anterior and posterior cingulate areas that lie on the brain’s midline. These regions have been identified as major hubs — the brain’s main interconnection centers. All are heavily connected by reciprocal projections: if area A projects to area B, then almost invariably B also sends a projection back to A… Furthermore, long-distance connections tend to form triangles: if area A projects jointly to areas B and C, then they, in turn, are very likely to be interconnected.

These cortical regions are strongly connected to additional players, such as the central lateral and intralaminar nuclei of the thalamus (involved in attention, vigilance, and synchronization), the basal ganglia (crucial for decision making and action), and the hippocampus (essential for memorizing the episodes of our lives and for recalling them). Pathways linking the cortex with the thalamus are especially important. The thalamus is a collection of nuclei, each of which enters into a tight loop with at least one region of the cortex and often many of them at once. Virtually all regions of the cortex that are directly interconnected also share information via a parallel information route through a deep thalamic relay. Inputs from the thalamus to the cortex also play a fundamental role in exciting the cortex and maintaining it in an “up” state of sustained activity…

The workspace thus rests on a dense network of interconnected brain regions — a decentralized organization without a single physical meeting site. At the top of the cortical hierarchy, an elitist board of executives, distributed in distant territories, stays in sync by exchanging a plethora of messages… We are now in a position to understand why these associative areas systematically ignite whenever a piece of information enters our awareness: those regions possess precisely the long-distance connectivity needed to broadcast messages across the long distances of the brain.

Later in the same chapter, Dehaene describes how his theory says a visual percept would become conscious:

Suppose we could track all the connections that are activated as we consciously recognize a face… What kind of network would we see? Initially, very short connections, located inside our retinas, clean up the incoming image. The compressed image is then sent, via the massive cable of the optic nerve, to the visual thalamus, then on to the primary visual area in the occipital lobe. Via local U-shaped fibers, it gets progressively transmitted to several clusters of neurons in the right fusiform gyrus, where researchers have discovered… patches of neurons tuned to faces. All this activity remains unconscious. What happens next? Where do the fibers go? The Swiss anatomist Stéphanie Clarke found the surprising answer [Di Virgilio & Clarke (1997)]: all of a sudden, long-distance axons allow the visual information to be dispatched to virtually any corner of the brain. From the right inferior temporal lobe, massive and direct connections project, in a single synaptic step, to distant areas of the associative cortex, including those in the opposite hemisphere. The projections concentrate in the inferior frontal cortex (Broca’s area) and in the temporal association cortex (Wernicke’s area). Both regions are key nodes of the human language network — and at this stage, therefore, words begin to be attached to the incoming visual information.

Because these regions themselves participate in a broader network of workspace areas, the information can now be further disseminated to the entire inner circle of higher-level executive systems; it can circulate in a reverberating assembly of active neurons. According to my theory, access to this dense network is all that is needed for the incoming information to become conscious.

With this (and more) in place, Dehaene finally describes what his theory says a particular conscious state is:

[My theory] proposes that a conscious state is encoded by the stable activation, for a few tenths of a second, of a subset of active workspace neurons. These neurons are distributed in many brain areas, and they all code for different facets of the same mental representation. Becoming aware of the Mona Lisa involves the joint activation of millions of neurons that care about objects, fragments of meaning, and memories.

During conscious access, thanks to the workspace neurons’ long axons, all these neurons exchange reciprocal messages, in a massively parallel attempt to achieve a coherent and synchronous interpretation. Conscious perception is complete when they converge. The cell assembly that encodes this conscious content is spread throughout the brain: fragments of relevant information, each distilled by a distinct brain region, cohere because all the neurons are kept in sync, in a top-down manner, by neurons with long-distance axons.

Neuronal synchrony may be a key ingredient. There is growing evidence that distant neurons form giant assemblies by synchronizing their spikes with ongoing background electrical oscillations. If this picture is correct, the brain web that encodes each of our thoughts resembles a swarm of fireflies that harmonize their discharges according to the overall rhythm of the group’s pattern. In the absence of consciousness, moderate-size cell assemblies may still synchronize locally — for instance, when we unconsciously encode a word’s meaning inside the language networks of our left temporal lobe. However, because the prefrontal cortex does not gain access to the corresponding message, it cannot be broadly shared and therefore remains unconscious.

Let us conjure one more mental image of this neuronal code for consciousness. Picture the sixteen billion cortical neurons in your cortex. Each of them cares about a small range of stimuli. Their sheer diversity is flabbergasting: in the visual cortex alone, one finds neurons that care about faces, hands, objects, perspective, shape, lines, curves, colors, 3-D depth… Each cell conveys only a few bits of information about the perceived scene. Collectively, though, they are capable of representing an immense repertoire of thoughts. The global workspace model claims that, at any given moment, out of this enormous potential set, a single object of thought gets selected and becomes the focus of our consciousness. At this moment, all the relevant neurons activate in partial synchrony, under the aegis of a subset of prefrontal cortex neurons.

It is crucial to understand that, in this sort of coding scheme, the silent neurons, which do not fire, also encode information. Their muteness implicitly signals to others that their preferred feature is not present or is irrelevant to the current mental scene. A conscious content is defined just as much by its silent neurons as by its active ones.
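
In keeping with my general preference for stating theories in code, here is a deliberately crude sketch of my own (not a model from Baars or Dehaene) of the bare “global broadcast” idea in the passages quoted above: several specialized processors, one of whose representations is selected and then made available to all the others, including a verbal-report module.

```python
# Deliberately crude sketch of the bare "global workspace" idea (mine; not a model
# from Baars or Dehaene): specialized processors compete for access, one representation
# wins, and the winner is broadcast to every processor, including verbal report.

processors = {
    "vision":  {"content": "a red apple on the table", "salience": 0.9},
    "hearing": {"content": "faint traffic noise",      "salience": 0.2},
    "memory":  {"content": "I skipped lunch",          "salience": 0.6},
}

def global_broadcast(processors):
    # "Ignition": the most salient representation gains access to the workspace...
    winner = max(processors, key=lambda name: processors[name]["salience"])
    content = processors[winner]["content"]
    # ...and is made globally available to every processor ("broadcast").
    return winner, content, {name: content for name in processors}

def verbal_report(content):
    # Report is just one more consumer of whatever the workspace currently holds.
    return f"I am aware of {content}."

winner, content, shared = global_broadcast(processors)
print(winner, "->", verbal_report(content))  # -> vision -> I am aware of a red apple on the table.
```

Of course, the ease of writing this sketch is itself a version of my core complaint: a few dictionaries and a max() call arguably satisfy the verbal description, which suggests the verbal description is not yet precise enough to do much explanatory work.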

287.However, I do not agree with those who argue that higher-order theories strongly imply that consciousness is rare. I suspect that even if consciousness is a fairly complicated, self-representational, higher-order phenomenon, it might still be implemented by small insect ganglia. The question is: is it?

288.My preferred short explanation of attention schema theory (AST) is the one given in Graziano (2016):

One useful way to introduce the theory is through the hypothetical challenge of building a robot that asserts it is subjectively aware of an object and describes its awareness in the same ways that we do…

[The figure below] shows a robot looking at an apple. What information should be incorporated into its brain? First, we give it information about the apple [box A]. Light enters the eye, is transduced into signals, and the information is processed to construct a description of the apple that includes shape, colour, size, location, and other attributes. This representation, or internal model, is constantly updated as new signals arrive. The model is schematic. It is a simplified proxy for the real thing. Given the limited processing resources in the brain, internal models are necessarily incomplete and simplified… Here we give our robot just such a simplified, schematic internal model of an apple.

[Illustration: Graziano’s robot, Box A. Produced by Weni Pratiwi for the Open Philanthropy Project; licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 3.0 United States License. Based on figure 1A from Graziano (2016).]

Is the robot in [box A] aware of the apple? In one sense, yes. The term ‘objective awareness’ is sometimes used to indicate that the information has gotten in and is being processed… The machine in [box A] is objectively aware of the apple. But does it have a subjective experience?

To help explore that question we add a user interface, the linguistic processor shown in [box B]. Like a search engine, it can take in a question, search the internal model, and answer. We ask, “What’s there?” It answers, “An apple.” We ask, “What are the properties of the apple?” It answers, “It’s red, it’s round, it’s at that location.” It can provide those answers because it contains that information.

[Illustration: Graziano’s robot, Box B. Produced by Weni Pratiwi for the Open Philanthropy Project; licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 3.0 United States License. Based on figure 1B from Graziano (2016).]

[Box B] could represent an entire category of theory about consciousness, such as the global workspace theory (Baars, 1988; Newman and Baars, 1993). In that theory, consciousness occurs when information is broadcast globally throughout the brain. In [box B], the sensory representation of the apple is broadcast globally, and as a result the cognitive and linguistic machinery has access to information about the apple. The robot can therefore report that the apple is present.

But [box B] remains an incomplete account of how a machine claims to be conscious of an apple. Consider asking, “Are you aware of the apple?” The search engine searches the internal model and finds no answer. It finds information about an apple, but no information about what “awareness” is or whether it has any of it, and no information about what the quantity “you” is. It cannot answer the question…

Perhaps we can improve the machine. In [box C], a second internal model is added, a model of the self. This new internal model, like the model of the apple, is a constantly updated set of information. It might include the body schema, the brain’s model of the physical self and how it moves. The self model might also include autobiographical memory and general information about personality, beliefs, and goals. If we ask the robot in [box C], “Tell us about yourself?” it can now answer. It has been given the construct of self. It might reply, “I’m a person, I’m standing right here, I’m so tall, so wide, I can move, I grew up in Buffalo, I’m a nice guy,” and so on, as its cognitive search engine accesses its internal models. [Box C] could represent an entire category of theory in which consciousness depends on self-knowledge…

[Illustration: Graziano’s robot, Box C. Produced by Weni Pratiwi for the Open Philanthropy Project; licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 3.0 United States License. Based on figure 1C from Graziano (2016).]

However, once again this account is incomplete. We can ask the machine in [box C], “What is the mental relationship between you and the apple?” The search engine accesses the two available internal models and finds no answer. It finds plenty of information about the self and plenty of separate information about the apple, but no information about a mental relationship between them — no information about what a mental relationship is. Equipped only with the components shown in [box C], the machine cannot even parse the question.

To provide a computational relationship between the self and the apple, we now give the robot an internal model of its own attention process. Attention can switch from object to object, including internal objects such as memories, selecting the temporary objects of its “gaze” for deeper processing than the other things in (e.g.) the visual field. In box D, we add an internal schematic model of this attention process.

[Illustration: Graziano’s robot, Box D. Produced by Weni Pratiwi for the Open Philanthropy Project; licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 3.0 United States License. Based on figure 1D from Graziano (2016).]

Graziano continues:

First consider what information might be contained in an internal model of attention. How would it describe attention? Presumably, like the internal model of the apple, it would describe useful, functional, abstracted properties of attention, not microscopic physical details. It might describe attention as a mental possession of something. It might describe attention as something that empowers oneself to react. It might describe attention as something located inside oneself, belonging to oneself, and not directly observable to the outside world… But this internal model would not contain information about neurons, competing electrochemical signals, or other physical nuts and bolts that the brain has no pragmatic need to know…

We ask the robot in [box D], “What is the mental relationship between you and the apple?” The search engine accesses its internal models and reports the available information. It says, “I have a mental possession of the apple.” The answer is promising and we probe deeper. “Tell us more about this mental possession. What are its physical properties?” For clarity, we also ask, “Do you know what physical properties are?” The machine can answer “Yes” because it has a body schema that describes the physical body and it has an internal model of an apple that describes a physical object. Reporting the information available to it, it might say (if it has a good vocabulary), “I know what physical properties are. But my mental possession of the apple, the mental possession in-and-of-itself, has no physically describable properties. It’s an essence located inside me. Like my arms and legs are physical parts of me, I also have a non-physical or metaphysical part of me. It’s my mind taking hold of things — the colour, the shape, the location. My subjective self seizes those things.” The machine is describing… attention, and the description sounds semi-magical only because it is vague on the details and the mechanistic basis of attention.

Because we built the robot, we know why it gives that answer. It’s a machine accessing internal models. Whatever information is contained in those models it reports to be true. That information lies deeper than language, deeper than higher cognition. The machine insists it has subjective awareness because, when its internal models are searched, they return that information. Introspection will always return that answer. In the same way, it reports that the apple has a colour even though in reality the apple has a reflectance spectrum, not colour. Just as in Metzinger’s ego tunnel (Metzinger, 2010), this brain is captive to the schematic information in its internal models.

…

The logic of the theory can be summarized in four points. One, the brain constructs internal models of important objects and processes in the world. Therefore, two, the brain constructs an internal model of its own process of attention. Three, internal models are never accurate descriptions. They are incomplete and schematic, due to a trade-off between accuracy and processing resources. Therefore, four, a brain with an internal model of attention, even if that brain has a good enough linguistic and cognitive capacity to talk about it, would not report its attention in a physically accurate, detailed, or mechanistic manner. Instead it would claim to have something physically incoherent: a subjective mental experience.
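To make these four points a bit more concrete, here is a deliberately simplistic Python sketch of my own; it is not code from Graziano, and every model name and reply string in it is hypothetical. The machine can answer questions only by searching its schematic internal models, including a coarse model of its own attention, so its self-report describes a “mental possession” with no mechanistic detail:

```python
# Toy illustration (my own, not Graziano's): a machine whose only route to
# self-report is a search over schematic internal models, including a
# deliberately incomplete model of its own attention.

INTERNAL_MODELS = {
    "apple": {"kind": "physical object", "color": "red", "location": "on the table"},
    "self": {"kind": "physical body", "origin": "grew up in Buffalo"},
    "attention": {
        # The attention schema omits mechanism: no neurons, no competing
        # signals, just "a possession located inside the self."
        "of": "apple",
        "located": "inside self",
        "physical_details": None,
    },
}

def answer(question: str) -> str:
    """A crude 'cognitive search engine' over the internal models."""
    attention = INTERNAL_MODELS["attention"]
    if "mental relationship" in question:
        return f"I have a mental possession of the {attention['of']}."
    if "physical properties" in question and attention["physical_details"] is None:
        # Nothing mechanistic is stored, so the report comes out sounding
        # non-physical rather than detailed and mechanistic.
        return ("My mental possession of the apple has no physically describable "
                "properties; it is an essence located inside me.")
    return "I cannot parse that question from my internal models."

print(answer("What is the mental relationship between you and the apple?"))
print(answer("What are its physical properties?"))
```

The sketch makes no claim to capture attention itself; it only illustrates the structural point that a system limited to reporting the contents of a schematic attention model will insist on something it cannot describe mechanistically.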

289.See Graziano (2013).

Note that Graziano’s theory is vulnerable to objections over “vague psychological language” like those I raise elsewhere. For example, Brian Tomasik suggests (here) that much of Graziano’s broad theory seems to be satisfied by the Windows Task Manager, or a slightly modified version of it. I doubt that Graziano thinks the Windows Task Manager is conscious, but if not, Tomasik’s analogy may prompt Graziano to state his theory more precisely, in a way that excludes the Task Manager from consciousness.

290.See Drescher (2006), ch. 2, and notes from my conversation with Gary Drescher.

Many readers might wonder what a “gensym” is. It is a function for creating symbols in the Lisp programming language, though the term “gensym” is also sometimes used to refer to the generated symbol itself. I have not used Lisp before, but Daniel Dewey, a Program Officer for the Open Philanthropy Project, offers this brief explanation:

Symbols are a Lisp data type, like numbers or strings. For Drescher’s purposes, the important feature of symbols is that the only acceptable operation on a symbol is checking whether it’s identical with, or different from, another symbol. Instances of other data types, like numbers or strings, can be checked for equality (1 != 2, or “Gary Drescher” = “Gary Drescher”), but can also be operated on in other ways, exposing additional information they contain; for example, numbers can be added or multiplied, and strings can be broken up into their component characters. Symbols don’t contain any other information, and can’t be added, multiplied, or split apart; they can only be compared. A program can tell that the symbols ‘italic’ and ‘italic’ are identical, and that the symbols ‘italic’ and ‘bold’ are different, but it can’t get any other information about ‘italic’ or ‘bold’. “Gensym” is a function that generates a new symbol that’s guaranteed not to be identical with any symbol that’s been generated so far. (Because some versions of Lisp are designed to be useful instead of philosophically pure, some versions of Lisp don’t totally follow these properties, and add more information to symbols that programs can access, but that’s not relevant to Drescher’s analogy.)
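For readers who, like me, have not used Lisp, a rough Python analogue of Dewey’s description (my own illustration, not his) is a bare object() instance: it exposes no content a program would normally use, and the natural thing to do with such a value is to compare it for identity with another one.

```python
# Rough Python analogue of Lisp symbols and gensym (illustrative only).

def gensym():
    """Return a fresh 'symbol' guaranteed to differ from every other one."""
    return object()  # a bare object carries no user-visible content

italic = gensym()
bold = gensym()

print(italic is italic)  # True  -- a symbol is identical with itself
print(italic is bold)    # False -- distinct symbols merely compare as different
# Unlike numbers or strings, these 'symbols' cannot be added, multiplied,
# or split into parts; identity comparison is all we use them for.
```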

Note that the core idea of Drescher’s “qualia as gensyms” account was described at least as early as Chalmers (1990):

Very briefly, here is what I believe to be the correct account of why we think we are conscious, and why it seems like a mystery. The basic notion is that of pattern processing. This is one of the things that the brain does best. It can take raw physical data, usually from the environment but even from the brain itself, and extract patterns from these. In particular, it can discriminate on the basis of patterns. The original patterns are in the environment, but they are transformed on their path through neural circuits, until they are represented as quite different patterns in the cerebral cortex. This process can also be represented as information flow (not surprisingly), from the environment into the brain. The key point is that once the information flow has reached the central processing portions of the brain, further brain function is not sensitive to the original raw data, but only to the pattern (to the information!) which is embodied in the neural structure.

Consider color perception, for instance. Originally, a spectral envelope of light-wavelengths impinges upon our eyes. Immediately, some distinctions are collapsed, and some pattern is processed. Three different kinds of cones abstract out information about how much light is present in various overlapping wavelength-ranges. This information travels down the optic nerve (as a physical pattern, of course), where it gets further transformed by neural processing into an abstraction about how much intensity is present on what we call the red-green, yellow-blue, and achromatic scales. What happens after this is poorly-understood, but there is no doubt that by the time the central processing region is reached, the pattern is very much transformed, and the information that remains is only an abstraction of certain aspects of the original data.

Anyway, here is why color perception seems strange. In terms of further processing, we are sensitive not to the original data, not even directly to the physical structure of the neural system, but only to the patterns which the system embodies, to the information it contains. It is a matter of access. When our linguistic system (to be homuncular about things) wants to make verbal reports, it cannot get access to the original data; it does not even have direct access to neural structure. It is sensitive only to pattern. Thus, we know that we can make distinctions between certain wavelength distributions, but we do not know how we do it. We’ve lost access to the original wavelengths – we certainly cannot say “yes, that patch is saturated with 500-600 nm reflections”. And we do not have access to our neural structure, so we cannot say “yes, that’s a 50 Hz spiking frequency”. It is a distinction that we are able to make, but only on the basis of pattern. We can merely say “Yes, that looks different from that.” When asked “How are they different?”, all we can say is “Well, that one’s red, and that one’s green”. We have access to nothing more – we can simply make raw distinctions based on pattern – and it seems very strange.

So this is why conscious experience seems strange. We are able to make distinctions, but we have direct access neither to the sources of those distinctions, or to how we make the distinctions. The distinctions are based purely on the information that is processed. Incidentally, it seems that the more abstract the information-processing – that is, the more that distinctions are collapsed, and information recoded – the stranger the conscious experience seems. Shape perception, for instance, strikes us as relatively non-strange; the visual system is extremely good at preserving shape information through its neural pathways. Color and taste are strange indeed, and the processing of both seems to involve a considerable amount of recoding.

The story for “internal perception” is exactly the same. When we reflect on our thoughts, information makes its way from one part of the brain to another, and perhaps eventually to our speech center. It is to only certain abstract features of brain structure that the process is sensitive. (One might imagine that if somehow reflection could be sensitive to every last detail of brain structure, it would seem very different.) Again, we can perceive only via pattern, via information. The brute, seemingly non-concrete distinctions thus entailed are extremely difficult for us to understand, and to articulate. That is why consciousness seems strange, and that is why the debate over the Mind-Body Problem has raged for thousands of years.
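A crude way to picture this point in code (my own toy example, not anything from Chalmers 1990): a “reporting” module that receives only an abstracted label can distinguish two stimuli, but it has no access to the raw wavelength data or to the processing that produced the label, so it can say nothing more informative than that they look different.

```python
# Toy illustration (my own) of access-only-to-pattern: the downstream report
# sees only a coarse label, never the raw spectral data.

def early_vision(peak_wavelength_nm: float) -> str:
    """Collapse raw spectral data into a coarse category label (approximate bands)."""
    if 620 <= peak_wavelength_nm <= 750:
        return "red"
    if 495 <= peak_wavelength_nm < 570:
        return "green"
    return "other"

def verbal_report(label_a: str, label_b: str) -> str:
    """The 'linguistic system' can compare labels but cannot say how they differ."""
    if label_a == label_b:
        return "Those look the same to me."
    return f"That one's {label_a}, and that one's {label_b}. They just look different."

# The reporting function never sees 650.0 or 530.0, only the labels.
print(verbal_report(early_vision(650.0), early_vision(530.0)))
```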

A related idea, cast in terms of neural networks, can be found in Loosemore (2012), which Tomasik (2014) summarizes like this:

Loosemore presents what I consider a biologically plausible sketch of connectionist concept networks in which a concept’s meaning is assessed based on related concepts. For instance, “chair” activates “legs”, “back”, “seat”, “sitting”, “furniture”, etc. (p. 294). As we imagine lower-level concepts, the associations that get activated become more primitive. At the most primitive level, we could ask for the meaning of something like “red”. Since our “red” concept node connects directly to sensory inputs, we can’t decompose “red” into further understandable concepts. Instead, we “bottom out” and declare “red” to be basic and ineffable. But our concept-analysis machinery still claims that “red” is something — namely, some additional property of experience. This leads us to believe in qualia as “extra” properties that aren’t reducible.
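Tomasik’s summary can be pictured as a simple concept graph (again, my own toy example rather than Loosemore’s actual connectionist model): querying a concept returns its associated concepts, until the query bottoms out at a sensory primitive with no further decomposition, which therefore gets reported as basic and ineffable.

```python
# Toy concept graph (my own illustration) in the spirit of Tomasik's summary
# of Loosemore (2012): concepts decompose into related concepts until we hit
# a sensory primitive that cannot be decomposed further.

CONCEPTS = {
    "chair": ["legs", "back", "seat", "sitting", "furniture"],
    "legs": ["support", "stand"],
    "red": [],  # connects directly to sensory input; nothing to decompose
}

def explain(concept: str) -> str:
    associates = CONCEPTS.get(concept, [])
    if associates:
        return f"'{concept}' is understood via: {', '.join(associates)}"
    # Bottoming out: the concept-analysis machinery still insists the concept
    # is *something*, so it gets reported as basic and ineffable.
    return f"'{concept}' is basic and ineffable"

print(explain("chair"))
print(explain("red"))
```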

291.As Carruthers (2016) puts it, conscious states will seem “ineffable” or “indescribable” because those states

…have fine-grained contents that can slip through the mesh of any conceptual net. We can always distinguish many more shades of red than we have concepts for, or could describe in language (other than indexically — e.g., ‘That shade’).

292.Armstrong (1968):

To produce [the “headless woman”] illusion, a woman is placed on a suitably illuminated stage with a dark background and a black cloth is placed over her head. It looks to the spectators as if she has no head. The spectators cannot see the woman’s head. But they gain the impression that they can see that the woman has not got a head. (Cf. ‘I looked inside, and saw that he was not there.’) Unsophisticated spectators might conclude that the woman did not in fact have a head.

What the example shows is that, in certain cases, it is very natural for human beings to pass from something that is true: ‘I do not perceive that X is Y’, to something that may be false: ‘I perceive that X is not Y’. We have here one of those unselfconscious and immediate movements of the mind of which Hume spoke, and which he thought to be so important in our mental life.

It can now be suggested by the Materialist that we tend to pass from something that is true:

I am not introspectively aware that mental images are brain-processes

to something that is false:

I am introspectively aware that mental images are not brain-processes.

…Does ordinary experience, then, involve the illusion of the truth of anti-materialism? The Materialist can now admit that it does involve such an illusion, but urge that the illusion is no more than the illusion involved in the “headless woman”: the taking of an absence of awareness of X to be an awareness of the absence of X.

See also Smart (2006), a kind of follow-up to Armstrong’s article.

293.Kammerer’s explanation of his “theoretical introspection hypothesis” (TIH) is several pages long (and very much worth reading), but Frankish (2016c) provides a simplified but helpful summary:

Introspection is informed by an innate and modular theory of mind and epistemology, which states that (a) we acquire perceptual information via mental states — experiences — whose properties determine how the world appears to us, and (b) experiences can be fallacious, a fallacious experience of A being one in which we are mentally affected in the same way as when we have a veridical experience of A, except that A is not present.

Here, I’ll interject to add that Kammerer does not require that our innate “theories” of mind and epistemology (which inform our introspection) represent or “state” (a) and (b). For example, one could give a dispositionalist account of these innate theories of mind and epistemology which nevertheless roughly captures statements (a) and (b). If anything like Kammerer’s account is true, I would (personally) expect it to be a dispositionalist account. Anyway, back to Frankish’s summary:

Given this theory, Kammerer notes, it is incoherent to suppose that we could have a fallacious experience [i.e. an illusory experience] of an experience, E. For that would involve being mentally affected in the same way as when we have a veridical experience of E, without E being present. But when we are having a veridical experience of E, we are having E (otherwise the experience wouldn’t be veridical). So, if we are mentally affected in the same way as when we are having a veridical experience of E, then we are having E. So E is both present and not present, which is contradictory…

Kammerer proposes that this explains the peculiar hardness of the illusion problem. The illusionist thesis cannot be coherently articulated using our everyday concept of illusion, which is rooted in our naïve concept of fallacious experience. Moreover, if the naïve theory Kammerer sketches does inform our introspective activity, then we shall not be able to form any imaginative conception of what it would be like for illusionism to be true. Hence the common claim that, where consciousness is concerned, appearance is reality. As Kammerer stresses, this does not mean that illusionism actually is incoherent. It simply means that in order to state it we must employ a technical concept of illusion — as, say, a cognitively impenetrable, non-veridical mental representation that is systematically generated in certain circumstances.
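One way to regiment the contradiction in Frankish’s summary (the notation is mine, not Kammerer’s or Frankish’s): write F(A) for “I have a fallacious experience of A,” M(A) for “I am mentally affected as when I have a veridical experience of A,” and P(A) for “A is present.”

```latex
% My own regimentation of Frankish's summary, not Kammerer's or Frankish's notation.
\begin{align*}
&F(A) \leftrightarrow M(A) \wedge \neg P(A)
  && \text{(the na\"ive concept of a fallacious experience)} \\
&M(E) \rightarrow P(E)
  && \text{(a veridical experience of $E$ involves having $E$)} \\
&F(E) \rightarrow P(E) \wedge \neg P(E)
  && \text{(so a fallacious experience of $E$ is contradictory)}
\end{align*}
```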

Frankish notes that one might develop a similar illusionist account of our sense that introspective acquaintance is “direct”:

Of course, even if Kammerer is right about the source of our intuitive resistance to illusionism, this would not show that illusionism is true, though it would help to dispel one common objection to it. Realists will say that phenomenality is not an illusion even in a technical sense: our relation to our phenomenal properties is one of direct acquaintance, which does not depend on potentially fallible representational processes. Perhaps Kammerer could employ the strategy again here, arguing that our concept of introspective acquaintance is also a theoretical one.

Kammerer’s account depends on a “theory theory” of introspection, according to which introspection interprets its (mental) objects through a theory or theories, e.g. a theory of mind. Kammerer’s example of such a “theory theory” of introspection is that of Nichols & Stich (2003), illustrated with “boxological” diagrams like this:

Nichols-Stich boxological diagram
Illustration produced by Weni Pratiwi for the Open Philanthropy Project. Licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 3.0 United States License. Image is based on figure 6.4 from Nichols & Stich (2003).

Personally, I expect introspection will turn out to be a big mess of competing processes, a la Schwitzgebel (2012):

My thesis is: introspection is not a single process but a plurality of processes. It’s a plurality both within and between cases: most individual introspective judgments arise from a plurality of processes (that’s the within-case claim), and the collection of processes issuing in introspective judgments differs from case to case (that’s the between-case claim). Introspection is not the operation of a single cognitive mechanism or small collection of mechanisms. Introspective judgments arise from a shifting confluence of many processes, recruited opportunistically.

The following analogy might be helpful. Suppose you’re at a psychology conference or a high school science fair and you’re trying to quickly take in a poster. You are not equipped with a dedicated faculty of poster-taking-in. Rather, you opportunistically deploy a variety of processes with the aim of getting the gist of the poster: you look at the poster — or perhaps only listen to a recital of portions of it, if you’re in the mood or visually impaired — you attend to what the poster’s author is saying about it; you follow out implications, charitably rejecting some interpretations of the poster’s content as too obviously foolish; you think about what it makes sense to claim given the social and scientific context and other work by the author or the author’s advisor, if you know any; you pose questions and assess the author’s responses both for overt content and for emotional flavor. Although the cognitive systems involved range widely and are not dedicated just to taking in posters, not just any activity counts as taking in a poster — one’s judgments about the poster must aim to reflect a certain kind of sensitivity to its contents. Likewise for introspection, I will suggest: the cognitive activities range widely and vary between cases — that is the main claim I will defend — and yet, as I will suggest near the end of this essay, it wouldn’t be natural to call a judgment introspective if it weren’t formed with the aim or intention of reflecting a certain kind of sensitivity to the target mental state.

Here, then, is Schwitzgebel’s own “boxological diagram” of introspection as a big, jumbled mess, which I found amusing:

Image © Oxford University Press. Image is figure 1.1 from Introspection and Consciousness, edited by Declan Smithies and Daniel Stoljar (2012), on p. 40. Used by permission of Oxford University Press.

If one has a Schwitzgebel-like model of how introspection works, this poses a challenge to a Kammerer-like explanation of why it seems to us that consciousness, uniquely, cannot be an illusion. However, it could still be the case that something like Kammerer’s account is an important piece of the “big mess” of introspection, and thereby goes a long way toward explaining what it aims to explain concerning phenomenal consciousness.

294.By “toy program,” I have in mind something perhaps 5x-100x as large and complicated as Brian Tomasik’s “Simple Program to Illustrate the Hard Problem of Consciousness.”

295.See Kotseruba et al. (2016) for an overview of cognitive architectures.

296.If these first three steps of the project were described in a book, the structure of the exposition might be similar to that of Baars (1988), chs. 2-9 — but much longer, and with links to code for every version of the program/model.

297.Another caveat about this project concerns the moral implications of trying to build potentially-conscious machines (Metzinger 2010, pp. 194-196):

Imagine you are a member of an ethics committee considering scientific grant applications. One says:

We want to use gene technology to breed [intellectually disabled] human infants. For urgent scientific reasons, we need to generate human babies possessing certain cognitive, emotional, and perceptual deficits. This is an important and innovative research strategy, and it requires the controlled and reproducible investigation of the [intellectually disabled] babies’ psychological development after birth. This is not only important for understanding how our own minds work but also has great potential for healing psychiatric diseases. Therefore, we urgently need comprehensive funding.

No doubt you will decide immediately that this idea is not only absurd and tasteless but also dangerous. One imagines that a proposal of this kind would not pass any ethics committee in the democratic world. The point of this thought experiment, however, is to make you aware that the unborn [conscious machines] of the future would have no champions on today’s ethics committees. The first machines satisfying a minimally sufficient set of conditions for conscious experience and selfhood would find themselves in a situation similar to that of the genetically engineered [and intellectually disabled] human infants. Like them, these machines would have all kinds of functional and representational deficits — various disabilities resulting from errors in human engineering. It is safe to assume that their perceptual systems — their artificial eyes, ears, and so on—would not work well in the early stages. They would likely be half-deaf, half-blind, and have all kinds of difficulties in perceiving the world and themselves in it — and if they were true [conscious machines], they would, ex hypothesi, also be able to suffer.

If they had a stable bodily self-model, they would be able to feel sensory pain as their own pain. If their postbiotic self-model was directly anchored in the low-level, self-regulatory mechanisms of their hardware — just as our own emotional self-model is anchored in the upper brainstem and the hypothalamus — they would be consciously feeling selves. They would experience a loss of homeostatic control as painful, because they had an inbuilt concern about their own existence. They would have interests of their own, and they would subjectively experience this fact. They might suffer emotionally in qualitative ways completely alien to us or in degrees of intensity that we, their creators, could not even imagine. In fact, the first generations of such machines would very likely have many negative emotions, reflecting their failures in successful self-regulation because of various hardware deficits and higher-level disturbances. These negative emotions would be conscious and intensely felt, but in many cases we might not be able to understand or even recognize them.

Take the thought experiment a step further. Imagine these postbiotic [conscious machines] as possessing a cognitive self-model — as being intelligent thinkers of thoughts. They could then not only conceptually grasp the bizarreness of their existence as mere objects of scientific interest but also could intellectually suffer from knowing that, as such, they lacked the innate “dignity” that seemed so important to their creators. They might well be able to consciously represent the fact of being only second-class sentient citizens, alienated postbiotic selves being used as interchangeable experimental tools. How would it feel to “come to” as an advanced artificial subject, only to discover that even though you possessed a robust sense of selfhood and experienced yourself as a genuine subject, you were only a commodity?

The story of the first artificial [conscious machines], those postbiotic phenomenal selves with no civil rights and no lobby in any ethics committee, nicely illustrates how the capacity for suffering emerges along with the phenomenal [self]… It also presents a principled argument against the creation of artificial consciousness as a goal of academic research…

Schwitzgebel & Garza (2015) (see also this interview) raise similar concerns, and as a result, propose what they call an “Excluded Middle Policy” for AI development:

Although it seems reasonable to assume that we have not yet developed an artificial entity with a genuinely conscious stream of experience that merits substantial moral consideration, our poor understanding of consciousness raises the possibility that we might some day create an artificial entity whose status as a genuinely conscious being is a matter of serious dispute. This entity, we might imagine, says “ow!” when you strike its toe, says it enjoys watching sports on television, professes love for its friends — and it’s not obvious that these are simple pre-programmed responses… but neither is it obvious that these responses reflect the genuine feelings of a conscious being. The world’s most knowledgeable authorities disagree, dividing into believers (yes, this is real conscious experience, just like we have!) and disbelievers (no way, you’re just falling for tricks instantiated in a dumb machine).

Such cases raise the possibility of moral catastrophe. If the disbelievers wrongly win, then we might perpetrate slavery and murder without realizing we are doing so. If the believers wrongly win, we might sacrifice real human interests for the sake of artificial entities who don’t have interests worth the sacrifice.

…we draw two lessons. First, if society continues on the path toward developing more sophisticated artificial intelligence, developing a good theory of consciousness is a moral imperative. Second, if we do reach the point where we can create entities whose moral status is reasonably disputable, we should consider an Excluded Middle Policy — that is, a policy of only creating AIs whose moral status is clear, one way or the other.

Such considerations might constitute another reason to leave some parts of a cognitive architecture (designed for the purpose of studying consciousness) as “black boxes,” and to never actually run the cognitive architecture — at least, for those with certain philosophical views, who also aspire to radical empathy.

298.But see also Snowden et al. (2012), ch. 11; Goodale & Ganel (2016); Kravitz et al. (2011). For wide-ranging discussions of this topic, see the essays in Gangopadhyay et al. (2010).

299.G&M-13, ch. 4. For more on the evolution of visual systems, see Milner & Goodale (2006), ch. 1.

300.This quote from Milner & Goodale (2006), p. 65.

Another piece of evidence sometimes cited (e.g. by Wolpert 2011) in favor of the view that brains are primarily for controlling behavior is the fact that a tunicate (“sea squirt”), upon swimming to and attaching itself to a suitable rock (and thus no longer needing to plan and control its movement), digests its own brain. (Dennett 1991, p. 177, humorously remarks: “When it finds its spot and takes root, it doesn’t need its brain anymore, so it eats it! It’s rather like getting tenure.”)

Wolpert and Dennett don’t cite any sources, but see e.g. Mackie & Burighel (2005), p. 169; Cloney (1982).

301.Not her real name. The next several paragraphs draw from, and quote from, G&M-13, ch. 1.

302.Image is figure 1.3 from Sight Unseen, Second Edition by Goodale and Milner (2013), on p. 6.

303.Technically, G&M-13 doesn’t say whether it was G&M, or some other experimenters, who showed Dee the flashlight, and I haven’t bothered to find out for sure. For simplicity, I’ve simply guessed that it was G&M who did this.

304.Illustration produced by Weni Pratiwi for the Open Philanthropy Project. Licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 3.0 United States License. Image is based on G&M-13, figure 1.4.

305.Illustration produced by Weni Pratiwi for the Open Philanthropy Project. Licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 3.0 United States License. Image is based on G&M-13, figure 1.9.

306.Image is figure 5.2 from The Visual Brain in Action, Second Edition by Milner and Goodale (2006), on p. 127.

307.For the next several passages, I am now following the discussion in, and quoting from, G&M-13, ch. 2.

308.Image is figure 2.2 from Sight Unseen, Second Edition by Goodale and Milner (2013), on p. 20. That page claims that the image is reproduced from Goodale et al. (2001) in Nature, but that seems to be incorrect.

309.This wasn’t a deficit in Dee’s ability to rotate the card. G&M-13 report:

We were able to rule out that possibility by asking her to imagine a slot at different orientations. Once she had done this, she had no difficulty rotating the card to show us the orientation she’d been asked to imagine. It was only when she had to look at a real slot and match its orientation that her deficit appeared.

310.G&M-13, Figure 2.2.

311.Illustration produced by Weni Pratiwi for the Open Philanthropy Project. Licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 3.0 United States License. Image is based on G&M-13, figure 2.3.

312.Of course, for a square Efron block, width-wise and length-wise are the same.

313.Heider (2000). For a detailed account of another patient, “John,” with an overlapping but non-identical set of symptoms, see Humphreys & Riddoch (2013).

314.G&M-13, ch. 3.

315.This section draws from, and quotes from, G&M-13, ch. 4.

316.This section draws from G&M-13, chs. 7-8.

317.Full citation for the source of this image is:

Y. Hu and M. A. Goodale, “Grasping after a Delay Shifts Size-Scaling from Absolute to Relative Metrics,” Journal of Cognitive Neuroscience, 12:5 (September, 2000), pp. 856-868. © 2000 by the Massachusetts Institute of Technology, published by the MIT Press. http://www.mitpressjournals.org/doi/abs/10.1162/089892900562462

318.Illustration produced by Weni Pratiwi for the Open Philanthropy Project. Licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 3.0 United States License. Image is based on G&M-13, figure 8.1.

319.See G&M-13, ch. 8; Goodale & Ganel (2016).

320.Here again, I’m quoting from G&M-13, ch. 4.

321.For a review, see Milner & Goodale (2006), pp. 42-66, plus a few updates in ch. 8.

There is some neurophysiological evidence against G&M’s proposed functional dissociation, though (Cardoso-Leite & Gorea 2010):

…accumulating neurophysiological evidence was also pointing to many instances where neurons and cortical sites in the ventral and dorsal streams behave contrary to predictions of the dissociation theory. For example, both neurophysiological and neuroimaging studies show evident dorsal stream responsiveness to stimulus features supposed to be processed in the ventral stream such as shape (e.g., Konen and Kastner, 2008; Lehky and Sereno, 2007) and color (e.g., Claeys et al., 2004; Toth and Assad, 2002). Equivalently some prototypical dorsal processing features such as motion are equally well processed in the ventral stream (e.g., Gur and Snodderly, 2007). Also, while the temporal processing characteristics of the two streams have also been cited in favor of their functional dissociation (with magnocellular neurons in dorsal areas responding earlier to visual stimulation than the parvocellular neurons in the ventral stream; e.g., Nowak and Bullier, 1997; Rossetti et al., 2003), the significance of such latency differences has been obscured by numerous reports that visual information processing is not strictly feedforward (as supposed in the classic view) so that frontal areas may respond to visual stimuli at about the same time as V1 (Lamme and Roelfsema, 2000; Schmolesky et al., 1998; Zanon et al., 2009). Hence, efferent signals from the frontal cortex may modulate processing in both the dorsal and ventral extrastriate areas (Moore and Armstrong, 2003; Moore and Fallah, 2001, 2004).

322.Single-cell recordings currently require cutting a hole in the skull, and are thus only considered for humans in cases when a hole in the skull must be made for clinical reasons.

323.For reviews, see Quian Quiroga (2012) and Rey et al. (2014).

324.See e.g. Milner & Goodale (2006), ch. 8.

325.For examples of partial critiques of the two streams theory, see Briscoe & Schwenkler (2015); Freud et al. (2016); Cardoso-Leite & Gorea (2010); Schenk & McIntosh (2009); Clark (2009); Gorea (2015); Shepherd (2015), sec. 4.1; Hesse et al. (2011).

326.Briscoe & Schwenkler (2015); Cardoso-Leite & Gorea (2010); Clark (2009).

327.Freud et al. (2016); Cardoso-Leite & Gorea (2010).

328.E.g. see Klein (2010).

329.E.g. see Eklund et al. (2016).

330.Sneddon (2002), Sneddon et al. (2003), and Ashley et al. (2007) for rainbow trout, and Gentle (2011) and Egger et al. (2014) for chickens. On chicken behavior and cognition more generally, see Marino (2017) and Nicol (2015).

331.For convenience, I quote below some sections of the paper that describe key neuroanatomical differences in phylogeny, and explain (in brackets) a few especially important but perhaps unfamiliar terms:

One of the most important terms in understanding the evolution of the brain is ‘homology’. Two structures are homologous if they can be traced to a common ancestor… When we seek to understand the evolution of the brain, we identify homologies and draw conclusions about what has changed or remained the same, based on those homologies. For example, the cerebellum of all fishes and tetrapods [amphibians, reptiles, birds, and mammals] is considered to be homologous, because there are similarities of origin, structure and function. In contrast, the cerebral cortex exists in its present form only in mammals…

Among the many phyla of invertebrates, two major groups can be distinguished: those with radial symmetry and those with bilateral symmetry. These groups differ fundamentally in the structure of their nervous systems. Radially symmetrical organisms evolved earlier and their nervous systems are net-like, with rings of neurons and typically no concentration of nerve cells in one place. In contrast, bilaterally symmetrical organisms tend to have concentrations of neurons in the head region that are often called ganglia but resemble the brains of vertebrates…

Bilaterally symmetrical invertebrates are very numerous, and their nervous systems vary from simple to complex…

More complex nervous systems are found in the [flatworms and segmented worms], which have a pair of nerves running the length of their bodies connected at each segment in a ladder-like arrangement and paired head ganglia at the anterior end. Among the most complex brains in invertebrates are the insects and crustaceans… and the octopi… Their nervous systems consist of paired ganglia at each segment and, in the head, fused paired ganglia forming the brain. The brain consists of multiple lobes, as many as 40 in octopi…

The earliest vertebrates still extant today are the jawless fishes, cyclostomes and hagfish. The brains of these animals possess all of the characters of vertebrates, including five divisions of the brain: medulla oblongata and pons (together called the hindbrain), midbrain, diencephalon and telencephalon. (Lampreys lack a cerebellum, and its presence is debated in hagfish, but it is present in all other groups.)…

In all vertebrates, the medulla oblongata is the caudalmost [toward the tail] region of the hindbrain…

Major sensory and motor tracts that connect the brain and spinal cord run through the medulla. These tracts are more extensive in mammals than in nonmammalian vertebrates: nonmammals do not have corticospinal tracts (because they have no neocortex) although some descending tracts run from the telencephalon to at least the hindbrain in birds…

Whether the cerebellum exists in hagfish is debated, and it is absent in lampreys…, but appears as a well-developed structure in chondrichthyes, the cartilaginous fishes, and in all gnathostomes (jawed vertebrates). It has been proposed that the cerebellum evolved by duplication of cerebellum-like structures in the dorsolateral wall of the hindbrain in cartilaginous fishes, which receive input from the lateral line system and the electrosensory system…

A cerebellum with similar cells and circuits and layers is present in other vertebrates, but has expanded independently in the taxa with elaborate cerebella. Sharks, fishes and birds, as well as mammals, possess an elaborate cerebellum with multiple lobes. In each group and also in the simpler cerebellums of cyclostomes, amphibians and nonavian reptiles, a cerebellar cortex is found, with the same three layers found in mammals…

The roof of the midbrain forms sensory centres in all vertebrates. In mammals, these are termed the superior (vision) and inferior (auditory) colliculi. The homologous structures in nonmammals are the optic tectum and the torus semicircularis, respectively. The optic tectum or superior colliculus is laminated, and it receives retinal input to its superficial layers and auditory, somatosensory, and where present, electrosensory input to its deep layers. In animals that rely heavily on vision, such as birds, however, the optic tectum is larger and more elaborate, suggesting that its most important functions are with vision…

The forebrain (diencephalon and cerebral hemispheres) is the most diverse portion of the vertebrate brain. Certain general principles apply to its organisation, but wide variation in structure is seen. In all vertebrates, the diencephalon consists of four divisions from dorsal to ventral: the epithalamus, dorsal thalamus, ventral thalamus and hypothalamus. The dorsal thalamus and, in some organisms, the ventral thalamus and/or hypothalamus relay sensory information to the telencephalon. The epithalamus and the hypothalamus function in the regulation of visceral functions, including reproduction, circadian rhythms, and sleep and waking…

The cerebral hemispheres, or telencephalon, in all vertebrates can be divided into dorsal [toward the animal’s back] and ventral [toward the animal’s front] parts, called pallial and subpallial. In mammals, pallial regions can be subdivided into the olfactory bulb and associated cortex (lateral pallium), neocortex (dorsal pallium), hippocampus (medial pallium), and claustrum and pallial amygdala (or ventral pallium). Subpallial regions become the basal ganglia and some limbic regions. In most nonmammals, the medial pallium is recognised as the equivalent of the hippocampus. The lateral pallium is also recognised as the olfactory cortex. But the identity of the dorsal pallium and the equivalent structures is controversial.

…

The basal ganglia can be recognised in all vertebrates. In lampreys, equivalences to all major components of the basal ganglia can be identified: the striatum, the globus pallidus, the subthalamic nucleus and the substantia nigra pars compacta, although identification of the globus pallidus is not yet conclusive… In cartilaginous fishes, a structure termed the area periventricularis ventralis is thought to be equivalent to the dorsal striatum, and a nucleus superficialis basalis may correspond to the ventral striatum. In actinopterygian fishes, it is also believed that there are structures corresponding to the dorsal and ventral striatum. In amphibians, a dorsal and ventral striatum can be recognised, and a nucleus that is believed to be pallidal, called the entopeduncular nucleus. In nonavian reptiles and birds, the dorsal striatum is called the lateral striatum, and the ventral striatum is the medial striatum. A major difference in basal ganglia among vertebrate groups is that a major output in mammals goes to the thalamus, which projects back to the cortex, whereas in nonmammals, and especially in anamniotes, the basal ganglia project downstream and modulate the downstream motor pathways. Mammals have descending pathways as well, but the loops involving the cortex predominate…

The cerebral hemispheres of mammals appear to be very different from those of nonmammalian vertebrates because all mammals have cerebral cortex, a layered structure on the surface of the brain. Nonmammalian amniotes have some cortical structures, equivalent to the olfactory cortex (lateral pallium) and hippocampus (medial pallium) of mammals, but these are made up of only three layers. The structure that is different in mammals is the neocortex (dorsal pallium), which is extensive and is made up of six interconnected layers of cells…

There is no general agreement about the homologies of the cell populations of the pallium in mammals and nonmammals. Two major positions are represented at the present time, one called the neocortex hypothesis…, the other called the claustroamygdaloid hypothesis … The neocortex hypothesis is that the cell populations of the pallium in nonmammals, which are arranged in nuclei, are homologous with populations of cells in the neocortex of mammals, which are arranged in layers. The claustroamygdaloid hypothesis is that the cell populations of the pallium in nonmammals are homologous with cell populations in the amygdala and claustrum of mammals…

332.My sources for the information in this table are Powers (2014) and T.M. Preuss’ chapter “Primate Brain Evolution” in Kaas (2009).

333.See M.C. Corballis’ chapter “The Evolution of Hemispheric Specializations of the Human Brain” in Kaas (2009).

334.T.M. Preuss, in his chapter “Primate Brain Evolution” (Kaas 2009, ch. 35), reviews the debate:

Among mammals, only primates have a region of cortex with a well-developed granular layer on the dorsolateral surface of the frontal lobe (Brodmann, 1909). The region is present in all primates that have been examined… Owing in part to the influence of Brodmann, the granular dorsolateral prefrontal cortex initially came to be regarded as a hallmark of the primate brain. The fact that some neurologists in the early part of the twentieth century regarded this region as the seat of higher-order cognitive functions reinforced this view. Modern experimental studies in nonhuman primates… reveal it to have strong connections with the higher-order parietal and temporal areas discussed above, and functional studies in humans and nonhuman primates indicate that different parts of the granular frontal cortex are involved in attention, working memory, and planning…

The idea that dorsolateral prefrontal cortex is special to primates has, nevertheless, been challenged (see the reviews of Preuss, 1995a, 2006). With the introduction of the first generation of techniques for studying cortical connectivity (lesion-degeneration techniques), it became clear that the cortical regions differed in their patterns of connectivity as well as their histology. Early research on the forebrain connections of the cortex focused on connections with the thalamus because cortical lesions produce degeneration in thalamic nuclei that project to them; most other connections could not be reliably resolved until improved methods became available in the 1970s. Rose and Woolsey (1949) championed the idea that regions of cortex could be defined by the thalamic nuclei that projected to them. As the dorsolateral prefrontal cortex, the largest prefrontal region in primates, receives its major thalamic inputs from the mediodorsal thalamic (MD) nucleus, prefrontal cortex came to be defined as MD-projection cortex (Rose and Woolsey, 1948). As it happens, all mammals that have been examined have a MD nucleus and a cortical territory to which it projects, so by this reasoning, all mammals possess a homologue of dorsolateral prefrontal cortex, even though the MD-projection cortex of nonprimates lacks the well-developed granular layer that marks this region in primates (Rose and Woolsey, 1948; Akert, 1964). It was also reported that dopamine-containing nuclei of the brainstem project very strongly to MD-projection cortex in both primates and nonprimates, and this has also been used to identify homologues in different mammals (Divac et al., 1978). Attempts have also been made to refine this analysis by identifying homologues of specific subdivisions of primate dorsolateral prefrontal cortex in nonprimates (Akert, 1964). A region of special interest has been the cortex that lines the principal sulcus of macaques (principalis cortex), because lesions of this region impair performance on spatial working memory tasks, a set of cognitive tasks that have been adapted for use in a wide range of mammals. Using the criteria of MD-projections, dopamine projections, and involvement in spatial working memory tasks, homologues of macaque principalis cortex have been proposed in nonprimate species, and most importantly in rats, which are the most widely used model animals in mammalian neuroscience. In rats, the principalis homologue has usually been localized to the medial surface of the frontal lobe, and some workers have identified it specifically with area 32 (the prelimbic area)…

This might seem a satisfactory account of prefrontal homologies, but there are difficulties with both the evidence and the reasoning (Preuss, 1995a). For one thing, in primates, MD projects not only to the granular, dorsolateral prefrontal frontal cortex, but also to agranular regions, including orbital cortex, the classical anterior cingulate areas (areas 24 and 32 of Brodmann), and even to insular and premotor cortex. For another, while dorsolateral prefrontal cortex receives dopaminergic inputs, the strongest dopamine projections in primates are actually to the motor region and the orbital and medial cortex. Finally, in primates, lesions of the medial frontal cortex, involving the cingulate region and sparing the dorsolateral region, produce impairments on spatial working memory tasks. Thus, none of the features that have been used to identify homologues of granular prefrontal cortex in nonprimates are actually diagnostic of granular prefrontal cortex in primates. In fact, the medial frontal cortex of rodents very closely resembles the agranular parts of the medial frontal cortex of primates on a variety of structural and functional grounds – both are limbic regions, after all. It is true that the medial frontal cortex of rodents resembles primate granular frontal cortex in certain respects, but these are also the ways that the medial frontal cortex of primates resembles the dorsolateral prefrontal cortex of primates; the similarities are not diagnostic. Moreover, primate granular frontal cortex has additional features of areal organization and connectivity that do not match any known region of frontal cortex in any nonprimate mammal (Preuss, 1995a).

On present evidence, then, there are good grounds for concluding that dorsolateral prefrontal cortex is in fact one of the distinctive features of the primate brain.

335.See e.g. Dugas-Ford et al. (2012) on possible neocortex homologs in avian brains. I say “not really,” because even if Dugas-Ford & colleagues are right, it is still not the case that birds have the typical mammalian 6-layer neocortex.

336.The distinction between a telencephalon formed by eversion and a telencephalon formed by evagination is explained succinctly by Powers (2014):

In all vertebrates except actinopterygian fishes [i.e. ray-finned fishes, e.g. trout and nearly all other well-known fishes], the telencephalon developed by evagination, that is, it grew outward, away from the lateral ventricle in all directions. In actinopterygian fishes, however, the telencephalon developed by a different method, called eversion. The roof of the lateral ventricle thinned and stretched to form a membrane over the ventricle on the surface of the hemisphere, and the telencephalon forms a solid mass (with no ventricle). Because of this unique pattern of development, establishment of homologies between the actinopterygian and tetrapod brain is difficult…

337.Though, see Frankish (2012a), where Frankish argues (in different terms) that Carruthers’ account of consciousness may be an example of strong illusionism being “mis-sold” as weak illusionism.

Theories which use a “phenomenal concepts strategy” might in some cases qualify as examples of weak illusionism. For example, Tye (2000), p. 23:

I accept that experiences are fully, robustly physical but I maintain that there is no explanatory gap posed by their phenomenology. The gap, I claim, is unreal; it is a cognitive illusion to which we only too easily fall prey… There aren’t two sorts of natural phenomena — the irreducibly subjective and the objective. The so-called “explanatory gap” derives largely from a failure to recognize the special features of phenomenal concepts. These concepts, I maintain, have a character that not only explains why we have the intuition that something important is left out by the physical (and/or functional) story but also explains why this intuition is not to be trusted.

Sensorimotor theory might also qualify as a weak (or perhaps even strong) illusionist theory. O’Regan & Noë (2001):

In our view, the qualia debate rests on what Ryle (1949/1990) called a category mistake. Qualia are meant to be properties of experiential states or events. But experiences, we have argued, are not states. They are ways of acting. They are things we do. There is no introspectibly available property determining the character of one’s experiential states, for there are no such states. Hence, there are, in this sense at least, no (visual) qualia. Qualia are an illusion, and the explanatory gap is no real gap at all.

It is important to stress that in saying this we are not denying that experience has a qualitative character. We have already said a good deal about the qualitative character of experience and how it is constituted by the character of the sensorimotor contingencies at play when we perceive… Our claim, rather, is that it is confused to think of the qualitative character of experience in terms of the occurrence of something (whether in the mind or brain). Experience is something we do and its qualitative features are aspects of this activity.

…Many philosophers, vision scientists, and lay people will say that seeing always involves the occurrence of raw feels or qualia. If this view is mistaken, as we believe, then how can we explain its apparent plausibility to so many? In order to make our case convincing, we must address this question.

In our view, there are two main sources of the illusion. The first pertains to the unity and complexity of experience. We tend to overlook the complexity and heterogeneity of experience, and this makes it seem as if in experience there are unified sensation-like occurrences. The second source of illusion has to do with the felt presence of perceptible qualities. Because, when we see, we have continuous access to features of a scene, it is as if we continuously represent those features in consciousness…

338.Some of my reasons for wanting to avoid saying things like “phenomenal consciousness is an illusion” or “phenomenal properties are illusory” were expressed by Graziano (2016):

The attention schema theory [Graziano’s theory] has much in common with illusionism. It clearly belongs to the same category of theory, and is especially close to the approach of Dennett (1991). But I confess that I baulk at the term ‘illusionism’ because I think it miscommunicates. To call consciousness an illusion risks confusion and unwarranted backlash. To me, consciousness is not an illusion but a useful caricature of something real and mechanistic…

In my own discussions with colleagues, I invariably encounter the confusion and backlash. To most people, an illusion is something that does not exist. Calling consciousness an illusion suggests a theory in which there is nothing present that corresponds to consciousness. However, in the attention schema theory, and in the illusionism described by Frankish, something specific is present. In the attention schema theory, the real item that exists inside us is covert attention — the deep processing of selected information. Attention truly does exist. Our internal model of it lacks details and therefore provides us with a blurred, seemingly magicalist account of it.

Second, in normal English, to experience an illusion is to be fooled. To call consciousness an illusion suggests to most people that the brain has made an error. In the attention schema theory, and also in the illusionism approach described by Frankish, the relevant systems in the brain are not in error. They are well adapted. Internal models always, and strategically, leave out the unnecessary detail.

Third, most people understand illusions to be the result of a subjective experience. The claim that consciousness is an illusion therefore sounds inherently circular. Who is experiencing the illusion? It is difficult to explain to people that the experiencer is not itself conscious, and that what is important is the presence of the information and its impact on the system. The term illusion instantly aligns people’s thoughts in the wrong direction.

All of the common objections I encounter have answers. They are based on a misunderstanding of illusionism. But the misunderstanding is my point. Why use a misleading word that requires one to backtrack and explain? For these reasons, in my own writing I have avoided calling consciousness an illusion except in specific circumstances, such as the consciousness we attribute to a ventriloquist puppet, in which the term seems to apply more exactly.

Perhaps I am too much of a visual physiologist at heart. To me, an illusion is a mistake in a sensory internal model. It introduces a consequential discrepancy between the internal model and the real world. That discrepancy can cause errors in behaviour. In contrast, an internal model, at all times, with or without an illusion, is an efficient, useful compression of data. It is never literally accurate. Even when it is operating correctly and guiding behaviour usefully, it is a caricature of reality. I am comfortable calling consciousness a caricature, but not an illusion. It is a cartoonish model of something real.

See also Chrisley & Sloman (2016)’s comments on potential terminological distinctions between illusionism, eliminativism, “revisionism,” and “hallucinationism.”

339.In Frankish (2016b), Frankish writes:

Is the illusionist claiming that we are mistaken in thinking we have conscious experiences? It depends on what we mean by ‘conscious experiences’. If we mean experiences with phenomenal properties, then illusionists do indeed deny that such things exist. But if we mean experiences of the kind that philosophers characterize as having phenomenal properties, then illusionists do not deny their existence. They simply offer a different account of their nature, characterizing them as having merely quasi-phenomenal properties. Similarly, illusionists deny the existence of phenomenal consciousness properly so-called, but do not deny the existence of a form of consciousness (perhaps distinct from other kinds, such as access consciousness) which consists in the possession of states with quasi-phenomenal properties and is commonly mischaracterized as phenomenal. Henceforth, I shall use ‘consciousness’ and ‘conscious experience’ without qualification in an inclusive sense to refer to states that might turn out to be either genuinely phenomenal or only quasi-phenomenal. In this sense realists and illusionists agree that consciousness exists.

Do illusionists then recommend eliminating talk of phenomenal properties and phenomenal consciousness? Not necessarily. We might reconceptualize phenomenal properties as quasi-phenomenal ones. Recall Pereboom’s analogy with secondary qualities. The discovery that colours are mind-dependent did not lead scientists to deny that objects are coloured. Rather, they reconceptualized colours as the properties that cause our colour sensations. Similarly, we might respond to the discovery that experiences lack phenomenal properties by reconceptualizing phenomenal properties as the properties that cause our representations of phenomenal feels — that is, quasi-phenomenal properties…

In everyday life… we would surely continue to talk of the feel or quality of experience in the traditional, substantive sense. As subjects of experience, our interest is in how things seem to us introspectively — the illusion itself, not the mechanisms that cause it. Such talk may fail to pick out real properties, but it is not empty or pointless.

When I defined consciousness by example above, the paper I built on was Schwitzgebel (2016), which is a response to Frankish (2016b). In his response to Schwitzgebel’s proposed definition, Frankish replied (Frankish 2016c):

[Schwitzgebel offers] a definition by example, describing a range of uncontentious positive and negative cases and identifying phenomenal consciousness as ‘the most folk-psychologically obvious thing or feature that the positive examples possess and that the negative examples lack’…

I think Schwitzgebel succeeds in identifying an important folk-psychological kind — indeed the very one that should be our focus in theorizing about consciousness…

…He has defined a neutral explanandum for theories of consciousness, which both realists and illusionists can adopt. (I have referred to this as consciousness in an inclusive sense. We might call it simply consciousness, or, if we need to distinguish it from other forms, putative phenomenal consciousness.)

So, my guess is that Frankish would not mind my way of talking, so long as we have some way of distinguishing what the phenomenal realist means by “qualia” and “phenomenal properties” and so on from what the illusionist means by such terms.

340.Indeed, when reading the journal issue linked above, I often found myself wondering what the authors meant by terms like “what it’s like” and “phenomenal properties” and “quasi-phenomenal properties,” and sometimes I couldn’t tell which author I would agree with more if I was able to understand better what each of them was trying to say. Moreover, it seems likely to me that even assuming something like “strong illusionism” about consciousness is correct, our attempts to describe it using the terms and concepts and metaphors we’ve come up with so far will look quite naive and confused 50 years from now. In my view, detailed empirical work and computational modeling could be the most useful inputs, in the long run, to the clarification of illusionist and other hypotheses about consciousness (see my comments starting here). Nevertheless, I will attempt to clarify (what I see as) the illusionist view, using the concepts and metaphors available to me now.

341.Image is figure 0.6 from Basic Vision by Snowden et al. (2012), on p. 8. The image was adapted from the figure on p. 46 of Shepard (1990), which in turn was an elaboration of a figure on p. 298 of Shepard’s chapter “Psychophysical complementarity” in Kubovy & Pomerantz (1981).

342.In this case, we do know some things about why the visual illusion is produced, but reading the relevant studies isn’t necessary for knowing that our perception has been tricked.

343.Grzybowski & Aydin (2007).

344.See e.g. Hansen et al. (2009).

345.Baird (1905), ch. 1.

346.Quote from Heilman (1991).

347.Hartmann et al. (1991).

348.Young & Leafhead (1996).

349.Some of the “illusions” of this sort might be more accurately called “delusions,” but I won’t bother to make this distinction here. See e.g. Blackmore (2016).

350.See e.g. Chalmers (1996), pp. 193-195.

351.Searle (1997), p. 112. Italics modified.

352.For additional clarifications on this point, see Schwitzgebel (2007a), the two papers it links to (Schwitzgebel 2007b; Dennett 2007), and Schwitzgebel (2008), especially this passage from section x:

I sometimes hear the following objection: When we make claims about our phenomenology, we’re making claims about how things appear to us, not about how anything actually is. The claims, thus divorced from reality, can’t be false; and if they’re true, they’re true in a peculiar way that shields them from error. In looking at an illusion, for example, I may well be wrong if I say the top line is longer; but if I say it appears or seems to me that the top line is longer, I can’t in the same way be wrong. The sincerity of the latter claim seemingly guarantees its truth. It’s tempting, perhaps, to say this: If something appears to appear a certain way, necessarily it appears that way. Therefore, we can’t misjudge appearances, which is to say, phenomenology.

This reasoning rests on an equivocation between what we might call an epistemic and a phenomenal sense of “appears” (or, alternatively, “seems”). Sometimes, we use the phrase “it appears to me that such-and-such” simply to express a judgment — a hedged judgment, of a sort — with no phenomenological implications whatsoever. If I say, “It appears to me that the Democrats are headed for defeat,” ordinarily I’m merely expressing my opinion about the Democrats’ prospects. I’m not attributing to myself any particular phenomenology. I’m not claiming to have an image, say, of defeated Democrats, or to hear the word “defeat” ringing in my head. In contrast, if I’m looking at an illusion in a vision science textbook, and I say that the top line “appears” longer, I’m not expressing any sort of judgment about the line. I know perfectly well it’s not longer. I’m making instead, it seems, a claim about my phenomenology, about my visual experience.

Epistemic uses of “appears” might under certain circumstances be infallible in the sense of the previous section. Maybe, if we assume that they’re sincere and normally caused, their truth conditions will be a subset of their existence conditions — though a story needs to be told here. But phenomenal uses of “appears” are by no means similarly infallible. This is evident from the case of weak, nonobvious, or merely purported illusions. Confronted with a perfect cross and told there may be a “horizontal-vertical illusion” in the lengths of the lines, one can feel uncertainty, change one’s mind, and make what at least plausibly seem to be errors about whether one line “looks” or “appears” or “seems” in one’s visual phenomenology to be longer than another. You might, for example, fail to notice — or worry that you may be failing to notice — a real illusion in your experience of the relative lengths of the lines; or you might (perhaps under the influence of a theory) erroneously report a minor illusion that actually isn’t part of your visual experience at all. Why not?

Block (2007b) argues against the idea that we can be wrong about our own subjective experiences, in his discussion of what he calls “hyper-illusions.”

353.Quoted from Schwitzgebel (2011), ch. 3.

354.Dennett fabricated #4.

Dennett lists sources for these phenomena in a footnote, which I’ve reformatted below to match the citation style of this report and to include links:

For the red and green patch, see Crane & Piantanida (1983) and Hardin (1988); for the disappearing color boundary… see Spillmann and Werner (1990); for the auditory barber pole, see Shepard (1964); for the Pinocchio effect, see Lackner (1988)…

355.For extensive debate on these issues, see also Hurlburt & Schwitzgebel (2007) and volume 18, issue 1 of the Journal of Consciousness Studies.

356.For discussion of additional visual illusions, see e.g. Shapiro & Todorovic (2017) and Michael Bach’s website on optical illusions and visual phenomena. On auditory illusions, see the website for Diana Deutsch’s Audio Illusions.

357.Perhaps there are as many answers to this question as there are pairs of realists and illusionists.

358.See also Frankish (2012b).

359.Among the properties of “classic” qualia, the “intrinsic character” property is typically thought to be the most directly incompatible with physicalism. It might also be the most difficult to explain. Because of this, I’ll elaborate here on what is usually meant by the “intrinsic character” of (classic) qualia, even though I do not take the time to elaborate on the meanings of the other properties of classic qualia.

Harman (1990) puts it this way:

…when you attend to a pain in your leg or to your experience of the redness of an apple, you are aware of an intrinsic quality of your experience, where an intrinsic quality is a quality something has in itself, apart from its relations to other things. This quality of experience cannot be captured in a functional definition, since such a definition is concerned entirely with relations, relations between mental states and perceptual input, relations among mental states, and relations between mental states and behavioral output.

Or, here is Weisberg (2014), pp. 54-56:

Theories in physics deal in what we can call “relational” information only. Relational information tells us how things are lawfully connected, how changing one thing changes another according to the physical laws posited by the theory. And then, in turn, we can define physical things like electrons, protons, or quarks in terms of how they fit into this framework of causal connections… on this view to be an electron is just to be the kind of thing that fits into this pattern of causal relations – that plays this “causal role.” We learn with extreme precision from physical theory just what this special electron “role” is. But we don’t learn, so it seems, just what the thing playing the role is, on its own. It’s like learning all we can about what it is to be a goalie: a goalie is the person who defends the goal and can use his or her hands to stop the ball, who can’t touch the ball when it’s passed back by his or her own team, who usually kicks the goal kicks, and so on, without learning who is playing goalie for our team. We know all about the goalie role without learning that it’s Hope Solo or Tim Howard. We learn the role without learning about the role-filler, beyond that he or she plays that role. But there may be other things true of the role-filler beyond what he or she does in the role.

…

Another way to put this is that all physics tells us about is what physical stuff is disposed to do: what physical law dictates will occur, given certain antecedent conditions. In philosophers’ terms, physics only tells us about the dispositional properties of physical stuff, not the categorical base of these dispositions. A paradigm dispositional property is fragility. Something is fragile if it is disposed to break in certain situations. When we learn that a window is fragile, we learn that it will likely break if hit even softly. But there is a further question we can ask about the window or any fragile object. What is it about the makeup of the window that gives it this dispositional property? The answer is that the window has a certain molecular structure with bonds that are relatively weak in key areas. This molecular structure is what we call the categorical base for the window’s fragility. Returning to physics, at its most basic level, all we get is the dispositional properties, not the categorical base. We learn that electrons (or quarks or “hadrons”) are things disposed to do such-and-such, to be repelled by certain charges, to move in certain ways. We don’t learn what’s “underneath” making this happen.

For more on the idea of intrinsicality, see e.g. Marshall (2014, 2016).

For an accessible explanation of what it means for classic qualia to be “subjective” and “ineffable,” see e.g. ch. 1 of Frankish (2005).

360.Though as with all the intuitions reported here, carrying out the “extreme effort” version of my process for making moral judgments could result in a different judgment.

361.And, to be clear: that person would also have greatly impoverished behavior, and wouldn’t talk the way we do about qualia.

362.I borrow the framing of this section from Eliezer Yudkowsky’s “Rescuing the utility function” (2016). An alternate framing might be to say that we don’t know how to translate our value for something in our “manifest image” of the world into an appropriate value for something in the “scientific image” of the world (see Dennett 2017, ch. 4; Sellars 1962).

For an example toy formalization of the “cross-ontology value translation” problem discussed in this section, see de Blanc (2011). See also section 5 of Soares (2015), section 3.2 of Soares (2016), Pärnpuu (2016), and Yudkowsky’s “Ontology Identification” (2016).

For additional accounts of how scientific concepts evolve along with scientific progress, see e.g. Thagard (1992, 1999, 2008), Grisdale (2010), Laureys (2005), Chalmers (2009), and Chang (2004, 2012).

Block (2007b) cites some further examples:

…as theory, driven by experiment, advances, important distinctions come to light among what appeared at first to be unified phenomena (see Block & Dworkin 1974, on temperature; Churchland 1986; 1994; 2002, on life and fire).

363.In reality, things are a bit more complex than water = H2O, both scientifically and historically (Chang 2012), but for the purposes of this illustration please assume the simple view that water = H2O.

364.I’m not aware of any real-world water-lovers, but there do seem to be some real-world life-lovers. For example Albert Schweitzer (see Warren 1997, ch. 2) and Mautner (2009).

365.Clegg (2001); Kitano (2007).

366.Of course if you count the reproduction of single-celled organisms by cell division as defining a “life span,” then many single-celled organisms live for mere hours.

367.Schulze-Makuch & Irwin (2008), pp. 10-12:

By the traditional definition viruses are not considered living entities because they cannot reproduce and grow by themselves and do not metabolize. Nevertheless, they possess a genetic code that enables them to reproduce and direct a limited amount of metabolism inside another living cell. They thus fulfill the traditional criteria only part of the time and under special circumstances… Since viruses presumably evolved from bacteria that clearly are alive, do they represent a case in which a living entity has been transformed to a non-living state by natural selection? Or, alternatively, if viruses were indeed the precursors of the three domains of life (Archaea, Bacteria, and Eukarya) as recently suggested…, where would we draw the line between life and non-life? If we accept the proposition that viruses are not alive, how would we consider parasitic organisms or bacterial spores? Parasites cannot grow by themselves either and spores remain in dormant stages with no dynamic biological attributes until they become active under favorable environmental conditions. Thus, if we consider parasites or bacterial spores to be alive, the logical consequence would be to consider viruses alive as well.

Consider also the multi-part virus reported in Ladner et al. (2016).

368.Schulze-Makuch & Irwin (2008), pp. 8-9:

…just as cells grow in favorable environments with nutrients available, inorganic crystals can grow so long as ion sources and favorable surroundings are provided. Furthermore, just as the development of living organisms follows a regulated trajectory, so does the process of local surface reversibility regulate the course of silicate or metal oxide crystals that grow in aqueous solutions (Cairns-Smith 1982).

…The visible consequence of reproduction in living organisms is the multiplication of individuals into offspring of like form and function. Mineral crystals do not reproduce in a biological sense, but when they reach a certain size they break apart along their cleavage planes. This is clearly a form of multiplication. The consequence of biological reproduction is also the transmission of information. Biological information is stored in the one-dimensional form of a linear code (DNA, RNA), that, at the functional level, is translated into the 3-dimensional structure of proteins. Prior to multiplication, the one-dimensional genetic code is copied, and complete sets of the code are transmitted to each of the two daughter cells that originate from binary fission. An analogous process occurs in minerals, where information may be stored in the two-dimensional lattice of a crystal plane. If a mineral has a strong preference for cleaving across the direction of growth and in the plane in which the information is held (Cairns-Smith 1982), the information can be reproduced.

Note that I have not checked Schulze-Makuch & Irwin’s account for accuracy, as the exact details don’t matter much to my story.

369.Antony (2008) argues in favor of a sharp dividing line (of a certain sort) between the conscious and the non-conscious, but he thinks most theorists disagree with him:

Not all theorists agree [with me] that there can be no borderline conscious states or creatures. In fact most think there can be, and are… In fact, most philosophers working on consciousness think borderline conscious states and creatures must be possible. That is because most philosophers working on consciousness reduce consciousness to complex physical (e.g., neurophysiological) or functional properties, and concepts for such properties are vague.

Papineau (1993) presents this “fuzzy” view of consciousness this way (pp. 123-126):

…any physicalist account of consciousness is likely to make consciousness a vague property. In the next section I shall argue that questions of consciousness may not only be vague, but quite arbitrary, in application to beings unlike ourselves…

The point about vagueness is suggested by the analogy with life. If life is simply a matter of a certain kind of physical complexity — the kind of complexity that fosters survival and reproduction, as I put it above — then it would seem to follow that there is no sharp line between life and non-life. For there is nothing in the idea of such physical complexity to give us a definite cut-off point beyond which you have enough complexity to qualify as alive. Rather as with baldness, or being a pile of sand, we should expect there to be some clear cases of life, and some clear cases of non-life, but a grey area in between where there is no fact of the matter. And of course this is just what we do find. While there is no doubt that trees are alive and stones are not, there are borderline cases in between, like viruses, or certain kinds of simpler self-replicating molecules, where our physicalist account of life simply leaves it indeterminate whether these are living beings or not.

But now, if consciousness is like life, we should expect a similar point to apply to consciousness. For any physicalist account of consciousness is likely to make consciousness depend similarly on the possession of some kind of structural complexity — the kind of complexity which qualifies you as having self-representing states, say, or short-term memories. Yet any kind of such complexity is likely to come in degrees, with no clear cut-off point beyond which you definitely qualify as conscious, and before which you don’t. So we should expect there to be borderline cases — such as the states of certain kinds of insects, say, or fishes, or cybernetic devices — where our physicalist account simply leaves it indeterminate whether these are conscious states or not.

…I think that the physicalist approach to consciousness is correct. So I reject the intuition that there is a sharp line between conscious and non-conscious states…

It would be a mistake to conclude from this, however, that consciousness is unimportant or unreal. Any number of genuine and important properties are vague. Consider the difference between being elastic or inelastic, or between being young or old, or, for that matter, between being alive and not being alive. All these distinctions will admit indeterminate borderline cases. But all of them involve perfectly serious properties, properties which enter into significant generalizations, are explanatorily important, and so on.

Dennett (1995) concurs:

The very idea of there being a dividing line between those creatures “it is like something to be” and those that are mere “automata” begins to look like an artifact of our traditional presumptions… Consciousness, I claim, even in the case we understand best — our own — is not an all-or-nothing, on-or-off phenomenon. If this is right, then consciousness is not the sort of phenomenon it is assumed to be by most of the participants in the debates over animal consciousness. Wondering whether it is “probable” that all mammals have it thus begins to look like wondering whether or not any birds are wise or reptiles have gumption: a case of overworking a term from folk psychology that has lost its utility along with its hard edges.

Also see Tye (2000), pp. 180-181:

Some philosophers will no doubt respond that the boundary between the creatures that are phenomenally conscious and those that are zombies cannot be blurry. Conscious experience or feeling is either present or it isn’t… [But] it seems to me that we can make sense of the idea of a borderline experience. Suppose you are participating in a psychological experiment and you are listening to quieter and quieter sounds through headphones. As the process continues, a point may come at which you are unsure whether you hear anything at all. Now it could be that there is still a fact of the matter here (as on the dimming light model); but, equally, it could be that whether you still hear anything is objectively indeterminate. So, it could be that there is no fact of the matter about whether there is anything it is like for you to be in the state you are in at that time. In short, it could be that you are undergoing a borderline experience.

See also Unger (1988), ch. 7 of Papineau (2002), Papineau (2003), the sources cited in footnote 6 of Antony (2008), Simon (2016), and Fazekas & Overgaard (2016).

370.To be clear: I’m not just saying that I expect there to be a wide variety of conscious experiences. That much is already obvious: consider the variety of subjective experiences and consciousness-related mechanisms revealed by psychedelic experiences, mystical experiences, hallucinations, sensory substitution, synesthesia, akinetic mutism, lucid dreams, out of body experiences, absence seizures, hypnosis, sensory agnosias, pain asymbolia, delirium, split-brain patients, fused-cranium conjoined twins who can experience their co-twin’s sensations, and other phenomena (see Appendix Z.2). These phenomena reveal dimensions of human consciousness that can vary from person to person or from moment to moment, and which we might morally value independently of each other.

Instead, what I’m saying is that I don’t expect there to be a clear dividing line between beings that have no conscious experience at all and things that have any conscious experience whatsoever, and nor do I expect there to be a clear dividing line between processes (within a given individual) that are “conscious” and “not conscious.” This is what I mean when I say that consciousness is “fuzzy.”

371.The other two major options I considered were:

Rather than investigating the likely distribution of “consciousness,” I could have investigated the likely distribution of better-characterized phenomena instead, such as neural nociception, goal-directed behavior, bodily self-awareness, long-term memory, or some collection of such attributes. But, since I don’t know whether any collection of these things is sufficient for phenomenal consciousness of any sort, I might not have learned much, from such an investigation, about which beings are conscious, or about which beings I would judge to be moral patients due to their consciousness.

I could have characterized in some detail the varieties of “consciousness” I do and don’t intuitively morally value, and then investigated the likely distribution of those roughly-sketched varieties of consciousness. But at that level of detail, my moral intuitions might have been fairly uncommon, and thus my findings might not have been of much value to others, including key decision-makers at the Open Philanthropy Project. Moreover, since the thing I (most confidently) intuitively value is still “subjective experience” of a certain sort, and since I don’t know which physical processes are sufficient for subjective experience of that sort, it would have remained difficult to know what I was looking for in the animal kingdom and other taxa, even after using my moral intuitions to identify the types of “consciousness” I’d be looking for. Also, extracting evidence about the distribution of those types of consciousness from the relevant scientific literature would have required extra work, since there is no literature explicitly investigating “evidence of types of consciousness Luke Muehlhauser intuitively morally values,” but there are substantial literatures explicitly investigating the likely distribution of phenomenal consciousness (defined roughly as above).

372.I suspect this is true for other writers on consciousness as well, and in many cases I think I would understand their reasoning better if I knew more about their moral intuitions, even if their reasoning doesn’t explicitly appeal to their moral intuitions.

373.See also Unger (1988).

374.Case report #3: A 64-year-old housewife

This case is also from Bogousslavsky et al. (1991):

A 64-year-old housewife was in good health until she was admitted to hospital after she lost consciousness for 5 min and subsequently complained that she was unable to look downwards… According to her family, the patient’s mood had changed completely after her stroke. Previously, she was active, she liked jokes, and enjoyed being with her grandchildren, but she had become indifferent, did not manifest emotions and did not laugh anymore. Though she was not sleepy, she would stay in bed for the whole morning, unless the nurse would come and ask her to get up and wash herself. She spoke very little spontaneously and did not join conversations with other patients, except to answer questions. She did not seem to be interested in anything and did not enjoy or criticize the meals, though she used to be an expert cook. Unless stimulated by the staff or her family, she did not do anything spontaneously, except sitting in front of the television for hours, going to the bathroom, and browsing among magazines, but without apparent curiosity. She remembered perfectly what she had seen on television or in the magazines, but without expressing any particular interest in any item. She did not have any projects for the future and did not report personal thoughts. This passive behaviour contrasted with her ability to perform usual daily activities when stimulated and ordered by another person. She could sew, knit, play dominoes and go shopping when assisted by her daughter, who had to tell her what to do next. However, she did not show imitation and utilization behaviour, and even after a rather automatic activity (like knitting), she showed no perseverations.

Neuropsychological examination was done 2 and 17 days after stroke. The patient was not disorientated or distracted, but tended to stop doing the tests unless constantly stimulated. With such help, she collaborated well with the examiner. Naming, repetition, comprehension, reading, writing, Poppelreuter, and recognition of objects and faces were normal. [Verbal and visual] learning was unimpaired. She made no mistake but some self-corrected perseverations on Luria’s conflicting tasks and, on Stroop’s test, naming was slightly slowed… On Wisconsin card sorting test, she found 4 clues out of 6. [Editor’s note: see the paper for sources describing these tests.]

The patient was discharged 3 weeks after stroke to her daughter’s home. On follow-up phone calls 2 and 4 months later, the daughter reported that the apathy and indifference had slightly improved, but that the patient still needed external stimulations to accomplish her daily activities. Under this condition, the patient was able to help cleaning the house and to cook; she did not seem, however, to have feelings of happiness or sadness.

Case report #4: Mr. V

From Laplane et al. (1984):

In 1968, Mr V, a 41-year-old healthy man was stung on the left arm by a wasp. He immediately sustained a convulsive coma for 24 hours, then adopted intensive choreic [brief, irregular, “dance-like”] movements (which were alleviated by thioproperazine), and impairment of gait. These… symptoms diminished over several months. Then, and during the twelve following years he appeared to be a mild dement. He was evaluated by us in 1980, and was at this time not receiving drugs. All his activities were dramatically reduced. He spent many days doing nothing, without initiative or motivation, but without getting bored. The patient described this state as “a blank in my mind.” His affect was disturbed. When talking about family problems, sad or pleasant, he had an appropriate behaviour and gave signs of normal interest, but this attitude did not last and he became rapidly indifferent again. His fantasy life was poor, but dreaming was preserved. But when stimulated by external events, or more specially by another person, he could perform quite correct complex tasks (for example, playing bridge). This fact was dramatically demonstrated by neuropsychological tests which showed intellectual capacities within the normal range…

Two years after the encephalopathy, he began to show stereotyped activities. The most frequent consisted in mental counting, for example, up to twelve or a multiple of twelve, but sometimes it was a more complex calculation. Such mental activities sometimes were accompanied by gestures, such as a finger pacing of the counts. To switch on and off a light for one hour or more was another of his most common compulsions. When asked about this behaviour he answered that he had to count… that he could not stop it… as that it was stronger than him… Once he was found on his knees pushing a stone with the hands; he gave the explanation that he must push the stone, and he used the hands because he experienced some difficulties in skilled movements with his legs. There was however no anxiety, and in his past history there was no suggestion of an obsessional neurosis. Personality evaluation was normal…

Neurological examination showed abnormal movements. At the time of examination in 1980 choreic movements were very mild but voluntary movements were often brisk. He had a permanent facial rictus [a curl of the upper lip, as in an expression of contempt] with some facial or mandibular movements somewhat resembling tics. With his finger movements it was difficult to distinguish between involuntary or “voluntary” activity associated with mental counting. Walking was a mixture of Parkinsonism and choreic disturbances.

Several drugs were systematically tried. Dopaminergic agents (agonist and antagonist), serotoninergic, cholinergic, noradrenergic and benzodiazepines were used. Most of this drugs had no or mild effects on the patient’s symptomatology. Then, clomipramine was given up to 250 mg/day (under cardiovascular supervision) and this drug induced a dramatic improvement. For the first time for twelve years the patient was able to take the initiative to drive a car, and to initiate talking. Speech fluency reduction and stereotyped activities, however, remained.

The patient died suddenly from massive inhalation of food.

Case report #5: Mr. D

Also from Laplane et al. (1984):

Mr D was 23-years-old when he sustained in November 1979 carbon monoxide poisoning… He was examined for the first time by us in January 1980. Neurological examination was normal except for intellectual performance; memory and verbal fluency seemed deeply disturbed. The patient was examined again one year later. He had still severe disorders of memory and verbal fluency. However, intellectual performances were dramatically improved. He could perform complex tasks correctly and solve problems. But these faculties were largely underused. In the absence of external stimulation he lay for hours, eyes open, doing nothing. The contrast between his intellectual capacities and this inactivity was obvious in all aspects of his life. He talked only if asked, he took part in sport (he was a sports coach) only if stimulated by his wife, he went to visit friends only if invited by a phone call, and so on. His only spontaneous activity was of a routine nature like going out and getting the newspaper. His affect was impaired in the same way. If asked about the recent death of someone he cared about he cried sincerely, but if asked about recent events of his life he forgot the death and talked only about some political news. Stereotyped activities were not reported spontaneously by the patient, nor by his wife. But when questioned on this point, he admitted counting when he was alone with nothing to do; he counted from 1 to 20 again. To stop it he had to go out, or watch TV. This purely mental activity did not give him anxiety nor did its withdrawal. Besides this activity, there was no sign of an obsessional neurosis. Neurological examination was normal…

Case report #6: Mr. P

Also from Laplane et al. (1984):

Mr P, born in Russia in 1911 and living in France since 1933, suffered in March 1970 carbon monoxide poisoning. There was a short coma followed by several days of headache and confusion. This man, whose profession was that of an artistic painter but who also did the job of messenger, was unable to re-start his work. He attended for neurologic consultation in April 1970, after being fired from his job because he was too slow. He had… a resting tremor…. [Impairment of voluntary movement] was obvious and verbal fluency was markedly reduced. Intellectual processes were slow and the whole picture tended to give the impression of mental deterioration. He was, however, able to perform complex tasks on request. He was institutionalised and his status improved during the first year… At first unable on admission to conceive and execute basic drawings, one year later he could produce elaborate and artistic drawings and paintings. The most striking feature lay in his dramatic passivity, his lack of initiative despite the fact that his motor and mental capacities were largely preserved. He stayed in a ward, spending most of the time doing nothing and he never attempted to leave hospital. If questioned on this point, he answered that he didn’t know, or that an authorisation to leave was required, that he had no family anyway, he didn’t feel bored. His lack of initiative was, however, not total since he had some spontaneous activities, such as helping the other patients in eating and shaving, and sometimes he watched TV, or read the newspaper. He went out to walk in the park only if he had been actively encouraged. He was able to perform artistic paintings; but for years he painted the same landscape of moors and fens, and this several dozens of times. His affect was poor in relation to his solitude — when questioned about his biography, he evoked spontaneously with sadness the death of his mother or brother.

Case report #7: A 58-year-old salesman

From Engelborghs et al. (2000):

During a [test of how blood flows through one’s arteries], this 58-year-old right-handed patient suddenly… [entered a] coma…

…Consciousness was fully recovered 3 days after stroke.

The patient’s behavior had dramatically changed. Instead of the active man he used to be, he had become an apathetic, passive, and indifferent person who seemed to have lost all emotional concern and initiative. His affect was clearly blunted. The patient remained indifferent to visitors or when he received gifts. He did not show any concern for his relatives or his illness and manifested no desire, no complaint, and no concern about the future. The patient, who used to be a successful traveling salesman, had entirely lost concern about his business. There was a striking absence of thoughts and spontaneous mental activity. He rarely spoke spontaneously and took no verbal initiative. When asked about the content of his thoughts, the patient claimed he had none, suggesting a state of mental emptiness. Unless encouraged by the hospital staff or his relatives, he did not initiate any activity. When external stimulation disappeared, any induced activity was immediately interrupted. Every morning, the patient stayed in bed until he was encouraged to rise and get dressed. Once dressed, he returned to bed again or sat down in an armchair for the entire day. He moved very little unless urged to do so and reverted to his habitual state of athymia [lack of emotion] once he was left alone. There were no symptoms of depression. No stereotyped activities were observed.

…Two weeks after onset, a neurolinguistic examination revealed [language impairments]. Language functions progressively improved. Six months after onset, the neurolinguistic profile disclosed only a reduced verbal fluency.

In the acute phase, the Wechsler Adult Intelligence Scale (WAIS) reflected a generalized cognitive dysfunction (global IQ of 78). Concentration, sustained attention, and frontal problem solving were impaired. A general memory disorder was objectified by means of the Wechsler Memory Scale (revised). The remainder of the neuropsychological examination was within normal limits.

During the next 12 months, a systematic increase in the WAIS scores showed a complete recovery of intelligence levels (global IQ of 114). Although memory, concentration, and problem-solving capacity improved, the patient did not regain his premorbid levels. Despite intensive cognitive rehabilitation, it was impossible to re-engage the patient in his former professional activities because the athymic syndrome remained unchanged.

375.Arguably, Tye’s PANIC theory doesn’t imply that dorsal stream processing should be conscious, since dorsal stream processing seems to impact behavior without impacting “beliefs” or “desires.” But one can easily imagine a very similar first-order theory which does imply that dorsal stream processing should be conscious.

376.Two example responses that are unpersuasive to me are summarized by Carruthers (2016):

What options does a first-order theorist have to resist this conclusion? One is to deny that the data are as problematic as they appear (as does Dretske 1995). It can be said that the unconscious states in question lack the kind of fineness of grain and richness of content necessary to count as genuinely perceptual states. On this view, the contrast discussed above isn’t really a difference between conscious and unconscious perceptions, but rather between conscious perceptions, on the one hand, and unconscious belief-like states, on the other. Another option is to accept the distinction between conscious and unconscious perceptions, and then to explain that distinction in first-order terms. It might be said, for example, that conscious perceptions are those that are available to belief and thought, whereas unconscious ones are those that are available to guide movement (Kirk 1994).

Another reply I found unpersuasive, this time responding specifically to the proposed counterexample of blindsight, is given by Tye (2000), p. 63:

If their reports are to be taken at face value, blindsight subjects… have no phenomenal consciousness in the blind region. What is missing, on the PANIC theory, is the presence of appropriately poised, nonconceptual, representational states. There are nonconceptual states, no doubt representationally impoverished, that make a cognitive difference in blindsight subjects. For some information from the blind field does reach the cognitive centers and controls their guessing behavior. But there is no complete, unified representation of the visual field, the content of which is poised to make a direct difference in beliefs. Blindsight subjects do not believe their guesses. The cognitive processes at play in these subjects are not belief-forming at all.

377.One other reply that may have some merit, but which I don’t discuss here, is given by Carruthers (2017):

So the questions posed to first-order theories were these: Why should nonconceptual contents have feel or be like something to undergo when available to central thought processes, while lacking such properties otherwise? How can these differences in functional role confer on one set of contents a distinctive subjective dimension that the other set lacks? In short: how does role (specifically, global broadcasting) create phenomenal character?

The seeming-unanswerability of these questions motivated Carruthers to propose his dual-content theory, according to which one effect of global broadcasting is to make first-order nonconceptual contents available to a higher-order mentalizing or “mindreading” faculty capable of entertaining higher-order thoughts about those experiences. This, when combined with the truth of some or other kind of consumer semantics, was said to add a dimension of higher-order nonconceptual content to the first-order experiences in question. Every globally broadcast experience is then both a nonconceptual representation of the world or body (red, say) and a nonconceptual representation of that experience of the world or body (seeming red, or experiencing red, say). Globally broadcast experiences are thus not just world-presenting but also self-presenting. They thereby acquire a subjective dimension and become like something to undergo. Moreover, only globally broadcast experiences have this sort of dual content, on the assumption that only such experiences are available to the mindreading faculty. Hence the conscious / unconscious distinction can genuinely be explained, it was claimed.

…a first-order theorist needs to say something about why nonconceptual contents should be phenomenally conscious if globally broadcast, but not otherwise. It is obviously true (almost by definition) that global broadcasting renders nonconceptual content access-conscious. But what is it about global broadcasting that renders nonconceptual content phenomenally conscious? Even if one seeks to deny that these concepts pick out distinct properties, something needs to be said to explain why what-it-is-likeness and other properties distinctive of phenomenal consciousness should only co-occur with global broadcasting.

The way forward for first-order theorists, I suggest, is to co-opt the operationalization of phenomenal consciousness first proposed by Carruthers & Veillet (2011)… [namely] that phenomenal consciousness can be operationalized as whatever gives rise to the “hard problems” of consciousness… That is, a given type of content can qualify as phenomenally conscious if and only if it seems ineffable, one can seemingly imagine zombie characters who lack it, one can imagine what-Mary-didn’t-know scenarios for it, and so on. For the very notion of phenomenal consciousness seems constitutively tied to these issues. If there is a kind of state or a kind of content for which none of these problems arise, then what would be the point of describing it as phenomenally conscious nonetheless? And conversely, if there is a novel type of content not previously considered in this context for which hard-problem thought-experiments can readily be generated, then that would surely be sufficient to qualify it as phenomenally conscious.

Once phenomenal consciousness is operationalized as whatever gives rise to hard-problem thought-experiments, however, it should be obvious that the initial challenge to first-order representationalism collapses. The reason why nonconceptual contents made available to central thought processes are phenomenally conscious, whereas those that are not so available are not, is simply that without thought one cannot have a thought-experiment. Only those nonconceptual contents available [via global broadcasting] to central thought are ones that will seem to slip through one’s fingers when one attempts to describe them (that is, be ineffable), only they can give rise to inversion and zombie thought-experiments, and so on. This is because those thought-experiments depend on a distinctively first-personal way of thinking of the experiences in question. This is possible if the experiences thought about are themselves available to the systems that generate and entertain such thoughts, but not otherwise. Experiences that are used for online guidance of action, for example, cannot give rise to zombie thought-experiments for the simple reason that they are not available for us to think about in a first-person way, as this experience or something of the sort. They can only be thought about third-personally, as the experience that guides my hand when I grasp the cup, or whatever.

There is simply no need, then, to propose that dual higher-order / first-order nonconceptual contents are necessary in order for globally broadcast experiences to acquire a subjective dimension and be like something to undergo.

One reason I don’t (yet) find this line of reasoning compelling is that I’m not sure it makes sense to operationalize phenomenal consciousness as “whatever gives rise to the ‘hard problems’ of consciousness.”

378.Of course, I don’t think phenomenology can be dissociated from function in the way Cohen & Dennett (2011) worry it might be. I’m a functionalist, after all. But I don’t see why qualia couldn’t be dissociated from the representations, verbal reports, etc. that I call “me,” and that “I” have introspective access to.

379.In addition to the sources discussed below, see also White (1991), ch. 6; Papineau (2002), section 7.14; Munevar (2012); Schechter (2014).

380.I should also mention that Block’s arguments might even undermine the case for consciousness being as complex as first-order theorists suggest it is.

381.This quote is from Prinz (2012).

382.Marinsek & Gazzaniga (2016).

383.See episode 117 of the Brain Science Podcast with Ginger Campbell.

384.See also notes from my conversation with Derek Shiller.

385.See also notes from my conversation with Aaron Sloman. For a contrary follow-up on the experiments by Pitts et al., see Rutiku et al. (2015).

386.For a shorter explanation of Dennett (1991)’s positive theory of consciousness, see ch. 3 of Thompson (2009).

387.Here is what Prinz (2012), pp. 342-343, says about the distribution question:

…there are ways in which the AIR theory can be applied to answer perennial questions about which creatures are conscious. Are human infants conscious? Are nonhuman animals conscious? It is difficult to answer these questions on the basis of behavior alone. Attention is difficult to distinguish from orienting, and working memory is difficult to distinguish from what ethologists call reference memory, a longer-term storage of objects and their locations (Green and Stanton, 1989). To search for consciousness in infants and animals, it would help to look for the neural signatures of consciousness. In infant brains, connections between areas are underdeveloped (Homae et al., 2010), and this may limit the capacity for allocating attention. The mammalian brain is similar across species, and we do know that gamma activity can be found in rodents (e.g., Vianney-Rodrigues, Iancu, and Welsh, 2011). But there are also differences across mammalian brains. At a cellular level, human visual streams are even subtly different from those of great apes and monkeys (Preuss and Coleman, 2002). We don’t yet know whether these differences have functional implications or implications for consciousness, but given the large numbers of similarities, it is likely that primates and perhaps all mammals are conscious. Many of the experiments cited in developing the AIR theory were performed on mammals, and the extraordinary lessons we have learned from that research may ultimately bear on the ethics of its continuation. What about birds? Their brains are built out of structures that are related to the mammalian subcortex, but these structures have evolved to function like the mammalian neocortex, resulting in working-memory and attention capacities that look surprisingly similar to our own (Güntürkün, 2005; Kirsch et al., 2008; Milmine, Rose, and Colombo, 2008). Cephalopods engage in strategic hunting behavior, and that may require attending to and briefly storing information about the spatial locations of their prey. The brain mechanisms of octopus working memory have been explored (Shomrat et al., 2008). There has also been work on the neural mechanisms of short-term memory in crabs (Tomsic, Berón de Astrada, and Sztarker, 2003). Pushing things even farther, there are studies of attention in fruit flies, and their capacity to attend is linked to genes that allow for short-term memory (van Swinderen, 2007). Obviously, there is also great variation between us and these other creatures. Given the doubts raised about multiple realizability in chapter 9, it wouldn’t be surprising to discover that only mammals have what it takes to be conscious on the AIR theory. But there is also astonishing continuity across animal phyla, so it is important to investigate the extent of similarity with respect to the mechanisms of consciousness. Then we can decide whether we can eat octopuses with impunity.

388.Chapter 5 is “Hemispheric interactions and specializations: insights from the split brain” (pp. 103-120) by Margaret Funnell, Paul Corballis, and Michael Gazzaniga.

389.On the potential implications of such “mindmelding” for arguments about the privacy of consciousness, see Hirstein (2012).

390.Locked-in syndrome is included in this table because, even though people cannot give verbal reports while they are locked in, there are cases in which people have provided verbal reports of their experiences during locked-in syndrome after recovering from the locked-in state.

391.If terminal lucidity is a genuine phenomenon, it might not be particularly interesting as a variation in phenomenal experience, but it might provide a (fleeting) opportunity to ask the patient what their subjective experience of less-lucid states of consciousness was like for them. Of course, less-fleeting opportunities are also available in cases of normal recovery from disorders of consciousness and other varieties of conscious experience, and these cases are far more numerous.

392.I asked Merker via email, in August 2016, whether he thought there are likely to be any cases of hydranencephalics who are able to describe their subjective experiences. He said that such cases “are not likely to exist as far as I am able to tell. These children – of any age – are utterly without anything that may be called human language. When a minority of [caregivers] answer [a survey question about word use] in the affirmative, they are referring to things like one or a few sounds that the child uses less than randomly, say ‘ba’ when the mother is present. These may be associatively learned vocalizations, reinforced by delighted caregiver reactions, but do not amount to anything even close to propositional language. The cognitive state of these children is one of profound dementia…”

393.Quoted from Poe (2014), pp. 369-370. I borrow the Poe example from Eliezer Yudkowsky.

394.My own replies to these arguments are similar to those made by e.g. Dennett (1991); Carruthers & Schier (2017).

395.LeDoux (2015), ch. 2, provides one example:

In June 2014, a psychology website’s headline read: “Fear Center in Brain Larger Among Anxious Kids.” The story that followed described a study that measured the level of anxiety in a large group of children based on a questionnaire answered by their parents. The brains of these children were then imaged and the findings related to the parents’ assessments. The results showed that the larger the amygdala of the child, the higher the level of anxiety rated by the parents. Let’s consider what this actually means. In this study the parents did what animal researchers often do: They based a conclusion about anxiety, an inner feeling, on observations of behavior — their child seemed nervous, edgy, or had trouble concentrating or sleeping. Thus, although the size of the amygdala might well correlate with certain behaviors, whether it was related to feelings of anxiety was not tested. The website’s headline was inaccurate in three respects: (1) What was being measured was behavioral activity, not the feeling of anxiety; (2) the kids were not anxious in the clinical sense, in spite of some being described as “anxious” in the story; and (3) the amygdala is not the fear center (and certainly not the anxiety center) if by fear or anxiety we mean a conscious feeling.

396.My usage is inspired by the proposal in LeDoux (2015), ch. 2:

When scientifically discussing fear and anxiety, we should let the words “fear” and “anxiety” have their everyday meaning — namely, as descriptions of conscious experiences that people have when threatened by present or anticipated events. The scientific meaning will obviously go deeper and be more complex than the lay meaning, but both will refer to the same fundamental concept. In addition, we should avoid using these words that refer to conscious feeling when discussing systems that nonconsciously detect threats and control defense responses to them.

Thus, rather than saying that fear stimuli activate a fear system to produce fear responses, we should state that threat stimuli elicit defense responses via activation of a defensive system. Because “threat” and “defense” are not terms derived specifically from human subjective experiences, using them would go a long way toward making it easier to distinguish brain mechanisms underlying the conscious feeling of being afraid or anxious from mechanisms that detect and respond to actual or perceived danger. Similarly, what we now call fear conditioning can simply be called what it is: threat conditioning. So, in place of “fear CSs” and “fear CRs,” we can refer instead to “threat CSs” and “defensive CRs”…

397.For evidence and argument on the topic of unconscious emotions, see LeDoux (2015); Keltner et al. (2013), ch. 7; Amting et al. (2010); Dawkins (2015); Rose et al. (2014); Barrett et al. (2005); Hofree & Winkielman (2012); Lewis (2013); Rolls (2013), ch. 10; Berridge & Winkielman (2003); Winkielman & Berridge (2004); Berridge & Kringelbach (2016); Sato & Aoki (2006); Tamietto & de Gelder (2010); Hatzimoysis (2007); Anselme & Robinson (2016); Dawkins (2017).

398.For a “network theory of well-being” that integrates both subjective and objective factors, see Bishop (2015). Lin (2015) proposes a subjective list theory of well-being. See Daswani & Leike (2015) for a suggested formal definition of well-being for reinforcement learning agents, and see Oesterheld (2016) for an example preliminary formalization of preference satisfaction. For an example neural model of monetary subjective well-being in humans, see Rutledge et al. (2014).

399.See e.g. the discussion of intensity and other factors in Fazekas & Overgaard (2016).

400.See e.g. Tomasik (2016a), Lee (2014), Phillips (2014), and the following points from Hanson (2016), ch. 6, about processing speed and body size:

The natural oscillation periods of most consciously controllable human body parts are greater than a tenth of a second. Because of this, the human brain has been designed with a matching reaction time of roughly a tenth of a second. As it costs more to have faster reaction times, there is little point in paying to react much faster than body parts can change position.

…the first resonant period of a bending cantilever, that is, a stick fixed at one end, is proportional to its length, at least if the stick’s thickness scales with its length. For example, sticks twice as long take twice as much time to complete each oscillation. Body size and reaction time are predictably related for animals today… [Healy et al. (2013)]

In other words, there is reason to expect that smaller species tend to have shorter neural oscillation periods, and thus smaller animals may tend to have “more” subjective experience per objective second than larger animals do.
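
As a toy illustration of that scaling claim, here is a minimal sketch of my own (it is not drawn from Hanson (2016) or Healy et al. (2013), and the reference limb length and reaction time are assumed purely for illustration):

```python
# A toy illustration (my own, not from Hanson 2016 or Healy et al. 2013) of the
# scaling claim quoted above: if a limb's first resonant period is roughly
# proportional to its characteristic length, and reaction time is tuned to
# match that period, then smaller bodies support proportionally more
# behaviorally useful "ticks" per objective second.

def resonant_period(length_m, reference_length_m=1.0, reference_period_s=0.1):
    """Assumed linear scaling: period is proportional to length."""
    return reference_period_s * (length_m / reference_length_m)

for label, length_m in [("human-scale limb", 1.0),
                        ("cat-scale limb", 0.25),
                        ("mouse-scale limb", 0.05)]:
    period_s = resonant_period(length_m)
    print(f"{label}: ~{period_s * 1000:.0f} ms per oscillation, "
          f"~{1 / period_s:.0f} reaction 'ticks' per second")
```

On this crude model, halving limb length halves the reaction-time “tick,” doubling the number of such ticks per objective second.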

For more on some philosophical issues relating to temporal experience, see Phillips (2017), which I have not yet consulted myself.

401.Here is Bayne on subject unity:

My conscious states possess a certain kind of unity insofar as they are all mine; likewise, your conscious states possess that same kind of unity insofar as they are all yours. We can describe conscious states that are had by or belong to the same subject of experience as subject unified. Within subject unity we need to distinguish the unity provided by the subject of experience across time (diachronic unity) from that provided by the subject at a time (synchronic unity).

On representational unity, Bayne says:

Let us say that conscious states are representationally unified to the degree that their contents are integrated with each other. Representational unity comes in a variety of forms. A particularly important form of representational unity concerns the integration of the contents of consciousness around perceptual objects—what we might call ‘object unity’. Perceptual features are not normally represented by isolated states of consciousness but are bound together in the form of integrated perceptual objects. This process is known as feature-binding. Feature-binding occurs not only within modalities but also between them, for we enjoy multimodal representations of perceptual objects.

And on phenomenal unity, he says:

Subject unity and representational unity capture important aspects of the unity of consciousness, but they don’t get to the heart of the matter. Consider again what it’s like to hear a rumba playing on the stereo whilst seeing a bartender mix a mojito. These two experiences might be subject unified insofar as they are both yours. They might also be representationally unified, for one might hear the rumba as coming from behind the bartender. But over and above these unities is a deeper and more primitive unity: the fact that these two experiences possess a conjoint experiential character. There is something it is like to hear the rumba, there is something it is like to see the bartender work, and there is something it is like to hear the rumba while seeing the bartender work. Any description of one’s overall state of consciousness that omitted the fact that these experiences are had together as components, parts, or elements of a single conscious state would be incomplete. Let us call this kind of unity — sometimes dubbed ‘co-consciousness’ — phenomenal unity.

Phenomenal unity is often in the background in discussions of the ‘stream’ or ‘field’ of consciousness. The stream metaphor is perhaps most naturally associated with the flow of consciousness — its unity through time — whereas the field metaphor more accurately captures the structure of consciousness at a time. We can say that what it is for a pair of experiences to occur within a single phenomenal field just is for them to enjoy a conjoint phenomenality — for there to be something it is like for the subject in question not only to have both experiences but to have them together. By contrast, simultaneous experiences that occur within distinct phenomenal fields do not share a conjoint phenomenal character.

402. See my section on anthropomorphism.

403. For a discussion of the replication crisis in the context of neuropsychology, see Gelman & Geurts (2017).

404. On “researcher degrees of freedom,” see Simmons et al. (2011), Gelman & Loken (2013), Simonsohn et al. (2015), Wicherts et al. (2016), and FiveThirtyEight’s p-hacking tool.

Note that misleading exploitation of researcher degrees of freedom need not be a conscious, deliberate act. As Gelman & Loken (2014) put it:

This multiple comparisons issue is well known in statistics and has been called “p-hacking” in an influential 2011 paper… Our main point in the present article is that it is possible to have multiple potential comparisons… without the researcher performing any conscious procedure of fishing through the data or explicitly examining multiple comparisons.
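The "multiple potential comparisons" point can be made concrete with a small simulation (my own illustration, not a reanalysis of any study cited above): if a researcher measures several equally plausible outcomes on pure noise and reports whichever comparison happens to reach p < .05, the effective false-positive rate climbs well above the nominal 5%.

```python
# Illustrative simulation of how researcher degrees of freedom inflate false
# positives: two groups are drawn from the *same* distribution, but the
# "researcher" may test any of several outcome measures and report whichever
# comparison reaches p < .05.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 5000
n_per_group = 30
n_outcomes = 5          # five equally plausible outcome measures
alpha = 0.05

false_positives = 0
for _ in range(n_experiments):
    # All outcomes are pure noise: no true group difference exists.
    group_a = rng.normal(size=(n_outcomes, n_per_group))
    group_b = rng.normal(size=(n_outcomes, n_per_group))
    pvals = [stats.ttest_ind(a, b).pvalue for a, b in zip(group_a, group_b)]
    if min(pvals) < alpha:   # report whichever comparison "worked"
        false_positives += 1

print(f"Nominal false-positive rate: {alpha:.0%}")
print(f"Observed rate with {n_outcomes} flexible outcomes: "
      f"{false_positives / n_experiments:.0%}")   # roughly 1 - 0.95**5, ~23%
```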

405. E.g. Ioannidis (2005b); Simmons et al. (2011); Nissen et al. (2016); Jørgensen et al. (2016); Smaldino & McElreath (2016).

406. My claim about “most of the situations for which it is used” is just a guess, based on my own experience reading or skimming hundreds of meta-analyses that use the DL algorithm, across many different fields in the life and social sciences.

407. See notes from my conversation with Joel Hektner.

408. This is likely a problem for many meta-analyses, not just primary studies (Bender et al. 2008; Tendal et al. 2011).

409. For a readable guide to many common statistical errors, see Reinhart et al. (2015).

410. But also see Fiedler & Schwarz (2015).

411. See e.g. Vanpaemel et al. (2015) for psychology, and Chang & Li (2015) for economics.

412. On why Odgaard-Jensen et al. (2011) and Anglemyer et al. (2014) may have gotten somewhat different results, see Howick & Mebius (2015).

413. See also, for example, Leek & Jager (2017), though I disagree with them on several points.

414. One interpretation of “consciousness is a feature of fundamental physics” (i.e. panpsychism) seems particularly dubious to me for reasons that are ultimately similar to why I endorse physicalism overall. That is, there are no “spare degrees of freedom” in which an additional “consciousness” property could be “hiding” in the Standard Model (Wilczek 2008, p. 164, calls it the “Core Theory”), and the weight of evidence supporting the Standard Model is enormous. Physicist Sean Carroll argues this point in chapter 42 of Carroll (2016):

Unlike brains, which are complicated and hard to explain, elementary particles such as photons are extraordinarily simple, and therefore relatively easy to study and understand. Physicists talk about different kinds of particles having different “degrees of freedom” — essentially, the number of different kinds of such particles that there are. An electron, for example, has two degrees of freedom. It has both electric charge and spin, but the electric charge can take on only one value (–1), while the spin comes in two possibilities: clockwise or counterclockwise. One times two is two, for two total degrees of freedom. An up quark, by contrast, has six degrees of freedom; like an electron, it has a fixed charge and two possible ways of spinning, but it also has three possible “colors,” and one times two times three is six. Photons have an electric charge fixed at zero, but they do have two possible spin states, so they have two degrees of freedom just like electrons do.

We could interpret the [proposal that consciousness is a fundamental property of the universe] in the most direct way possible, as introducing new degrees of freedom for each elementary particle. In addition to spinning clockwise or counterclockwise, a photon could be in one of (let’s say) two [states of consciousness]. Call them “happy” and “sad,” although the labels are more poetic than authentic.

This overly literal version of panpsychism cannot possibly be true. One of the most basic things we know about the Core Theory is exactly how many degrees of freedom each particle has. Recall the Feynman diagrams from [a previous chapter], describing particles scattering off of one another by exchanging other particles. Each diagram corresponds to a number that we can compute, the total contribution of that particular process to the end result, such as two electrons scattering off of each other by exchanging photons. Those numbers have been experimentally tested to exquisite precision, and the Core Theory has passed with flying colors.

A crucial ingredient in calculating these processes is the number of degrees of freedom associated with each particle. If photons had some hidden degrees of freedom that we didn’t know about, they would alter all of the predictions we make for any scattering experiment that involves such photons, and all of our predictions would be contradicted by the data. That doesn’t happen. So we can state unambiguously that photons do not come in “happy” and “sad” varieties, or any other manner of mental properties that act like physical degrees of freedom.

Advocates of panpsychism would probably not go as far as to imagine that mental properties play roles similar to true physical degrees of freedom, so that the preceding argument wouldn’t dissuade them. Otherwise these new properties would just be ordinary physical properties.

That leaves us in a position very similar to the zombie discussion: we posit new mental properties, and then insist that they have no observable physical effects [e.g. in particle scattering experiments]. What would the world be like if we replaced “protoconscious photons” with “zombie photons” lacking such mental properties? As far as the behavior of physical matter is concerned, including what you say when you talk or write or communicate nonverbally with your romantic partner, the zombie-photon world would be exactly the same as the world where photons have mental properties.

A good Bayesian can therefore conclude that the zombie-photon world is the one we actually live in. We simply don’t gain anything by attributing the features of consciousness to individual particles. Doing so is not a useful way of talking about the world; it buys us no new insight or predictive power. All it does is add a layer of metaphysical complication onto a description that is already perfectly successful.

Consciousness seems to be an intrinsically collective phenomenon, a way of talking about the behavior of complex systems with the capacity for representing themselves and the world within their inner states. Just because it is here full-blown in our contemporary universe doesn’t mean that there was always some trace of it from the very start. Some things just come into being as the universe evolves and entropy and complexity grow: galaxies, planets, organisms, consciousness.

Perhaps because of this, panpsychists typically locate consciousness within physics in other ways, for example by positing that in addition to the usual relational-causal properties of the Standard Model (that best explain the results of our experiments), there is also an “intrinsic character” to physical things, and this intrinsic character is the ground of conscious experience (in either a “basic” or “emergent” way). I confess that to me, this sort of move by various panpsychists/non-reductivists about consciousness seems just as unmotivated as substance dualism does. (For an overview of some views of this kind, see ch. 4 of Weisberg 2014.)
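Carroll's degree-of-freedom bookkeeping earlier in this note is simple enough to spell out directly. The following toy snippet is my own illustration (not from Carroll 2016): it treats a particle's degrees of freedom as the product of the number of allowed values of each of its properties, and shows how even one hidden two-valued "mental" property would change the count that enters scattering predictions.

```python
# Toy bookkeeping for Carroll's degree-of-freedom argument (my illustration,
# not from Carroll 2016). A particle's degrees of freedom here are just the
# product of the number of allowed values of each of its properties.

from math import prod

def degrees_of_freedom(properties):
    """properties: dict mapping property name -> number of allowed values."""
    return prod(properties.values())

electron = {"charge": 1, "spin": 2}
up_quark = {"charge": 1, "spin": 2, "color": 3}
photon   = {"charge": 1, "spin": 2}

print(degrees_of_freedom(electron))  # 2
print(degrees_of_freedom(up_quark))  # 6
print(degrees_of_freedom(photon))    # 2

# A hypothetical hidden two-valued "mental" property would double the count,
# which (per Carroll) would show up in precision scattering experiments.
panpsychist_photon = {**photon, "mood": 2}  # "happy"/"sad", purely illustrative
print(degrees_of_freedom(panpsychist_photon))  # 4
```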

415. On the neuroscience of human vision, see Snowden et al. (2012), Schiller & Tehovnik (2015), and Zhao (2016). Progress on machine vision is so rapid that anything I cite will be out of date within weeks or months, but see e.g. the readings for the University of Toronto’s “Deep Learning in Computer Vision” Winter 2016 course. For a brief overview of mirror self-recognition in animals, see Anderson & Gallup Jr. (2015). For an example claim of mirror self-recognition in a robot, see Takeno (2012).

416. Anderson & Gallup Jr. (2015).

417. I should note that there has been considerable debate about the aptness of the analogy between the study of life and the study of consciousness. I won’t take up that debate here, but see e.g. Chalmers (1996), ch. 3:

It is interesting to see how a typical high-level property — such as life, say — evades the arguments put forward in the case of consciousness. First, it is straightforwardly inconceivable that there could be a physical replica of a living creature that was not itself alive. Perhaps a problem might arise due to context-dependent properties (would a replica that forms randomly in a swamp be alive, or be human?), but fixing environmental facts eliminates even that possibility. Second, there is no “inverted life” possibility analogous to the inverted spectrum. Third, when one knows all the physical facts about an organism (and possibly about its environment), one has enough material to know all the biological facts. Fourth, there is no epistemic asymmetry with life; facts about life in others are as accessible, in principle, as facts about life in ourselves. Fifth, the concept of life is plausibly analyzable in functional terms: to be alive is roughly to possess certain capacities to adapt, reproduce, and metabolize. As a general point, most high-level phenomena come down to matters of physical structure and function, and we have good reason to believe that structural and functional properties are logically supervenient on the physical.

…All this notwithstanding, a common reaction to the sort of argument I have given is to reply that a vitalist about life might have said the same things. For example, a vitalist might have claimed that it is logically possible that a physical replica of me might not be alive, in order to establish that life cannot be reductively explained. And a vitalist might have argued that life is a further fact, not explained by any account of the physical facts. But the vitalist would have been wrong. By analogy, might not the opponent of reductive explanation for consciousness also be wrong?

I think this reaction misplaces the source of vitalist objections. Vitalism was mostly driven by doubt about whether physical mechanisms could perform all the complex functions associated with life: adaptive behavior, reproduction, and the like. At the time, very little was known about the enormous sophistication of biochemical mechanisms, so this sort of doubt was quite natural. But implicit in these very doubts is the conceptual point that when it comes to explaining life, it is the performance of various functions that needs to be explained. Indeed, it is notable that as physical explanation of the relevant functions gradually appeared, vitalist doubts mostly melted away. With consciousness, by contrast, the problem persists even when the various functions are explained.

Presented with a full physical account showing how physical processes perform the relevant functions, a reasonable vitalist would concede that life has been explained. There is not even conceptual room for the performance of these functions without life. Perhaps some ultrastrong vitalist would deny even this, claiming that something is left out by a functional account of life—the vital spirit, perhaps. But the obvious rejoinder is that unlike experience, the vital spirit is not something we have independent reason to believe in. Insofar as there was ever any reason to believe in it, it was as an explanatory construct — “We must have such a thing in order to be able to do such amazing stuff.” But as an explanatory construct, the vital spirit can be eliminated when we find a better explanation of how the functions are performed. Conscious experience, by contrast, forces itself on one as an explanandum and cannot be eliminated so easily.

One reason a vitalist might think something is left out of a functional explanation of life is precisely that nothing in a physical account explains why there is something it is like to be alive. Perhaps some element of belief in a “vital spirit” was tied to the phenomena of one’s inner life. Many have perceived a link between the concepts of life and experience, and even today it seems reasonable to say that one of the things that needs to be explained about life is the fact that many living creatures are conscious. But the existence of this sort of vitalist doubt is of no comfort to the proponent of reductive explanation of consciousness, as it is a doubt that has never been overturned.

418. Bechtel & Richardson (1998):

The role of vitalism in physiology is exemplified in the work of the French anatomist Xavier Bichat (1771-1802). Bichat analysed living systems into parts, identifying twenty-one distinct kinds of tissue, and explaining the behaviour of organisms in terms of the properties of these tissues. He characterized the different tissues in terms of their ‘vital properties’, as forms of ‘sensibility’ and ‘contractility’. Bichat thought the sensibility and contractility of each tissue type constituted the limit to decomposing living matter into its parts. These vital properties preclude identifying life with any physical or chemical phenomenon because the behaviour of living tissues is irregular and contrary to forces exhibited by their inorganic constituents. Insofar as living matter maintains itself in the face of ordinary physical and chemical processes that would destroy it, Bichat thought it could not be explained in terms of those forces. He therefore allowed that there are additional fundamental forces in nature that are on a par with those Newton ascribed to all matter: ‘To create the universe God endowed matter with gravity, elasticity, affinity… and furthermore one portion received as its share sensibility and contractility’ (Bichat 1801, vol. 1: xxxvii). These are vital properties of living tissues.

419. For example, here is Bechtel & Richardson (1998) again:

…chemists in the early nineteenth century hoped to explain many of the reactions found in living organisms. Organic compounds are apparently formed only in living organisms, and thus appear to be products of vital activity. The physiological chemists of the early nineteenth century set out to show, contrary to initial appearances, that these products are the results of chemical processes. Jacob Berzelius (1779-1848) argued that chemistry could account for all of the reactions occurring within living organisms, and that organic and inorganic processes differ only in complexity. ‘There is,’ he said, ‘no special force exclusively the property of living matter which may be called a vital force’ (1836).

420. See e.g. Wimsatt (1976).

421. E.g., they seem to be satisfiable by a very small neural network (Herzog et al. 2007) or, more generally, by a very short computer program.
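To make this point concrete, here is a deliberately trivial sketch (my own illustration; the loosely stated functional criterion it "satisfies" is a hypothetical placeholder, not a criterion proposed in this report or by Herzog et al. 2007) showing how a few lines of code can meet a criterion such as "maintains an internal state, integrates multiple inputs, and can report on its own state":

```python
# A deliberately trivial program that arguably satisfies a loosely stated
# functional criterion (the criterion is a hypothetical placeholder, chosen
# only to illustrate how weak such criteria can be).

class TinyAgent:
    def __init__(self):
        self.state = 0.0                         # "internal state"

    def perceive(self, *inputs):
        self.state = sum(inputs) / len(inputs)   # "integrates" multiple inputs

    def report(self):
        return f"My current state is {self.state:.2f}"  # "reports on itself"

agent = TinyAgent()
agent.perceive(0.2, 0.8, 0.5)
print(agent.report())
```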

422. One way this might happen is that we might conclude that consciousness is complex, but that “consciousness” properly refers to a highly disjunctive collection of a great many different complex processes.
