There are many reasons for philosophical interest in nonhuman animal (hereafter “animal”) consciousness. First, if philosophy often begins with questions about the place of humans in nature, one way humans have attempted to locate themselves is by comparison and contrast with those things in nature most similar to themselves, i.e., other animals. Second, the problem of determining whether animals are conscious stretches the limits of knowledge and scientific methodology (beyond breaking point, according to some). Third, the question of whether animals are conscious beings or “mere automata”, as Cartesians would have it, is of considerable moral significance given the dependence of modern societies on mass farming and the use of animals for biomedical research. Fourth, while theories of consciousness are frequently developed without special regard to questions about animal consciousness, the plausibility of such theories has sometimes been assessed against the results of their application to animal consciousness.
Questions about animal consciousness are just one corner of a more general set of questions about animal cognition and mind. The so-called “cognitive revolution” that took place during the latter half of the 20th century has led to many innovative experiments by comparative psychologists and ethologists probing the cognitive capacities of animals. Despite all this work, the topic of consciousness per se in animals has remained controversial, even taboo, among scientists, even while it remains a matter of common sense to most people that many other animals do have conscious experiences.
- 1. Concepts of Consciousness
- 2. Basic Questions: Epistemological and Ontological
- 3. Applying Ontological Theories
- 4. Arguments Against Animal Consciousness
- 5. Arguments For Animal Consciousness
- 6. Summary
- Other Internet Resources
- Related Entries
Two ordinary senses of consciousness which are not in dispute when applied to animals are the sense of consciousness involved when a creature is awake rather than asleep, or in a coma, and the sense of consciousness implicated in the basic ability of organisms to perceive and thereby respond to selected features of their environments, thus making them conscious or aware of those features. Consciousness in both these senses is identifiable in organisms belonging to a wide variety of taxonomic groups.
A third, more technical notion of consciousness, access consciousness, has been introduced by Block (1995) to capture the sense in which mental representations may be poised for use in rational control of action or speech. This dispositional account of access consciousness — the idea that the representational content is available for other systems to use — is amended by Block (2005) to include an occurrent aspect in which the content is "broadcast" in a "global workspace" which is then available for higher cognitive processing tasks such as categorization, reasoning, planning, and voluntary direction of attention. Block believes that many animals possess access consciousness (speech is not a requirement). Indeed, some of the neurological evidence cited by Block (2005) in support of the global workspace is derived from monkeys. But clearly an author such as Descartes, who, we will see, denied speech, language, and rationality to animals, would also deny access consciousness to them. Those who follow Davidson (1975) in denying intentional states to animals would likely concur.
There are two remaining senses of consciousness that cause controversy when applied to animals: phenomenal consciousness and self-consciousness.
Phenomenal consciousness refers to the qualitative, subjective, experiential, or phenomenological aspects of conscious experience, sometimes identified with qualia. (In this article I also use the term “sentience” to refer to phenomenal consciousness.) To contemplate animal consciousness in this sense is to consider the possibility that, in Nagel's (1974) phrase, there might be “something it is like” to be a member of another species. Nagel disputes our capacity to know, imagine, or describe in scientific (objective) terms what it is like to be a bat, but he assumes that there is something it is like. There are those, however, who would challenge this assumption directly. Others would less directly challenge the possibility of scientifically investigating its truth. Nevertheless, there is broad commonsense agreement that phenomenal consciousness is more likely in mammals and birds than it is in invertebrates, such as insects, crustaceans or molluscs (with the possible exception of some cephalopods), while reptiles, amphibians, and fish constitute an enormous grey area.
Self-consciousness refers to an organism's capacity for second-order representation of the organism's own mental states. Because of its second-order character (“thought about thought”) the capacity for self-consciousness is closely related to questions about “theory of mind” in nonhuman animals — whether any animals are capable of attributing mental states to others. Questions about self-consciousness and theory of mind in animals are a matter of active scientific controversy, with the most attention focused on chimpanzees and to a more limited extent on the other great apes. As attested by this controversy (and unlike questions about animal sentience) questions about self-consciousness in animals are commonly regarded as tractable by empirical means.
The remainder of this article deals primarily with the attribution of consciousness in its phenomenal sense to animals, although there will be some discussion of self-consciousness and theory of mind in animals, in connection with arguments by Carruthers (1998a,b, 2000) that theory of mind is required for phenomenal consciousness.
The topic of consciousness in nonhuman animals has been primarily of epistemological interest to philosophers of mind. Two central questions are:
- Can we know which animals beside humans are conscious? (The Distribution Question)
- Can we know what, if anything, the experiences of animals are like? (The Phenomenological Question)
In his seminal paper “What is it like to be a bat?” Thomas Nagel (1974) simply assumes that there is something that it is like to be a bat, and focuses his attention on what he argues is the scientifically intractable problem of knowing what it is like. Nagel's confidence in the existence of conscious bat experiences would generally be held to be the commonsense view, but as we shall see, it is subject to challenge, and there are those who would argue that the Distribution Question is just as intractable as the Phenomenological Question.
The two questions might be seen as special cases of the general skeptical “problem of other minds”, which even if intractable is nevertheless generally ignored to good effect by psychologists. However it is often thought that knowledge of animal minds — what Allen & Bekoff (1997) refer to as “the other species of mind problem” and Prinz (2005) calls “The Who Problem” — presents special methodological problems because we cannot interrogate animals directly about their experiences (but see Sober 2000 for discussion of tractability within an evolutionary framework). Although there have been attempts to teach human-like languages to members of other species, none has reached a level of conversational ability that would solve this problem directly. Furthermore, except for some language-related work with parrots and dolphins, such approaches are generally limited to those animals most like ourselves, particularly the great apes. But there is great interest in possible forms of consciousness in a much wider variety of species than are suitable for such research, both in connection with questions about the ethical treatment of animals (e.g., Singer 1975/1990; Regan 1983; Rollin 1989; Varner 1999), and in connection with questions about the natural history of consciousness (Griffin 1976, 1984, 1992; Bekoff et al. 2002).
Griffin's agenda for the discipline he labeled “cognitive ethology” features the topic of animal consciousness and advocates a methodology, inherited from classical ethology, that is based in naturalistic observations of animal behavior (see Allen 2004). This agenda has been strongly criticized, with his methodological suggestions often dismissed as anthropomorphic (see Bekoff & Allen 1997 for a survey). But such criticisms may have overestimated the dangers of anthropomorphism (Fisher 1990) and many of the critics themselves rely on claims for which there are scant scientific data (e.g., Kennedy 1992, who claims that the “sin” of anthropomorphism may be programmed into humans genetically).
While epistemological and related methodological issues have been at the forefront of discussions about animal consciousness, the main wave of recent philosophical attention to consciousness has been focused on ontological questions about the nature of phenomenal consciousness. One might reasonably think that the question of what consciousness is should be settled prior to tackling the Distribution Question — that ontology should drive the epistemology. In an ideal world this order of proceeding might be the preferred one, but as we shall see in the next section, the current state of disarray among the ontological theories makes such an approach untenable.
Whether because they are traditional dualists, or because they think that consciousness is an as-yet-undescribed fundamental constituent of the physical universe, some philosophers maintain that consciousness is not explainable in familiar scientific terms. Such non-reductive accounts of consciousness (with the possible exception of those based in anthropocentric theology) provide no principled ontological reasons, however, for doubting that animals are conscious.
Cartesian dualism is, of course, traditionally associated with the view that animals lack minds. But Descartes' argument for this view was not based on any ontological principles, but upon what he took to be the failure of animals to use language conversationally or reason generally. On this basis he claimed that nothing in animal behavior requires a non-mechanical (mental) explanation; hence he saw no reason to attribute possession of mind to animals.
There is, however, no ontological reason why animal bodies are any less suitable vehicles for embodying a Cartesian mind than are human bodies. Hence dualism itself does not preclude animal minds. Similarly, more recent non-reductive accounts of consciousness in terms of fundamental properties are quite compatible with the idea of animal consciousness. None of these accounts provides any constitutional reason why those fundamental properties should not be located in animals. Furthermore, given that none of these theories specifies empirical means for detecting the right stuff for consciousness, and indeed dualist theories cannot do so, they seem forced to rely upon behavioral criteria rather than ontological criteria for deciding the Distribution Question.
Other philosophers have tried to give reductive accounts of consciousness in terms either of the physical, biochemical, or neurological properties of nervous systems (physicalist accounts) or in terms of other cognitive processes (functionalist accounts).
Physicalist accounts of consciousness, which identify consciousness with physical or physiological properties of neurons, do not provide any particular obstacles to attributing consciousness to animals, given that animals and humans share the same basic biology. Of course there is no consensus about which physical or neurological properties are to be identified with consciousness. But if it could be determined that phenomenal consciousness was identical to a property such as quantum coherence in the microtubules of neurons, or brain waves of a specific frequency, then settling the Distribution Question would be a straightforward matter of establishing whether or not members of other species possess the specified properties.
Functionalist reductive accounts have sought to explain consciousness in terms of other cognitive processes. Some of these accounts identify phenomenal consciousness with the (first-order) representational properties of mental states. Such accounts are generally quite friendly to attributions of consciousness to animals, for it is relatively uncontroversial that animals have internal states that have the requisite representational properties. Such a view underlies Dretske's (1995) claim that phenomenal consciousness is inseparable from a creature's capacity to perceive and respond to features of its environment, i.e., one of the uncontroversial senses of consciousness identified above. On Dretske's view, phenomenal consciousness is therefore very widespread in the animal kingdom. Likewise, Tye (2000) argues, based upon his first-order representational account of phenomenal consciousness, that it extends even to honeybees.
Block (2005) pursues a different strategy, using tentative functional characterizations of phenomenal and access consciousness to interpret evidence from neuroscientists' search for neural correlates of consciousness. He argues, on the basis of evidence from both humans and monkeys, that recurrent feedback activity in sensory cortex is the most plausible candidate for being the neural correlate of phenomenal consciousness in these species. Prinz (2005) also pursues a neurofunctional account, but identifies phenomenal consciousness with a different functional role than Block. He argues for identifying phenomenal consciousness with brain processes that are involved in attention to intermediate-level perceptual representations which feed into working memory via higher level, perspective-invariant representations. Since the evidence for such processes is at least partially derived from animals, including other primates and rats, his view is supportive of the idea that phenomenal consciousness is found in some nonhuman species (presumably most mammals). Nevertheless, he maintains that it may be impossible ever to answer the Distribution Question for more distantly related species; he mentions octopus, pigeons, bees, and slugs in this context.
Functionalist theories of phenomenal consciousness that rely on more elaborately structured cognitive capacities can be less accommodating to the belief that animals do have conscious mental states. For example, some twentieth-century philosophers, while rejecting Cartesian dualism, have turned Descartes' epistemological reliance upon language as an indicator of consciousness into an ontological point about the essential involvement of linguistic processing in human consciousness. Such insistence on the importance of language for consciousness underwrites the tendency of philosophers such as Dennett (1969, 1995, 1997) to deny that animals are conscious in anything like the same sense that humans are (see also Carruthers 1996).
Because Carruthers has explicitly applied his functionalist “higher-order thought” theory of phenomenal consciousness to derive a negative conclusion about animal consciousness (Carruthers 1998a,b, 2000) this account deserves special attention here. According to Carruthers, a mental state is phenomenally conscious for a subject just in case it is available to be thought about directly by that subject. Furthermore, according to Carruthers, such higher-order thoughts are not possible unless a creature has a “theory of mind” to provide it with the concepts necessary for thought about mental states. But, Carruthers argues, there is little, if any, scientific support for theory of mind in nonhuman animals, even among the great apes, so he concludes that there is little support either for the view that any animals possess phenomenal consciousness. The evaluation of this argument will be taken up further below, but it is worth noting here that since most developmental psychologists agree that young children before the age of 4 lack a theory of mind, Carruthers' view entails that they are not sentient either — fear of needles notwithstanding! This is a bullet Carruthers bites, although for many it constitutes a reductio of his view (a response Carruthers would certainly regard as question-begging).
In contrast to Carruthers' higher-order thought account of sentience, other theorists such as Armstrong (1980) and Lycan (1996) have preferred a higher-order experience account, where consciousness is explained in terms of inner perception of mental states, a view that can be traced back to Aristotle, and also to John Locke. Because such models do not require the ability to conceptualize mental states, proponents of higher-order experience theories have been slightly more inclined than higher-order thought theorists to allow that the relevant abilities may be found in other animals.
Phenomenal consciousness is just one feature (some would say the defining feature) of mental states or events. Any theory of animal consciousness must be understood, however, in the context of a larger investigation of animal cognition that (among philosophers) will also be concerned with issues such as intentionality (in the sense described by the 19th C. German psychologist Franz Brentano) and mental content (Dennett 1983, 1987; Allen 1992a,b, 1995, 1997).
Philosophical opinion divides over the relation of consciousness to intentionality with some philosophers maintaining that they are strictly independent, others (particularly proponents of the functionalist theories of consciousness described in this section) arguing that intentionality is necessary for consciousness, and still others arguing that consciousness is necessary for genuine intentionality (see Allen 1997 for discussion). Many behavioral scientists accept cognitivist explanations of animal behavior that attribute representational states to their subjects. Yet they remain hesitant to attribute consciousness. If the representations invoked within cognitive science are intentional in Brentano's sense, then these scientists seem committed to denying that consciousness is necessary for intentionality.
Where does this leave the epistemological questions about animal consciousness? While it may seem natural to think that we must have a theory of what consciousness is before we try to determine whether other animals have it, this may in fact be putting the conceptual cart before the empirical horse. In the early stages of the scientific investigation of any phenomenon, putative samples must be identified by rough rules of thumb (or working definitions) rather than complete theories. Early scientists identified gold by contingent characteristics rather than its atomic essence, knowledge of which had to await thorough investigation of many putative examples — some of which turned out to be gold and some not. Likewise, at this stage of the game, perhaps the study of animal consciousness would benefit from the identification of animal traits worthy of further investigation, with no firm commitment to the idea that all these examples will involve conscious experience.
Of course, as a part of this process some reasons must be given for identifying specific animal traits as “interesting” for the study of consciousness, and in a weak sense such reasons will constitute an argument for attributing consciousness to the animals possessing those traits. These reasons can be evaluated even in the absence of an accepted ontology for consciousness. Furthermore, those who would bring animal consciousness into the scientific fold in this way must also explain how scientific methodology is adequate to the task in the face of various arguments that it is inadequate. These arguments, and the response to them, can also be evaluated in the absence of ontological certitude. Thus there is plenty to cover in the remaining sections of this encyclopedia entry.
Recall the Cartesian argument from the previous section against animal consciousness (or animal mind) on the grounds that animals do not use language conversationally or reason generally. This argument, based on the alleged failure of animals to display certain intellectual capacities, is illustrative of a general pattern of using certain dissimilarities between animals and humans to argue that animals lack consciousness.
A common refrain in response to such arguments is that, in situations of partial information, “absence of evidence is not evidence of absence”. Descartes dismissed parrots vocalizing human words because he thought it was merely meaningless repetition. This judgment may have been appropriate for the few parrots he encountered, but it was not based on a systematic, scientific investigation of the capacities of parrots. Nowadays many would argue that Pepperberg's study of the African Grey parrot “Alex” (Pepperberg 1999) should lay the Cartesian prejudice to rest. This study, along with several on the acquisition of a degree of linguistic competence by chimpanzees and bonobos (e.g., Gardner et al. 1989; Savage-Rumbaugh 1996) would seem to undermine Descartes' assertions about lack of conversational language use and general reasoning abilities in animals. (See, also, contributions to Hurley & Nudds 2006.)
Cartesians respond by pointing out the limitations shown by animals in such studies (they can't play a good game of chess, after all), and they join linguists in protesting that the subjects of animal-language studies have not fully mastered the recursive syntax of natural human languages. But this kind of post hoc raising of the bar suggests to many scientists that the Cartesian position is not being held as a scientific hypothesis, but as a dogma to be defended by any means. Convinced by evidence of sophisticated cognitive abilities, most philosophers these days (including Carruthers) agree with Block that something like access consciousness is properly attributed to many animals. Nevertheless, when it comes to phenomenal consciousness, dissimilarity arguments are not entirely powerless to give some pause to defenders of animal sentience, for surely most would agree that, at some point, the dissimilarities between the capacities of humans and the members of another species (the common earthworm Lumbricus terrestris, for example) are so great that it is unlikely that such creatures are sentient. A grey area arises precisely because no one can say how much dissimilarity is enough to trigger the judgment that sentience is absent.
This comparison of animal behavior to the unconscious capacities of humans can be criticized on the grounds that, like Descartes' pronouncements on parrots, it is based only on unsystematic observation of animal behavior. There are grounds for thinking that careful investigation would reveal that there is not a very close analogy between animal behavior and human behaviors associated with these putative cases of unconscious experience. For instance, it is notable that the unconscious experiences of automatic driving are not remembered by their subjects, whereas there is no evidence that animals are similarly unable to recall their allegedly unconscious experiences. Likewise, blindsight subjects do not spontaneously respond to things presented to their scotomas, but must be trained to make responses using a forced-response paradigm. There is no evidence that such limitations are normal for animals, or that animals behave like blindsight victims with respect to their visual experiences (Jamieson & Bekoff 1991).

Recall Carruthers' higher-order thought account, on which a mental state is phenomenally conscious only if it is available to be thought about directly by the subject, which requires the subject to conceptualize its own mental states. Such conceptualization requires, according to Carruthers, a theory of mind. And, Carruthers maintains, there is little basis for thinking that any nonhuman animals have a theory of mind, with the possible exception of chimpanzees. This argument is, of course, no stronger than the higher-order thought account of consciousness upon which it is based. But setting that aside for the sake of argument, this challenge by Carruthers deserves further attention as perhaps the most empirically detailed case against animal consciousness to have been made in the philosophical literature.
The systematic study of self-consciousness and theory of mind in nonhuman animals has its roots in an approach to the study of self-consciousness pioneered by Gallup (1970). It was long known that chimpanzees would use mirrors to inspect their images, but Gallup developed a protocol that appears to allow a scientific determination of whether it is merely the mirror image per se that is the object of interest to the animal inspecting it, or whether it is the image qua proxy for the animal itself that is the object of interest. Using chimpanzees with extensive prior familiarity with mirrors, Gallup anesthetized his subjects and marked their foreheads with a distinctive dye, or, in a control group, anesthetized them only. Upon waking, marked animals who were allowed to see themselves in a mirror touched their own foreheads in the region of the mark significantly more frequently than controls who were either unmarked or not allowed to look into a mirror. Gallup's protocol has been repeated with other great apes and some monkey species, but besides chimpanzees only orangutans consistently “pass” the test. Using a modified version of Gallup's procedure, involving no anesthesia, Reiss & Marino (2001) have recently provided evidence of mirror self-recognition in bottlenose dolphins, although this evidence has been disputed (e.g. Wynne 2004).
According to Gallup et al. (2002), “Mirror self-recognition is an indicator of self-awareness.” Furthermore, they claim that “the ability to infer the existence of mental states in others (known as theory of mind, or mental state attribution) is a byproduct of being self-aware.” They describe the connection between self-awareness and theory of mind thus: “If you are self-aware then you are in a position to use your experience to model the existence of comparable processes in others.” The success of chimpanzees on the mirror self-recognition task thus may give some reason to maintain that they are phenomenally conscious on Carruthers' account, whereas the failure of most species that have been tested to pass the test might be taken as evidence against their sentience.
Carruthers neither endorses nor outright rejects the conclusion that chimpanzees are sentient. His suspicion that even chimpanzees might lack theory of mind, and therefore (on his view) phenomenal consciousness, is based on some ingenious laboratory studies by Povinelli (1996) showing that in interactions with human food providers, chimpanzees apparently fail to understand the role of eyes in providing visual information to the humans, despite their outwardly similar behavior to humans in attending to cues such as facial orientation. The interpretation of Povinelli's work remains controversial. Hare et al. (2000) conducted experiments in which dominant and subordinate animals competed with each other for food, and concluded that “at least in some situations chimpanzees know what conspecifics do and do not see and, furthermore, that they use this knowledge to formulate their behavioral strategies in food competition situations.” They suggest that Povinelli's negative results may be due to the fact that his experiments involve less natural chimp-human interactions. Given the uncertainty, Carruthers is therefore well-advised in the tentative manner in which he puts forward his claims about chimpanzee sentience.
A full discussion of the controversy over theory of mind deserves an entry of its own (see also Heyes 1998), but it is worth remarking here that the theory of mind debate has origins in the hypothesis that primate intelligence in general, and human intelligence in particular, is specially adapted for social cognition (see Byrne & Whiten 1988, especially the first two chapters, by Jolly and Humphrey). Consequently, it has been argued that evidence for the ability to attribute mental states in a wide range of species might be better sought in natural activities such as social play, rather than in laboratory-designed experiments which place the animals in artificial situations (Allen & Bekoff 1997; see esp. chapter 6; see also Hare et al. 2000, Hare et al. 2001, and Hare & Wrangham 2002). Furthermore, to reiterate the maxim that absence of evidence is not evidence of absence, it is quite possible that the mirror test is not an appropriate test for theory of mind in most species because of its specific dependence on the ability to match motor to visual information, a skill that may not have needed to evolve in a majority of species. Alternative approaches that have attempted to provide strong evidence of theory of mind in nonhuman animals under natural conditions have generally failed to produce such evidence (see, e.g., the conclusions about theory of mind in vervet monkeys by Cheney & Seyfarth 1990), although anecdotal evidence tantalizingly suggests that researchers still have not managed to devise the right experiments.
One developing line of research that may be relevant considers the performance of animals in situations of cognitive uncertainty. When primates and dolphins are given a “bailout” option of indicating that they don't know the correct response in a discrimination choice experiment, they have been shown to choose the bailout option in ways that are very similar to humans (Smith et al. 2003). In the literature on human cognition, awareness of what one knows is called “metacognition” and it is associated with a “feeling of knowing”. Smith and colleagues claim that investigating metacognition in animals could provide information about the relation of self-awareness to other-awareness (theory of mind), and that their results already show that “animals have functional features of or parallels to human conscious cognition”. They also raise the question of what this might tell us about the phenomenal features of that cognition. Browne (2004) argues that the dolphin research cannot support the connection to theory of mind, but that it nevertheless is relevant to consciousness in dolphins, particularly within the theoretical framework provided by Lycan, described above. The notion of metacognition also seems relevant to questions about access consciousness.
Many scientists remain convinced that even if questions about self-consciousness are empirically tractable, no amount of experimentation can provide access to phenomenal consciousness in nonhuman animals. This remains true even among those scientists who are willing to invoke cognitive explanations of animal behavior that advert to internal representations. Opposition to dealing with consciousness can be understood as a legacy of behavioristic psychology first because of the behaviorists' rejection of terms for unobservables unless they could be formally defined, and second because of the strong association in many behaviorists' minds between the use of mentalistic terms and the twin bugaboos of Cartesian dualism and introspectionist psychology (Bekoff & Allen 1997). In some cases these scientists are even dualists themselves, but they are strongly committed to denying the possibility of scientifically investigating consciousness, and remain skeptical of all attempts to bring it into the scientific mainstream.
It is worth remarking that there is often a considerable disconnect between philosophers and psychologists (or ethologists) on the topic of animal minds. Some of this can be explained by the failure of some psychologists to heed the philosophers' distinction between intentionality in its ordinary sense and intentionality in the technical sense derived from Brentano (with perhaps most of the blame being apportioned to philosophers for failing to give clear explanations of this distinction and its importance). Indeed, some psychologists, having conflated Brentano's notion with the ordinary sense of intentionality, and then identifying the ordinary sense of intentionality with “free will” and conscious deliberation, have literally gone on to substitute the term “consciousness” in their criticisms of philosophers who were discussing the intentionality of animal mental states and who were not explicitly concerned with consciousness at all (see, e.g., Blumberg & Wasserman 1995).
Because consciousness is assumed to be private or subjective, it is often taken to be beyond the reach of objective scientific methods (see Nagel 1974). This claim might be taken in either of two ways. On the one hand it might be taken to bear on the possibility of answering the Distribution Question, i.e., to reject the possibility of knowledge that a member of another taxonomic group (e.g., a bat) has conscious states. On the other hand it might be taken to bear on the possibility of answering the Phenomenological Question, i.e., to reject the possibility of knowledge of the phenomenological details of the mental states of a member of another taxonomic group. The difference between believing with justification that a bat is conscious and knowing what it is like to be a bat is important because, at best, the privacy of conscious experience supports a negative conclusion only about the latter. To support a negative conclusion about the former one must also assume that consciousness has absolutely no measurable effects on behavior, i.e., one must accept epiphenomenalism. But if one rejects epiphenomenalism and maintains that consciousness does have effects on behavior then a strategy of inference to the best explanation may be used to support its attribution. More will be said about this in the next section.
Many judgments of the similarity between human and animal behavior are readily made by ordinary observers. The reactions of many animals, particularly other mammals, to bodily events that humans would report as painful are easily and automatically recognized by most people as pain responses. High-pitched vocalizations, fear responses, nursing of injuries, and learned avoidance are among the responses to noxious stimuli that are all part of the common mammalian heritage. Similar responses are also visible to some degree or other in organisms from other taxonomic groups.
Less accessible to casual observation, but still in the realm of behavioral evidence are scientific demonstrations that members of other species, even of other phyla, are susceptible to the same visual illusions as we are (e.g., Fujita et al. 1991) suggesting that their visual experiences are similar.
Neurological similarities between humans and other animals have also been taken to suggest commonality of conscious experience. All mammals share the same basic brain anatomy, and much is shared with vertebrates more generally. A large amount of scientific research that is of direct relevance to the treatment of human pain, including on the efficacy of analgesics and anesthetics, is conducted on rats and other animals. The validity of this research depends on the similar mechanisms involved and to many it seems arbitrary to deny that injured rats, who respond well to opiates for example, feel pain. Likewise, much of the basic research that is of direct relevance to understanding human visual consciousness has been conducted on the very similar visual systems of monkeys. Monkeys whose primary visual cortex is damaged even show impairments analogous to those of human blindsight patients (Stoerig & Cowey 1997) suggesting that the visual consciousness of intact monkeys is similar to that of intact humans.
Such similarity arguments are, of course, inherently weak, for it is always open to critics to exploit some disanalogy between animals and humans to argue that the similarities don't entail the conclusion that both are sentient (Allen 1998). Even when bolstered by evolutionary considerations of continuity between the species, the arguments are vulnerable, for the mere fact that humans have a trait does not entail that our closest relatives must have that trait too. There is no inconsistency with evolutionary continuity in maintaining that only humans have the capacity to learn to play chess. Likewise for consciousness. Povinelli & Giambrone (2000) also argue that the argument from analogy fails because superficial observation of quite similar behaviors even in closely related species does not guarantee that the underlying cognitive principles are the same, a point that Povinelli believes is demonstrated by his research (described in the previous section) into how chimpanzees use cues to track visual attention (Povinelli 1996). (See Allen 2002 for criticism of their analysis of the argument by analogy.)
Perhaps a combination of behavioral, physiological and morphological similarities with evolutionary theory amounts to a stronger overall case. But in the absence of more specific theoretical grounds for attributing consciousness to animals, this composite argument — which might be called “the argument from homology” — despite its comportment with common sense, is unlikely to change the minds of those who are skeptical.
If phenomenal consciousness is completely epiphenomenal, as some philosophers believe, then a search for the functions of consciousness is doomed to futility. In fact, if consciousness is completely epiphenomenal then it cannot have evolved by natural selection. On the assumption that phenomenal consciousness is an evolved characteristic of human minds, at least, and therefore that epiphenomenalism is false, then an attempt to understand the biological functions of consciousness may provide the best chance of identifying its occurrence in different species.
Such an approach is nascent in Griffin's attempts to force ethologists to pay attention to questions about animal consciousness. (For the purposes of this discussion I assume that Griffin's proposals are intended to relate to phenomenal consciousness, as well as, perhaps, to consciousness in its other senses.) In a series of books, Griffin (who made his scientific reputation by carefully detailing the physical and physiological characteristics of echolocation by bats) provides examples of communicative and problem-solving behavior by animals, particularly under natural conditions, and argues that these are prime places for ethologists to begin their investigations of animal consciousness (Griffin 1976, 1984, 1992).
Although he thinks that the intelligence displayed by these examples suggests conscious thought, many critics have been disappointed by the lack of systematic connection between Griffin's examples and the attribution of consciousness (see Alcock 1992; Bekoff & Allen 1997; Allen & Bekoff 1997). Griffin's main positive proposal in this respect has been the rather implausible suggestion that consciousness might have the function of compensating for limited neural machinery. Thus Griffin is motivated to suggest that consciousness may be more important to honey bees than to humans.
If compensating for small sets of neurons is not a plausible function for consciousness, what might be? The commonsensical answer would be that consciousness “tells” the organism about events in the environment, or, in the case of pain and other proprioceptive sensations, about the state of the body. But this answer begs the question against opponents of attributing conscious states to animals, for it fails to respect the distinction between phenomenal consciousness and mere awareness (in the uncontroversial sense of detection) of environmental or bodily events. Opponents of attributing phenomenal consciousness to animals are not committed to denying this more general kind of consciousness of various external and bodily events, so there is no logical entailment from awareness of things in the environment or the body to animal sentience.
Perhaps more sophisticated attempts to spell out the functions of consciousness are similarly doomed. But Allen & Bekoff (1997, ch. 8) suggest that progress might be made by investigating the capacities of animals to adjust to their own perceptual errors. Not all adjustments to error provide grounds for suspecting that consciousness is involved, but in cases where an organism can adjust to a perceptual error while retaining the capacity to exploit the content of the erroneous perception, then there may be a robust sense in which the animal internally distinguishes its own appearance states from other judgments about the world. (Humans, for instance, have conscious visual experiences that they know are misleading — i.e., visual illusions — yet they can exploit the erroneous content of these experiences for various purposes, such as deceiving others or answering questions about how things appear to them.) Given that there are theoretical grounds for identifying conscious experiences with “appearance states”, attempts to discover whether animals have such capacities might be a good place to start looking for animal consciousness. It is important, however, to emphasize that such capacities are not themselves intended to be definitive or in any way criterial for consciousness.
Carruthers (2000) makes a similar suggestion about the function of consciousness, relating it to the general capacity for making an appearance-reality distinction; of course he continues to maintain that this capacity depends upon having conceptual resources that are beyond the grasp of nonhuman animals.
An article such as this perhaps raises more questions than it answers, but the topic would be of little philosophical interest if it were otherwise.
To philosophers interested in animal welfare or animal rights the issue of animal sentience is of utmost importance. This is due to wide, but by no means universal, acceptance of the biconditional statement:
[A]: animals deserve moral consideration if and only if they are sentient (especially possessing the capacity to feel pain).
Some philosophers have defended the view that animals are not sentient and attempted to use one of [A]'s component conditionals for modus tollens. Indeed Carruthers (1989) even argued that given their lack of sentience, it would be immoral not to use animals for research and other experimentation if doing so would improve the lot of sentient creatures such as ourselves. He has more recently backed off this view (1998b), denying [A] by claiming that sentience is not the sole basis for moral consideration, and that animals qualify for consideration on the basis of frustration of their unconscious desires. Varner (1999) disagrees with Carruthers by arguing for conscious desires throughout mammals and birds, but like Carruthers he also rejects [A], arguing for an even more inclusive criterion of moral considerability in terms of the biological “interests” that all living things have.
Others are inclined to use the other component conditional of [A] for modus ponens, taking for granted that animals are conscious, and regarding any theory of consciousness which denies this as defective. In this connection it is also sometimes argued that if there is uncertainty about whether other animals really are conscious, the morally safe position is to give them the benefit of the doubt.
The fact remains that for most philosophers of mind, the topic of animal consciousness is of peripheral interest to their main project of understanding the ontology of consciousness. Because of their focus on ontological rather than epistemological issues, there is often quite a disconnect between philosophers and scientists on these issues. But there are encouraging signs that interdisciplinary work between philosophers and behavioral scientists is beginning to lay the groundwork for addressing some questions about animal consciousness in a philosophically sophisticated yet empirically tractable way.
- Akins, K. A. (1993) “A bat without qualities.” In Consciousness, ed. M. Davies and G. Humphreys. Oxford: Blackwell.
- Alcock, J. (1992) Review of Griffin 1992. Natural History, September 1992: 62-65.
- Allen, C. (1992a) “Mental content.” British Journal for the Philosophy of Science 43: 537-553.
- Allen, C. (1992b) “Mental content and evolutionary explanation.” Biology and Philosophy 7: 1-12.
- Allen, C. (1995) “Intentionality: Natural and artificial.” In H. Roitblat and J.-A.Meyer (eds.) Comparative Approaches to Cognitive Science. Cambridge, MA: MIT Press.
- Allen, C. (1997) “Animal cognition and animal minds.” In P. Machamer & M. Carrier (eds.) Philosophy and the Sciences of the Mind: Pittsburgh-Konstanz Series in the Philosophy and History of Science, vol. 4. Pittsburgh and Konstanz: Pittsburgh University Press and Universitätsverlag Konstanz, pp. 227-243. [Preprint available online]
- Allen, C. (1998) “The discovery of animal consciousness: an optimistic assessment.” Journal of Agricultural and Environmental Ethics 10: 217-225.
- Allen, C. (2002) “A skeptic's progress.” Biology & Philosophy 17: 695-702.
- Allen, C. (2004) “Is anyone a cognitive ethologist?” Biology & Philosophy 19: 589-607.
- Allen, C. & Bekoff, M. (1997) Species of Mind. Cambridge, MA: MIT Press. See especially ch. 8.
- Allen, C. & Saidel, E. (1998) “The Evolution of Reference.” In The Evolution of Mind, ed. D. Cummins & C. Allen. New York: Oxford University Press.
- Andrews, K. (1996) “The first step in the case for great ape equality: the argument for other minds.” Etica & Animali 8/96 (Special issue devoted to The Great Ape Project): 131-141.
- Armstrong, D. M. (1980) The Nature of Mind and Other Essays. Ithaca, NY: Cornell University Press.
- Bekoff, M. & Allen, C. (1997) “Cognitive ethology: slayers, skeptics, and proponents.” In Anthropomorphism, Anecdote, and Animals, ed. R. Mitchell et al. New York: SUNY Press.
- Bekoff, M., Allen, C., & Burghardt, G.M. (eds.) (2002) The Cognitive Animal, Cambridge, MA: The MIT Press.
- Block, N. (1995) “On A Confusion About a Function of Consciousness.” Behavioral and Brain Sciences 18: 227-47.
- Block, N. (2005) “Two Neural Correlates of Consciousness”. Trends in Cognitive Sciences 9:41-89.
- Blumberg, M. S. & Wasserman, E. A. (1995) “Animal mind and the argument from design.” American Psychologist 50: 133-144.
- Browne, D. (2004) “Do dolphins know their own minds?” Biology & Philosophy 19: 633-653.
- Burkhardt, R. W. Jr. (1997) “The founders of ethology and the problem of animal subjective experience.” In Animal Consciousness and Animal Ethics: Perspectives from the Netherlands ed. M. Dol, S. Kasanmoentalib, S. Lijmbach, E. Rivas & R. van den Bos. Assen, the Netherlands: van Gorcum: pp. 1-13.
- Byrne R. W. & Whiten, A. (1988) Machiavellian Intelligence: social expertise and the evolution of intellect in monkeys, apes and humans. Oxford: Oxford University Press.
- Carruthers, P. (1989) “Brute Experience.” Journal of Philosophy 86: 258-269.
- Carruthers, P. (1992) The Animals Issue. Cambridge: Cambridge University Press.
- Carruthers, P. (1996) Language, Thought and Consciousness. Cambridge: Cambridge University Press.
- Carruthers, P. (1998a) “Natural Theories of Consciousness.” European Journal of Philosophy 6: 203-222.
- Carruthers, P. (1998b) “Animal Subjectivity”, Psyche, Vol. 4/No. 3 (April 1998) [Available online]; see also “Replies to Critics: Explaining Subjectivity”, (his response to commentators), in Psyche, Vol. 6/No. 3 (February 2000) [Available online].
- Carruthers, P. (2000) Phenomenal Consciousness: A naturalistic theory. Cambridge: Cambridge University Press.
- Cheney, D. L., and Seyfarth, R. M. (1990) How Monkeys See the World: Inside the mind of another species. Chicago: University of Chicago Press.
- Davidson, D. (1975) “Thought and talk.” In Guttenplan, S. (ed.) Mind and Language. Oxford: Oxford University Press.
- Dawkins, M.S. (1993) Through Our Eyes Only? The Search for Animal Consciousness. New York: W. H. Freeman.
- Dennett, D. C. (1969) Content and Consciousness. London: Routledge and Kegan Paul.
- Dennett, D. C. (1983) “Intentional systems in cognitive ethology: The ‘Panglossian paradigm’ defended.” Behavioral and Brain Sciences 6: 343-390.
- Dennett, D. C. (1987) The Intentional Stance. Cambridge, MA: MIT Press.
- Dennett, D. C. (1995) “Animal consciousness and why it matters.” Social Research 62: 691-710.
- Dennett, D. C. (1997) Kinds of Minds: Towards an Understanding of Consciousness New York: Basic Books (Science Masters Series).
- Dretske, F. (1995) Naturalizing the Mind. Cambridge, MA: MIT Press.
- Fisher, J. A. (1990) “The myth of anthropomorphism.” Originally published in M. Bekoff & D. Jamieson (eds.) Interpretation and explanation in the study of animal behavior: Vol. 1, Interpretation, intentionality, and communication. Boulder: Westview Press. Reprinted in Bekoff, M. & D. Jamieson (eds.) (1996) Readings in Animal Cognition. Cambridge, MA: MIT Press.
- Fujita, K., Blough, D. S., & Blough, P. M. (1991) “Pigeons see the Ponzo illusion.” Animal Learning and Behavior, 19, 283-293.
- Gallup, G. G., Jr. (1970) “Chimpanzees: Self-recognition.” Science 167: 86-87.
- Gallup, G. G., Jr., Anderson, J. R., & Shillito, D. J. (2002) “The Mirror Test” in Bekoff, Allen, & Burghardt (eds.).
- Gardner, R. A., Gardner, B. T., & Van Cantfort, T. E. (1989) Teaching sign language to chimpanzees. Albany, NY: SUNY Press.
- Griffin, D. R. (1976) The Question of Animal Awareness: Evolutionary Continuity of Mental Experience. New York: Rockefeller University Press. (second edition: 1981).
- Griffin, D. R. (1984) Animal Thinking. Cambridge, MA: Harvard University Press.
- Griffin, D. R. (1992) Animal Minds. Chicago: University of Chicago Press.
- Hare, B., Call, J., Agnetta, B. & Tomasello, M. (2000) “Chimpanzees know what conspecifics do and do not see.” Animal Behaviour 59: 771-785.
- Hare, B., Call, J., & Tomasello, M. (2001) “Do chimpanzees know what conspecifics know?” Animal Behaviour 63: 139-151.
- Hare, B., & Wrangham, R. (2002) “Integrating two evolutionary models for the study of social cognition.” in Bekoff, Allen, & Burghardt (2002).
- Hauser, M., Chomsky, N., & Fitch, W. T. (2002) “The faculty of language: What is it, who has it, and how did it evolve?” Science 298: 1569-1579.
- Heyes, C. (1998) “Theory of mind in nonhuman primates.” Behavioral and Brain Sciences 21: 101-148.
- Hurley, S. & Nudds, M. (eds.) (2006) Rational Animals? Oxford: Oxford University Press.
- Jamieson, D. & Bekoff, M. (1992) “Carruthers on nonconscious experience.” Analysis 52: 23-28.
- Kennedy, J. S. (1992) The new anthropomorphism. New York: Cambridge University Press.
- Lycan, W. (1996) Consciousness and Experience. Cambridge, MA: MIT Press.
- Nagel, T. (1974) “What is it like to be a bat?”, Philosophical Review 83: 435-450.
- Pepperberg, I.M. (1999) The Alex Studies: Cognitive and communicative abilities of Grey parrots. Cambridge, MA: Harvard University Press.
- Povinelli, D.J. (1996) “Chimpanzee theory of mind?” In P. Carruthers and P. Smith (eds.) Theories of Theories of Mind. Cambridge: Cambridge University Press.
- Povinelli, D. J. and Giambrone, S. J. (2000) “Inferring Other Minds: Failure of the Argument by Analogy.” Philosophical Topics 27: 161-201.
- Prinz, J. (2005) “A Neurofunctional Theory of Consciousness.” In A. Brook and K. Akins (eds.), Cognition and the Brain: The Philosophy and Neuroscience Movement. New York: Cambridge University Press.
- Radner, D. (1994) “Heterophenomenology: learning about the birds and the bees.” Journal of Philosophy 91: 389-403.
- Radner, D. & Radner, M. (1986) Animal Consciousness. Amherst, New York: Prometheus Books.
- Regan, T. (1983) The Case for Animal Rights. Berkeley: University of California Press. (See especially chs. 1 and 2.)
- Reiss, D. & Marino, L. (2001) “Mirror self-recognition in the bottlenose dolphin: A case of cognitive convergence.” Proceedings of the National Academy of Sciences 98: 5937-5942.
- Rollin, B. E. (1989) The Unheeded Cry: Animal Consciousness, Animal Pain and Science. New York: Oxford University Press.
- Rosenthal, D. (1986) “Two concepts of consciousness.” Philosophical Studies 49, 329-359.
- Rosenthal, D. (1993) “Thinking that one thinks.” In M. Davies and G. Humphreys (eds.), Consciousness. Oxford: Blackwell; 197-223.
- Savage-Rumbaugh, S. (1996) Kanzi: The ape at the brink of the human mind. New York: John Wiley & Sons.
- Singer, P. (1975/1990) Animal Liberation (Revised Edition, 1990). New York: Avon Books.
- Smith, J., Shields, W., & Washburn, D. (2003) “The comparative psychology of uncertainty monitoring and metacognition.” Behavioral and Brain Sciences 26: 317-373.
- Sober, E. (2000) “Evolution and the problem of other minds.” Journal of Philosophy 97: 365-386.
- Sorabji, R. (1993) Animal Minds and Human Morals: the origins of the Western debate, Ithaca, NY: Cornell University Press.
- Stoerig, P. & Cowey, A. (1997) “Blindsight in man and monkey.” Brain 120: 535-559.
- Trout, J.D. (2001) “The Biological Basis of Speech: What to Infer from Talking to the Animals.” Psychological Review 108:523-549.
- Tye, M. (2000) Consciousness, Color, and Content. Cambridge, MA: MIT Press.
- Varner, G. (1999) In Nature's Interests? New York: Oxford University Press.
- Wilkes, K. (1984) “Is consciousness important?” British Journal for the Philosophy of Science 35: 223-243.
- Wilson, M. D. (1995) “Animal ideas.” Proceedings and Addresses of the APA 69: 7-25.
- Wynne, C. (2004) Do Animals Think? Princeton, NJ: Princeton University Press.
- Psyche Special Symposium on Animal Subjectivity: Target article by Peter Carruthers (1998b) with author's abstract, peer commentary, and author's response.
- Field Guide to the Philosophy of Mind entry on Philosophy of Cognitive Ethology by Colin Allen, with accompanying Annotated Bibliography.
- A combined Bibliography assembled by Profs. Donald Griffin, Colin Allen, and Marc Bekoff.
- The Animal Consciousness section from Prof. David Chalmers's bibliography on Contemporary Philosophy of Mind.