Higher-Order Theories of Consciousness

Higher-order theories of consciousness try to explain the distinctive properties of consciousness in terms of some relation obtaining between the conscious state in question and a higher-order representation of some sort (either a higher-order experience of that state, or a higher-order thought or belief about it). The most challenging properties to explain are those involved in phenomenal consciousness -- the sort of state which has a subjective dimension, which has ‘feel’, or which it is like something to undergo. These properties will form the focus of this article.

1. Kinds of Consciousness

One of the advances made in recent years has been in distinguishing between different questions concerning consciousness (see particularly: Rosenthal, 1986; Dretske, 1993; Block, 1995; Lycan, 1996). Not everyone agrees on quite which distinctions need to be drawn. But all are agreed that we should distinguish creature consciousness from mental-state consciousness. It is one thing to say of an individual person or organism that it is conscious (either in general or of something in particular); and it is quite another thing to say of one of the mental states of a creature that it is conscious.

It is also agreed that within creature-consciousness itself we should distinguish between intransitive and transitive variants. To say of an organism that it is conscious simpliciter (intransitive) is to say just that it is awake, as opposed to asleep or comatose. There do not appear to be any deep philosophical difficulties lurking here (or at least, they are not difficulties specific to the topic of consciousness, as opposed to mentality in general). But to say of an organism that it is conscious of such-and-such (transitive) is normally to say at least that it is perceiving such-and-such, or aware of such-and-such. So we say of the mouse that it is conscious of the cat outside its hole, in explaining why it does not come out; meaning that it perceives the cat's presence. To provide an account of transitive creature-consciousness would thus be to attempt a theory of perception.

There is a choice to be made concerning transitive creature-consciousness, and failure to notice it is a potential source of confusion. For we have to decide whether the perceptual state in virtue of which an organism may be said to be transitively-conscious of something must itself be a conscious one (state-conscious -- see below). If we say ‘Yes’ then we shall need to know more about the mouse than merely that it perceives the cat if we are to be assured that it is conscious of the cat -- we shall need to establish that its percept of the cat is itself conscious. If we say ‘No’, on the other hand, then the mouse's perception of the cat will be sufficient for the mouse to count as conscious of the cat; but we may have to say that although it is conscious of the cat, the mental state in virtue of which it is so conscious is not itself a conscious one! It may be best to by-pass any danger of confusion here by avoiding the language of transitive creature-consciousness altogether. Nothing of importance would be lost to us by doing this. We can say simply that organism O observes or perceives X; and we can then assert explicitly, if we wish, that its percept is or is not conscious.

Turning now to the notion of mental-state consciousness, the major distinction here is between phenomenal consciousness, on the one hand -- which is a property of states which it is like something to be in, which have a distinctive ‘feel’ (Nagel, 1974) -- and various functionally-definable forms of access consciousness, on the other (Block, 1995). Most theorists believe that there are mental states -- such as occurrent thoughts or judgments -- which are access-conscious (in whatever is the correct functionally-definable sense), but which are not phenomenally conscious. In contrast, there is considerable dispute as to whether mental states can be phenomenally-conscious without also being conscious in the functionally-definable sense -- and even more dispute about whether phenomenal consciousness can be reductively explained in functional and/or representational terms.

It seems plain that there is nothing deeply problematic about functionally-definable notions of mental-state consciousness, from a naturalistic perspective. For mental functions and mental representations are the staple fare of naturalistic accounts of the mind. But this leaves plenty of room for dispute about the form that the correct functional account should take. Some claim that for a state to be conscious in the relevant sense is for it to be poised to have an impact on the organism's decision-making processes (Kirk, 1994; Dretske, 1995; Tye, 1995), perhaps also with the additional requirement that those processes should be distinctively rational ones (Block, 1995). Others think that the relevant requirement for access-consciousness is that the state should be suitably related to higher-order representations -- experiences and/or beliefs -- of that very state (Armstrong, 1968, 1984; Rosenthal, 1986, 1993; Dennett, 1991; Carruthers, 1996, 2000; Lycan, 1996).

What is often thought to be naturalistically problematic, in contrast, is phenomenal consciousness (Nagel, 1974, 1984; Jackson, 1982, 1986; McGinn, 1991; Block, 1995; Chalmers, 1996). And what is really and deeply controversial is whether phenomenal consciousness can be explained in terms of some or other functionally-definable notion. Cognitive (or representational) theories maintain that it can. Higher-order cognitive theories maintain that phenomenal consciousness can be reductively explained in terms of representations (either experiences or beliefs) which are higher-order. It is such theories which concern us here.

2. The Motivation for a Higher-Order Approach

Higher-order theories, like cognitive/representational theories in general, assume that the right level at which to seek an explanation of phenomenal consciousness is a cognitive one, providing an explanation in terms of some combination of causal role and intentional content. All such theories claim that phenomenal consciousness consists in a certain kind of intentional or representational content (analog or ‘fine-grained’ in comparison with any concepts we may possess) figuring in a certain distinctive position in the causal architecture of the mind. They must therefore maintain that these latter sorts of mental property do not already implicate or presuppose phenomenal consciousness. In fact, all cognitive accounts are united in rejecting the thesis that the very properties of mind or mentality already presuppose phenomenal consciousness, as proposed by Searle (1992, 1997) for example.

The major divide amongst representational theories of phenomenal consciousness in general, is between accounts which are provided in purely first-order terms and those which implicate higher-order representations of one sort or another (see below). Higher-order theorists will allow that first-order accounts -- of the sort defended by Dretske (1995) and Tye (1995), for example -- can already make some progress with the problem of consciousness. According to first-order views, phenomenal consciousness consists in analog or fine-grained contents which are available to the first-order processes which guide thought and action. So a phenomenally-conscious percept of red, for example, consists in a state with the analog content red which is tokened in such a way as to feed into thoughts about red, or into actions which are in one way or another guided by redness. Now, the point to note in favor of such an account is that it can explain the natural temptation to think that phenomenal consciousness is in some sense ineffable, or indescribable. This will be because such states have fine-grained contents which can slip through the mesh of any conceptual net. We can always distinguish many more shades of red than we have concepts for, or could describe in language (other than indexically -- e.g., ‘That shade’).

The main motivation behind higher-order theories of consciousness, in contrast, derives from the belief that all (or at least most) mental-state types admit of both conscious and non-conscious varieties. Almost everyone now accepts, for example (post-Freud), that beliefs and desires can be activated non-consciously. (Think, here, of the way in which problems can apparently become resolved during sleep, or while one's attention is directed to other tasks. Notice, too, that appeals to non-conscious intentional states are now routine in cognitive science.) And then if we ask what makes the difference between a conscious and a non-conscious mental state, one natural answer is that conscious states are states we are aware of. And if awareness is thought to be a form of creature-consciousness (see section 1 above), then this will translate into the view that conscious states are states of which the subject is aware, or states of which the subject is creature-conscious. That is to say, these are states which are the objects of some sort of higher-order representation -- whether a higher-order perception or experience, or a higher-order belief or thought.

One crucial question, then, is whether perceptual states as well as beliefs admit of both conscious and non-conscious varieties. Can there be, for example, such a thing as a non-conscious visual perceptual state? Higher-order theorists are united in thinking that there can. Armstrong (1968) uses the example of absent-minded driving to make the point. Most of us at some time have had the rather unnerving experience of ‘coming to’ after having been driving on ‘automatic pilot’ while our attention was directed elsewhere -- perhaps having been day-dreaming or engaged in intense conversation with a passenger. We were apparently not consciously aware of any of the route we had recently taken, nor of any of the obstacles we avoided on the way. Yet we must surely have been seeing, or we would have crashed the car. Others have used the example of blindsight (Carruthers, 1989, 1996). This is a condition in which subjects have had a portion of their primary visual cortex destroyed, and apparently become blind in a region of their visual field as a result. But it has now been known for some time that if subjects are asked to guess at the properties of their ‘blind’ field (e.g. whether it contains a horizontal or vertical grating, or whether it contains an ‘X’ or an ‘O’), they prove remarkably accurate. Subjects can also reach out and grasp objects in their ‘blind’ field with something like 80% or more of normal accuracy, and can catch a ball thrown from their ‘blind’ side, all without conscious awareness. (See Weiskrantz, 1986, 1997, for details and discussion.)

More recently, a powerful case for the existence of non-conscious visual experience has been generated by the two-systems theory of vision proposed and defended by Milner and Goodale (1995). They review a wide variety of kinds of neurological and neuro-psychological evidence for the substantial independence of two distinct visual systems, instantiated in the temporal and parietal lobes respectively. They conclude that the parietal lobes provide a set of specialized semi-independent modules for the on-line visual control of action; whereas the temporal lobes are primarily concerned with more off-line functions such as visual learning and object recognition. And only the experiences generated by the temporal-lobe system are phenomenally conscious, on their account.

(Note that this is not the familiar distinction between what and where visual systems, but is rather a successor to it. For the temporal-lobe system is supposed to have access both to property information and to spatial information. Instead, it is a distinction between a combined what-where system located in the temporal lobes and a how-to or action-guiding system located in the parietal lobes.)

To get the flavor of Milner and Goodale's hypothesis, consider just one strand from the wealth of evidence they provide. This is a neurological syndrome called visual form agnosia, which results from damage localized to both temporal lobes, leaving primary visual cortex and the parietal lobes intact. (Visual form agnosia is normally caused by carbon monoxide poisoning, for reasons which are little understood.) Such patients cannot recognize objects or shapes, and may be capable of little conscious visual experience; but their sensorimotor abilities remain largely intact.

One particular patient -- D.F. -- has now been examined in considerable detail. While D.F. is severely agnosic, she is not completely lacking in conscious visual experience. Her capacities to perceive colors and textures are almost completely preserved. (Why just these sub-modules in her temporal cortex should have been spared is not known.) As a result, she can sometimes guess the identity of a presented object -- recognizing a banana, say, from its yellow color and the distinctive texture of its surface. But she is unable to perceive the shape of the banana (whether straight or curved, say); nor its orientation (upright or horizontal; pointing towards her or across). Yet many of her sensorimotor abilities are close to normal -- she would be able to reach out and grasp the banana, orienting her hand and wrist appropriately for its position and orientation, and using a normal and appropriate finger grip. Under experimental conditions it turns out that although D.F. is at chance in identifying the orientation of a broad line or letter-box, she is almost normal when posting a letter through a similarly-shaped slot oriented at random angles. In the same way, although she is at chance when trying to discriminate between rectangular blocks of very different sizes, her reaching and grasping behaviors when asked to pick up such a block are virtually indistinguishable from those of normal controls. It is very hard to make sense of this data without supposing that the sensorimotor perceptual system is functionally and anatomically distinct from the object-recognition/conscious system.

There is a powerful case, then, for thinking that there are non-conscious as well as conscious visual percepts. While the perceptions which ground your thoughts when you plan in relation to the perceived environment (‘I'll pick up that one’) may be conscious, and while you will continue to enjoy conscious perceptions of what you are doing while you act, the perceptual states which actually guide the details of your movements when you reach out and grab the object will not be conscious ones, if Milner and Goodale (1995) are correct.

But what implications does this have for phenomenal consciousness? Must these non-conscious percepts also be lacking in phenomenal properties? Most people think so. While it may be possible to get oneself to believe that the perceptions of the absent-minded car driver can remain phenomenally conscious (perhaps lying outside of the focus of attention, or being instantly forgotten), it is very hard to believe that either blindsight percepts or D.F.'s sensorimotor perceptual states might be phenomenally conscious ones. For these perceptions are ones to which the subjects of those states are blind, and of which they cannot be aware. And the question, then, is what makes the relevant difference? What is it about a conscious perception which renders it phenomenal, which a blindsight perceptual state would correspondingly lack? Higher-order theorists are united in thinking that the relevant difference consists in the presence of something higher-order in the first case which is absent in the second. The core intuition is that a phenomenally conscious state will be a state of which the subject is aware.

What options does a first-order theorist have to resist this conclusion? One is to deny the data (as does Dretske, 1995). It can be said that the non-conscious states in question lack the kind of fineness of grain and richness of content necessary to count as genuinely perceptual states. On this view, the contrast discussed above isn't really a difference between conscious and non-conscious perceptions, but rather between conscious perceptions, on the one hand, and non-conscious belief-like states, on the other. Another option is to accept the distinction between conscious and non-conscious perceptions, and then to explain that distinction in first-order terms. It might be said, for example, that conscious perceptions are those which are available to belief and thought, whereas non-conscious ones are those which are available to guide movement (Kirk, 1994). A final option is to bite the bullet, and insist that blindsight and sensorimotor perceptual states are indeed phenomenally conscious while not being access-conscious. (See Block, 1995; Tye, 1995; and Nelkin, 1996; all of whom defend versions of this view.) On this account, blindsight percepts are phenomenally conscious states to which the subjects of those states are blind. Higher-order theorists will argue, of course, that none of these alternatives is acceptable (see, e.g., Carruthers, 2000).

In general, then, higher-order theories of phenomenal consciousness claim the following:

Higher Order Theory (In General):
A phenomenally conscious mental state is a mental state (of a certain sort -- see below) which either is, or is disposed to be, the object of a higher-order representation of a certain sort (see below).

Higher-order theorists will allow, of course, that mental states can be targets of higher-order representation without being phenomenally conscious. For example, a belief can give rise to a higher-order belief without thereby being phenomenally conscious. What is distinctive of phenomenal consciousness is that the states in question should be perceptual or quasi-perceptual ones (e.g. visual images as well as visual percepts). Moreover, most cognitive/representational theorists will maintain that these states must possess a certain kind of analog (fine-grained) or non-conceptual intentional content. What makes perceptual states, mental images, bodily sensations, and emotional feelings phenomenally conscious, on this approach, is that they are conscious states with analog or non-conceptual contents. So putting these points together, we get the view that phenomenally conscious states are those states which possess fine-grained intentional contents of which the subject is aware, being the target or potential target of some sort of higher-order representation.

There are then two main dimensions along which higher-order theorists disagree amongst themselves. One concerns whether the higher-order states in question are belief-like or perception-like. Those taking the former option are higher-order thought theorists, and those taking the latter are higher-order experience or ‘inner-sense’ theorists. The other disagreement is internal to higher-order thought approaches, and concerns whether the relevant relation between the first-order state and the higher-order thought is one of availability or not. That is, the question is whether a state is conscious by virtue of being disposed to give rise to a higher-order thought, or rather by virtue of being the actual target of such a thought. These are the options which will now concern us.

3. Inner-Sense Theory

According to this view, humans not only have first-order non-conceptual and/or analog perceptions of states of their environments and bodies, they also have second-order non-conceptual and/or analog perceptions of their first-order states of perception. Humans (and perhaps other animals) not only have sense-organs which scan the environment/body to produce fine-grained representations which can then serve to ground thoughts and action-planning, but they also have inner senses, charged with scanning the outputs of the first-order senses (i.e. perceptual experiences) to produce equally fine-grained, but higher-order, representations of those outputs (i.e. to produce higher-order experiences). A version of this view was first proposed by the British Empiricist philosopher John Locke (1690). In our own time it has been defended especially by Armstrong (1968, 1984) and by Lycan (1996).

(A terminological point: this view is sometimes called a ‘higher-order experience (HOE) theory’ of phenomenal consciousness; but the term ‘inner-sense theory’ is more accurate. For as we shall see in section 5, there are versions of a higher-order thought (HOT) approach which also implicate higher-order perceptions, but without needing to appeal to any organs of inner sense.)

(Another terminological point: ‘inner-sense theory’ should more strictly be called ‘higher-order-sense theory’, since we of course have senses which are physically ‘inner’, such as pain-perception and internal touch-perception, which are not intended to fall under its scope. For these are first-order senses on a par with vision and hearing, differing only in that their purpose is to detect properties of the body rather than of the external world. According to the sort of higher-order theory under discussion in this section, these senses, too, will need to have their outputs scanned to produce higher-order analog contents in order for them to become phenomenally conscious. In what follows, however, the term ‘inner sense’ will be used to mean, more strictly, ‘higher-order sense’, since this terminology is now pretty firmly established.)

We therefore have the following proposal to consider:

Inner-Sense Theory:
A phenomenally conscious mental state is a state with analog/non-conceptual intentional content, which is in turn the target of a higher-order analog/non-conceptual intentional state, via the operations of a faculty of ‘inner sense’.

On this account, the difference between a phenomenally conscious percept of red and the sort of non-conscious percepts of red which guide the guesses of a blindsighter and the activity of the sensorimotor system, is as follows. The former is scanned by our inner senses to produce a higher-order analog state with the content experience of red or seems red, whereas the latter states are not -- they remain merely first-order states with the analog content red; and in so remaining, they lack any dimension of seeming or subjectivity. According to inner-sense theory, it is our higher-order experiential contents produced by the operations of our inner senses which make some mental states with analog contents, but not others, available to their subjects. And it is these same higher-order contents which constitute the subjective dimension or ‘feel’ of the former set of states, thus rendering them phenomenally conscious.

One of the main advantages of inner-sense theory is that it can explain how it is possible for us to acquire purely recognitional concepts of experience. For if we possess higher-order perceptual contents, then it should be possible for us to learn to recognize the occurrence of our own perceptual states immediately -- or ‘straight off’ -- grounded in those higher-order analog contents. And this should be possible without those recognitional concepts thereby having any conceptual connections with our beliefs about the nature or content of the states recognized, or with any of our surrounding mental concepts. This is then how inner-sense theory will claim to explain the familiar philosophical thought-experiments concerning one's own experiences, which are supposed to cause such problems for physicalist/naturalistic accounts of the mind (Kripke, 1972; Chalmers, 1996).

For example, I can think, ‘This type of experience [as of red] might have occurred in me, or might normally occur in others, in the absence of any of its actual causes and effects.’ So on any view of intentional content which sees content as tied to normal causes (i.e. to information carried) and/or to normal effects (i.e. to teleological or inferential role), this type of experience might occur without representing red. In the same sort of way, I shall be able to think, ‘This type of experience [pain] might have occurred in me, or might occur in others, in the absence of any of the usual causes and effects of pains. There could be someone in whom these experiences occur but who isn't bothered by them, and where those experiences are never caused by tissue damage or other forms of bodily insult. And conversely, there could be someone who behaves and acts just as I do when in pain, and in response to the same physical causes, but who is never subject to this type of experience.’ If we possess purely recognitional concepts of experience, grounded in higher-order percepts of those experiences, then the thinkability of such thoughts is both readily explicable, and apparently unthreatening to a naturalistic approach to the mind.

Inner-sense theory does face a number of difficulties, however. One objection is as follows (see Dretske, 1995). If inner-sense theory were true, then how is it that there is no phenomenology distinctive of inner sense, in the way that there is a phenomenology associated with each outer sense? Since each of the outer senses gives rise to a distinctive set of phenomenological properties, you might expect that if there were such a thing as inner sense, then there would also be a phenomenology distinctive of its operation. But there doesn't appear to be any.

This point turns on the so-called ‘transparency’ of our perceptual experience (Harman, 1990). Concentrate as hard as you like on your ‘outer’ (first-order) experiences -- you will not find any further phenomenological properties arising out of the attention you pay to them, beyond those already belonging to the contents of the experiences themselves. Paying close attention to your experience of the color of the red rose, for example, just produces attention to the redness -- a property of the rose. Put like this, however, the objection just seems to beg the question in favor of first-order theories of phenomenal consciousness. It assumes that first-order -- ‘outer’ -- perceptions already have a phenomenology independently of their targeting by inner sense. But this is just what an inner-sense theorist will deny. And then in order to explain the absence of any kind of higher-order phenomenology, an inner-sense theorist only needs to maintain that our higher-order experiences are never themselves targeted by an inner-sense-organ which might produce third-order analog representations of them in turn.

Another objection to inner-sense theory is as follows (see Sturgeon, 2000). If there really were an organ of inner sense, then it ought to be possible for it to malfunction, just as our first-order senses sometimes do. And in that case, it ought to be possible for someone to have a first-order percept with the analog content red causing a higher-order percept with the analog content seems-orange. Someone in this situation would be disposed to judge, ‘It's red’, immediately and non-inferentially (i.e. not influenced by beliefs about the object's normal color or their own physical state). But at the same time they would be disposed to judge, ‘It seems orange’. Not only does this sort of thing never apparently occur, but the idea that it might do so conflicts with a powerful intuition. This is that our awareness of our own experiences is immediate, in such a way that to believe that you are undergoing an experience of a certain sort is to be undergoing an experience of that sort. But if inner-sense theory is correct, then it ought to be possible for someone to believe that they are in a state of seeming-orange when they are actually in a state of seeming-red.

A different sort of objection to inner-sense theory is developed by Carruthers (2000). It starts from the fact that the internal monitors postulated by such theories would need to have considerable computational complexity in order to generate the requisite higher-order experiences. In order to perceive an experience, the organism would need to have mechanisms to generate a set of internal representations with an analog or non-conceptual content representing the content of that experience, in all its richness and fine-grained detail. And notice that any inner scanner would have to be a physical device (just as the visual system itself is) which depends upon the detection of those physical events in the brain which are the outputs of the various sensory systems (just as the visual system is a physical device which depends upon detection of physical properties of surfaces via the reflection of light). For it is hard to see how any inner scanner could detect the presence of an experience qua experience. Rather, it would have to detect the physical realizations of experiences in the brain, and construct the requisite higher-order representation of the experiences which those physical events realize, on the basis of that physical-information input. This makes it seem inevitable that the scanning device which supposedly generates higher-order experiences of our first-order visual experiences would have to be almost as sophisticated and complex as the visual system itself.

Now the problem which arises here is this. Given this complexity in the operations of our organs of inner sense, there had better be some plausible story to tell about the evolutionary pressures which led to their construction. For natural selection is the only theory which can explain the existence of organized functional complexity in nature (Pinker, 1994, 1997). But there would seem to be no such stories on the market. The most plausible suggestion is that inner-sense might have evolved to subserve our capacity to think about the mental states of conspecifics, thus enabling us to predict their actions and manipulate their responses. (This is the so-called ‘Machiavellian hypothesis’ to explain the evolution of intelligence in the great-ape lineage. See Byrne and Whiten, 1988, 1998.) But this suggestion presupposes that the organism must already have some capacity for higher-order thought, since it is such thoughts which inner sense is supposed to subserve. And yet as we shall see shortly (in section 5), some higher-order thought theories can claim all of the advantages of inner-sense theory as an explanation of phenomenal consciousness, but without the need to postulate any ‘inner scanners’. At any rate, the ‘computational complexity objection’ to inner-sense theories remains as a challenge to be answered.

4. Higher-Order Thought Theory (1): Non-dispositionalist

Non-dispositionalist higher-order thought (HOT) theory is a proposal about the nature of state-consciousness in general, of which phenomenal consciousness is but one species. Its main proponent has been Rosenthal (1986, 1993, forthcoming). The proposal is this: a conscious mental state M, of mine, is a state which is actually causing an activated belief (generally a non-conscious one) that I have M, and causing it non-inferentially. (The qualification concerning non-inferential causation is included to avoid one having to say that my non-conscious motives become conscious when I learn of them under psycho-analysis, or that my jealousy is conscious when I learn of it by interpreting my own behavior.) An account of phenomenal consciousness can then be generated by stipulating that the mental state M should have an analog content in order to count as an experience, and that when M is an experience (or a mental image, bodily sensation, or emotional feeling), it will be phenomenally conscious when (and only when) suitably targeted.

We therefore have the following proposal to consider:

Non-Dispositionalist Higher-Order Thought Theory:
A phenomenally conscious mental state is a state with analog/non-conceptual intentional content, which is the object of a higher-order thought, and which causes that thought non-inferentially.

This account avoids some of the difficulties inherent in inner-sense theory, while retaining the latter's ability to explain the distinction between conscious and non-conscious perceptions. (Conscious perceptions will be analog states which are targeted by a higher-order thought, whereas perceptions such as those involved in blindsight will be non-conscious by virtue of not being so targeted.) In particular, it is easy to see a function for higher-order thoughts, in general, and to tell a story about their likely evolution. A capacity to entertain higher-order thoughts about experiences would enable a creature to negotiate the is/seems distinction, perhaps learning not to trust its own experiences in certain circumstances, and also to induce appearances in others, by deceit. And a capacity to entertain higher-order thoughts about thoughts (beliefs and desires) would enable a creature to reflect on, and to alter, its own beliefs and patterns of reasoning, as well as to predict and manipulate the thoughts and behaviors of others. Indeed, it can plausibly be claimed that it is our capacity to target higher-order thoughts on our own mental states which underlies our status as rational agents (Burge, 1996; Sperber, 1996).

One well-known objection to this sort of higher-order thought theory is due to Dretske (1993). We are asked to imagine a case in which we carefully examine two line-drawings, say (or in Dretske's example, two patterns of differently-sized spots). These drawings are similar in almost all respects, but differ in just one aspect -- in Dretske's example, one of the pictures contains a black spot which the other lacks. It is surely plausible that, in the course of examining these two pictures, one will have enjoyed a conscious visual experience of the respect in which they differ -- e.g. of the offending spot. But, as is familiar, one can be in this position while not knowing that the two pictures are different, or in what way they are different. In which case, since one can have a conscious experience (e.g. of the spot) without being aware that one is having it, consciousness cannot require higher-order awareness.

Replies to this objection have been made by Seager (1994) and by Byrne (1997). They point out that it is one thing to have a conscious experience of the aspect which differentiates the two pictures, and quite another to consciously experience that the two pictures are differentiated by that aspect. That is, seeing the extra spot in one picture needn't mean seeing that this is the difference between the two pictures. So while scanning the two pictures one will enjoy conscious experience of the extra spot. A higher-order thought theorist will say that this means undergoing a percept with the content spot here which forms the target of a higher-order belief that one is undergoing a perception with that content. But this can perfectly well be true without one undergoing a percept with the content spot here in this picture but absent here in that one. And it can also be true without one forming any higher-order belief to the effect that one is undergoing a perception with the content spot here when looking at a given picture but not when looking at the other. In which case the purported counter-example isn't really a counter-example.

A different sort of problem with the non-dispositionalist version of higher-order thought theory relates to the huge number of beliefs which would have to be caused by any given phenomenally conscious experience. (This is the analogue of the ‘computational complexity’ objection to inner-sense theory, sketched in section 3 above.) Consider just how rich and detailed a conscious experience can be. It would seem that there can be an immense amount of which we can be consciously aware at any one time. Imagine looking down on a city from a window high up in a tower-block, for example. In such a case you can have phenomenally conscious percepts of a complex distribution of trees, roads, and buildings; colors on the ground and in the sky above; moving cars and pedestrians; and so on. And you can -- it seems -- be conscious of all of this simultaneously. According to non-dispositionalist higher-order thought theory, then, you would need to have a distinct activated higher-order belief for each distinct aspect of your experience -- either that, or just a few such beliefs with immensely complex contents. Either way, the objection is the same. For it seems implausible that all of this higher-order activity should be taking place (albeit non-consciously) every time someone is the subject of a complex conscious experience. For what would be the point? And think of the amount of cognitive space that these beliefs would take up!

This objection to non-dispositionalist forms of higher-order thought theory is considered at some length in Carruthers (2000), where a variety of possible replies are discussed and evaluated. Perhaps the most plausible and challenging such reply would be to deny the main premise lying behind the objection, concerning the rich and integrated nature of phenomenally conscious experience. Rather, the theory could align itself with Dennett's (1991) conception of consciousness as highly fragmented, with multiple streams of perceptual content being processed in parallel in different regions of the brain, and with no stage at which all of these contents are routinely integrated into a phenomenally conscious perceptual manifold. Rather, contents become conscious on a piecemeal basis, as a result of internal or external probing which gives rise to a higher-order belief about the content in question. (Dennett himself sees this process as essentially linguistic, with both probes and higher-order thoughts being formulated in natural language. This variant of the view, although important in its own right, is not relevant to our present concerns.) This serves to convey to us the mere illusion of riches, because wherever we direct our attention, there we find a conscious perceptual content.

It is doubtful whether this sort of ‘fragmentist’ account can really explain the phenomenology of our experience, however. For it still faces the objection that the objects of attention can be immensely rich and varied at any given moment, hence requiring there to be an equally rich and varied repertoire of higher-order thoughts tokened at the same time. Think of immersing yourself in the colors and textures of a Van Gogh painting, for example, or the scene as you look out at your garden -- it would seem that one can be phenomenally conscious of a highly complex set of properties, which one could not even begin to describe or conceptualize in any detail. However, since the issues here are large and controversial, it cannot yet be concluded that non-dispositionalist forms of higher-order thought theory have been decisively refuted.

5. Higher-Order Thought Theory (2): Dispositionalist

According to all forms of dispositionalist higher-order thought theory, the conscious status of an experience consists in its availability to higher-order thought (Dennett, 1978; Carruthers, 1996, 2000). As with the non-dispositionalist version of the theory, in its simplest form we have here a quite general proposal concerning the conscious status of any type of occurrent mental state, which becomes an account of phenomenal consciousness when the states in question are experiences (or images, emotions, etc.) with analog content. The proposal is this: a conscious mental event M, of mine, is one which is disposed to cause an activated belief (generally a non-conscious one) that I have M, and to cause it non-inferentially.

The proposal before us is therefore as follows:

Dispositionalist Higher-Order Thought Theory:
A phenomenally conscious mental state is a state with analog/non-conceptual intentional content, which is held in a special-purpose short-term memory store in such a way as to be available to cause (non-inferentially) higher-order thoughts about any of the contents of that store.

In contrast with the non-dispositionalist form of theory, the higher-order thoughts which render a percept conscious are not necessarily actual, but potential, on this account. So the objection that an unbelievable amount of cognitive space would have to be taken up with every conscious experience now disappears. (There need not actually be any higher-order thought occurring, in order for a given perceptual state to count as phenomenally conscious, on this view.) So we can retain our belief in the rich and integrated nature of phenomenally conscious experience -- we just have to suppose that all of the contents in question are simultaneously available to higher-order thought. Nor will there be any problem in explaining why our faculty of higher-order thought should have evolved, nor why it should have access to perceptual contents in the first place -- this can be the standard sort of story in terms of Machiavellian intelligence.

It might well be wondered how their mere availability to higher-order thoughts could confer on our perceptual states the positive properties distinctive of phenomenal consciousness -- that is, of states having a subjective dimension, or a distinctive subjective feel. The answer may lie in the theory of content. Suppose that one agrees with Millikan (1984) that the representational content of a state depends, in part, upon the powers of the systems which consume that state. That is, suppose one thinks that what a state represents will depend, in part, on the kinds of inferences which the cognitive system is prepared to make in the presence of that state, or on the kinds of behavioral control which it can exert. In which case the presence of first-order perceptual representations to a consumer-system which can deploy a ‘theory of mind’, and which is capable of recognitional applications of theoretically-embedded concepts of experience, may be sufficient to render those representations, at the same time, higher-order ones. This would be what confers on our phenomenally conscious experiences the dimension of subjectivity. Each experience would at the same time (while also representing some state of the world, or of our own bodies) be a representation that we are undergoing just such an experience, by virtue of the powers of the ‘theory of mind’ consumer-system. Each percept of green, for example, would at one and the same time be an analog representation of green and an analog representation of seems green or experience of green. In fact, the attachment of a ‘theory of mind’ faculty to our perceptual systems may completely transform the contents of the latter's outputs.

(Consumer semantics embraces not only a number of different varieties of teleosemantics, but also various forms of inferential role semantics. For the former, see Millikan, 1984, 1986, 1989; and Papineau, 1987, 1993. For the latter, see Loar, 1981, 1982; McGinn, 1982, 1989; Block, 1986; and Peacocke, 1986, 1992.)

This account might seem to achieve all of the benefits of inner-sense theory, but without the associated costs. (Some potential drawbacks will be noted in a moment.) In particular, we can endorse the claim that phenomenal consciousness consists in a set of higher-order perceptions. This enables us to explain, not only the difference between conscious and non-conscious perception, but also how analog states come to acquire a subjective dimension or ‘feel’. And we can also explain how it can be possible for us to acquire some purely recognitional concepts of experience (thus explaining the standard philosophical thought-experiments). But we don't have to appeal to the existence of any ‘inner scanners’ or organs of inner sense (together with their associated problems) in order to do this. Moreover, it should also be obvious why there can be no question of our higher-order contents getting out of line with their first-order counterparts, in such a way that one might be disposed to make recognitional judgments of red and seems orange at the same time. This is because the content of the higher-order experience is parasitic on the content of the first-order one, being formed from it by virtue of the latter's availability to a ‘theory of mind’ system.

On the down-side, the account isn't neutral on questions of semantic theory. On the contrary, it requires us to reject any form of pure input-semantics, in favor of some sort of consumer-semantics. We cannot then accept that intentional content reduces to informational content, nor that it can be explicated purely in terms of causal co-variance relations to the environment. So anyone who finds such views attractive will think that the account is a hard one to swallow. (For discussion of various different versions of input-semantics, see Dretske, 1981, 1986; Fodor, 1987, 1990; and Loewer and Rey, 1991.)

What will no doubt be seen by most people as the biggest difficulty with dispositionalist higher-order thought theory, however, is that it may have to deny phenomenal consciousness to most species of non-human animal. This objection will be discussed, among others, in the section following, since it can arguably also be raised against any form of higher-order theory.

6. Objections to a Higher-Order Approach

There have been a whole host of objections raised against higher-order theories of phenomenal consciousness. (See, e.g., Aquila, 1990; Jamieson and Bekoff, 1992; Dretske, 1993, 1995; Goldman, 1993; Güzeldere, 1995; Tye, 1995; Chalmers, 1996; Byrne, 1997; Siewert, 1998.) Unfortunately, many of these objections, although perhaps intended as objections to higher-order theories as such, are often framed in terms of one or another particular version of such a theory. One general moral to be taken away from the present discussion should then be this: the different versions of a higher-order theory of phenomenal consciousness need to be kept distinct from one another, and critics should take care to state which version of the approach is under attack, or to frame objections which turn merely on the higher-order character of all of these approaches.

One generic objection is that higher-order theories, when combined with plausible empirical claims about the representational powers of non-human animals, will conflict with our common-sense intuition that such animals enjoy phenomenally conscious experience (Jamieson and Bekoff, 1992; Dretske, 1995; Tye, 1995). This objection can be pressed most forcefully against higher-order thought theories, of either variety; but it is also faced by inner-sense theory (depending on what account can be offered of the evolutionary function of organs of inner sense). Since there is considerable dispute as to whether even chimpanzees have the kind of sophisticated ‘theory of mind’ which would enable them to entertain thoughts about experiential states as such (Byrne and Whiten, 1988, 1998; Povinelli, 2000), it seems most implausible that many other species of mammal (let alone reptiles, birds and fish) would qualify as phenomenally conscious, on these accounts. Yet the intuition that such creatures enjoy phenomenally conscious experiences is a powerful and deep-seated one, for many people. (Witness Nagel's classic 1974 paper, which argues that there must be something which it is like to be a bat.)

The grounds for this common-sense intuition can be challenged, however. (How, after all, are we supposed to know whether it is like something to be a bat?) And that intuition can perhaps be explained away as a mere by-product of imaginative identification with the animal. (Since our images of their experiences are phenomenally conscious, we may naturally assume that the experiences imaged are similarly conscious. See Carruthers, 1999, 2000.) But there is no doubt that one major source of resistance to higher-order theories will lie here, for many people.

Another generic objection is that higher-order approaches cannot really explain the distinctive properties of phenomenal consciousness (Chalmers, 1996; Siewert, 1998). Whereas the argument from animals is that higher-order representations aren't necessary for phenomenal consciousness, the argument here is that such representations aren't sufficient. It is claimed, for example, that we can easily conceive of creatures who enjoy the postulated kinds of higher-order representation, related in the right sort of way to their first-order perceptual states, but where those creatures are wholly lacking in phenomenal consciousness.

In response to this objection, higher-order theorists will join forces with first-order theorists and others in claiming that these objectors pitch the standards for explaining phenomenal consciousness too high (Block and Stalnaker, 1999; Tye, 1999; Carruthers, 2000; Lycan, 2001). We will insist that a reductive explanation of something -- and of phenomenal consciousness in particular -- doesn't have to be such that we cannot conceive of the explanandum (that which is being explained) in the absence of the explanans (that which does the explaining). Rather, we just need to have good reason to think that the explained properties are constituted by the explaining ones, in such a way that nothing else needed to be added to the world once the explaining properties were present, in order for the world to contain the target phenomenon. But this is hotly contested territory. And it is on this ground that the battle for phenomenal consciousness may ultimately be won or lost.


Related Entries

consciousness: and intentionality | consciousness: animal | consciousness: representational theories of

Copyright © 2001
Peter Carruthers
peter_carruthers@umail.umd.edu
