Robert Van Gulick

What Would Count as Explaining Consciousness?

Consciousness has again become a hot topic in the philosophy of mind, as shown by the recent spate of books about it (Lycan 1987; McGinn 1991; Dennett 1991; Flanagan 1992; Searle 1992). Various theories have been put forward, each of which alleges to fully or at least partially explain what consciousness is and how it fits into our overall picture of the world (Lycan 1987; Rosenthal 1986; Dennett 1991; Searle 1992). In contrast, other philosophers have argued that consciousness is systematically resistant to explanation in one or another important respect (Nagel 1974; Levine 1983; McGinn 1991). Although the dispute between these two competing groups - the optimists and the pessimists - has involved strongly expressed opinions, it has also suffered from a fair amount of confusion regarding just what is at issue. Before we can decide how good our prospects are for explaining consciousness, we need to be clearer about what would count as doing so. In this paper I propose to do just that; my aim is not to resolve the dispute(s), but just to untangle and clarify the various distinct issues which sometimes get run together in the heat of controversy.
The question of whether we can explain or understand consciousness is systematically ambiguous in three main respects, and to make it more precise we need to be specific about each of the relevant parameters.

A. What is the explanandum, i.e. what features or aspects of consciousness do we wish to explain or understand?

B. What can go in the explanans? That is, in what terms or within what conceptual framework (e.g. physical, functional, naturalistic) must we construct our explanation?

C. What relation must hold between the explanandum and the explanans to count as giving a satisfactory explanation?
Given these parameters we find that the question of understanding consciousness is not just one question but a large family of interrelated questions, which may have quite different prospects for successful resolution. In what follows I will consider some of the main variants, though I will not try to be fully comprehensive nor cover every possible interpretation of the question; there are just too many readings one might give it. Nonetheless, if we can clearly state the leading variants, we will have made a lot of progress in clarifying the dispute.

A. The Explananda

Let us begin with the first parameter: what features or properties of consciousness are in need of explanation?

A1 At a minimum we need to explain the difference between conscious mental states and nonconscious or unconscious mental states or processes. In so far as it is possible to have unconscious beliefs, desires, and perceptions or to engage in the drawing of unconscious inferences, we need to understand how such states or processes differ from others that are of the same type but conscious. Moreover, there may be some mental states that are of types that can never become conscious, e.g. the knowledge or 'cognizing' states postulated by a Chomskyan theory of linguistic competence. Such states are alleged to be genuinely mental (pace Searle 1992) though they are in principle inaccessible to consciousness. A common and appealing move is to define the notion of a conscious mental state as a mental state of which we are conscious. For example, David Rosenthal (1986) analyses a conscious mental state (e.g. a conscious desire) as a first-order mental state which is accompanied by a second-order thought to the effect that one is in the first-order state (i.e. I desire x and have the simultaneous thought that I desire x).
There are problems with the proposed analysis, but for present purposes it should suffice to illustrate the relevant explanandum: the need to distinguish between conscious and unconscious mental states.

A2 We must also explain the distinction between conscious and nonconscious or unconscious creatures. You are clearly conscious while you are reading this page, and I was surely conscious when I wrote it, but most of us spend a good part of each night being unconscious, and some unfortunate individuals fall into comas from which they never reemerge as conscious. Some nonhuman animals such as mammals and birds strike us as clearly conscious creatures, but about others, such as snails or honey bees, we feel much less certain. But just what are we asking when we ask whether or not fish are ever conscious? Surely not whether they use their sensory organs to perceive the world around them and respond appropriately. There is no doubt that they do that. Perhaps one could explicate creature consciousness in terms of conscious states as follows: a creature is conscious at a particular time only if it has at least some conscious mental states at that time, and a type of creature counts among the conscious types only if it is conscious at least some of the time. But this will be problematic if we accept a higher-order thought account of state consciousness: counting fish as nonconscious unless they can have thoughts about their own mental states seems to set too high a standard for qualifying as a conscious creature. Nonetheless, the need to find some way to draw the distinction provides another explanandum.

In trying to isolate the deeply problematic nature of consciousness, philosophers often refer to the subjective, qualitative, or phenomenal aspects of conscious experience.
All three terms are directed at those features that in Thomas Nagel's phrase make it the case that 'there's something that it's like to be' a conscious thing, i.e. something that it's like 'from the inside' or 'for the creature itself' (Nagel 1974). Although the three terms are sometimes used interchangeably, they in fact refer to distinct though interrelated dimensions of consciousness. Each of the terms is itself ambiguous and open to multiple interpretations, but in each case we can legitimately isolate a central or core use that picks out a specific feature of consciousness in need of explanation, thus providing us with three more explananda.

A3 Qualia and the qualitative nature of conscious experience are often invoked by sceptics about the explanatory value of one or another theory of consciousness, which they charge with failing to provide an adequate account of the raw feels of experience: the redness of experienced red or the experienced taste of a ripe mango. Pains do not merely 'signal' or 'represent' the occurrence of bodily harm: they hurt, and any theory of consciousness that fails to explain such felt aspects of our mental life will be incomplete. The alleged explanatory lapse may concern only specific qualia, or the general issue of how there can be any such properties at all. 'Inverted qualia' arguments purport to show the former and 'absent qualia' arguments the latter (Block 1978). Thus understanding consciousness requires us to understand its qualitative aspect. If there really are qualia, we will have to understand what sorts of things or properties they are, how something can come to have them, and what makes it the case that a specific mental state involves the specific quale that it does. Even if, following some recent philosophers, we conclude that there are no such things as qualia
(Dennett 1988; 1991), we will still have to explain why it seems that there are, as well as explaining what - if anything - the qualitative aspect of experience does involve and how it fits within our overall understanding of consciousness.

A4 The term 'phenomenal' is sometimes used interchangeably with 'qualitative' in talking about consciousness; 'the problem of phenomenal properties' becomes just another name for the difficulty we encounter in trying to explain raw feels, the hurtfulness of pain or the experienced fragrance of a gardenia. This is a legitimate use of the term, but I prefer to reserve 'phenomenal' for a more comprehensive range of features. Current philosophical debate has focused heavily on raw feels, but they are just one aspect of our experienced inner life and thus only part of what we must deal with if we aim to describe the phenomenal structure of experience. In this sense the use of 'phenomenal' accords better with its historical use by Kant and later by the phenomenologists. The order and connectedness that we find within experience, its conceptual organization, its temporal structure, its emotive tones and moods, and the fact that our experience is that of a (more or less) unified self set over against an objective world are just a few of the features other than raw feels that properly fall within the bounds of the phenomenal. All will need to be addressed if we take the phenomenal aspect as our explanandum.

A5 The third member of our triad, the term 'subjectivity', also varies in use with regard to consciousness. Some philosophers use it to mean just that experience has a first-person aspect over and above whatever objective or third-person properties it may have. In that sense, it involves little or nothing that is not already captured by 'qualitative' and 'phenomenal'. However, there is a distinctively epistemic use of the term that merits separate and special attention.
It is in this sense that some (Nagel 1974; Jackson 1982; McGinn 1991) have argued that facts about the experiential aspect of consciousness can only be known or even understood by agents who are themselves capable of having the relevant sorts of experiences - a view with a long empiricist pedigree. Such facts, Nagel argues, are bound up with a particular (type of) point of view, where (types of) points of view are individuated on the basis of the sorts of qualitative or phenomenal properties that can be experienced by the relevant sort of conscious agent. It is in this empathetic sense that humans supposedly cannot fully understand what it would be like to be a bat, because we are incapable of having echolocatory experiences like those had by bats. Whether or to what extent this is true is controversial, but 'subjective' used in this way differs enough from both 'qualitative' and 'phenomenal' to provide a distinct explanandum.

A6 A final feature of consciousness that we need to address is the extent to which the intentional or representational content of our conscious mental states is immediately available or accessible to us, a feature that I have elsewhere referred to as the semantic transparency of consciousness (Van Gulick 1988a; 1988b). When we have a conscious thought or experience we typically know on the whole what that thought or experience is about, what state of affairs it represents. I believe it is in large part this feature of our conscious mental life that leads John Searle to distinguish between what he calls the intrinsic intentionality of conscious mental states and the merely metaphoric intentionality he attributes to computers. He treats conscious mental states as intrinsically intentional because they have meaning or content for the person or creature whose states they are.
In contrast, the computer's states have content only from the perspective of some external interpreter of its actions; their meaning is not to any degree transparent to the computer itself - or so at least Searle claims. Though I am not inclined to draw the distinction as Searle does, I do believe that his notion of intrinsic intentionality and what I call semantic transparency are two attempts to get at a real and important property of consciousness that must be explained by a comprehensive theory of consciousness.

B. The Explanans

Although these six clearly do not exhaust the list of possible explananda, they do capture the main features of consciousness that have been at issue in the recent philosophic literature. We can thus turn to the second parameter and consider various restrictions on what can appear in our explanans. We get quite different interpretations of our original question depending on how we limit the range of terms, concepts or processes that can figure in our explanation of consciousness. There are at least four main variants. Although they are not mutually exclusive, and indeed in some cases clearly overlap, it is worth regarding each of the four as delimiting a distinct if not wholly separate explanatory domain.

B1 The first and perhaps most common variant involves limiting the explanans to the physical or material. The two are not quite the same, since the material concerns only the properties of matter and there is more to physics than matter and its properties, but in philosophic discussion it is common to use the two interchangeably and I will not make much of the difference. Put
in these terms, the problem of consciousness becomes just a specific and perhaps particularly intractable case of the mind-body problem: 'Can we understand or explain the relevant aspect of consciousness in purely physical or material terms?' As we shall see when we turn to our third parameter, this question is itself open to many readings depending on how we unpack the requisite notion of explanation. Moreover, there are unclarities in the very notion of the physical itself. Just what counts as a physical property or relation? One could treat as physical any property that applies to physical things, but doing so would threaten to trivialize the mind-body problem. For example, any form of dual aspect theory or even property dualism that allowed brains to have both mental and physical properties would collapse into physicalism; there would be no way to assert that brains could have properties that were not physical properties. One would still be able to distinguish physicalism from substance dualism, but in so far as we wish to do more than that, we need a way to delimit the set of physical properties more narrowly than as those that apply to physical or material objects. On the other hand, if we turn only to the set of properties that are explicitly referred to or quantified over in current physical theory, we face at least two other problems. First, there are many properties that we regard as uncontroversially physical that nonetheless fail to find a place in physical theory proper, because they are higher-order properties of organized physical systems - the properties of being a cornea or a magnitude 7.0 earthquake are physical in this sense, but neither is invoked by physicists in their theories. Thus we need to extend the range of the physical to cover properties that in some way depend upon or are constituted by underlying physical properties, but just what sort of dependence is required is open to debate.
B2 Rather than restricting our explanans to the physical, we might instead restrict it to functional relations. Functionalism is the general view that mental states and processes can be characterized and explained in functional terms; what makes something an instance of a given mental state type, such as a belief that interest rates will rise or a desire for a hot cup of freshly brewed coffee, is the functional role it plays within the functionally characterized organization of some person, organism or system. Though functionalism has come under a lot of critical attack, it probably still remains the closest thing there is to a mainline view in the contemporary philosophy of mind. Indeed, I think many of the alleged refutations of functionalism, e.g. those based on the social nature of mental categories (Baker 1985), are best interpreted as disputes between rival versions of functionalism rather than as attacks on the position per se. It's just more exciting to present them as general refutations of functionalism. However, the very fact that functionalism admits of so many interpretations means that it's far from clear just what a commitment to functional explanation involves. It is sometimes said that functional states of a system are to be type-individuated on the basis of their relations to their inputs, their outputs and each other. But then questions immediately arise about what is to count as an input or an output and in what terms they are to be specified or characterized, as well as questions about the nature and range of interstate relations to which one may appeal. In a very simple and restrictive formulation, inputs must consist of externally observable stimuli, outputs are restricted to macroscopic behaviors described in some neutral vocabulary (e.g. as physical movements), and the range of interstate relations includes only basic relations of causation or inhibition among states or groups of states.
The fact that many mental states could not be captured within such a narrow framework would not show that they could not be characterized using a more liberal concept of functional role. The same can be said of theories that unpack the notion of functional role in terms of highly abstract computational states, whose realization requires only some form of mapping from the computational states to physical states that preserves formal relations of joint realization and succession. In contrast with such highly abstract forms of functionalism, one could alternatively characterize inputs, outputs and interstate relations in ways that involve significant limits on the range of physical realizations. As numerous authors have noted (Kalke 1969; Lycan 1987), the functional-structural division has no absolute boundaries, and there is no context-free answer to questions about whether such properties as being a neuron or being a positively self-sustaining neural loop are functional properties or underlying structural properties. They can be classed as either, depending upon the particular explanatory project one is engaged in. A similar relativism affects the distinction between what role a state plays and how it plays that role; it is sometimes said that functional description is concerned with the 'what' while questions about the 'how' are matters about underlying realization or structure. But the what-how distinction collapses as soon as one tries to put any theoretical weight on it. One further and important issue is whether or not the notion of function is interpreted teleologically. To say that a given state has the teleological role of doing x is not merely to say that it does x and that by doing so it plays a certain causal role in the operation of the overall system.
If one states that the teleological function of the three-part bovine stomach is to digest cellulose, or that the function of the Dolby circuit on a tape player is to reduce hiss, one is saying more than that they do those things within the operation of their respective systems. The teleological claim implies that in doing so they are doing what they are supposed to be doing. However, there is no clear consensus about how to unpack that further element. Some take it to concern the structure's origin (Wright 1974) and how its doing x
figured in the selection or design process that led to its present existence (it's because the lineage of ancestral bovine stomach structures aided in digesting cellulose that they were selected for by evolution). Others interpret it as more a matter of how its doing x contributes to the well-being or proper operation of the system of which it is a part (Nagel 1976). Given these many readings of 'functional', claims about which aspects of consciousness can be explained in functional terms are highly indeterminate. Thus it's surprising that confident claims are made about features of consciousness not being open to functional explanation. Can one really say that functional explanations are always vulnerable to inverted or absent qualia objections, given the range of possible ways in which one might read 'functional'? I think not.

B3 A third option would be to restrict our explanans to naturalistic concepts. There has been a lot of recent philosophic controversy about whether and how one might give a naturalistic account of intentional content - with covariance theories, causal theories and functional role theories among the leading contenders. Similar concerns extend as well to consciousness. However, the conceptual boundaries of the naturalistic are even more obscure than those of the physical and the functional. In part naturalism inherits its vagueness from those first two domains. Physical and functional relations (at least those that don't involve any suspect and unexplained element of teleology) count as naturalistic, and in so far as their borders remain unclear, the domain of what's naturalistically acceptable is also left uncertain. However, the naturalistic is generally taken to include more than the physical and the functional; biological concepts, neurophysiological properties, historical factors, and more or less any concepts used in any of the nonmentalistic natural sciences might count as naturalistic.
Indeed one might ask, 'What if anything is left out?' The intent seems to be on the one hand to exclude explicitly mental properties such as intentionality and subjectivity, and on the other hand to exclude dualistic, supernatural and magical factors or relations. In the last respect there is a specific intent to rule out theories that appeal to unexplained basic correlations between material properties and mental ones. For example, as McGinn (1991) has noted, any theory asserting that a given neurophysiological property just causes conscious episodes without further elaboration would be dismissed as 'magical', on a par with turning water into wine. But this really moves us into issues better dealt with later in section C.

B4 A fourth and final option concerns a less radical but still substantial explanatory project: that of trying to explain consciousness in terms of relations among nonconscious mental states. The project is less radical in that it allows mentalistic notions, such as belief, desire, perception, and intentionality, to appear within its explanans. It is nonetheless nontrivial, since it is not obvious that appeals to nonconscious mental states and their interrelations will suffice to explain conscious mentality; if consciousness is what makes the mind-body problem (seem) really intractable, then showing how consciousness can be explained in terms of less problematic mental states would be great progress. The two most prominent examples of this approach in the recent philosophic literature are Daniel Dennett's (1991) multiple drafts theory of consciousness and the various related versions of the higher-order thought theory of consciousness as championed by David Rosenthal (1986). In both cases, the attempt is made to explain consciousness in terms of intentional states ('judgements' in Dennett's version and thoughts in Rosenthal's) that have other mental states as their intentional objects.
Although such theories are less radical than direct attempts to explain consciousness physically or functionally, they have met with lots of spirited criticism (e.g. Shoemaker 1993; Tye 1993). Most critics have questioned whether the alleged theories in fact suffice to explain one or another important aspect of consciousness; for example, do they adequately explain the qualitative character of conscious experience? However, our present aim is just to get clear about the nature and scope of the explanatory project, not its adequacy. Nonetheless there are some problems even in that regard. In particular it is unclear just which states count as nonconscious. For example, does the concept of thought imply consciousness? If so, it could not be used to define consciousness on pain of circularity. Nor are such worries idle. Some philosophers, most notably John Searle (1992), have argued that genuine intentionality presupposes consciousness. Should Searle be right, the order of explanatory dependence proposed by Dennett and HOT supporters would have to be reversed.

C. Relation between Explananda and Explanans

Having completed our survey of possible explananda and explanans, we can turn to the third and perhaps most important parameter of our basic question: What relation must hold between the two in order to count as an explanation of the relevant feature of consciousness? Here again the options are diverse, and which one seems most appropriate depends in part on which pair from our first two parameters we are trying to combine. A relation that might be apt for a physical explanation of qualia might not be right for a functional explanation of subjectivity. There are five main relations to consider.
C1 The first is logical sufficiency or deductive entailment. The factors cited in the explanans would count as adequate only if they provided a logically sufficient condition from which one could deduce the existence and nature of the relevant feature of consciousness. A physical explanation of qualia would have to cite physical conditions that entailed the occurrence of mental states with specific qualitative characters. Although the deductive requirement may seem to set a rigorous (perhaps too rigorous) and precise standard for explanation, it actually shifts most of the difficult issues elsewhere, onto the question of what additional assumptions are being used to bridge the psychophysical gap. The problem is that without some bridge principles deducibility is impossible, but with them it is far too easy unless their range is suitably restricted. If, for example, one knew that a particular brain state or even a particular pattern of perceptual stimulation were invariably correlated as a matter of empirical fact with a given state of qualitative consciousness, one could add the true conditional describing that link to one's explanans. One could then deduce the occurrence of the specific conscious state from the occurrence of the brain event or stimulus condition, but it would not seem that doing so would suffice as an explanation of the relevant aspect of consciousness. Surely it would not satisfy the explanatory demands of those who worry about consciousness and the mind-body problem. The relevant bridge principle provides nothing more than a brute fact correlation, which even if it is true would not satisfy our legitimate desire to understand why the correlation holds.
Thus the deducibility standard doesn't really do much to clarify the issue of what would count as an explanation; all the difficult questions are just transferred to the problem of delimiting the nature and scope of allowable auxiliary assumptions and bridge principles.

C2 Nomic sufficiency offers a second initially attractive alternative that nonetheless succumbs to the same fate as logical sufficiency. One might require that an adequate explanans provide a nomically sufficient condition for the occurrence of a given aspect of consciousness; i.e. the factors cited in the explanans must necessitate the feature of consciousness as a matter of natural law, which is all we require in many domains of scientific explanation. The nomic requirement also guarantees that the factors cited in the explanans are more than just empirically correlated with the relevant features of consciousness. Despite these advantages, nomic sufficiency is subject to the same basic objection raised against the logical sufficiency standard. The laws that bridge the gap from explanans to conscious explanandum might leave too much unexplained. The unsatisfying element of brute fact correlation can reoccur at the nomic level. Brute links may be all we expect at the level of basic laws of nature. For example, it may not make sense to ask why matter gravitationally attracts according to an inverse square law; it just does. But laws linking features of consciousness with complex nonconscious factors such as patterns of neurophysiological activity would not seem appropriate end points of explanation nor candidates for ultimate and basic laws of nature. Thus they would leave our legitimate explanatory expectations unfulfilled.
We seem to need explanations that provide some greater degree of intelligibility, ones that perhaps describe some process or mechanism that lets us see intuitively why and how the cited factors necessarily produce the relevant aspect of consciousness.

C3 Intuitive sufficiency. Thus a third way of delimiting the required explanatory relation is to demand that the explanans specify a set of processes or mechanisms that can be seen intuitively to produce or realize whatever feature of consciousness we are aiming to understand. Without such a specification our explanatory desires will not be satisfied; too many questions of how and why will be left unanswered. However, once again the details are difficult to spell out; most importantly, what is to count as an intuitive process or explanation? One can appeal to examples from other domains that seem to meet that standard, such as explaining the room-temperature liquidity of water in terms of its molecular structure, or its frozen state below 0 °C in terms of the intermolecular hydrogen bonds that produce ice crystals. But citing examples is not the same as defining intuitiveness, and it is far from clear how we are to generalize from examples like those of liquid water and ice to physical or functional explanations of one or another feature of consciousness. Moreover, the problem is complicated by the fact that what strikes us as intuitive is highly context-sensitive and relative. Familiarity, for example, is clearly a factor; having encountered a form of explanation many times generally enhances its intuitive appeal. Field-theoretic explanations might have seemed odd and less than intuitive a hundred and fifty years ago, but today they qualify as paradigms of naturalness. Perhaps all we can say with regard to intuitiveness is what an American Supreme Court Justice famously said about obscenity: 'I don't know how to define it, but I know it when I see it.'
One would surely like more, but at present that may be the best we can do. Those like Colin McGinn (1991) and Joseph Levine (1983) who are sceptical about our ability to explain how consciousness depends on physical processes might argue that no definition of intuitiveness is needed to make their claims. No matter how it is ultimately defined, it seems we do not at present have any intuitively compelling explanations of how to bridge the psychophysical gap, at least not with regard to the phenomenal and qualitative aspects of consciousness. But their claim is not just that we don't