I identify two morals in Hardcastle's book (1995). The main moral is that scientific investigation can provide an illuminating, explanatory theory of conscious experience. The subsidiary one is that the best way for such investigation to proceed is to combine psychological and neurophysiological research, incorporating more dynamical models and relying less on strictly classical computational models. I focus my critical attention on the main moral, but also briefly discuss the subsidiary one.
2. While the proposal concerning the SE system is the vehicle, I read Hardcastle's main concern as methodological in nature. In particular, I identify two morals in the book: a main one and a subsidiary one. The main moral is that scientific investigation can, despite the doubts of philosophical mysterians and skeptics, provide an illuminating, explanatory theory of conscious experience. The subsidiary one is that the best way for such investigation to proceed is to combine psychological and neurophysiological research, incorporating more dynamical models and relying less on strictly classical computational models. I will focus my attention on the main moral, but I'll have a little to say about the subsidiary one as well.
3. What is the methodological problem Hardcastle claims to address? At the beginning of the book she makes it quite explicit. My mental life is full of "vivid" experiences, the "smell of perfume", the "sound of cymbals crashing", color experiences, and the like. What is it to have such experiences? How can their features be explained? In other words, what does a theory of such phenomena even look like?
4. My complaint against Hardcastle's answer to the methodological question posed is that though she claims at the beginning of the book to be addressing the explanatory question that has bothered philosophers, in fact her primary response to the philosophical worries is, essentially, to tell those who have them to grow up. The actual theory developed does not touch the question that bothers the mysterians and skeptics, a fact that Hardcastle admits. But that question, she maintains, is the result of confusion, or untutored intuition. Perhaps she's right, but to make that argument she didn't need all the empirical details (and she did need more philosophy than she gave us). That a scientific theory of information flow is possible was never in doubt. The real crux of the matter is whether such a theory gives you a theory of consciousness.
5. So what is the explanatory problem that bothers the qualophile (to use Dennett's term)? Right now I'm looking at the red diskette case beside my computer. My perceptual state possesses a certain reddish qualitative character. What explains that feature of my perceptual state? There seem to be two sorts of scientific answers available (and combinations thereof): either it's something about the informational or computational character of the state, or something about its neural implementation, that is responsible for the reddish quality. But I see no explanatory connection between informational or neurological properties and qualitative properties. In particular, it seems quite conceivable that they come apart in various ways, as the inverted and absent qualia scenarios exemplify.
6. Of course Hardcastle doesn't accept the relevance of conceivability considerations to this project. Here's how I see the role of conceivability. Most of the literature, including Hardcastle's discussion, is concerned with the connection between conceivability and possibility. While I don't think the anti-possibility side often does full justice to the pro-possibility side, in the end I side with the anti-possibility camp. (Actually, my position is more complicated than this, but I can't go into the nuances here.) But conceivability is relevant to explanatory adequacy. To understand why X occurs is to know why not-X couldn't occur (in the relevant circumstances, of course). So long as it's conceivable that not-X could occur as well, there's a gap in our understanding of why it was X, and not not-X, that occurred. Confidence that there is a metaphysically sufficient condition, so long as we don't know how to characterize it, isn't enough to close the gap in our understanding.
7. Now Hardcastle argues, as many do, that we are faced with the following options: Either dualism or materialism is true. If the latter, then conscious states are metaphysically supervenient on physical states, so one can't have a mental difference without a physical difference. If dualism is true, then, given a plausible assumption about the causal closure of the physical, epiphenomenalism follows. But epiphenomenalism is implausible, or downright intolerable. Hence, materialism, with its supervenience thesis, must be true. This means that what seems conceivable, the instantiation of a certain collection of physical-functional properties (e.g., activation of the SE system) without consciousness, isn't genuinely possible.
8. I can agree with all that - in fact I do - and still maintain that there is an explanatory gap. I admit that I have good reason to think that it's not metaphysically possible for a creature to share all of my brain states and yet lack consciousness, since to admit such a possibility would entail (given certain other plausible premises) the causal irrelevance of consciousness. So I have good reason to think that some such theory as the one Hardcastle offers is actually true. But this doesn't mean that I now have an explanation of consciousness. I still don't know why activation of the SE system results in consciousness, and the conceivability of such activation without the presence of consciousness is a symptom of that lack of understanding.
9. Aside from a very general disagreement about the role of philosophical reflection and intuition, a dispute I can't hope to adequately address here, I suspect there is a more particular disagreement between Hardcastle and me at work as well. (Undoubtedly it's related to the larger issue.) It comes up in her discussion of inverted spectra, and I think it helps to explain just why, despite my remarks above, she needs all the empirical details for her purposes.
10. When I think of what there is to explain about qualia, I have in mind something very basic and primitive, not conceptually tied to "vividness" or representational richness at all. To me it makes perfect sense for there to be a mind that has just one state constantly, one qualitatively quite similar to mine if I were to stare at a wall of pure saturated red and think of nothing else. For Hardcastle, however, I think what there is to explain about conscious experience is precisely its richness of informational content, and its multifarious relations to other cognitive states. If this is what is at issue, then of course the details about the SE system, worries about the binding problem, and all the rest are quite pertinent. But for my problem, all of this is beside the point. Again, you can just dismiss my problem, as Hardcastle does. But providing a theory that addresses the other problem isn't in itself an argument that my problem doesn't exist.
11. I said above that this difference in understanding of the problem is manifested in her treatment of inverted spectra. Initially, I found her response to this challenge curious. She admits that the best psychological theory of color vision, say, or even the best neurophysiological theory, very well might not be able to rule out a case of inverted spectra. It follows that such theories would not be able to say in what the instantiation of a particular color quale consists, though it might still be able to say what it is to have some color quale or other. But this is not a problem, she argues, since science is in the business of capturing generalizations, and therefore must abstract over the sorts of individual differences that cases of inverted spectra would represent.
12. I would have thought this was a very damning admission. After all, how could there be a physical property for which there is no explanation of its instantiation in physical terms? But her analogy to the case of intelligence shows why she isn't embarrassed by it. We can suppose that a number of factors enter into the determination of intelligence, and it would be enough for psychology to discover certain general principles that govern the behavior and interaction of these factors. When it comes to individual differences, however, there may be nothing interesting to say, other than that they are the result of massive interaction effects among a multitude of variables. We might be able to say something interesting about the kind, human intelligence, as well as some of the general factors that affect its development. But we may have nothing interesting to say about why Chomsky is so much smarter than I am.
13. For Hardcastle, it seems, the relation between the precise qualitative character of an experience and the property of there being a qualitative character is not really a qualitative matter, but rather a matter of increased specificity or determinacy. So long as one can specify what the factors are that determine the particular values, and how they determine them, our inability to predict precisely what the values will be for particular cases, due to the complexity of the computation (among other factors), is not troubling.
14. But that isn't how I'd characterize the explanandum. What the inverted spectrum case reveals, on my construal of it, is precisely the conceptual independence between the notion of a particular type of qualitative character and the notion of a causal role. It's not that particular qualia are constituted by points in a certain space of relations while all we are able to specify by the methods at hand are regions. Rather, our conception of qualia is more a matter of the occupants of the points (or regions), which accounts for the ability to conceive of them divorced from their standard roles. But if this is the right picture, then we really are lacking the conceptual resources to explain what makes a particular quale the quale it is.
15. Finally, a word about the subsidiary moral. One place this comes up is in her discussion of the functional zombie problem. She argues that we don't get a problem if we allow more detail about neural mechanisms to help define the functional or causal role of consciousness, and in this she parts company with such functionalists as Sydney Shoemaker. But I would side with Shoemaker here. True, as Lycan always insists, the functional role/implementation distinction is not absolute, and is multi-level. But that doesn't mean for a given explanatory purpose that it's unprincipled. It seems to me that we can distinguish between a case of adding a detail about a psychologically characterized function and a case of adding a detail about how such a function is implemented in a physical mechanism. So long as this distinction is sensible, there's a clear limit on the usefulness of neurophysiological information in overcoming the problem of functional zombies.
Hardcastle, V.G. (1995). Locating Consciousness. Amsterdam: John Benjamins.
Hardcastle, V.G. (1996). Precis of Locating Consciousness. PSYCOLOQUY 7(33) locating-consciousness.1.hardcastle.
Levine, J. (1993). On Leaving Out What It's Like, in Davies, M. and Humphreys, G., eds., Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 121-136.