Valerie Gray Hardcastle (1996) Locating Consciousness. Psycoloquy: 7(33) Locating Consciousness (1)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 7(33): Locating Consciousness

LOCATING CONSCIOUSNESS
[John Benjamins, 1995, xviii + 264 pp. ISBN:902725124/1556191847]
Precis of Hardcastle on locating-consciousness

Valerie Gray Hardcastle
Department of Philosophy
Virginia Polytechnic Institute and State University
Blacksburg, VA 24061-0126

valerie@vt.edu

Abstract

Our conscious minds are a wonderfully bizarre feature of us. This book aims to develop a scientific framework in which we can investigate and study consciousness as a perfectly natural and perfectly understandable phenomenon. In doing so, it also explores various skeptical charges made against this project, arguing for the most part that the skepticism is fueled by an ignorance of science in general and of how scientific explanations function in particular.

Keywords

binding, consciousness, dynamical system, memory, priming, qualia.

I. NATURALISM ABOUT SUBJECTIVE EXPERIENCE

1. Most of us have dualistic tendencies; our folk theories straightforwardly assume a causally efficacious mental thing that runs most of our lives. As a result, we have difficulty imagining what any genuine theory of consciousness would look like, for anything that we come up with just looks silly. Neurotransmitters, neurons, neural firing patterns, neuronal networks -- none of these seem to be the right stuff for being conscious. And even if we could isolate the causal factor in question, there still is the vague sentiment that we would be leaving something important out, namely the "first person-ness" of conscious experience.

2. Owen Flanagan divides this worry into two components: on the one hand, science will not "capture what consciousness is like from a particular point of view;" on the other, it will not analyze consciousness exhaustively (1985, p. 386). I maintain that the first worry is false and the second true but irrelevant. There is no causal distinction a third person point of view cannot translate from a first person point of view. The ways in which the causal relations are described may differ -- subjective experience is prelinguistic or perhaps sublinguistic, while objective descriptions are propositional or sentential -- but the relations themselves remain the same. I may know red as a certain metameric triplet or by a visual experience, but in both cases, I know red just the same.

3. However, being able to capture the relevant causal interactions is not the same as analyzing some phenomenon exhaustively. Any objective, third-person, scientific theory will not be able to describe what it is like to have a conscious experience for any particular person. But I don't see this as a problem: no scientific theory describes or analyzes all aspects of whatever phenomena it is explaining. Science abstracts away from the intricacies of the real world to posit a "physical system", a simpler artificial domain with a few well-controlled parameters that allow us to predict how the world will likely go in all its complexity.

4. The real question is whether what is left out of a scientific theory of consciousness will be important. The skeptics maintain that it is; the naturalists maintain that it is not. What it is like is certainly important for our folk interactions in the world. However, it is entirely possible that we have picked out the most salient features of our phenomenal experiences in our everyday discourse about our mental life without hitting upon the most fundamental or essential ones. For now, we should just wait and see what science delivers before rendering any sort of verdict of science's success or failure in explaining our qualitative states.

5. Of course, some believe that our completed sciences of the mind will not include consciousness as an explanandum; our conscious mental states are orthogonal to how psychology and neuroscience parse their respective domains. Consequently, consciousness is not an appropriate topic for scientific scrutiny. Here I simply disagree: there is something that it is like to be a conscious state and I want to explain what that something is. Prima facie I see no reason why I am not making a reasonable request.

II. THE MULTIPLE MEMORY FRAMEWORK

6. The time has come either to put up or shut up. Here is my framework for understanding consciousness scientifically. It will at least identify where we are conscious in our processing stream, though it won't tell us why experiences are qualitative.

7. We know that there are at least three different memory systems in the brain, only one of which supports conscious access to its contents. These three mnemonic systems differ substantially in their properties and in their causal effects. One system contains habits and skills, and other such "memories," and is probably located among the corticostriatal connections of the brain. This system appears to be little more than a collection of specific behavioral responses. That is, the system does not store representations per se, but only learned responses to particular cues.

8. Similar to, but not identical with, this system is a perceptual memory system (resembling the one Schacter (1990) hypothesizes to underwrite implicit priming tasks). This system is not semantic, though it is episodic. That is, it appears that this system is sensitive to only a restricted and rather crude set of an input's many "microfeatures." This structural (ST) system is modality specific and sensitive to font and graphemes, but not to the environment surrounding the stimuli. Further, it is not sensitive to the type of processing the stimuli receive (e.g., semantic versus phonetic), nor to the amount of attention focused on the task. Each mnemonic trace, once laid down, operates as a self-contained unit for pattern matching and does not depend on neighboring memories for input or support. Though this system does not contain semantic information itself, since we do see some semantic priming in the lexical decision paradigms, we can infer that the system influences the activation of the "declarative" memory system that underwrites intentional behavior. My hypothesis is that the ST system has multiple outputs which input to, or at least somehow affect, conscious declarative memory.

9. The explicit memory system is the one that forms the basis for short-term memory and consciousness itself. Although the larger hippocampal region and its connections to neocortex lay down the memories in this system, the memories themselves are probably located only in the cortex (although the medial temporal region may have to maintain the cortical regions for several years before the memory is completely established there). More specifically, this memory system is located in the sensory cortices responsible for analyzing and processing incoming sensory data. This system captures not just immediate physical surroundings, but all sorts of contextual information. Indeed, the defining characteristics of this semantic (SE) memory system are not just the vast amounts of information stored in content addressable form, but also the complex associations that tie together or bind different mnemonic objects into complicated unified wholes.
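The phrase "content addressable form" can be made concrete with a toy example. The sketch below is my illustration, not a model from the book: a minimal Hopfield-style network in which a whole stored pattern is retrieved from a degraded cue, the way the SE system is said to complete an interpretation from partial input. All names and parameters here are hypothetical.

```python
# Toy content-addressable memory (Hopfield-style), purely illustrative.
# Patterns are +1/-1 vectors; a noisy cue settles back to a stored pattern.

def train(patterns):
    """Hebbian outer-product weights; w[i][j] ties units i and j together."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=10):
    """Repeatedly pull each unit toward the weighted vote of the others."""
    s = list(cue)
    for _ in range(steps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, 1, 1, 1, -1, -1, -1, -1]
w = train([stored])
noisy = [1, -1, 1, 1, -1, -1, 1, -1]   # degraded cue: two units flipped
print(recall(w, noisy))                 # completes back to the stored pattern
```

The point of the sketch is only that retrieval is driven by partial content rather than by an address: the cue itself, via its associative ties to the rest of the pattern, reinstates the whole.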

10. In contrast to the ST system, memories in the SE system have a tendency to diminish rather rapidly over time, possibly through some consolidation procedure. This system is sensitive to the level and type of processing a task demands, as well as to the amount of attention devoted to a task, and it can transfer information from one modality to another without loss of that information. Only a single hypothesis about the meaning of the incoming stimuli is active at a time. Further, the semantic ties among groups of data are not easily broken once formed.

11. I believe that conscious percepts reside in activated SE memories; that is, we have a conscious percept whenever we use the SE system. More specifically, a conscious percept is an interpretation of incoming stimuli in the SE system in light of the previous (interpreted) experiences we have had.

12. The real difference between ST and SE memories (once SE memories are firmly established) is in the type of information stored, not in their general locations. Both ST and SE memories are distributed throughout sensory cortex. However, the ST system can capture neither rich contexts nor semantic associations; for each individual mnemonic trace is self-contained, isolated, and relatively sparse. In contrast, SE memories are extremely interconnected and as a result of being more distributed have the computational capacity to represent the immediate details surrounding incoming stimuli. (Perhaps being stored in essentially the same location allows for the ST system to influence activation in the SE.) Which of these features (rich associative connections or semantic instead of structural ties) -- if either -- is necessary or sufficient for consciousness is not clear. All I can say at the moment is that in order to have a conscious perception, we have to be able to interpret incoming data using the right autobiographical information stored in the SE system.

13. I should stress that this view of consciousness differs substantially from those currently in vogue. Cognitive psychologists align consciousness with some sort of supervisory system or executive processor that takes the activated interpretations of our various memory systems and then manipulates them in some fashion so as to influence behavior. The most important difference between our two approaches is that we assume consciousness resides at different places in our psychological economies -- I hypothesize that consciousness exists prior to what the "executive" theories postulate. In particular, a theory based on my framework entails that conscious phenomena are components over which the executive or supervisory system operates. That is, the conscious phenomena themselves would be one type of input to cognitive psychology's executive processors.

14. There is some evidence, too, that executive processing does not underwrite consciousness. First, neither of the brain areas usually thought to be connected to a supervisory system (frontal lobes and the reticular activating system) appears to support qualia. Patients who have their frontal lobes ablated still experience conscious phenomena. Too little is known as yet to distinguish the sorts of processing the reticular activating system does from other diffuse projection systems very well. All appear to be increasing the signal-to-noise ratio of firing cortical neurons.

15. Finally, we see clear dissociations between cognitive processes such as arousal, attention, performing motor tasks, problem solving, and cognitive control, on the one hand, and consciousness, on the other. At the same time, we do lose aspects of qualitative experience with damage to posterior cortex, along with bits of mnemonic interpretations. This is the sort of evidence one would expect to find if the multiple memory framework for understanding consciousness were correct.

16. A second and maybe more controversial point that should be noted about this view is that it entails that young infants are not conscious. Consciousness requires an SE memory system and infants younger than about 4 or 5 months show signs of having only ST structures. They cannot remember contexts or make rich associations. Instead their behaviors are keyed to specific environmental cues and often resemble perseverative S-R loops.

III. THE HARD PROBLEMS OF CONSCIOUSNESS

17. One criticism of scientific theories of consciousness is that it is never clear why it should be "that" that is conscious. One version of this criticism is seen in what Kathleen Akins calls Marr's Paradox: None of the alleged representations at any level of processing in the brain corresponds to our rich, semantically-loaded, continuous, and unified conscious experiences. None of the outputs of any of our processing modules seem like our conscious experiences.

18. However, this criticism assumes that our processing proceeds in a serial, step-wise fashion (even if the individual modules are distributed and parallel). But if we can change the way we view the brain so that we see it as containing sets of dynamical processing loops, then this sort of "hard problem" disappears. If our perceptions correspond to higher ordered patterns of bifurcation in an attractor phase space, then the output of individual processors and what it resembles becomes irrelevant, for we abstract over that level of description. In this case, the sequence of patterns that the oscillatory networks pass through over time, or the bifurcations from one attractor to another, constitute the "computation" of the system. These oscillations entrain the activity of individual neurons in various areas of cortex into a well-defined macro-state. Adopting this perspective means that conscious states would be mapped to the transient firing patterns of groups of neurons whose whole behavior expressed a particular mathematical dynamical description.
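The idea that oscillations entrain individual units into a well-defined macro-state can be illustrated with a standard textbook construction. The sketch below is my own illustration (a Kuramoto-style phase-oscillator model, not anything from the book): oscillators with slightly different natural frequencies pull into near-synchrony once coupling is strong enough, and a population-level "order parameter" r tracks how coherent the macro-state is (0 = incoherent, 1 = fully phase-locked). All parameter values are hypothetical.

```python
# Toy entrainment demo: Kuramoto phase oscillators, purely illustrative.
import math, random

def order_parameter(phases):
    """Magnitude of the mean phase vector: population-level coherence."""
    x = sum(math.cos(p) for p in phases) / len(phases)
    y = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(x, y)

def simulate(n=50, coupling=2.0, dt=0.01, steps=1000, seed=0):
    rng = random.Random(seed)
    freqs = [1.0 + rng.gauss(0, 0.1) for _ in range(n)]   # natural frequencies
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            # each oscillator is nudged toward the mean phase of the others
            pull = sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n
            new.append(phases[i] + dt * (freqs[i] + coupling * pull))
        phases = new
    return order_parameter(phases)

print(round(simulate(coupling=0.0), 2))  # no coupling: coherence stays low
print(round(simulate(coupling=2.0), 2))  # strong coupling: coherence near 1
```

The relevant analogy is only structural: what the system "computes" is read off the collective trajectory (here, the order parameter), not off any single unit's output, which is the abstraction-over-levels move the paragraph describes.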

19. Moreover, there is a bit of data from simple patterned hallucinations in cat cortex which suggests that these higher ordered patterns do in fact "resemble" our perceptions in the relevant ways. Visual hallucinations resemble one another in their early stages; most are simple geometric shapes (gratings, lattices, funnels, spirals, etc.). EEG waves recorded over the cortex of hallucinating cats are geometric transformations of the patterns seen. Moreover, these transformations follow the same topographic transformations we find from retina to cortex (Adey, Bell & Dennis, 1962; Ermentrout, 1979). More complicated patterns have been found over other areas of cortex in normal perceiving animals, though none have been linked to the structure of any qualia.

20. A second sort of complaint with this sort of framework for understanding consciousness is that we can easily imagine creatures with ST and SE memory systems who are not conscious. Consequently, I have not isolated the conditions necessary for qualitative phenomena. This is the well-known problem of absent qualia: for anything identified with consciousness, we can imagine some creature with that thing who is not conscious.

21. At a first pass, this objection hinges on a failure to appreciate identity relations in science. I might be able to imagine that water is not H2O, but I would be wrong in my imagining. Water is H2O and what I can and cannot picture has little bearing on how the universe actually unfolds. Consciousness should work the same way. It is whatever it is, regardless of whether that thing seems plausible to me. Moreover, this criticism assumes a strict division between the functional or causal relations the theory hypothesizes and the material which instantiates those relations. If the abstractions found at one level really are separate from the patterns at a different level, then we could always suppose that the true causal ingredient for consciousness is located at some other level of description in the brain and that whatever theory I posit (for any theory of the brain) will overlook that item. This sort of criticism works only if one relies heavily on one's own intuitions as to what counts as a plausible explanation of consciousness. Since no explanation seems plausible to such intuitions, no possible theory will capture the elusive intuition-soothing ingredient.

22. But if we dismiss (or relativize) the division between levels of description and organization, then this objection to a theory of consciousness vanishes as well. A multi-level theory could include any relevant causal information. If some factor were discovered about the SE system that caused the phenomenal difference between it and ST memories, then -- whatever it is -- it could be included in the theory. And, intuitions to the contrary notwithstanding, we would have a theory of consciousness.

23. A final concern for a multi-level theory of consciousness is whether there is appropriate agreement between high level psychological and lower level neurophysiological descriptions of the brain. It has been argued that our experimental paradigms are simply too crude to allow us to tie psychological models to those in neuroscience (e.g., Dennett & Kinsbourne, 1992). In particular, we cannot tell when consciousness occurs in the brain from the basic RT experiments and error tabulations from psychology.

24. However, we do have at least one investigative path open to us which can tell us about the time course of psychological events with greater accuracy, and that is using event-related potentials (ERPs) to cognitive stimuli. Prima facie, there is no reason why neurobiological evidence cannot provide a robust indicator of qualitative experience. Indeed, I believe that such evidence is already accruing.

25. The ERPs for implicit (masked) priming effects are very different from explicit (unmasked) effects for priming using both words (Neville and Weber-Fox, 1994) and novel visual patterns (Hardcastle, 1996), suggesting that implicit and explicit priming activate non-identical systems. Moreover, the system that underwrites implicit priming is activated automatically and quite early in cognitive processing. Though it can perform rather sophisticated analyses (e.g., recognizing a stimulus as the same as or different from a previous stimulus), it is not capable of categorizing stimuli based on higher level distinctions, such as arbitrary category definitions. Explicit priming occurs later and appears to be sensitive to the more "abstract" properties of stimuli.

26. These early and late processing systems are consistent with the hypothesized ST and SE memory systems, and with information flowing from the ST to the SE system. The early system, as seen under the masked conditions, was active anteriorly and only for repetition priming using meaningful words and for identity and mirror priming using novel visual stimuli. The late system -- unmasked conditions -- was posteriorly distributed and showed differential responses for both words and nonwords and for identity, mirror, and category priming of novel shapes. These data support the suggestion that the early system is more structural and the later includes "higher level" semantic analysis.

IV. CONCLUSION

27. I want to be clear about what I am claiming. When we use a memory laid down in parietal cortex by the medial temporal lobe to interpret some incoming stimuli, we experience something phenomenally. If we can't use these memories to interpret incoming stimuli, then we do not experience anything at all. These conditions entail that phenomenal experience is inextricably tied to what information we can access in our brains and how we can access it. Contra Block (forthcoming), it does not make sense to talk about "phenomenal" consciousness apart from "access" consciousness. For a certain type of "access" just is phenomenal awareness.

28. Is this a radical claim? Yes and no. It is in that it immediately suggests how one might go about falsifying it. Find someone who has activated SE memories without being conscious of them or find someone who by all other reasonable standards is conscious but lacks the correlative SE memories. It is not a radical claim, though, because I identify consciousness with something in our heads. For it is exactly the sort of claim one must make if one is to be an earnest, anti-epiphenomenalist, materialist. For an already converted naturalist, the question is which brain states to identify with our mental phenomena, not whether the identification is possible.

REFERENCES

Adey, W.R., Bell, F.R. & Dennis, B.H. (1962). Effects of LSD-25, psilocybin, and psilocin on temporal lobe EEG patterns and learned behavior in the cat. Neurology, 12: 591-602.

Block, N. (forthcoming). On a confusion about a function of consciousness. Behavioral and Brain Sciences.

Dennett, D.C. & Kinsbourne, M. (1992). Time and the observer: The where and when of consciousness in the brain. Behavioral and Brain Sciences, 15: 183-200.

Ermentrout, B. (1979). A mathematical theory of visual hallucination patterns. Biological Cybernetics, 34: 137-150.

Flanagan, O. (1985). Consciousness, naturalism, and Nagel. The Journal of Mind and Behavior, 6: 373-390.

Hardcastle, V.G. (1996). Discovering the moment of consciousness II: An ERP analysis of priming using novel visual stimuli. Philosophical Psychology, 2: 169-198.

Neville, H.J. & Weber-Fox, C. (1994). Cerebral subsystems within language. In B. Albowitz, K. Albus, U. Kuhnt, H.-Ch. Nothdurft, & P. Wahle (eds.) Structural and Functional Organization of the Neocortex: Proceedings of a Symposium in the Memory of Otto D. Creutzfeldt, May 1993. Berlin: Springer-Verlag, pp. 424-438.

Schacter, D.L. (1990). Perceptual representation systems and implicit memory: Toward a resolution of the multiple memory systems debate. Annals of the New York Academy of Sciences, 608: 543-571.

TABLE OF CONTENTS of "Locating Consciousness":

1. Naturalism about Subjective Experience
2. The Limits of Theory
3. Consciousness as a Natural Kind
4. A Multiple Memory System Framework
5. Conscious Perception and Semantic Memory
6. How Do We Get There from Here?
7. Martian Pain and the Problem of Absent Qualia
8. "Executive" Processing and Consciousness as Structure
9. The Moment of Consciousness
