Morten Overgaard (2001) The Role of Phenomenological Reports in Experiments on Consciousness. Psycoloquy: 12(029) Consciousness Report (1)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 12(029): The Role of Phenomenological Reports in Experiments on Consciousness

Target Article by Overgaard on Consciousness-Report

Morten Overgaard
University of Aarhus
Department of Psychology
Asylvej 4
8240 Risskov


This paper considers the use of phenomenological reports in scientific experiments on consciousness. It takes as its point of departure the notions of first- and third-person observations, and a discussion of what kinds of phenomena these concepts can refer to. Furthermore, the outline of an experiment is presented as a scientific framework within which experiments on consciousness can be performed.


Consciousness, first-person knowledge, third-person knowledge, phenomenology, experiment, methodology.
1. Recently, several authors have suggested that a more developed language of phenomenology might contribute substantially to a solution of "the hard problem", as described by David Chalmers (1995, 1996) and discussed much earlier by William James (1890, 1904). These include, though within very different theoretical frameworks, Francisco Varela's neurophenomenology (1996), Max Velmans' intersubjective science (1999), and David Chalmers' speculations on "first person methods" (1999), to mention just a few. Two recent publications also address these issues: Varela & Shear (1999) and Velmans (2000a). Fundamental to all of them, however, is the logical intuition that any science that tries to correlate different observable classes of phenomena will not succeed with a highly specific and developed terminology for one class and a seriously underdeveloped one for another. This is very much the case for neuroscience and phenomenology. More fundamentally, and hopefully without putting words in people's mouths, it appears to me that those in favour of a developed phenomenology all accept Thomas Nagel's "what is it like to be" as a criterion for consciousness (see Nagel, 1974), which obviously makes it necessary to have a descriptive account of what "something is like" in the first place. All such authors agree that the study of consciousness is, at the very least, the study of a someone, not a something, and that making the subject into an object might seriously confuse scientific investigations.

2. As most readers will be aware, there are, however, many problems relating to consciousness, and in this paper I shall try to show how phenomenology can contribute in different ways to different problems. More specifically, I shall discuss the problems of the so-called "first" and "third person methods", and the problem of studying qualia within a phenomenological and experimental framework.

3. The issue here is a question of methodology - not what we believe the true relation between consciousness and the brain to be, but what we can do in order to find out. The most immediate problem for a science dealing with "what something is like" is the problem of the first and third person perspective. After all, how can we accept something that only I have access to as in any way coherent with our demand that scientific data must be replicable, generalisable, etc.? On closer inspection, however, there seem to be at least two different understandings of "first" and "third personness", and I think we need to separate the two in order to see the problem more clearly.


4. According to the Cartesian view, "first personness" refers to anything that can be considered certain knowledge on the basis of Descartes' dictum "cogito, ergo sum" or logically similar arguments. Thus, one might say "I see the colour red, therefore I am", etc. (even though Descartes might not have agreed with this claim). The certain knowledge that can be derived from Descartes' original sentence about thoughts and from the sentence about seeing is, in my view, exactly the same. So, basically, I have a first-person perspective on anything that I am conscious of at this moment. Third-person information is, according to this understanding, what we would normally understand by "objective science" - anything that could be studied by an external observer. This is often mistakenly conceptualised by scientists as if "objective science" were independent of the limitations of the observer, and thus more scientifically "pure" than, e.g., introspective reports.

5. However, it could be said, with some inspiration from Husserl and other phenomenologists, that there is no such thing as knowledge that does not involve the subject as an observer. Inspired by this, it could be said that we have "third person knowledge" about our exteroceptive sensory information (although this is just as "subjective" as thoughts are) and "first person knowledge" about strictly internal states. Obviously, both kinds of knowledge are based on subjective experience and are in this sense dependent on the observer. Yet, if this distinction is to make sense, we cannot meaningfully talk of "first person knowledge" as deriving from subjective experiences and "third person knowledge" as deriving from something else. Instead, we should distinguish between what we experience as being "out there" (objects that more than one person can perceive, and that are thus amenable to investigation in classical science) and what we experience as being "inside ourselves" (our thoughts and feelings, which classical science finds difficult to deal with). There is no method that gives access to the "world as such", independent of oneself as an observer, in that all we know about brain, behaviour, etc. is based on what we have perceived or reasoned about it (see Velmans, 2000b, for in-depth discussions of these issues).

6. Some might consider my argument so far as supporting some sort of idealism. Yet, even though all kinds of knowledge depend on the subject possessing this knowledge, it does not follow that we cannot have an objective science. In fact, the notion of objective science depends on certain phenomena being available to any observer and certain experiences being shared when observing the same things. So this is where phenomenology appears on the scene, to help us answer the question: how can we know whether our experiences are similar, in which cases, and to what degree? As long as we are not dealing actively with the problem of qualia, this is still immensely difficult, but probably soluble. In the case of perception, we seem, at least under normal circumstances, to be as good a scientific object as any physical phenomenon, in that if you carry out this or that procedure, you will observe or experience this or that result (Velmans, 1999). If I place a red apple on a table in front of me, I will assume that anybody around me will see it, though from different angles, and if someone did not, there would be something to explain (in terms of colour blindness, visual illusions at work, or something else). If we can agree on this level of realism, developing an applicable language of phenomenology would be a question of correlating discriminations of percepts with reports, so that one word or report ideally correlates 1:1 with one experienced state.
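The ideal 1:1 correlation between reports and experienced states can be illustrated with a minimal sketch. The mapping and its contents are entirely hypothetical; the point is only the formal requirement that no two discriminated states share a report.

```python
# Hypothetical illustration: a report vocabulary correlates 1:1 with
# discriminated experienced states only if every state gets its own report.
def is_one_to_one(report_of: dict) -> bool:
    """report_of maps each discriminated state to the report word used for it.
    Returns True if no report word is shared between two states."""
    reports = list(report_of.values())
    return len(set(reports)) == len(reports)

# Two distinct percepts described with the same word break the ideal:
mapping = {"red_apple": "red", "green_apple": "green", "tomato": "red"}
print(is_one_to_one(mapping))  # False: "red" covers two distinct percepts
```

Nothing about the content of the experiences is captured here, of course; the sketch only expresses the structural goal of the proposed phenomenological vocabulary.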

7. In the case of internal states (emotions, thought processes, etc.), it is somewhat more difficult. Here, we assume that we all differ when presented with the same stimuli: one person has one kind of association when looking at the apple, another has none whatsoever, etc. In fact, absolute congruence would be considered mysterious and might be explained by theories of supernatural psychic powers. Yet, if we push realism a bit further, it seems plausible that we all have, to a high degree, very similar emotions and cognitive processes, even though they may be present in us under widely different circumstances. Again, we must admit that we already have a very efficient, if somewhat naive, every-day phenomenology of such affairs. Autism is a good example of what it would be like if we did not have this fundamental ability to give and understand reports about our inner states, and empathy in clinical psychological practice is at least a hint that it may be possible to develop this ability further and make it more precise. So even though we cannot find a starting point in common observations, we could still find one in these every-day understandings. Again, the long-term goal would be 1:1 correlations between phrases and experienced states. If this should prove at least partially possible, it would diminish the problem of first and third person perspectives, in that the phenomenological statements would have equal status, no matter which perspective they refer to.


8. Returning to the hard problem, we must ask ourselves whether a solution in terms of an improved and expanded phenomenology would actually make any difference in the attempt to make progress on the distinct problem of "qualia", the way experiences appear to us. On the one hand, the 1:1 relationship would help us discriminate between experiences; on the other, the verbal descriptions will in no way capture the essence of qualities as they are experienced first-hand. Thus, arguments of the "inverted qualia" persuasion will be left unharmed by any such method. That is, it would still be possible to argue that two persons who give identical descriptions of something are in fact experiencing the object in question in two different ways.

9. Most experimental approaches to consciousness simply ignore these issues and either just assume certain experienced qualities in the subject or rely on the more unspecific every-day phenomenology. This is not odd at all, considering the immense amount of work on developing a useful phenomenology that would be needed to do this properly. However, it is worth noting that the assumption that the relation between reports and experiences takes one specific form can hardly be an a priori premise of any such experiment. Yet I believe it is quite allowable for science to conduct experiments keeping the self-reports of the subjects fairly simple and uncomplicated, until we perhaps some day have a more sophisticated terminology. Saying "I had a clear experience of X", or something like that, seems, at least intuitively, rather harmless.

10. As an example of the experimental use of phenomenology with regard to qualia, I shall now introduce the outlines of a series of experiments. The overall experimental framework points towards a general methodology for conducting experiments on consciousness. These experiments should be regarded as an example of what I believe an experimental framework for active phenomenology may look like. The experimental paradigm I suggest takes as its point of departure a criticism of one of the few paradigms that have in fact tried to use phenomenological observations as a variable, namely the work of Benjamin Libet. Some of his experiments concern what I have called "first person knowledge"; others concern "third person knowledge".

11. Most readers will be acquainted with Libet's experimental setting, in which a subject watches a clock and at a certain point in time moves a finger after a conscious decision to do so. This led to the discovery that the experienced decision is delayed by 500 milliseconds compared to the EEG-monitored neural activity in the motor areas (Libet, 1985). Other experiments by Libet indicate that awareness of sensory information is delayed by between 100 and 500 milliseconds (Libet, 1978). The main purpose of the experiments described here is not just to revise Libet's work, but to develop a new scientific framework for finding neural correlates of consciousness.

12. In the Libet studies, phenomenology was used to give a precise indication of when the sensation or decision occurred, during the quite complicated task of self-monitoring when one first becomes conscious of having made a decision or having felt a tactile stimulus. One of my most substantial criticisms of this, from a purely experimental point of view, is that one cannot easily compare the timing performed by a subject looking at a clock (in the "free will" experiment) with the timing of the EEG. Quite different time resolutions might be expected from a subjective account of time and from a machine-based one. Benjamin Libet did develop a method for determining the accuracy of the subjects' experience, by collecting reports of where they saw the position of the clock's moving spot when they were presented with a tactile stimulus with a known onset time. Judged onset times for the experience of these stimuli fell within about -50 milliseconds of physical onset times (Libet et al., 1983; Libet, personal communication, 2000). However, even with this method, the possibility of a "systematic difference" between phenomenological reports and EEG timing still exists.

13. First of all, judgements of intermodal sensory simultaneity logically depend on which senses are studied and which stimuli are used. Latency and processing-rate differences among the senses, as well as latency differences introduced by the use of a near-threshold tactile stimulus as compared to a supra-threshold visual stimulus, render the use of any single estimate of timing errors problematic (Breitmeyer, 1985). Furthermore, attending to a first person mental state (the making of a decision) may not be equivalent to attending to a third person mental state. This does not lead to the conclusion that there must be a systematic error in Libet's results, but it seems just as valid an explanation of the data as the hypothesis of a delay in experience. Another problem at the very root of the Libet experiments is the prior assumption that the experience of making a decision happens at one specific time, rather than, for example, as a slow process.

14. To overcome these uncertainties, it would be ideal to employ a transcranial magnetic stimulator (TMS) in a revision of the Libet experiment - a method with which it is possible to "disrupt" the neural activation at a given location in the brain for a brief period by sending a pulse of magnetic waves through the cranium of the subject. A time-varying, high-current electrical pulse is passed through a coil, producing a magnetic flux which, unlike electrical stimulation through scalp electrodes, passes through the skull with very little attenuation. The magnetic pulse then induces an electric field in a direction opposite to the current in the coil, the induced current being proportional to the conductance of the volume conductor. With magnetic stimulation, an electric field is thus induced across the axonal nodes, thereby exciting them.

15. TMS has previously been used to study various psychological phenomena. In visual perception, for instance, it has been shown that TMS stimulation can "take out" a part of the experienced visual field. If, say, all of V1 is stimulated, there will be no experience of vision at all; if only parts of the visual areas are stimulated, parts of the experienced visual field will fall out (Kamitani & Shimojo, 1999). It has also been possible to show delays of reaction times when TMS is applied over the motor cortex for the target muscle (Day et al., 1989), and delays in judging one's own reaction time (Haggard & Magno, 1999). This is in principle somewhat like the "backward masking" technique widely used in the psychology of perception, where one stimulus is presented and immediately followed by a second one, blocking the perception of the first. To begin with, one could use a controlled exogenous stimulus, although this would of course limit the technique to the experience of third person information (according to my previous definition). Here, one could present, say, visual images to a subject just long enough for him or her to be conscious of them, and at the same time deliver TMS pulses at different visual areas. This has already been performed (though only with very indirect reference to conscious experience) and must thus be considered feasible (Amassian et al., 1989; Kastner, Demmer & Ziemann, 1998; Kamitani & Shimojo, 1999). With this technique, it will therefore be possible to find exactly how long the latency period from the presentation of the stimulus until the delivery of the TMS pulse must be for the experience not to be knocked out. This will then reveal when that particular brain area is contributing to consciousness. In other words, this information could in itself add a temporal dimension to theories of neural correlates of consciousness that primarily look at the relevant brain areas as if they were non-dynamic structures.
However, in theory at least, the technique could be used in even more interesting ways to gain knowledge of the neural substrates of consciousness. In one sense, it would be meaningful to categorise the subjects' reports as "correct", "incorrect", "near correct", etc. by comparing the reports to what was actually presented. However, it should generally be assumed that people are correct in their descriptions of their own experiential states; in this sense, reports should never be considered "incorrect". For one thing, a researcher has no evidence of those experiential states apart from the subjects' reports. So even though one could quantify and categorise the reports for statistical purposes, one should keep the original qualitative form of the data in mind.

16. Potentially, one could also apply this idea to first person experiences, as in Libet's study of voluntary movements. The obvious advantage of using TMS in this experimental set-up is the possibility of disrupting the neural activity in the brain as soon as the readiness potential (RP) in the premotor area, described by Libet as the point relative to which consciousness is delayed, has appeared. The subject could then be asked whether he or she has had the experience of making a decision or not. In this manner, the subject would not need to look at a clock or make subjective statements about timing of any kind. It also holds the possibility of a better understanding of what the subject experiences at different points in time during the decision-making process. So instead of letting the subject control the time factor, the TMS must somehow be triggered by the relevant RP itself, within a range of a few milliseconds.

17. In order for this to work, it will take some time to develop the method. For instance, it is necessary to connect the EEG to a computer performing on-line pattern recognition (based on information from all of the electrodes) in order to recognise the desired RP and then send a "go-signal" to the TMS. The RP is easily recognised once the EEG output has been statistically analysed and averaged to distinguish the signal from the background noise that you always get when using EEG. However, in this experiment there is no time to wait for averaging: the computer must recognise the RP almost immediately. It is true that analysis techniques such as time-frequency analysis can actually be performed on a single-trial basis, providing a hint of which kinds of methodologies might be used, but it is not clear how this would make a specific RP recognisable on-line at this stage of the research. For that reason, initial experiments will need to be carried out in which the computer is programmed to react at different amplitude levels and simply signal to the experimenter that it would now have triggered a TMS pulse if this were the "real experiment". This should then be correlated with the subject's performance of the task of the original Libet experiment, in order to reduce the number of false positives. In this manner, it would hopefully be possible to disturb the "right" RP more often than not. The delay from EEG signal to TMS pulse will probably amount to about 10-20 milliseconds.
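The amplitude-level triggering proposed above can be sketched very roughly as follows. All names and values are hypothetical, and real on-line RP recognition would be far more involved than a bare threshold; the sketch only shows the shape of the preliminary experiments, where the computer "reacts at different amplitude levels".

```python
# Minimal sketch (hypothetical names/values) of a threshold-based on-line
# trigger: stream EEG samples and flag each moment at which a go-signal
# to the TMS would have been sent in the preliminary "dry-run" experiments.
def rp_trigger(samples_uv, threshold_uv):
    """Yield the index of each sample whose negative-going amplitude
    exceeds the chosen threshold (the readiness potential is a slow
    negativity, so we test against -threshold_uv)."""
    for i, amplitude in enumerate(samples_uv):
        if amplitude <= -threshold_uv:
            yield i

# A toy trace: baseline noise, then a slow negative drift like an RP.
trace = [0.5, -1.0, 0.2, -3.0, -6.0, -9.0]
print(list(rp_trigger(trace, 5.0)))  # [4, 5]
```

Correlating such dry-run triggers with the subject's actual task performance, as the paragraph suggests, would then be a matter of counting how often a flagged moment coincided with a genuine pre-movement RP.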

18. When the TMS pulse is delivered, one of two things is expected to happen: either the subject will report that he or she has actually made a conscious decision to move a finger, or the subject will report that this is not the case. If the first possibility occurs, the 500 millisecond delay claimed by Libet and colleagues was a result of measurements that were too inaccurate. If the second possibility is the case, it would of course be interesting to find out exactly when the conscious experience first appears. This can in fact be investigated within the same experimental paradigm, by programming the computer to wait a number of milliseconds after recognising the relevant RP before sending the go-signal to the TMS. If the subject still reports that no decision was made, the delay can of course be prolonged. In this manner, the moment when the experience first appears to the subject may be found.
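The delay-prolonging procedure just described amounts to a simple incremental search, which can be sketched as follows. The function name, step size, and the report callback standing in for the subject's answer are all hypothetical.

```python
# Hedged sketch of the delay-prolonging search: after the RP is recognised,
# wait `delay` ms before the disrupting TMS pulse; if the subject reports no
# conscious decision, prolong the delay and run another trial.
def first_conscious_delay(subject_reports_decision, step_ms=50, max_ms=1000):
    """Return the shortest post-RP delay (ms) at which the subject reports
    having consciously decided before the pulse arrived, or None if no
    such report occurs within the tested range."""
    delay = 0
    while delay <= max_ms:
        if subject_reports_decision(delay):
            return delay
        delay += step_ms
    return None

# If, as Libet claims, the experience lags the RP by about 500 ms, a
# (hypothetical, idealised) subject would first report a decision here:
print(first_conscious_delay(lambda d: d >= 500))  # 500
```

In a real experiment each delay would of course require many trials and a statistical criterion rather than a single yes/no answer; the sketch only shows the logic of prolonging the delay until the report changes.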

19. Obviously, this experiment is to some degree a "control" of Benjamin Libet's experiments. However, it has a more important aim: to find an EEG-measured neural correlate of the first "subjective appearance" of a conscious state. This would be of special significance because it would help us get closer to a "core neural correlate of consciousness". It is a different approach to finding correlates from those pursued so far by Baars (1997) or Hobson (1997), among others, in that their approaches have focused primarily on brain areas that are active when one performs something in a conscious state, compared to what happens when one does something else in a non-conscious state. In the experiments described here, it is the very same function in the very same experimental procedure that is studied in its conscious and non-conscious "phases".

20. First of all, and as briefly mentioned, one would be able to combine this method with different brain scanning methods (fMRI would probably be preferable), and thus combine spatial and temporal information in order to strengthen both methods. For instance, with the TMS method we can refine fMRI studies by knowing which phenomenal states are present for each scan, thus also refining our understanding of the relation between these states and specific neural states. Secondly, it would be of special interest to compare results of the TMS method from experiments on, say, visual consciousness, auditory consciousness, experiments of the Libet type, etc., in order to find significant EEG-measured common features, such as levels of amplitude or frequency. If such features were found, there would be similar neural events associated with becoming conscious of something, regardless of the specific content and the specific brain areas involved. This would also strengthen theories that argue against so-called "Cartesian Theatres", claiming that the neural basis of consciousness is to be found in specific kinds of neural activity rather than in specific areas of the brain. I believe that this viewpoint is present in the work of Benjamin Libet (1985, 1993), and definitely in many of the more recent theories of consciousness (see the volume edited by Metzinger, 2000). Experiments like these might even be helpful, at least in the eyes of some theorists, in the ongoing projects to improve our concepts for describing conscious experience. They might show that certain general neural patterns are shared by all aspects of visual consciousness, or perhaps perceptual consciousness, but differ from those of other types of consciousness (self-awareness, conscious thought processes, etc.), indicating that these should perhaps also be discussed using different concepts.

21. Intuitively, it seems reasonable to deliver the TMS pulse at the motor and premotor areas of the brain, and perhaps also at the prefrontal cortex, in the Libet experiment: the motor areas are obviously involved in the "finger movement task", and these are the regions from which Libet recorded RPs, while the prefrontal regions are generally believed to be involved in decision-making processes. In the perceptual experiment, the choice of brain regions is somewhat more obvious. However, exactly which brain locations will produce the most interesting results must also be studied further in preliminary experiments.

22. After each trial, the subject should not just be asked whether he or she has made a decision or can see something. Instead, the subject is to study his own mental state introspectively and describe his experience. In the Libet case, a fuller description will indicate whether the conscious decision suddenly appears after a certain number of milliseconds' delay (counting from the first appearance of the RP) or whether it is a slow process, where the subject will say things like "maybe I have made a decision" or "I think I was about to make one", etc. This is, in my view, a "more correct" way of using phenomenology in an experiment than Libet's, since the subject does not do any timing or anything else besides just "looking into his own head" and giving a relatively simple report. The only thing the subject provides information about is the one aspect that our technological equipment will always be blind to: his own phenomenological state.

23. It is important to note that the use of a structured questionnaire, or some other means of limiting the subject's answers to a few possible categories such as "yes" or "no", might not be the best way to proceed. That would be almost a return to the classical button-pushing paradigms of cognitive psychology, where the subject is basically conceived of as a variable or an extension of the laboratory equipment. Obviously, rigid categories would force the subject to give imprecise answers that are not congruent with his or her internal state.

24. Training subjects to give more precise descriptions is of course not without problems, in that they may become overly cooperative and give the answers you want, and it could be argued that a mental state changes the moment it is introspectively studied. Yet I believe that informed subjects capable of handling a systematic phenomenology are much to be preferred in such experiments, because no matter how much methodological criticism we might raise, there is no other method for transforming first person knowledge into third person information. In the perception variant of the TMS experiment, the training could be done by asking subjects first to describe what they believe was presented to them (as a forced-choice task), and afterwards to report what they experienced. Due, among other things, to the so-called "induced blindsight effect" [2] (Kolb & Braun, 1995), those two questions are not identical. In order to quantify the reports, it is important to develop a scale as a means of measuring experiences. I have previously done this by letting subjects develop their own scales in a pilot experiment, to ensure that they consider the categories meaningful as descriptions of their experiences and that they feel comfortable using the categories to describe experiences (and nothing else). After the experiment, I have used open qualitative interviews to get an understanding of what exactly they meant by "having a clear experience" or "seeing a vague glimpse" (a theoretical framework for phenomenology along these lines is described in part in Vermersch, 1999).
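The pairing of forced-choice answers with experience reports can be sketched as a simple cross-tabulation. The trial data and category labels below are invented for illustration; the point is that trials where the forced choice is correct despite a "no experience" report - the induced-blindsight pattern - become directly visible in the counts.

```python
# Illustrative sketch (hypothetical data layout): tally each trial's
# forced-choice correctness together with the subject's own experience
# category, keeping the subject-defined categories intact.
from collections import Counter

def tabulate(trials):
    """trials: iterable of (forced_choice_correct, experience_category).
    Returns a Counter over the joint (correctness, category) outcomes."""
    return Counter(trials)

trials = [
    (True, "clear experience"),
    (True, "no experience"),   # correct guess despite reporting no experience
    (False, "vague glimpse"),
    (True, "no experience"),
]
counts = tabulate(trials)
print(counts[(True, "no experience")])  # 2
```

Keeping the counts keyed by the subjects' own categories, rather than collapsing them to "yes/no", respects the qualitative form of the data that the paper insists on.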

25. This research strategy will yield very different data from those obtained with PET or fMRI. The use of those methods is too often (though of course not always) based on the hypothesis that one can in fact find one or several "Cartesian theatres" in the brain - areas that must necessarily be active for an organism to have conscious experience. The approach described here is based on the idea that many - and perhaps any - brain areas could contribute to conscious experience when behaving in certain ways.

26. Now, "the hard problem" concerns an explanation of consciousness. So could we explain anything with an approach like the one described here? Obviously, to answer this question, we must clearly define not only which specific problem we want to explain, but also what we would accept as a satisfactory explanation. In Nagel's terminology, it might be hard to imagine what it would be like to explain consciousness. If we require science to give an explanation that we would consider intuitively logical, similar to analytical propositions such as "a triangle has three corners", then we will of course not be satisfied with experiments like these - obviously so, since the experiments take some sort of mind-brain connection as given, while what we basically want to know is whether and how such a connection is possible.

27. To sum up, I think that a developed phenomenology can help close the gap between different classes of mental phenomena - so-called first and third person phenomena - at least in a non-Cartesian sense. Within an experimental framework, a systematic use of phenomenology is crucial if we are to have a science of mind-brain relations, but the data from any investigations in any such science will never reveal what is really hard about the hard problem - the essence of conscious states. For that, scientists will always need philosophy.


[1] This paper was presented in a slightly different form at the conference "Can there be a science of consciousness", The Consciousness Studies Programme in Skövde, Sweden, 20th-22nd June, 2000.

[2] Kolb & Braun (1995) discuss the fact that subjects seem able to recognise objects presented to them briefly before they consciously perceive them as "the induced blindsight effect". In doing so, they suggest that normal subjects under such conditions might be considered comparable to blindsight patients.


Amassian, V.E., Cracco, R.Q., Maccabee, P.J., Cracco, J.B., Rudell, A. & Eberle, L. (1989): Suppression of visual perception by magnetic coil stimulation of human occipital cortex, Electroencephalography & Clinical Neurophysiology, 74, 458-462

Baars, B. (1997): A thoroughly empirical approach to consciousness: Contrastive analysis, in N. Block, O. Flanagan & G. Guzeldere (eds.): The Nature of Consciousness, MIT Press

Breitmeyer, B.G. (1985): Problems with the psychophysics of intention (commentary on B. Libet, 1985), Behavioral and Brain Sciences, 8, 539-540

Chalmers, D.J. (1995): Facing up to the problem of consciousness, Journal of Consciousness Studies, 2, 200-219

Chalmers, D.J. (1996): The Conscious Mind, Oxford University Press

Chalmers, D.J. (1999): First person methods, Consciousness Bulletin, Center for Consciousness Studies, University of Arizona

Day, B.L., Rothwell, J.C., Thompson, P.D., Maertens de Noordhout, A., Nakashima, K., Shannon, K., Marsden, C.D. (1989): Delay in the execution of voluntary movement by electrical or magnetic brain stimulation in intact man, Brain, 112, 649-663

Haggard, P. & Magno, E. (1999): Localising awareness of action with transcranial magnetic stimulation, Experimental Brain Research, 127, 102-107

Hobson, J.A. (1997): Consciousness as a state-dependent phenomenon, in J.Cohen & J.Schooler (eds.): Scientific Approaches to Consciousness, Lawrence Erlbaum

James, W. (1890): The Principles of Psychology, Harvard University Press, 1983

James, W. (1904): Does "consciousness" exist?, Journal of Philosophy, Psychology, and Scientific Method, 1

Kamitani, Y. & Shimojo, S. (1999): Manifestation of scotomas created by transcranial magnetic stimulation of human visual cortex, Nature Neuroscience, 2 (8), 767-771

Kastner, S., Demmer, I. & Ziemann, U. (1998): Transient visual field defects induced by transcranial magnetic stimulation over human occipital pole, Experimental Brain Research, 118, 19-26

Kolb, F.C. & Braun, J. (1995): Blindsight in normal observers, Nature, 377, 336-338

Libet, B. (1978): Neuronal vs. subjective timing, for a conscious sensory experience, in: Buser, P.A. & Rougeul-Buser, A. (eds.): Cerebral Correlates of Conscious Experience, Elsevier/North Holland Biomedical Press

Libet, B. (1985): Unconscious cerebral initiative and the role of conscious will in voluntary action, Behavioral and Brain Sciences, 8, 529-566

Libet, B., Gleason, C.A., Wright, E.W. & Pearl, D.K. (1983): Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential), Brain, 106, 623-642

Metzinger, T. (ed.) (2000): Neural Correlates of Consciousness, MIT Press

Nagel, T. (1974): What is it like to be a bat? The Philosophical Review, LXXXIII, 435-51.

Varela, F. (1996): Neurophenomenology, Journal of Consciousness Studies, 3, 4, 330-344

Varela, F. & Shear, J. (eds.) (1999): The View From Within, Imprint Academic

Velmans, M. (1999): Intersubjective science, Journal of Consciousness Studies, 6, 2/3, 299-306

Velmans, M. (ed.) (2000a): Investigating Phenomenal Consciousness, John Benjamins

Velmans, M. (2000b): Understanding Consciousness, Routledge/Psychology Press

Vermersch, P. (1999): Introspection as practice, in: Varela, F. & Shear, J. (eds.): The View from Within, Imprint Academic
