Ronald N. Giere (1993) Cognitive Models of Science. Psycoloquy: 4(56) Scientific Cognition (1)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

[Minnesota Studies in the Philosophy of Science, volume 15.
Ronald N. Giere (ed.) 1992 20 Chapters, 508 pgs. Minneapolis:
University of Minnesota Press]
Precis of Giere on Scientific-Cognition

Ronald N. Giere
Department of Philosophy and
Center for Philosophy of Science
University of Minnesota
Minneapolis MN 55455


The cognitive sciences have reached a sufficient state of maturity that they can now provide a valuable resource for philosophers of science who are developing general theories of science as a human activity. Three disciplinary clusters are distinguished: (i) Artificial Intelligence (itself a branch of computer science), (ii) Cognitive Psychology, and (iii) Cognitive Neuroscience. Each of these clusters provides a group of models that might be deployed in approaching problems that are central to the philosophy of science.


Keywords: cognitive science, philosophy of science, cognitive models, artificial intelligence, computer science, cognitive neuroscience.
    This is the Precis of COGNITIVE MODELS OF SCIENCE, edited by
    Ronald N. Giere. This book has been selected
    for multiple review in PSYCOLOQUY. If you wish to submit a formal
    book review (see Instructions following Precis) please write to
    psyc@pucc.bitnet indicating what expertise you would bring to bear
    on reviewing the book if you were selected to review it (if you
    have never reviewed for PSYCOLOQUY or Behavioral & Brain Sciences
    before, it would be helpful if you could also append a copy of your
    CV to your message). If you are selected as one of the reviewers,
    you will be sent a copy of the book directly by the publisher
    (please let us know if you have a copy already). Reviews may also
    be submitted without invitation, but all reviews will be refereed.
    The author will reply to all accepted reviews.


1. This volume grew out of a workshop on implications of the cognitive sciences for the philosophy of science held in October, 1989 under the sponsorship of the Minnesota Center for Philosophy of Science. The idea behind the workshop was that the cognitive sciences have reached a sufficient state of maturity that they can now provide a valuable resource for philosophers of science who are developing general theories of science as a human activity. The hope is that the cognitive sciences might come to play the sort of role that formal logic played for Logical Empiricism or that history of science played for the historical school within the philosophy of science. This development might permit the philosophy of science as a whole finally to move beyond the division between "logical" and "historical" approaches that has characterized the field since the 1960s.

2. The unifying label, "cognitive science," in fact covers a diversity of disciplines and activities. For the purposes of this volume, I distinguish three disciplinary clusters: (i) Artificial Intelligence (itself a branch of computer science), (ii) Cognitive Psychology, and (iii) Cognitive Neuroscience. These clusters tend to be thought of as providing three different levels of analysis, with the functional units becoming more abstract as one moves "up" from neuroscience to artificial intelligence. Each of these disciplinary clusters provides a group of models that might be deployed in approaching problems that are central to the philosophy of science. I begin with cognitive psychology because it seems to me that the models being developed in cognitive psychology are, at least for the moment, the most useful for a cognitive approach to the philosophy of science.


3. Nancy Nersessian provides a prototype of someone drawing on research in the cognitive sciences to solve problems in the philosophy of science. The focus of her research is a problem originating in the historical critique of Logical Empiricism. Logical Empiricism made science cumulative at the observational level while allowing the possibility of change at the theoretical level. But any noncumulative changes at the theoretical level could only be discontinuous. The historical critics argued that science has not been cumulative even at the empirical level. But some of these critics, such as Kuhn and Feyerabend, also ended up with a view of theoretical change as being discontinuous, though for different reasons. Thus was born the problem of "incommensurability." Nersessian's project is to dissolve the problem of incommensurability by showing how the theoretical development of science can be continuous without science as a whole being cumulative.

4. Most historically minded critics of Logical Empiricism took over the assumption that scientific theories are primarily LINGUISTIC entities. The main exception is Kuhn, who gave priority to concrete exemplars over linguistically formulated generalizations. Nersessian adopts a theory of "mental models" as elaborated, for example, by Johnson-Laird (1983). On this approach, language, in the form of propositions, may be used not to describe the world directly, but to construct a "mental model," which is a "structural analog" of a real-world or imagined situation. Once constructed, the mental model may yield "images," which are mental models viewed from a particular perspective. This interplay of propositions, models, and images provides a richer account of the representational resources of scientists than that used by either Logical Empiricists or most of their critics. It may be thought of as an extension of the model-theoretic approach to the nature of scientific theories as elaborated, for example, by Suppe (1989), van Fraassen (1980, 1989), and myself (Giere, 1988). In any case, the cognitive theory of mental models provides the main resource for Nersessian's account of the dynamics of conceptual change in science. Some such account of representation seems sure to become standard within a cognitive approach to the philosophy of science.

5. Another assumption shared by Logical Empiricists and most of their historically based critics is that the basic entities in an account of science are abstractions like "theories," "methods," or "research traditions" (which for both Lakatos and Laudan are explicitly characterized in terms of laws, theories, and methodological rules). Nersessian, by contrast, insists on including the individual scientist as an essential part of her account. Her question is not simply how the theory of electrodynamics developed from the time of Faraday to that of Einstein, but how Faraday, Maxwell, and Einstein, as individual scientists, developed electrodynamics. Theories do not simply develop; they are developed through the cognitive activities of particular scientists. It is the focus on scientists, as real people, that makes possible the application of notions from cognitive psychology to questions in the philosophy of science.

6. Nersessian's insistence on the role of human agency in science is strongly reinforced by David Gooding's analysis of the path from actual experimentation to the creation of demonstration experiments, to the development of theory. Insisting that all accounts of scientific activity, even those recorded in laboratory notebooks, involve reconstruction, Gooding distinguishes six types, or levels, of reconstruction. Standard philosophical reconstructions, which Gooding labels "normative," are last in the sequence. The first are "cognitive" reconstructions, with "rhetorical" and "didactic" reconstructions being among the intermediate types. Gooding is particularly insistent on the importance of "procedural knowledge," such as laboratory skills, in the cognitive development of science.

7. Gooding concludes his paper by arguing that the power of thought experiments derives in part from the fact that they embody tacit knowledge of experimental procedures. This argument complements Nersessian's analysis of how an "experiment" carried out in thought can have such an apparently powerful empirical force. She argues that conducting a thought experiment is to be understood as using a mental model of the experimental situation to run a simulation of a real experiment. The empirical content is built into the mental model, which includes procedural knowledge.

8. Ryan Tweney was among the first of recent theorists to advocate a cognitive approach to the study of science, and he has pursued this approach in both experimental and historical contexts. Here he explores some implications of the recent vogue for parallel processing for the study of science as a cognitive process. Tweney acknowledges the importance of having models which could plausibly be implemented in a human brain, but he is less impressed by neuroscientific plausibility than by the promise of realistic psychological models of perception, imagery, and memory -- all of which he regards as central to the process of science.

9. Rather than joining the debate between advocates of serial and parallel models, Tweney takes a third route which focuses attention on cognitive activities in natural contexts -- leaving the question of which sort of model best fits such contexts to be decided empirically on a case by case basis. But it is clear that Tweney is impressed with the promise of parallel models, even though, as he points out, they have yet to be applied very successfully to higher level cognitive processes. Here he considers two applications: (1) an account of the memory aids used by Michael Faraday to index his notebooks, and (2) Paul Thagard's analysis of scientific revolutions using a parallel network implementation (ECHO) of a theory of explanatory coherence. He finds the concept of parallel processing useful in the first case but superfluous in the second.

10. For nearly two decades, sociologists of science have been gathering under a banner labeled "The Social Construction of Scientific Knowledge." The above papers suggest that we can equally well speak of "The Cognitive Construction of Scientific Knowledge." There are, however, two important differences in the ways these programs are conceived. First, unlike social constructionists, cognitive constructionists make no claims of exclusivity. We do not insist that cognitive construction is all there is. Second, social constructionists typically deny, or claim indifference to, any genuine representational connection between the claims of scientists and an independently existing world. By contrast, connections with the world are built into the cognitive construction of scientific knowledge. This is particularly clear in Gooding's paper, which emphasizes the role of procedural knowledge in science.

11. The historical movement in the philosophy of science made conceptual change a focus of research in the history and philosophy of science. It has subsequently become a research area within cognitive psychology as well, although Piaget had already made it a focus of psychological research in Europe several decades earlier. Indeed, one major strand of current research may be seen as an extension of Piaget's program which used conceptual development in children as a model for conceptual development in science (Gruber and Voneche 1977). This line of research is represented here by Susan Carey.

12. Carey works within the "nativist" tradition which holds that at least some concepts are innate, presumably hard-wired as the result of our evolutionary heritage. The question is what happens to the conceptual structure possessed by a normal human in the natural course of maturation, apart from explicit schooling. An extreme view is that conceptual development consists only of "enrichment," that is, coming to believe new propositions expressed solely in terms of the original set of innate concepts. Another possible view is that humans also form new concepts by differentiation and combination. Objects become differentiated to include animals, then dogs. Colors become differentiated into red, green, blue, etc. Combination produces the concept of a red dog (an Irish Setter). Carey argues that normal development also produces conceptual systems that are, in Kuhn's (1983) terms, "locally incommensurable" with earlier systems.

13. Carey takes pains to argue that local incommensurability between children's and adults' concepts does not mean that adults and children cannot understand one another, that children do not learn language by interacting with adults, or that psychologists cannot explain the child's conceptual system to others. So the concept of incommensurability used here has none of the disastrous implications often associated with philosophical uses of this notion. It seems, therefore, that philosophers and psychologists may at last have succeeded in taming the concept of incommensurability, turning it into something that can do useful work.

14. The shift from novice to expert provides another model recently exploited by cognitive psychologists to study conceptual change in science. Michelene Chi has been a leader in this research. Here, however, she treats conceptual change in more general terms. She argues that even Carey's notion of change between incommensurable conceptual systems is not strong enough to capture the radical nature of the seventeenth-century revolution in physics. That revolution, she argues, involved a more radical conceptual shift because there was a shift in ontological categories. In particular, the conceptual system prior to the scientific revolution mainly used concepts within the ontological category of "material substance," whereas the new physical concepts were mainly relational, covering what she calls "constraint-based events." According to Chi's analysis, therefore, the difficulty people have moving beyond an undifferentiated weight/density concept is due to difficulty in conceiving of weight as relational rather than substantial. Density, being an intrinsic property of objects (mass per unit volume), is developmentally the more primitive concept.

15. The final two papers in this section use a cognitive approach to problems that were prominent among Logical Empiricists. Questions about the nature of observation, and, more technically, measurement, were high on the agenda of Logical Empiricism. That was in large part because of the foundational role of observation in empiricist epistemology. But even if one abandons foundationist epistemology, there are still interesting questions to be asked about observation and measurement. Richard Grandy explores several such issues from the general perspective of cognitive agents as information processors.

16. One topic Grandy explores is the relative information provided by the use of various types of measurement scales. Grandy demonstrates that the potential information carried by a measurement typically increases as one moves from nominal, to ordinal, to ratio scales. More surprising, he is able to show that what he would regard as observation sentences typically convey more information than ordinal scale measurements, though not as much as ratio scale measurements. This is but one step in a projected general program to analyze the contributions of new theories, instruments, and methods of data analysis in terms of their efficiency as information generators or processors. Such an analysis would provide a "cognitive" measure of scientific progress.
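Grandy's ordering of scale types by potential information can be illustrated with a toy calculation. The idea is that a measurement's potential information grows with the number of equally likely outcomes it can distinguish; the particular category counts, orderings, and precision below are invented for illustration and are not Grandy's own formalism.

```python
import math

def bits(n_outcomes):
    """Potential information, in bits, of a measurement that
    distinguishes n equally likely outcomes."""
    return math.log2(n_outcomes)

# A nominal scale sorting items into 4 unordered categories:
nominal = bits(4)                     # 2 bits

# An ordinal scale that also ranks the 4 items; counting each of
# the 4! possible orderings as a distinct report:
ordinal = bits(math.factorial(4))     # about 4.6 bits

# A ratio scale read against 1000 distinguishable magnitudes:
ratio = bits(1000)                    # about 10 bits

# The ordering Grandy describes: nominal < ordinal < ratio.
assert nominal < ordinal < ratio
```

On this crude counting, an observation sentence that both categorizes and partially orders its subject matter would fall between the ordinal and ratio figures, which is where Grandy locates it.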

17. In the final paper of this section, Wade Savage explores the possibility of using recent cognitive theories of perception to develop a naturalized foundationalist empiricism. He begins by distinguishing strong from weak foundationalism. Strong foundationalism is the view that some data provided by sensation or perception are both independent (not based on further data) and infallible (incapable of error). Weak foundationalism holds only that some data of sensation or perception are more independent and more reliable than other data. Savage's view is that weak foundationalism provides a framework for a naturalistic theory of conscious human knowledge and strong foundationalism provides a framework for a naturalistic theory of unconscious human knowledge. The mistake of the classical foundationalists, he claims, is to have assumed that strong foundationalism could be a theory of conscious knowledge.


18. Among the many cross currents within the fields of computer science and artificial intelligence is a tension between those who wish to use the computer as a means to study the functioning of human intelligence and those who see the computer primarily as a tool for performing a variety of tasks quite apart from how humans might in fact perform those same tasks. This tension is evident in the original work on "discovery programs" inspired by Herbert Simon (1977) and implemented by Pat Langley, Simon, and others (Langley et al. 1987). This work has demonstrated the possibility of developing programs which can uncover significant regularities in various types of data using quite general heuristics. Among the prototypes of such programs are BACON, GLAUBER, and KEKADA (Kulkarni and Simon 1988). BACON, for example, easily generates Kepler's laws beginning only with simple data on planetary orbits.

19. One way of viewing such programs is as providing "normative models" in the straightforward instrumental sense that these models provide good means for accomplishing well-defined goals. This use of AI is exhibited in this volume by the papers of Gary Bradshaw and Lindley Darden. Bradshaw, who began his career working with Simon and Langley, applies Simon's general approach to problem solving to invention in technology. He focuses on the much discussed historical question of why the Wright brothers were more successful at solving the problem of manned flight than their many competitors. Dismissing a variety of previous historical explanations, Bradshaw locates the crucial difference in the differing heuristics of the Wright brothers and their competitors. The Wright brothers, he argues, isolated a small number of functional problems which they proceeded to solve one at a time. They were thus exploring a relatively small "function space" while their competitors were exploring a much larger "design space."

20. Darden proposes applying AI techniques developed originally for diagnosing breakdowns in technological systems to the problem of "localizing" and "fixing" mistaken assumptions in a theory which is faced with contrary data. Here she outlines the program and sketches an application to the resolution of an empirical anomaly in the history of Mendelian genetics. Darden is quite clear on the goal of her work: "The goal," she writes, "is not the simulation of human scientists, but the making of discoveries about the natural world, using methods that extend human cognitive capacities."

21. Programs like those of Darden and others are potentially of great scientific utility. That potential is already clear enough to inspire many people to develop them further. How useful such programs will actually prove to be is not something that can be decided a priori. We will have to wait and see. The implications of these sorts of programs for a cognitive philosophy of science are mainly indirect. The fact that they perform as well as they do can tell us something about the structure of the domains in which they are applied and about possible strategies for theorizing in those domains.

22. Others see AI as providing a basis for much farther reaching philosophical conclusions. The papers by Greg Nowak and Paul Thagard, and by Eric Freedman, apply Thagard's (1989) theory of explanatory coherence (TEC) to the Copernican revolution and to a controversy in psychology, respectively. Nowak and Thagard hold both (a) that the objective superiority of the Copernican theory over the Ptolemaic theory is shown by its greater overall explanatory coherence, and (b) that the triumph of the Copernican theory was due, at least in part, to the intuitive perception of its greater explanatory coherence by participants at the time.

23. Thagard, who advocates a "computational philosophy of science" (Thagard 1988), implements his theory of explanatory coherence in a connectionist program, ECHO, which utilizes localized representations of propositions. It has been questioned (for example by Glymour and Tweney in this volume) whether ECHO is doing anything more than functioning as a fancy calculator. All the real work seems to be done by the conditions for explanatory coherence -- which have nothing essential to do with research in cognitive science.
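The flavor of ECHO's computation can be conveyed by a minimal constraint-satisfaction sketch: propositions are network units, explanation relations are excitatory links, contradictions are inhibitory links, and activations are updated until the network settles. The propositions, weights, and update-rule details here are illustrative assumptions in the spirit of Thagard's description, not his published program or parameters.

```python
# Toy ECHO-style network: two rival hypotheses H1 and H2, one
# evidence unit E1. H1 explains E1 (excitatory link); H1 and H2
# contradict each other (inhibitory link). All values illustrative.

units = ["H1", "H2", "E1"]
links = {("H1", "E1"): 0.05,   # explanation: excitatory
         ("H1", "H2"): -0.2}   # contradiction: inhibitory
act = {u: 0.01 for u in units} # small initial activations
act["E1"] = 1.0                # evidence unit clamped on

for _ in range(100):           # relax until the network settles
    new = {}
    for u in units:
        if u == "E1":
            new[u] = 1.0       # keep evidence clamped
            continue
        # net input: weighted activation from each linked unit
        net = sum(w * act[v]
                  for (a, b), w in links.items()
                  for v in ((b,) if a == u else (a,) if b == u else ()))
        # standard connectionist update, activations bounded in [-1, 1]
        grow = net * (1 - act[u]) if net > 0 else net * (act[u] + 1)
        new[u] = max(-1.0, min(1.0, act[u] * 0.95 + grow))
    act = new

# H1, which explains the evidence, settles at positive activation
# (accepted); its rival H2 is driven negative (rejected).
```

The objection reported above can be read directly off this sketch: the settling procedure is generic numerical relaxation, and all of the substantive work lies in deciding which links to draw and with what weights -- that is, in the theory of explanatory coherence itself.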

24. Freedman's study provides a partial response to these objections. He uses TEC and ECHO to analyze the famous controversy between Tolman and Hull over the significance of Tolman's latent learning experiments. Applying TEC and ECHO, Freedman finds that Tolman's cognitive theory is favored over Hull's behaviorist theory. Yet historically, Hull's approach prevailed for many years. By varying available parameters in ECHO, Freedman shows several ways in which ECHO can be made to deliver a verdict in favor of Hull. For example, significantly decreasing the importance of the latent learning data can tip the balance in favor of Hull's theory. To Freedman, this provides at least a suggestion for how the actual historical situation might be explained. So ECHO does some work. But this study also makes it obvious that to decide among the possibilities suggested by varying different parameters in ECHO, one would have to do traditional historical research. ECHO cannot decide the issue.


25. The relevance of models from the neurosciences to the philosophy of science is here argued by the primary advocate of the philosophical relevance of such models, Paul Churchland. It is Churchland's (1989) contention that we already know enough about the gross functioning of the brain to make significant claims about the nature of scientific knowledge and scientific reasoning. Here he argues that a "neurocomputational" perspective vindicates (more precisely, "reduces") a number of claims long advocated by Paul Feyerabend. For example: "competing theories can be, and occasionally are, incommensurable," and "the long term best interests of intellectual progress require that we proliferate not only theories, but research methodologies as well."

26. Whatever one's opinion of Churchland's particular claims, I think we must all agree that the neurosciences provide a very powerful and indisputable constraint on any cognitive philosophy of science. Whatever cognitive model of scientific theorizing and reasoning one proposes, it has to be a model that can be implemented by humans using human brains.


27. Except during momentary lapses of enthusiasm, no one thinks that a cognitive theory of science could be a complete theory of science. The cognitive activities of scientists are embedded in a social fabric whose contribution to the course of scientific development may be as great as that of the cognitive interactions between scientists and the natural world. Thus cognitive models of science need to be supplemented with social models. The only requirement is that the two families of models fit together in a coherent fashion.

28. There are those among contemporary sociologists of science who are not so accommodating. Latour and Woolgar, for example, are now famous for suggesting a ten year moratorium on cognitive studies of science, by which time they expect to have constructed a complete theory of science which requires no appeal to cognitive categories. Such voices are not directly represented in this volume, but they do have supporters nonetheless.

29. Houts and Haddock agree with the sociological critics of cognitivism in rejecting the use of cognitive categories like representation or judgment in a theory of science. But they insist there is need for a genuine psychology of science. From a cognitivist point of view, these are incompatible positions. For Houts and Haddock these positions are not incompatible because their psychology of science is based on the behaviorist principles of B. F. Skinner. In Skinnerian theory, the determinants of behavior are to be found in the environment, both natural and social, which provides the contingencies of reinforcement. There is no need for any appeal to "mental" categories such as representation or judgment. Several commentators, for example Slezak (1989) and myself (Giere 1988), have criticized behaviorist tendencies in the writings of sociologists of science. For Houts and Haddock, these tendencies are not a basis for criticism, but a positive virtue. They make possible a unified approach to both the psychology and the sociology of science.

30. Within cognitive psychology there is a tradition, already several decades old, in which scientific reasoning tasks are simulated in a laboratory setting. Gorman reviews this tradition and compares it with the more recent tradition of computational simulation pioneered by Simon and represented in this volume by Thagard. He relies heavily on the distinction between externally valid and ecologically valid claims. A claim is externally valid if it generalizes well to other well-controlled, idealized conditions. A claim is ecologically valid if it generalizes well to natural settings, for example, to the reasoning of scientists in their laboratories. Gorman argues that, while both laboratory and computer simulations may be externally valid, laboratory studies are more ecologically valid. Granting this conclusion, however, does little to remove doubts about the ecological validity of laboratory studies themselves.

31. Gorman proposes bridging the gap between cognitive and social studies of science by designing experimental simulations which include social interactions among the participants. Here experimental paradigms from social psychology are merged with those that have been used in the experimental study of scientific reasoning. Gorman's hope is that one might eventually develop experimental tests of claims made by sociologists as well as by more theoretical "social epistemologists" such as Steve Fuller.

32. Fuller himself questions a central presupposition of most cognitive approaches to the philosophy of science, namely, that the individual scientist is the right unit of analysis for any theory of science. Not that he advocates returning to abstract entities like theories. Rather he thinks that the appropriate unit will turn out to be something more like a biological species than an individual scientist. Bruno Latour's (1987) "actor network" may be a good example of the kind of thing Fuller expects might emerge as the proper unit of study. Fuller's argument is both historical and critical. He sketches an account of how the individual scientist came to be regarded as the basic entity for epistemology generally and why this assumption has led to difficulties in several areas, particularly in analytic epistemology, but also in Churchland's neurocomputational approach.


33. Clark Glymour was among the first philosophers of science to grasp the possibility of deploying methods and results from the cognitive sciences, particularly artificial intelligence, in the philosophy of science itself. (Herbert Simon, whom I would definitely wish to claim as a philosopher of science, must surely have been the first.) But as his contribution to this volume makes crystal clear, Glymour is quite disappointed with what some other philosophers of science have been doing with this strategy. Here he expresses his disappointment with work by three of the participants in the Minnesota workshop: Churchland, Thagard, and Giere. By mutual agreement, Glymour's comments appear as he wrote them. They are followed by replies from each of the three named subjects of his remarks.


Churchland, P. M. (1989) A Neurocomputational Perspective. Cambridge: MIT Press.

Giere, R. N. (1988) Explaining Science: A Cognitive Approach. Chicago: University of Chicago Press.

Gruber, H. and J. J. Voneche, eds. (1977) The Essential Piaget. New York: Basic Books.

Johnson-Laird, P. N. (1983) Mental Models. Cambridge: Harvard Univ. Press.

Kuhn, T. S. (1983) Commensurability, Comparability, Communicability. In PSA 1982, eds. P. Asquith and T. Nickles, 669-687. East Lansing, MI: The Philosophy of Science Association.

Kulkarni, D., and H. Simon (1988) The Processes of Scientific Discovery: The Strategy of Experimentation. Cognitive Science 12:139-175.

Langley, P., H. A. Simon, G. L. Bradshaw, and J. M. Zytkow (1987) Scientific Discovery. Cambridge: MIT Press.

Latour, B. (1987) Science in Action. Cambridge: Harvard Univ. Press.

Simon, H. A. (1977) Models of Discovery. Dordrecht-Holland: Reidel.

Slezak, P. (1989) Scientific Discovery by Computer as Empirical Refutation of the Strong Programme. Social Studies of Science 19:563-600.

Suppe, F. (1989) The Semantic Conception of Theories and Scientific Realism. Urbana, IL: Univ. of Illinois Press.

Thagard, P. (1988) Computational Philosophy of Science. Cambridge: MIT Press.

Thagard, P. (1989) Explanatory Coherence. Behavioral and Brain Sciences 12:435-467.

van Fraassen, B. C. (1980) The Scientific Image. Oxford: Oxford Univ. Press.

van Fraassen, B. C. (1989) Laws and Symmetry. Oxford: Oxford Univ. Press.


    Table of Contents of COGNITIVE MODELS OF SCIENCE

    Ronald N. Giere:  Cognitive Models of Science

    Nancy J. Nersessian:
        How do Scientists Think? Capturing the Dynamics of Conceptual
        Change in Science
    David Gooding:
        The Procedural Turn, or Why do Thought Experiments Work?
    Ryan D. Tweney:
        Serial and Parallel Processing in Scientific Discovery
    Susan Carey:
        The Origin and Evolution of Everyday Concepts
    Michelene T. H. Chi:
        Conceptual Change Within and Across Ontological Categories:
        Examples from Learning and Discovery in Science
    Richard E. Grandy:
        Information, Observation, and Measurement from the
        Viewpoint of a Cognitive Philosophy of Science
    C. Wade Savage:
        Foundationalism Naturalized

    Gary Bradshaw:
        The Airplane and the Logic of Invention
    Lindley Darden:
        Strategies for Anomaly Resolution
    Greg Nowak & Paul Thagard:
        Copernicus, Ptolemy, and Explanatory Coherence
    Eric G. Freedman:
        Understanding Scientific Controversies from a
        Computational Perspective: The Case of Latent Learning

    Paul M. Churchland:
        A Deeper Unity: Some Feyerabendian Themes in Neurocomputational Form

    Arthur C. Houts & C. Keith Haddock:
        Answers to Philosophical and Sociological Uses of Psychologism
        in Science Studies: A Behavioral Psychology of Science
    Michael E. Gorman:
        Simulating Social Epistemology:
        Experimental and Computational Approaches
    Steve Fuller:
        Epistemology Radically Naturalized:
        Recovering the Normative, The Experimental, and the Social

    Clark Glymour:
        Invasion of the Mind Snatchers
    Replies to Glymour:
        Paul M. Churchland: Reconceiving Cognition
        Ronald N. Giere: What The Cognitive Study of Science is Not
        Paul Thagard: Computing Coherence



PSYCOLOQUY is a refereed electronic journal (ISSN 1055-0143) sponsored on an experimental basis by the American Psychological Association and currently estimated to reach a readership of 35,000. PSYCOLOQUY publishes brief reports of new ideas and findings on which the author wishes to solicit rapid peer feedback, international and interdisciplinary ("Scholarly Skywriting"), in all areas of psychology and its related fields (biobehavioral, cognitive, neural, social, etc.) All contributions are refereed by members of PSYCOLOQUY's Editorial Board.

Target article length should normally not exceed 500 lines [c. 4000 words]. Commentaries and responses should not exceed 200 lines [c. 1600 words].

All target articles, commentaries and responses must have (1) a short abstract (up to 100 words for target articles, shorter for commentaries and responses), (2) an indexable title, (3) the authors' full name(s) and institutional address(es).

In addition, for target articles only: (4) 6-8 indexable keywords, (5) a separate statement of the authors' rationale for soliciting commentary (e.g., why would commentary be useful and of interest to the field? what kind of commentary do you expect to elicit?) and (6) a list of potential commentators (with their email addresses).

All paragraphs should be numbered in articles, commentaries and responses (see format of already published articles in the PSYCOLOQUY archive; line length should be < 80 characters, no hyphenation).

It is strongly recommended that all figures be designed so as to be screen-readable ascii. If this is not possible, the provisional solution is the less desirable hybrid one of submitting them as postscript files (or in some other universally available format) to be printed out locally by readers to supplement the screen-readable text of the article.

PSYCOLOQUY also publishes multiple reviews of books in any of the above fields; these should normally be the same length as commentaries, but longer reviews will be considered as well. Book authors should submit a 500-line self-contained Precis of their book, in the format of a target article; if accepted, this will be published in PSYCOLOQUY together with a formal Call for Reviews (of the book, not the Precis). The author's publisher must agree in advance to furnish review copies to the reviewers selected.

Authors of accepted manuscripts assign to PSYCOLOQUY the right to publish and distribute their text electronically and to archive and make it permanently retrievable electronically, but they retain the copyright, and after it has appeared in PSYCOLOQUY authors may republish their text in any way they wish -- electronic or print -- as long as they clearly acknowledge PSYCOLOQUY as its original locus of publication. However, except in very special cases, agreed upon in advance, contributions that have already been published or are being considered for publication elsewhere are not eligible to be considered for publication in PSYCOLOQUY.

Please submit all material to psyc@pucc.bitnet. The anonymous ftp archive is in DIRECTORY pub/harnad/Psycoloquy.
