David Klahr (1994) Searching for Cognition in Cognitive Models of Science. Psycoloquy: 5(69) Scientific Cognition (12)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 5(69): Searching for Cognition in Cognitive Models of Science

Book Review of Giere on Scientific-Cognition

David Klahr
Department of Psychology
Carnegie Mellon University
Pittsburgh, PA 15213



The goal of Cognitive Models of Science (1992) is to demonstrate the ways in which the field of Cognitive Science is becoming a resource for Philosophy of Science. However, it fails to provide a framework for understanding much of the potentially relevant work in Cognitive Science. In this review, I suggest one such framework, and provide a "reader's guide" to the work in cognitive psychology that directly addresses questions about the psychology of science.


Cognitive science, philosophy of science, cognitive models, artificial intelligence, computer science, cognitive neuroscience.


1. The fundamental premise of Cognitive Models of Science (CMS) is that Philosophy of Science stands to learn something by paying attention to recent developments in Cognitive Science (Giere, 1992, 1993). Note that this is not just a claim that the emergence of Cognitive Science has provided a new set of interesting objects for philosophical comment. As Giere puts it, "the reverse is happening" and the field is becoming a "resource for the philosophical study of science as a cognitive activity" [p. xvi]. (All references in square brackets are to CMS pages.) I think that the premise is correct, but in this review, I criticize the volume for its inadequate treatment of what I view as the work in Cognitive Science that is most relevant to the goal.

2. This sub-area has come to be known as Cognitive Studies of Science (CSS). I believe that CMS fails to provide a framework that makes it possible to understand CSS. This is a serious inadequacy, given the goal of CMS: to inform philosophers about what Cognitive Science could tell us about science. In order to help point readers in the right direction, I will (in paragraphs 16-24 below) provide a framework that makes it possible to understand what is happening in the field of CSS.

3. Most definitions of Cognitive Science (e.g., Posner, 1989) include, as component disciplines, Cognitive Psychology, Artificial Intelligence, Philosophy, Linguistics, and Neuroscience, as well as Anthropology, Economics and Social Psychology. In Cognitive Models of Science (CMS) Giere defines the field more narrowly, in terms of three disciplinary clusters: "(1) artificial intelligence, (2) cognitive psychology, and (3) cognitive neuroscience. These clusters tend to be thought of as providing three different levels of analysis, with the functional units becoming more abstract as one moves up from neuroscience to artificial intelligence" [p. xvi]. Given the book's premise, this definition is important, and has interesting implications.

4. One positive consequence of Giere's definition is that by not adhering to the conventional inclusion of Philosophy as a component of Cognitive Science that could inform Philosophy of Science, he avoids a slope of endless recursion. However, the circumscribed definition may leave too narrow a base of activities to get the attention of Philosophers. Thus, of the three remaining clusters, one -- cognitive neuroscience -- seems quite tangential here. Although Churchland's chapter attempts to demonstrate the connection, I am unable to see its potential as a resource to the Philosophy of Science. There are simply too many levels of analysis between connectionist models of the lateral geniculate nucleus and the cognitive processes that influence scientific discovery. (See also Shafto's (1994) review, par. 3 for a similar argument.) This leaves only two subdisciplines of Cognitive Science as the ones to which Philosophy of Science should be attending: Artificial Intelligence and Cognitive Psychology.

5. These are certainly relevant areas, but their treatment in CMS is problematic. For one thing, the distinction between them is not clearly made in CMS (see pars. 6-7 below). For another, much of the activity within these fields that does directly address issues of scientific discovery is inadequately represented in CMS.

6. The field of Artificial Intelligence (AI) attempts to understand the abstract nature of intelligent computation as well as to create systems that exhibit intelligence, whereas Cognitive Psychology attempts to understand how intelligence is manifested in humans. Darden succinctly distinguishes AI from Cognitive Psychology: "The goal [of AI] is not the simulation of human scientists, but the making of discoveries about the natural world, using methods that extend human cognitive capacities" [p. 252]. Unfortunately, CMS does not describe any AI models that have successfully effected such extensions -- such as Valdes-Perez's (1994a, b) systems for discoveries in chemistry and physics, Fajtlowicz's in mathematics (Erdos, Fajtlowicz & Staton, 1991), or Callahan and Sorensen's (1992) in the social sciences -- so the potential of AI is not really demonstrated here.

7. Moreover, the classification of some of the chapters in CMS as instances of AI rather than Cognitive Psychology strikes me as mistaken. For example, Nowak and Thagard's chapter on ECHO is included as an example of AI, rather than Cognitive Psychology, apparently because the theory is computational. But ECHO is supposed to be a cognitive model, not an AI model in Darden's sense. Thagard makes this particularly clear in his very interesting book on conceptual revolutions (Thagard, 1992). Indeed, Giere similarly misclassifies the influential computational models of scientific discovery produced by Simon and his colleagues (Langley, et al., 1987) as examples of AI, rather than Cognitive Psychology, even though Langley et al. make it abundantly clear that they regard their computational systems as theories of the cognitive processes used by Kepler, Glauber, etc. in making their discoveries. Langley et al. make no claim that their programs, in their current form, should be taken as systems that extend human capacity or that can be used to make new discoveries. Similarly, Bradshaw's chapter in CMS on the Wright brothers' inventive processes is a cognitive analysis, not an AI model.

8. There IS a large and rapidly growing literature on the cognitive (and developmental) psychology of science. (A good summary of the field in its infancy can be found in Tweney, Doherty & Mynatt, 1981). Unfortunately, the current work is so inadequately represented in CMS that I would not be surprised if most who practice Philosophy of Science find little of relevance to that practice in CMS. I view CMS as having a worthwhile set of goals, but I am disappointed by the extent to which it misses the mark. For those similarly stimulated by the potential of CMS and then frustrated by its execution, in the following paragraphs I attempt to lay out the nature of the field, how it is organized, and where it is headed.


9. Empirical investigations of science can be organized into four relatively distinct categories. The first category includes nonpsychological approaches that analyze the major discoveries in terms of political, anthropological, or social forces and mechanisms. The second category includes psycho-historical accounts of the purported cognitive and motivational processes of the focal scientists (e.g., Holmes, 1985). This approach, exemplified in CMS in the chapters by Gooding and Nersessian, is based primarily on retrospective analyses of diaries, autobiographies, lab notebooks, correspondence, and so on. It has produced some intriguing analyses, but the reliability of the scientists' accounts that provide the raw data for such analyses is always in doubt: "But did they REALLY think this way?" asks Nersessian [p. 36]. "In the end we all face the recorded data and know that every piece is in itself a reconstruction by its author" [p. 36].

10. The third category includes computational models of discovery (cf. Shrager & Langley, 1990 for an introduction to this literature). In this approach, epitomized by the work of Langley, et al. (1987), one constructs computational models of key steps in major scientific discoveries. Some models focus on Artificial Intelligence goals without regard for psychological plausibility (cf. Darden's and Freedman's chapters), whereas others try to replicate human performance on discovery tasks ranging from "simulated science" to real scientific discoveries. The approach bears some similarity to the psycho-historical accounts, in that it also looks at the historical record. However, it differs from such accounts in that it proposes cognitive mechanisms sufficiently specific to be cast as computational models that can make the same discoveries as did the focal scientist (Kulkarni & Simon, 1990).

11. The fourth category, and the one on which I will focus in this review, includes psychological studies of subjects in simulated discovery contexts. This is a flourishing area of CSS. The general paradigm is to present people with problems that purport to isolate one or more essential aspects of "real world" science and to carefully observe their problem-solving processes. Subjects can be selected from characteristic populations (e.g., scientists, college sophomores, school children, and so on). The "thing-to-be-discovered" can range from something as simple and arbitrary as a "rule" that the experimenter has in mind (Gorman, in Giere, 1992; Wason, 1960) to something as complex as the physics of an artificial universe (Mynatt, Doherty & Tweney, 1977) or the mechanisms of genetic inhibition (Dunbar, 1994). The advantage of this approach is that it enables the researcher to exert some control over subjects' prior knowledge and complete control over the "state of nature" (the thing to be discovered). Most important, it enables the researcher to observe the dynamic course of scientific discovery in great detail.


12. Gorman's chapter discusses the ecological validity of simple versions of such laboratory studies, but Giere remains skeptical about the relevance of the approach. He comments that Gorman's chapter does "little to remove doubts about the ecological validity of laboratory studies themselves" [p. xxvi]. Such doubts would seem to undermine the very premise of the book, because, as I argued above, one of the primary sources of the potential contribution of Cognitive Science to Philosophy of Science comes from investigations done in the psychology laboratory. Perhaps Giere's skepticism derives from Gorman's neglect of many studies (cited in paragraph 24 below) that go far beyond Wason's (1960) original 2-4-6 task.
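For readers unfamiliar with that paradigm, the logic of the 2-4-6 task can be sketched in a few lines of Python. The hidden rule below is Wason's (any ascending triple); the subject's test triples are invented for illustration:

```python
# A minimal sketch of Wason's (1960) 2-4-6 rule-discovery task.
# Subjects propose number triples and receive yes/no feedback from the
# experimenter, whose hidden rule is "any strictly ascending sequence".

def experimenter_rule(triple):
    """The experimenter's hidden rule: strictly ascending numbers."""
    a, b, c = triple
    return a < b < c

# A subject who hypothesizes "even numbers increasing by two" and tests
# only confirming instances receives "yes" on every trial ...
confirming_tests = [(2, 4, 6), (8, 10, 12), (20, 22, 24)]
print([experimenter_rule(t) for t in confirming_tests])  # [True, True, True]

# ... whereas a triple the hypothesis predicts should FAIL is far more
# informative: a "yes" here refutes the overly narrow hypothesis.
print(experimenter_rule((1, 2, 3)))  # True -- hypothesis refuted
```

The sketch shows why confirming tests alone cannot discriminate the subject's narrow hypothesis from the experimenter's broader rule, which is the phenomenon at issue in the laboratory studies discussed above.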

13. But even the "simple" tasks tap everyday mental processes that are fundamental to scientific thinking. Major scientific discoveries are not labelled as such because of any unusual cognitive processes that produced them but rather because of the importance of what is discovered. For example, in reflecting on the discovery of the structure of DNA, Francis Crick (1988) said: "I think what needs to be emphasized about the discovery of the double helix is that the path to it was, scientifically speaking, fairly commonplace. What was important was NOT THE WAY IT WAS DISCOVERED, but the object discovered -- the structure of DNA itself" (p. 67; emphasis added). This "nothing special" view of the processes underlying scientific creativity is elaborated by Perkins (1981), Weisberg (1993), and Boden (1990, 1994).

14. But if the processes are so common then what makes science special? Again, I refer to Shafto's excellent review: "Neither Gooding nor Nersessian, however, address the question of the SPECIFIC characteristics of scientific reasoning. Thus, Johnson-Laird's theory of mental models applies equally well to a broad range of reasoning tasks, including scientific reasoning, syllogistic reasoning, and sentence-picture comparison. Furthermore, it applies to (and is intended to explain) incorrect, as well as correct, reasoning." (Shafto, 1994, par. 10)

15. Let me try to answer Shafto's question. One important distinction between scientific reasoning and the list of "standard" psychology tasks enumerated by Shafto is that scientists apply these domain-general processes in the context of an immense domain-specific, cumulative, shared, and public knowledge base. This base includes facts about the domain, procedures, instrumentation, experimental paradigms, data-analytic methods (cf. Bookstein, 1993), publication practices (cf. Bazerman, 1988) and so on. Until recently, one of the biggest weaknesses of simulated discovery contexts was their lack of much domain-specific knowledge. But recent work has begun to include sufficient domain-specific knowledge to reveal how it influences the formation and evaluation of hypotheses.


16. Before giving illustrative examples of CSS that are not represented in CMS, I will provide a framework for understanding the field of CSS. I will attempt to simultaneously emphasize some of the relations among the CSS chapters that ARE in CMS, as well as illustrate the important lacunae in CMS. The cognitive processes involved in scientific discovery can be classified along two dimensions: one representing degree of domain-specificity or domain-generality, and the other representing the type of processes involved. Table 1 depicts this characterization of the field. The rows focus on the difference between domain-general knowledge and domain-specific knowledge (see paragraphs 18-19 below) and the columns focus on the major components of the overall discovery process (see paragraphs 20-22 below).

    Table 1: Types of foci in psychological studies of scientific
    reasoning processes

                   Hypothesis Space     Experiment Space    Evidence
                       Search               Search          Evaluation

    Domain-specific       A                   B                 C

    Domain-general        D                   E                 F

17. The inherent difficulty of studying all of this simultaneously has led most researchers to follow a divide-and-conquer strategy in which they focus on limited aspects of the overall picture. In this paragraph I will list a few characteristic studies. Readers who are already familiar with this literature will be able to infer the basis of the classification from the citations in this paragraph, but I will explain it in more detail in subsequent paragraphs. (Note: I make no attempt in this review to be exhaustive in sampling the CSS literature; that would require a volume in itself.) The following studies focus primarily on single cells in Table 1:

    Mainly A: Carey, 1985; McCloskey, 1983.
    Mainly B: Tschirgi, 1980.
    Mainly C: Chi & Koeske, 1983.
    Mainly D: Bruner, Goodnow & Austin, 1956 (reception experiments).
    Mainly E: Case, 1974; Siegler & Liebert, 1975.
    Mainly F: Ruffman, Perner, Olson & Doherty, 1993;
              Shaklee & Paszek, 1985; Wason, 1968.

The following studies focus on combinations of cells in Table 1:

    A & C: Vosniadou & Brewer, 1992.
    B & E: Cheng & Holyoak, 1985.
    D, E & F: Klayman & Ha, 1987; Wason, 1960.

(The fact that much of this work is in the area of cognitive development reflects both my own biases and the fact that developmentalists have a long-standing interest in the development of scientific reasoning processes.)


18. What is the relative influence of domain-specific versus domain-general knowledge on scientific reasoning skills? On the one hand, acquisition of domain-specific knowledge not only changes the substantive structural knowledge in the domain (by definition) but also influences the processes used to generate and evaluate new hypotheses in that domain (Carey, 1985; Keil, 1981; Wiser, 1989). Thus, it is not surprising that, after years of study in a specific domain, scientists exhibit reasoning processes that are purported to be characteristic of the field (e.g., "she thinks like a physicist") and very different from those used by experts in other fields, or by novices or children. On the other hand, in simple contexts that are nearly devoid of domain-specific knowledge, professional scientists are not distinguishable from lay persons (Mahoney & DeMonbreun, 1978), and even pre-schoolers can reason correctly about hypotheses and select appropriate experiments to evaluate them (Sodian, Zaitchik & Carey, 1991).

19. This focus on the relative influence of general versus specific knowledge has produced two distinct literatures: one on domain-specific knowledge (including the development of that knowledge) about scientific phenomena and the other on general reasoning processes. For example, both Chi's and Carey's work, cited above and summarized in their chapters in CMS, exemplify a focus on domain-specific conceptual change (both historically and developmentally), but neither deals with domain-general processes for scientific reasoning. On the other hand, there is an extensive literature on the development of domain-general psychological processes that underlie scientific discovery (e.g., concept identification: Bruner, et al., 1956; interpretation of covariation: Shaklee & Paszek, 1985).


20. The process dimension in Table 1 reflects a view of scientific discovery as a problem-solving process involving search in two distinct, but related, problem spaces (Klahr & Dunbar, 1988; Simon & Lea, 1974). There are three major interdependent processes for coordinating search in this dual space: Hypothesis Space search, Experiment Space search, and Evidence Evaluation. In searching the Hypothesis Space, the initial state consists of some knowledge about a domain, and the goal state is a hypothesis that can account for some or all of that knowledge in a concise or universal form. Search in the Experiment Space is sometimes used in the absence of hypotheses, in order to generate a data pattern over which to induce hypotheses. Even when one or more hypotheses are active, it is not immediately obvious what constitutes a "good" or "informative" experiment. In constructing experiments, subjects are faced with a problem-solving task paralleling their search for hypotheses.

21. The third process -- Evidence Evaluation -- involves a comparison of the predictions derived from a hypothesis with the results obtained from experimentation. The process is not straightforward. Relevant features must be extracted, potential noise and error must be detected, suppressed, and corrected, and the resulting internal representation must be compared with earlier predictions. Theoretical biases influence not only the strength with which hypotheses are held in the first place -- and hence the amount of disconfirming evidence necessary to refute them -- but also the features in the evidence that will be attended to and encoded (Wisniewski & Medin, 1991).
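The coordination of these three processes can be caricatured in a short sketch. This is a toy illustration of the dual-space idea, not a model of the mechanisms in Klahr and Dunbar (1988); the candidate hypotheses, experiment settings, and "law of nature" below are all invented:

```python
# A toy sketch of coordinated dual-space search: hypothesis-space search
# (the candidate set), experiment-space search (choosing settings x),
# and evidence evaluation (pruning hypotheses against observations).

def run_experiment(x, true_law):
    """Nature: return the observed outcome for experimental setting x."""
    return true_law(x)

def dual_space_search(hypothesis_space, experiment_space, true_law):
    """Prune the hypothesis space using a sequence of experiments."""
    candidates = list(hypothesis_space)
    for x in experiment_space:              # search the experiment space
        observed = run_experiment(x, true_law)
        # evidence evaluation: keep hypotheses whose prediction matches
        candidates = [h for h in candidates if h(x) == observed]
        if len(candidates) <= 1:            # search can terminate early
            break
    return candidates

# Toy domain: which law relates input to output?
hypotheses = [lambda x: x + 2, lambda x: 2 * x, lambda x: x * x]
surviving = dual_space_search(hypotheses, experiment_space=[2, 3],
                              true_law=lambda x: 2 * x)
print(len(surviving))  # 1
```

Note that the first experiment (x = 2) fails to discriminate among the three hypotheses, since all predict 4; only the second (x = 3) is informative. Deciding which experiments will discriminate is exactly the problem-solving task described above.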

22. Most investigators have studied these three processes in isolation. For example, classical concept learning studies (Bruner, Goodnow & Austin, 1956) focus on hypothesis formation and evaluation, but do not require subjects to design experiments. In contrast, studies of people's ability to design factorial experiments (Siegler & Liebert, 1975) or to select a piece of evidence that can discriminate among hypotheses (Bruner, Olver & Greenfield, 1966) do not require them to formulate hypotheses. Finally, studies of people's ability to decide which of several hypotheses is supported by evidence focus on evidence evaluation, while suppressing both hypothesis formation and experimental design (Shaklee & Paszek, 1985).
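The evidence-evaluation strand can be made concrete with two of the covariation-judgment rules described in that developmental literature; the rule descriptions follow standard accounts (e.g., Shaklee & Paszek, 1985), and the 2x2 cell counts are invented for illustration:

```python
# Two covariation-judgment strategies applied to an invented 2x2
# contingency table. Cells: a = cause present/effect present,
# b = present/absent, c = absent/present, d = absent/absent.

def cell_a_rule(a, b, c, d):
    """Immature strategy (simplified): judge covariation from cell a alone."""
    return a > 0

def conditional_probability_rule(a, b, c, d):
    """Mature strategy: delta-P = P(effect|cause) - P(effect|no cause)."""
    return a / (a + b) - c / (c + d)

# Invented counts in which the "cause" actually LOWERS the effect's rate:
a, b, c, d = 6, 14, 12, 8
print(cell_a_rule(a, b, c, d))                             # True (misleading)
print(round(conditional_probability_rule(a, b, c, d), 2))  # -0.3
```

The invented counts show how the two strategies can disagree: attending only to cell a suggests a positive relation, while the conditional-probability comparison reveals a negative one.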


23. Research focusing on either domain-specific or domain-general knowledge has yielded much useful information about scientific discovery. However, such efforts are unable to assess an important aspect of this kind of problem solving: the interaction between the two types of knowledge. Similarly, the isolation of hypothesis search, experimentation strategies, and evidence evaluation leaves unanswered a fundamental question: how are the three main processes integrated and how do they mutually influence one another? Recent investigations have begun to integrate the six different aspects of the scientific discovery process represented in Table 1 while still being cast at a sufficiently fine grain so as not to lose relevant detail about the discovery process.

24. Several of these studies use tasks in which domain-general problem-solving heuristics play a central role in constraining search while at the same time subjects' domain-specific knowledge biases them to view some hypotheses as more plausible than others. Furthermore, in these tasks, both domain-specific knowledge and domain-general heuristics guide subjects in designing experiments and evaluating their outcomes. With respect to the integration of the three processes, such tasks require coordinated search in BOTH the experiment space and the hypothesis space, as well as the evaluation of evidence produced by subject-generated experiments. (Dunbar, 1993; Klahr & Dunbar, 1988; Klahr, Fay & Dunbar, 1993; Kuhn, 1989; Kuhn, Amsel & O'Loughlin, 1988; Kuhn, Schauble & Garcia-Mila, 1992; Schauble, 1990; Schauble, Glaser, Raghavan & Reiner, 1991).


25. There are, of course, additional dimensions that warrant attention in the overall study of the cognition of scientific discovery. Recent investigations have begun to address the social context of science by studying interactions among subjects as they collaborate on "simulated science" problems (Okada, 1994). Other investigations have begun to "scale up" the difficulty of the problems, in order to reveal additional heuristics that subjects use for dealing with complexity (Schunn & Klahr, 1992; 1993). Perhaps the boldest and most promising study to date is one in which a cognitive scientist recorded and examined in great detail the day-to-day processes of scientific reasoning in several world-class molecular genetics laboratories for several months (Dunbar, 1994; Dunbar & Baker, 1994).

26. All of the CSS work cited above falls into the general category of cognitive psychology or cognitive development. Much of the work in those broader domains can be easily construed as having relevance for the cognitive psychology of science. For example, several laboratories are investigating the effects of diagrams and graphs on scientific thinking (e.g., Cheng & Simon, 1992; Fallside & Just, in press; Hegarty & Just, 1993; Shah & Carpenter, 1995; Tabachneck & Simon, 1992; Qin & Simon, 1992). Such work directly addresses a concern voiced in Bookstein's (1993) review that CMS inadequately treats the way that people process quantitative data.


27. There is indeed a vigorous field that is formulating models of the cognitive science of science, but CMS provides only a few glimpses of what it is, and a few hints about its potential value to the Philosopher of Science. Consequently, readers will have to look well beyond CMS to find it. Taken as a whole, the work cited in this review represents a powerful research program that has already revealed some of the basic structure of the processes of scientific discovery and will continue to reveal more. In contrast to an earlier reviewer's conclusion (Hardcastle, 1994) that Gertrude Stein's "There is no 'there', there" characterizes CMS, I would end on a more optimistic note: There IS a 'there', but it's over here.


Bazerman, C. (1988) Shaping Written Knowledge: The Genre and Activity of the Experimental Article in Science. University of Wisconsin Press.

Boden, M.A. (1990) The creative mind: myths and mechanisms. London: Basic Books.

Boden, M.A. (1994) Precis of The creative mind: myths and mechanisms. Behavioral and Brain Sciences, 17, 519-570.

Bookstein, F.L. (1993) Geometry as Cognition in the Natural Sciences: Book review of Giere on Scientific-Cognition. PSYCOLOQUY 4(65) scientific.cognition.2.bookstein.

Bruner, J.S., Goodnow, J.J. & Austin, G.A. (1956) A study of thinking. New York: NY Science Editions.

Bruner, J.S., Olver, R.R. & Greenfield, P.M. (1966) Studies in cognitive growth. New York: Wiley.

Callahan, J. & Sorensen, S. (1992) Using TETRAD II as an automated exploratory tool. Social Science Computer Review, 10, 329-336.

Carey, S. (1985) Conceptual change in childhood. Cambridge, MA: Bradford Book/MIT Press.

Case, R. (1974) Structures and strictures: Some functional limitations on the course of cognitive growth. Cognitive Psychology, 6, 544-573.

Cheng, P.W. & Holyoak, K.J. (1985) Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.

Cheng, P.C.H. & Simon, H.A. (1992) The right representation for discovery: finding the conservation of momentum. In D. Sleeman & P. Edwards (Eds.), Machine Learning: Proceedings of the Ninth International Conference (ML92).

Chi, M.T.H. & Koeske, R.D. (1983) Network representations of a child's dinosaur knowledge. Developmental Psychology, 19, 29-39.

Crick, F. (1988) What Mad Pursuit: A personal view of scientific discovery. New York: Basic Books.

Dunbar, K. (1993) Concept discovery in a scientific domain. Cognitive Science, 17, 397-434.

Dunbar, K. (1994) How scientists really reason: Scientific reasoning in real-world laboratories. To appear in R.J. Sternberg & J. Davidson (Eds.), Mechanisms of Insight. Cambridge, MA: MIT Press.

Dunbar, K. & Baker, L.A. (1994) Goals, analogy, and the social constraints of scientific discovery. Behavioral and Brain Sciences, 17, 538-539.

Erdos, P., Fajtlowicz, S. & Staton, W. (1991) Degree sequences in triangle-free graphs. Discrete Mathematics, 92, 85-88.

Fallside, D.C. & Just, M.A. (in press) Understanding the kinematics of a simple machine. Visual Cognition.

Giere, R.N. (1992) Cognitive Models of Science. Minnesota Studies in the Philosophy of Science, volume 15. Minneapolis: University of Minnesota Press.

Giere, R.N. (1993) Precis of Cognitive Models of Science. PSYCOLOQUY 4(56) scientific-cognition.1.giere.

Hardcastle, G.L. (1994) Why Don't We Yet Have a Cognitive Science of Science? Book review of Giere on Scientific-Cognition. PSYCOLOQUY 5(43) scientific.cognition.6.hardcastle.

Hegarty, M. & Just, M.A. (1993) Constructing mental models of machines from text and diagrams. Journal of Memory and Language, 32, 717-742.

Holmes, F.L. (1985) Lavoisier and the Chemistry of Life: An Exploration of Scientific Creativity. Madison: University of Wisconsin Press.

Keil, F.C. (1981) Constraints on knowledge and cognitive development. Psychological Review, 88, 197-227.

Klahr, D. & Dunbar, K. (1988) Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-55.

Klahr, D., Fay, A.L. & Dunbar, K. (1993) Heuristics for scientific experimentation: A developmental study. Cognitive Psychology, 24(1), 111-146.

Klayman, J. & Ha, Y. (1987) Confirmation, disconfirmation and information in hypothesis testing. Psychological Review, 94, 211-228.

Klayman, J. & Ha, Y. (1989) Hypothesis testing in rule discovery: Strategy, structure, and content. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15(4), 596-604

Kuhn, D. (1989) Children and adults as intuitive scientists. Psychological Review, 96, 674-689.

Kuhn, D., Amsel, E. & O'Loughlin, M. (1988) The development of scientific reasoning skills. Orlando, FL: Academic Press.

Kuhn, D., Schauble, L. & Garcia-Mila, M. (1992) Cross-domain development of scientific reasoning. Cognition and Instruction, 9, 285-327.

Kulkarni, D. & Simon, H.A. (1990) Experimentation in machine discovery. In Shrager, J. & Langley, P. (Eds.) Computational Models of Scientific Discovery and Theory Formation. San Mateo, CA: Morgan Kaufmann.

Langley, P., Simon, H.A., Bradshaw, G.L. & Zytkow, J.M. (1987) Scientific discovery: Computational explorations of the creative processes. Cambridge, MA: MIT Press.

Mahoney, M.J. & DeMonbreun, B.G. (1978) Psychology of the scientist: An analysis of problem-solving bias. Cognitive Therapy and Research. 1(3), 229-238.

McCloskey, M. (1983) Naive theories of motion. In D. Gentner & A.L. Stevens (Eds.), Mental Models (pp. 299-324). Hillsdale, NJ: Erlbaum.

Mynatt, C.R., Doherty, M.E. & Tweney, R.D. (1977) Confirmation bias in a simulated research environment: an experimental study of scientific inference. Quarterly Journal of Experimental Psychology, 29, 85-95.

Mynatt, C.R., Doherty, M.E. & Tweney, R.D. (1978) Consequences of confirmation and disconfirmation in a simulated research environment. Quarterly Journal of Experimental Psychology, 30, 395-406.

Okada, T. (1994) Collaborative Scientific Discovery Processes. Unpublished Doctoral Dissertation. Department of Psychology, Carnegie Mellon University, Pittsburgh, PA.

Perkins, D.N. (1981) The Mind's Best Work. Cambridge, MA: Harvard University Press.

Posner, M.I. (1989) Foundations of Cognitive Science. Cambridge, MA: MIT Press.

Qin, Y. & Simon, H.A. (1992) Imagery as process representation in problem solving. Proceedings of the 14th Annual Conference of the Cognitive Science Society, July 29 - August 1, 1992.

Ruffman, T., Perner, J., Olson, D.R. & Doherty, M. (1993) Reflecting on scientific thinking: children's understanding of the hypothesis-evidence relation. Child Development, 64, 1617-1636.

Schauble, L. (1990) Belief revision in children: The role of prior knowledge and strategies for generating evidence. Journal of Experimental Child Psychology, 49, 31-57.

Schauble, L., Glaser, R., Raghavan, K. & Reiner, M. (1991) Causal models and experimentation strategies in scientific reasoning. The Journal of the Learning Sciences, 1, 201-238.

Schunn, C.D. & Klahr, D. (1992) Complexity management in a discovery task. In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society.

Schunn, C.D. & Klahr, D. (1993) Other vs. self-generated hypotheses in scientific discovery. In Proceedings of Fifteenth Annual Meetings of Cognitive Science Society.

Shaklee, H. & Paszek, D. (1985) Covariation judgment: Systematic rule use in middle childhood. Child Development, 56, 1229-1240.

Shafto, M.G. (1994) What Can Insiders Learn From Outsiders? Book review of Giere on Scientific-Cognition. PSYCOLOQUY 5(30) scientific.cognition.4.shafto.

Shah, P. & Carpenter, P.A. (1995) Conceptual limitations in comprehending line graphs. Journal of Experimental Psychology: General.

Shrager, J. & Langley, P. (1990) Computational Models of Scientific Discovery and Theory Formation. San Mateo, CA: Morgan Kaufmann.

Siegler, R.S. & Liebert, R.M. (1975) Acquisition of formal scientific reasoning by 10- and 13-year-olds: designing a factorial experiment. Developmental Psychology, 10, 401-402.

Simon H.A. & Lea, G. (1974) Problem solving and rule induction: A unified view. In L. Gregg (Ed.), Knowledge and Cognition (pp. 105-128). Hillsdale, NJ: Lawrence Erlbaum.

Sodian, B., Zaitchik, D. & Carey, S. (1991) Young children's differentiation of hypothetical beliefs from evidence. Child Development, 62, 753-766.

Tabachneck, H. & Simon, H.A. (1992) Effect of mode of data presentation on reasoning about economic markets. American Association for Artificial Intelligence Spring Symposium Series: Working Notes, 59-64.

Thagard, P. (1992) Conceptual Revolutions. Princeton, NJ: Princeton University Press.

Tschirgi, J.E. (1980) Sensible reasoning: A hypothesis about hypotheses. Child Development, 51, 1-10.

Tweney, R.D., Doherty, M.E. & Mynatt, C.R. (Eds.). (1981) On Scientific Thinking. New York: Columbia University Press.

Valdes-Perez, R.E. (1994a) Conjecturing hidden entities via simplicity and conservation laws: Machine discovery in chemistry. Artificial Intelligence, 65(2), 247-280.

Valdes-Perez, R.E. (1994b) Algebraic reasoning about reactions: Discovery of conserved properties in particle physics. Machine Learning, 17(1), 47-68.

Vosniadou, S. & Brewer, W.F. (1992) Mental models of the earth: a study of conceptual change in childhood. Cognitive Psychology, 24, 535-585.

Wason, P.C. (1960) On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12, 129-140.

Wason, P.C. (1968) Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20, 273-281.

Weisberg, R.W. (1993) Creativity: Beyond the Myth of Genius. New York: W.H. Freeman.

Wiser, M. (1989, April) Does learning science involve theory change? Paper presented at the Biannual Meeting of the Society for Research in Child Development, Kansas City.

Wisniewski, E.J. & Medin, D.L. (1991) Harpoons and long sticks: the interaction of theory and similarity in rule induction. In D.H. Fisher, Jr., M.J. Pazzani & P. Langley (Eds.) Concept Formation: Knowledge and Experience in Unsupervised Learning. San Mateo, CA: Morgan Kaufmann.
