Summary of PSYCOLOQUY topic Connectionist Explanation

9(04) ARE CONNECTIONIST MODELS THEORIES OF COGNITION?
Target Article by Green on Connectionist Explanation
Christopher D. Green
Department of Psychology
York University
North York, Ontario
M3J 1P3 CANADA
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: This paper explores the question of whether connectionist models of cognition should be considered to be scientific theories of the cognitive domain. It is argued that in traditional scientific theories, there is a fairly close connection between the theoretical (unobservable) entities postulated and the empirical observations accounted for. In connectionist models, however, hundreds of theoretical terms are postulated -- viz., nodes and connections -- that are far removed from the observable phenomena. As a result, many of the features of any given connectionist model are relatively optional. This leads to the question of what, exactly, is learned about a cognitive domain modelled by a connectionist network.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(05) DO WIRES MODEL NEURONS?
Commentary on Green on Connectionist-Explanation
Jack Orbach
Department of Psychology
Queens College
City University of New York
Flushing, NY 11367

JOrbach@worldnet.att.net
Abstract: Connectionists should not lose sight of the fact that the electronic circuit has little in common with the neural circuit in the brain.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(06) THE ROLE OF IMPLEMENTATION IN CONNECTIONIST EXPLANATION
Commentary on Green on Connectionist-Explanation
Gerard J. O'Brien
Department of Philosophy
University of Adelaide
South Australia 5005
Australia
http://chomsky.arts.adelaide.edu.au/Philosophy/gobrien/gobrien.htm

gobrien@arts.adelaide.edu.au
Abstract: Green is right to question the explanatory role of connectionist models in cognitive science. What is more, he is generally right in his judgement that the only way of interpreting connectionist models as theories of cognitive phenomena is by construing them as "literal models of brain activity" (1998, para. 20). This is because connectionist explanations of cognitive phenomena are more dependent on details of implementation than their conventional ("classical") counterparts.

Keywords: classicism, cognition, connectionism, explanation, implementation, methodology, theory.

9(07) LASHLEY'S LESSON IS NOT GERMANE
Reply to Orbach on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: Orbach (1998) incorrectly interprets my target article (Green 1998) as claiming that connectionist networks actually model neural activity, whereas in reality I argue that nets will NEED to model neural activity if they are to model anything at all.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(08) PROBLEMS WITH THE IMPLEMENTATION ARGUMENT
Reply to O'Brien on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: O'Brien (1998) and I (Green 1998) agree on many issues, but his reliance on Clark's (1990, 1993) justification for connectionist research in terms of explanatory inversion raises as many problems as it solves. Connectionism seems to generate ontological problems that do not impede symbolic cognitive science.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(09) ARE HYPOTHETICAL CONSTRUCTS PREFERRED OVER INTERVENING VARIABLES?
Commentary on Green on Connectionist-Explanation
Michael E. Young
Dept. of Psychology
The University of Iowa
Iowa City, IA 52242
www.psychology.uiowa.edu/faculty/young.htm

michael-e-young@uiowa.edu
Abstract: Green (1998) expresses dissatisfaction with contemporary connectionist models as theories of cognition. A reexamination of the historical distinction between hypothetical constructs and intervening variables and their relative roles in theory development reveals an important role for well-designed, parsimonious connectionist models in the study of cognition. Although realist theories (i.e., theories that include hypothetical constructs) are bolder and might provide more intellectual satisfaction to psychologists, instrumentalist theories (i.e., theories that include only intervening variables) can bring rigor and understanding to the enterprise of cognitive science.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(10) LOCALIST CONNECTIONISM FITS THE BILL
Commentary on Green on Connectionist-Explanation
Jonathan Grainger
Centre de Recherche en Psychologie Cognitive, CNRS
Universite de Provence
Aix-en-Provence
France

Arthur M. Jacobs
Dept. of Psychology
Philipps University of Marburg
Marburg, Germany

grainger@newsup.univ-mrs.fr jacobsa@mailer.uni-marburg.de
Abstract: Green (1998) restates a now standard critique of connectionist models: they have poor explanatory value as a result of their opaque functioning. However, this problem only arises in connectionist models that use distributed hidden unit representations, and is NOT a feature of localist connectionism. Indeed, Green's critique reads as an appeal for the development of localist connectionist models as an excellent starting point for building a unified theory of human cognition.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(11) CONNECTIONISM AND COGNITIVE THEORIES
Commentary on Green on Connectionist-Explanation
David A. Medler
Center for the Neural Basis of Cognition
Carnegie Mellon University
Pittsburgh, PA 15213

Michael R. W. Dawson
Biological Computation Project
Department of Psychology
University of Alberta
Edmonton, Alberta
Canada T6G 2E9

medler@cnbc.cmu.edu mike@psych.ualberta.ca
Abstract: The relationship between connectionist models and cognitive theories has been a source of considerable debate within cognitive science. Green (1998) has recently joined this debate, arguing that connectionist models should only be interpreted as literal models of brain activity; in other words, connectionist models only contribute to cognitive theories at the implementational level. Recent results, however, have shown that interpreting the internal structure of connectionist models can produce novel cognitive theories that are more than mere implementations of classical theories (e.g., Dawson, Medler, & Berkeley, 1997). Furthermore, such connectionist theories have an advantage over more classical approaches to cognitive theories in that they posit explanatory -- as opposed to merely descriptive -- theories of cognition.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(12) ARE NEURAL NETS A VALID MODEL OF COGNITION?
Commentary on Green on Connectionist-Explanation
William C. Hoffman
Institute for Topological Psychology
2591 W. Camino Llano, Tucson, AZ, USA 85742-9074

willhof@worldnet.att.net
Abstract: Connectionist models purport to model cognitive neuropsychology by means of adaptive linear algebra applied to point neurons. As a theory of cognition, this approach is deficient in several respects: nonconvergence in neurobiological real-time; omission of two topological structures fundamental to the information processing psychology on which connectionist models are based; omission of the local structure of neurobiological processing; omission of actual neuron morphologies, cortical cytoarchitecture, and the cortical orientation response; the inability to perform memory retrieval from point-neuron "weights" in neurobiological real-time; and failure to implement psychological constancy. Cognitive processing by neuronal flows is offered as a viable alternative. Finally, neural nets fail Hempel's test of empirical and systematic import.

Keywords: connectionism, neural nets, neuropsychology, cognition, perception, computational models, philosophy of science, memory, psychological constancy, symmetric difference.

9(13) REALISM, INSTRUMENTALISM AND CONNECTIONISM
Reply to Young on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: Young (1998) argues that my critique of connectionism (Green 1998) is grounded in an assumption that realism is superior to instrumentalism as an interpretation of scientific theories, and that the difficulties that I argue connectionism faces can be avoided if connectionists adopt an instrumentalist stance. I made no such assumption, however, and a closer examination of instrumentalism shows it to be detrimental to the connectionist cause. Realism -- probably neural realism -- remains the connectionist's best hope.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(14) DOES LOCALISM SOLVE CONNECTIONISM'S PROBLEM?
Reply to Grainger & Jacobs on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: Grainger & Jacobs (1998) argue that the problems facing connectionism discussed in my target article (Green 1998) can be overcome by switching to a localist connectionist perspective. I question whether the cost of doing so outweighs the disadvantages of staying with the parallel distributed processing approach to connectionist cognitive science.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(15) STATISTICAL ANALYSES DO NOT SOLVE CONNECTIONISM'S PROBLEM
Reply to Medler & Dawson on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: Medler & Dawson (1998) claim (1) that I am just a closet implementationalist, (2) that I have ignored a range of statistical analyses that answer my challenge to connectionism, and (3) that only connectionist networks can produce explanatory models of cognition. I reply that I am not an implementationalist, that the statistical analyses to which they refer do not solve the problem I have posed, and that the question of whether a theory is explanatory is independent of the question of how it was generated.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(16) OF NEURONS AND CONNECTIONIST NETWORKS
Reply to Hoffman on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: Hoffman (1998) tells us of a number of difficulties connectionists may face in any attempt to model neural activity with connectionist networks. I have no reason to doubt that, but it is more of a problem for connectionists than it is for my argument (Green 1998).

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(17) WHY CONNECTIONIST NETS ARE GOOD MODELS
Commentary on Green on Connectionist-Explanation
Changsin Lee, Bram van Heuveln,
Clayton T. Morrison & Eric Dietrich
PACCS, Department of Philosophy
Binghamton University
Binghamton, NY 13902
http://www.paccs.binghamton.edu/chang/

chang@turing.paccs.binghamton.edu
Abstract: We agree with Green that some connectionists do not make it clear what their nets are modeling. However, connectionism is still a viable project, because it provides a different ontology and different ways of modeling cognition by requiring us to consider implementational details. We also argue against Green's view of models in science and his characterization of connectionist networks.

Keywords: cognition, connectionism, explanation, model, ontology, theory

9(18) CONNECTIONIST MODELING AND THEORIZING:
WHO DOES THE EXPLAINING AND HOW?
Commentary on Green on Connectionist-Explanation
Morris Goldsmith
Department of Psychology
University of Haifa
Haifa, 31905, Israel

mgold@psy.haifa.ac.il
Abstract: Green's (1998) criticism that connectionist models are devoid of theoretical substance rests on a simplistic view of the nature of connectionist models and a failure to acknowledge the division of labor between the model and the modeller in the enterprise of connectionist modelling. The "theoretical terms" of connectionist theory are not to be found in processing units or in connections but in more abstract characterizations of the functional properties of networks. Moreover, these properties are -- and at present should be -- only loosely tied to the known (and largely unknown) properties of neural networks in the brain.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(19) DOES BRAIN ACTIVITY-ORIENTED MODELLING SOLVE THE PROBLEM?
Commentary on Green on Connectionist-Explanation
Claus Lamm
Brain Research Laboratory
Department of Psychology
University of Vienna
A-1010 Vienna
Austria

Claus.Lamm@univie.ac.at
Abstract: One of Green's (1998a) main arguments is that it is not clear how many theoretical terms a connectionist model has to be built of; this points to (a) a lack of correspondence between the theoretical entities of connectionist models and any type of empirical entity, and (b) the resulting abundance of degrees of freedom in the connectionist modelling of cognition. A more brain-oriented modelling approach might yield the desired theoretico-empirical mapping, but it does not reduce a model's degrees of freedom.

Keywords: cognition, connectionism, methodology, theory, computer modelling, epistemology

9(20) COGNITIVE THEORY AND NEURAL MODEL: THE ROLE OF LOCAL REPRESENTATIONS
Commentary on Green on Connectionist-Explanation
Paul A. Watters
Department of Computing
Macquarie University
NSW 2109
AUSTRALIA

pwatters@mpce.mq.edu.au
Abstract: Green raises a number of questions regarding the role of "connectionist" models in scientific theories of cognition, one of which concerns exactly what it is that units in artificial neural networks (ANNs) stand for, if not specific neurones or groups of neurones, or indeed, specific theoretical entities. In placing all connectionist models in the same basket, Green seems to have ignored the fundamental differences which distinguish classes of models from each other. In this commentary, we address the issue of distributed versus localised representations in ANNs, arguing that it is difficult (but not impossible) to investigate what units stand for in the former case, but that units do correspond to specific theoretical entities in the latter case. We review the role of localised representations in a neural network model of a semantic system in which each unit corresponds to a letter, word, word sense, or semantic feature, and whose dynamics and behaviour match those predicted from a cognitive theory of skilled reading. Thus, we argue that ANNs might be useful in developing general mathematical models of processes for existing cognitive theories that already enjoy empirical support.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(21) FUNCTION, SUFFICIENTLY CONSTRAINED, IMPLIES FORM
Commentary on Green on Connectionist-Explanation
Robert M. French
Psychology Department (B33)
University of Liege,
4000 Liege, Belgium
http://www.fapse.ulg.ac.be/Lab/Trav/rfrench.html

Axel Cleeremans
Seminaire de Recherche en Sciences Cognitives
Universite Libre de Bruxelles
1050 Brussels, Belgium

rfrench@ulg.ac.be axcleer@ulb.ac.be
Abstract: Green's (1998) target article is an attack on most current connectionist models of cognition. Our commentary will suggest that there is an essential component missing in his discussion of modeling, namely, the idea that the appropriate level of the model needs to be specified. We will further suggest that the precise form (size, topology, learning rules, etc.) of connectionist networks will fall out as ever more detailed constraints are placed on their function.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(22) MODELS AND THEORIES OF COGNITION ARE ALGORITHMS
Commentary on Green on Connectionist-Explanation
Bruce Bridgeman
Department of Psychology
University of California
Santa Cruz, CA 95064
USA

bruceb@cats.ucsc.edu
Abstract: PDP models (sometimes misnamed "connectionist") solve computational problems with a family of algorithms, but the changeable weights on their connections mean that the details of their algorithms are subject to change. Thus they do not fulfill the requirement that a model must specify its algorithm for solving a computational problem, or that it must model real data and fail to model false data. Other models use distributed coding but retain homeomorphism and explicit algorithms. An example uses a lateral inhibitory network with fixed weights to model visual masking and sensory memory.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(23) CONNECTIONIST NETS ARE ONLY GOOD MODELS IF WE KNOW WHAT THEY MODEL
Reply to Lee et al. on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: Lee, Van Heuveln, Morrison, & Dietrich (1998) suggest, incorrectly, that I argued (Green 1998a) that connectionist networks will not be scientific models unless and until they capture every aspect of neural activity. What I argued was that unless and until connectionists come to terms with the idea that connectionist networks must model SOMETHING (and neural activity currently seems to be the best candidate, but it need not be the only one) they are not models of anything at all, and therefore may have little role to play in cognitive science.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(24) CAN CONNECTIONIST THEORIES ILLUMINATE COGNITION?
Comment on Green on Connectionist-Explanation
Athanassios Raftopoulos
Assistant Professor of Philosophy and Cognitive Science
American College of Thessaloniki
Anatolia College
P.O. BOX 21021,
55510 PYLEA
Thessaloniki, GREECE

maloupa@compulink.gr
Abstract: In this commentary I attempt to show in what sense we can speak of connectionist theory as illuminating cognition. It is usually argued that distributed connectionist networks do not explain brain function because they do not use the appropriate explanatory vocabulary of propositional attitudes, and because their basic terms, being theoretical, do not refer to anything. There is a level of analysis, however, at which the propositional attitude vocabulary can be reconstructed and used to explain the performance of networks; and the basic terms of networks are not theoretical but observable entities that purport to refer to terms used to describe the brain.

Keywords: connectionism, cognition, explanation, philosophy of science, theory, theoretical terms.

9(25) HIGHER FUNCTIONAL PROPERTIES DO NOT SOLVE CONNECTIONISM'S PROBLEMS
Reply to Goldsmith on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: Goldsmith (1998) argues that I (Green 1998a) am wrong in asserting that nodes and connections are the theoretical entities of connectionist theories. I reply that if he is right, then connectionist theory is not connectionist after all. I also comment briefly on Seidenberg's (1993) approach to the interpretation of connectionist research, and on the issue of the proper distinction to be drawn between theories and models.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(26) THE DEGREES OF FREEDOM WOULD BE TOLERABLE IF NODES WERE NEURAL
Reply to Lamm on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: Lamm (1998) expresses concern that there is a lack of fit between my call to connectionists to declare themselves to be direct modelers of neural activity and my concern that connectionist nets have too many degrees of freedom (Green 1998). I am sympathetic with his worry, but argue that the degrees of freedom problem does not loom as large once we know what constraints we are working under -- as we would if we declared that connectionist nets are literal neural models.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(27) LOCALIST CONNECTIONISM DOES NOT ADDRESS THE ISSUE
Reply to Watters on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: Watters (1998) is concerned that I excluded localist connectionist networks from consideration (Green 1998a). As indicated in my prior reply (Green 1998b), localist nets do not suffer from the problems outlined in the target article because they are a species of "classical" symbolic cognitive theory.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(28) SEMANTICS IS NOT THE ISSUE
Reply to French & Cleeremans on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: French & Cleeremans claim that my argument (Green 1998) requires that every part of a connectionist network be semantically interpretable. They have confused semantic interpretation (an issue peculiar to cognitive science) with a simple correspondence between aspects of models and aspects of the portion of the world being modeled (an issue as relevant to physics as to cognitive science), and have thereby misunderstood my position. Most of the rest of their commentary follows from their initial misapprehension.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(29) LATERAL INHIBITION IS A GOOD EXAMPLE
Reply to Bridgeman on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo/

christo@yorku.ca
Abstract: Bridgeman's (1998) example of a class of networks that are grounded in the known neuroanatomy of the Limulus addresses many of the problems I raised quite nicely. I also discuss the differences between the terms "connectionist," "PDP," and "neural network."

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(30) CONNECTIONIST MODELLING STRATEGIES
Commentary on Green on Connectionist-Explanation
Jonathan Opie
Department of Philosophy
The University of Adelaide
South Australia 5005
http://chomsky.arts.adelaide.edu.au/Philosophy/jopie/jopie.htm


jopie@arts.adelaide.edu.au
Abstract: Green offers us two options: either connectionist models are literal models of brain activity or they are mere instruments, with little or no ontological significance. According to Green, only the first option renders connectionist models genuinely explanatory. I think there is a third possibility. Connectionist models are not literal models of brain activity, but neither are they mere instruments. They are abstract, IDEALISED models of the brain that are capable of providing genuine explanations of cognitive phenomena.

Keywords: connectionism, explanation, instrumentalism, realism, idealisation.

9(35) NEITHER SEMANTICS NOR THEORY-OBSERVATION IS RELEVANT
Reply to Raftopoulos on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: Raftopoulos (1998) claims that I (Green 1998) argued that every node in a connectionist network must be semantically interpretable, and that I rely crucially on an untenable distinction between theory and observation to make my argument run. I reply by showing that neither of these claims is correct, and that Raftopoulos's case against my argument does not appear to be coherent.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(36) CONNECTIONISM IS A PROGRESSIVE RESEARCH PROGRAMME:
"COGNITIVE" CONNECTIONIST MODELS ARE JUST MODELS
Commentary on Green on Connectionist-Explanation
Michael Thomas
Department of Psychology
King Alfred's University College
Winchester
Hants SO22 4NR

Tony Stone
Division of Psychology
South Bank University
London SE1 0AA

michael.thomas@psy.ox.ac.uk stonea@sbu.ac.uk
Abstract: Connectionist models are cognitive models which can serve two functions. They can demonstrate the computational feasibility of a cognitive theory (in this sense they model cognitive theories), or they can suggest new ways of conceiving the functional structure of the cognitive system. The latter leads to connectionist theories with new theoretical constructs, such as stable attractors, or soft constraint satisfaction. A number of examples of connectionist models and theories demonstrate the fertility of connectionism, a progressive research programme in Lakatos's (1970) sense. Green's (1998a) specificity argument against connectionist theoretical constructs fails because it relies upon a simplistic view of theoretical constructs that would undermine even the "gene" construct, Green's paradigmatic example of a theoretical entity in good standing. This view of theoretical entities is based upon a simplistic Popperian picture of science.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(37) IDEALISATION IS FINE; OPPORTUNISM IS NOT
Reply to Opie on Connectionist-Explanation
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca
Abstract: Opie (1998) seems to accept my worries about the ontological base of connectionism and resolves them by accepting my suggestion that neurology is the ground of connectionist cognitive science. He goes on to argue that scientists should be given some room to idealise the entities under study. One cannot but agree. However, he seems to beg the question in suggesting that units and connections are the very things that they model. There also appear to be some problems with the use of sources in support of his position.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.

9(47) CONNECTIONIST MODELS CAN REVEAL GOOD ANALOGIES
Commentary on Green on Connectionist-Explanation
Alberto Greco
DISA Psychology Laboratory
University of Genoa
Genova, Italy
http://www.lettere.unige.it/sif/strutture/1/home/greco/index.htm

greco@disa.unige.it
Abstract: Green (1998a) argues that distributed connectionist models are not theories of cognition. This is reasonable if it means that the explanatory role of connectionist models is not clear, but Green's analysis seems directed against the wrong target when he applies a realist position to models. His argument confuses models with objects. Models are useful as long as they establish analogies between unknown and known phenomena; but not all details are important. The real problem may concern the explanatory role of connectionist models (which is what Green also seems concerned about), but then it should be formulated on different grounds. If they are intended as cognitive models (and not as mere AI artifacts), their internal operations should be describable (by analogy) using a cognitive vocabulary. This is often not the case with connectionist models. Are they always useless as cognitive models then? I cannot share Green's conclusion that the only hope for connectionism is to model brain activity. On the contrary, because the most attractive feature of connectionist models is that they can perform cognitive tasks using no symbols, they can be useful tools for studying (by analogy) the origin and grounding of symbols.

Keywords: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.