Christopher D. Green (1998) Semantics is not the Issue. Psycoloquy: 9(28) Connectionist Explanation (25)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

SEMANTICS IS NOT THE ISSUE
Reply to French & Cleeremans on Connectionist-Explanation

Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca

Abstract

French & Cleeremans claim that my argument (Green 1998) requires that every part of a connectionist network be semantically interpretable. They have confused semantic interpretation (an issue peculiar to cognitive science) with a simple correspondence between aspects of models and aspects of the portion of the world being modeled (an issue as relevant to physics as to cognitive science), and have thereby misunderstood my position. Most of the rest of their commentary follows from their initial misapprehension.

Keywords

artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.
1. Even 10 years after their famous critique of connectionism, connectionists can't seem to get Fodor & Pylyshyn (1988) off their minds. Although several of my other commentators have implied as much, French & Cleeremans (1998) are the most explicit in their belief that I am simply rerunning, against the connectionist community, an old argument about the alleged necessity of semantic interpretability in cognition. This, despite the fact that I explicitly say in my target article: "it is important to note that I am not arguing that connectionist networks must give way to symbolic networks because cognition is inherently symbolic... That is an entirely independent question" (Green 1998, para. 23).

2. Specifically, French & Cleeremans betray their misunderstanding of my argument -- they characterize it as an "attack," whereas I see it more as a "challenge" to be met -- when they say "Green would claim that we have learned nothing [from their connectionist stick-balancing machine] because there is no explicit semantic content to the nodes and connections in the connectionist network" (para. 4). This is entirely immaterial to the argument in the target article, where the issue was not whether the nodes and connections REPRESENT anything (in the semantic sense), but rather whether they CORRESPOND to anything in the phenomenon that is being studied -- viz., "real" [1] (in vivo) cognitive systems like you and me.

3. Much of the rest of French & Cleeremans's critique follows from this basic misconstrual of my position. They argue that "looking inside the connectionist box does indeed tell us a LOT." The target article did not suggest otherwise [2]. It does tell us a lot, but only about the workings of the connectionist box -- unless we have independent reason to believe that "real" cognitive systems are structured in a similar way. In order to answer that question, we must SPECIFY what "parts" or "aspects" of the "real" cognitive system are structured in the same way as the connectionist "simulation" of it. Declaring that the nodes and connections of connectionist systems correspond to neurons is ONE way of doing this. French & Cleeremans ask "Why stop at neurons?" (para. 6). No reason at all. Indeed, I have repeatedly made the point that I am open to connectionists declaring that the parts of their systems correspond to other things. The point is that they must correspond to SOMETHING and that that something must be declared in advance. Otherwise, the model operates under no constraints at all. And since, as we all know as a matter of basic logic, any finite set of data (which is all we ever have) perfectly confirms an infinite number of theories, a theory with no constraints is of little scientific interest.
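To see the logical point in miniature (the data points and polynomials below are arbitrary, chosen purely for illustration): suppose our entire data set consists of the three observations (0,1), (1,3), and (2,7). The quadratic f(x) = x^2 + x + 1 fits them exactly, but so does every function of the form f(x) = x^2 + x + 1 + c*x(x-1)(x-2), for any value of c whatsoever, since the added term vanishes at each observed point. Without some constraint beyond the data themselves, nothing selects among these infinitely many perfectly confirmed "theories."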

4. Finally, near the end of their commentary, French & Cleeremans misconstrue my point about dual-action neurons. First of all, I was careful to note that dual-action neurons do exist (para. 19), but that they seem to be (at least) very rare in mammals, whereas their artificial counterparts are virtually universal in connectionist models. Second, even if such neurons DID heavily populate mammalian brains, their argument would not be with me, but with Crick and Asanuma (1986) and Churchland (1990), who have been the point's main advocates. Neurons are not the issue: specifying what one is modeling -- particularly what the parts of one's model correspond to in the real world -- is the issue. To this, French & Cleeremans would seem to have little to say.

ENDNOTES

[1] The term "real" is used here, and throughout, advisedly. It is only being opposed to the term "artificial" as it is used in the phrase "artificial intelligence." Whether "artificial" intelligence systems can achieve actual intelligence is a separate question on which one can be agnostic at present. The term "real" here is only meant to refer to the life forms that we have found that are naturally cognitive (e.g., people), and whose cognitive systems we are attempting to study through experimentation, simulation, and a variety of other scientific techniques.

[2] If French & Cleeremans really believe I am not aware of techniques for discovering patterns in the activations and weights of the nodes and connections, they are advised to read the other commentaries and replies that came before theirs. I have discussed these at some length elsewhere.

REFERENCES

Churchland, P. M. (1990) Cognitive activity in artificial neural networks. In: Thinking: An invitation to cognitive science (Vol. 3), ed. D. N. Osherson & E. E. Smith, MIT Press.

Crick, F.H.C. & Asanuma, C. (1986) Certain aspects of the anatomy and physiology of the cerebral cortex. In: Parallel distributed processing: Explorations in the microstructure of cognition (vol. 2), ed. McClelland, J. L. & Rumelhart, D. E., MIT Press.

Fodor, J. A. & Pylyshyn, Z. W. (1988) Connectionism and cognitive architecture: A critical analysis. Cognition 28:3-71.

French, R. M. & Cleeremans, A. (1998) Function, sufficiently constrained, implies form: Commentary on Green on connectionist-explanation. PSYCOLOQUY 9 (21) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.21.connectionist-explanation.18.french

Green, C.D. (1998) Are Connectionist Models Theories of Cognition? PSYCOLOQUY 9 (4) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.04.connectionist-explanation.1.green

