French & Cleeremans claim that my argument (Green 1998) requires that every part of a connectionist network be semantically interpretable. They have confused semantic interpretation (an issue peculiar to cognitive science) with a simple correspondence between aspects of models and aspects of the portion of the world being modeled (an issue as relevant to physics as to cognitive science), and have thereby misunderstood my position. Most of the rest of their commentary follows from their initial misapprehension.
2. Specifically, French & Cleeremans betray their misunderstanding of my argument -- they characterize it as an "attack," whereas I see it more as a "challenge" to be met -- when they say "Green would claim that we have learned nothing [from their connectionist stick-balancing machine] because there is no explicit semantic content to the nodes and connections in the connectionist network" (para. 4). This is entirely immaterial to the argument in the target article, where the issue was not whether the nodes and connections REPRESENT anything (in the semantic sense), but rather whether they CORRESPOND to anything in the phenomenon that is being studied -- viz., "real" [1] (in vivo) cognitive systems like you and me.
3. Much of the rest of French & Cleeremans's critique follows from this basic misconstrual of my position. They argue that "looking inside the connectionist box does indeed tell us a LOT." The target article did not suggest otherwise [2]. It does tell us a lot, but only about the workings of the connectionist box -- unless we have independent reason to believe that "real" cognitive systems are structured in a similar way. In order to answer that question, we must SPECIFY what "parts" or "aspects" of the "real" cognitive system are structured in the same way as the connectionist "simulation" of it. Declaring that the nodes and connections of connectionist systems correspond to neurons is ONE way of doing this. French & Cleeremans ask "Why stop at neurons?" (para. 6). No reason at all. Indeed, I have repeatedly made the point that I am open to connectionists declaring that the parts of their systems correspond to other things. The point is that they must correspond to SOMETHING and that that something must be declared in advance. Otherwise, the model operates under no constraints at all. And since, as we all know as a matter of basic logic, any finite set of data (which is all we ever have) perfectly confirms an infinite number of theories, a theory with no constraints is of little scientific interest.
4. Finally, near the end of their commentary, French & Cleeremans misconstrue my point about dual-action neurons. First of all, I was careful to note that dual-action neurons do exist (para. 19), but that they seem to be (at least) very rare in mammals, whereas their artificial counterparts are virtually universal in connectionist models. Second, even if such neurons DID heavily populate mammalian brains, their argument would not be with me, but with Crick and Asanuma (1986) and Churchland (1990), who have been the point's main advocates. Neurons are not the issue: specifying what one is modeling -- particularly what the parts of one's model correspond to in the real world -- is the issue. To this, French & Cleeremans would seem to have little to say.
[1] The term "real" is used here, and throughout, advisedly. It is only being opposed to the term "artificial" as it is used in the phrase "artificial intelligence." Whether "artificial" intelligence systems can achieve actual intelligence is a separate question on which one can be agnostic at present. The term "real" here is only meant to refer to the life forms that we have found that are naturally cognitive (e.g., people), and whose cognitive systems we are attempting to study through experimentation, simulation, and a variety of other scientific techniques.
[2] If French & Cleeremans really believe I am not aware of techniques for discovering patterns within the activations and weights of the nodes and connections, they are advised to read the other commentaries and replies that came before theirs. I have discussed these at some length elsewhere.
Churchland, P. M. (1990) Cognitive activity in artificial neural networks. In: Thinking: An invitation to cognitive science (Vol. 3), ed. D. N. Osherson & E. E. Smith, MIT Press.
Crick, F.H.C. & Asanuma, C. (1986) Certain aspects of the anatomy and physiology of the cerebral cortex. In: Parallel distributed processing: Explorations in the microstructure of cognition (vol. 2), ed. McClelland, J. L. & Rumelhart, D. E., MIT Press.
Fodor, J. A. & Pylyshyn, Z. W. (1988) Connectionism and cognitive architecture: A critical analysis. Cognition 28:3-71.
French, R. M. & Cleeremans, A. (1998) Function, sufficiently constrained, implies form: Commentary on Green on connectionist-explanation. PSYCOLOQUY 9 (21) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.21.connectionist-explanation.18.french
Green, C.D. (1998) Are connectionist models theories of cognition? PSYCOLOQUY 9 (4) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.04.connectionist-explanation.1.green