Christopher D. Green (1998) Higher Functional Properties do not Solve Connectionism's Problems. Psycoloquy: 9(25) Connectionist Explanation (22)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

HIGHER FUNCTIONAL PROPERTIES DO NOT SOLVE CONNECTIONISM'S PROBLEMS
Reply to Goldsmith on Connectionist-Explanation

Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca

Abstract

Goldsmith (1998) argues that I (Green 1998a) am wrong in asserting that nodes and connections are the theoretical entities of connectionist theories. I reply that if he is right, then connectionist theory is not connectionist after all. I also comment briefly on Seidenberg's (1993) approach to the interpretation of connectionist research, and on the issue of the proper distinction to be drawn between theories and models.

Keywords

artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.
1. Goldsmith (1998) touches on a number of issues that have been discussed in my replies to previous commentaries: localist connectionist models (Green 1998b), Seidenberg (Green 1998c), statistical techniques for post hoc analysis of the behavior of networks (Green 1998c), and the proper analysis of the distinction between models and theories (Green 1998d). He offers no replies to the arguments presented there.

2. More importantly, Goldsmith argues that I am mistaken in claiming that the nodes and connections of connectionist networks are the "real" theoretical entities of connectionist theory. Instead, he argues, "that role is filled by more abstract characterizations of the functional properties of the networks." I have two objections to this claim. First, if nodes and connections are not theoretical entities, then what are they? They are not observable entities, and there are not many options left. One might say that they are not important at all; that they are those parts of any theory that are not to be taken seriously. (To use the example of Lee et al. (1998), just because we use planets to model subatomic particles it does not follow that we expect them to have moons.) But this brings me to my second objection: if the nodes and connections are not to be taken seriously, then in what sense is the person who so abandons them still to be considered a connectionist? None that I can see. If the "real" theoretical entities are higher "functional properties," then those properties, not nodes and connections, are the real core of the theory, which is now only incidentally connectionist.

3. There seems to be only one way out of this dilemma. One could argue that the "functional properties" in which one is interested imply strictly (i.e., deductively) the existence of a connectionist substrate. For example, if the "functional properties" of interest are, as Goldsmith suggests (para. 10), "stable attractors, clean-up processes, collateral support, superposition, gradient descent, trajectories in weight space, trajectories in state space, and so forth," and if one can show (by a sort of Kantian "transcendental deduction") that these cannot be instantiated in any way but in a connectionist network, then one can have one's cake and eat it too. Or can one? For this just puts connectionists right back where they started: trying to show that either cognition, or some aspect of its substrate, is connectionist; that is, nodes and connections. And this is precisely what I suggested in my target article that connectionists have not done, and seem unlikely to do, unless they adopt a neural interpretation of their networks.

4. Let me reiterate, because Goldsmith (para. 12), like so many other commentators, overstated my position on the neural interpretation: Connectionists could develop some alternative ontology that will allow them to slip by the Scylla of propositional attitudes and the Charybdis of neural modeling. However, one has yet to see an even vaguely plausible alternative of this type proposed.

5. Before closing, allow me briefly to touch on two of the issues that Goldsmith mentioned but that had been brought up by other commentators. First, Goldsmith, like Medler and Dawson (1998) before him, seems to hold that Seidenberg's (1993) analysis of Seidenberg and McClelland's (1989) word-reading model is a paradigm of connectionist analysis. (I must, at this point, note my utter, jaw-dropping awe at the chutzpah it must have taken to draw on the work of Chomsky, of all people, to buttress a defense of connectionism! See para. 6.) He says that "for [Seidenberg], connectionist theories, like other theories, are embodied in concepts and principles rather than in units and connections... [he] identified 'broad theoretical claims,' such as those concerning the representational status of words... and the postulation of a single-process mechanism... to handle rule-governed words, irregular words, and non-words" (para. 7).

6. Goldsmith leaves out an important premise in the argument that gets one from a computer program to an explanation of cognition, however. Only if we have reason to believe that connectionist networks have something deeply in common with "real" (i.e., in vivo) cognitive processes (or their underpinnings) -- something that goes beyond a mere similarity in superficial behavior -- do we have reason to believe that the one may play a significant role in explaining the other. If we have no reason to believe that their commonalities extend below the surface, then we have no reason to believe that what is going on inside a connectionist net has anything at all to do with what is going on "inside" cognition. Surface similarities are a dime a dozen: both the sky and a robin's egg are blue, but that does not give us reason to believe that they have something deep and important in common. So, unless we have reason to believe that cognitive processes are actually instantiated in nodes and connections (e.g., we believe that the neural processes underlying cognition are themselves connectionist nets, or are in some important way closely related to connectionist nets), the fact that both brains and nets can learn to "read" words is to be regarded as just another interesting coincidence.

7. Second, Goldsmith claims that I have ignored an important "division of labor" between the model and the modeller. He says (para. 8), "Theories are put forward by scientists, not by models. Simulation models are powerful tools that help researchers develop, test, present and demonstrate the plausibility of their theoretical ideas." As far as it goes, this seems reasonable enough, but it does not go very far at all. It is tantamount to explaining rockets by saying that they are powerful tools that allow astronauts to go into outer space. The questions that are put forward in the massive philosophical literature about the role of models in science (the "semantic" approach to philosophy of science; see Green 1998d for references) concern HOW they are so used, WHAT sorts of things they are that they can effectively be put to such uses, and WHETHER they embody theories in some important way (contra Goldsmith), or, by contrast, are just auxiliaries to theory, or, by yet another contrast, are in fact the "real" aim of science, such that our intense focus on theories over the last century or more has been misplaced. Goldsmith addresses none of these deep, enduring questions, much less answers them, so there is no real reason to accept that his claim in this regard has any more bearing on my argument about connectionism than on any other question in science.

REFERENCES

Green, C.D. (1998a) Are connectionist models theories of cognition? PSYCOLOQUY 9(4) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/ psyc.98.9.04.connectionist-explanation.1.green

Green, C.D. (1998b) Does localist connectionism solve the problem? Reply to Grainger & Jacobs on Connectionist-Explanation. PSYCOLOQUY 9(14) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/ psyc.98.9.14.connectionist-explanation.11.green

Green, C.D. (1998c) Statistical analyses do not solve connectionism's problem: Reply to Medler & Dawson on Connectionist-Explanation. PSYCOLOQUY 9(15) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/ psycoloquy.98.9.15.connectionist-explanation.12.green

Green, C.D. (1998d) Connectionist nets are only good models if we know what they model: Reply to Lee, Van Heuveln, Morrison, & Dietrich. PSYCOLOQUY 9(23) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/ psycoloquy.98.9.23.connectionist-explanation.20.green

Goldsmith (1998) Connectionist modeling and theorizing: Who does the explaining and how? Commentary on Green on connectionist-explanation. PSYCOLOQUY 9(18) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/ psycoloquy.98.9.18.connectionist-explanation.15.green

Lee, C., van Heuveln, B., Morrison, C.T., & Dietrich, E. (1998) Why connectionist nets are good models: Commentary on Green on connectionist-explanation. PSYCOLOQUY 9(17) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/ psycoloquy.98.9.17.connectionist-explanation.14.green

Medler, D. A., & Dawson, M. R. W. (1998). Connectionism and cognitive theories. PSYCOLOQUY 9(11) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/ psyc.98.9.11.connectionist-explanation.8.medler

Seidenberg, M. & McClelland, J. (1989). A distributed developmental model of word recognition and naming. Psychological Review, 96, 523-568.

Seidenberg, M. (1993). Connectionist models and cognitive science. Psychological Science, 4, 228-235.
