Christopher D. Green (1998) Of Neurons and Connectionist Networks. Psycoloquy: 9(16) Connectionist Explanation (13)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 9(16): Of Neurons and Connectionist Networks

Reply to Hoffman on Connectionist-Explanation

Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3


ABSTRACT: Hoffman (1998) describes a number of difficulties connectionists may face in any attempt to model neural activity with connectionist networks. I have no reason to doubt this, but it is a problem for connectionists rather than for my argument (Green 1998).


KEYWORDS: artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.
1. Hoffman (1998) gives us some excellent reasons to be concerned about the appropriateness of connectionist networks considered as models of neural activity. Although some of his examples now seem outdated (e.g., Minsky & Papert's critique of perceptrons was effectively overcome once we learned how to train three-layer networks; see Werbos 1974; Rumelhart, Hinton & Williams 1986), the issues he raises should give the connectionist some pause before accepting my challenge to regard parallel distributed networks as literal models of brain function.
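The historical aside in paragraph 1 can be made concrete. A single-layer perceptron cannot compute XOR because XOR is not linearly separable (the core of Minsky & Papert's critique); adding one hidden layer removes the limitation. The sketch below hard-codes weights for a 2-2-1 sigmoid network that computes XOR. In practice such weights are learned by backpropagation (Werbos 1974; Rumelhart, Hinton & Williams 1986); all numeric values here are illustrative choices, not taken from any of the cited works.

```python
# A single-layer perceptron cannot compute XOR, but a network with
# one hidden layer can.  Hand-set weights for a 2-2-1 sigmoid net;
# in practice such weights are found by backpropagation.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hidden unit 1 approximates OR(x1, x2); hidden unit 2 approximates
# AND(x1, x2).  Weight format: (w1, w2, bias).
W_HIDDEN = [(20.0, 20.0, -10.0),   # ~OR
            (20.0, 20.0, -30.0)]   # ~AND
# Output unit computes roughly OR-and-not-AND, i.e. XOR.
W_OUT = (20.0, -20.0, -10.0)

def predict(x1, x2):
    h = [sigmoid(w1 * x1 + w2 * x2 + b) for (w1, w2, b) in W_HIDDEN]
    w1, w2, b = W_OUT
    return sigmoid(w1 * h[0] + w2 * h[1] + b)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(predict(a, b)))  # prints the XOR truth table
```

No single layer of weights can do this, because no line separates XOR's positive and negative cases in the input plane; the hidden layer re-represents the inputs so that the output unit's problem becomes linearly separable.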

2. Of course, there are many possible avenues of apparent escape, and only further empirical research will show which, if any, of these are authentic. The most obvious is the frequent claim that the network units do not represent individual neurons, per se, but larger groupings of neurons (bundles, clusters, assemblies, or what have you). It is entirely up to connectionists whether they try to make the "neural" part of the phrase "neural net" stick or, by contrast, attempt to articulate a new domain that exists, somehow, "midway" between the immediate constituents of propositional attitudes, on the one hand, and individual neurons, on the other.

3. Although he does not say so directly, Hoffman seems to believe that his argument has some bearing on mine (Green 1998). I believe this arises from a slight misreading of my target article common to a number of neuroscientists who have read it. I had nothing whatever to say about how SUCCESSFUL connectionists might hope to be, ultimately, in the attempt to use connectionist networks as literal models of neural activity. My point was, rather, that if they do not make such an attempt, their "models" are left with very little in the way of a cognitively relevant domain to map. If Hoffman is right that there are cognitively relevant aspects of neural activity that connectionist networks are unlikely ever to be able to capture, then the problem before them is even more dire than I had suspected.

4. One final thought. Hoffman mentions dynamical systems, which are increasingly popular in cognitive science. He specifically states that the dynamics he is talking about occur within and between neurons; that is to say, he has a clear domain. Many other cognitive dynamicists, however, are not so clear, and may well fall into the very same difficulty that I outlined for connectionists in my target article.


REFERENCES

Green, C. D. (1998) Are connectionist models theories of cognition? PSYCOLOQUY 9(4)

Hoffman, W. C. (1998) Are neural nets a valid model of cognition: Commentary on Green on connectionist-explanation. PSYCOLOQUY 9(12) psyc.98.9.12.connectionist-explanation.9.hoffman

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986) Learning internal representations by error propagation. In: Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1), ed. Rumelhart, D. E. & McClelland, J. L., MIT Press.

Werbos, P. (1974) Beyond regression: New tools for prediction and analysis in the behavioral sciences. Unpublished doctoral dissertation. Harvard University.
