Green is right to question the explanatory role of connectionist models in cognitive science. What is more, he is generally right in his judgement that the only way of interpreting connectionist models as theories of cognitive phenomena is by construing them as "literal models of brain activity" (1998, para. 20). This is because connectionist explanations of cognitive phenomena are more dependent on details of implementation than their conventional ("classical") counterparts.
2. It is important to appreciate, however, that in spite of its neural inspiration, connectionism has a life beyond modelling human cognitive phenomena. Just as with conventional digital computers, connectionist nets can be used to perform all manner of computational tasks. And while some of the tasks that well-known connectionist nets perform have a human-like flavor (such as recognizing faces or converting written text into speech), their relationship with human cognition is actually very tenuous. For example, no one, least of all Sejnowski and Rosenberg (1987), claims that NETtalk is a realistic model of how English-speaking subjects convert graphemes into phonemes. In this sense, NETtalk certainly does not represent a theory of (even a fragment of) human cognition. What is interesting about NETtalk and similar connectionist nets, however, is that they demonstrate that even very simple networks of processing units (simple, at least, when compared with the complexity and size of real neural networks) can realize some very powerful information processing capacities. And this is relevant for cognitive science because it lends plausibility to the suggestion that these nets capture, albeit in a rudimentary way, the style of computation that is employed by the brain's own neural networks, and hence that connectionism can be used as a basis for framing cognitive explanations.
3. It is this last, theoretical, aspect of connectionism that is the focus of Green's target article (1998). He contends that despite the reluctance expressed by some prominent connectionists about interpreting connectionist networks as literal models of brain activity (e.g., Smolensky, 1988), it is only by so doing that one makes these networks into serious candidates for theories of human cognition (1998, para. 20). Green, in my view, is right about this. What he is highlighting is the fact, much stressed by some connectionists (e.g., Chater and Oaksford, 1990), that connectionist explanations of cognitive phenomena are more dependent on implementational considerations than their conventional ("classical") counterparts. Unless connectionist models are implemented in neurally realistic connectionist networks, their status as cognitive explanations is indeed suspect.
4. This raises a number of complex issues concerning the precise fashion in which connectionism represents a genuine alternative to the classical conception of cognition. But one relatively straightforward way of illustrating this point about connectionism, and hence another way of arriving at Green's conclusion, is by contrasting the means by which connectionists go about fashioning their explanations of cognitive phenomena with the standard methodology used in classical cognitive science.
5. Classical cognitive science typically pursues a "top-down" methodological strategy: starting from an input/output characterization of a cognitive capacity, one designs both a set of symbolic representations that the capacity draws upon in the course of its operation and an algorithm by which those symbols are manipulated. In the terms of Marr's influential analysis of information processing tasks (1982, Chp.1), this strategy involves progressively enriching a "level-1" description of the capacity in question (a description in which "the performance of the [task] is characterized as a mapping of one kind of information onto another" (1982, p.24)), until a fully-fledged "level-2" theory of the capacity has been articulated (a theory which describes the symbolic transformations needed in order to satisfy the input/output profile of the task). Marr's own work on vision is a classic example of this top-down methodological strategy in practice (1982). And the psychological theorizing on memory and language Green describes (1998, paras.4-8) also has this flavor.
6. The methodology used by connectionists, as they seek explanations of cognitive phenomena, could not be more different. As Clark (1990) has observed, the connectionist performs a kind of Copernican revolution in cognitive science: "the connectionist effectively inverts the usual temporal and methodological order of explanation, much as Copernicus inverted the usual astronomical model of the day by having the earth revolve around the sun instead of the other way round. Likewise, in connectionist theorizing, the high-level understanding will be made to revolve around a working [connectionist network] which has learnt how to negotiate some cognitive terrain" (p.299). That is, rather than articulating a computational theory around the input/output profile of some cognitive capacity, the connectionist seeks to construct a connectionist network capable of performing the input/output transformations in question. It is only once this working network is in hand -- an outcome that can be painstakingly slow to achieve, as the model builder tries out all kinds of possible network configurations and learning rules -- that the theorist is able (by various numerical methods such as cluster analysis) to articulate a computational theory of the capacity in question: an account in terms of representational transformations. In the terms of Marr's analysis, connectionists pursue a "bottom-up" methodological strategy; they construct level-2 explanations of cognitive phenomena on the foundations of working models (i.e., "level-3" implementations).
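The bottom-up procedure described above can be sketched concretely. The following toy Python example is entirely my own illustration, not anything from Green or Clark: the parity task, the network sizes, the learning rate, and the use of a bare-bones k-means clustering are all assumptions made for the sake of a runnable sketch. It first trains a small network to perform an input/output mapping, and only afterwards applies a rudimentary cluster analysis to the learned hidden-unit activations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (an illustrative assumption): map each 4-bit input
# pattern to its parity -- a stand-in for "some cognitive terrain"
# the network must learn to negotiate.
X = np.array([[int(b) for b in f"{i:04b}"] for i in range(16)], dtype=float)
y = X.sum(axis=1) % 2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 1: build a working network -- one hidden layer, trained by
# plain batch backpropagation on mean squared error.
n_hidden = 8
W1 = rng.normal(0.0, 0.5, (4, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, n_hidden);      b2 = 0.0

def forward():
    h = sigmoid(X @ W1 + b1)           # hidden activations
    return h, sigmoid(h @ W2 + b2)     # ... and network output

_, out = forward()
initial_loss = np.mean((out - y) ** 2)

lr = 0.5
for _ in range(5000):
    h, out = forward()
    d_out = (out - y) * out * (1 - out)           # output-layer delta
    d_h = np.outer(d_out, W2) * h * (1 - h)       # hidden-layer delta
    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean()
    W1 -= lr * (X.T @ d_h) / len(X);   b1 -= lr * d_h.mean(axis=0)

h, out = forward()
final_loss = np.mean((out - y) ** 2)

# Step 2: only now "extract the theory" -- cluster the learned
# hidden-layer activations with a few iterations of k-means.
k = 2
centers = h[rng.choice(len(h), k, replace=False)]
for _ in range(20):
    labels = ((h[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
    for j in range(k):
        if np.any(labels == j):
            centers[j] = h[labels == j].mean(axis=0)

print("loss: %.3f -> %.3f" % (initial_loss, final_loss))
print("cluster sizes:", np.bincount(labels, minlength=k))
```

The point of the sketch is the ordering, not the particular numbers: the higher-level description (here, the grouping of inputs induced over hidden activations) is read off the working model after the fact, which is exactly the inversion of the classical top-down order of explanation.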
7. The bottom-up methodology of connectionism has profound implications for the nature of explanation in connectionist cognitive science. In a nutshell, it means that connectionist theories of cognitive phenomena are only as good as the network implementations from which they have been derived. To the extent that these implementations incorporate features that are neurophysiologically and neuroanatomically unrealistic, the cognitive theories they license will be suspect. Green is right; connectionists who seek to develop theories of human cognition should eventually restrict themselves to "units, connections, and rules that use all and only principles that are known to be true of neurons" (1998, para. 20).
8. This, of course, should not prevent these connectionists from experimenting in the meantime with networks which do not (or at least do not appear to) satisfy this requirement. Part of the problem here is that we are still largely in the dark about what the computationally significant properties of real neural networks are. Indeed, as others have pointed out (e.g., Churchland, 1986, Chp.9), what is required here is a co-evolutionary research strategy: a strategy in which connectionists and neuroscientists guide each other to the features of neurons and their synaptic connectivity that are essential to their cognitive function. Connectionists should always be willing to constrain their processing units, connections and learning rules in accordance with the latest work in neuroscience. But equally, there are valuable lessons to be learned by neuroscientists from connectionist "conjectures" about mechanisms that might be present in real neural networks.
Incidentally, there are other prominent connectionists who are not so reluctant. Sejnowski (1986), for example, argues that while PDP systems do not attempt to capture molecular and cellular detail, they are nonetheless "stripped-down versions of real neural networks similar to models in physics such as models of ferromagnetism that replace iron with a lattice of spins interacting with their nearest neighbors" (1986, p.388; see also Churchland and Sejnowski 1992, Chp.3).
Chater, N. & Oaksford, M. (1990). Autonomy, implementation and cognitive architecture: A reply to Fodor and Pylyshyn. Cognition 34: 93-107.
Churchland, P.S. (1986) Neurophilosophy. MIT Press.
Churchland, P.S. & Sejnowski, T.J. (1992) The Computational Brain. MIT Press.
Clark, A. (1990) Connectionism, competence, and explanation. In: The Philosophy of Artificial Intelligence, ed. Boden, M. Oxford: Oxford University Press.
Green, C.D. (1998) Are Connectionist Models Theories of Cognition? PSYCOLOQUY 9(4) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.04.connectionist-explanation.1.green
Marr, D. (1982). Vision. San Francisco: Freeman.
Sejnowski, T.J. (1986) Open questions about computation in cerebral cortex. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition Vol. 2: Psychological and Biological Models, ed. McClelland, J.L. & Rumelhart, D.E. MIT Press.
Sejnowski, T.J. & Rosenberg, C. (1987) Parallel networks that learn to pronounce English text. Complex Systems 1: 145-68.
Smolensky, P. (1988) On the proper treatment of connectionism. Behavioral and Brain Sciences 11: 1-23.