Lamm (1998) expresses concern about a lack of fit between my call for connectionists to declare themselves direct modelers of neural activity and my worry that connectionist nets have too many degrees of freedom (Green 1998). I am sympathetic to his concern, but argue that the degrees of freedom problem does not loom as large once we know what constraints we are working under -- as we would if we declared connectionist nets to be literal neural models.
2. I have sympathy with Lamm's concern, but I think there is a way out. There are, as we all know, billions upon billions of neurons in the human brain. If that is the domain we have declared we are modeling, then surely we are allowed just as many theoretical entities in our models, as long as they are doing just as much work. The problem with the hundreds of nodes and thousands of connections in the typical connectionist network (one not said to be a direct model of neural activity) is that we do not know what sorts of entities they refer to, and thus we have no idea how many there should be to accomplish a particular kind of task. The connectionist just keeps adding more until the model "gets it right," so to speak. The degrees of freedom are unconstrained. If we all commit to modeling neural activity, however, we have a concrete idea of what kinds of constraints we are working under.
3. Some are bound to say, "That's not much of a constraint at all, because no connectionist network has ever come close to having the number of nodes that a brain has neurons." True enough, but then again no connectionist network has ever come close to doing all the things the brain does. We mostly model small problems that are computed in relatively small regions of the brain. Besides, the whole discussion is now predicated on the assumption that connectionist networks are direct models of neural activity, and explicating that assumption is all I was really aiming for.
4. Because the point has been misstated so often, it must be repeated that the target article is not committed to the idea that the ontological basis for connectionist nets MUST be neural. Proposals about other ontological realms, perhaps lying somewhere between the mental symbols of "classical" cognitive science and neural activity, may be viable. So far, however, no such (plausible) proposals have been made in this discussion. To repeat: there has to be SOME specified domain of entities to which the nodes and connections of connectionist nets refer; it need not be neural.
5. I must admit to having trouble, however, with Lamm's claim that this strategy "interposes the human brain as a hidden layer between connectionism and cognition" (para. 2). What it does is make the human brain (or parts of it) a "real world" instantiation of a connectionist net that computes cognitive processes. Perhaps Lamm was just turning a phrase, and did not mean his use of "hidden layer" to be taken literally.
Green, C.D. (1998) Are connectionist models theories of cognition? PSYCOLOQUY 9(4). ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.04.connectionist-explanation.1.green

Lamm, C. (1998) Does brain activity-oriented modelling solve the problem? Commentary on Green on Connectionist-Explanation. PSYCOLOQUY 9(19). ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psycoloquy.98.9.19.connectionist-explanation.16.green