Christopher D. Green (1998) The Degrees of Freedom Would be Tolerable if Nodes Were Neural. Psycoloquy: 9(26) Connectionist Explanation (23)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

THE DEGREES OF FREEDOM WOULD BE TOLERABLE IF NODES WERE NEURAL
Reply to Lamm on Connectionist-Explanation

Christopher D. Green
Department of Psychology
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo

christo@yorku.ca

Abstract

Lamm (1998) expresses concern that there is a lack of fit between my call to connectionists to declare themselves to be direct modelers of neural activity and my concern that connectionist nets have too many degrees of freedom (Green 1998). I am sympathetic with his worry, but argue that the degrees of freedom problem does not loom as large once we know what constraints we are working under -- as we would if we declared that connectionist nets are literal neural models.

Keywords

artificial intelligence, cognition, computer modelling, connectionism, epistemology, explanation, methodology, neural nets, philosophy of science, theory.
1. Lamm (1998) calls into question the fit between, on the one hand, my call that connectionists explicitly declare themselves to be modeling neural activity directly [1] and abide by the constraints inherent in that project, and, on the other hand, what he believes to be my expressed worry that connectionist nets may just have too many degrees of freedom to EVER be good scientific theories (Green 1998). He makes a good point. As he says, while admitting that one is modeling neural activity "makes a model's terms interpretable, it does not solve the 'degrees of freedom' problem" (para. 2) [2]. He goes on to point out that "the immediate result [of any serious attempt to model neural activity literally] will be an exponential explosion in the number of units/terms of a model" (para. 4). The implication is that someone who was truly concerned about degrees of freedom could not countenance such a development.

2. I have sympathy with Lamm's concern, but I think there is a way out. There are, as we all know, billions upon billions of neurons in the human brain. If that is the domain that we have declared we are modeling, then surely we are allowed to have just as many theoretical entities in our models, as long as they are doing just as much work. The problem with the hundreds of nodes and thousands of connections in the typical connectionist network (one that is not said to be a direct model of neural activity) is that we do not know what sorts of entities they refer to, and thus we have no idea how many there should be to accomplish a particular kind of task. The connectionist just keeps adding more until the model "gets it right," so to speak. The degrees of freedom are unconstrained. If we all commit to modeling neural activity, however, we have a concrete idea of what kinds of constraints we are working under.
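To make the parameter-counting point concrete, the following is a minimal sketch (mine, not part of the target article; the layer sizes are purely hypothetical) of how the number of free parameters in a simple one-hidden-layer feedforward network grows as hidden units are added. In the unconstrained case, nothing in the model itself tells the connectionist when to stop adding them.

    def free_parameters(n_in, n_hidden, n_out):
        """Weights plus biases for a one-hidden-layer feedforward network."""
        return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

    # Hypothetical input/output sizes; only the hidden layer varies.
    for n_hidden in (10, 50, 100, 500):
        print(n_hidden, "hidden units ->",
              free_parameters(100, n_hidden, 10), "free parameters")
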

3. Some are bound to say, "That's not much of a constraint at all, because no connectionist network has ever come close to having the number of nodes that a brain has neurons." True enough, but then again no connectionist network has ever come close to doing all the things the brain does. We mostly model little problems that are computed in relatively small regions of the brain. Besides, the whole discussion is now predicated on the assumption that connectionist networks are direct models of neural activity, and explicating that assumption is all I was really aiming for.

ENDNOTES

[1] Because the point has been misstated so often, it must be repeated that the target article is not committed to the idea that the ontological basis for connectionist nets MUST be neural. Proposals about other ontological realms that perhaps lie somewhere between the mental symbols of "classical" cognitive science and neural activity may be viable. However, so far no such (plausible) proposals have been made in this discussion. To repeat: there has to be SOME specified domain of entities to which the nodes and units of connectionist nets refer; it need not be neural.

[2] I must admit to having trouble, however, with his claim that this strategy "interposes the human brain as a hidden layer between connectionism and cognition" (para. 2). What it does is make the human brain (or parts of it) a "real world" instantiation of a connectionist net that computes cognitive processes. Perhaps Lamm was just "making a phrase," and did not mean his use of "hidden layer" to be taken literally.

REFERENCES

Green, C.D. (1998) Are connectionist models theories of cognition? PSYCOLOQUY 9(4) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.04.connectionist-explanation.1.green

Lamm, C. (1998) Does brain activity-oriented modelling solve the problem? Commentary on Green on Connectionist-Explanation. PSYCOLOQUY 9(19) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psycoloquy.98.9.19.connectionist-explanation.16.green

