James Levenick (1993) A Welcome Change From Back-propagation Models of Cognition. Psycoloquy: 4(35) Categorization (6)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

A WELCOME CHANGE FROM BACK-PROPAGATION MODELS OF COGNITION
Book Review of Murre on Categorization

James Levenick
Computer Science Department
Willamette University (D186)
Salem, OR 97301

levenick@willamette.edu

Abstract

Building a computational model of cognition is a daunting task. As one gains even the beginnings of an approximate understanding of the mechanisms, sophistication, subtlety, and raw processing power of the human mind, the prospect of true artificial intelligence appears increasingly remote, in spite of various optimistic pronouncements and the apparently exponential growth rate of microprocessor speed. Murre's (1992a, b) CALM (Categorization and Learning Module), and its several variants introduced herein, is clearly an advance over the simple feedforward back-propagation networks (of various stripes and colors) that have been so prominent in the past several years. CALM has a number of intriguing and laudable attributes, including: (1) a means of doing unsupervised competitive learning; (2) the possibility of resolving the "stability/plasticity" dilemma; (3) some measure of psychological and neurophysiological plausibility; (4) a mechanism to provide automatic arousal and thus more rapid learning in response to novelty; and (5) an appropriate application of a genetic algorithm. This review will consider each of these five and then turn to several questions and more contentious issues.

Keywords

Neural networks, neurobiology, psychology, engineering, CALM, Categorizing And Learning Module, neurocomputers, catastrophic interference, genetic algorithms.

I. A MEANS OF DOING UNSUPERVISED COMPETITIVE LEARNING

1. Any memory model which purports to simulate human memory, even in the weakest sense, must be capable of learning without being provided the correct answer or category. This is the most glaring weakness of back-propagation networks as models of human memory. The CALM models appear to be able to learn to categorize very well without being provided with the correct answers.
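The flavor of unsupervised competitive learning can be conveyed with a minimal winner-take-all sketch. This is my own toy construction, not Murre's CALM: the unit whose weight vector best matches the input wins an intra-level competition, and only the winner's weights move toward the input; no correct answers are ever supplied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three units with fixed (illustrative) initial weight vectors.
weights = np.array([[0.1, 0.1],
                    [0.5, 0.5],
                    [0.9, 0.9]])
lr = 0.1

def train(patterns, epochs=50):
    for _ in range(epochs):
        for x in patterns:
            distances = np.linalg.norm(weights - x, axis=1)
            winner = np.argmin(distances)                  # competition
            weights[winner] += lr * (x - weights[winner])  # only winner learns

# Two unlabeled clusters of inputs; units specialize without supervision.
patterns = np.vstack([rng.normal(0.2, 0.02, (20, 2)),
                      rng.normal(0.8, 0.02, (20, 2))])
train(patterns)
```

After training, different units have come to represent the two clusters, even though no category labels were ever given.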

II. THE POSSIBILITY OF RESOLVING THE "STABILITY/PLASTICITY" DILEMMA.

2. The stability/plasticity dilemma is endemic to learning systems, be they genetic algorithms, classifier systems, or artificial neural networks. The learning system may succeed in learning correct outputs or responses to one set of situations or inputs, but after it subsequently learns to handle some other set of inputs (or, more broadly, after it encounters a different environment), it will fail to respond appropriately to the old, previously learned inputs.

3. The mechanism for this failure is easy to understand in the case of "catastrophic forgetting" in back-propagation networks. There, weights are modified repeatedly until the network gives correct outputs for every input in a data set; with any difficult input set it typically takes many hundreds of repetitions to arrive at a correct array of weights. Outputs are determined by a set of weighted sums; when all of those sums exceed (or fall below) the appropriate thresholds, even by a tiny fraction, training is terminated. Unfortunately, in most cases any subsequent weight adjustment will cause some (or most) of those weighted sums to no longer exceed (or fall below) the thresholds. In some cases even a single small weight change will cause numerous incorrect answers, because training occurs only when the net produces incorrect answers -- there is no overlearning. Thus, the network forgets most of what it knew. This problem is exacerbated by the distributed nature of the representations and the lack of intralevel competition.
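The "barely past threshold" failure mode can be made concrete with a deliberately tiny, hypothetical example (a single perceptron-style unit, not a full back-propagation network): training on each task stops the moment every output just clears threshold zero, so the later task's single weight change is enough to undo the earlier learning.

```python
import numpy as np

def train(w, patterns, lr=0.1, max_epochs=100):
    """Error-driven training: weights change only on wrong answers."""
    for _ in range(max_epochs):
        errors = 0
        for x, target in patterns:
            out = 1 if np.dot(w, x) > 0 else -1
            if out != target:                      # learn only on errors:
                w = w + lr * target * np.asarray(x)  # no overlearning
                errors += 1
        if errors == 0:        # stop as soon as every answer is correct,
            break              # however tiny the margin above threshold
    return w

def accuracy(w, patterns):
    return np.mean([(1 if np.dot(w, x) > 0 else -1) == t for x, t in patterns])

task_a = [([1.0, 0.0], +1)]
task_b = [([1.0, 1.0], -1)]

w = train(np.zeros(2), task_a)   # learn task A: margin is only 0.1
print(accuracy(w, task_a))       # 1.0
w = train(w, task_b)             # now learn task B on the same weights
print(accuracy(w, task_a))       # 0.0 -- task A has been "forgotten"
```

The single weight adjustment demanded by task B pushes task A's weighted sum back below threshold, which is exactly the mechanism the paragraph above describes.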

4. Murre presents two avenues of attack on this problem. His CALM models create modular representations and allow competition and feedback between modules at the same level. These two measures appear to solve the problem (at least in the cases he investigates). These results may point neural network researchers in what seems a useful direction.

III. SOME MEASURE OF PSYCHOLOGICAL AND NEUROPHYSIOLOGICAL PLAUSIBILITY.

5. The basic CALM modules are designed to model neocortical minicolumns. The constraints embodied in the model include: (1) Dale's principle (that individual neurons emit only one type of transmitter), (2) learning as a local phenomenon that does not require knowledge of the correct response, and (3) the capacity to differentiate between novel and familiar input and behave differently on that basis. The second and third constraints represent an advance over simple back-propagation models.

IV. A MECHANISM TO PROVIDE AUTOMATIC AROUSAL AND THUS MORE RAPID LEARNING IN RESPONSE TO NOVELTY.

6. This is an important feature of human cognition that is simply not addressed by most models to date. It is a means of focusing the attention of the system to facilitate learning. The problem of how a learning system can determine what to attend to is mysterious and abstruse; Murre's mechanism offers one avenue of attack on this difficult problem.
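The general idea of novelty-driven arousal can be sketched as follows. This is my own illustrative construction, not Murre's actual arousal circuit: the effective learning rate is scaled by how poorly the best-matching stored prototype fits the input, so novel inputs produce larger weight changes than familiar ones.

```python
import numpy as np

prototypes = np.array([[0.2, 0.2],    # "familiar" stored patterns
                       [0.8, 0.8]])
base_lr = 0.5

def present(x):
    distances = np.linalg.norm(prototypes - x, axis=1)
    winner = int(np.argmin(distances))
    novelty = distances[winner]        # zero for a perfectly familiar input
    # Arousal: the more novel the input, the faster the winner learns.
    prototypes[winner] += base_lr * novelty * (x - prototypes[winner])
    return novelty

familiar = present(np.array([0.2, 0.2]))  # novelty 0.0: nothing to learn
novel = present(np.array([0.5, 0.2]))     # nonzero novelty: larger update
```

A familiar input leaves the weights untouched, while a novel one both registers higher "arousal" and moves its winning prototype noticeably.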

V. AN APPROPRIATE APPLICATION OF A GENETIC ALGORITHM.

7. Murre applies a genetic algorithm to the problem of tuning multiple interacting parameters in attempting to evolve a network to solve a nontrivial problem. This is the class of problems that genetic algorithms were designed to solve, and it is an eminently reasonable approach.
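The shape of such an approach can be sketched with a minimal genetic algorithm. The fitness function below is a stand-in for "run the network and score its behaviour," and TARGET is an invented ideal parameter setting; neither comes from the book. The point is only that selection plus mutation searches a space of interacting parameters that would be tedious to tune by hand.

```python
import random

random.seed(1)

N_PARAMS = 4
TARGET = [0.3, 0.7, 0.1, 0.9]   # hypothetical "ideal" parameter vector

def fitness(genome):
    # Stand-in for running the network: closer to TARGET scores higher.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, sigma=0.05):
    # Small Gaussian perturbations, clipped to the valid parameter range.
    return [min(1.0, max(0.0, g + random.gauss(0, sigma))) for g in genome]

def evolve(pop_size=30, generations=60):
    pop = [[random.random() for _ in range(N_PARAMS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection (elitist)
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
```

After a few dozen generations the best genome sits close to the target setting, without the experimenter ever adjusting a parameter directly.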

VI. SOME QUIBBLES.

8. Chapter 4, Psychological Models, tests CALM-based simulations on what are termed implicit and explicit memory tasks; the results are compared with similar experiments involving human subjects. The thesis of the chapter is that "multiple memory" explanations (such as the semantic-episodic, or procedural-declarative dichotomies) are inferior to "multiple-process" explanations of the effect. The results of the CALM experiments serve only as a proof in principle; they demonstrate that the effect can be produced with a single representational structure, but not that the representational structure used is the only one that can produce the effect.

9. Other multiple-process explanations spring immediately to mind, the best known being short-term memory (STM). It is well known that representations of things that have been experienced recently remain more reactive, more easily brought into consciousness, and more likely to influence categorization decisions than other things. It has not yet been established whether this effect is due to residual activity in those representations, to some form of short-term connection, to a combination of the two, or to some other mechanism entirely. Perhaps CALM connections from context nodes could be construed as a kind of short-term connection strength, but in CALM no distinction is made between those connections and the category-learning connections (which presumably model long-term memory).

10. Dale's law, mentioned on page 9, does not appear in the index of the Eccles (1957) reference cited; perhaps this is because I found only the 1964 edition, which does refer to Dale's principle.

11. The V-nodes are reminiscent of Milner's (1956) regional inhibitory nodes, except that Milnerian nodes are driven by all the nodes they inhibit (as opposed to one per R-node). Later in the book (p. 121) Murre discusses a simulation where the ratio of V-nodes to R-nodes is reduced thus moving the model closer to Milner's formulation.

VII. CONCLUSION

12. This book represents a tremendous amount of work by an organized research team who may yet construct a revolutionary model of cognition. It reads rather like a work in progress, offering the results of a number of simple experiments that demonstrate the model's potential. It seems to have great promise -- CALM is clearly an advance over feedforward back-propagation networks. Its attention to psychological and neurophysiological constraints, its attentional mechanism based on novelty (which offers a solution to the stability/plasticity dilemma), its modularity, and its apparent generality are certainly encouraging. Nevertheless, many models of cognition have appeared that accomplished simple tasks but subsequently could not be extended to handle more difficult tasks. Whether CALM-based models will prove a stepping stone to true artificial intelligence remains to be seen.

REFERENCES:

Eccles, J.C. (1964) The Physiology of Synapses. New York: Springer-Verlag.

Milner, P. M. (1956). The Cell Assembly Mark II. Psychological Review, 62: 242-252.

Murre, J.M.J. (1992a) Learning and Categorization in Modular Neural Networks. UK: Harvester/Wheatsheaf; US: Erlbaum.

Murre, J.M.J. (1992b) Precis of: Learning and Categorization in Modular Neural Networks. PSYCOLOQUY 3(68) categorization.1

