Mike Page (1994) Real Progress in Neural Modelling: From a Node to a Sonnet. Psycoloquy: 5(75) Pattern Recognition (9)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

REAL PROGRESS IN NEURAL MODELLING:
FROM A NODE TO A SONNET
Book Review of Nigrin on Pattern-Recognition

Mike Page
Medical Research Council Applied Psychology Unit
15 Chaucer Rd., CAMBRIDGE, CB2 2EF
United Kingdom
voice: U.K. (0223) 355294 #742

mike.page@mrc-apu.cam.ac.uk

Abstract

Nigrin's (1993) book describes the development of his SONNET model from foundations laid down by Grossberg and colleagues. The model is rigorous and imaginative, and its design and implementation are described with great clarity. The book suffers a little, in its later stages, from a lack of simulations, coupled with a near surfeit of ideas. Nonetheless, this book is an important addition to the neural-modelling literature, one which comes heartily recommended.

Keywords

geometric transformation invariance, neural network models, pattern recognition.
1. It seems a shame to start what will otherwise be a reasonably positive review with a criticism, but, given that the title of a book is the first thing one is likely to read, it seems appropriate to observe at the outset that Nigrin (1993, 1994) might have done better in this regard. His title suggests, perhaps, a review of neural-network approaches to pattern recognition, whereas this book concentrates almost entirely on the development of Nigrin's (more elegantly titled) SONNET model.

2. Inasmuch as it reviews any of the vast literature which might plausibly fall within the scope defined by the title, it heavily emphasizes the work of Stephen Grossberg and colleagues at the Boston Center for Adaptive Systems. While such an emphasis appeals to the proclivities of this particular reviewer, it may not be universally welcomed. This is partly because adherents to the "Boston approach" have adopted a language which is not universally accepted. For example, in the first chapter, the concepts "Short Term Memory" and "Long Term Memory" are identified with cell activity and connection weights respectively. Many readers, in particular psychologists, will find this identification confusing. For them, "Short Term Memory" is a description applied to performance across a wide variety of experimental situations, characterized by short retention intervals rather than by any presumed mechanism. Indeed, many previous models of aspects of short-term memory have involved weight learning (see e.g., Burgess and Hitch's (1992) model of serial recall). It is to be hoped that readers will be able to overcome any initial discomfort, since I believe that the work of Grossberg et al., and Nigrin's extensions of it, have much to recommend them.

3. Nigrin's introductory chapter contains a clear, if not uncontroversial, statement of design philosophy and a bold summary of the "properties which should be satisfied by any classifying system" (p. 20). Whilst extending this list of properties to ANY CLASSIFYING SYSTEM is perhaps over-egging the pudding, this summary is thoughtful and instructive in the demands it makes of systems which seek to perform those tasks to which Nigrin later applies SONNET. In building a neural-network model, one may well decide to relax certain of Nigrin's stringent criteria; indeed the extent to which the SONNET model itself lives up to such a high specification is debatable (see later). Nevertheless, one should be aware that such decisions have been made and be prepared, if necessary, to defend them in the context of a particular application. The neural network literature is littered with instances in which, for example, supervised learning has been employed in a quite inappropriate manner. Nigrin's checklist is thus a valuable yardstick against which to measure the validity of particular models.

4. In chapter 2, Nigrin gives an admirably clear exposition of the ideas underlying Adaptive Resonance Theory (ART), including additive and shunting equations for cell dynamics, the noise-saturation dilemma, instar and outstar learning, and the stabilizing effect of feedback. He goes on to relate these "spatial" mechanisms to the coding of temporal patterns and provides both a concise description and thoroughgoing critique of Cohen and Grossberg's (1987) masking field model. It is this critique which really drives the subsequent development of the SONNET model and, for those who struggled to apply the masking field in its raw form, this is where the book really takes off.
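
For readers new to this formalism, the shunting dynamics and instar learning on which Nigrin builds can be written, in one common Grossberg-style form, roughly as follows (the precise terms and parameters vary from application to application, so this is only an orienting sketch, not the book's own equations):

    \frac{dx_i}{dt} = -A x_i + (B - x_i)\,E_i - (x_i + C)\,F_i      [shunting dynamics]
    \frac{dz_{ij}}{dt} = \epsilon\, f(x_j)\,[\,x_i - z_{ij}\,]      [instar learning]

Here x_i is the activity of cell i (the "Short Term Memory" of paragraph 2), E_i and F_i are its total excitatory and inhibitory inputs, and the multiplicative (shunting) terms confine x_i to the range [-C, B] however large the inputs grow, which is the standard response to the noise-saturation dilemma. In the instar rule, the weights z_ij (the "Long Term Memory") converging on coding cell j track the input pattern only while f(x_j) is large; the outstar applies the same rule to a cell's outgoing weights, so that an active source cell learns the pattern of activity across its targets.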

5. Ironically, it is in the latter part of this chapter, the part relating to temporal order, that Nigrin also appears temporarily to relax his critical faculties, particularly with reference to Grossberg's earlier work. The section describing the storage of temporal order as a gradient across localist item representations (pp. 70-79) is rather too respectful, in that it ignores entirely a large body of experimental work relating to, for example, effects of phonological similarity, word-length, irrelevant speech, and modality on short-term serial recall (see e.g., Baddeley, 1990). Whilst I am sympathetic to the general approach (Page, 1993; Page, in press), I am also aware that psychologists faced with such an ill-specified serial-order mechanism will be justifiably sceptical. The point is made most clearly on pages 78-79, where Nigrin claims, without suggesting any mechanism, that a "bow" in the activation gradient across localist item representations can give rise to an equivalent bow in a standard serial-recall curve. This is clearly not the case: using the method of reading off items in order of their decreasing activation, items from the end of the list (that is, those with high activations) will tend to be recalled too early. This is quite the opposite of what the recency portion of a SERIAL recall curve would suggest, namely that items from the end of the list tend to be recalled last. In this case, Nigrin should have heeded his own footnote, which observes that "this is a good example where data from humans can help in neural network design".
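
The point can be made concrete with a toy calculation (my own illustration, using an invented "bowed" activation gradient rather than anything simulated from the book): reading a bowed gradient off in order of decreasing activation pulls the final items towards the front of the recalled sequence, not towards the end.

    # Illustration only: a hypothetical bowed activation gradient over a
    # six-item list (high at both ends, low in the middle), read off in
    # order of decreasing activation.
    items = ["i1", "i2", "i3", "i4", "i5", "i6"]
    activations = [0.90, 0.75, 0.60, 0.50, 0.65, 0.80]

    recall_order = [item for _, item in
                    sorted(zip(activations, items), reverse=True)]
    print(recall_order)   # ['i1', 'i6', 'i2', 'i5', 'i3', 'i4']

    # The late items i6 and i5 appear near the START of the output sequence,
    # whereas the recency advantage in serial recall reflects late items
    # being recalled accurately in their (late) positions.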

6. The SONNET network, described in chapters 3 and 4, results from a bold attempt to deal with the weaknesses inherent in the original masking-field design, without sacrificing the important ideas on which that design was based. The thoroughness with which Nigrin approaches this daunting task is admirable. He identifies problems with great precision and provides imaginative and detailed solutions. Of particular note are (1) his insistence that SONNET should self-organize from a HOMOGENEOUS network, thus avoiding many problems associated with combinatorial explosion, (2) the formulation for I*, which departs from the more traditional use of dot-products alone, (3) the avoidance, in certain circumstances, of a reset mechanism (pace Pickering, 1994), and (4) the detail and clarity of the learning rules employed. Whilst the network developed in this chapter is complex, possessing a number of free parameters which will alarm many psychological modellers, Nigrin clearly elucidates the reasons for this complexity.
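
To see why the departure from dot products matters, consider the familiar subset/superset problem (the sketch below is my own illustration, using a standard ART-style normalized choice function as the remedy; it is not Nigrin's I* formulation): a cell coding {A, B} and a cell coding {A, B, C, D} cannot be distinguished by raw dot products when the input is exactly {A, B}.

    import numpy as np

    # Illustration of the subset/superset problem, not of Nigrin's I*.
    w_AB   = np.array([1.0, 1.0, 0.0, 0.0])   # weights of a cell coding {A, B}
    w_ABCD = np.array([1.0, 1.0, 1.0, 1.0])   # weights of a cell coding {A, B, C, D}
    x      = np.array([1.0, 1.0, 0.0, 0.0])   # the input is exactly {A, B}

    # Raw dot products cannot tell the exact match from the superset cell.
    print(np.dot(w_AB, x), np.dot(w_ABCD, x))        # 2.0 2.0

    # One standard remedy (the ART-1 choice function) normalizes by the size
    # of the learned code, so larger codes must be better matched to win.
    def choice(w, x, alpha=0.5):
        return np.dot(w, x) / (alpha + np.sum(w))

    print(choice(w_AB, x), choice(w_ABCD, x))        # 0.8 0.444...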

7. Occasionally, however, one does get the feeling that Nigrin is over-zealous in his desire to cover every eventuality, almost regardless of whether such eventualities are likely to occur in vivo, or whether, given that they did occur, they would be dealt with as elegantly as Nigrin would hope. As an example, Nigrin sets great store by the ability of the network to learn to recognize words from continuous streams of, say, speech. This seems an unreasonably harsh requirement. Ignoring for the moment the fact that many words will be learned in isolation, and possibly under supervision, even words in continuous speech will be segmented somewhat by patterns of stress (see e.g., Cutler and Norris, 1988). Working with a version of SONNET 1, Page (1993) found that it was necessary, not just desirable, to use additional information, in this case metrical information, to ensure correct operation when faced with "continuous" input. This was achieved partly by allowing the metrical information to affect the (partially random) F1-saturation reset mechanism, itself perhaps the weakest part of SONNET's operation. Once again, closer examination of human performance might have allowed Nigrin to be content with a slightly simpler version of his model. Nevertheless, the fact that he has solved a harder problem than perhaps exists in reality should not detract too much from the praise which I believe SONNET deserves. Chapters 3 and 4 benefit from clearly motivated and well-described simulations.
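
The metrical cue in question can be stated very simply. As a rough caricature of Cutler and Norris's (1988) proposal (and not of the mechanism actually used in SONNET 1 or in Page, 1993), strong syllables are treated as likely word onsets, giving a crude initial segmentation of the stream:

    # Rough caricature of the metrical segmentation idea (Cutler and Norris,
    # 1988): posit a candidate word boundary before each strong syllable.
    syllables = [("con", "weak"), ("DUC", "strong"), ("tor", "weak"),
                 ("PLAYS", "strong"), ("the", "weak"),
                 ("VI", "strong"), ("o", "weak"), ("lin", "weak")]

    segments, current = [], []
    for syllable, stress in syllables:
        if stress == "strong" and current:
            segments.append(current)      # candidate word boundary here
            current = []
        current.append(syllable)
    segments.append(current)

    print(segments)
    # [['con'], ['DUC', 'tor'], ['PLAYS', 'the'], ['VI', 'o', 'lin']]

The segmentation is only approximate (weak-initial words such as "conductor" are missegmented), but even this much structure greatly constrains where word candidates need to be considered.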

8. The same cannot be said of the subsequent chapters. Chapter 5 contains suggestions of how the basic SONNET module might be incorporated within a hierarchy. Again the problems are clearly elucidated and plausible mechanisms are suggested, but unfortunately these are not translated into specific formulations, still less are they simulated. For example, on page 223 we are assured that feedback should be used to increase the confidence (I*) of a cell's classification. The precise nature of this influence is not discussed and the idea is left hanging. The failure to convert these ideas into simulations allows potential problems to go unnoticed. For instance, Page (1993) found that, if the patterns ABCD and BCDE had both been learned as familiar, then the stimulus BCD elicited greater top-down expectation for a subsequent A than for the more plausible E. This is a specific example of a more general problem for the unmodified SONNET model, namely that it does not act as a "Cohort-type" model (Marslen-Wilson and Welsh, 1978), in that the activation of "words" can persist in spite of their being inconsistent with the "stimulus so far". Page (1993) suggests modifications which allow SONNET to behave more like a Cohort-type model, as well as solving a similar problem relating to inter-module expectation as it filters down a hierarchy.
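
For comparison, the Cohort principle appealed to here is easily stated: only those words consistent with the stimulus heard so far remain active as candidates, and top-down expectations come only from those survivors. The sketch below illustrates that principle for the ABCD/BCDE example (it is a statement of the principle, not of the modifications proposed in Page, 1993):

    # Cohort principle (Marslen-Wilson and Welsh, 1978), schematically:
    # candidates are the learned "words" consistent with the input so far.
    lexicon = ["ABCD", "BCDE"]

    def cohort(stimulus_so_far, lexicon):
        return [w for w in lexicon if w.startswith(stimulus_so_far)]

    candidates = cohort("BCD", lexicon)
    print(candidates)                              # ['BCDE']
    print([w[len("BCD")] for w in candidates])     # ['E'], the expected continuation

    # A Cohort-consistent system therefore expects E after the stimulus BCD;
    # a system in which ABCD stays active, despite its initial A never having
    # occurred, can instead feed back a spurious expectation for A.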

9. Sections 5.4-5.7 each contain ideas which suffer from not having been worked out in detail. The dynamic binding of rhythm information with item information is unconvincing: surely it is possible to extract rhythmic information from a succession of sounds, each of which is different and perhaps "unfamiliar" and thus unclassified. The putative "elimination of the lockstep operation of the network", by basing the activity of s-cells "not on the order in which the classifications are made but on some function of the activity of s-cells in the previous layer", opens an enormous can of worms: how would this interact with the learning rules, normalization and so on? The potential inclusion of an attentional reset system, though clearly desirable, requires that an unparsed copy of the original stimulus survive as fodder for the post-reset reparsing process.

10. In chapter 6, which Nigrin admits is "one long gedanken experiment", the ideas come thick and fast. Nigrin rightly regards these ideas as some of the most important in the book, and the first sections, on the problems of repeated items and synonyms, are excellent. Likewise, his proposed solutions are fresh and interesting. As an aside, I find (as does da Costa, 1994) his tendency to refer to competition between "links" to be unnecessarily distracting, particularly since he concedes (on p. 252) that the distinction between links and inter-neurons is irrelevant. The novelty of link competition adds nothing to the explanation and serves only to disconcert the reader. Nevertheless, the passages describing the provision of multiple representations for repeating items are particularly good, and Nigrin does not shy away from the implications that this has for classification (pp. 267-272).

11. Proper assessment of the more complex aspects of the extended model, such as the binding of distributed representations, will have to await detailed specification and simulation, as will questions about whether the complex patterns of connectivity can themselves self-organize. In this regard, the reader may feel a little aggrieved that more simulations are not presented in the latter parts of the book. Distributed representation and the "binding problem" are hot topics in connectionism at the moment, and many approaches, including Nigrin's, appeal to pulse-phase information. What is less clear is how the correct binding comes to be represented by relative phase in the first place. Chapter 7 begins to chart out the nature of the SONNET 2 model but stops short of a full specification and simulation. Potential applications of such a model are discussed with reference to translation-invariant, and size-invariant, recognition and an interesting twist is given to the notion of a synonym. Detailed assessment of the plausibility of the model as applied in these particular domains is, I fear, beyond the scope of this review (and probably this reviewer).
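
For orientation, the binding-by-phase idea referred to can be caricatured as follows: cells representing features of the same object fire at the same relative phase, and a readout process groups features by phase. In the sketch below (a generic temporal-synchrony illustration, not a specification of SONNET 2), the phases are simply stipulated, which is precisely the question-begging step identified above.

    import numpy as np

    # Generic sketch of binding by relative phase: features sharing a phase
    # are read out as belonging to the same object. The phase assignment is
    # stipulated here; how a network arrives at it is the open question.
    features = {"red": 0.0, "square": 0.0, "green": np.pi, "circle": np.pi}

    def bound_groups(phases, tol=0.1):
        groups = []
        for name, phase in phases.items():
            for group in groups:
                if abs(phase - group["phase"]) < tol:
                    group["members"].append(name)
                    break
            else:
                groups.append({"phase": phase, "members": [name]})
        return [group["members"] for group in groups]

    print(bound_groups(features))   # [['red', 'square'], ['green', 'circle']]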

12. So how well does Nigrin's model measure up to his own criteria? SONNET certainly self-organizes and uses unsupervised learning, almost to a fault. It is both plastic and stable, though stability has been bought at the price of freezing learning, a measure which may prove difficult to reconcile with the need to modify representations and to unlearn, as well as with effects such as long-term repetition priming. The modelling of noise is too limited: noise would ideally appear as an independent term in each of the model's differential equations. Real-time operation is satisfactorily achieved and a full range of learning rates is possible, though with little indication of how rates might be modulated in response to various learning situations. The network scales well, though clearly some doubts remain, in Nigrin's mind as well as in the reader's, as to whether hardware requirements will prove to be a limiting factor. One particular way in which SONNET scales nicely is that it allows incremental learning of extended sequences (e.g., melodies; see Page, in press), an achievement that has proved difficult in other paradigms. The coarseness of categories can be adjusted using an ART-like vigilance parameter, though it is less clear whether a given stimulus pattern can be represented at two different vigilance levels simultaneously (e.g., both as an exemplar and relative to a prototype). Context sensitivity is supported, though not always in a well-specified fashion. Processing of multiple patterns, combination of existing representations and synonym processing are plausibly promised for SONNET 2, whilst relearning and unlearning are left for a later date.
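
For reference, the vigilance test behind an "ART-like vigilance parameter" takes, in standard ART 1, roughly the following form (the details in SONNET differ; this is only the familiar textbook version): a winning category is accepted only if its learned code matches the input closely enough, so raising the vigilance rho yields finer-grained categories.

    import numpy as np

    # Standard ART 1-style vigilance test: accept the winning category only
    # if the match between input and learned code exceeds the vigilance rho.
    def vigilance_ok(input_vec, weights, rho):
        match = np.sum(np.minimum(input_vec, weights)) / np.sum(input_vec)
        return match >= rho

    x = np.array([1.0, 1.0, 1.0, 0.0])   # input pattern
    w = np.array([1.0, 1.0, 0.0, 0.0])   # learned code of the winning category

    print(vigilance_ok(x, w, rho=0.5))   # True:  coarse categories, accept
    print(vigilance_ok(x, w, rho=0.9))   # False: fine categories, reset and search on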

13. If the summary above appears negative, this is only a result of applying Nigrin's own extremely exacting standards. The core of this book, the development of the SONNET model, is a triumph of imaginative modelling, and one is left, in spite of the reservations expressed above, with a sense of great admiration for the author's endeavours. In short, Nigrin's work sets standards which other modellers would do well to follow.

REFERENCES

Baddeley, A.D. (1990) Human Memory. Hove, U.K.: Lawrence Erlbaum Associates Ltd.

Burgess, N. and Hitch, G. (1992) Towards a network model of the articulatory loop. Journal of Memory and Language, 31, 429-460.

Cohen, M.A. and Grossberg, S. (1987) Masking fields: a massively parallel neural architecture for learning, recognizing, and predicting multiple groupings of data. Applied Optics, 26, 1866-1891.

Cutler, A. and Norris, D. (1988) The role of strong syllables in lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121.

da Costa, L. (1994) A non-mystifying approach to artificial neural networks. PSYCOLOQUY 5(15) pattern-recognition.2.dacosta.

Marslen-Wilson, W.D. and Welsh, A. (1978) Processing interactions and lexical access during word recognition in continuous speech. Cognitive Psychology, 10, 29-63.

Nigrin, A. (1993) Neural Networks for Pattern Recognition. Cambridge, MA: MIT Press.

Nigrin, A. (1994) Precis of Neural Networks for Pattern Recognition. PSYCOLOQUY 5(2) pattern-recognition.1.nigrin.

Page, M.P.A. (1993) Modelling aspects of music perception using self-organizing neural networks. Unpublished doctoral thesis, University of Wales College of Cardiff.

Page, M.P.A. (in press) Modeling the perception of musical sequences with self-organizing neural networks. Connection Science.

Pickering, A.D. (1994) Neural nets cannot live by thought (experiments) alone. PSYCOLOQUY 5(35) pattern-recognition.6.pickering.

