Ken Richardson (1999) Hyperstructure in Brain and Cognition. Psycoloquy: 10(031) Hyperstructure (1)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 10(031): Hyperstructure in Brain and Cognition

HYPERSTRUCTURE IN BRAIN AND COGNITION
Target Article on Hyperstructure

Ken Richardson
Centre for Human Development & Learning
The Open University
Walton Hall
Milton Keynes MK7 6AA
United Kingdom

k.richardson@open.ac.uk

Abstract

This target article tries to identify the informational content of experience underlying object percepts and concepts in complex, changeable environments, in a way which can be related to higher cerebral functions. In complex environments, repetitive experience of feature- and object-images in static, canonical form is rare, and this remains a problem in current theories of conceptual representation. The only reliable information available in natural experience consists of nested covariations or 'hyperstructures'. These need to be registered in a representational system. Such representational hyperstructures can exhibit novel emergent structure and evolve into 'higher' forms of representation, such as object concepts and event- and social-schemas. Together, these can provide high levels of predictability. A sketch of a model of hyperstructural functions in object perception and conception is presented. Some comparisons with related views in the literature of recent decades are made, and some empirical evidence is briefly reviewed.

Keywords

complexity, covariation, features, hypernetwork, hyperstructure, object concepts, receptive field, representation
    The target article below has just appeared in PSYCOLOQUY, a
    refereed journal of Open Peer Commentary sponsored by the American
    Psychological Association. Qualified professional biobehavioral,
    neural or cognitive scientists are hereby invited to submit Open
    Peer Commentary on it. Please email or see websites for
    Instructions if you are not familiar with format or acceptance
    criteria for PSYCOLOQUY commentaries (all submissions are
    refereed).

    To submit articles and commentaries or to seek information:

    EMAIL: psyc@pucc.princeton.edu
    URL:   http://www.princeton.edu/~harnad/psyc.html
           http://www.cogsci.soton.ac.uk/psyc

I. INTRODUCTION.

1. Our understanding of the relationship between brain and cognition is still rather rudimentary. Though remarkable advances have been made with specific aspects of 'higher' cerebral structure and function, as well as in cognitive theory, knitting these advances together into a general picture and theoretical framework has been more difficult: understanding and data interpretation, in either domain, are still dominated by general metaphors rather than detailed models. This problem is seen acutely in attempts to describe and explain perhaps the most basic - yet most formidable - problem: the apprehension of objects in perception, and their processing through object concepts. Most contemporary theorists would probably readily agree with Gibson's (e.g., 1986) view that the formation of a consistent object-image in the face of alteration in appearance (i.e., invariance under transformation) is one of the most fundamental problems in psychology. Many will also agree with Fodor (1994, p.95) that the nature of conceptual representation of objects 'is the pivotal theoretical issue in cognitive science; it's the one that all the others turn on'.

2. In both aspects of object processing, 'features' of objects have been seen as the crucial processing primitives. Thus, the problem of 'delivery' of an object image to the conceptual system has been tackled largely through the use of a 'feature-detection' model. 'First, complex stimuli are analyzed into their constituent components by low-order sensory neurons that are selective for simple stimulus features ... Then increasingly complex stimulus selectivities are synthesised by combining the selectivities of appropriate low-order neurons' (Knudsen & Brainard 1995, p.19). This process continues until an internal object-image has been reconstructed.

3. Models of what then happens in the conceptual system have likewise been dominated by features as processing primitives (for review see Hampton, 1997). The main objective has been to explain how predictability about the infinite variability among experienced objects is achieved by reducing it to smaller sets (categories) of shared properties against which judgements can be made about current novel experiences. As Anderson (1991, p.411) put it, 'if one can establish that an object is in a category, one is in a position to predict a lot about that object'. Concepts are said to be the representations resulting from 'category learning'; and category establishment and functions have been the subjects of much research and many recent models.

4. Almost all models have assumed, though, that concepts consist of multidimensional arrays in memory space, based on the 'similarities' of experienced objects, where similarity is determined by shared features in the received object-images. Category identity of novel object stimuli is then assigned according to feature-by-feature matching to stored information. In prototype models the memory array is a compression or generalization from feature combinations into an ideal type (in a way which differs for different models), against which current images can be matched (for recent review see Hampton 1997). In exemplar models, all received images are stored in memory, but clustered according to similarity of features. New classification decisions are based on a weighted sum of the similarity of a current image's features to those in the alternative clusters (Nosofsky 1986; Medin, Goldstone & Gentner, 1993; Livingstone, Andrews & Harnad, 1998).
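
    To make the mechanics of such models concrete, here is a minimal
    sketch (in Python) in the spirit of exemplar models such as Nosofsky's
    generalized context model; the stimuli, the city-block metric, and the
    parameter values are invented for illustration, not taken from any
    reported experiment:

        import numpy as np

        # Stored exemplars as feature vectors, clustered by category label.
        exemplars = {
            'A': np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1]]),
            'B': np.array([[3.0, 3.0], [2.8, 3.2], [3.1, 2.9]]),
        }
        c = 1.5                    # sensitivity: how fast similarity decays
        w = np.array([0.5, 0.5])   # attention weights over feature dimensions

        def evidence(stimulus, members):
            # Weighted city-block distance to each stored exemplar, mapped
            # to similarity by exponential decay, then summed.
            d = np.sum(w * np.abs(members - stimulus), axis=1)
            return np.sum(np.exp(-c * d))

        def classify(stimulus):
            ev = {cat: evidence(stimulus, m) for cat, m in exemplars.items()}
            total = sum(ev.values())
            return {cat: e / total for cat, e in ev.items()}   # Luce choice rule

        print(classify(np.array([1.5, 1.5])))   # weighted toward category A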

5. Theories of object processing and concepts, then, both seem to take features as the fundamental primitive. The point made by William Uttal ten years ago still seems valid: 'when most theorists ... attempt to develop computational models or psychological theories of form perception, the contemporary zeitgeist is to look at features, to analyze features, to manipulate features, and generally to emphasize features as the putative means by which humans process forms' (1988, p.221). Uttal also observes that this preoccupation has been a distraction, creating a significant distance between models and real experience on the one hand, and real brains on the other. In what follows I want first to describe some of the problems with the 'featural' view of object processing and object concepts. I shall then attempt to lay theoretical foundations for an alternative view, before offering a sketch of that view and how it might work. Next, I will attempt to show how this view actually extends or develops many points at least implicit in a number of previous models. Finally, I shall attempt to show how the terms of the new model correspond with the informational 'substance' of cerebral processing, before looking briefly at some empirical evidence.

II. PROBLEMS WITH FEATURE-BASED THEORIES.

6. Accounts of object-image construction, categorization and conceptual functions, based on orderly arrays of features, raise several difficulties when we look at the real nature of any complex stimuli (such as objects). One of the problems is the sheer multitude of features bombarding the sense receptors. For example, a visual snapshot of a typical scene may consist of fifty or more objects and thousands of component features often, if not usually, partially occluding one another. Even in a world of static objects, with features fully presented in perfect canonical form, how these should be analysed, and what kind of information guides this 'binding', remains unclear, although the visual system does it efficiently in a fraction of a second (Singer & Gray 1995).

7. This problem is compounded when it is realised that external objects and their features are rarely if ever experienced in static canonical forms, and thus 'ready made', as it were, for neural computation. As Zeki (1993, p.243) explains (again with reference to visual apprehension of objects), 'Whether in the hands of the psychologists or the physiologists, there is a fundamental flaw in this approach, for the visual environment is in practice never static; it is usually in a continual state of flux. An object, for example, is commonly viewed from different angles and at different distances ... The pattern of retinal illumination produced by that object will change from moment to moment.' Such movement may be produced 'passively', by the viewer's movements in and around a scene (including eye-movements), or 'actively', by direct manipulation of an object. Either way, the received visual pattern will be a continuously flowing mosaic, almost always novel, in which even simple features or components (already perhaps partly or totally occluded) present rapidly changing forms.

8. The fact that features do not confront the system as ready-made codes creates many problems for models of concepts. Most models, and experimental tests of them, have glossed over this problem by using artificial, highly simplified, exemplars with perfectly formed features in static, time-frozen, conditions. The way in which these correspond with aspects of objects, and what people really attend to, in real experience has to be assumed. Thus, as Wisniewski & Medin (1994, p.364) explain, 'In the standard model, the processes that produce the features are not addressed ... researchers have generally ignored this aspect of learning', and 'it is generally true that features are not just "out there" ready for a learning system to operate over them'. In consequence, the features that are adopted as the basis of models simply tend to be those 'that are easily operationalized in experiments ... yet their selection is generally unprincipled, and we have no means of knowing if (they) really exist in human knowledge' (Barsalou & Hale, 1993, p.137).

9. The nature of the 'similarity' computation which is the basis of a categorization function in the standard view is a related general problem. If we don't know what features people may attend to in real life objects, it is difficult to be certain how they enter into the feature-matching which is said to determine categorization. Perhaps it is not surprising, therefore, that a number of studies (summarised by Medin et al, 1993) show that reported similarity (between a given stimulus and one or more others) varies with experience of those objects, and with the context, direction and motivation of comparison, all adding up to 'a similarity that is too dynamic to be treated as fixed' (p.271).

10. Another general problem concerns 'storage' requirements which are difficult to substantiate in cerebral terms. It has always been difficult to envisage what form a prototype takes in the cognitive/cerebral system. Indeed, Rosch (1978) spoke of prototypes as 'convenient fictions' which do not constitute a theory of representation as such. Likewise, in exemplar models it has always been difficult to envisage how the multitude of experiences of objects of the same class can be stored as independent cases in feature-clusters. This problem becomes compounded by the dynamic nature of most visual experience. Does the fact that the image of an object changes (sometimes radically) from moment to moment mean that each momentary image becomes stored as a different exemplar? By the same token, it is quite possible for objects of different classes to present very similar images. All this must seriously confuse a system's attempts to create a category array on the basis of feature-similarity alone.

11. Of course, it is true that many different models - even strongly contradictory ones, such as 'prototype' versus 'exemplar' models - have found good 'fits' to predictions in experiments. However, these have invariably been achieved by long and repetitive periods of training, with simplified and restricted sets of stimuli, as mentioned above. In addition, such fits have usually been much assisted by the use of ad hoc 'weighting' parameters, such as 'response bias' parameters, which are mostly estimated from the data rather than from a priori theoretical principles. For example, the most empirically impressive model (the exemplar model of Nosofsky) assumes weights on features based on 'selective attention', and a 'scale parameter' reflecting subjects' 'experience with the stimulus' (1986, p.41). The existence of such weightings in real conceptual systems, and their associated computations, however, is uncertain.

12. Indeed, as models have been tried and tested, they have proved to be restricted to narrow aspects of conceptual function, and have had to introduce further processes. In Nosofsky's model, the concept consists of a similarity array, as described above. This appears to be a mere database, that is, an adjunct to the real, but undescribed, conceptual functions, and therefore, 'Operating on this similarity representation may be rather complex attention and decision processes' (Nosofsky, 1986, p.54). The latter seem mostly to be conceptual functions, but we are left in the dark as to what they are. In addition, it has become increasingly necessary to postulate a system which abstracts analytical 'rules', operating in parallel to 'similarity' computations, to explain data 'fits' (Nosofsky & Palmeri, 1997). Again, where these come from, and how they operate, is not explained. Similarly, background knowledge or 'theories' have been increasingly appealed to (e.g. Spalding & Murphy, 1996), although how these differ from concepts is not made clear.

13. Another pervasive assumption has been that object categorization per se is the chief function of conceptual representation. Yet we usually need to predict much more about an object than just labelling it as belonging to this or that category. We usually want to predict its utility with respect to some purpose. As Barsalou (1993, p.177) notes, concepts appear to be recruited adaptively, flexibly and creatively, rather than as mere agents of categorization. An object ostensibly belonging to one taxonomy (e.g., Chair) one day can belong to others the next (firewood, something to stand on, something to support another object, etc.). 'None of these ad hoc categories captures the taxonomic essence of their referents or their physical structure. Instead, the primary function of these categories is to capture information that bears on goal achievement.' Thus, Billman (1996) concludes that all recent models - 'rule', 'exemplar', 'connectionist' and 'schema' (prototype) models - designed to account for categorization fail to explain 'the pervasiveness of concepts throughout cognition and the flexibility with which any individual concept can be used across many cognitive processes'. Similar points were made by Lakoff (1987).

14. Finally, human concepts of objects appear to be embedded in the networks of social relationships in which they are used and operated upon. As Scribner (1997, p.268) noted, 'In acting with objects the child is not merely learning the physical properties of things but mastering the social modes of acting with those things. These socially evolved modes of action are not inscribed in the objects themselves and cannot be discovered independently by the child from their physical properties - they must be learned through a socially-mediated process.'

15. In sum, models based on features as computational primitives still lack ecological grounding, cerebral credibility, parsimony and functional comprehensiveness. They thus offer only limited, piecemeal answers to major questions about object processing, about higher cerebral functions, and even why humans have such big brains.

III. HIGHER ORDER INVARIANCES AND STRUCTURES.

16. Coexisting with models based on artificial concepts have been several strands of thought proposing a role for 'higher order' factors and computations in object perception and conception. In theories of object perception, it is well known how Gibson (1986) posited a role for 'higher order' invariances, such as ratios and proportions in the optic array which specify object shape under transformation, or spatial layout under altered points of view. Shepard (1984) indicated the importance of 'higher order variables' in dealing with dynamic experience of objects, including 'general principles of kinematic geometry' and 'constraints governing transformations in the world' (p.439). Researchers working with point-light stimuli (see further below) have argued the need for 'higher order neural mechanisms' for integrating and reconstructing a percept from degenerate sensory information (Lappin, 1995, p.360). Other investigators have stressed 'the perceptual indeterminacy of stimuli', arguing strongly for a holistic over an 'elementaristic' (i.e., featural) view even in the earliest stages of form detection (Uttal, 1988).

17. At the 'higher' conceptual level, various kinds of higher-order 'structure', and structure-preserving transformations, have been suggested as additions or alternatives to feature lists and similarity computations. Shepard (1975) has suggested that mental representations combine abstract, 'second-order isomorphisms', in which transformations of an object (e.g., rotation, folding) are preserved, rather than direct 'first order' copies of surface appearance. Uttal (1988, p.221), who argues that even simple geometric forms have more complexity than we have commonly realised, points to compelling evidence that 'people recognize forms not because of the nature of the parts but, rather, because of some attribute of arrangement of the parts'. Likewise, in rejecting a pure additive model of feature alignment in categorization (and with particular reference to analogical reasoning), Medin et al (1993) argue that 'relational structures crucially determine the process of setting up correspondences between entities ... critical for determining similarity' (p.257).

18. Even a cursory consideration of 'dynamic realism', as mentioned above, reveals the need for such factors, though where and how they operate is another matter. This uncertainty has produced the irony that, whereas 'representation' psychologists have tended to work with simple, static features, of dubious reality, the ecological theorists, who have most stressed the dynamically flowing nature of visual experience, have rejected all appeals to inferential processes and internal images or representations (Gibson 1986). It seems that Gibson was opposed to internal images and representations because they always imply the need for a homunculus to analyse them (and, indeed, an infinite regress of such homunculi - cf. Shepard, 1984). On the other hand, Shepard points to the abundant evidence for such images and representations: the frequent construction of object-images from highly degenerate input; acts of creative imagination (as in science itself); dreams; and the role of brain lesions which disrupt perception. He concludes that, in neglecting this evidence, 'Gibson seems to have given up too much' (1984, p.420).

19. What is clear, then, is that 'higher order' factors in perception and conception need further specification and/or mutual reconciliation. The ecological psychologists are criticised for their vagueness about the source and constitution of invariants (e.g., Marr, 1982). At the same time, many others lament the absence of a holistic theory of form perception and conception, or even the terms in which such a theory may be couched (e.g., Uttal, 1980). Accordingly, Medin et al (1993) conclude that what current models lack 'is a well specified mechanism for feature construal and context- and comparison-specific feature construction' (p.273). Finally, Braddon (1986) stresses that higher-order structures, or second-order isomorphisms, are not, any more than features, just 'out there', waiting to be processed by a passive system. Rather, they are also created by the system's actions: 'In short, the second-order isomorphism perspective fails to consider that perceivers are not only observers and representers of object transformations but also actors who generate or induce those transformations' (Braddon, 1986, pp.135-136; see also the view of Scribner, quoted above).

IV. WHAT THE BRAIN AND COGNITION ARE FOR.

20. Perhaps another current irony is that, whereas psychologists are increasingly looking to the neurosciences for clarification of these matters, the brain sciences have reached a stage in which they desperately need psychologically enlightened models of higher-order structures and principles governing perception and conception, i.e., of what exactly is processed, and how. Marr repeatedly stressed that 'What higher nervous systems do is determined by the information-processing problems that they must solve', and observed that we will have little success in understanding these without studying the underlying structure of those problems (1982, p.349). He insisted that programs and models which merely 'mimic' some small aspect of human performance do not further our general understanding. Uttal also complained about 'heuristic physical analogies and descriptive models, not too far removed in concept or meaning from the data themselves' (1981, p.986).

21. Although frequently glossed over, this matter is not trivial. Cariani (1995, p.209) argues that 'The human brain is by far the most capable, the most versatile, and the most complex information-processing system known to science'; but, '(d)espite great advances, the neurosciences are still far from understanding the nature of the "neural code" underlying the detailed workings of the brain, i.e. exactly which information-processing operations are involved'. Jackendoff argued that the crucial issue in cognitive neuroscience 'has to do, not with the elementary operation of neurons, or with achieving computational power per se, but with the following question: Over what kinds of information are computations in the brain carried out?' As he also observes, 'there are large subcultures of the cognitive psychology and artificial intelligence communities where this question is not considered a central concern', and this neglect 'does not get us any closer to discovering what the brain actually computes'. (1989, p.172). In view of this it is, perhaps, not surprising that Pinker, strongly advocating a computational-evolutionary view of intelligence, reminds us that 'the apparent evolutionary uselessness of human intelligence is a central problem of psychology, biology and the scientific world view' (1997, p.300, emphasis added).

V. ENVIRONMENTAL AND BIOLOGICAL HYPERSTRUCTURES.

22. The focus on static feature- and object-images in most models of representation, and a lack of purchase on the functional raison d'etre of evolved brain and cognition, are, I suspect, related aspects of the same problem: i.e. disregarding the very changeability of the world which cognitive systems evolved to deal with in the first place. I suggest this neglect may, in turn, reflect the narrow 'adaptationist' paradigm prevailing in both biology and psychology over most of this century. Anderson (1991) notes how many theories have been guided by an underlying notion of adaptation. Shepard (1981, p.307; cf. Anderson 1991) states, 'We cannot gain a full understanding by simply guessing at the form and level of organizational principles without recognising their role in the adaptation of the species to its environment.' And Pinker observes that 'Our mental programs work as well as they do because they were shaped by natural selection to allow our ancestors to master rocks, tools, plants, animals, and each other, ultimately in the service of survival and reproduction' (1997, p.36).

23. It has been a common response to the failure to specify complex psychological structures to argue that they have been put there as more or less fixed computational routines (and thus somehow guaranteed) by genetic selection. Shepard argued that the 'higher order' variables of object perception have been 'picked up genetically over an enormous history of evolutionary internalization' (1984, p.431). Pinker (1997) holds a similar view: 'Our physical organs owe their complex design to the information in the human genome, and so, I believe, do our mental organs' (p.31). In recent years, a voluminous literature on these ideas in 'evolutionary psychology' has developed (ably summarised in Pinker 1997; see also further comments below).

24. The problem with this view is that there is more than one kind of adaptation. The simplest adaptationist principle assumes a more or less direct and durable correspondence between internal and external structure. Thus, Plotkin (1994) argues that our cognitive functions are based on a 'relationship of fit ... in-formed by the environment' (p. xv, hyphen in original). This kind of internal-external correspondence has been very strongly expressed in recent 'modular' theories: 'Natural selection shapes domain-specific mechanisms so that their structure meshes with the evolutionarily-stable features of their particular problem-domains' (Cosmides & Tooby, 1994, p.96). In this view of adaptation, there is some direct correspondence between the structure of DNA selected, and the structure of some aspects of the environment.

25. There is, I suspect, a huge paradox in such views which, again, stems from neglect of the dynamic nature of experience in complex, object-laden, environments. Such gene-based adaptations are only possible when a DNA-environment correlation persists, and remains predictable, over many generations. With more complex environments, such direct correlations are rare, so that predictability has to be found in 'deeper' information sources. Perhaps the simplest example is that of seasonal change. As indicated in Figure 1, it is only when embedded in time (as signalled by photoperiodicity) that a covariation emerges between food and location. That is, prediction can be found in complex environments, but only when the correlation or covariation can be 'dug out' from deeper levels. This deeper spatiotemporal structure cannot be encoded in the linear structure of DNA; rather, adaptation to it requires new levels of adaptive regulation; i.e., physiological, epigenetic, and genomic regulations, as well as the genetic products of natural selection (Richardson, 1998).

ftp://coglit.psy.soton.ac.uk/pub/psycoloquy/1999.volume.10/Pictures/rich1.html

    Figure 1.  Predictability between two variables (food and place) is
    absent, unless its nestedness in another variable (season) is
    registered. Predictability arises from this interactive covariation
    relation (X's indicate co-occurrence of the variable values).
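
    As a minimal illustrative sketch (in Python, not part of the original
    article), the following reproduces the logic of Figure 1: the numbers
    of sightings and the particular crossover of foods across places and
    seasons are invented, but they show how food-place predictability is
    absent until the nesting variable (season) is registered.

        from collections import Counter

        # Hypothetical sightings of (season, place, food). The crossover
        # pattern (food A at place 1 in summer but at place 2 in winter,
        # and vice versa for food B) is an assumption made to mirror the
        # X's in Figure 1.
        sightings = (
            [('summer', 'place1', 'foodA')] * 10 +
            [('summer', 'place2', 'foodB')] * 10 +
            [('winter', 'place1', 'foodB')] * 10 +
            [('winter', 'place2', 'foodA')] * 10
        )

        def p_food_given(place, season=None):
            # Relative frequency of each food at a place, optionally
            # conditioned on season.
            pool = [f for s, p, f in sightings
                    if p == place and (season is None or s == season)]
            return {food: n / len(pool) for food, n in Counter(pool).items()}

        print(p_food_given('place1'))            # {'foodA': 0.5, 'foodB': 0.5}
        print(p_food_given('place1', 'summer'))  # {'foodA': 1.0}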

26. In my view, cognitive regulations emerged from just such needs to find predictability in deeper covariation structures (because these relations are interactive, at potentially many depths, and possibly non-linear, I will use the term 'covariation' to avoid confusion with the popular notion of linear, bivariate correlation). As simple niches became filled in the course of evolution, organisms were obliged to move into more complex ones, defined precisely by the fact that predictability had to be discovered at increasingly greater depths. This must increasingly have been the case as the world of behaving, moving, acting animals became one of objects - trees, rocks, seeds, plants, and other animals - experienced in ever-changeable form. The only information-for-predictability in such transient surface form is the deeper covariation structure arising from joint movements of corners, edges, and other parts. This structure, at least, remains invariant and characteristic for any given object or object class. Moreover - unlike the case with seasonal and other durable structures - such informational structure has to be induced de novo throughout life, often on a continuously updating basis. This is why conceptual systems came to be needed very early in the course of the evolution of vertebrates. This need became particularly acute for humans, themselves embedded in complex social relations, who turned such a system to the manufacture of objects, with preconceived affordances, in enormous quantities, and in constantly novel designs and uses.

27. This kind of analysis may give us further clues regarding the nature of the information handled by the cognitive system, and by the parts of the brain that support it. Consider the prediction of the location of prey from sound, when the precise combination of values of acoustic variables is almost always novel. In studies on the barn owl, Knudsen (1985) notes the complexity of the relationship between auditory cues and the location of the prey, involving more than a simple cue-response correlation function. For example, intensity difference at the two ears is an important cue to location in relation to the head, but is dependent upon the frequency of the sound (that is, the covariation between intensity difference and location is conditioned by frequency). Moreover, this interaction itself interacts with the developmental age of the bird, and the growing distance between the ears. In other words, location of prey requires more than coding of direct cues; a cognitive system is necessary to take account of the 'deeper' covariations through which those cues, as 'constantly novel' combinations, have to be interpreted. Similar observations can be made regarding the identification of objects through the visual system.

28. The seemingly simple task of seeing, catching, grasping and handling objects (by mouth or hand) requires a precision of predictability that is, likewise, only possible through 'deep' covariation structures. Such action requires a preparatory tension in a novel array of muscle fibres which must exactly match the novel array of forces in the object, determined by shape, orientation, speed and direction of motion, size and weight, and substance. This precision of action is possible because relations between variables are literally informed by their interaction with other variables or other covariations, possibly at several depths. For example, the predicted distance between two features of a typical object will be conditioned by the perceived distance of the object as a whole; or the size-weight relation will be conditioned by the perception of the substance (e.g., fruit versus rock). Such deeper (i.e., conditioned) covariations are crucial when an important feature is hidden from the current viewpoint, or when we are uncertain about the weight of an object we have to catch.

29. As already mentioned, such 'deep' covariation structures are different from simple bivariate correlations, and their interrelations are different from ordinary inclusion hierarchies (Dawkins, 1976; Plotkin, 1994). They have been described as 'hyperstructures', because the relationship between levels is an interactive, rather than a merely additive, one (Baas, 1994). Generally, if interactions between levels of a hyperstructure have been registered in a system, then the system can predict missing variable values, creatively, even when 'given' values of related variables are quite novel. For example, the 'most likely' values of one or more occluded features of an object may be predicted from the values of those currently sensed, even though the actual combination of 'given' values may never have been experienced before. In this way the owl can predict locations of prey at angles and distances never before experienced.

30. It seems reasonable to suggest that, if such covariation structure is so ubiquitous in experience, and can be highly providential for predictability, especially in the world of objects, then it might form the elusive invariances necessary for object-image creation and conceptual functions. As already suggested, these might correspond with the 'higher order' invariances of the ecological psychologist and with Shepard's (1984, p.423) second-order isomorphisms. They might also explain the evolutionary mushrooming of the cerebral cortex. As animals evolved into increasingly complex environments, their cognitive systems functioned as 'hyperstructure-detectors', working at increasingly deeper levels of information structure which could themselves change throughout life, often as a result of the organisms' own actions. Such a system was, of course, influenced by the evolution of a social-cooperative mode of life in humans, leading to a further tripling in relative brain size and attendant cognitive capacity (Donald 1991).

31. It is also worth noting in passing how such a system surpasses the passivity of simple adaptations. It is a striking aspect of both computational (e.g., Pinker, 1997) and connectionist (e.g., Elman et al., 1997) models of cognitive functions that they can 'read' the world, or have it read to them, but they cannot change it. Animals in possession of a cognitive system and hyperstructural representations are able not only to represent the world they live in, but also to anticipate needs before they arise, and make changes accordingly (Piaget 1970/1988). They move from the status of a 'bundle' of passive adaptations to active movers in the world. The culmination of these developments in a providential socio-cognitive system is that humans largely adapt the world to themselves, rather than vice versa (Richardson 1998).

32. In sum, changeability in complex environments means a real dearth of static feature- and object-images. Instead, information-for-predictability has to be trawled at deeper levels than the direct structural correspondence assumed by the simple adaptationist. Although such 'deep' structure may itself be sufficiently persistent for it to be internalised at the physiological level (as with seasonal adaptations), this is not the case for the 'constantly novel' nature of the experience of objects. This is why a conceptual system became necessary. I will now attempt to sketch a model of how such a system deals with objects in experience.

VI. INDUCTION AND USE OF COGNITIVE HYPERSTRUCTURES.

33. Because our experience of an object is nearly always shifting and novel, the cognitive system cannot work with preformed object images or features being picked up by 'feature detectors', and arranged in linear similarity spaces. I suggest that the only consistent information about an object in experience is the set of complex covariations inherent in the spatiotemporal transformation of its parts in an infinite diversity of orientations and distances. I will now try to propose a model of how these enter into visual object processing and conceptual functions.

VII. THE CREATION OF A FEATURE-IMAGE.

34. Experience starts as an object traverses the visual field (in active or passive motion, as we move around it, handle it, etc.), and a mosaic of changing light energy falls on the retina. This changing mosaic of light is not random, however. Many aspects of the 'light spots' on the retina - their coordinates in two-dimensional space, their intensities, and the directions and speeds of their change - will covary. Moreover, it is likely that these covariations will be 'deep', in the sense described above, such that some will be conditioned by the values of other variables, potentially at several different levels, so enhancing overall predictability.

35. Figure 2 shows perhaps the simplest 'feature' - a line - as an array of points of light falling on the retina, only two of which are shown (much enlarged for present purposes). This particular sequence is modelled from a film of the movements of the edge of a chair as viewed by someone passing around/towards it. As can be seen, there is (superficially, at least) no overt feature at all, only a spatiotemporally changing array of light points moving against a background. Yet subjects easily see a 'line' when such a sequence is presented to them.

ftp://coglit.psy.soton.ac.uk/pub/psycoloquy/1999.volume.10/Pictures/rich2.html

    Figure 2.  Changes in pairs of 'light points' emanating from an
    edge of a typical object in natural experience.

36. One way psychologists have tried to show that such recognition may be due to the 'depth' of covariation structure in experiences like this has been to tabulate the coordinates - the x- and y-values on a two-dimensional axis - of each light point from an overlain 20x20 grid (Richardson & Webster 1996). The time-based dynamic of the system is then captured by tabulating the increments of x- and y-values of each point of light from one 'moment' to the next (in experiments such sequences have been arranged on a Hypercard stack or Quicktime movie, so that each transition consists of about a tenth of a second). The 'transition' values for the sequence shown in Figure 2 are presented in Table 1. A quick 'run' on a statistical package shows that there are no simple associations (linear correlations) between any pairs of these transition values. What makes these moving points 'cohere' into a line is thus not a matter of simple correlation, any more than it is one of the activation of feature detectors.

    Transition ax ay bx by
    1          3  2  2  2
    2          1  3  3  2
    3          1  2  1  3
    4          3  3  1  2
    5          2  3  2  2
    6          3  1  3  3
    7          2  2  3  1
    8          1  1  2  3
    9          2  1  1  1

    TABLE 1. Transition values (increments in x- and y-values) of each
    of two moving points a and b in motion from one time-point to the
    next.

37. The fact that the array contains deeper covariations, however, is shown by simple log-linear analyses of the values in Table 1, which reveal three-way and higher-order associations (see Richardson & Webster, 1996, for methods). It becomes clear, for example, that the values of ax, ay, and bx, although bereft of bivariate correlation, display a three-way association (or interaction). Further analyses may reveal still higher-order associations. This suggests informational coherence not suspected from analysis of simple association, coherence which can be used for prediction or expectation (in effect, 'binding' the separate light points into the coherent structure of a line or edge).
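
    As an illustrative sketch of the sort of analysis described in
    paragraphs 36-37 (the exact method is given in Richardson & Webster,
    1996; the code below, in Python, is a crude stand-in of my own
    construction), one can check that the Table 1 values show essentially
    no pairwise correlation, and then probe the three-way ax x ay x bx
    term by fitting the 'no three-factor interaction' log-linear model:

        import numpy as np

        # Transition values from Table 1 (x- and y-increments of points a and b).
        data = np.array([[3, 2, 2, 2], [1, 3, 3, 2], [1, 2, 1, 3],
                         [3, 3, 1, 2], [2, 3, 2, 2], [3, 1, 3, 3],
                         [2, 2, 3, 1], [1, 1, 2, 3], [2, 1, 1, 1]])

        # 1. No simple bivariate structure: pairwise correlations are all weak.
        print(np.round(np.corrcoef(data, rowvar=False), 2))

        # 2. Three-way association among ax, ay, bx: build the 3x3x3 table of
        #    counts and fit the model with all two-way margins but no
        #    three-way term, using iterative proportional fitting (IPF).
        #    With only nine transitions this is purely illustrative.
        counts = np.zeros((3, 3, 3))
        for ax, ay, bx, _ in data:
            counts[ax - 1, ay - 1, bx - 1] += 1

        fitted = np.full_like(counts, counts.sum() / counts.size)
        for _ in range(200):                       # IPF sweeps
            for pair in [(0, 1), (0, 2), (1, 2)]:
                summed = tuple(a for a in range(3) if a not in pair)
                obs_m = counts.sum(axis=summed, keepdims=True)
                fit_m = fitted.sum(axis=summed, keepdims=True)
                ratio = np.zeros_like(obs_m)
                np.divide(obs_m, fit_m, out=ratio, where=fit_m > 0)
                fitted *= ratio

        # Likelihood-ratio statistic: large values relative to chi-square
        # with (3-1)^3 = 8 df suggest the three-way term is needed.
        nz = counts > 0
        G2 = 2 * np.sum(counts[nz] * np.log(counts[nz] / fitted[nz]))
        print(f"G^2 for the three-way term = {G2:.2f} on 8 df")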

38. When extended to the dozens of light points in the original complete line, this suggests a very rich source of information for extraction of the invariant structure which is typical of this feature (and distinct from other features). The 'binding' of the independent light points through the deep covariation structure between them helps to create an image under novel orientations, distances, or where points are obscured or missing. Obviously, that structure will often include changes in other variables, such as intensity and colour, not just the changes in translation shown in Table 1. I argue that it is the interaction between that structure in the stimulus, and structures internalized from previous experience, which is the source of the feature- (and, eventually, the object-) image. The complete image will also include the dynamics of the feature (or object) as a whole, such as rotation, 'looming' (dilation), spiralling, and so on. First, however, the covariation structure needs to be internalized and represented. Internalization may occur by, for example, a feedforward of received values to a related network which captures, or 'attunes' to, the hyperstructural relations. Reciprocal signals from this represented structure can then activate most-likely input units to create a more complete feature-image (Figure 3). An image of a line from an incomplete row of light points can thus be created, even though experienced from novel, and constantly changing, orientations and distances. Naturally, other simple features, such as curves or corners, will display other hyperstructural invariances which give them their specific recognisable form.

ftp://coglit.psy.soton.ac.uk/pub/psycoloquy/1999.volume.10/Pictures/rich3.html

    Figure 3.  Suggested process of feature-image creation based on
    activation of represented covariation hyperstructure - a pair of
    light spots from a line or edge moving across a receptor 'bed'
    feeds into a network which registers the hyperstructural
    information therein.

39. Although it is convenient to speak of representations as quasi-figural entities, I suggest it is more accurate to think in terms of 'attunements' of units or networks sensitive to such hyperstructured covariation (there is abundant evidence for such networks in the CNS, as I shall describe below). I stress that the processes of hyperstructure internalization, and current image construction (Figure 3) are reciprocal: the network into which the primary receptor information feeds quickly 'attunes' to the structure of input, but is then used as a kind of 'grammar' to interpret subsequent input, as a result of which it may become further updated, and so on. In the case of a simple line or edge, of course - as with many other familiar features - the recognition is likely to be due to a relatively stable hyperstructural representation settled long ago.
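
    The article does not commit to a particular mechanism for this
    attune-and-complete cycle. As one hedged analogue of my own choosing
    (in Python), a Hopfield-style associative memory stores the pairwise
    covariation of unit activity across experienced patterns and then
    feeds most-likely values back to occluded input units:

        import numpy as np

        rng = np.random.default_rng(0)

        # Binary (+1/-1) patterns over a small 'receptor bed'; each pattern is
        # an arbitrary stand-in for the units a familiar feature activates.
        n_units, n_patterns = 64, 3
        patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

        # 'Attunement': Hebbian weights register the covariation of unit
        # activity across the experienced patterns.
        W = (patterns.T @ patterns) / n_patterns
        np.fill_diagonal(W, 0)

        # Occlude/corrupt half of one pattern, then let reciprocal signals from
        # the stored structure fill in likely values for the degraded units.
        probe = patterns[0].copy()
        occluded = rng.choice(n_units, size=n_units // 2, replace=False)
        probe[occluded] = rng.choice([-1, 1], size=occluded.size)

        state = probe.copy()
        for _ in range(10):                 # a few synchronous update sweeps
            state = np.sign(W @ state)
            state[state == 0] = 1

        # With only a few stored patterns, recall is typically near-perfect.
        print('units matching the original:', int(np.sum(state == patterns[0])))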

40. The creation of a feature image is thus a consequence of an interaction between the deep covariations in current input, and those internalized from previous experience. In other words, conceptual processes - in the sense of using abstract information to go beyond the information given - are involved almost at the start of visual processing. This would seem to support the view that even early vision is based on 'very abstract constructions' (Marr 1982). It also concurs with Shepard's note about the 'abstractness' of the 'internalized constraints'. Of course, as just mentioned, there will be dozens or hundreds of such light points in a typical feature in real experience, all furnishing an abundance of mutual information, of potentially much greater depth, and thus a rich resource for predictability. Small wonder that, as Baldwin (1901, quoted by Butterworth, 1993, p.184) put it, natural experience is 'a voracious datum of consciousness'.

VIII. THE CREATION OF OBJECT- AND SCHEMA-IMAGES.

41. The account just given, of course, describes only the first, most primitive, level of image creation among primary sensory (visual) parameters. It should also be clear that the properties of such images, arising, as they do, from real objects, will themselves tend to covary in more or less complex, but characteristic, ways for a given object. The consequence of this is that (again, in a system sensitive to complex covariations) second-order hyperstructures can be registered, reflecting more complex features or feature-combinations (Figure 4). Again, these will reflect a binding of variable elements into an invariant structure by the nested covariations among them. These second-order constructions can, in turn, give rise to nested covariations which create tertiary-order hyperstructures. In a system which is 'looking' for covariations, therefore, the original hyperstructure becomes nested in others emergent from it. This evolutionary process continues as long as such structure continues to emerge - for example, up to the level of object and event-schema hyperstructures (in humans, complex social regulations).

ftp://coglit.psy.soton.ac.uk/pub/psycoloquy/1999.volume.10/Pictures/rich4.html

    Figure 4.  Nested representational hyperstructures and formation
    of current feature- and object-images.

42. The whole system of internalized hyperstructures functions at any of these levels in a way similar to that already described for the level of primary features. Nearly all experience of objects will consist of 'samples' of covariation patterns at one or more of these levels. For example, the current sample may contain characteristic covariations at the level of a social-schema (e.g., a game of tennis); at the level of an object (a tennis racket in typical swing), or a specific feature (an edge of a racket sticking out of a cupboard as we walk past). These activate the representational hyperstructure at the corresponding levels, and their implications are rapidly 'filled-out' by a process of spreading activation within and across levels, as indicated in Figure 5 (which also indicates a role for motivational/attentional needs). Recognition/classification, naming, relevant motor action, or other predictions can be made from this 'super-image' generated on-line.

ftp://coglit.psy.soton.ac.uk/pub/psycoloquy/1999.volume.10/Pictures/rich5.html

    Figure 5. General process of on-line construction of a 'super-
    image' from hyperstructures in experienced inputs.
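
    As a schematic illustration of this filling-out by spreading
    activation (the nodes, links and weights below are invented; the
    article itself specifies no particular algorithm), a partial 'sample'
    at the feature level can be seen to activate object- and schema-level
    hyperstructures over a few cycles:

        # Nodes at feature, object and schema levels, with illustrative
        # weighted links between them.
        links = {
            'edge-of-racket':  [('tennis-racket', 0.9)],
            'strings-pattern': [('tennis-racket', 0.8)],
            'tennis-racket':   [('tennis-game', 0.7), ('swing-action', 0.6)],
            'tennis-game':     [('tennis-racket', 0.3), ('swing-action', 0.4)],
            'swing-action':    [],
        }

        activation = {node: 0.0 for node in links}
        activation['edge-of-racket'] = 1.0     # partial sample from current input
        decay = 0.5

        for _ in range(5):                     # a few rounds of spreading
            new = {node: decay * act for node, act in activation.items()}
            for node, act in activation.items():
                for target, weight in links[node]:
                    new[target] += weight * act
            activation = new

        for node, act in sorted(activation.items(), key=lambda kv: -kv[1]):
            print(f'{node:15s} {act:.2f}')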

43. This is not the limit of the scope of hyperstructural representation. So far, this picture describes a 'vertical' nest of hyperstructures as based on passively-experienced visual parameters. These hyperstructures can also be nested horizontally in various ways. Different objects from the same category, whilst having their individual hyperstructural representations, will also share much covariation structure, and this shared hyperstructure will form the concept of that category of objects. In addition, nearly all visual experience of objects is associated with motor action, even if it is just characteristic eye-movements, again characteristic for that object. More typically, the motor activity will include direct actions on objects, themselves entailing kinaesthetic/somatosensory hyperstructures with which the visual ones will mutually covary, affording still more complex sensory-motor hyperstructures. These may also involve other sensory modes, and, most importantly, affective variables. Finally, object hyperstructures will be nested with others horizontally in 'event' and 'social' hyperstructures to form event schemas. The result is really a system of hypernetworks.

44. From the elicitation and construction of primary features to complex knowledge and thought, then, a single self-organizing principle operates. Our complex knowledge of the world consists of nested hyperstructures, or hypernetworks, as just described. Thinking is the spreading activation, or 'search', through these for an optimum satisfaction of current activations at the required level. For example, we may have established a hyperstructure of 'apples' that permits us to predict the sweetness of a given exemplar from a novel combination of its other attributes, such as size and colour. Or we can make predictions about the actions and intentions of other people from hyperstructural representations at the level of social schemas. New knowledge (i.e., hyperstructures) can also be created 'internally' from the emergent covariations, correspondences or analogies between those induced directly from experience (or what Piaget called 'reflective abstraction'). All this can happen without the need for hypothetical computational engines, which may be fine for the idealised world of machine intelligence, but not the real world of complex animals.

IX. INTEGRATION WITH OTHER VIEWS.

45. Before looking at evidence for hyperstructural representation and processing in the brain, some further comparison with other views about object processing at the psychological level may be worthwhile. As I hope to show, the model actually echoes a number of these, and implies a much broader definition of conceptual functions than that bound up in categorization studies. But then, as Marr (1982, p.358) declared, 'The chunks of reasoning, language, memory and perception ought to be larger than most recent theories in psychology have allowed', and an account of such functions 'must include the simultaneous computation of several different descriptions of it that capture diverse aspects of the use, purpose, or circumstances of the event or object'. This is a concern also expressed by Barsalou (e.g., 1993). Note also that the term 'hyperstructural representation' should not imply quasi-figural constructions, as in the standard model of concepts. Rather, it implies an 'attunement' in networks which 'resonate' to current experiences in a way not dissimilar to the suggestions of both Gibson (1979) and Shepard (1984). I have simply tried to explicate the content of the attunement and what it resonates to.

46. Uttal suggested an autocorrelation description of form recognition that offered good fits to psychophysical data from recognition of dot patterns in single brief presentations of combined signal-plus-noise arrays. Recognition occurs, he claimed, 'on the basis of certain organizational characteristics' in the signal but not in the noise. The autocorrelation process is implemented by the system first making multiple representations of the signal across simple time delays or spatial shifts. By comparing these with the original, and integrating the unit 'shifts', the periodic or organized aspects are emphasised, and thus 'extracted' from the disorganized aspects. The model is consistent with what is known neurologically, requiring no hypothetical computational circuits operating on 'given' features, but it is confined to the 'early detection process', and 'does not pertain to higher cognitive levels involving nonisomorphic and symbolic encoding' (1975, p.136). Although related in some ways to this, the model I am attempting to construct is more general, and treats perceptual and conceptual processes as continuous, integrated functions.
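
    A minimal sketch of this autocorrelation idea (in Python; the
    one-dimensional stimulus and its dot spacing are invented, and this is
    not Uttal's own implementation) shows organized structure standing out
    from random dots when the pattern is compared with shifted copies of
    itself:

        import numpy as np

        rng = np.random.default_rng(1)

        # A periodic 'signal' (dots every 7 positions) buried in random
        # 'noise' dots.
        n = 200
        x = np.zeros(n)
        x[::7] = 1.0                                    # organized component
        x[rng.choice(n, size=25, replace=False)] = 1.0  # disorganized component

        # Compare the pattern with shifted copies of itself and integrate:
        # regular structure produces peaks at multiples of the dot spacing,
        # while the random dots contribute only a roughly flat background.
        xc = x - x.mean()
        ac = np.correlate(xc, xc, mode='full')[n - 1:]
        strongest = np.argsort(ac[1:])[::-1][:5] + 1    # strongest non-zero lags
        print('strongest lags:', sorted(strongest.tolist()))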

47. Other authors have suggested that procedures like entropy minimization and covariance diagonalization may account for form-recognition. But, as Uttal notes, these ideas have been invariably implemented in ways designed to reduce form to independent features and regularly fail to deal with overall structure. The main problem, he argues, is that we do not yet have 'a truly modern holistic theory of recognition based on the global rather than the local attributes' (1988, p.221). My aim in this paper has been to suggest what the 'global attributes' may consist of.

48. Cariani has detailed various possible kinds of temporal coding, one of which creates an 'autocorrelation-like' representation of periodicities in the input. He points out that temporal codes can effectively increase the dimensionality of the 'quality space' being encoded, thus setting the stage for 'system-theoretic definitions of emergence' (1995, p.222). In addition, he shows how the outputs of one autocorrelation process will themselves have temporal structure, so that increasingly global structures can arise iteratively from more local ones. My own proposals obviously overlap (at least potentially) with such schemes, except that I have tried to emphasise the role of nested covariation (correlation) structure, and how higher cognitive processes may arise from it.

49. The notion of represented hyperstructures may also help reconcile some opposing views in the category learning literature. It is consistent with the idea of prototypicality, because there will be a central tendency (the ideal set of nested covariations), albeit without the abstraction and storage of an ideal prototype. All experienced objects or events (exemplars in the category learning literature) are also stored, at least in the sense of being individually retrievable or reinstated. This is because inputting a few characteristic variable values, especially if they include spatiotemporal ones, can activate a super-image of a whole object or event on the basis of the covariation structure they originally helped to induce. The internalized covariation hyperstructure may also explain why members of a category appear to be more coherent after learning (Livingstone, Andrews & Harnad 1998).

50. In a broader perspective, the idea is at least partly consistent with a number of other general observations, theories and views about both language and cognition. As well as the consistency with both ecological and constructivist views, the idea has obvious compatibility with the principles of Gestalt psychology whereby the 'given' is restructured according to relations between the parts. It may also be possible to relate it to Piaget's notion of coordination (and 'coordinations among coordinations') as constituting logicomathematical structures (e.g., Piaget, 1970/1988). It may offer an account of the difference between 'implicit' and 'explicit' memory: i.e., the relationship between knowledge in nested, hyperstructural form, and its counterpart in the more temporo-linear declarative form (such as speech). As Reber (1989, p.219) notes, implicit learning produces a knowledge base that is 'abstract and representative of the structure of the environment', and is induced from 'the complex covariations among events that characterize the environment'. This could also echo Vygotsky's description of the relationship between thought and speech, in which thought 'has its own structure', contrasting with the linear layout of speech, and thus explain why 'the transition from it to speech is no easy matter' (1962, p.149). Finally, in a hyperstructural representation, syntax and semantics are one, obviating the need for separate, independent computations or modules of dubious provenance in real brains (see Fitch, Miller & Tallal, 1997, for review). This would seem to be broadly consistent with Pustejovsky's (1995) view of lexical comprehension over variable contexts.

51. It seems worth emphasising that such a view of predictability (about objects as about the world in general), emergent from 'deeper' interactions among factors, eschews the need either for executive homunculi or for auxiliary computational processes operating on a separate array of features, as current models of concepts seem to assume. Indeed, the notion of conceptual hyperstructures reflects the more general shift in scientific theorising in recent years that has become known as 'dynamic systems theory'. In the last few years, a wide range of natural systems have been described in such 'system' terms (for review see Thelen & Smith 1994). Although organised, and often 'developing', such systems have no controlling 'codes', programs, blueprints, or schemata, only internal interactions, and their reactivity to external conditions.

52. It may also be the case that the idea of representational hyperstructures will fill a void in the connectionist literature. Remarkably, in the drive to get artificial networks to 'work' there has been little analysis of the nature of the 'knowledge' that results in such models (Hanson & Burr, 1990), nor of the process by which it actually happens - beyond, that is, changes in anonymous connection weights. It can be argued that successful connectionist nets 'work' by surreptitiously discovering just such covariation hyperstructures in inputs. The resultant weight changes establish a covariation hyperstructure 'attuned' to those in input, as reinforced by the particular adjustment algorithm. For example, the famous XOR problem, often presented as an achievement of networks with hidden units, can be shown to be solved through internalization of the second-order interaction (i.e., deep covariation) which, on closer inspection, is inherent in the problem (Richardson, 1999, in press). In other words, all the prior manipulations and feedback algorithms required to get nets to 'work' are simply finding ways of creating a nested covariation structure from inputs which maximises predictability. This is most explicitly achieved in algorithms which maximize mutual information (e.g. Pearlmutter & Hinton, 1986), the latter being a measure of 'covariation complexity' (see Bozdogan, 1993).
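
    A small worked example (in Python; the -1/+1 coding is my own choice,
    made so that the interaction term is a simple product) makes the point
    about XOR concrete - the output carries no first-order covariation with
    either input, but is fully carried by their second-order interaction:

        import numpy as np

        # The four XOR patterns, inputs coded -1/+1, output +1 when they differ.
        x1 = np.array([ 1,  1, -1, -1])
        x2 = np.array([ 1, -1,  1, -1])
        y  = np.array([-1,  1,  1, -1])

        corr = lambda a, b: np.corrcoef(a, b)[0, 1]
        print('corr(y, x1)      =', corr(y, x1))       # 0: no bivariate structure
        print('corr(y, x2)      =', corr(y, x2))       # 0: no bivariate structure
        print('corr(y, x1 * x2) =', corr(y, x1 * x2))  # -1: the interaction
                                                       # carries all the information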

53. Connectionist hyperstructures are, of course, rather impoverished compared to real cognitive hyperstructures because, compelled to passively submit to whatever is 'keyed in', they lack the nesting in 'action schemas' or the 'hypernetworks' of real life representations (including, in humans, culturally evolved social schemas). However, this suggests that we should really be looking for understanding of knowledge and cognition, not in anonymous weight changes in networks, but in the 'deep' covariation structure of human action and experience itself.

X. CEREBRAL HYPERSTRUCTURES.

54. There are good reasons for arguing that the brain sets up and helps utilise such hyperstructures or hypernetworks: there is abundant evidence that neurons in the cerebral cortex, and elsewhere in the brain, are sensitive to complex covariations; that the covariation patterns to which a neuron is attuned at any one time constitute its functional properties; and that such attunements may change with further experience on a lifelong basis (Weinberger 1995). Here, however, I can do little more than offer a glimpse of that evidence.

55. MacKay postulated the role of 'covariation units' on the basis of neurophysiological and behavioural evidence. He argued that what the system is registering in the maintenance of complex representations of the world are not features or other 'coded' symbols, but covariations: indeed, much of sensory behaviour (hearing while looking; moving eyes and head to sample a range of views; tactile exploration while seeing) has the purpose of collecting 'sample' covariations, rather than preformed images. Even very simple '(e)xploratory probing of the environment, as by a moving fingertip or a moving retina ... gives rise to covariation between sensory signals and the exploratory motor action'. Moreover, because covariations - especially those between different modalities - will lie at different depths, it is clear that those 'of importance for the organization of action must form a hierarchical family with many levels' (MacKay 1986, p.368).

56. Neurophysiological research long ago started to present evidence of 'higher' factors in sensory processing. Even earlier research on the retina suggested that 'the eye was not so much a detector of light as a detector of patterns created by those objects and events in the environment that were important for the animal' (Barlow, 1985, p.125). This suggested that 'exploitation of the redundancy in the input that results from the complex structure of associations it contains must play an important part in the process' (Barlow, 1985, p.133). It is now known that retinal ganglion cells exhibit temporally correlated discharges (Mastronarde, 1989; cf. Cariani, 1995).

57. Much has been learned recently through simultaneous recording from multiple neuron sites (Deadwyler & Hampson 1997). Correlated activity between groups of neurons widely dispersed in the cortex is now well established (Singer & Gray, 1995). Zeki (1993) reviews evidence of cells in the visual cortex which are sensitive not only to specific inputs, but to other cell activity and its effects on those inputs. As Gilbert (1996) points out, a given input to a cell can have a very different effect depending on the other inputs concurrently active, and the combined effect of two inputs can be much greater than their sum. The recording of correlated activity in groups of neurons that have been active in a learning experience strongly indicates increased 'cooperativity' associated with changes in their representational fields (Dinse, Recanzone & Merzenich, 1993). Cariani (1995) has considered a range of plausible neural mechanisms for auto- and cross-correlation of signals in cortical and subcortical areas.
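The kind of analysis involved can be illustrated with a toy simulation of my own (a schematic sketch, not a description of the cited studies): when two simulated spike trains share a common input, their correlated firing shows up as a peak at zero lag in a simple cross-correlogram.

    # Illustrative sketch: correlated firing driven by a shared input.
    import numpy as np

    rng = np.random.default_rng(0)
    n_bins = 10000                                    # e.g. 1 ms bins, 10 s of 'recording'
    shared = rng.random(n_bins) < 0.02                # shared driving events
    train_a = shared | (rng.random(n_bins) < 0.01)    # neuron A: shared input plus own firing
    train_b = shared | (rng.random(n_bins) < 0.01)    # neuron B: shared input plus own firing

    def cross_correlogram(a, b, max_lag=20):
        """Count of coincident spikes in b relative to a, at each lag (in bins)."""
        return {lag: int(np.sum(a[max_lag:-max_lag] &
                                np.roll(b, lag)[max_lag:-max_lag]))
                for lag in range(-max_lag, max_lag + 1)}

    ccg = cross_correlogram(train_a, train_b)
    print(ccg[0], ccg[10])   # coincidences at zero lag far exceed those at a 10-bin lag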

58. All this is rapidly dispelling the classical 'feature-detector' doctrine of cells with increasingly complex, but fixed, receptive fields channelling features into an object image. A new picture of ensembles of neurons with modifiable, context-dependent receptive fields is emerging. For example, in one study of 'orientation-selective' cells in the cat's visual cortex, it was shown that 'the analysis of local orientation and curvature involved interaction between disparate points in the visual field, and this interaction can actually produce an alteration in the functional specificity of cells' (Gilbert & Wiesel, 1990, p.1699). Moreover, 'This "dynamic filtering" imparts to early stages in the visual pathway a much more powerful processing capability than previously thought, and raises new possibilities as to how the cortex analyses visual information. The effect of these changes in tuning on perception cannot be understood in terms of the properties of a single cell, but instead requires considering the properties of a neuronal ensemble' (Gilbert & Wiesel, 1990, p.1700).

59. As Gilbert explains, 'These findings require a new way of thinking of receptive fields such that, rather than being fixed in specificity and restricted in area, fields must be thought of as adaptive filters, modified by context, experience and expectation, and sensitive to the global characteristics of visual scenes' (1996, p.273). Weinberger notes, in his review of such effects, that this plasticity appears to be a life-long capacity, and that the 'dynamic regulation' implied 'constitutes a severe blow to the hypothesis that cortical perceptual functions are based on static properties of individual cells' (1995, p.153).

60. Interestingly, Phillips, Kay & Smyth have examined the possibility that cortical networks perform a kind of statistical latent structure analysis that discovers predictive relations between inputs. They point to a 'redundancy reducing' strategy in cortical processing, which they describe as that of 'maximizing the mutual information between output and input under a constraint that ensures data reduction' (1995, pp.117-118).
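Their proposal can be illustrated with a deliberately simple discrete example (my own construction, not Phillips, Kay and Smyth's model): when two redundant binary inputs must be reduced to a single output bit, the reduction that preserves their shared component retains far more information about the input than one that discards it.

    # Illustrative sketch: mutual information under a one-bit data-reduction constraint.
    from math import log2

    # Joint distribution of two covarying binary features (they agree 90% of the time).
    p_x = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

    def information_kept(p_x, output):
        """I(X;Y) in bits when Y = output(X) is a deterministic reduction of X;
        for deterministic Y this equals H(Y)."""
        p_y = {}
        for x, p in p_x.items():
            p_y[output(x)] = p_y.get(output(x), 0.0) + p
        return -sum(p * log2(p) for p in p_y.values() if p > 0)

    print(information_kept(p_x, lambda x: x[0]))         # ~1.00 bit: keeps the shared component
    print(information_kept(p_x, lambda x: x[0] ^ x[1]))  # ~0.47 bit: keeps only the 'disagreement'

Under the same one-bit constraint, the output attuned to the coherent covariation between the inputs transmits about twice as much information about them - which is the sense in which coherent covariation, rather than the raw input, is what is worth representing.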

61. There is abundant evidence, in other words, that what the most evolved cerebral/cognitive systems are most 'interested' in are covariations, both simple and complex. That they are not interested in stable images is suggested in another, rather startling, way. Even when head and body are perfectly still, spatiotemporal covariations arise through rapid, small-amplitude oscillatory eye movements. When a perfectly stable visual image is projected onto the retina (a technically difficult feat), the result is not a perfectly formed copy in the perceptual and/or cognitive system. Instead, the very opposite happens - the image disappears. As MacKay, in reviewing such phenomena, explains, 'Stabilisation, even if it does not abolish all retinal signals, eliminates all covariation. If no correlated changes take place, there is nothing for analysers of covariation to analyse. If, then, seeing depends on the results of covariation analysis, there will be no seeing.' (1986, p.371)
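MacKay's argument can be illustrated with a purely schematic simulation of my own: under normal fixational jitter, the signal at a given retinal location covaries with eye position, but when the image is yoked to the eye that covariation vanishes, leaving nothing for covariation analysers to work on.

    # Illustrative sketch: stabilisation eliminates covariation.
    import numpy as np

    rng = np.random.default_rng(1)
    scene = np.sin(np.linspace(0, 4 * np.pi, 200))   # a simple one-dimensional 'scene'
    jitter = rng.integers(-3, 4, size=500)           # small fixational eye movements
    receptor = 100                                   # one 'retinal' sampling position

    normal = scene[receptor + jitter]                # scene fixed, eye jitters
    stabilised = scene[np.full(500, receptor)]       # image yoked to the eye

    print(np.std(normal))                            # non-zero: signal varies with eye position
    print(np.std(stabilised))                        # 0.0: no change, hence no covariation
    print(np.corrcoef(jitter, normal)[0, 1])         # eye-signal covariation under jitter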

62. That the system has such 'interest' should not be surprising. It does not simply want to 'know' what is currently before it - i.e., a perfect reproduction of a current scene. It wants to know 'what will happen next', 'what else could it be related to?', 'what can it predict?', and 'what will be the result of a certain action?'. If, as Cariani suggests, '(m)otion appears to be essential for vision', that's because of the way the world is, and the way we experience it (1994, p.231).

63. It is worth emphasising that the structure of the cerebral cortex - a vast horizontal replication of a basic multi-layered processing unit (Rockel, Hiorns & Powell, 1980) - might be one eminently suited to representing the kind of informational hyperstructures and hypernetworks being suggested here. In particular, the rich reciprocal connections within and between all processing centres may have the function depicted by the 'inter-covariations' in Figure 4, in which image formation at one level is 'filled out' by reference to hyperstructural information at another. For example, it is well known that there are at least as many reciprocal connections from cortex to thalamic centres as vice versa. In this view, indeed, the thalamic centres may be considered a 'keyboard' of primary sensory values, from which a harmonious sensory image is created out of incomplete, discordant input after analysis by, and re-entry from, a hyperstructural 'grammar' in cortex.

XI. EMPIRICAL EVIDENCE FOR REPRESENTATIONAL HYPERSTRUCTURES.

64. As mentioned previously, the theory of hyperstructures echoes, and may help reconcile, many diverse views in cognitive theory. Such claims would need to be substantiated in considerable further research, however. In the rest of this paper I simply want to describe some research results that lend support to the model sketched so far.

65. Richardson and Carthy (1990) pitted the hyperstructural model against various other models, using categories of artificial stimuli (simple geometric forms), and found that the covariation relations between features gave good accounts of recognition/classification. In subsequent work, a developmental implication of the model was tested, based on the view that as children's experience in a domain increases, their mental representations of that domain will consist of hyperstructures of increasingly extensive and 'deeper' covariations. Thus it was found that eleven-year-olds' predictions of the speeds of different bikes showed a much greater appreciation of the higher-order interactions among factors such as size of rider, size of bike and road surface than did the predictions of seven-year-olds (Richardson, 1992).
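What sensitivity to such higher-order interactions amounts to can be sketched with invented data (the factors, codings and 'true' rule below are hypothetical, not those of the study): judgements generated by a rule containing a bike-by-surface interaction are fitted poorly by an additive model of main effects, but perfectly once the interaction term is represented.

    # Illustrative sketch: predictability residing in an interaction term.
    import itertools
    import numpy as np

    # Factor codings: rider size, bike size, road surface (0 = rough, 1 = smooth).
    levels = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
    rider, bike, surface = levels.T

    # Hypothetical 'true' judgements: a big bike only helps on a smooth surface.
    speed = 1.0 - 0.5 * rider + 2.0 * bike * surface

    def r_squared(design, y):
        """Proportion of variance in y explained by a least-squares fit of the design."""
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        resid = y - design @ coef
        return 1 - resid.var() / y.var()

    ones = np.ones_like(rider)
    additive = np.column_stack([ones, rider, bike, surface])
    with_interaction = np.column_stack([ones, rider, bike, surface, bike * surface])

    print(r_squared(additive, speed))          # ~0.69: main effects alone miss the pattern
    print(r_squared(with_interaction, speed))  # 1.0: the interaction carries the rest

A learner attuned only to main effects is in the position of the younger children; registering the deeper, second-order covariation is what yields the extra predictability.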

66. Further evidence for cognitive hyperstructures comes from the recognition of objects and events from point-light stimuli. Such stimuli do not contain features (at least, not in the explicit form assumed in most models of concepts). Yet, objects are readily recognised from such stimuli, as long as they exhibit appropriate spatiotemporal covariation (Fox & McDaniel 1982; Johansson 1985). Figure 6 shows a short series of snapshots from the most famous of these, the 'point light walker' (Richardson & Webster, 1996).

ftp://coglit.psy.soton.ac.uk/pub/psycoloquy/1999.volume.10/Pictures/rich6.html

    Figure 6.  A sequence of images from a point light walker (n.b.
    the images are 'spaced out' for this presentation).

67. In spite of widespread demonstration of such functioning, however, there is still considerable uncertainty as to how it is achieved: 'global organization of the whole pattern is implicitly assumed to require integration and reconstruction by higher order neural mechanisms. Precisely how such global organization may be accomplished, however, is seldom discussed' (Lappin, 1994, p.360). I believe that recognition from point light stimuli is simply a spectacular example of the general capacity for image construction and categorisation (and other predictabilities) furnished by the hyperstructures I've been trying to describe.

68. Evidence that such recognition is based on covariation 'samples' in the stimuli came from an experiment in which the point light walker was reduced to ten different permutations of four points each (Richardson & Webster 1996). These very sparse inputs were selected so as to display different degrees of covariation complexity, as measured by computing their mutual information (using standard formulae) and fitting log-linear models (which identify different 'depths' of covariation). Across the range of stimuli, there was a significant association between the covariation complexity in the PLS and the number of subjects who recognised a person walking in the display. In related studies with PLS derived from other animate objects, it was again found that older children appeared to be more sensitive to deeper hyperstructural relations than younger children.
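The logic of the covariation-complexity measure can be conveyed with a simplified sketch of my own (the study's stimuli and exact formulae are not reproduced here): the mutual information between the discretised trajectories of two coordinated points is far higher than that between two independently moving points.

    # Illustrative sketch: scoring covariation between point trajectories.
    import numpy as np
    from collections import Counter
    from math import log2

    def mi_bits(x, y, bins=8):
        """Plug-in mutual information (bits) between two discretised sequences."""
        xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
        yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
        n = len(xd)
        joint, px, py = Counter(zip(xd, yd)), Counter(xd), Counter(yd)
        return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
                   for (a, b), c in joint.items())

    rng = np.random.default_rng(2)
    t = np.linspace(0, 4 * np.pi, 2000)
    hip = np.sin(t) + 0.1 * rng.standard_normal(t.size)
    knee = np.sin(t + 0.5) + 0.1 * rng.standard_normal(t.size)   # coarticulating point
    unrelated = rng.standard_normal(t.size)                      # independently moving point

    print(mi_bits(hip, knee))       # high: coordinated movement
    print(mi_bits(hip, unrelated))  # near zero, apart from estimation bias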

69. Similar results were obtained with point-light stimuli derived from a variety of inanimate objects (e.g., kettle, vacuum-cleaner, iron) filmed in normal use (Richardson, Webster & Cope, submitted). Such PLS, especially the four-point stimuli we used, are relatively covariation-sparse compared with those from animate objects (which consist of numbers of coarticulating parts). Nonetheless, it was thought that this would be a good test of the interaction between covariation structures in current input and knowledge store (i.e., hyperstructural representation) in creating an object form. Responses to the PLS from these sources were compared with those from another set of PLS derived from sequences of random point lights, but matched for levels of covariation complexity. For each stimulus subjects were asked (i) 'do you think the points appear to move in an organized or coordinated fashion, or do they appear to consist of independent random movements?', and (ii) 'could the set represent an object or part of an object?'.

70. Even though the non-object PLS contained levels of covariation complexity that matched those in the object PLS, subjects were much more likely to say that the latter were 'coordinated', and that they represented real objects. Moreover, for the object PLS, these tendencies were significantly associated with the covariation complexities in the stimuli. For this to happen, subjects must have representations of the corresponding objects from past experience (which is not an original argument); but since those representations are activated more or less strongly according to the form and degree of covariation complexity in the PLS, the representations themselves must be in the form of covariation hyperstructures.

71. These are, of course, relatively meagre results. Yet the hyperstructural model generates a rich set of hypotheses that can be readily tested in future research. Since it may offer a way of extending and reconciling a number of views central to current psychology, such an approach would seem to be worthwhile.

ACKNOWLEDGEMENTS

Research in this department was funded in part by the U.K. Economic and Social Research Council (Grant No. R000236842), which is gratefully acknowledged. The author would particularly like to thank David Webster for stimulating the ideas presented here, and three referees who offered many helpful suggestions on a previous draft.

REFERENCES

Abramson, N. (1963) Information Theory and Coding, McGraw-Hill.

Anderson, J.R. (1991) 'The adaptive nature of human categorization.' Psychological Review, 98: 402-429.

Baas, N.A. (1994) 'Emergence, hierarchies and hyperstructures.' In: Artificial life III, (Santa Fe Institute Studies in the Science of Complexity), ed. C.G.Langton, Addison-Wesley.

Baldwin, J.M. (1901) Dictionary of philosophy and psychology. New York: Macmillan.

Barlow, H. (1985) 'The twelfth Bartlett memorial lecture: The role of single neurons in the psychology of perception.' The Quarterly Journal of Experimental Psychology, 37A: 121-145.

Barsalou, L.W. (1993) 'Challenging assumptions about concepts.' Cognitive Development, 8: 169-180.

Billman, D. (1996) 'Structural biases in concept learning: influences from multiple functions.' The Psychology of Learning and Motivation, 35: 283-320.

Bozdogan, H. (1990) 'On the information-based measure of covariance complexity and its application to the evaluation of multivariate linear models.' Communications in Statistics: Theory & Methodology, 19: 221-278.

Braddon, S.S. (1986) 'Thinking on your feet: The consequences of action for the relation of perception and cognition.' In: Event cognition: An ecological perspective, eds. V. McCabe & G.J. Balzano, Erlbaum.

Butterworth, G. (1983). 'Dynamic approaches to infant perception and action: old and new theories about the origins of knowledge.' In: A Dynamic Systems Approach to Development. eds. L.B. Smith & E. Thelen, MIT Press.

Cariani, P. (1995) 'As if time really mattered: temporal strategies for neural coding of sensory information.' Communications and cognition, 12, 161-229. Reprinted in: Origins: brain and self-organization, ed. K.H. Pribram, Erlbaum.

Cosmides, L. & Tooby, J. (1994). 'Origins of domain specificity: evolution of functional organization.' In: Mapping the mind: domain specificity in cognition and culture, eds. L.A. Hirschfeld & S.A. Gelman, Cambridge University Press.

Dawkins, R. (1976) 'Hierarchical organisation: a candidate principle for ethology.' In: Growing points in ethology, eds. P.P.G. Bateson & R.A. Hinde, Cambridge University Press.

Dinse, H.R., Recanzone, G.H. & Merzenich, M.M. (1993) 'Alterations in correlated activity parallel ICMS-induced representational plasticity.' NeuroReport, 5: 173-176.

Donald, M. (1991) Origins of the modern mind, Harvard University Press.

Fitch, R.H., Miller, S. & Tallal, P. (1997) 'Neurobiology of speech perception.' Annual Review of Neuroscience, 20: 331-353.

Fodor, J. (1994). 'Concepts: a potboiler.' Cognition, 50: 95-113.

Fox, R. & McDaniel, C. (1982) 'The perception of biological motion by human infants.' Science, 218: 486-487.

Gibson, J.J. (1986). The ecological approach to visual perception. Hillsdale, N.J.: Erlbaum.

Gilbert, C.D. (1996) 'Plasticity in visual perception and physiology.' Current Opinion in Neurobiology, 6: 269-274.

Gilbert, C.D. & Wiesel, T.N. (1990) 'The influence of contextual stimuli on the orientation selectivity in the cells of primary visual cortex of the cat.' Vision Research, 30: 1689-1701.

Hampton, J.A. (1997) 'Psychological representation of concepts.' In: Cognitive models of memory, ed. M.A. Conway, Psychology Press.

Hanson, S.J. & Burr, D.J. (1990). 'What connectionist models learn: learning and representation in connectionist networks.' Behavioral & Brain Sciences, 13: 471-518.

Jackendoff, R. (1989). 'Languages of the computational mind.' In: The computer and the brain: perspectives on human and artificial intelligence, eds. J.R. Brink & C.R. Haden, Elsevier/North-Holland.

Johansson, G. (1985) About visual event perception. In: Persistence and Change, eds R.E. Shaw & W.H. Warren, Erlbaum.

Knudsen, E.I. (1985) 'Auditory experience influences the development of sound location and space coding in the auditory system.' In: Comparative neurobiology, eds. M.J. Cohen & F. Strumwasser, Wiley.

Knudsen, E.I. & Brainard, M.S. (1995) 'Creating a unified representation of visual and auditory space in the brain.' Annual Review of Neuroscience, 18: 19-43.

Lappin, J.S. (1994) 'Sensing structure in space-time.' In: Perceiving events and objects, eds. G. Jansson, S.S. Bergstrom & W. Epstein, Erlbaum.

Livingstone, K.R., Andrews, J.K. & Harnad, S. (1998) 'Categorical perception effects induced by category learning.' Journal of Experimental Psychology: Learning, Memory & Cognition, 24: 734-753.

MacKay, D.M. (1986) 'Vision - the capture of optical covariation.' In: Visual Neuroscience, eds. J.D. Pettigrew, K.J. Sanderson & W.R. Levick, Cambridge University Press.

Marr, D. (1982). Vision: a computational investigation into the human representation and processing of visual information, Freeman.

Mastronarde, D.N. (1989) 'Correlated firing of retinal ganglion cells.' Trends in Neuroscience, 12, 75-80.

Medin, D.L., Goldstone, R.L. & Gentner, D. (1993) 'Respects for similarity.' Psychological Review, 100: 254-278.

Nosofsky, R.M. (1986) 'Attention, similarity, and the identification-categorization relationship.' Journal of Experimental Psychology: General, 115: 39-57.

Nosofsky, R.M. & Palmeri, T.J. (1997) 'An exemplar-based random walk model of speeded classification.' Psychological Review, 104: 266-300.

Pearlmutter, B.A. & Hinton, G.E. (1986). 'G-maximization: an unsupervised learning procedure for discovering regularities.' American Institute of Physics Conference Proceedings, 151: 333-338.

Phillips, W.A., Kay, J. & Smyth, D.M. (1995) 'How local cortical processors that maximize coherent variation could lay foundations for representation proper.' In: Neural Computation and Psychology: Proceedings of the 3rd Neural Computation and Psychology Workshop (NCPW3), Stirling, Scotland, 31 August - 2 September 1994, eds. L.S. Smith & P.J.B. Hancock, Springer.

Piaget, J. (1970/1988) 'Piaget's Theory.' In: Manual of child psychology, ed. P.H. Mussen, London: Wiley; reprinted in: Cognitive Development to Adolescence, eds. K. Richardson & S. Sheldon, Hove: Erlbaum.

Pinker, S. (1997) How the mind works, Penguin.

Plotkin, H.C. (1994) The nature of knowledge, Penguin.

Pustejovsky, J. (1995) The generative lexicon. Cambridge, Mass., MIT Press.

Reber, A.S. (1989) 'Implicit learning and tacit knowledge.' Journal of Experimental Psychology: General, 118, 219-235.

Richardson, K. (1992) 'Covariation analysis of knowledge representation: some developmental studies.' Journal of Experimental Child Psychology, 53: 129-150.

Richardson, K. (1998) The origins of human potential: evolution, development and psychology, Routledge.

Richardson, K. (1999, in press) 'Liberating constraints.' Review essay on "Rethinking innateness" by J. Elman et al., Theory and Psychology, 9: 117-127.

Richardson, K. & Carthy, T. (1990) 'The abstraction of covariation in conceptual representation.' British Journal of Psychology, 81: 415-438.

Richardson, K. & Webster, D.S. (1996) 'Recognition of objects from point-light stimuli: evidence for covariation hierarchies in conceptual representation.' British Journal of Psychology, 87: 567-591.

Richardson, K., Webster, D.S. & Cope, N. (submitted). Conception of visual form from point light stimuli.

Rockel, A.J., Hiorns, R.W. & Powell, T.P.S. (1980) 'The basic uniformity in structure of the neocortex.' Brain, 103: 221-244.

Rosch, E. (1978) 'Principles of categorization.' In: Cognition and categorization, eds. E.Rosch & B.B. Lloyd, Erlbaum.

Shepard, R. N. (1975). 'Form, formulation, and transformation of mental representations.' In: Information processing and cognition, ed. R. Solso, Erlbaum.

Shepard, R.N. (1981) 'Psychophysical complementarity.' In: Perceptual organization, eds. M. Kubovy & J. Pomerantz, Erlbaum.

Shepard, R.N. (1984) 'Ecological constraints on internal representation: resonant kinematics of perceiving, imagining, thinking and dreaming.' Psychological Review, 91: 417-448.

Singer, W. & Gray, C.M. (1995) 'Visual feature integration and the temporal correlation hypothesis.' Annual Review of Neuroscience, 18: 555-586.

Spalding, T.L. & Murphy, G.L. (1996) 'Effects of background knowledge on category construction.' Journal of Experimental Psychology: Learning, Memory and Cognition, 22: 525-538.

Thelen, E. & Smith L.B. (1994) A dynamic systems approach to the development of cognition and action, MIT Press.

Uttal, W.R. (1975) An autocorrelation theory of form detection, Wiley.

Uttal, W.R. (1988) On seeing forms, Erlbaum.

Weinberger, N.M. (1995) 'Dynamic regulation of receptive fields and maps in the adult sensory cortex.' Annual Review of Neuroscience, 18: 129-158.

Wisniewski, E.J., & Medin, D.L. (1994) 'On the interaction of theory and data in concept learning.' Cognitive Science, 18: 221-281.

Zeki, S. (1993) A Vision of the Brain. Oxford: Blackwell.

