Issue | Title & Author | Abstract
---|---|---
12(034) | GROUNDING SYMBOLS IN THE ANALOG WORLD WITH NEURAL NETS -- A HYBRID MODEL. Target Article on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Searle's Chinese Room Argument (that rule-based symbol manipulation is not enough for symbol-understanding) is based on a symptom of the Symbol Grounding Problem (that rule-based symbol manipulation is circular and ungrounded). Symbols must be grounded directly in the capacity to identify and interact with the objects they designate. One candidate way to do this is to use neural nets to try to give a robot Turing-scale sensorimotor capacities congruent with its Turing-scale linguistic capacities. Such a grounded hybrid symbolic/dynamic robot would be immune to Searle's Chinese Room Argument. (A toy sketch of this hybrid architecture follows the table.) Keywords: analog processes, category learning, computationalism, connectionism, dynamical systems, embodiment, language, meaning, neural nets, robotics, sensorimotor capacity, Searle, symbol grounding
12(035) | SYMBOLISM VERSUS CONNECTIONISM: AN INTRODUCTION. Commentary on Harnad on Symbolism-Connectionism. David M. W. Powers, Informatique, TELECOM PARIS, 46 rue Barrault, 75634 Paris cedex 13, France, powers@inf.enst.fr; Peter A. Flach, Department of Computer Science, University of Bristol, The Merchant Venturers Building, Room 3.26, Woodland Road, Bristol BS8 1UB, United Kingdom, Peter.Flach@bristol.ac.uk | Harnad's (2001) main argument can be roughly summarised as follows: due to Searle's Chinese Room argument, symbol systems by themselves are insufficient to exhibit cognition, because the symbols are not grounded in the real world, hence without meaning. However, a symbol system that is connected to the real world through transducers receiving sensory data, with neural nets translating these data into sensory categories, would not be subject to the Chinese Room argument. Harnad's article is not only the starting point for the present debate, but is also a contribution to a long-lasting discussion about such questions as: Can a computer think? If yes, would this be solely by virtue of its program? Is the Turing Test appropriate for deciding whether a computer thinks?
12(036) | TRANSDUCTION AND DEGREE OF GROUNDING. Commentary on Harnad on Symbolism-Connectionism. C. Franklin Boyle, CDEC, 3028 Hamburg Hall, Carnegie Mellon University, Pittsburgh, PA 15213, USA, fb0m@andrew.cmu.edu | It is argued that Harnad's (2001) proposed solution to the symbol grounding problem falls short of the mark because it does not address the issues of HOW sensory invariants are embodied as symbols and, subsequently, how those symbols cause change. There is no guarantee that, as the outputs of neural nets, such symbols are not arbitrary encodings, which means the physical characteristics of sensory inputs preserved by Harnad's analog processes would be lost, having no effect on system behavior when the symbols are processed. Even if those characteristics are embodied by the symbols, processing on digital computers would mask their effects.
12(037) | TRANSDUCTION, YES. DEGREES OF GROUNDING, NO. Reply to Boyle on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Boyle (2001) is right that transduction is a critical component of symbol grounding. But symbols are not grounded by degrees: Either a robot can identify and interact with the objects its symbols are interpretable as referring to, or it cannot. The rest is just about the reliability and extent of the identification and interaction.
12(038) | PEOPLE ARE INFINITARY SYMBOL SYSTEMS; NO SENSORIMOTOR NECESSARY. Commentary on Harnad on Symbolism-Connectionism. Selmer Bringsjord, Dept. of Philosophy, Dept. of Comp. Sci., Rensselaer Polytechnic Institute, Troy NY 12180, USA, selmer@rpi.edu, selmer@rpitsmts | Harnad (2001) is right that Searle's Chinese Room Argument shoots down the Turing Test. He is wrong that CRA doesn't shoot down the 'Total' TT as well -- it does. (He is also wrong about what people, at bottom, are. People, by my lights, are super-computational, and in principle they don't need sensorimotor capacities.)
12(039) | PEOPLE ARE NOT VIRTUAL: REAL SENSORIMOTOR EMBODIMENT IS NECESSARY. Reply to Bringsjord on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Bringsjord's (2001) imagined possibilities, unlike Searle's (1980) thought-experiment, don't seem to have any bearing on what it takes, in reality, to generate a mind. Real sensorimotor embodiment looks like one of the necessary conditions.
12(040) | THE UBIQUITY OF COMPUTATION. Commentary on Harnad on Symbolism-Connectionism. Eric Dietrich, Program in Philosophy, Computers, and Cognitive Science, Binghamton Univ., dietrich@binghamton.edu | I argue that from a natural, explanatory perspective, computation is ubiquitous. This is due to our finite ability to measure states and processes in the world. While it is true that in many of our sciences we frequently use continuous descriptions, such descriptions are not necessary, and in cases where information is being studied, continuous descriptions can hamper research. Then, I discuss briefly the empirical nature of the computational hypothesis, and defend it against Harnad's (2001) criticisms. I also point out that computation in the physical world is a semantical process, and that criticisms of it based on its alleged purely syntactic nature are misguided.
12(041) | THE UBIQUITY OF PHYSICS (AND THE PHYSICS-INDEPENDENCE OF COMPUTATION). Reply to Dietrich on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Dietrich (2001) thinks everything is computational. If so, then nothing is meant by saying something is computational. But it is not so. Flying is not computational. Heating is not computational. By the same token, thinking is not (all) computational.
12(042) | COMPUTATIONALISM, NEURAL NETWORKS AND MINDS, ANALOG OR OTHERWISE. Commentary on Harnad on Symbolism-Connectionism. Michael G. Dyer, Computer Science Department, 3532 Boelter Hall, UCLA, Los Angeles CA 90024, USA, Dyer@cs.ucla.edu | Rebuttals are given to Harnad's (2001) argument that only analog systems (by virtue of their physicality) might be capable of supporting Mind. These rebuttals are based on the assumption that Mind arises due to how matter is organized (as opposed to its specific physicality). Mind could arise in non-analog systems by means of simulating Mind's organizational properties and dynamics at an appropriate level. Many observable behaviors of Mind (e.g., language comprehension, creativity, learning, common-sense reasoning) have already been demonstrated (albeit at primitive stages) in non-analog (i.e. computational) systems, at both symbolic and artificial neural network levels of organization.
12(043) | COMPUTATION AND MINDS: ANALOG IS OTHERWISE. Reply to Dyer on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Like Dietrich (2001), Dyer (2001) seems to think that code can reconfigure a computer into anything, with any properties. It cannot, and flying, heating and thinking are three examples.
12(044) | THE TTT IS NOT THE FINAL WORD. Commentary on Harnad on Symbolism-Connectionism. James H. Fetzer, Department of Philosophy, University of Minnesota, Duluth, MN 55812, USA, jfetzer@ub.d.umn.edu | Interpreting Harnad (2001) behavioristically, this commentary contends that the TTT cannot adequately discriminate between thinking things and thoughtless machines. The difference at stake is an internal difference with external manifestations, one that requires an inference to the best explanation for its tentative solution. I thus cast doubt upon the adequacy of Harnad's approach. In subsequent studies (Fetzer 1995; Fetzer 2001) I have advanced (what I take to be) a somewhat stronger critique of Harnad's position, which also addresses the TTTT.
12(045) | TTT GUARANTEES ONLY GROUNDING: BUT MEANING = GROUNDING + FEELING. Reply to Fetzer on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Robotic grounding is not behaviorism; it is the reverse engineering of our cognitive capacities. It guarantees grounding (i.e., the autonomous capacity to connect symbols to the external objects they are about with no mediation by an external interpreter) but not meaning, which calls for something more (but not the resources Fetzer (2001) proposes). It calls for whatever it is that gives us feelings. But here too, the Total Turing Test is the final arbiter (yet no guarantor).
12(046) | COMPUTERS DON'T FOLLOW INSTRUCTIONS. Commentary on Harnad on Symbolism-Connectionism. Pat Hayes, Beckman Institute, University of Illinois, 405 North Mathews Avenue, Urbana, IL 61801, USA, hayes@cs.stanford.edu | Searle (1980) makes the mistake of thinking that computation just consists of a CPU, rulefully manipulating symbols. The separability of software from its hardware implementation is not as absolute as that. Searle simulating a computer is not actually a computer. Hence nothing follows from what Searle does or does not understand whilst doing so.
12(047) | IMPLEMENTATION-DEPENDENT COMPUTATION IS NOT COMPUTATION. Reply to Hayes on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Hayes (2001) suggests that implementation-independence is not "absolute." This will have to be worked out more precisely, because, on the face of it, this seems to contradict classical (Turing) definitions of computation. For the time being, though, a sensorimotor robot is not just a computer, on any definition of computation, nor is any transducer of energy, just as a plane is not a computer, nor is a furnace. So just as a plane is no longer flying (hence no longer a plane), nor a furnace heating (hence no longer a furnace) if you remove their respective motoric and thermal transducers, a mind is no longer thinking (hence no longer a mind) if you remove its sensorimotor transducers, no matter what computational hardware or software you leave in place.
12(048) | CONTINUITY, LEARNING, SYMBOL-GROUNDING AND MEANING. Commentary on Harnad on Symbolism-Connectionism. Vasant Honavar, Department of Computer Science, Iowa State University, Ames, Iowa 50011, USA, honavar@iastate.edu | If analog means continuous (as opposed to discrete), the centrality of analog sensory projections appears questionable. It is also unclear that learning is essential for symbol grounding. And is the locus of meaning the organism (system), the species, the gene, the environment, or the cosmos?
12(049) | DYNAMICAL SYSTEMS, EVOLUTION, GROUNDING AND MEANING. Reply to Honavar on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Symbol systems (computation) are different from dynamical systems (physics) whether or not physics proves to be really continuous. Symbols can be grounded by learning or evolution or both, but Turing Test-passing is temporal, so it requires some learning. Sensorimotor grounding does not guarantee meaning, however, only autonomy from external interpretation.
12(050) | COGNITION, CHAOS AND NON-DETERMINISTIC SYMBOLIC COMPUTATION: THE CHINESE ROOM PROBLEM SOLVED? Commentary on Harnad on Symbolism-Connectionism. R.W. Kentridge, Psychology Department, University of Durham, Durham DH1 3LE, UK, web: www.dur.ac.uk/~dps0rwk, robert.kentridge@durham.ac.uk | Symbolic descriptions of the behaviour of continuous dynamical systems are, of necessity, approximations. In some cases, where the behaviour of the system is very orderly or completely random, a single symbolic representation may provide an adequate description of the system. Dynamical systems which perform non-trivial computation do not, however, operate in either of these regimes. One reason why symbol-grounding is such a tricky issue may be that no single symbolic description adequately describes the behaviour of computationally powerful dynamical systems like brains over a range of spatial or temporal scales. A symbolic description in which the 'contents' of symbols must sometimes be examined is, of course, not truly symbolic since its 'symbols' are not truly atomic. (A toy illustration follows the table.)
12(051) | IS REAL CHAOS ESSENTIAL TO COGNITIVE CAPACITY, OR WILL SYMBOLIC SIMULATION DO? Reply to Kentridge on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Kentridge (2001) suggests that chaotic dynamics may be essential for cognition. But the performance power of chaos remains to be demonstrated, and even if it helps to pass the Turing Test, it remains to be shown (as with neural net parallelism) that serial symbolic simulation would not have worked just as well.
12(052) | GROUNDING ANALOG COMPUTERS. Commentary on Harnad on Symbolism-Connectionism. Bruce J. MacLennan, Computer Science Department, University of Tennessee, Knoxville, TN 37996, USA, maclennan@cs.utk.edu | The issue of symbol grounding is not essentially different in analog and digital computation. The principal difference between the two is that in analog computers continuous variables change continuously, whereas in digital computers discrete variables change in discrete steps (at the relevant level of analysis). Interpretations are imposed on analog computations just as on digital computations: by attaching meanings to the variables and the processes defined over them. As Harnad (2001) claims, states acquire intrinsic meaning through their relation to the real (physical) environment, for example, through transduction. However, this is independent of the question of the continuity or discreteness of the variables or the transduction processes.
12(053) | SYMBOL GROUNDING IS AN END: ANALOG PROCESSES ARE A MEANS. Reply to MacLennan on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Analog computation can also be ungrounded (i.e., interpretable as meaning something, but not intrinsically meaning anything). Any object or process can be "ungrounded" in that sense. But the symbol grounding problem afflicts symbol systems, and only symbol systems that aspire to implement thinking. The classical theory of computation pertains to discrete symbol systems. There is as yet no theory of "continuous symbols." Language (and the putative language of thought) are discrete symbol systems.
12(054) | DIGITAL COMPUTERS AS RED HERRINGS. Commentary on Harnad on Symbolism-Connectionism. Drew McDermott, Computer Science Department, Yale University, P.O. Box 2158 Yale Station, New Haven, CT 06520, USA, mcdermott@cs.yale.edu | There is indeed a symbol-grounding problem, but appealing to the powers of neural nets will not solve it. There is no important difference between digital computers and other computational media. If cognition really is computation, then it doesn't matter exactly what kind of computer performs it. However, this doesn't explain how symbols come to have meanings, or even how symbols come to exist. The causal theory of meaning will eventually solve this puzzle, we hope, and the solution will have the same outlines for digital computers as for neural nets and more exotic computational devices.
12(055) | SOFTWARE CAN'T RECONFIGURE A COMPUTER INTO A RED HERRING. Reply to McDermott on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Some physical systems are computers, some are not. Those that are can be reconfigured by their software to simulate (i.e., to be systematically interpretable as) any other physical system and its causal properties and connections. But simulated causality is not real causality. A simulated plane cannot fly in a real sky. By the same token, simulated "thoughts" are not causally connected with the things they are interpretable (by external interpreters) as being about.
12(056) | A GROUNDING OF DEFINITION. Commentary on Harnad on Symbolism-Connectionism. David M. W. Powers, Informatique, TELECOM PARIS, 46 rue Barrault, 75634 Paris cedex 13, France, powers@inf.enst.fr | The symbol grounding thesis rightly highlights that ungrounded semantics, based on dictionary-like definitions of symbols in terms of symbols, is inherently circular. Nonetheless there are some irrelevant distractions. In particular, it is argued that it is irrelevant whether a system is connectionist or symbolic, as either can simulate the other - both are equivalent to a Turing Machine. Furthermore, the distinction of digital vs analog is a red herring given that in practice both are limited in resolution - again one can be simulated in terms of the other. Similarly, the question of parallel versus serial hardware does not affect the power of the machine - we can distinguish concurrency implemented using timesharing from true parallelism, but the two implementations are Turing equivalent. In all cases, there is a cost in simulating one kind of system in terms of another, but this only affects efficiency, not efficacy. (A toy illustration follows the table.)
12(057) | DON'T TAKE ANY VIRTUAL PLANES! Reply to Powers on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | No disagreement with Powers (2001) about the power of computation to simulate neural nets, continuity, and parallelism. That's just the Church/Turing Thesis (that computation can simulate just about anything). But nets, continuity and parallel processing are just means, not ends. Real-world performance capacity, in contrast, is an end. And there is no way that a simulated transducer (optical, say) can transduce real light. It's the wrong causality, be it ever so Turing-equivalent to the real thing. So a virtual robot or virtual brain is no more able to think than a virtual plane is able to fly.
12(058) | COMPUTATIONAL GROUNDING. Commentary on Harnad on Symbolism-Connectionism. Herbert L. Roitblat, Department of Psychology, University of Hawaii, Honolulu, HI 96822, USA, roitblat@uhunix.uhcc.hawaii.edu | Harnad (2001) defines computation in a conventional way to mean the manipulation of physical symbol tokens on the basis of purely syntactic rules. This definition inappropriately excludes analog systems, which can perform any computation that a symbolic system could (and vice versa). Because they have the same computational power, Harnad is arguably wrong that criticisms which apply to symbolic systems do not apply to analog systems. The real difference lies not in the continuous versus discrete properties of digital vs analog representations, but in what Harnad calls symbol grounding. Purely syntactic systems can transmit truth from the premises of an argument to the conclusion of the argument, but they cannot establish the validity of the premises. Symbol grounding is the process of establishing the premises. Although computers are capable of purely syntactic processing, for them to produce useful results, there must be constraints on what the symbols stand for. These constraints are part of what grounds the symbols. Neither Chinese rooms nor Chinese gyms can escape from the need to ground symbols for truly functional computation.
12(059) | GROUNDING: INTERPRETER-INDEPENDENT CAUSAL CONNECTIONS BETWEEN SYMBOLS AND OBJECTS, TURING-SCALE. Reply to Roitblat on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Roitblat (2001) thinks grounding has something to do with "specifying the premises of formal arguments." I think it has to do with causally connecting symbols to what they are about, directly and autonomously, rather than through the mediation of an external interpretation: My candidate is causal grounding via whatever internal resources it takes to make a robot successfully pass the Total Turing Test.
12(060) | THE FAILURES OF COMPUTATIONALISM: I. Commentary on Harnad on Symbolism-Connectionism. John R. Searle, Department of Philosophy, University of California, Berkeley, CA 94720, USA, searle@cogsci.Berkeley.edu | Harnad (2001) accepts the Chinese Room Argument (Searle 1980) but not its logical consequences. The Argument shows that syntax by itself is not sufficient to cause/constitute semantics. Syntax is just as insufficient if it is inside a robot. The brain, we know, has sufficient power to cause/constitute semantics; the same cannot be said of sensorimotor transduction, or of connectionist networks, with or without syntax.
12(061) | TITLE TO COME. Reply to Searle on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | Abstract to come
12(062) | THE FAILURES OF COMPUTATIONALISM: II. Commentary on Harnad on Symbolism-Connectionism. John R. Searle, Department of Philosophy, University of California, Berkeley, CA 94720, USA, searle@cogsci.Berkeley.edu | Syntax is not sufficient to cause semantics; the brain is sufficient.
12(063) | TITLE TO COME. Reply to Searle on Harnad on Symbolism-Connectionism. Stevan Harnad, Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom. http://www.cogsci.soton.ac.uk/~harnad/ harnad@cogsci.soton.ac.uk | The logical consequence of the Chinese Room Argument (Searle 2001) is that syntax (computation, symbol-manipulation) by itself is not sufficient to cause/constitute semantics (conscious understanding of the symbol meaning by the system itself). But syntax inside a robot is no longer syntax by itself. Sensorimotor-transducers-plus-syntax constitute a larger candidate system. The brain is of course sufficient to cause/constitute semantics, but it is not clear which of the brain's many properties are necessary or relevant for its power to cause/constitute semantics. The brain's power to pass the Total Turing Test (TTT) is likely to be necessary (though not necessarily sufficient) to cause/constitute semantics. Hence any system with the power to pass the TTT provides empirical evidence as to what properties are likely to be necessary and relevant to cause/constitute semantics (but the TTT is still no guarantor). Sensorimotor transduction and/or connectionist networks may or may not be sufficient either to pass the TTT or to cause/constitute semantics, but they are certainly sufficient to escape the logical consequences of the Chinese Room Argument.
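
The hybrid architecture of the target article (12(034)) can be sketched in a few lines of code. The sketch below is illustrative only: the two-dimensional feature vectors, the noise model, and the category names are invented, and a simple nearest-prototype rule stands in for a trained neural net. What it shows is the division of labor Harnad proposes: analog sensory projections are transduced, a net maps them onto discrete elementary category symbols, and the symbol system then composes those grounded symbols into new symbols whose grounding is inherited (in the style of Harnad's "zebra = horse + stripes" example).

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented category prototypes standing in for learned sensory invariants.
PROTOTYPES = {
    "horse": np.array([1.0, 0.0]),
    "stripes": np.array([0.0, 1.0]),
}

def transduce(thing: str) -> np.ndarray:
    """Stand-in for a sensory transducer: a noisy analog projection of an object."""
    return PROTOTYPES[thing] + rng.normal(scale=0.1, size=2)

def categorize(projection: np.ndarray) -> str:
    """Stand-in for the trained neural net: map the analog projection
    onto the discrete symbol of the nearest category prototype."""
    return min(PROTOTYPES, key=lambda name: float(np.linalg.norm(projection - PROTOTYPES[name])))

# Elementary symbols grounded in (simulated) sensorimotor interaction...
grounded = {categorize(transduce(name)) for name in PROTOTYPES}

# ...which the symbol system can then compose into new symbols whose
# grounding is inherited from the grounded elementary ones.
if {"horse", "stripes"} <= grounded:
    print("zebra := horse & stripes")
```

The crux of the proposal is that only the bottom layer touches the analog world; the symbolic layer never needs an external interpreter because its primitive terms are already causally connected to the objects they designate.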
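Kentridge's claim in 12(050), that symbolic descriptions of continuous dynamical systems are of necessity approximations, can be illustrated with a standard textbook example not drawn from the commentary itself: the logistic map coarse-grained by a binary partition. Two continuous states too close to distinguish at a fixed symbolic resolution soon generate different symbol sequences, so no single finite symbolic description captures the system's behaviour across scales.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map, which is chaotic at r = 4."""
    return r * x * (1.0 - x)

def symbolic_orbit(x: float, steps: int) -> str:
    """Coarse-grain the continuous orbit into symbols: L if x < 0.5, else R."""
    symbols = []
    for _ in range(steps):
        symbols.append("L" if x < 0.5 else "R")
        x = logistic(x)
    return "".join(symbols)

n = 60
a = symbolic_orbit(0.2, n)
b = symbolic_orbit(0.2 + 1e-9, n)  # indistinguishable at this coarse resolution
first_mismatch = next((i for i, (s, t) in enumerate(zip(a, b)) if s != t), n)
print(a)
print(b)
print(f"States 1e-9 apart share only their first {first_mismatch} of {n} symbols.")
```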
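Powers' point about parallelism in 12(056), that timeshared (serial) and truly parallel implementations differ in efficiency but not efficacy, can likewise be given a toy illustration (the three-unit "network" and its weights are invented): a synchronous parallel update and a serial one-unit-at-a-time update compute exactly the same function, provided the serial version buffers the old state.

```python
# Invented 3-unit linear "network" weights, for illustration only.
W = [[0.0, 0.5, -0.5],
     [0.5, 0.0, 0.5],
     [-0.5, 0.5, 0.0]]
state = [1.0, 0.0, -1.0]

def step_parallel(s):
    """All units update at once from the same snapshot of the state."""
    return [sum(W[i][j] * s[j] for j in range(3)) for i in range(3)]

def step_serial(s):
    """Units update one at a time (timesharing), reading a buffered copy
    of the old state so the result matches the parallel update exactly."""
    old, new = list(s), list(s)
    for i in range(3):
        new[i] = sum(W[i][j] * old[j] for j in range(3))
    return new

assert step_parallel(state) == step_serial(state)
print(step_parallel(state))  # [0.5, 0.0, -0.5] either way
```

The buffering of the old state is the whole trick: it is what makes the serial schedule semantically identical to the parallel one, at a cost only in time, which is Powers' efficiency-versus-efficacy distinction in miniature.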