James H. Fetzer (2001) The TTT is not the Final Word. Psycoloquy: 12(044) Symbolism Connectionism (11)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

THE TTT IS NOT THE FINAL WORD
Commentary on Harnad on Symbolism-Connectionism

James H. Fetzer
Department of Philosophy
University of Minnesota
Duluth, MN 55812, USA

jfetzer@ub.d.umn.edu

Abstract

Interpreting Harnad (2001) behavioristically, this commentary contends that the TTT cannot adequately discriminate between thinking things and thoughtless machines. The difference at stake is an internal difference with external manifestations, one that requires an inference to the best explanation for its tentative resolution. I thus cast doubt upon the adequacy of Harnad's approach. In subsequent work (Fetzer 1995; 2001) I have advanced what I take to be a somewhat stronger critique of Harnad's position, one that also addresses the TTTT.

    REPRINT OF: Fetzer, J. H. (1993). The TTT is not the final word.
    Think 2: 12-78 (Special Issue on "Connectionism versus Symbolism",
    D.M.W. Powers & P.A. Flach, eds.).
    http://cwis.kub.nl/~fdl/research/ti/docs/think/2-1/index.stm

1. My purpose is to explain three things: first, that there is an alternative to Harnad's (1990) version of the symbol grounding problem, known as the problem of primitives; second, that there is an alternative to his solution (which is externalist) in the form of a dispositional conception (which is internalist); and, third, that the TTT, properly understood, may provide partial and fallible evidence for the presence of similar mental powers, but cannot supply conclusive proof, because more than observable symbolic manipulation and robotic behavior is involved here, as Harnad (1991) admits. Carrying the problem further appears to require inference to the best explanation (see Fetzer 1995).

2. Harnad (2001) claims that the combined power of symbolic manipulation and robotic behavior affords our best experiential test for understanding both language and cognition. His approach emphasizes the theoretical necessity of resolving the symbol grounding problem by establishing appropriate links between the symbols that a system can manipulate, the behavior that the system can display, and the properties of the external world. The symbols that a system manipulates are 'grounded' when there is an appropriate isomorphism between those symbols and features of the world.

3. The purpose of the TTT is to measure the extent to which the symbols manipulated by a system can be systematically interpreted as standing for specific objects and properties in the external world, by means of observations of the behavior the system displays in dealing with those objects and properties. This approach is 'in the spirit of behaviorism', since the only kinds of tests used to determine the meaning of those symbols are formal criteria for qualifying as a 'symbol system' and behavioral criteria for qualifying as a 'meaningful' symbol system (cf. Harnad, 1990, p. 345).

4. That the symbols manipulated by one system can be systematically interpreted by another system as standing for specific features of the world, however, does not imply that they actually have that specific meaning - or any other meaning - for that system. The TTT is a measure of the extent to which those symbols can be systematically interpreted as if they stand for specific objects and properties in the external world. But a system can be 'semantically interpretable' as if it possessed a certain property even if it does not happen to possess that property - even on the basis of the TTT.

5. The insufficiency of the TTT, moreover, should not be very surprising. Quine's indeterminacy of translation and Dennett's intentional stance reflect the potential for alternative hypotheses which transcend symbolic manipulation or robotic capacity. The TTT might exhaust our experiential evidence (absent CT scans, X-rays, surgery and the like), but it does not exhaust our theoretical alternatives via inference to the best explanation (Fetzer, 1991, 1993). Systems with similar behavioral repertoires, for example, could still differ in their causal origins or in their internal composition, which might support very different inferences and conclusions regarding their mental powers.

6. There is an important difference, after all, between symbolic manipulation and robotic behavior, on the one hand, and the intellectual (mental, cognitive) states of which they may or may not be observable manifestations, on the other. Surely two systems are similar in their intellectual (mental, cognitive) powers only if their unobservable intellectual (mental, cognitive) processes are similar. A more promising approach 'in the spirit of behaviorism' would require a conception of behavior broad enough to include any internal or external effect of any internal or external cause.

7. A conception of this kind, of course, defeats the prospects for any attempt to reduce internal mental states to external observable behavior. But it does supply a foundation for interpreting the meaning of signs or symbols as their causal roles in influencing behavior in this broad sense. When minds are viewed as sign-using (or 'semiotic') systems, the meaning of a sign for a system becomes its causal role in influencing behavior (Fetzer, 1988; 1989; 1990; 1991), and the symbol grounding problem is seen to be a special case of what is better envisioned as the problem of primitives.

8. An approach of this kind places meanings within systems rather than in their relations to the external world. The meaning of a sign is located in the system's dispositions toward behavior when conscious of that sign's presence, given its other internal states, rather than in any isomorphism that may or may not obtain between those signs and features of the world. While these 'other internal states' differ for different kinds of systems, for human beings they include motives, beliefs, ethics, abilities, and capabilities. A complete set of values for these variables thus forms a context.

9. The meaning of a specific sign S for a particular semiotic system then becomes the totality of tendencies toward behavior of various kinds relative to the (possibly infinitely varied) contexts within which that system might find itself, when aware of that sign's presence. Two signs S1 and S2 have the same meaning for a system if that system would have the same dispositional tendencies in the same contexts, when aware of either sign's presence. Their status as sign-using systems requires that those signs be meaningful for those systems themselves, however, and not merely for a user of those systems or for an observer of their behavior (Fetzer, 1988; 1989; 1991).
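The dispositional account in the two paragraphs above can be given a minimal computational sketch: meaning as a mapping from contexts to behavioral tendencies, and sameness of meaning as agreement across every context. All names here (Context, disposition, same_meaning, the toy system) are illustrative assumptions, not anything found in the source.

```python
# Hypothetical sketch of the dispositional account: the "meaning" of a
# sign for a system is its totality of behavioral tendencies across
# contexts; two signs mean the same iff those tendencies coincide in
# every context. All identifiers are illustrative, not from the source.

from typing import Callable, Iterable

# A context is a complete assignment of the system's other internal
# states (motives, beliefs, abilities, ...); here, a frozen set of
# (variable, value) pairs.
Context = frozenset

def meaning(disposition: Callable[[str, Context], str],
            sign: str,
            contexts: Iterable[Context]) -> dict:
    """The meaning of `sign` for a system: the totality of its
    behavioral tendencies relative to the given contexts."""
    return {ctx: disposition(sign, ctx) for ctx in contexts}

def same_meaning(disposition: Callable[[str, Context], str],
                 s1: str, s2: str,
                 contexts: Iterable[Context]) -> bool:
    """S1 and S2 have the same meaning for the system iff it would
    behave the same way in every context when aware of either sign."""
    return all(disposition(s1, c) == disposition(s2, c) for c in contexts)

# Toy system: it treats 'smoke' and 'fire' alike only in contexts
# where it believes that smoke indicates fire.
def toy_disposition(sign: str, ctx: Context) -> str:
    beliefs = dict(ctx)
    if sign == "fire" or (sign == "smoke" and beliefs.get("smoke-means-fire")):
        return "flee"
    return "ignore"

contexts = [frozenset({("smoke-means-fire", True)}),
            frozenset({("smoke-means-fire", False)})]
print(same_meaning(toy_disposition, "smoke", "fire", contexts))  # False
```

Note that, in the spirit of paragraph 9, the contexts may be infinitely varied; any finite enumeration like the one above can only approximate the full dispositional totality, which is why the check delivers evidence rather than proof of sameness of meaning.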

10. This reflection suggests a crucial point. A system that passed the TT would have shown itself to be as capable as a human being with respect to symbolic manipulation, but not with respect to robotic behavior. A system that passed the TTT would have shown itself to be as capable as a human being with respect to symbolic manipulation and robotic behavior, but not with respect to mental powers. Any conclusion about the presence or the absence of mental powers presupposes a theory about the nature of the mind, which the TT and the TTT both require and the semiotic conception provides.

11. The introduction of technological innovations such as CT scans, X-rays and the like, therefore, can provide the foundation for even stronger tests of intellectual (mental, cognitive) similarity than Harnad recommends. Such tests would compare different systems not only with respect to their capacity for symbolic manipulation, as in the case of the TT, or their capacity for symbolic manipulation and robotic behavior, as in the case of the TTT, but also with respect to their modes of internal processing. This could yield further evidence of the cognitive similarity of these systems, which in turn might support even stronger inferences and conclusions about their mental powers.

12. One of the benefits of a dispositional approach of this kind is that it can explain false beliefs and unsuccessful actions taken in the world as a function of the objects and properties that actually exist in the world, relative to our beliefs about it (Fetzer, 1990). Another is that it supplies a framework for understanding the manner in which connectionism and cognition may be more successfully related (Fetzer, 1991). And a third is that it clearly defines the limitations of various influential but inadequate arguments that have been advanced by Fodor and Pylyshyn (Fetzer, 1992). The TTT seems to be a valuable contribution, but it is not the last word.

REFERENCES

Fetzer, J. H. (1988) Signs and Minds: An Introduction to the Theory of Semiotic Systems. In: J. H. Fetzer (ed). Aspects of Artificial Intelligence, Kluwer Academic Publishers.

Fetzer, J. H. (1989) "Language and Mentality: Computational, Representational, and Dispositional Conceptions". Behaviorism 17: 1-39.

Fetzer, J. H. (1990) Artificial Intelligence: Its Scope and Limits. Kluwer Academic Publishers.

Fetzer, J. H. (1991) Philosophy and Cognitive Science. Paragon House Publishers.

Fetzer, J. H. (1992) "Connectionism and Cognition: Why Fodor and Pylyshyn are Wrong". In: A. Clark and R. Lutz (eds). Connectionism in Context, Springer-Verlag.

Fetzer, J. H. (1993) Philosophy of Science. Paragon House Publishers.

Fetzer, J. H. (1995) Minds and Machines: Behaviorism, Dualism, and Beyond. Stanford Humanities Review 4: 251-265.

Fetzer, J. H. (2001) Computers and Cognition: Why Minds are Not Machines. Kluwer Academic Publishers.

Harnad, S. (1990) "The Symbol Grounding Problem". Physica D 42: 335-346. http://cogprints.soton.ac.uk/documents/disk0/00/00/06/15/index.html

Harnad, S. (1991) "Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem". Minds and Machines 1: 43-54. http://cogprints.soton.ac.uk/documents/disk0/00/00/15/78/index.html

Harnad, S. (2001) Grounding symbols in the analog world with neural nets - A hybrid model. PSYCOLOQUY 12(034) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.034
