Selmer Bringsjord (2001) People are Infinitary Symbol Systems; no Sensorimotor Necessary. Psycoloquy: 12(038) Symbolism Connectionism (5)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

PEOPLE ARE INFINITARY SYMBOL SYSTEMS; NO SENSORIMOTOR NECESSARY
Commentary on Harnad on Symbolism-Connectionism

Selmer Bringsjord
Dept. of Philosophy
Dept. of Comp. Sci.
Rensselaer Polytechnic Institute
Troy NY 12180, USA

selmer@rpi.edu, selmer@rpitsmts

Abstract

Harnad (2001) is right that Searle's Chinese Room Argument shoots down the Turing Test. He is wrong that CRA doesn't shoot down the 'Total' TT as well -- it does. (He is also wrong about what people, at bottom, are. People, by my lights, are super-computational, and in principle they don't need sensorimotor capacities.)

    REPRINT OF: Bringsjord, S. (1993). People are infinitary symbol
    systems; No sensorimotor necessary. Think 2: 12-78 (Special Issue on
    "Connectionism versus Symbolism" D.M.W. Powers & P.A. Flach,
    eds.). http://cwis.kub.nl/~fdl/research/ti/docs/think/2-1/index.stm

1. Stevan Harnad and I seem to be thinking about many of the same issues. Sometimes we agree, sometimes we don't; but I always find his reasoning refreshing, his positions sensible, and the problems with which he's concerned to be of central importance to cognitive science. His 'Grounding Symbols in the Analog World with Neural Nets' (= GS) (Harnad 2001) is no exception. And GS not only exemplifies Harnad's virtues, it also provides a springboard for diving into Harnad-Bringsjord terrain:

2. The Harnad-Bringsjord agreement looks like this:

    A1 Harnad claims in GS that 'computationalism' is refuted by
    Searle's 'Chinese Room Argument.' I think he's right; in fact, in
    my 'What Robots Can and Can't Be' (Bringsjord, 1992) I prove this
    doctrine false with the help of, among others, Jonah, a mono savant
    with a gift for flawlessly visualizing every aspect of Turing
    machine computation -- and my proof disarms both the multiple
    person rebuttal (Cole, 1990; Dyer, 1990), and the pontifical
    complaint that human implementation isn't real implementation
    (Hayes, 1992).

    A2 Harnad claims and in part tries to show in GS that
    computationalism and connectionism can be profitably distinguished.
    I think he's right; in fact, in (Bringsjord, 1991a) I formally
    distinguish these camps.

    A3 Harnad claims in GS that connectionism is refuted by his
    Searlean 'Three Room Argument.' Again, I think he's right: my
    Searlean argument (Bringsjord, 1992) successfully targets
    computationalism and connectionism.

    A4 Harnad claims in GS that the symbol grounding problem,
    essentially the problem of how a candidate AI can have
    intentionality (= genuine beliefs about objects in the world
    external to it), is a very serious problem. I heartily agree. I
    discussed one version of the symbol grounding problem in my
    dissertation. And thanks to Harnad's recent seminal work on the
    subject I'm currently burning more than a few grey cells again
    pondering the problem.

3. That's what we agree on. On the other hand, the Harnad-Bringsjord clash looks like this:

    C1 Contra Harnad, I think connectionism and logicism can be
    conflated for formal reasons (pertaining to the equivalence of
    neural nets and cellular automata, and the fact that there is an
    as-precise-as-you-like discrete mathematical representation of any
    analog computation), which makes the supposed clash between them a
    red herring [the conflation is achieved in Bringsjord, 1991a].
    Since Harnad's hybridism presupposes the reality of the clash, his
    doctrine is apparently a non-starter.

    C2 The heart of Harnad's GS is his claim that TTT survives what TT
    couldn't, and that the symbol grounding problem can be solved for a
    candidate AI by insisting that it be a TTT-passer. I think that
    while TTT survives Searle, it (and other tests in the same spirit)
    succumbs to other thought-experiments [a defence of this view is in
    Bringsjord (1994)]. And I'm inclined to believe that no candidate
    AI, perhaps nothing physical, will ever have intentionality (which,
    yes, given that we have intentionality, does imply that I'm at
    least agnostic on the truth or falsity of substance dualism, the
    doctrine that human agents are incorporeal).

    C3 Harnad (hastily) rejects in GS the idea that we could in
    principle survive the complete loss of transduction (the loss of
    limbs, sensory surfaces, neurological motor analogs, ...) and
    become 'brains in vats.' I think it's easy to imagine existing in a
    cerebration-filled but transduction-empty state, and that such
    thought-experiments establish not only the logical possibility of
    such existence, but the physical possibility [in which case
    sensorimotor capacity is superfluous for an AI-building project;
    see Bringsjord & Zenzen (1991)].

    C4 Harnad ends his paper with a large disjunction meant to capture
    'the possible ways his proposal' -- hybridism -- 'could be wrong.'
    The disjunction isn't exhaustive. My own position fails to appear,
    but perhaps comes closest to the Chomskyian view (Chomsky, 1980).
    In my opinion, people are probably symbol systems able in principle
    to get along just dandy without sensorimotor capacity [= (C3)];
    moreover, they're 'infinitary' symbol systems of a sort beyond the
    power of a Turing machine to handle. [My specification and defence
    of this position can be found in Bringsjord (1993) and Bringsjord &
    Zenzen (forthcoming).]

4. That, then, is what Harnad-Bringsjord terrain looks like. The topography seems interesting enough, but -- who's right, who's wrong, and are they ever both right or both wrong? Isn't that the question? We haven't sufficient space to take informed positions on all (Ai) and (Ci) -- but I will endeavour to substantiate a significant part of (C2), since this issue falls right at the heart of Harnad's GS.

5. As is well known, Turing (1964) holds that if a candidate AI can pass TT, then it is to be declared a conscious agent. His position is apparently summed up by the bold proposition that

    TT-P If x passes TT, then x is conscious. 

[Turing Harnadishly said -- in my opinion incorrectly -- that the alternative to TT-P was solipsism, the view that one can be sure only that oneself has a mind. See Turing's discussion of Jefferson's 'Argument from Consciousness' in Turing (1964).] Is TT-P tenable? Apparently not, not only because of Searle, but because of my much more direct 'argument from serendipity' (Bringsjord, 1994): It seems obvious that there is a non-vanishing probability that a computer program P incorporating a large but elementary sentence generator could fool an as-clever-as-you-like human judge within whatever parameters are selected for a running of TT. I agree, of course, that it's wildly improbable that P would fool the judge -- but it is possible. And since such a 'lucky' case is one in which TT-P's antecedent is true while its consequent is apparently false, we have a counter-example.
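
Schematically, and only as a rough regimentation (the full formal treatment is in Bringsjord, 1994; the notation here is merely illustrative), the situation is this:

    \[
    \textsf{TT-P}:\quad \forall x\,\bigl(\mathit{PassesTT}(x) \rightarrow \mathit{Conscious}(x)\bigr)
    \]
    \[
    \textsf{Serendipity}:\quad \Diamond\,\exists p\,\bigl(\mathit{PassesTT}(p) \wedge \neg\mathit{Conscious}(p)\bigr)
    \]

The 'lucky' sentence-generating program P is simply a witness for the possibility claim; so TT-P, on any reading stronger than a merely material, actual-world conditional, is counter-exampled.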

6. This sort of argument, even when spelled out in formal glory, and even when adapted to target different formal renditions of Turing's conditional [all of which is carried out in (Bringsjord, 1994)], isn't likely to impress Harnad. For he thinks Turing's conditional ought to be the more circumspect 'none the wiser'

    TT-P' If a candidate passes TT we are no more (or less) justified
    in denying that it has a mind than we are in the case of real
    people.

Hence, TTT's corresponding conditional, which encapsulates GS' heart of hearts, would for Harnad read

    TTT-P If a candidate passes TTT we are no more (or less) justified
    in denying that it has a mind than we are in the case of real
    people.

Unfortunately, this conditional is ambiguous between a proposition concerning a verdict on two TTT-passers, one robotic, one human, and a proposition concerning a verdict on a TTT-passer matched against a verdict on a human person in ordinary circumstances. The two construals, resp., are:

    TTT-P1 If h, a human person, and r, a robot, both pass TTT, then
    our verdict as to whether or not h and r are conscious must be the
    same in both cases.

    TTT-P2 If a robot r passes TTT, then we are no more (or less)
    justified in denying that r is conscious than we are justified in
    denying that h, a human, observed in ordinary circumstances, is
    conscious.
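
Put semi-formally, and again only as a rough gloss (the notation is illustrative), with J(φ) for the degree to which we are justified in believing φ on TTT-internal evidence, and J_ord(φ) for the degree to which we are justified in believing φ on ordinary, extra-TTT evidence:

    \[
    \textsf{TTT-P1}:\quad \bigl(\mathit{PassesTTT}(h) \wedge \mathit{PassesTTT}(r)\bigr) \rightarrow \bigl(J(\neg\mathit{Conscious}(h)) \approx J(\neg\mathit{Conscious}(r))\bigr)
    \]
    \[
    \textsf{TTT-P2}:\quad \mathit{PassesTTT}(r) \rightarrow \bigl(J(\neg\mathit{Conscious}(r)) \approx J_{\mathit{ord}}(\neg\mathit{Conscious}(h))\bigr)
    \]

The ambiguity is just this: TTT-P1 compares two verdicts rendered from inside TTT, while TTT-P2 compares a TTT-based verdict on r with an ordinary-evidence verdict on h.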

7. But these propositions are problematic:

First, it must be conceded that both conditionals are unacceptable if understood to be English renditions of formulae in standard first- order logic -- because both would then be vacuously true. After all, both antecedents are false, since there just aren't any robotic TTT-passers around (the domain of quantification, in the standard first-order case, includes, at most, that which exists); and the falsity of an antecedent in a material conditional guarantees vacuous truth for the conditional itself. The other horn of the dilemma is that once these propositions are formalized with help from a more sophisticated logic, it should be possible to counter-example them with armchair thought-experiments [like that upon which my argument from serendipity is based -- an argument aimed at a construal of TT-P that's stronger than a material conditional]. Harnad is likely to insist that such propositions are perfectly meaningful, and perfectly evaluable, in the absence of such formalization. The two of us will quickly reach a methodological impasse here.
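
To make the first horn explicit (a sketch only, assuming the standard actualist reading of first-order quantifiers): each conditional has, at bottom, the form

    \[
    \forall x\,\bigl(\mathit{Robot}(x) \wedge \mathit{PassesTTT}(x) \rightarrow \Psi(x)\bigr)
    \]

and since nothing in the actual domain satisfies the antecedent, the formula is true no matter what \(\Psi\) says; it carries no information. The contentful readings are the strengthened ones -- e.g. a necessitated conditional \(\Box\forall x(\ldots)\) evaluated across merely possible, thought-experimental cases -- and those are exactly the readings that armchair counter-examples (like the serendipity case against TT-P) can target.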

8. But -- there is a second problem with TTT-P1: Anyone disinclined to embrace Harnad/Turing testing would promptly ask, with respect to TTT-P1, whether the verdict is to be based solely on behavior performed in TTT. If so, someone disenchanted with this proposition at the outset would simply deliver a verdict of 'No' in the case of both h and r -- for h, so the view here goes, could be regarded as conscious for reasons not captured in TTT. In fact, these reasons are enough to derail not only TTT-P1, but TTT-P2 as well, as will now be shown.

9. TTT-P2 is probably what Harnad means to champion. But what is meant by the phrase 'ordinary circumstances,' over and above 'outside the confines of TTT'? Surely the phrase covers laic reasons for thinking that other human persons are conscious, or have minds. Now, what laic reasons do I have for thinking that my wife has a mind? Many of these reasons are based on my observation that her physiognomy is a human one, on my justified belief that her sensory apparatus (eyes, ears, etc.), and even her brain, are quite similar to mine. But such reasons -- and these are darn good reasons for thinking that my spouse has a mind -- are not accessible from within TTT, since, to put it another way, if I put my wife in TTT I'll be restricted to verifying that her sensorimotor behavior matches my own. The very meaning of the test rules out emphasis on (say) the neurophysiological properties shared by Selmer and Elizabeth Bringsjord. The upshot of this is that we have found a counter-example to TTT-P2 after all: we are more justified in denying that TTT-passing r is conscious than we are in denying that Elizabeth is. And as TTT-P2 goes, so goes the entire sensorimotor proposal that is GS.

10. In response to my argument Harnad may flirt with supplanting TTT with TTTT, the latter a test in which a passer must be neurophysiologically similar to humans [see Harnad's excellent discussion of TT, TTT, TTTT (1991)]. Put barbarically for lack of space, the problem with this move is that it gives rise to yet another dilemma: On the one hand, if a 'neuro-match' is to be very close, TTTT flies in the face of functionalism, the view that mentality can arise in substrates quite different from our own carbon-based one; and functionalism is part of the very cornerstone of AI and Cognitive Science. On the other hand, if a 'neuro-match' is relaxed so that it need only be at the level of information, so that robotic and human 'brains' match when they embody the same program, then in any attempt to administer TTTT we face what may well be an insurmountable mathematical hurdle: it's in general an uncomputable problem to decide, when given two finite argument-value lists, whether the underlying functions are the same.
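
To spell the hurdle out a little (a sketch only; the results appealed to are standard ones from computability theory): if the 'informational' match is cashed out as sameness of the function computed, then even with the complete programs in hand the equivalence problem

    \[
    \mathit{EQ} \;=\; \{\,\langle P, Q \rangle : \varphi_P = \varphi_Q\,\}
    \]

(where \(\varphi_P\) is the function computed by program P) is undecidable, by Rice's theorem. And if all one has are finite argument-value lists sampled from robot and human, matters are worse still: any finite list is consistent with infinitely many distinct underlying functions, so no finite battery of TTTT probes can settle whether the two 'brains' embody the same program.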

REFERENCES

Bringsjord, S. and Zenzen, M. (1991) `In Defense of Hyper-Logicist AI,' In: IJCAI-91, Morgan Kaufmann Publishers, Mountain View, CA, pp. 1066--1072.

Bringsjord, S. (1991a) `Is the Connectionist-Logicist Clash One of AI's Wonderful Red Herrings?' In: Journal of Experimental and Theoretical Artificial Intelligence 3.4: 319--349.

Bringsjord, S. (1992) What Robots Can and Can't Be (Dordrecht, The Netherlands: Kluwer), ISBN 0-7923-1662-2.

Bringsjord, S. (1993) `Toward Non-Algorithmic AI,' in Ryan, K. T. and Sutcliffe, R. F. E., eds., Proceedings of AICS '92: The Fifth Irish Conference on AI and Cognitive Science, University of Limerick, September 10--12 (New York, NY: Springer-Verlag).

Bringsjord, S. (1994) `Could, How Could We Tell If, and Why Should -- Androids Have Inner Lives,' in Ford, K. and Glymour, C., eds., Android Epistemology (Greenwich, CT: JAI Press).

Bringsjord, S. and Zenzen, M. (forthcoming) SuperMinds: A Defense of Uncomputable Cognition (Dordrecht, The Netherlands: Kluwer).

Chomsky, N. (1980) `Rules and Representations,' In: Behavioral and Brain Sciences 3: 1--61

Dyer, M. G. (1990) `Intentionality and Computationalism: Minds, Machines, Searle and Harnad,' In: Journal of Experimental and Theoretical Artificial Intelligence 2.4. pp. 303--319.

Harnad, S. (1991) `Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem,' In: Minds and Machines 1: 43--54. http://cogprints.soton.ac.uk/documents/disk0/00/00/15/78/index.html

Harnad, S. (2001) Grounding symbols in the analog world with neural nets -- A hybrid model. PSYCOLOQUY 12(034) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.034

Hayes, P., Harnad, S., Perlis, D. and Block, N. (1992) `Virtual Symposium on the Virtual Mind', In: Minds and Machines 2: 217--238. http://cogprints.soton.ac.uk/documents/disk0/00/00/15/85/index.html

Turing, A. M. (1964) `Computing Machinery and Intelligence'. Pages 4--30 of: Anderson, A. R. (ed), Minds and Machines, Contemporary Perspectives in Philosophy Series. Englewood Cliffs, NJ: Prentice Hall. http://cogprints.soton.ac.uk/documents/disk0/00/00/04/99/index.html

