Selmer Bringsjord (1996) Artificial Intelligence and the Cyberiad Test. Psycoloquy: 7(30) Robot Consciousness (18)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

ARTIFICIAL INTELLIGENCE AND THE CYBERIAD TEST
Reply to Barresi on Robot-Consciousness

Selmer Bringsjord
Dept. of Philosophy, Psychology & Cognitive Science
Department of Computer Science
Rensselaer Polytechnic Institute
Troy NY 12180 (USA)

selmer@rpi.edu    http://www.rpi.edu/~brings

Abstract

Barresi (1995) accurately distills the overall position on the mind defended in (Bringsjord, 1992), but his objection to this defense is based upon a non sequitur. Moreover, his proposed replacement for the Turing Test, the "Cyberiad Test," though certainly intriguing, suffers from the same fatal flaw that infects Turing's original test: viz., it is entirely possible (perhaps even probable) that the Cyberiad Test could be passed with flying colors by mere zombies.

Keywords

behaviorism, Chinese Room Argument, cognition, consciousness, finite automata, free will, functionalism, introspection, mind, story generation, Turing machines, Turing Test.

I. BARRESI'S CRITIQUE

1. Barresi writes:

    "I think [Bringsjord] believes that computable solutions will be
    found that come close enough behaviorally to pass the Turing test
    sequence without actually generating the behavior in the same
    manner as persons. Thus, while Bringsjord believes that persons
    perform mental operations that are not computable by Turing
    machines, he also believes that expert systems might be constructed
    using Turing-computable algorithms that can sufficiently
    approximate the behavioral consequences of these mental operations
    to pass most if not all of the Turing test sequence." (Barresi, 1995)

This is basically right. Just to set the record completely straight, my theory of mind as it relates to Artificial Intelligence (AI) and Cognitive Science can be distilled to the following seven propositions.

    (1) All claims of the form "if x passes test T, then x is
    conscious" are false, where T can be set to any test in what I have
    called (Bringsjord, 1995b) the "Turing Test sequence," which
    includes Turing's original test, Harnad's (1991) "Total Turing
    Test," and so on.

    (2) Computationalists will succeed in building T-passing artifacts,
    for at least nearly all T in the sequence.

    (3) Computationalists will succeed in building T-passing zombies,
    for at least nearly all T in the sequence (where "zombie" is used
    here in the philosopher's sense of the term, denoting a being
    whose overt behavior is indistinguishable from that of a human
    person, but who lacks subjective awareness).

    (4) Church's Thesis is false.

    (5) Part of what personhood entails (e.g., qualia) cannot be
    reduced to any information-processing scheme.

    (6) Persons are not to be identified with Turing machines or any
    equivalent computational scheme (an identification endorsed by
    classical "Strong" AI), but they have (internal) Turing machines at
    their disposal (for problem solving and reasoning).

    (7) Persons also have (internal) "super"-Turing machines (machines
    capable of solving Turing-unsolvable problems) at their disposal
    (for problem solving and reasoning).

2. Bringsjord (1992, henceforth ROBOTS) is a defense of propositions (2) and (5). (The other propositions have been defended elsewhere, and those writings are unified and enhanced in Bringsjord & Zenzen (forthcoming), the sequel to ROBOTS.) ROBOTS, as Barresi kindly says, "raises the level of discussion from the ad hominem debates that typically occur between AI optimists and pessimists, to one which can focus on particular arguments, their premises and conclusions" -- but do my arguments advance the debates in question? Do I deliver a compelling case for (2) and (5)? Barresi doesn't think so. He claims that my strategy in ROBOTS for establishing proposition (2) "cannot contribute usefully to the Person Building Project (PBP) or to the evaluation of its possibility" for two reasons. I consider here only the first, which runs as follows.

    "If PBP is to be taken seriously at all, then we are seeking to
    develop a precise definition of person, and our current intuitions
    of person capacities based on direct knowledge of ourselves and
    other persons must be viewed as revisable. Hence, there can be no
    entailments of our intuitive understanding of persons that are not
    defeasible. The debate over the relationship between free will and
    determinism should show us that there are no necessary entailments
    of our intuitive interpretation of free will that can be used to
    prove that robots cannot be persons." (Barresi, 1995)

3. Unfortunately, there are several problems infecting this reasoning; I discuss two. First, Barresi's reasoning obviously includes an inference from "debate D has been inconclusive on whether or not P implies Q" to "P does not imply Q." But this inference is a non sequitur, as is revealed by effortless counter-examples: Debate about whether there is life on other planets has to this point been inconclusive, but it hardly follows that extraterrestrial life is nonexistent. Likewise: Debate over whether Goldbach's Conjecture is implied by the axioms of number theory is to this point still active, but it doesn't follow from this that the implication isn't there.
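Put schematically (my formalization of the move; P and Q are placeholders for the relevant propositions), Barresi's inference has the form

    $$\frac{\text{debate over whether } P \rightarrow Q \text{ has been inconclusive}}{\therefore\ \neg (P \rightarrow Q)}$$

and each counter-example above supplies a true premise paired with a conclusion that plainly does not follow from it.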

4. Second, some of Barresi's premises are false as well. For example, is it really true that, as he says, our intuitive understanding of personhood fails to entail any non-defeasible propositions? I cheerfully pontificate, on the basis of mere intuition, that being a person entails having a capacity to think. Now, is the proposition that personhood entails such a capacity really defeasible? What could possibly defeat the proposition that persons have the capacity to think? (Suppose some candidate defeater R comes along, and Barresi claims that R gives us reason to doubt that persons have the capacity to think. Barresi has certainly done some thinking about R, no?)

II. THE CYBERIAD TEST

5. In the Cyberiad Test (CT), nature, not a human, is the judge. A robot (computer, android, etc.) passes CT if it is part of a "society" of self-replicating robots able to sustain itself as a species over time. (CT gets its name from Stanislaw Lem's (1976) The Cyberiad, in which our galaxy is populated by evolving inorganic machines, but devoid of humanity.) And presumably the corresponding Barresi-championed thesis is something like

    (8) If x passes CT, then x is conscious.
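
Before assessing (8), it may help to make the operational content of CT concrete. Below is a minimal sketch, in Python, of a "society" that passes CT while consisting of nothing but blind rule-followers; every parameter in it is invented purely for illustration.

    import random

    # Illustrative parameters only: rates and capacity are invented.
    REPLICATION_RATE = 0.3  # chance a robot copies itself in a cycle
    FAILURE_RATE = 0.2      # chance a robot breaks down in a cycle
    CAPACITY = 100          # environmental carrying capacity

    def society_survives(initial_size=10, cycles=1000):
        """Return True if the robot 'species' sustains itself."""
        population = initial_size
        for _ in range(cycles):
            born = sum(random.random() < REPLICATION_RATE
                       for _ in range(population))
            died = sum(random.random() < FAILURE_RATE
                       for _ in range(population))
            population = min(population + born - died, CAPACITY)
            if population <= 0:
                return False  # extinction: CT failed
        return True  # the species persisted: CT passed

    print(society_survives())  # almost always True

Nothing in this loop so much as gestures at awareness; whether the species persists is a brute statistical matter of the replication rate exceeding the failure rate.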

6. The denial of (8) is entailed by my affirmation of (1) from above. So it's clear that I reject CT. But why do I? My rationale is but a trivial adaptation of my reasons (Bringsjord, 1995a, 1995b) for believing that zombies can pass the likes of the Turing Test (TT) and Harnad's Total Turing Test (TTT). In arguing that zombies are possible, I have pointed out that there is certainly some set of utterances, and some set of bodily movements, which preclude any inference to the proposition that the bearers of these utterances and movements are not conscious. I have gone on to point out that it's entirely possible for these sets to be actualized as a result of mere mindless serendipity. For example, in the linguistic sphere, why couldn't a randomized primitive sentence generator get lucky and produce strings coinciding perfectly with the sort of language we usually take to be indicative of underlying consciousness? Similarly, why couldn't a primitive randomized program controlling the effectors of a robot get lucky and produce the sort of behavior (e.g., playing billiards) we take to be indicative of (and perhaps essential for the development of) underlying consciousness? These rhetorical questions are replaced with careful argumentation in Bringsjord (1995a).
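
The point about serendipity can be made computationally vivid. In the following Python sketch (the vocabulary and target sentence are invented for illustration), a purely random generator assigns a small but strictly positive probability to any finite string over its vocabulary, including strings we would ordinarily read as signs of an inner life.

    import random

    VOCAB = ["I", "feel", "a", "strange", "sadness", "today"]

    def random_sentence(length):
        """Emit `length` words chosen uniformly at random."""
        return " ".join(random.choice(VOCAB) for _ in range(length))

    target = "I feel a strange sadness today"
    # Chance of hitting the target in a single six-word draw:
    p = (1 / len(VOCAB)) ** len(target.split())
    print(f"{p:.2e}")  # about 2.14e-05: tiny, but not zero

Run long enough, such a generator produces the target with probability approaching 1 -- and at no point does anything resembling consciousness enter the picture.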

7. Now, in order to provide a counter-example to (8), that is, to find a scenario wherein this proposition's antecedent is true while its consequent is false, we have only to imagine that some "empty" computational artifact serendipitously exhibits behaviors satisfying not a human judge, but nature. (And by the way, the well-known proofs that self-reproducing automata are possible in no way attribute consciousness to any of these machines.) And since, if one such artifact can be imagined, a group of such mindless creatures can be imagined as well, we can easily extend our thought-experiment to one in which Barresi's (8) is refuted.
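
That parenthetical deserves a concrete footnote: a quine -- a program whose output is its own source code -- is about the simplest self-reproducing automaton there is, and nobody is tempted to ascribe consciousness to one. A standard minimal example in Python:

    # Comment line aside, this two-line program prints its own source.
    s = 's = %r\nprint(s %% s)'
    print(s % s)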

8. Someone may object: "Well, okay, it's true that it's possible for a group of monkeys typing away to play the role of a pen-pal, but surely it is improbable that these letters I've been receiving from 'John' are but the product of monkey serendipity." I have rebutted this objection in my recent commentary on Watt (1996), and in Bringsjord (1995a). In a word, the rebuttal is that though monkey sleight of hand is indeed improbable, it is probable that shallow software tricks will arrive soon enough which generate text encouraging us to assume some consciousness at its source -- and who is to say now that it is improbable that CT will be passed by machines running a bunch of tricks devised by clever humans?
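
To see how little machinery such shallow tricks require, consider a sketch in the spirit of Weizenbaum's ELIZA; the patterns and replies below are invented for illustration.

    import re

    # Keyword-triggered templates: no understanding anywhere in the
    # pipeline, yet the replies read as if someone were listening.
    RULES = [
        (r"\bI feel (.+)", "Why do you feel {0}?"),
        (r"\bmy (\w+)", "Tell me more about your {0}."),
        (r".*", "Please go on."),
    ]

    def reply(utterance):
        for pattern, template in RULES:
            match = re.search(pattern, utterance)
            if match:
                return template.format(*match.groups())

    print(reply("I feel lonely tonight"))
    # -> Why do you feel lonely tonight?

Multiply the rule set by a few orders of magnitude and the objector's 'John' starts to look like an engineering project, not a miracle.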

9. Alert readers will have noted that my claim, made above, that my affirmation of (1) entails my rejection of Barresi's (8) is, strictly speaking, a non sequitur of my own. The entailment requires the additional premise that CT be in the Turing Test sequence. Is it? Yes. Though CT is derived from an impressive literary source, and is fun to ponder, the fact of the matter is that the test is subsumed by TTT. In order to see this, imagine that we isolate a robot capable of self-replication -- perhaps on another planet P. Now, instead of constantly monitoring this robot, we check it only periodically, where the intervals of time during which it is unobserved are allowed to be very long. If, upon returning to P at some point, a human observer finds that a society of robots has evolved, he can declare that TTT has been passed to the point of satisfying the likes of Barresi.

REFERENCES

Barresi, J. (1995) Building Persons: Some Rules For The Game. PSYCOLOQUY 6(12) robot-consciousness.7.barresi.

Bringsjord, S. & Zenzen, M. (forthcoming) In Defense of Uncomputable Cognition. Dordrecht, The Netherlands: Kluwer.

Bringsjord, S. (1995a) Could, How Could We Tell If, and Why Should -- Androids Have Inner Lives. In: K. Ford, C. Glymour & P. Hayes (eds.), Android Epistemology. Cambridge, MA: MIT Press, pp. 93-122.

Bringsjord, S. (1995b) In Defense of Impenetrable Zombies, Journal of Consciousness Studies 2.4: 348-351.

Bringsjord, S. (1994) Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Bringsjord, S. (1992) What Robots Can and Can't Be. Dordrecht, The Netherlands: Kluwer.

Harnad, S. (1991) Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem, Minds and Machines 1.1: 43-55.

Lem, S. (1976) The Cyberiad: Fables for the Cybernetic Age, trans. M. Kandel. New York, NY: Avon Books.

Watt, S. (1996) Naive Psychology and the Inverted Turing Test. PSYCOLOQUY 7(14) turing-test.1.watt.

