Selmer Bringsjord (1995) Why Didn't Evolution Produce Turing Test-passing Zombies? Psycoloquy 6(19) Robot Consciousness (12)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

WHY DIDN'T EVOLUTION PRODUCE TURING TEST-PASSING ZOMBIES?
Reply to Tirassa on Robot-Consciousness

Selmer Bringsjord
Dept. of Philosophy, Psychology & Cognitive Science
Department of Computer Science
Rensselaer Polytechnic Institute
Troy NY 12180 (USA)

selmer@rpi.edu

Abstract

Tirassa agrees that we can now safely conclude, on the strength of arguments like those given by me (Bringsjord, 1992, 1994a) and by Searle (1992), that computational systems cannot be conscious. He then takes us to what I see as the heart of the matter by asking: Given that consciousness is more than computation, what exactly is the relationship between consciousness and computation in human and artificial agents? Put starkly, why didn't evolution produce zombies -- creatures having our behavioral capacity (which can presumably be reached by a mere computational system) but lacking consciousness? I make four points concerning Tirassa's eloquent treatment of this fundamental and fascinating question.

Keywords

behaviorism, Chinese Room Argument, cognition, consciousness, finite automata, free will, functionalism, introspection, mind, story generation, Turing machines, Turing Test.

I. THE BIG QUESTION

1. Tirassa (1994) does an admirable job of encapsulating our starting point: He says that AI can be understood as the attempt to build a computational system (with suitable sensors and effectors) which is, or constitutes, a person. In keeping with the terminology I employ in What Robots Can and Can't Be, Tirassa dubs such creatures robots. He then goes on to affirm that

    We may (in principle) build robots of any kind; but whatever their
    architecture, behavior, etc., may be, they can never be conscious.
    This ... has been argued by, among others, Bringsjord (1992) and
    Searle (1992). Thus, there is no reason to expect artificial
    consciousness, unless we can duplicate the relevant properties of
    biological nervous tissue (connectionism, of course, offers no
    solution in this respect, since artificial neural nets are exactly
    as neural as so-called classical computational systems are).

2. In light of this affirmation, a question naturally arises, viz.: Does consciousness make any difference to how the relevant computational systems -- those composing robots (R-computation) or those included in human cognition (H-computation) -- work? This is the question with which Tirassa grapples.

3. Tirassa says that there are three possible answers:

    A1  Consciousness has nothing whatsoever to do with R- and
        H-computation.

    A2a Consciousness has something to do with R- and H-computation;
        indeed, consciousness is a necessary feature of such
        computation.

    A2b Consciousness has something to do with R- and H-computation;
        however, consciousness is but a contingent feature of such
        computation.

II. THE FIRST ANSWER--A1--IS UNTENABLE

4. Tirassa begins by pointing out that A1, though it appears to be entailed by much work in cognitive science, is unpalatable because of "evolutionary concerns." The reasoning is that since under A1 consciousness is irrelevant to H-computation,

    It follows that either consciousness does not exist (which, as I
    can personally guarantee, is not the case), or it is superfluous.
    In the latter case, biological minds must be something more than
    [H-computation], because they exhibit a property which is completely
    unrelated to [computation]. But then, under what selective pressure
    might such a property have evolved?

5. Tirassa's reasoning, at bottom, is apparently the following reductio ad absurdum:

    1. A1. [Assumption for contradiction]

    2. If phenomenon x is enjoyed by humans, then it is the product of
       the selective pressure which drives evolution.

    3. If A1, then human consciousness isn't the product of the
       selective pressure which drives evolution.

    4. Human consciousness isn't the product of the selective pressure
       which drives evolution. [From 1, 3]

    5. Humans are conscious.

    6. Human consciousness is the product of the selective pressure 
       which drives evolution. [From 2, 5]

    7. A1 is false. [From 4 and 6, which contradict; this discharges
       the assumption at step 1]

6. This is a powerful little argument, but hardly invulnerable. The rationale Tirassa presumably has in mind for premise 3 includes the controversial claim that evolution consists of processes fully circumscribed by computation. Or perhaps he would simply rest his case for premise 3 on the fact that no account of a consciousness-evolution connection is forthcoming. But either way -- and this is the first of my promised four points -- it should be noted that denying premise 2 is certainly not out of the question. (Denying 5, contra eliminativists like Dennett, does seem to me to be out of the question.) One scientist who may fairly be construed as rejecting premise 2 is John Eccles (1989), whose reasoning is reviewed, and taken quite seriously, in Bringsjord (forthcoming). I have no intention of pressing the claim here that premise 2 is vulnerable; I merely set the stage for future dialectic.
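
To make the structure of the argument fully explicit, here is a minimal machine-checkable sketch in Lean 4 (my own formalization, not Tirassa's; the proposition names are hypothetical labels, and premise 2 is instantiated to the single case of consciousness):

    -- A1           : consciousness is irrelevant to R- and H-computation
    -- Conscious    : humans are conscious
    -- EvoConscious : consciousness is a product of selective pressure
    variable (A1 Conscious EvoConscious : Prop)

    theorem tirassa_reductio
        (p2 : Conscious → EvoConscious)  -- premise 2 (instantiated)
        (p3 : A1 → ¬EvoConscious)        -- premise 3
        (p5 : Conscious)                 -- premise 5
        : ¬A1 :=
      -- assume A1; steps 4 and 6 then collide, so A1 is false
      fun h => (p3 h) (p2 p5)

The proof is just the collision of steps 4 and 6: from A1 it follows that consciousness is not a product of selective pressure, while premises 2 and 5 jointly say that it is.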

III. THE SECOND ANSWER--A2a--CAN BE REFINED

7. What about A2a? What about the view that consciousness is a necessary feature of H-computation (and hence of appropriately configured R-computation as well)? On this view, our being conscious, as Tirassa puts it, wouldn't be an "extravagant luxury," but would be "intimately connected to the distinctive behavioral performance exhibited by our species." But as Tirassa points out, "The problem here is that we do not have the slightest idea of what consciousness is for."

8. I'm not sure Tirassa's pessimism should be as strong as it apparently is (this is the second of my four promised points). I argue elsewhere (Bringsjord & Ferrucci, forthcoming; see also Bringsjord, 1994b) that consciousness, specifically the ability to have a point of view and to adopt the point of view of a fictional character, is at the heart of literary creativity, at least at the heart of the sort of creativity involved in producing belletristic fiction. To put it biographically, the claim is that when the great dramatist Henrik Ibsen says

    I have to have the character in mind through and through, I must
    penetrate into the last wrinkle of his soul. I always proceed from
    the individual; the stage setting, the dramatic ensemble, all that
    comes naturally and does not cause me any worry, as soon as I am
    certain of the individual in every aspect of his humanity.  But I
    have to have his exterior in mind also, down to the last button,
    how he stands and walks, how he conducts himself, what his voice
    sounds like.  Then I do not let him go until his fate is
    fulfilled. (Reported in Fjelde, 1965, p. xiv.)

he isn't merely reporting an idiosyncratic method.

9. Of course, the question arises as to what sense of the term "necessary" is assumed in A2a. Tied to my claims about literary creativity, the question becomes: In what sense is consciousness "necessary" for producing belletristic narrative? Here I would point out that there are different types of necessity. For starters, there are the following two: logical necessity (according to which, if p is logically necessary, denying p entails an outright contradiction; that 2+2=4 is logically necessary) and physical necessity (if p is physically necessary, then not-p entails the denial of some law of physics; that no object can travel faster than the speed of light is physically necessary). Clearly, I think, A2a can't be saying that consciousness is logically necessary for H- and (suitably impressive) R-computation -- because it's easy enough to imagine a zombie punching keys on a keyboard in such a fashion as to produce a string which comes to be regarded as belletristic. Just as clearly, A2a can't be invoking physical necessity, because the lucky zombie imagined here violates no law of physics.
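
In standard modal shorthand (a gloss I add here for convenience; the notation is mine, not Tirassa's), writing Phys for the set of physical laws, the two notions come to:

    % logical necessity: denying p yields an outright contradiction
    \Box_L\, p \;\iff\; (\neg p \vdash \bot)

    % physical necessity: denying p contradicts some law of physics
    \Box_P\, p \;\iff\; (\mathrm{Phys} \cup \{\neg p\} \vdash \bot)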

10. However (and this is my third point), perhaps there is another candidate construal of the necessity involved. Perhaps it can be said that consciousness is "probabilistically necessary" for H- and (suitably powerful) R-computation. This would mean, intuitively, that it is very unlikely that such a zombie exists (as surely it is!), and that one way to increase the odds of Hamlet showing up as part of the furniture of the universe is for evolution to bring consciousness on stage. I don't for the life of me know how such a view can be specified and defended, but I think it's worth attempting to do so.
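
As a first pass (my own tentative gloss, not anything Tirassa commits to), let B be the event that a system produces belletristic narrative and C the event that the system is conscious. The probabilistic-necessity claim would then be:

    % consciousness makes belletristic output vastly more likely ...
    \Pr(B \mid C) \;\gg\; \Pr(B \mid \neg C)

    % ... so that, given belletristic output in hand, a zombie
    % author is a freak occurrence
    \Pr(\neg C \mid B) \approx 0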

IV. THE THIRD ANSWER--A2b--IMPLIES MY KIND OF ENGINEERING

11. The third answer to the big question, A2b, recall, is that consciousness is relevant to H- and (sufficiently powerful) R-computation, but only contingently so. (The probabilistic version of A2a may border on A2b, an issue I leave aside.) Tirassa sums up the view:

    We must acknowledge, as a matter of fact, [consciousness'] crucial
    role in the control of our behavior, but we cannot exclude the
    possibility that analogous results may be achieved with different
    methods.

12. Tirassa accurately takes me to be a proponent of this view (he is right that my ROB thesis -- in a nutshell: future robots will excel in ever more stringent Turing Tests -- follows, not coincidentally, from A2b). But though it's a view for which he has some personal sympathy, he says he can "adduce no empirical support in its favor." I haven't the time here to fine-tune and defend the view (I do that in Chapter IV of Bringsjord (1992) and in Bringsjord & Zenzen (forthcoming)); I would like instead to end by remarking (my fourth and final point) on Tirassa's elegant characterization of the view. He says:

    In the [A2b] case, there would be no principled limits to robots'
    possibilities; there might be relationships between AI and
    psychology, though far from those usually conceived. Rather than
    considering the similarities in the computations carried out, as is
    usually recommended (see, e.g., Pylyshyn, 1984), it might be more
    interesting to study issues like architectural requirements for
    efficient control, requirements on initial knowledge, etc. Once we
    have accepted this idea, the quest for Turing-testable robots
    becomes meaningless: Turing-testable behaviors are those of our
    species, and there is no (scientific) point in trying to simulate
    them when we know that computational consciousness is impossible.

13. This is it; this is my view encapsulated; this is the cornerstone of my approach to AI. There is a great engineering challenge in the works. It's going to be hard, very hard, to build such robots; that's clear. I think it will also be a lot of fun to try. But Tirassa, by my lights, is right: there is no scientific point in striving for Turing Test-passing robots, no point at all, because such creatures will always just be "there's-nobody-home-inside" zombies.

REFERENCES

Bringsjord, S. & Ferrucci, D. (forthcoming) Artificial Intelligence and Literary Creativity. Hillsdale, NJ: Lawrence Erlbaum.

Bringsjord, S. & Zenzen, M. (forthcoming) In Defense of Uncomputable Cognition. Dordrecht, The Netherlands.

Bringsjord, S. (forthcoming) Can Consciousness Be Evolutionarily Explained? Review of Eccles' Evolution of the Brain: Creation of the Self. Psyche.

Bringsjord, S. (1992) What Robots Can and Can't Be. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Bringsjord, S. (1994a) Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Bringsjord, S. (1994b) Lady Lovelace Had it Right: Computers Originate Nothing. Behavioral and Brain Sciences, 17:532-33.

Eccles, J.C. (1989) Evolution of the Brain: Creation of the Self. New York, New York: Routledge.

Fjelde, R. (1965) Foreword in Ibsen, H., Four Major Plays. New York, New York: New American Library.

Ibsen, H. (1965) Four Major Plays. New York, New York: New American Library.

Pylyshyn, Z.W. (1984) Computation and Cognition. Cambridge, MA: MIT Press.

Searle, J. (1992) The Rediscovery of the Mind. Cambridge, MA: MIT Press.

Tirassa, M. (1994) Is Consciousness Necessary to High-Level Control Systems? PSYCOLOQUY 5(82) robot-consciousness.2.tirassa.

