Brian Scholl (1994) Intuitions, Agnosticism, and Conscious Robots. Psycoloquy: 5(84) Robot Consciousness (4)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

INTUITIONS, AGNOSTICISM, AND CONSCIOUS ROBOTS
Book review of Bringsjord on Robot-Consciousness

Brian Scholl
Rutgers Center for Cognitive Science
Rutgers University
Piscataway, NJ 08855

scholl@ruccs.rutgers.edu

Abstract

One of the main theses of Bringsjord's "What Robots Can and Can't Be" (1992, 1994) is that cognitive engineers will NEVER be able to build real people (who enjoy consciousness, etc.). I maintain here that each of the six main arguments he uses to support this thesis rests on crucial question-begging intuitions, about which we really ought to remain agnostic.

Keywords

behaviorism, Chinese Room Argument, cognition, consciousness, finite automata, free will, functionalism, introspection, mind, story generation, Turing machines, Turing Test.

I. INTRODUCTION

1. Selmer Bringsjord's book, "What Robots Can and Can't Be" (1992), defends two theses: (A) Cognitive engineers will eventually be able to build robots that can pass all manner of Turing Tests; and (B) Cognitive engineers will (and can) never succeed in building true artificial persons, possessed of conscious selves. That is, "Robots... will DO a lot, but they won't BE a lot" (Bringsjord 1994, par. 1). I propose here to ignore (A) completely, and concentrate on the arguments Bringsjord uses to defend (B), the thesis that the so-called "Person Building Project" is doomed to failure. I maintain that all of his arguments rest on crucial question-begging intuitions or assumptions, about which we really ought to remain agnostic.

2. In the remainder of this introduction, I clarify my thesis and motivation. In Section II, I single out one of Bringsjord's arguments as a case study, and explain in some detail how and why it all comes down to question-begging intuitions. In Section III, I make the analogous points concerning the rest of Bringsjord's arguments, and I conclude in Section IV with some comments on how we ought to think about materialism and dualism in cognitive science.

3. Whenever we think about consciousness, it's extremely important to remember that, whatever metaphysical beliefs we might have, we have absolutely no clue whatsoever how the lump of matter which we call a human being manages to give rise to consciousness and personhood (e.g., see Nagel, 1974). This being the case, we cannot rely on intuitions that a computer modelling a mind on a functional level would not also enjoy such qualities. That is, humans and computers are on a par here, and we really ought to remain agnostic on such issues until we start to figure out that crucial "how".

4. Many anti-functionalist arguments, including nearly all of Bringsjord's, do not respect this idea. Instead, they typically use intuitions which beg the question, along the following lines:

    (1) Ask the burning question: Could suitably organized computers
        have X (where X is consciousness, free will, personhood, etc.)?

    (2) Mess with the details, or focus on a particular, seemingly
        ridiculous instantiation.

    (3) Then say, "But of course computers wouldn't have X in this
        case. Just picture the situation in your mind's eye; it's
        obvious."

    (4) Therefore, the answer to (1) is a resounding "No."
        "AI-Functionalism" is false.

Given that we have no clue how things like consciousness arise, though, is (3) really all that "obvious"? The "suitably organized" clause in (1) usually implies a staggering degree of (a certain type of) complexity. Since we have no direct experience with such degrees or types of complexity, I would argue that we probably shouldn't trust (much less have) intuitions about what sorts of emergent properties might or might not arise out of it. To appeal to such intuitions at this stage simply begs the question. (This point has been made plenty of times before -- see, for example, most of the commentaries following Searle [1980] -- but the pesky, intuition-laden arguments which ignore it have proven nearly impossible to exterminate.) In the next section, I explain in some detail how one of Bringsjord's arguments rests on just this type of intuition.

II. THE ARGUMENT FROM ARBITRARY REALIZATION -- A CASE STUDY

5. Bringsjord characterizes the familiar "arbitrary realization" argument against functionalism in this way: "Now let S denote some arbitrary person, let B denote the brain of S, and set S be in mental state S*, fearing purple unicorns, for example. Now imagine that a Turing machine M, representing exactly the same flow chart as that which governs B, is built out of 4 billion Norwegians all working on railroad tracks in boxcars with chalk and erasers (etc.) across the state of Texas. From this hypothesis [together with the standard functionalist thesis] it follows that there is some agent m constituted by M which also fears purple unicorns. But it seems intuitively obvious that... there is no agent m constituted by M that fears purple unicorns" (pp. 209-210). Therefore functionalism is false, and the "Person Building Project" is doomed to failure.

6. This argument turns on the intuition that it would be unthinkably crazy if those Norwegians (or a Chinese Nation, or a bunch of beer cans, or whatever) constituted an actual conscious, intelligent entity. I agree, of course, that it would be crazy, but no more crazy than the fact that we human beings manage to enjoy consciousness and personhood. After all, the fact that consciousness seems to arise out of a complex organization of neuromush is just as fundamentally mysterious and wonderful as it would be for consciousness to arise out of a complex organization of anything else, including Norwegians! Until we gain some insight into how consciousness is possible AT ALL, we simply can't rule out functionalism as false without begging the question. Indeed, for cognitive science, a computational functionalism would seem to be the only game in town (see, e.g., Pylyshyn, 1984).

7. Now, I concede that Bringsjord's intuition can be compelling, but I think it's compelling for the wrong reason. Why is it, do you suppose, that when you try to imagine Bringsjord's Norwegian scenario, it strikes you that you could never get consciousness out of that? Bringsjord's intuition pump instructs us to imagine 4 billion people, dynamically instantiating a Turing Machine which represents all the complexity of a human brain. I submit that this particular intuition pump works simply because we cannot imagine that type of complexity. Here's the idea: "Which [intuition pumps] should be trusted is a matter to settle by examining them carefully, to see which features of the narrative are doing the work. If the oversimplifications are the source of the intuitions, rather than just devices for suppressing irrelevant complications, we should mistrust the conclusions we are invited to draw" (Dennett, 1981, pp. 459-460). Look: Functionalism (in this context) amounts to the thesis that the essence of personhood is a certain character or pattern of organization, which is sure to be so staggeringly complex that we cannot possibly grasp it in our "mind's eye." As such, Bringsjord has given us an impossible task. The source of the intuition here is the illicit slide from the (impossible) task of imagining a functional description of an entire brain, to the (more manageable) task of imagining a bunch of Norwegians walking around -- which is the best we can do!
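
Since the multiple-realizability thesis is doing all the work here, it may help to see it in miniature. The following Python sketch is purely illustrative (the toy parity machine and both "substrates" are my own inventions, and are trivial next to anything functionalism actually posits for a brain): one and the same abstract transition table is realized in two physically dissimilar ways, yet the input/output behavior is identical.

    # One abstract "flow chart" (a parity machine over {0, 1}),
    # realized over two very different substrates.
    TABLE = {
        ('even', '0'): 'even', ('even', '1'): 'odd',
        ('odd', '0'): 'odd', ('odd', '1'): 'even',
    }

    def dict_substrate(inputs):
        """Realization 1: states as strings, steps via table lookup."""
        state = 'even'
        for symbol in inputs:
            state = TABLE[(state, symbol)]
        return state

    def token_substrate(inputs):
        """Realization 2: two stand-ins (our "Norwegians") pass a token
        back and forth; the physical story differs, but the functional
        organization does not."""
        keepers = ['even', 'odd']
        holder = 0                     # index of the current token-holder
        for symbol in inputs:
            if symbol == '1':
                holder = 1 - holder    # hand the token to the other keeper
        return keepers[holder]

    # Same abstract machine, same behavior, different physical stories:
    assert dict_substrate('10110') == token_substrate('10110') == 'odd'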

III. THE OTHER ARGUMENTS: ALVIN, JONAH, GODEL, FREE WILL, AND INTROSPECTION

8. That's really all I have to say here about Bringsjord's book. What remains is to show how each of his other arguments turns on the same sorts of questionable intuitions. I propose to make relatively short work of this. In each case, I give a quick sketch of the argument, and point out where the intuitions are being appealed to. (In his discussion of these arguments, Bringsjord brings in lots of distinctions and subtleties, which I cannot do justice to here, because of space limitations. Luckily, none of them matter much for my position. The problem, I suggest, lies at the foundation.) My own polemical arguments above apply in each case; I won't bother repeating them in full each time.

9. ALVIN. Bringsjord's first stab at an anti-functionalist argument, in the introduction, involves Alvin the reclusive cognitive engineer. (Note that, as far as my position here is concerned, this argument is no different from Jackson's [1982] example of Mary the color-blind color-scientist.) Alvin has never had the experience X which goes along with meeting a long-lost friend, but he HAS studied ALL the relevant abstract functional sentences (say, in a Turing-Machine language) which characterize personhood. So, the question is: When he leaves his lab and DOES meet a long-lost friend, will he have a revelatory experience? Functionalism, of course, says "No"; if he truly knows ALL the relevant information, then he'll say something like "Ah, yes! Feeling X; just as I expected." Bringsjord, however, contends that he would in fact be quite surprised (and thus this flavor of functionalism would be false), "simply because when I put myself in Alvin's shoes and imagine leaving my lab on the fated day, I'm absolutely bowled over by the meeting..." (p. 31). But this is clearly another bad intuition-pump: those just aren't the kind of shoes you can put yourself in!

10. JONAH. The argument from Jonah, for my purposes, is no different from Searle's original Chinese Room argument (Searle, 1980). Jonah is a mono-savant who can automatically and unconsciously reduce high-level computer programs to the primitive language of Turing Machines. He can then use his amazing powers of mental imagery to visualize the operation of the Turing Machine. Now, if functionalism is true, then "understanding Chinese" is a capacity which can be reduced to a computational process, which Jonah can instantiate and run, using his special powers. In fact, if Jonah does just that, then functionalism dictates that there must be a person S associated with him who understands Chinese. (Note that in the nontrivial versions of this setup, the person S need not be Jonah himself.) At this point Bringsjord argues that we have no reason to believe that such a person S exists, since the people who interact with him "will have ordinary and overwhelming reason to hold that they are interacting with [only] one person, namely Jonah" (p. 196). Well... yes, they will. But certainly that doesn't settle the argument. In dealing with such a contrived, extra-ordinary situation, such ordinary intuitions don't count for much. Bringsjord goes on to play with the mechanics of this example quite a bit, but I don't think that any of his moves escape this point. At each turn, the rational thing to do is to bite the bullet and say, "Well, perhaps, there really would be a person associated with Jonah. Who can tell, at this point?" We ought to remain agnostic on this issue, since we don't know anything at all about how persons are instantiated in ANY medium, including human brains.
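
For a concrete handle on the kind of reduction Jonah performs, note that it is, in spirit, what a compiler does. Here is a deliberately toy Python sketch (the two-instruction stack machine is my own invention; real reductions to Turing-machine primitives are enormously more involved): a high-level operation is rewritten into primitive steps, which are then executed mechanically, with no grasp of what they jointly accomplish.

    # Jonah's reduction, in spirit: rewrite a high-level operation into
    # primitive instructions, then execute them blindly, one at a time.

    def compile_sum(numbers):
        """'Reduce' the high-level expression sum(numbers) to primitives."""
        code = [('PUSH', n) for n in numbers]
        code += [('ADD', None)] * (len(numbers) - 1)
        return code

    def run(code):
        """Execute the primitives mechanically."""
        stack = []
        for op, arg in code:
            if op == 'PUSH':
                stack.append(arg)
            else:                      # 'ADD'
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
        return stack.pop()

    assert run(compile_sum([1, 2, 3, 4])) == sum([1, 2, 3, 4])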

11. GODEL. Bringsjord admits that his Godelian argument is not yet watertight, and I agree. (It is a neat idea, though.) Here's the stripped-down version:

    (1) Certain Godelian theorems show that Turing Machines can't solve
        certain problems -- call them X's.

    (2) Now, imagine Ralf, a really smart logician, whose fantastic
        mathematical powers allow him to effortlessly solve X-like
        problems.

    (3) Therefore Ralf isn't a Turing Machine, cognition isn't
        computation, and functionalism is false.

This argument is certainly valid. According to functionalism, if X is beyond the powers of computation, then it must also be beyond the powers of cognition. But, of course, this thesis also rules out the possibility of (2), that someone like Ralf could exist in the first place. Now, if in fact we could REALLY imagine someone with Ralf's powers, then perhaps (2) would be defensible. But of course we can no more grasp such a situation than we can grasp Alvin knowing ALL the relevant functional sentences. Again, it's what we CAN'T do that's responsible for the intuition. Since we can't imagine such situations, and since we know of no actual situations of the required type, the jury is still out.
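
For concreteness, here is the standard diagonalization behind premise (1), sketched in Python. The function halts is hypothetical by construction; the whole point is that no correct, total implementation of it can exist.

    # Suppose, for contradiction, that a total, correct oracle existed:
    def halts(program, argument):
        """Hypothetical: True iff program(argument) eventually halts.
        Provably, no such implementation can exist."""
        raise NotImplementedError

    def paradox(program):
        # Do the opposite of whatever halts() predicts about running
        # program on itself.
        if halts(program, program):
            while True:                # predicted to halt: loop forever
                pass
        return                         # predicted to loop: halt at once

    # Does paradox(paradox) halt?  If halts(paradox, paradox) is True,
    # then paradox loops forever; if False, it halts.  Either answer
    # refutes halts().  These are the "X-like problems" of premise (1)
    # -- and, per functionalism, Ralf could not solve them either.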

12. FREE WILL. The argument from free will is easy to anticipate. Here's the basic form:

    (1) Computers are deterministic.

    (2) If X is deterministic, then X can't have free will.

    (3) People have free will.

    (4) Therefore, people aren't computers.

My response should be just as easy to anticipate: again, people and computers are on a par here. As far as we can tell, complex systems of BOTH silicon and neuromush appear to follow deterministic laws, and (A), suitably organized computers enjoying free will, is no more mysterious than (B), suitably organized systems of neurons enjoying free will. As such, we ought to remain agnostic about the possibility of (A). Note that Bringsjord's final version of the argument from free will comes to rest upon the intuition that people are morally responsible for what they do (this is "obvious" [p. 302]), but that computers (being deterministic systems) cannot be held so responsible. My comments above apply just as easily to this particular version of the argument, mutatis mutandis.
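
Premise (1), at least, is easy to illustrate. In the Python sketch below (a toy of my own devising), even a program that "chooses" by randomness is fully determined by its initial state: fix the seed, and every run makes exactly the same choices.

    import random

    def make_choices(seed, n=5):
        """A program that 'chooses' -- yet every choice is fixed in
        advance by the initial state (the seed)."""
        rng = random.Random(seed)
        return [rng.choice(['left', 'right']) for _ in range(n)]

    # Identical initial conditions, identical behavior, on every run:
    assert make_choices(42) == make_choices(42)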

13. INTROSPECTION. We humans appear to possess a certain degree of what Bringsjord calls "hyper-weak incorrigibilism": "humans have, with regard to a restricted class of properties, the ability to ascertain infallibly via introspection, whether they have these properties" (p. 329). This notion fuels the following argument:

    (1) Persons are hyper-weakly incorrigible.

    (2) So if a suitably organized computer were to be a person, it
        would also have to be hyper-weakly incorrigible.

    (3) But all computers are subject to hardware failure, which could
        cause a corresponding failure to "ascertain infallibly" the
        relevant properties by introspection.

    (4) Therefore computers couldn't be hyper-weakly incorrigible, and
        couldn't be persons.

My response to this line of reasoning, of course, is the same old story: Neuromush appears to be just as subject to "hardware failure" as is silicon. As such, humans and computers are (again) on a par here. We have no clue how a bunch of silicon chips OR a bunch of neuromush could manage to introspect infallibly about anything. How, then, can we be sure that a suitably organized computer WOULDN'T have this property to the same degree that we humans do, regardless of hardware failure? That a bunch of silicon could manage to introspect is a crazy notion, to be sure, but no more crazy than that a bunch of neuromush could manage the same thing!

IV. CONCLUDING REMARKS

14. MATERIALISM AND DUALISM IN COGNITIVE SCIENCE. Now, it will be noticed that throughout this essay I have been assuming that materialism is the case. I have been saying things like "we have absolutely no clue whatsoever HOW the lump of matter which we call a human being manages to give rise to consciousness and personhood." Bringsjord, however, states repeatedly that he wishes to remain agnostic on this issue. Perhaps, he might counter, personhood DOESN'T arise out of lumps of matter; perhaps it comes from some other mystical, ethereal plane, or something. I have nothing to say here about this stance, except that it doesn't belong in any science, cognitive or otherwise. Here's the idea: "My main objection to dualism is that it is an unnatural and unnecessary stopping point -- a way of giving up, not a research program.... Never put "and then a miracle happens" in your theory. Now maybe there are miracles, but they are nothing science should EVER posit in the course of business" (Dennett, 1993, pp. 140-141). If this issue is a sticking point, feel free to amend my thesis here to read: All of Bringsjord's arguments depend on question-begging intuitions, unless personhood miraculously comes from some other mystical, ethereal plane.

15. At one point in his book, in a rather specific context, Bringsjord says, "I think intuition is on my side (for what it's worth here)" (p. 201). I agree! In fact, I think that's ALL that's on his side. But as I've tried to demonstrate here, it's not worth much at all.

16. To sum up: I suggest that (1) Bringsjord's anti-functionalist arguments require the service of (often implicit) question-begging INTUITIONS, concerning matters about which we really ought to remain agnostic, and (2) There are good reasons not to trust (much less have) such intuitions. But (1) is the important point here. Whatever their merit, these are just the sorts of "slippery metadisputes" that Bringsjord claims to have avoided.

V. REFERENCES

Bringsjord, S. (1992). What Robots Can and Can't Be. Boston: Kluwer Academic.

Bringsjord, S. (1994). Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Dennett, D. (1981). Reflections [on A Conversation With Einstein's Brain]. In The Mind's I, D. Hofstadter & D. Dennett (eds.), 457-460, New York: Bantam.

Dennett, D. (1993). Living on the Edge. Inquiry, 36, 135-159.

Jackson, F. (1982). Epiphenomenal Qualia. Philosophical Quarterly, 32, 127-136.

Nagel, T. (1974). What Is It Like to Be a Bat? Philosophical Review, 83, 435-450. Reprinted in Mortal Questions. Cambridge: Cambridge University Press, 1979.

Pylyshyn, Z. (1984). Computation and Cognition. Cambridge, MA: MIT Press.

Searle, J. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3, 417-457.
