Selmer Bringsjord (1995) Computationalism is Doomed, and we can Come to Know it. Psycoloquy: 6(10) Robot Consciousness (5)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

COMPUTATIONALISM IS DOOMED, AND WE CAN COME TO KNOW IT
Reply to Scholl on Robot-Consciousness

Selmer Bringsjord
Dept. of Philosophy, Psychology & Cognitive Science
Department of Computer Science
Rensselaer Polytechnic Institute
Troy, NY 12180

selmer@rpi.edu

Abstract

Scholl (1994) claims in his review of What Robots Can & Can't Be (ROBOTS; 1992, 1994) that my six-prong attack on what I call the Person Building Project (the engineering side of Strong AI) begs the question. Scholl does an admirable job of recapitulating the computationalist party line, but his position is plagued by a failure of imagination, a fatally flawed method, and, ironically enough, repeated instances of the fallacy of begging the question.

Keywords

behaviorism, Chinese Room Argument, cognition, consciousness, finite automata, free will, functionalism, introspection, mind, story generation, Turing machines, Turing Test.

1. The bulk of Scholl's (1994) complaint centers on one part of one prong of my attack, viz., my first version of the Arbitrary Realization Argument (ARA1). Since Scholl bowdlerizes ARA1, it's worth having on hand an accurate encapsulation.

2. In Chapter I of ROBOTS, it is established that the Person Building Project implies computational functionalism, one provisional version of which is:

(F) For every two "brains" x and y, possibly constituted by radically
    different physical stuff, if the overall flow of information in x
    and y, represented as a pair of flow charts (or a pair of Turing
    Machines, or a pair of Turing Machine diagrams, etc.), is the same,
    then if associated with x there is an agent s in mental state S,
    there is an agent s' "associated with" or "constituted by" y which
    is also in S.
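
In symbols (a schematic rendering of my own; the text itself states (F) in prose), with Flow(x) for the overall information flow of brain x, Assoc(s, x) for "agent s is associated with or constituted by x", and In(s, S) for "s is in mental state S", (F) reads:

    \[
    (F)\colon\quad \forall x\,\forall y\,\Bigl[\mathrm{Flow}(x) = \mathrm{Flow}(y)
      \;\rightarrow\; \bigl(\exists s\,(\mathrm{Assoc}(s,x) \wedge \mathrm{In}(s,S))
      \;\rightarrow\; \exists s'\,(\mathrm{Assoc}(s',y) \wedge \mathrm{In}(s',S))\bigr)\Bigr]
    \]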

So, with PBP abbreviating the proposition that the Person Building Project will succeed, we have the conditional

    (PBP) -> (F).

Now, for ARA1, assume that (PBP) is the case. By modus ponens, then, we of course have (F).

3. Proceed to let s denote some arbitrary person, let B denote the brain of s, and let s be in mental state S*, fearing purple unicorns, for example. Next, imagine that a Turing machine M, representing exactly the same flow chart as that which governs B, is built out of 4 billion Norwegians all working on railroad tracks in boxcars with chalk and erasers across the state of Texas. From this hypothesis and (F), it follows that there is some agent m -- call it Gigantor -- constituted by M which also fears purple unicorns. But it seems intuitively obvious that (or, if you polled the "man on the street" about Norwegian-ridden Texas he'd say that):

(1) There is no agent m constituted by M that fears purple unicorns
    (or, there is no Gigantor).

We've reached a contradiction. Hence our original assumption, (PBP), is wrong.
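
Laid out as an explicit derivation (my layout; ROBOTS gives the argument in prose), ARA1 is a straightforward reductio:

    \[
    \begin{array}{lll}
    1. & (PBP) & \text{assumption, for reductio}\\
    2. & (PBP) \rightarrow (F) & \text{established in Chapter I}\\
    3. & (F) & \text{1, 2, modus ponens}\\
    4. & \exists m\,(\mathrm{Assoc}(m,M) \wedge \mathrm{In}(m,S^{*})) & \text{3, construction of } M\\
    5. & \neg\exists m\,(\mathrm{Assoc}(m,M) \wedge \mathrm{In}(m,S^{*})) & \text{premise (1)}\\
    6. & \bot & \text{4, 5}\\
    7. & \neg(PBP) & \text{1--6, reductio ad absurdum}
    \end{array}
    \]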

4. Immediately after presenting this argument in the first two pages of Chapter VI: Arbitrary Realization, I write: "What sort of replies have been made to this argument? What sort of replies can be made?" I then spend the rest of the chapter unearthing and destroying replies to ARA1, including the one Scholl votes for in his review.

5. I will gladly confess that I only burn serious grey cells pondering replies which concede that ARA1 is, at minimum, a colossal embarrassment for the computationalist (the best response to ARA1 is an ingenious one from John Pollock (1989) -- who is honest and brave enough to concede (1) and to defenestrate (F) -- which forced me to devise ARA2). But I do consider in ROBOTS, at some length, what I call the "bite the bullet" response, the one on which Scholl puts all his chips. In this response, one clings to (F), denies (1), and says: "Look, you really can't picture this vast Texas Turing Machine; it's ineffably gigantic, so you have no right to hold that there is no Gigantor enjoying the inner state of fearing purple unicorns." Scholl puts it even less circumspectly: "Since we have no direct experience with such degrees or types of complexity, I would argue that we probably shouldn't trust (much less have) intuitions about what sorts of emergent properties might or might not arise out of it." Most proponents of computationalism will run for cover upon hearing this, since they won't be inclined to save their doctrine by affirming the spooky, vaporous, quasi-religious notion of emergentism!

6. With all due respect to Scholl (who, after all, simply follows here the party line laid down by such die-hard computationalists as the Churchlands (1990)), I'm beginning to think the "bite the bullet" response should be called the "head buried in the sand" response. After all, the scenario behind ARA1 is no more complex than scenarios one must consider to do elementary mathematical logic. In fact, I'd say that scenarios which students must conceive of in order to pass elementary mathematical logic are harder to imagine than Norwegian-populated Texas! For example, readers of Computability and Logic (Boolos & Jeffrey, 1980), an elementary logic text, will find that they're encouraged to imagine what Weyl (1949) and Bertrand Russell long ago conceived: a machine with the capacity to work infinitely faster as time goes by. Boolos and Jeffrey point out that such machines (inspired by their terminology, I dub them "Zeus Machines" in ROBOTS) can "enumerate the natural numbers in a finite amount of time." (Let that sink in.) So now let us get things straight. Scholl and Co. would have us believe that we can't really imagine a very big garden-variety Turing Machine, while we can not only imagine Zeus Machines but deliver nice mathematical theorems about them (e.g., it's easy enough to prove that the uncomputable Busy Beaver function (Rado, 1962; Bringsjord, 1992) can be computed by a Zeus Machine)? I don't buy it, not for a second. Mathematicians and logicians routinely conceive of things which make my Texas crew look excruciatingly mundane.
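
Lest anyone think "infinitely faster" hides a contradiction, the arithmetic is elementary (the particular schedule below is my illustration, not Boolos and Jeffrey's own numbers): let the machine spend 1/2^n seconds on its nth step. Then the total time for infinitely many steps is

    \[
    \sum_{n=1}^{\infty} \frac{1}{2^{n}} \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; 1\ \text{second}.
    \]

Within that single second a Zeus Machine can, for example, run every n-state Turing Machine on blank tape through all of its steps, set aside the non-halters, and report the maximum productivity of the halters -- thereby computing the value of the Busy Beaver function for n, something no ordinary Turing Machine can do.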

7. By the way, Scholl's lack of (or refusal to use his) imagination also infects his analysis of my embryonic version of my modalized Godelian case against the Person Building Project. For here Scholl says, point blank, that we can't imagine a fantastic logician, Ralf, able to solve an uncomputable problem. Really? Then how would Scholl explain trial-and-error machines (Bringsjord, 1993; Kugel, 1986), which can be formalized and then proved capable of solving the Halting Problem, which is of course beyond the power of a computer or Turing Machine to solve? Would he say that such machines can be defined and mathematically dissected -- but can't be imagined?
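
The trick behind trial-and-error machines is easy to state, and even to simulate in miniature: such a machine may revise its printed answer, and what counts as its verdict is the limit of its guesses, not any single output. Here is a toy sketch in Python (every name is my own illustration, not code from Kugel or ROBOTS; generators stand in for Turing Machines, and the step budget exists only so that the demonstration itself terminates):

    def trial_and_error_halts(machine, budget=None):
        # Yield a stream of guesses about whether `machine` (a generator
        # standing in for a Turing Machine) halts. The verdict is the LAST
        # guess in the stream -- the limit -- and that limit is always
        # correct: "False" stands forever exactly when `machine` never halts.
        yield False                  # provisional guess: "never halts"
        steps = 0
        for _ in machine:            # simulate one step at a time
            steps += 1
            if budget is not None and steps >= budget:
                return               # demo-only cutoff for the non-halter
        yield True                   # it halted after all: revise the guess

    def halter():                    # a "machine" that halts after 3 steps
        for i in range(3):
            yield i

    def looper():                    # a "machine" that never halts
        while True:
            yield 0

    print(list(trial_and_error_halts(halter())))         # [False, True]
    print(list(trial_and_error_halts(looper(), 10**6)))  # [False]

The epistemic catch, of course, is that no observer watching the stream can ever be certain the limit has been reached; that is why a trial-and-error machine is not an ordinary computer -- and yet it is perfectly easy to imagine.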

8. Now for Scholl's method. Here's his guiding principle:

    Whenever we think about consciousness, it's extremely important to
    remember that, whatever metaphysical beliefs we might have, we have
    absolutely no clue whatsoever how the lump of matter which we call
    a human being manages to give rise to consciousness and personhood
    (e.g., see Nagel, 1974). This being the case, we cannot rely on
    intuitions that a computer modelling a mind on a functional level
    would not also enjoy such qualities. That is, humans and computers
    are on a par here, and we really ought to remain agnostic on such
    issues until we start to figure out that crucial "how". (Scholl,
    1994)

9. This approach is easily refuted by parody. Suppose that for Jones the inner workings of the combustion engine in his Ferrari are about as mysterious as the operation of the neuro-matter inside his cranium. Jones knows that when he wants to go fast, it's easily accomplished; but he hasn't the foggiest idea about what underlying processes propel him at the pleasant speed of 200+ mph. (Jones knows that when he wants to do a New York Times crossword puzzle, it's easily accomplished; but he hasn't the foggiest idea about what underlying processes propel him at the pleasant speed of 5 words/minute.) Now, Black comes to Jones and says: "You know what? The stuff inside your head is really a bunch of fleas, very small fleas, all doing remarkably coordinated gymnastics." What would Jones say? What is he entitled, epistemically speaking, to say?

10. Obviously, despite his ignorance about neuro-matter, Jones is entitled to say that Black must be mistaken (and that he might not be running on all cylinders). Similarly, and contra Scholl, ARA1 shows that a mechanical conception of mind is, to put it mildly, highly suspect -- despite our ignorance about how neuro-matter gives rise to such things as subjective awareness. Computationalism is doomed because it's wedded to a mechanical conception of mind (that which is computable is said in recursion theory to be mechanically solvable), and we can come to see, by the lights of many arguments, despite our ignorance about the inner workings of neuro-matter, that cognition, whatever else it might be, isn't mechanical.

11. Scholl repeatedly commits the fallacy of petitio principii. He says at one point, "Indeed, for cognitive science, a computational functionalism would seem to be the only game in town." Even if this were true (and it isn't: my brand of cognitive science -- according to which some cognition is computable, and some isn't but can be mathematically represented with help from the harder side of recursion theory -- is not only another, but a better game; Bringsjord & Zenzen (forthcoming)), the tacit reasoning here is circular. For the reasoning is:

(2) If computational functionalism -- our (F) from above, say -- is
    false, cognitive science and the Person Building Project are
    misguided.

(3) Cognitive science and the Person Building Project aren't misguided.

Therefore:

(4) Computational functionalism is true.

Since (3) is precisely what's at issue, this argument is fallacious.
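
In symbols (my rendering), writing F for computational functionalism and M for "cognitive science and the Person Building Project are misguided", the form is

    \[
    \neg F \rightarrow M, \qquad \neg M \;\;\vdash\;\; F,
    \]

an instance of modus tollens and thus perfectly valid; the fallacy lies not in the form but in premise (3), which simply assumes away the very attack under discussion.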

12. Scholl's treatment of my agnostic attitude toward dualism is just more imprecision, and more question-begging. I'm agnostic on this score because I think there are some good arguments for dualism (of both the property variety (Jacquette, 1994) and the substance variety (Bringsjord, in press)). Scholl informs me, however, that dualism "doesn't belong in any science, cognitive or otherwise." Now note that even if this is true, it doesn't follow that agnosticism about dualism doesn't belong in any science; Scholl's enthymematic reasoning is a non sequitur.

13. I would like very much to witness a conversation between Scholl and Penfield or Popper or Kripke or Descartes or Chisholm or Jackson or Eccles, etc. Would he simply tell them all, "Well, I don't really care about your arguments, you're just wrong"? Not a very promising debate move, that. My position, put crudely, is that these chaps are far from dim, and if you look at the arguments in question, if you look at them carefully and with an open mind, agnosticism of my sort is more than justified.

14. In sum, Scholl has done nothing more than damage the view he would protect. The fact is, computationalism, the Person Building Project, Strong AI -- all of this stuff is coming to seem ridiculously wobbly. Soon it will all crash once and for all, for everyone to see, and then, once the king's horses and men shake their heads and trot off, it will be just one more intellectual carcass along the road toward understanding the mind.

REFERENCES

Boolos, G.S. & Jeffrey, R.C. (1980). Computability and Logic. Cambridge, UK: Cambridge University Press.

Bringsjord, S. (1992). What Robots Can and Can't Be. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Bringsjord, S. (1993). Toward Non-Algorithmic AI. In Ryan, K.T. & Sutcliffe, R.F.E. (Eds.), AI and Cog Sci 92, Workshops in Computing series. Springer-Verlag, pp. 277-288.

Bringsjord, S. (1994). Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Bringsjord, S. & Patterson, W. (in press). Review of John Searle's The Rediscovery of the Mind. Minds and Machines.

Bringsjord, S. & Zenzen, M. (forthcoming). In Defense of Uncomputable Cognition. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Churchland, P.M. & Churchland, P.S. (1990). Could a Machine Think? Scientific American, 262(1): 32-37.

Jacquette, D. (1994). Philosophy of Mind. Englewood Cliffs, NJ: Prentice-Hall.

Kugel, P. (1986). Thinking May Be More Than Computing. Cognition, 22: 137-198.

Nagel, T. (1974). What Is It Like to Be a Bat? Reprinted in Mortal Questions. Cambridge, UK: Cambridge University Press, 1979.

Pollock, J. (1989). How to Build a Person: A Prolegomenon. Cambridge, MA: MIT Press.

Rado, T. (1962). On Non-Computable Functions. Bell System Technical Journal, 41: 877-884.

Scholl, B. (1994). Intuitions, Agnosticism, and Conscious Robots: Book review of Bringsjord on Robot-Consciousness. PSYCOLOQUY 5(84) robot-consciousness.4.scholl.

Weyl, H. (1949). Philosophy of Mathematics and Natural Science. Princeton, NJ: Princeton University Press. (See pp. 38-42.)

