Selmer Bringsjord (1995) Are Computers Automata? Psycoloquy 6(39) Robot Consciousness (17)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 6(39): Are Computers Automata?

ARE COMPUTERS AUTOMATA?
Reply to Rickert on Robot-Consciousness

Selmer Bringsjord
Dept. of Philosophy, Psychology & Cognitive Science
Department of Computer Science
Rensselaer Polytechnic Institute
Troy NY 12180 (USA)

selmer@rpi.edu http://www.rpi.edu/~brings

Abstract

At the heart of "What Robots Can and Can't Be" (ROBOTS, 1992) is the argument that persons can't be built by AI researchers because persons have certain properties, such as free will, that automata necessarily lack. Rickert (1995) objects that since computers are not automata, this argument fails. ROBOTS stated explicitly, however, that although computers (and, a fortiori, robots) are not, strictly speaking, automata, what differentiates robots and computers from automata is of no help, because if Turing Machines can't have free will, then neither can embodied Turing Machines with sensors and effectors, that is, neither can robots.

Keywords

behaviorism, Chinese Room Argument, cognition, consciousness, finite automata, free will, functionalism, introspection, mind, story generation, Turing machines, Turing Test.
1. At the heart of ROBOTS is this overarching argument (A1):

    (a) Persons aren't automata (e.g., aren't Turing Machines).

    (b) If AI's "Person Building Project" will succeed, then people
    are automata.

Therefore:

    (c) AI's Person Building Project will fail.
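
Read schematically (the gloss that follows, with its sentence letters, is mine and not the book's own notation), (A1) is an instance of modus tollens:

    (a)  \neg q                  [q = "people are automata"]
    (b)  p \rightarrow q         [p = "AI's Person Building Project will succeed"]
    ----------------------------
    (c)  \neg p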

Rickert grants that (c) follows from (a) and (b), and grants (a) -- but rejects (b). As he declares, "A computer is not an automaton." His position is that the Person Building Project assumes that people are computers (not automata), and that this proposition, unlike (b), is true.

2. I have elsewhere proposed detailed accounts of the nature of computers and computation (1994a). Though space limitations preclude presenting these accounts here, the fundamental idea behind them is one Rickert embraces and -- in his review -- promotes. That idea, put concisely, is: computation is a computer at work, and a computer is a physical instantiation of an automaton. Robots, in this Rickert-Bringsjord view (R-B), would in turn be computers with sensors and effectors.

3. I present (R-B) in ROBOTS. For example, I explicitly discuss the relevance of sensors and effectors on pages 78-79 (I provide there a picture of an artificial agent, replete with sensors and effectors). Later, at the end of Chapter VI, I explicitly consider the possibility that an appeal to sensors and effectors can disarm the Arbitrary Realization Argument, but Rickert seems to have missed this material. What's important is seeing why (A1) is elliptical for an argument schema (A2) that explicitly accommodates R-B, viz.,

    (d) Persons have property F (free will).

    (e) No automaton can have F.

    (f) If automaton x can't have F, then physical instantiations of x
        having sensors and effectors can't have F either.

    (R-B) Computers are physical instantiations of automata; robots are
          computers with sensors and effectors.

    (b') If AI's "Person Building Project" will succeed, then if
         persons have F robots can have F as well.

Therefore:

    (c) AI's Person Building Project will fail.

4. A moment's reflection will reveal that (A2) is formally valid, since it can be symbolized and certified in first-order logic (the proof is a reductio). (In general, in order to produce the kernels of anti-Person Building Project arguments in ROBOTS, one has simply to instantiate F appropriately. For example, Chapter IX is a specification of (A2) with F set to "able to introspect infallibly.") Rickert affirms (R-B) and (b'). He seems to say that he is inclined to affirm (d) and (e) as well (for certain assignments to F). So then why wouldn't he be convinced by (A2)? Well, he might be convinced, but I suspect that our analysis has uncovered his real objection, namely that (f) is false, that is, that being physical and having sensors and effectors allows automata to have F.
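
To illustrate the claim of validity, here is one way the symbolization might go. This is only a sketch of mine; the letters P (person), A (automaton), R (robot), F (the property at issue), I ("is a physical instantiation, with sensors and effectors, of"), and the sentence letter S ("the Person Building Project will succeed") are illustrative labels, not notation drawn from ROBOTS, and (d) is read as carrying the tacit assumption that at least one person exists:

    (d)    \forall x (Px \rightarrow Fx), together with \exists x Px
    (e)    \forall x (Ax \rightarrow \neg Fx)
    (f)    \forall x \forall y ((Ax \wedge \neg Fx \wedge Iyx) \rightarrow \neg Fy)
    (R-B)  \forall y (Ry \rightarrow \exists x (Ax \wedge Iyx))
    (b')   S \rightarrow (\exists x (Px \wedge Fx) \rightarrow \exists y (Ry \wedge Fy))
    -----------------------------------------------------------------------------
    (c)    \neg S

The reductio: suppose S; (d) supplies a person with F, so (b') yields a robot y with Fy; (R-B) supplies an automaton x that y instantiates; (e) gives \neg Fx; (f) then gives \neg Fy, contradicting Fy. Hence \neg S, which is (c).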

5. This diagnosis is supported by the text in question. For here is what Rickert says about Godelian issues:

    In his version of the [Godelian] argument, Bringsjord compares a
    logician Ralf, and a Turing machine, M-Ralf. The idea is that,
    according to Godel's theorem, there is some proposition that M-Ralf
    cannot prove. But Ralf claims to be able to prove this
    proposition. In order to rebut the Godel argument, all we need do
    is find a way of programming a computer, so that its capabilities
    are as good as those of Ralf. In order to do this, we find a world
    class mathematician and logician Helen, with abilities at least the
    equal of Ralf. Then we connect our computer M-Ralf to a network,
    so that it simply acts as a copying system, echoing whatever Helen
    enters on her terminal. Then with this program, M-Ralf can do
    whatever Helen can do, and this will easily match the abilities of
    Ralf.

6. Unfortunately, though this rebuttal is inventive, it's invalid. Sensors and effectors, in Rickert's scenario, allow M-Ralf to plagiarize Helen's work; but these devices don't give M-Ralf the capability in question, that is, the power to carry out certain proofs. The problem can be put more precisely: Rickert needs a counter-example to principle (f); so, by the truth-table for the conditional, he needs satisfaction of (f)'s antecedent and falsification of its consequent. This is a situation his thought-experiment fails to give us, because F in (A2), and, specifically, in (f), is to be set to "proving certain sorts of theorems." But M-Ralf-plus-copying-system doesn't have the capability of proving certain sorts of theorems. Hence a counter-example cannot possibly be generated. (Note, generally, that if Rickert's rebuttal were correct, then it would be, in principle, impossible to establish that some capability enjoyed by persons is beyond computers! -- because these capabilities, by Rickert's trick, could always simply be copied.)
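
To put the requirement in the same illustrative notation as above (again, only a sketch of mine): a conditional is falsified just in case its antecedent is true and its consequent is false, so a counter-example to (f) would have to be an automaton x and a physical instantiation y of x such that

    Ax \wedge \neg Fx \wedge Iyx holds, and yet Fy holds as well.

With F set to "proving certain sorts of theorems," Rickert's scenario supplies the antecedent (M-Ralf is an automaton lacking F, and the networked machine instantiates it), but it does not make Fy true, since the copying system merely relays Helen's proofs rather than producing proofs of its own.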

7. Since Rickert's review contains no additional arguments against (f) (it does contain additional assertions that (f) is false), I conclude that (A2) survives, and that the argument-schema at the heart of my book is unscathed.

REFERENCES

Bringsjord, S. (1994a) "Computation, Among Other Things, Is Beneath Us," Minds & Machines 4.4: 469-488.

Bringsjord, S. (1994b) Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Bringsjord, S. (1992) What Robots Can and Can't Be (Dordrecht, The Netherlands: Kluwer).

Rickert, N. W. (1995) "A Computer Is Not an Automaton: Book Review of What Robots Can and Can't Be" PSYCOLOQUY 6(11) robot-consciousness.6.rickert.

