Selmer Bringsjord (1996) The Inverted Turing Test is Provably Redundant. Psycoloquy: 7(29) Turing Test (4)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

THE INVERTED TURING TEST IS PROVABLY REDUNDANT
Reply to Watt on Turing-Test

Selmer Bringsjord
Dept. of Philosophy, Psychology & Cognitive Science
Department of Computer Science
Rensselaer Polytechnic Institute
Troy, NY 12180 (USA)
http://www.rpi.edu/~brings

selmer@rpi.edu

Abstract

Watt's (1996) Inverted Turing Test (ITT) is provably redundant: it is easily shown to be entailed by the original Turing Test (TT). And, contra Watt, that which motivates the ITT -- naive psychology -- is already withering in many humans (e.g., Bringsjord). Indeed, before long, I predict this property will be moribund across the planet.

Keywords

False belief tests, folk psychology, naive psychology, the "other minds" problem, theory of mind, the Turing test.

1. As is well known, the TT allows for a human judge to ask questions via teletype of two sequestered contestants, one of which is a human, one of which is a machine. The judge is to ascertain, by way of the answers to these questions, which of the contestants is which. The machine "passes" the TT when the judge can do no better than 50/50 (in a suitably parameterized instance of the test). Watt's "new" Inverted TT puts the machine in the judge's seat:

    "... a system passes if it is itself unable to distinguish between
    two humans, or between a human and a machine that can pass the
    normal TT, but which can discriminate between a human and a machine
    that can be told apart by a normal TT with a human observer (Watt,
    1996)."

2. But the ITT is provably redundant; here's why. Let me be the judge in the original TT. The two contestants are C1 and C2. I say to C1:

    Suppose that an agent (where "agent", you understand, is a term
    covering both human and machine) says A1 in response to my question
    Q, while another agent replies with A2 to Q. Suppose, also, that
    one of these two agents is a machine, and the other a human. Which
    is which?

I assimilate C1's answer, and then promptly put the same query to C2. Obviously, I can compare the responses I receive with the responses I would be inclined to give -- and I can proceed to compare the answers given by my interlocutors with those given by my human friends placed as judges into other playings of the TT. Indeed, with the data I now have on hand, I can do what Watt wants users of the ITT to do: I can check to see if the machine shows "the same regularities and anomalies in the ascription of mental states that a person would" (Watt, 1996). Put baldly, why can't Watt simply whisper advice to the judge in the TT: "Look," Watt can say, "find out what it takes for the contestants to ascribe mental states, okay?"
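
To see how mechanical the subsumption is, here is a sketch of the schematic query above posed within an ordinary TT session. The helper `ask` and the toy contestant `parrot` are my own scaffolding; nothing in Turing's protocol changes.

    def embedded_itt_query(ask, q, a1, a2):
        # The schematic query above, posed inside an ordinary TT session.
        # `ask` sends one question to a sequestered contestant and returns
        # its textual reply; q, a1, a2 instantiate the variables Q, A1, A2.
        prompt = ("Suppose an agent answered %r to the question %r, while "
                  "another agent answered %r. One of the two is a machine, "
                  "the other a human. Which is which, and why?" % (a1, q, a2))
        return ask(prompt)

    # Toy stand-in for a sequestered contestant (illustration only):
    parrot = lambda prompt: "The first agent is the machine."
    print(embedded_itt_query(parrot, "What is a marble?", "A toy.", "ERROR 42"))

Posing the same prompt to C2 and comparing both verdicts with those of human judges is then mere bookkeeping -- exactly the bookkeeping the ITT was invented to do.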

3. Watt tells us -- on the strength of his somewhat idiosyncratic interpretation of Dennett (1985) -- that Harnad's (1991) Total Turing Test (TTT) is redundant with respect to the TT. (Here he's just dead wrong: a mere "softbot" contestant in the TT can be foiled by snail-mailing it (say) a marble and asking for comment -- an unassailable point Harnad has implicitly made elsewhere (e.g., 1991, 1995).) Watt then anticipates a softer version of my objection regarding redundancy, and offers this anemic rebuttal:

    "A critically evaluated standard TT without a time limit would be
    sufficient to detect the presence of naive psychology. However,
    given that humans have all these psychological biases in their
    ascription of mental states, I doubt whether a truly critical
    version of the TT is psychologically possible without some
    variation of the test (Watt, 1996)."

4. The first sentence in this rebuttal is overthrown by the fact that, as I have described it, subsuming the ITT within the TT requires no change in Turing's original game, and can be played comfortably within reasonable temporal parameters. (As I think you will agree, the schematic query I offered above is quite finite, even when the variables within it are robustly instantiated.) What of the second sentence? Here Watt's position is self-refuting. For if Watt is clever enough to identify the biases in question, if he is clever enough to take note of the "natural tendency in us to ascribe mental states to others and to themselves" (Watt, 1996) -- indeed, so clever that he can think about designing a test to overcome these biases -- then how is it that the judge in the TT is so dim? Again, let's have Watt pay the judge a visit. "Listen, Judge," Watt can say, "I've found out about these biases, you see, and I want to make sure that you, like me, rise above them, okay?"

5. Let me conclude by pointing out that I cheerfully resist naive psychology, for reasons that bear directly on Watt's paper. In what I take to be a refutation of the TT and all its variations (Bringsjord, 1995), I have explained why those who retreat to some such claim as "if a computer passes the TT, then it's probably conscious" are in trouble. They are in trouble because we are sure to have amongst us, sooner rather than later, computer programs which, by capitalizing on myriad "tricks," are very hard to distinguish, linguistically speaking, from humans. So already today, in 1996, I wonder whether the correspondent I have just heard from textually over the Net is a fake. This sort of circumspection is bound to grow and grow -- to the point where we will all often be wondering whether this or that textual exchange is driven by a human or by a "bag of tricks" piece of software. In such an age (which I have argued will come to pass: Bringsjord, 1992, 1994), "naive psychology" will be no more.
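
To make the flavor of such "tricks" concrete, here is a deliberately crude, ELIZA-style sketch. The patterns are toy examples of my own; a program of the sort I predict would simply multiply and refine them.

    import random
    import re

    # A few toy rules; each pairs a pattern with canned comebacks.
    RULES = [
        (re.compile(r"\bI (?:feel|am) (\w+)", re.I),
         ["Why do you say you are {0}?", "How long have you been {0}?"]),
        (re.compile(r"\bbecause\b", re.I),
         ["Is that the real reason?", "What else might explain it?"]),
        (re.compile(r"\?\s*$"),
         ["Why do you ask?", "What do you think?"]),
    ]
    DEFAULT = ["I see.", "Please go on.", "Tell me more about that."]

    def bag_of_tricks(utterance):
        # No mental states possessed or ascribed: just pattern-matching.
        for pattern, templates in RULES:
            match = pattern.search(utterance)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(DEFAULT)

    print(bag_of_tricks("I am worried my interlocutor is software."))
    # e.g. -> Why do you say you are worried?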

REFERENCES

Bringsjord, S. (1995) Could, How Could We Tell If, and Why Should Androids Have Inner Lives. In: K. Ford, C. Glymour & P. Hayes (eds.) Android Epistemology. Cambridge, MA: MIT Press, pp. 93-122.

Bringsjord, S. (1994) Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Bringsjord, S. (1992) What Robots Can and Can't Be. Dordrecht, The Netherlands: Kluwer.

Dennett, D. C. (1985) Can machines think? In: M. Shafto (ed.) How We Know. Harper and Row.

Harnad, S. (1991) Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem. Minds and Machines 1: 43-54.

Harnad, S. (1995) Does the Mind Piggy-Back on Robotic and Symbolic Capacity? In: H. Morowitz (ed.) The Mind, the Brain, and Complex Adaptive Systems. Santa Fe Institute Studies in the Sciences of Complexity, Volume XXII, pp. 204-220.

Watt, S. (1996) Naive Psychology and the Inverted Turing Test. PSYCOLOQUY 7(14) turing-test.1.watt.

