Walter Massing (1995) Metaphysical Windmills in Robotland. Psycoloquy: 6(16) Robot Consciousness (11)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

METAPHYSICAL WINDMILLS IN ROBOTLAND
Book Review of Bringsjord on Robot-Consciousness

Walter Massing
Day Clinic of the Langenhagen Psychiatric Clinic
Koenigstrasse 6a
D-30175 Hannover Germany

ndxdmass@rrzn-user.uni-hannover.de

Abstract

One of the main difficulties in Bringsjord's book What Robots Can and Can't Be (1992, 1994) arises from the fact that concepts such as "person", "free will" and "introspection" are metaphysical and cannot be subjected to empirical scrutiny. Central statements such as the "project to build a person" (PBP) are imprecisely formulated, for PBP is not a project but at best a prediction. Finally, Bringsjord's method is inadequately defined when he speaks of "precise deductive arguments which sometimes border on proofs," for deduction should be regarded not as a method but as a rule. Without going all the way down the road of the "eight clusters of philosophical issues," we will take a look at natural language as well as the more stringent Turing Test and the Goedel argument.

Keywords

behaviorism, Chinese Room Argument, cognition, consciousness, finite automata, free will, functionalism, introspection, mind, story generation, Turing machines, Turing Test.

I. REASON

1. A psychiatrist may find himself in the situation where a deluded patient informs him that he, the psychiatrist, is not the person M., indeed not a person at all, but an automaton which has been cloned so well that nobody but the patient has noticed it. The patient's assertion is thus: "You are not the person M, but an automaton."

2. It is not enough for the patient to make this statement. He develops a powerful system of formal deductions in order to prove his assertion that the psychiatrist M. is an automaton. This, in turn, puts the psychiatrist in a difficult position, and he searches for evidence to support the statement: "persons cannot be automata". Right in the middle of this search, the electronic message reaches him that Selmer Bringsjord has solved this problem.

II. INTRODUCTION

3. Bringsjord's book argues that (1) AI will continue to produce machines with the capacity to pass stronger and stronger versions of the Turing Test, but that (2) the "Person Building Project" (the attempt by AI and Cognitive Science to build a machine which is a person) will inevitably fail. His defense of (2) "rests in large part on a refutation of the proposition that persons are automata...".

4. How are we to understand sentences of the form "persons are automata" and "~(persons are automata)"? The first section of this review investigates the form of Bringsjord's argumentation. This is made more difficult by the fact that inadequately defined concepts are mixed together with skillfully deployed and highly complicated mathematical constructions such as Turing machines, NP-completeness, the busy beaver function and so on. If instead of asking What Robots Can and Can't Be we ask the simple question "will we ever succeed in building robots that display humanlike behavior?", any relationship between these highly complicated mathematical constructions and the task disappears.

5. Without examining all eight key philosophical clusters, sections V, VI and VII will look at natural language, the more stringent Turing Test and the Goedel argument.

III. METAPHYSICAL CONCEPTS

6. Since the time of Boethius, "person" has been regarded as the "individual substance of a rational nature". Descartes, for example, referred to all animals as "automata". "'Automaton' in this thesis may for the moment be replaced by anything from 'digital computer', to 'finite automaton', to 'infinite abacus', to 'cellular automaton', to 'universal Turing machine'", says Bringsjord. "Universal Turing machine" would have been enough, for it subsumes all the others. Let us, like Bringsjord, define:

    [1] Persons are universal Turing machines.

and replace "universal Turing machines" with "brains". Thus:

    [2] Persons are brains.

This sentence cannot be negated, because it is ill-formed. The following sentence could be negated:

    [3] Artificial brains are universal Turing machines.

Equally, the sentence:

    [4] Horses are flippers.

cannot be negated, that is, cannot be falsified; but the following sentence can:

    [5] Horses are reptiles,

for "reptiles" in [5] can be replaced by "mammals". Replacing "flippers" in [4] with "hooves", by contrast, still does not yield a meaningful sentence.

7. Bringsjord does not succeed, and indeed cannot succeed, in finding a class in which "persons" and universal Turing machines (UTMs) together form a meaningful subclass. Statements about universal Turing machines are falsifiable (for example UTM1: "a universal Turing machine always halts after a finite number of steps"), but statements like P1: "persons are genuine individual things" (p. 202) are not. So if UTM1 and P1 are combined to form a class, UTM1 v P1, then UTM1 can no longer be falsified, and a theory which could only be empirical can no longer be tested.

IV. FEASIBILITY OF PROJECTS

8. Projects are to be distinguished from scientific theories and from predictions. We would not speak of a "heliocentric project" when referring to the transition from the Ptolemaic to the Copernican conception of the world. Nor are predictions, that is, testable singular consequences of a theory (for example, the Einstein shift as a test of general relativity), projects.

9. Examples of typical projects are the NASA moon-landing project or the international genome project. What is typical about them? They take place within a frame of reference which is fully accounted for by theory, and within the limits of the technically feasible. They are not predictions because, in contrast to predictions, projects cannot falsify a scientific theory: Neil Armstrong would not have been sent to the moon if the failure of the mission could have falsified a theory.

10. Now, PBP cannot be based on a theory, for a variety of reasons; two of them follow. (1) The metaphysical concept of a person as the "individual substance of a rational nature" cannot be linked to empirical properties within a theory. (2) If the metaphysical concept of a person is replaced by the empirically testable designation "machine with humanlike behavior", then no theory of how a robot of this kind could be constructed has been developed up to now. Thus there can at most be predictions as to whether such a robot can be constructed.

V. PROOF BY DEDUCTION?

11. "First order natural deduction" is what Bringsjord calls the method which "proves" that the AI functionalism thesis is false. This thesis is as follows:

    (AI-F)  For every two "brains" x and y, possibly constituted by
    radically different stuff, if the overall flow of information in x
    and y, represented as a pair of flow charts (or a pair of Turing
    machines, or a pair of Turing machine diagrams...), is the same,
    then if "associated" with x there is an agent s in mental state S,
    there is an agent s' "associated" with y which is also in S.

12. Apart from the fact that the term "mental state" is undefined (and will remain so), and apart from the fact that the agents s and s' are not in an identical state S but at most in similar states S and S' (since we are talking about a pair of machines), the formal procedure itself will be investigated here:

    AI-F ->  (something)
            ~ (something)
     -------------------
    ~ AI-F

13. This method - let us call it "hypothetical deduction" - does not allow the consequence to be derived from one single testable statement alone, here AI-F; rather, the derivation is made from a theory, that is, a conjunction of several statements A1 & A2 & A3 ... & AN. If we term "A1 & A2 & A3 ... & AN" THEORY, then we get

    AI-F & THEORY ->  (something)
                    ~ (something)
    ----------------------------
    ~ AI-F v ~ THEORY

so that we are left with the choice between rejecting AI-F and rejecting THEORY. I should point out that Siu L. Chow, in his review of Rakover on metapsychology (1994), uses the same argument.

14. Thus the "method of proof" would remain formally incorrect even if all the ill-defined concepts such as "mental state", "pure psychological sentence", "s genuinely understands p", etc. were operationalized.
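
The point is easy to verify mechanically. The following brute-force check (my own illustration in Python; the propositional rendering is of course a simplification of Bringsjord's first-order apparatus) enumerates truth assignments and confirms that once THEORY joins the premises, ~AI-F alone no longer follows, whereas ~AI-F v ~THEORY does:

    from itertools import product

    def valid(premises, conclusion, n):
        """An inference is valid iff every assignment satisfying
        all premises also satisfies the conclusion."""
        return all(conclusion(*v)
                   for v in product([True, False], repeat=n)
                   if all(p(*v) for p in premises))

    # (i) plain modus tollens: AI-F -> C, ~C, therefore ~AI-F
    print(valid([lambda a, c: (not a) or c,
                 lambda a, c: not c],
                lambda a, c: not a, 2))                  # True

    # (ii) with THEORY in the premises, ~AI-F alone does not follow
    print(valid([lambda a, t, c: (not (a and t)) or c,
                 lambda a, t, c: not c],
                lambda a, t, c: not a, 3))               # False

    # (iii) but the disjunction ~AI-F v ~THEORY does follow
    print(valid([lambda a, t, c: (not (a and t)) or c,
                 lambda a, t, c: not c],
                lambda a, t, c: (not a) or (not t), 3))  # True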

VI. PARSERS FOR NATURAL LANGUAGES?

15. In the chapter "What Robots Can Be", the impression is given that there is an automatic procedure which enables something like "natural language processing" (NLP) to take place. Concerning this, the author writes on page 164:

    "Let's suppose that natural language input comes by way of a
    keyboard. Then part of what NLP ultimately aims at is some such
    scenario as this. You type natural language input on the keyboard;
    this input is parsed syntactically and semantically, giving (let us
    suppose) a tree (a tree in some generic sense, not just a parse
    tree); this tree is translated into the internal representational
    language (which we may suppose without loss of generality to be
    first-order logic);..."
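
A toy sketch of this envisioned pipeline (in Python; every pattern and predicate name here is my own hypothetical illustration, and only a handful of fixed sentence forms are handled) shows how little machinery suffices for the happy cases:

    import re

    # Map a few fixed surface patterns directly to first-order formulas.
    # Real semantic parsing of unrestricted natural language is, as
    # discussed below, an unsolved problem.
    PATTERNS = [
        (re.compile(r"every (\w+) is (?:a )?(\w+)", re.I),
         lambda m: "forall x (%s(x) -> %s(x))"
                   % (m.group(1).capitalize(), m.group(2).capitalize())),
        (re.compile(r"some (\w+) is (?:a )?(\w+)", re.I),
         lambda m: "exists x (%s(x) & %s(x))"
                   % (m.group(1).capitalize(), m.group(2).capitalize())),
        (re.compile(r"(\w+) is (?:a )?(\w+)", re.I),
         lambda m: "%s(%s)" % (m.group(2).capitalize(), m.group(1).lower())),
    ]

    def to_first_order(sentence):
        """Translate one of the known sentence forms into a formula."""
        for pattern, build in PATTERNS:
            m = pattern.fullmatch(sentence.strip().rstrip("."))
            if m:
                return build(m)
        raise ValueError("cannot parse: %r" % sentence)

    print(to_first_order("Every man is mortal."))  # forall x (Man(x) -> Mortal(x))
    print(to_first_order("Socrates is a man."))    # Man(socrates)

Anything beyond such canned forms, however, runs at once into the difficulties that follow.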

16. The syntactic analysis of natural language, and to an even greater extent its semantic analysis, is still an unsolved problem. Semantic parsers may be in routine use for programming languages, but in the domain of natural language, semantic parsing is more often ridiculed as "mentalese". Linguists and proponents of AI agree that the interpretation of natural language poses substantial difficulties.

17. The best remarks on this subject were made 50 years ago by Tarski in a section on The Inconsistency of Semantically Closed Languages (1944):

    "Our everyday language is certainly not one with an exactly
    specified structure. We do not know precisely which expressions are
    sentences, and we know even to a smaller degree which sentences are
    to be taken as assertable. Thus the problem of consistency has no
    exact meaning with respect to this language. We may at best only
    risk the guess that a language whose structure has been exactly
    specified and which resembles our everyday language as closely as
    possible would be inconsistent."

VII. THE REFINEMENT OF THE TURING TEST

18. A refinement of the Turing Test (Turing, 1950; Harnad, 1991) in the direction of cloning a human individual does not accord with Turing's basic idea. What he may have had in mind was the creation of a kind of model of the human mind, and testing, by means of his test, the quality of this model when installed on a machine. In any case, he could not have been interested in a perfect true-to-life replica of a human individual; what he had in mind was rather the operationalization of human thinking.

19. It is for that reason that Turing imposed stringent restrictions on his imitation game, which he partially formulated in the form of a bet:

    "I believe that in about fifty years' time it will be possible to
    programme computers, ..., to make them play the imitation game so
    well that an average interrogator will not have more than 70 per
    cent chance of making the right identification [as between human
    and computer] after five minutes of questioning."

20. Turing expressly excludes higher levels of his imitation game; thus he also excludes any imitation using biological similarities which are not compatible with the rules of the game, as the following passage shows:

    "The new problem has the advantage of drawing a fairly sharp line
    between the physical and the intellectual capacities of a man. No
    engineer or chemist claims to be able to produce a material which
    is indistinguishable from the human skin. It is possible that at
    some time this might be done, but even supposing this invention
    available we should feel that there was little point in trying to
    make a 'thinking machine' more human by dressing it up in such
    artificial flesh. The form in which we have set the problem
    reflects this fact in the condition which prevents the interrogator
    from seeing or touching the other competitors, or hearing their
    voices."

VIII. THE GOEDEL ARGUMENT

21. Bringsjord brings up the old Goedel argument again: "Whatever else Goedel did in his incompleteness results, he most certainly proved that automata are limited in certain ways; to put it another way and schematically, Goedel showed that classical automata can't do X." (p. 231). And here is the typical way in which he draws conclusions: "If it could be shown persons can do X, then we at least have the general shape of a case against PER-aut, the view that persons are automata." (It should perhaps be pointed out that Goedel was dealing not with automata but with sentences of the Principia Mathematica.)

22. This objection has been raised constantly since 1961, the year in which J.R. Lucas's article Minds, Machines and Goedel appeared. It seems to be less well known that Turing himself had anticipated the Goedel objection as early as 1950, regarding it as unsuitable for attempting to prove limits to the capacity of discrete machines. The mathematical objection goes back to the well-known Goedel theorem, which states that in any sufficiently powerful logical system, sentences can be formulated which can be neither proved nor refuted within the system, unless the system itself is inconsistent.
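
For reference, a standard modern statement of the first incompleteness theorem (my formulation, not a quotation from Bringsjord or Turing):

    If $T$ is a consistent, effectively axiomatizable theory that
    interprets elementary arithmetic, then there is a sentence $G_T$
    in the language of $T$ such that $T \nvdash G_T$ and
    $T \nvdash \neg G_T$.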

23. In line with this result, according to Turing, a machine could be asked to put a question to another machine constructed to a simple standard. If this question is answered wrongly, or if no answer is forthcoming, the result would be taken to prove an incapacity of machines to which the human intellect is not subject.

24. An argument of this kind, Turing says, shows that the limits of particular machines may be provable, but the assertion that such limitations do not apply to the human intellect is made without any proof. Turing points out how many errors human beings make, and that one's feeling of superiority towards a particular machine may seem justified; but when many machines of this kind are combined, a degree of complexity would result which would make the machines cleverer than human beings. This was Turing's point.

25. The question of whether "machines are able to think the way a human being can think" can at most be answered if thinking is a property which can be conceived of as an algorithm. This does not seem to be the case. Turing succeeded in describing the procedure for calculating decimal expansions with finite means in such a way that for each computation step which a human being could carry out, a machine carries out the same step with only a finite number of configurations q1, q2, ..., qR. In contrast to the problem of computable numbers, there is no way of defining the configurations involved in thinking which would allow us to draw parallels between human beings and machines. This, however, is not proof that machines cannot think, and that is exactly what Turing wished to convey in his Mind article (1950).
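
What such finite configurations amount to can be made concrete with Turing's own first example from the 1937 paper, the machine that prints the sequence 0 1 0 1 ... indefinitely. The following minimal simulator (my own sketch in Python) captures the whole machine in a four-line table; the states "b", "c", "e", "f" are Turing's m-configurations:

    # Turing's first example machine: prints 0 1 0 1 ... with blank
    # squares in between.  The entire behavior is fixed by a finite
    # transition table over four configurations.
    TABLE = {
        # state: (symbol_to_write, head_move, next_state)
        "b": ("0", +1, "c"),
        "c": (None, +1, "e"),   # None: leave the square blank
        "e": ("1", +1, "f"),
        "f": (None, +1, "b"),
    }

    def run(steps):
        tape, head, state = {}, 0, "b"
        for _ in range(steps):
            write, move, nxt = TABLE[state]
            if write is not None:
                tape[head] = write
            head += move
            state = nxt
        return "".join(tape.get(i, "_") for i in range(max(tape) + 1))

    print(run(12))   # -> 0_1_0_1_0_1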

26. Today we know that many processes in nature are symbol manipulations. Is it necessary for such symbol systems, which in the case of the genetic code also generate human beings and animals, to demonstrate their own consistency? Or is it more important that nature select those symbol systems which can survive best under present and future conditions? It is thus not to be expected that the genetic code, an alphabet consisting of the letters adenine, guanine, cytosine and thymine, will ever prove its own consistency.

IX. CONCLUSION

27. An empirical theory must do without metaphysical concepts. Metaphysical statements like "simplicity is indivisible" thus elude empirical scrutiny forever, because they are always true statements. Nor is it necessary to "prove" that the metaphysical concept "person" cannot be described empirically as a "Turing machine". If one plans machines with humanlike behavior, without making any "metaphysical" claims, then a theory in the sense of a hypothetico-deductive system would be of use, a theory from whose basic assumptions all such machines could be deduced. No theory of this kind exists up to now; robots have been built, with some success, along purely pragmatic lines.

ACKNOWLEDGMENTS

I would like to thank Mr. Dermot McElholm for his valuable assistance in preparing this review.

REFERENCES

Bringsjord, S. (1992). What Robots Can and Can't Be. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Bringsjord, S. (1994). Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Chow, Siu L. (1994). Theory-Data Relations and Theory Acceptance: Book Review of Rakover on Metapsychology. PSYCOLOQUY 5(25) metapsychology.5.chow.

Harnad, S. (1991) Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem. Minds and Machines 1: 43-55.

Lucas, J.R. (1961) Minds, Machines, and Goedel. In A.R. Anderson (ed.), Minds and Machines (Englewood Cliffs, NJ: Prentice Hall), pp. 43-89.

Tarski, A. (1944) The Semantic Conception of Truth and the Foundations of Semantics. Philosophy and Phenomenological Research 4: 341-375.

Turing, A.M. (1937) On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society (2) 42: 230-265.

Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59: 433-460.

