John Barresi (1995) Building Persons: Some Rules for the Game. Psycoloquy: 6(12) Robot Consciousness (7)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

BUILDING PERSONS: SOME RULES FOR THE GAME
Book Review of Bringsjord on Robot-Consciousness

John Barresi
Department of Psychology
Dalhousie University
Halifax, Nova Scotia
B3H 4J1 Canada

JBARRESI@AC.DAL.CA

Abstract

The criteria that Bringsjord uses to decide what robots can and can't be seem inconsistent. I agree with Bringsjord that human persons are not "classical automata" (e.g., Turing machines), but then neither are any interesting robots. I suggest that robot intelligence must be tested by nature rather than human judges. The "Cyberiad test" (Barresi, 1987) might replace the Turing test sequence.

Keywords

behaviorism, Chinese Room Argument, cognition, consciousness, finite automata, free will, functionalism, introspection, mind, story generation, Turing machines, Turing Test.

I. INTRODUCTION

1. The mere existence of What Robots Can and Can't Be (Bringsjord, 1992; 1994) suggests that a crucial question for philosophical psychology in the twenty-first century will be: Is it possible to make persons, or can we only make robots that behave like persons but are not persons? We are no longer questioning the possibility of artificial intelligence (AI), but of artificial persons. Yet Bringsjord's approach to this new question seems to me to stick too closely to the old one of computer intelligence and hardly moves at all beyond it into the new territory of robotics or person building (PB). Even so, merely by posing the question of the possibility of building persons in an explicit fashion, as well as by providing a variety of attempted proofs of its impossibility, Bringsjord raises the level of discussion from the ad hominem debates that typically occur between AI optimists and pessimists to one which can focus on particular arguments, their premises and conclusions.

2. Bringsjord's conservatism in facing the difference between AI and PB programs is reflected in the different criteria that he applies when evaluating what robots can and can't be. When speaking of what robots can be, he uses an unending sequence of Turing tests, in which robots that pass one test move on to a more stringent one, to evaluate how closely robots can approach persons. For the most part, his discussion of these possibilities does not involve setting any a priori restrictions on how a robot might be constructed.

3. However, when evaluating what robots can't be, he uses restricted criteria like AI functionalism, finite automata, and Turing machines to define more precisely what robots are, but what he believes persons are not. As a result, it appears by the first set of criteria that robots will closely approximate the behavior of persons, yet by the second set of criteria that they can never become persons. It seems to me that both these sets of criteria are inadequate to the person building project (PBP), though they may suffice for the evaluation of AI. In what follows I defend this position by showing how the use of these two sets of criteria leads Bringsjord into inconsistencies. I then put forward a more consistent criterion by which to decide whether robots are persons.

II. DEFINING ROBOTS SO THEY CAN'T BE PERSONS

4. The strategy that Bringsjord generally adopts to prove that robots are not persons is: first, to define what a robot is, for example, a Turing machine; second, to describe some characteristic that seems prima facie applicable to persons, for example, free will; finally, to show in a deductive argument that this characteristic applicable to persons implies that persons cannot be robots so defined. In his argument from free will, Bringsjord claims that in order for persons to have free will they must be capable of entering "infinitely many mental states over some [finite time interval]" (p. 267), which implies that persons have mathematical functional capacities that Turing machine robots cannot compute. In the end, each of these arguments rests on a comparison of entailments of our intuitive understanding of the capacities of persons to entailments of precise definitions of robots.

5. This strategy cannot contribute usefully to PBP or to the evaluation of its possibility, for two reasons. First, if PBP is to be taken seriously at all, then we are seeking to develop a precise definition of person, and our current intuitions about person capacities, based on direct knowledge of ourselves and other persons, must be viewed as revisable. Hence, there can be no entailments of our intuitive understanding of persons that are not defeasible. The debate over the relationship between free will and determinism should show us that there are no necessary entailments of our intuitive interpretation of free will that can be used to prove that robots cannot be persons (cf. Dennett, 1984). Second, if PBP is to have free rein in the construction of robot-persons, we should not restrict our understanding of robots only to those defined in a particular precise manner. In the case of Bringsjord's argument on free will, he himself seems willing to admit that robots might be constructed that are not Turing machines but rather trial and error machines (Putnam, 1965), which could compute the mathematical function (the "busy beaver") used in this argument. Perhaps persons, whether of the robot or human variety, are species of trial and error machines (Kugel, 1986).
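Since the free-will argument turns on this function, it may help to have its standard definition in front of us. The following formulation (Rado's) is added here for reference; it does not appear in Bringsjord's text or in this review:

```latex
% Standard definition of the busy beaver function (Rado's formulation;
% added for reference, not quoted from Bringsjord or the review).
% Let H_n be the set of n-state, 2-symbol Turing machines that halt
% when started on an all-blank tape. Then
\Sigma(n) \;=\; \max_{M \in H_n} \#\{\text{1s left on the tape when } M \text{ halts}\}.
% No Turing machine computes \Sigma: it eventually dominates every total
% computable function f, i.e., \Sigma(n) > f(n) for all large enough n.
% A trial and error machine can nevertheless produce \Sigma(n) in the
% limit: run all n-state machines in parallel and, whenever one halts,
% revise the tentative answer upward; the revisions eventually stop at
% the true value.
```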

6. The difference between Turing and trial and error machines is that Turing machines provide definitive answers to decidable problems but give no answers to undecidable problems, while trial and error machines give tentative but revisable answers to both types of problem. As a result, trial and error machines are more powerful: in the limit, they can even solve the "halting" problem, by answering "no" at the outset and revising to "yes" only if the computation is observed to halt; for a computation that never halts, the initial "no" simply never gets revised. Computers can simulate trial and error machines as well as Turing machines, depending on how one interprets the various outputs produced before a program stops (Kugel, 1986). While trial and error machines currently seem more appropriate than Turing machines as models of human mental activities, even these machines may lack characteristics that will someday be formalizable in our more precise definition of person. At any rate, we should not take any current particular definition of what a robot is as definitive for all PBPs.
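To make the contrast concrete, here is a minimal sketch of a trial and error computation of the halting predicate, in the spirit of Putnam (1965) and Kugel (1986). The step-counting simulator and all names are my illustrative assumptions, not code from either source:

```python
# A trial-and-error (limiting) "decision" of the halting problem. The
# machine emits an unending stream of tentative answers; its answer "in
# the limit" is whatever the stream converges to.

def halted_within(program, arg, steps):
    """Toy stand-in for a universal machine: run `program` (a Python
    generator function) on `arg` for at most `steps` steps and report
    whether it halted within that budget."""
    gen = program(arg)
    for _ in range(steps):
        try:
            next(gen)
        except StopIteration:
            return True   # halted within the budget
    return False          # still running after `steps` steps

def halts_in_the_limit(program, arg):
    """Yield a stream of tentative answers to 'does program(arg) halt?'.
    The initial answer is 'no'; it is revised to 'yes' at most once, if
    the simulation is ever observed to halt. A Turing machine must stop
    to deliver its verdict; a trial and error machine never has to."""
    yield "no"            # initial, revisable answer
    budget = 1
    while True:
        if halted_within(program, arg, budget):
            while True:
                yield "yes"   # revised answer, never changed again
        yield "no"            # tentative 'no' stands; try a bigger budget
        budget *= 2

def example(n):
    """A program that plainly halts: counts to n, then stops."""
    for i in range(n):
        yield i

answers = halts_in_the_limit(example, 40)
print([next(answers) for _ in range(9)])
# -> ['no', 'no', 'no', 'no', 'no', 'no', 'no', 'yes', 'yes']
```

The design point is that the machine's verdict is whatever its answer stream converges to, so it never needs to know when it is finished; a Turing machine, by contrast, answers only by halting.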

III. ROBOTS CAN'T BE ANYTHING BUT CAN DO ANYTHING

7. The second half of Bringsjord's strategy is to argue that, although robots won't be persons, they will be able to pass any Turing-for-robots test that we can devise. What warrant does he have for this optimism? Very little. He uses as part of his argument a sketch of one program that is supposed to be able to solve mysteries like those in the adventures of Sherlock Holmes, as well as another program that can be used to write such mysteries. He takes these sketches as providing some inductive evidence that robots will someday instantiate the full programs the sketches describe, and hence that they will exhibit some of the most cognitively complex of human capacities. Bringsjord's conservatism in this half of his strategy is expressed in part by his optimistic belief that the right kind of program, linked up with human and other substitutes for a body, will somehow tell us all we need to know to ensure that we could build robots able to pass an indefinite sequence of Turing simulation games testing robots' capacities to mimic the behavior of persons. However, there are no robots here, with sense organs and behavior in the real world, whose behavioral capacities can be evaluated. At best, we would have programs to which we could apply traditional Turing tests, using typed verbal input and output to assess the programs.

8. Why does Bringsjord believe that he has provided us with anything more than a dream? He admits that the success of his sketch of the SHERLOCK program hangs heavily on finding a solution to the frame problem. His story-writing program may be in even worse shape, for, among other problems, he suggests that the writing program must be able to represent the point of view of characters as well as to engage in iconic representations of scenery and events. Both of these problems he takes to be presently outside the range of computability. Yet, despite these difficulties, he remains optimistic that robots will eventually pass most, if not all, of an indefinite sequence of robot-Turing tests. On what does he base his faith? Why does he think that robots will eventually outwit Sherlock and approximate Conan Doyle, yet also claim that they won't have free will because they won't be able to compute the "busy beaver" function? Is Bringsjord being consistent here?

9. Actually, Bringsjord doesn't really make a claim about whether robots will or won't compute the "busy beaver" function, for when he evaluates robots' capacity to pass the Turing test sequence he is rarely explicit about what kind of automata they are. When he talks about the future success of robots, he appears at times to leave open how they will solve the problems he is optimistic they will solve. Yet, because his focus here is on the robot's capacity merely to simulate human behavior rather than human mental operations, I think he believes that computable solutions will be found that come close enough behaviorally to pass the Turing test sequence without actually generating the behavior in the same manner as persons. Thus, while Bringsjord believes that persons perform mental operations that are not computable by Turing machines, he also believes that expert systems might be constructed, using Turing-computable algorithms, that can sufficiently approximate the behavioral consequences of these mental operations to pass most if not all of the robot-Turing test sequence.

10. But again I ask: since we know that trial and error machines can solve some of these problems, and we have reasons for believing that these machines provide better models of human mental activities than Turing machines do (Kugel, 1986), why not just adopt the hypothesis that PB will require machines at least as powerful as trial and error machines? As far as I can tell, the reason Bringsjord doesn't take this stand is that he wants to direct his attacks against the notion that persons are "classical automata" (e.g., Turing machines) (p. 63). It would do him little good to deny that persons are Turing machines if he had to accept that robots weren't either. Thus, in order to maintain consistency between the two prongs of his argument, he has to remain faithful to the hypothesis that Turing-machine robots will someday pass most, if not all, of the Turing test sequence. However, based on the evidence so far, there is little reason to believe that Turing machines can do this. And even if they could pass any sequence of Turing simulation tests, are these tests adequate for deciding whether Turing-machine robots are behavioral equivalents of persons? I think not, as I will argue in the next section.

IV. NEW RULES FOR THE GAME: PROBLEMS AND PROSPECTS

11. Robots are not programs. Turing's notion of a computable function, combined with Church's thesis, suggested that all thought could be mechanized as computable algorithms or programs. Hence the origin of computer intelligence, or AI. But there is now reason to believe that thinking may surpass computability, perhaps especially in its most modular mechanical components, such as pattern perception and recognition (e.g., Hofstadter, 1982; Kugel, 1986), let alone the possibility suggested by Bringsjord that free will requires it. Moreover, the body and brain are integrated structures with many functions that are not essentially of a computational kind. We have no a priori reason for thinking that PB must distinguish cleanly between mechanisms of body and mind, a distinction that may be a holdover of Cartesian dualism. Indeed, one might suggest that the Turing machine interpretation of Church's thesis is just Descartes' rational soul reinterpreted as Turing-computable function. It is time to reject any such Cartesian constraint on our possible understanding of ourselves and robots as person-machines. What is crucial is that, if we are to construct robots that are persons, they had better be able to do what we can do. Only then does it make sense to ask if they are what we are.

12. However, if robots are to be able to match our abilities, we need criteria for testing them that go beyond the robot-Turing test sequence. The problem with the Turing test is that it uses human judges to decide if the program or robot passes. However, it is nature, not humans, that must ultimately judge whether the robots match our abilities, and while the resources for testing by humans are necessarily finite, those of nature are potentially infinite. I have proposed an alternative test that makes nature, not humans, the judge of robots or cybernetic men (Barresi, 1987). This "Cyberiad test" is based on the supposition that if we ever reach the point where we can construct our robot equivalents, then these constructed robots will also be intelligent enough to construct themselves. As a result, one can imagine a society composed entirely of such robots, which should be able to construct their own replacements in order to maintain themselves as a species. While the length of time that such a society could continue to exist without degeneration through entropy is uncertain, if the robots are our intellectual equivalents, then it seems reasonable to expect that they should be able to survive for roughly the lifetime of our own species, say several million years.
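The structure of the test can be caricatured in a few lines of simulation. The sketch below illustrates only the pass criterion (survival across generations of self-replacement, with nature rather than a human judge doing the scoring); all of its dynamics are invented for the example and are not part of Barresi's (1987) formulation:

```python
# Toy illustration of the Cyberiad test's structure. Barresi (1987)
# states the test as a thought experiment; the competence drift and
# replacement probability below are invented purely to exhibit the
# shape of the criterion.

import random

def cyberiad_test(initial_population, generations, seed=0):
    rng = random.Random(seed)
    population = initial_population
    competence = 1.0  # the society's maintained technical competence
    for gen in range(generations):
        # Nature shifts the material and social conditions; the society
        # must adapt (sustain competence) or decay.
        competence += rng.uniform(-0.02, 0.02)
        # Each robot attempts to construct its own replacement; success
        # depends on how well collective competence has been maintained.
        p_replace = max(0.0, min(1.0, competence))
        population = sum(1 for _ in range(population)
                         if rng.random() < p_replace)
        if population == 0:
            return False, gen   # the society failed nature's judgment
    return True, generations    # survived the whole test horizon

passed, lasted = cyberiad_test(initial_population=500, generations=2000)
print("passed" if passed else f"died out at generation {lasted}")
```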

13. I proposed the Cyberiad test as a means for us to evaluate just what is required of our robots if they are to match our abilities as a species, as well as to evaluate theoretical scenarios for how future cybernetic scientists might construct our robot-equivalents. The key feature of the test is the supposition upon which it is based, namely, that these robots are our equivalents in intelligence. Their survival, which will depend on their capacity to recursively replace themselves over a number of generations, will force them to use this intelligence by requiring them to maintain the technical competence and collaborative activity required for a successful robot construction industry over the multiple generations of the test. While this should not be a particularly difficult exercise of their intelligence, given the supposition that they are the equals of the humans who constructed them, it should press this intelligence close to its limits, since it will require them to respond adaptively to the changing material and social conditions of the Cyberiad.

14. The Cyberiad test goes beyond the Turing test by providing a criterion that takes into account the fact that we, as humans, will never be able to predict our own future creative and adaptive acts as a species. Hence, we cannot specify at a particular time any finitely describable set of capacities or knowledge or behavior that will ensure that the robots can pass the test. Rather, what is required is for us to envision and construct robots that can deal with an open future in just the same manner that we do: creating solutions to problems as they arise, taking advantage of these new solutions to extend robot knowledge, and accumulating this knowledge socially through language and culture.

15. Once we realize just how complex we really are as persons in society living with an open future, we may decide that it is just too difficult, and perhaps impossible, to build robot-persons identical to ourselves. Indeed, I have presented a number of arguments to suggest that this will be an impossibility (Barresi, 1987). However, at this point, I am much more sanguine. I no longer see our goal as cybernetic scientists as one of constructing cybernetic persons identical with ourselves. Rather, I would support the idea behind PBP, which is to become clearer about what it is that makes a complex object a person, by building analogue persons that enjoy those abilities we take to be essential to our evolving notion of person. The closer we come to building robot-persons similar to ourselves in capacities, the clearer we will become on just what it is to be a person, rather than to be an object that merely behaves like a person. I suspect that in the end we will either not be able to create robot-persons that are functional equivalents of ourselves in abilities, or we will begin to understand how we, as strictly material beings, enjoy personhood.

REFERENCES

Barresi, J. (1987) Prospects for the cyberiad: Certain limits on human self-knowledge in the cybernetic age. Journal for the Theory of Social Behaviour 17: 19-46.

Bringsjord, S. (1992) What robots can and can't be. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Bringsjord, S. (1994) Precis of: What robots can and can't be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Dennett, D. (1984) Elbow room: The varieties of free will worth wanting. Cambridge, MA: MIT Press.

Hofstadter, D. (1982) Metafont, metamathematics, and metaphysics. Visible Language 16: 309-338.

Kugel, P. (1986) Thinking may be more than computing. Cognition 22: 137-198.

Putnam, H. (1965) Trial and error predicates and the solution to a problem of Mostowski. Journal of Symbolic Logic 30: 49-57.

