Jesse Hobbs (1995) Creating Computer Persons. Psycoloquy: 6(14) Robot Consciousness (9)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 6(14): Creating Computer Persons:

Book Review of Bringsjord on Robot-Consciousness

Jesse Hobbs
Dept. of Philosophy & Religion
University of Mississippi
University, MS 38677



Selmer Bringsjord's book, What Robots Can and Can't Be (1992, 1994), aims to improve incrementally on existing literature on the possibility of computer persons by formalizing arguments that are often bandied about loosely, and by offering new thought experiments to remedy defects in previous ones. It succeeds in this limited aim, but I doubt that such incremental improvements are worth fighting for. I argue that massively counterfactual thought experiments are unreliable sources of intuitions, and that problems with Searle's Chinese Room and the free will argument slip through cracks in the formalization. I further doubt that the person building project which lies at the heart of Bringsjord's discussion merits taking seriously, since it appears highly irrational.


behaviorism, Chinese Room Argument, cognition, consciousness, finite automata, free will, functionalism, introspection, mind, story generation, Turing machines, Turing Test.


1. What Robots Can and Can't Be (Bringsjord, 1992) is a well-researched book full of the thrust and parry, rebuttal and rejoinder typical of scholasticism. Some prefer this style of philosophy, but I find it tedious to wade through statements such as "But given that processes can presumably be multiplied a la Black and company, Rapaport's position would be vulnerable to the sort of attack I brought against Cole's" (p. 206). I rarely have the patience to go back and remind myself of who Black is, what he said about processes, what Cole's position was, how Bringsjord attacked it, and how all of this bears on Rapaport's position. Neither does Bringsjord have the patience to spell all of this out for me.

2. Bringsjord's book aims at incremental improvement, and delivers on this aim. What emerges are two or three epicycles added to existing debates on functionalism, Searle's Chinese Room, and other arguments against Strong AI and computational theories of mind. Those who want to be brought up to date on these debates will find the book useful, but I would have preferred to see the debate raised to a higher plane. I am sympathetic with Bringsjord's conclusions, but I doubt that merely tinkering with existing evidence and argumentative strategies is the most promising approach. If nothing else, this book reveals Bringsjord as the consummate tinkerer.


3. For example, I presume readers are familiar with Ned Block's "Chinese Nation" argument and the polemical remarks about beer cans tied together with string which Searle has directed against functionalism. Functionalism is the position that human rationality and personhood can be fully accounted for in terms of formal and/or causal relations among discrete units; and that any device instantiating the exact same relations has the exact same rationality and personhood, regardless of the material substrate in which they are instantiated -- even beer cans tied together with string. Bringsjord's opening salvo against functionalism involves "4 billion Norwegians all working on railroad tracks in boxcars with chalk and erasers . . . across the state of Texas" (p. 209). They implement a Turing machine program that instantiates a soon-to-be-discovered flow chart of the mental processes of a person in a state of "fearing purple unicorns."

4. After 20 pages of thrusting and parrying, punch and counterpunch, we end up with Norwegians now strung out across Texas, New Mexico, and Arizona, filling and draining troughs of water according to specific instructions, with a new Norwegian holding a hose, and an armature that automates the process (p. 229). The troughs of water represent squares on a Turing machine tape -- a full trough represents a 1; an empty trough a 0. Intuition is then supposed to tell us that such an arrangement could not be conscious, but could instantiate the full functional architecture of a conscious being. I don't deny that if you read the fine print, you can find in this example some improvements over Searle's beer cans and string, but it's the kind of improvement only a mother could love. Are more expansive and dramatic experiments really just what the doctor ordered?
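The substrate-independence claim at stake here can be made concrete in a few lines of code: on the functionalist picture, the troughs (full = 1, empty = 0) plus a fixed rule table exhaust what the Norwegian arrangement computes, since that is all a Turing machine is. The following is a minimal sketch of my own, not Bringsjord's construction; the bit-flipping program and all names are purely illustrative.

```python
from collections import defaultdict

def run(program, tape, state="start", blank="B", max_steps=1000):
    """Run a Turing machine: whether the tape squares are chalk marks,
    troughs of water, or dict entries makes no computational difference."""
    troughs = defaultdict(lambda: blank, enumerate(tape))  # square i -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        # The rule table maps (state, scanned symbol) to an action.
        write, move, state = program[(state, troughs[head])]
        troughs[head] = write   # fill or drain the trough
        head += move            # walk to a neighboring trough
    return [troughs[i] for i in sorted(troughs)]

# Illustrative program: flip every bit, halt at the first blank square.
program = {
    ("start", 1): (0, 1, "start"),
    ("start", 0): (1, 1, "start"),
    ("start", "B"): ("B", 0, "halt"),
}

assert run(program, [1, 0, 1]) == [0, 1, 0, "B"]
```

Whatever one makes of the intuition pump, this much of the functionalist's premise is uncontroversial: the program is fully specified independently of its physical realization.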

5. The problem is that these thought experiments rely primarily on intuitions regarding highly counterfactual, or contrary to fact, states of affairs. There is little evidence that our intuitions in these exotic situations are very reliable or telling. To illustrate, several decades ago philosophers debated whether counterfactual conditionals have truth values at all. A standard example was the following:

    If Bizet and Verdi had been compatriots, Verdi would have been
    French.

This seems reasonable, but no more so than:

    If Bizet and Verdi had been compatriots, Bizet would have been
    Italian.

Unfortunately, they can't both be true, because together they imply:

    If Bizet and Verdi had been compatriots, then Verdi would have been
    French and Bizet would have been Italian,

in which case they wouldn't have been compatriots after all. Could they both be false? That would contradict the even more innocuous,

    If Bizet and Verdi had been compatriots, then either Verdi would
    have been French or Bizet would have been Italian.

They wouldn't have been German, would they? So which of the above are true and which false? If we can't decide such mundane counterfactuals, why believe we can do better with fabulously exotic ones involving trains of Norwegians across Texas using erasers and chalkboards mounted on boxcars?
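For readers who want the offending inference spelled out: writing Cf A B for "if A had been the case, B would have been", the step from the two conditionals to their joint consequence uses only the standard agglomeration rule for counterfactuals. The axiomatization below is my own sketch, not drawn from the text; the absurdity of the joint counterfactual is taken as a premise, since Frenchman-and-Italian entails non-compatriots.

```lean
section
variable (Cf : Prop → Prop → Prop)  -- Cf A B: "if A had held, B would have"
variable (A F I : Prop)  -- A: compatriots, F: Verdi French, I: Bizet Italian

theorem bizet_verdi_clash
    -- Agglomeration: from Cf p q and Cf p r, infer Cf p (q ∧ r).
    (agg : ∀ p q r : Prop, Cf p q → Cf p r → Cf p (q ∧ r))
    -- F ∧ I would make them non-compatriots, so this counterfactual fails.
    (absurd_joint : ¬ Cf A (F ∧ I))
    (h1 : Cf A F) (h2 : Cf A I) : False :=
  absurd_joint (agg A F I h1 h2)
end
```

The formal point is only that the two conditionals cannot jointly stand; which one to give up is exactly the question the text says we cannot answer in a principled way.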

6. The counterfactual game is played by holding everything else in the world fixed except the conditional's antecedent and whatever it necessitates, either logically, or through the laws of nature. This works well when we have a firm grip on what the antecedent asserts and what its implications are, but not otherwise. Although there are principled means for establishing the truth values of some simple counterfactuals, such means aren't available for the more exotic ones, so there is no meaningful sense in which we can declare them true or false.

7. Unfortunately, Bringsjord makes appeal to exotic thought experiments a trademark of his work, calling it "something . . . distinctive about the method followed in this book" (p. 23). Rather than address the above issue squarely, he dismisses it by saying, "I haven't a clue as to why some consider thought-experiments anathema. (Certainly Einstein would have been distressed by this attitude.)" (p. 24). But Einstein's thought experiment (involving the speed of light for observers in different inertial reference frames) was not even counterfactual -- his pre-existing beliefs were counterfactual! His experiment consisted of saying, "Suppose x, and for all we know x might be true," and it turned out that x was true. But when Bringsjord supposes x, not only are we certain that x is dead false, but that x could not be true in any but the remotest possible worlds -- the ones about which we know the least.

8. Conclusion: when someone tells you, "Suppose you are in a Chinese Room in a vat . . . then what do your intuitions tell you?", laughter is the best medicine.


9. Another nice dimension of Bringsjord's treatment is his rigorous formalization of various standard arguments. In what follows I discuss Searle's Chinese Room and the free will arguments against computer persons. The formalizations permit Bringsjord to exposit, evaluate and respond to points made on both sides of the debate. But I think major problems with Bringsjord's positions slip through cracks in the formalization, so that some of his effort ends up misdirected.

10. One of the primary problems with Searle's Chinese Room and similar thought experiments is that they trade on the vagueness and ambiguity in expressions such as "knows" or "understands" Chinese. I traveled to China in 1986 and -- trust me -- you don't know how delighted I would have been to have the book Searle talks about, in which he looks up the squiggles he receives, and which he uses to write down squoggles in reply! The people I talked to would have thought I knew Chinese if I said nothing more than the Chinese equivalent of "I'm fine, thank you." With such a book no doubts would ever have arisen about whether I know Chinese. Can we all be wrong?

11. People who debate the pros and cons of Searle's Chinese Room rarely address whether "knows Chinese" should be interpreted in a theoretical or practical sense, or why. Bear in mind that a Turing machine responds not only to linguistic input from outside (under the door of the Chinese Room), but also to the machine's state at the time it receives that input. A modified Chinese Room robot can also respond to nonlinguistic stimuli. The book in which Searle looks up the squiggles must be sensitive to all these dimensions or it does not fairly represent the computational power of a Turing machine. But this makes the squiggles and squoggles less meaningless than they might otherwise seem.
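The point that the rule book must be indexed by state as well as by symbol can be put in a few lines: the lookup is a finite-state transducer over (state, input) pairs, not a symbol-to-symbol dictionary. This is a toy sketch of my own; the states and the "squiggle"/"squoggle" vocabulary are invented for illustration and come from nobody's actual formalization.

```python
# (current state, input symbol) -> (reply, next state).
# A symbol-only dictionary could not distinguish the same squiggle
# arriving at the ticket window from the same squiggle arriving elsewhere.
RULE_BOOK = {
    ("at_ticket_window", "squiggle-price"): ("squoggle-pay", "awaiting_ticket"),
    ("awaiting_ticket", "squiggle-ticket"): ("squoggle-thanks", "done"),
}

def respond(state, symbol):
    """Consult the book the way Searle's rule-follower would: which page
    applies depends on the state as well as the symbol slipped in."""
    reply, next_state = RULE_BOOK[(state, symbol)]
    return reply, next_state

state = "at_ticket_window"
reply, state = respond(state, "squiggle-price")
assert (reply, state) == ("squoggle-pay", "awaiting_ticket")
```

A book fair to the computational power of a Turing machine must have this shape, which is why the squiggles carry more contextual information than the bare thought experiment suggests.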

12. For example, if I am slipped a piece of paper saying squiggle, and my state is needing a ticket to go from Guangzhou to Beijing, and I am in the state of being at the ticket window of the train station -- first, I know that this squiggle came in response to my previous squoggle, and means something relative to it. (Incidentally, I have some fairly precise knowledge of what that squoggle was about, because it was in response to my needing a train ticket, and was not brought on by any previous squiggle, but by my state changing from being next in line to being at the front of the line. I can therefore infer that my first squoggle meant something like, "I want a ticket to go to Beijing on the hard sleeper tomorrow at 6:30 a.m.")

13. Now I go to the page in the book which covers both this squiggle and this state of affairs, and it tells me to write a new squoggle. If the book is competently written and exhaustively researched, then tomorrow about 6:30 a.m. I will be on the hard sleeper bound for Beijing. I feel justified in saying that I know Chinese under these circumstances, and a person without Searle's book does not, because I know enough to get by, and that's the primary purpose of knowing a language. This is not to say I couldn't come to know Chinese better, or that I shouldn't, but the issue has never been cast as one of degree -- of how well I know Chinese -- but whether I know it at all.

14. Bringsjord discusses an objection to Searle's argument by Rapaport which is similar to mine, and appreciates the need to distinguish understanding(w) (which I have of Chinese) from understanding(s) (which the native speaker has); but having done so he feels that he and Searle are vindicated (p. 204). But different people understand languages in different ways, and the backbone of Searle's contention has never been that computers understand language differently, but that they lack understanding altogether, because they are utterly lacking in intrinsically personal properties such as propositional attitudes, except in a derived or figurative sense (Searle, 1992). Otherwise, his argument would not be that computers can't be persons, but that they constitute different kinds of persons. Since Bringsjord also argues that computers cannot be persons of any kind, he seems similarly required to say that they cannot have understanding of any kind, whether understanding(w) or understanding(s).


15. Bringsjord's appeal to free will can be quickly dispatched. He argues that people have free will in a contra-causal, incompatibilist, "agent causation" sense, and automata do not. Hence persons are not automata, and automata can't be persons. While this conclusion may be devoutly to be desired, Bringsjord really doesn't adduce a shred of evidence that humans have free will in this very strong sense, and such evidence is nearly impossible to find. In fairness, evidence against this claim is equally hard to find -- the debate is more one of conflicting allegiances and gut feelings than anything else -- but the result is that any argument inevitably turns on some question-begging premise or other.

16. A rough and ready summary of Bringsjord's argument is as follows (adapted from p. 281):

    No one ever has power over any state of affairs unless
    indeterminism is true and iterative agent causation is sometimes
    true.

    If no one ever has power over any state of affairs, then no one is
    ever morally responsible for anything that happens.

    Someone is morally responsible for something that happens.

    Therefore, it's not the case that no one ever has power over any
    state of affairs, and iterative agent causation is sometimes true.

    Iterative agent causation is never true of automata.

    Therefore, the thesis that persons are automata is false.
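For what it is worth, the propositional skeleton of the summary above does check out. The reconstruction below is my own (Bringsjord's formalization is quantified and modal; this collapses it to propositional form), with the premises labelled to match the summary.

```lean
section
-- Power: someone has power over some state of affairs; Indet: indeterminism;
-- IAC: iterative agent causation is sometimes true; Resp: someone is morally
-- responsible; Automata: persons are automata.
variable (Power Indet IAC Resp Automata : Prop)

theorem persons_not_automata
    (p1 : Power → Indet ∧ IAC)   -- power requires indeterminism and IAC
    (p2 : ¬Power → ¬Resp)        -- no power, no moral responsibility
    (p3 : Resp)                  -- someone is morally responsible
    (p4 : Automata → ¬IAC)       -- IAC is never true of automata
    : ¬Automata :=
  fun hA =>
    let hPower : Power := Classical.byContradiction (fun h => p2 h p3)
    p4 hA (p1 hPower).2
end
```

So the argument is valid as it stands; the complaints that follow concern the premises, not the inference.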

17. Something is dreadfully wrong on the face of it, because God could be the only person that iterative agent causation has ever been true of -- the only person ever morally responsible for anything -- and the premises would all be true while human beings are nothing but automata, and so the conclusion is apparently false. Indeed, generations of Calvinists have believed something like this. Bringsjord would reply that "Persons are automata" means "All persons are automata", and even if every human is an automaton, the fact that God is not preserves the truth of the conclusion. But those engaged in the person building project could reply that they are only concerned with human persons -- not angelic beings. Their claim that all human personhood and rationality is adequately representable by automata is still untouched by Bringsjord's free will argument.

18. The third premise worries me more than this. Isn't the issue precisely over whether people are morally responsible, as traditionally understood? We have correctional institutions on the assumption that crimes have causes and these causes can in principle be corrected, even if we don't understand human behavior well enough to correct them now. Assuming moral responsibility for humans is simply question-begging, and although I hope it is true, it embarrasses me to see people of my ideological persuasion argue in so high-handed a manner -- I'm afraid Bringsjord and others are giving people like me a bad name.

19. For the record, this is what Bringsjord says of human moral responsibility (p. 302):

    Defending this premise is like defending yourself against the
    skeptic. There are people who insist that we can't know we aren't
    brains in vats. And I suppose there are people who claim that no
    one is morally responsible for anything that happens. The only
    thing that can be given to such people is a story . . . .

(The story that follows is one of unspeakable cruelty -- fortunately, highly counterfactual -- which I trust the gentle reader will not want me to repeat.)

20. The story is irrelevant and the analogy faulty. People have often been held responsible for acts which later were discovered to have other contributing causes, but there has never been a person who was discovered to be a brain in a vat while thinking he/she was not. Juvenile delinquents turn out to suffer from chemical imbalances in the brain at times, and if this happens sometimes, how can we be sure it doesn't happen all the time, but in more complex ways than our primitive neuroscience can currently identify? Scientists are not given to Pyrrhonean skepticism, and no scientist worries that we might be brains in a vat, but many scientists doubt that human moral responsibility exists as traditionally conceived. There is no evidence for contracausal free will sufficient to satisfy the scientific community, and this should worry Bringsjord, but it does not. His response dangerously underestimates the strength of the opposing point of view.

21. He also produces no evidence or argument that a robot could not be morally responsible for its actions.


22. Bringsjord misses a major opportunity by not using his philosophical platform to expose some of the pat assumptions which typically undergird AI research. Why do John Pollock and others want to build a computer person (understood now as a genuine person in the fullest sense of the term, having rights that create correlative duties on our part, such as perhaps a duty not to unplug it)? I am not satisfied with Pollock's (1989) stated rationale that it will help us understand human rationality better.

23. For example, the market value of a Cadillac "with a personality" or a Mercedes which runs "when it wants to" is nowhere near that of a car that runs first time, every time. What makes computer programs any different? A computer person cannot be sold or profited from, nor can it be treated the way every other computer is -- that would be slavery -- nor can it be debugged -- that would be undue influence -- nor can it be discarded in a year when the next generation of more powerful computer persons comes on line. Could it be discarded ever? We don't eliminate persons once they outlive "their useful lives," and a computer person would not die of natural causes either -- even computer viruses don't generally kill computers. So there seems to be little incentive to create a genuine computer person, as opposed to something with equal or greater computational power which is more reliable, and which the creator can use or dispose of as he or she sees fit.

24. A computer cannot be a person in the fullest sense without having nearly total autonomy from its creator, not to mention unpredictability and a sense of self-worth. If these are not firmly grounded in reality -- if the autonomy and unpredictability are not genuine -- then no serious claim to personhood can be entertained. But if they are firmly grounded in reality, turning such a device loose would create a major liability risk. For example, Pollock could be guilty of negligence if harm resulted from some defect in the computer person's design or engineering. Under today's strict liability laws, this is not an idle possibility, nor does it matter whether Pollock foresaw the problem, or thought it foreseeable. Parents are often held responsible for property damage caused by their children.

25. If this machine is indeed installed with the same rational makeup found in human beings, then it could rob a bank or habitually show up late for work. People do such things, and are not obviously irrational. If it in turn gave Pollock a cut from the proceeds, there isn't a court in the world which would believe Pollock's asseverations of his own innocence and of the computer's autonomy, because fraud would be so easy to commit and so hard to disprove in the computer's reams of installed code. In short, creating a genuine computer person -- as opposed to a computer program with similar capabilities but lacking personhood -- not only offers fewer benefits, but imponderably greater risks. It is irrational.

26. Therefore, I doubt Pollock's ingenuousness when he says that this is precisely what he and others are engaged in, and I don't understand why Bringsjord takes the person building project so seriously. Talk of building persons has nothing more behind it than shock value and a desire to titillate. If there ever comes a day when the AI community has the capability of creating a person, or when outsiders think it is close to doing so, the prevailing attitude in the AI community will be fear, not delight -- fear of mountains of governmental regulations and paperwork; fear of attacks on AI laboratories by right-wing religious ideologues; fear of prosecutors and regulators watching every step; fear of protracted, fruitless legal wrangling. The AI community will do everything in its power to make sure that no computer person ever gets created, either accidentally or intentionally, the same way the recombinant DNA people were forced to respond during the 70s to public awareness of the awesome potential of their research.

27. Pollock's (1989) rationale that building a computer person will help us understand human rationality better does not pass ethical muster, either. It is comparable to nontherapeutic research performed on infants. The fact that humans often bring infants into the world for selfish and immoral reasons is irrelevant -- they shouldn't. Neither should the person building project proceed until it passes ethical muster, which requires among other things that the computer person be created for its own sake. This in turn requires answering the questions of what sakes are, and how things can be given sakes which don't have them already -- a problem that doesn't arise for human persons.

28. Another possibly legitimate reason for creating a computer person can be gleaned from philosophy of religion, where it is argued that a reason worthy of an almighty God for creating human persons is to show us the bounty of God's love. So far, I have not heard AI researchers talk about the bountiful love they intend to bestow on their computer person offspring, but perhaps I haven't been listening closely enough.


Bringsjord, S. (1992). What Robots Can and Can't Be. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Bringsjord, S. (1994). Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Pollock, J. (1989). How to Build a Person: A Prolegomenon. Cambridge: MIT Press.

Searle, J. (1992). The Rediscovery of the Mind. Cambridge: MIT Press.
