Gregory R. Mulhauser (1995) What Philosophical Rigour can and Can't be. Psycoloquy: 6(28) Robot Consciousness (15)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 6(28): What Philosophical Rigour can and Can't be

WHAT PHILOSOPHICAL RIGOUR CAN AND CAN'T BE
Book Review of Bringsjord on Robot-Consciousness

Gregory R. Mulhauser
Department of Philosophy
University of Glasgow G12 8QQ
Scotland

scarab@udcf.gla.ac.uk

Abstract

Clarity of argumentation is the strongest mark in favour of What Robots Can and Can't Be (1992), in which Selmer Bringsjord attempts to show that what he calls the "Person Building Project", or (PBP) -- the proposition that "Cognitive Engineers will succeed in building persons" (p. 7) -- is doomed to failure. Interestingly, while denying (PBP), Bringsjord affirms that cognitive engineering will succeed in producing robots capable of passing more and more difficult versions of the Turing test -- it's just that those robots won't be persons. Unfortunately, the book is plagued with persistent technical errors which render its arguments wholly unconvincing.

Keywords

behaviorism, Chinese Room Argument, cognition, consciousness, finite automata, free will, functionalism, introspection, mind, story generation, Turing machines, Turing Test.

I. INTRODUCTION

1. Despite the book's title, Bringsjord ignores robots almost entirely, asserting explicitly that, "though a fleshed-out realistic view of logicist person-building must include talk of sensors, effectors, brains, and the outside environment, talk of what persons, at the core, are, need not be clouded by these matters" (Bringsjord, 1992, p. 255). Bringsjord really is concerned with the "identity" between persons and automata (his (PER-AUT)) such as Turing machines (p. 8), an identity which he takes to be implied by (PBP), and it is to these arguments against (PER-AUT), which constitute the bulk of the book, that we devote most of our attention here.

2. In what follows, we first outline methodological problems and technical inaccuracies which compromise the book before considering in more detail some specific arguments that persons are not Turing machines. We shall see that What Robots Can and Can't Be (hereafter, Robots) demonstrates little in the way of philosophical rigour to justify the text's supremely confident tone.

II. SHAKY FOUNDATIONS

3. Perhaps the first immediately obvious methodological difficulty is Bringsjord's misuse of identity in (PER-AUT), "Persons are Turing machines" (or automata, Universal Turing machines, etc., p.8). Turing machines are mathematical abstractions possessed of, among other things, arbitrarily long input/output tapes. Alternatively, Turing machines are sets of numbers. Even the most ardent fans of (PBP) would be reluctant to assert either that persons have arbitrarily long input/output tapes or that persons are sets of numbers. Indeed, Bringsjord himself notes that "Persons are genuine individual things, not logical constructions" (p. 83). Establishing Bringsjord's desired negation of the proposition that "Persons are Turing machines" and thus, by modus tollens, the negation of (PBP) is trivial. This is a minor trouble which could be patched up without undue difficulty, yet in at least one case (pp. 126-128, discussed below) Bringsjord's arguments hinge on this triviality.
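
For readers unused to thinking of Turing machines this way, it may help to see that a Turing machine really is nothing more than a finite transition table together with an unbounded tape -- an object which can itself be encoded as a set of numbers. The following minimal sketch is my own illustration, not anything drawn from Robots; the two-state table and six-step simulation are invented for the example.

    # A Turing machine is just a finite transition table plus an
    # unbounded tape. This invented two-state example writes 1s and
    # marches rightward; encoding the table as tuples makes vivid the
    # sense in which the machine is simply a set of numbers.
    from collections import defaultdict

    table = {                    # (state, symbol) -> (write, move, state)
        ('A', 0): (1, +1, 'B'),
        ('B', 0): (1, +1, 'A'),
    }

    tape, head, state = defaultdict(int), 0, 'A'
    for _ in range(6):           # six steps of the abstraction
        write, move, next_state = table[(state, tape[head])]
        tape[head] = write
        head += move
        state = next_state

    print(sorted(tape.items()))  # [(0, 1), (1, 1), ..., (5, 1)]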

4. Slightly more troubling is the treatment of probabilistic automata, first categorised oddly in terms of real-valued rather than rational-valued probability distribution functions (p. 75) and then glossed over (pp. 116-117) in a discussion which ignores basic issues about noise, quantum indeterminacy, levels of description, and complexity. The deck looks stacked against probabilistic automata anyway, as Bringsjord later reveals that his argument about free will (discussed below) "is formidable enough to forever quiet those who hold that merely noting that automata can have the mathematical property of non-determinism ... is enough to derail arguments against the Person Building Project from 'free will'" (p. 327).

5. Ironically, the most disturbing technical faults concern the kinds of issues in theoretical computer science which command centre stage in Robots. For instance, the bizarre statement that "there is fundamental agreement that analog devices ... employ a non-symbolic form of representation" (p. 76) is followed by "All the evidence suggests that analog computers are no more powerful than non-analog computers" (p. 76) and "Turing machines and neural nets are equal in power" (p. 239). I am puzzled as to the reference of the first quotation, but conclusive evidence has recently emerged (Siegelmann, 1995, most importantly; see also Mulhauser, 1992, 1993; Siegelmann and Sontag, 1994) that chaotic analogue neural networks can compute functions beyond the capabilities of Turing machines, including the so-called unary halting function (Hopcroft and Ullman, 1979). It's easy enough to excuse Robots for contradicting a paper published two or three years later, but ignoring earlier work on analogue computation (such as Vergis et al., 1986; Blum et al., 1989) and whitewashing the genuine controversy in the area is not as easy to forgive.

6. Perhaps it's worthwhile noting explicitly that analogue neural nets with so-called super-Turing capabilities, even if relevantly similar to chaotic neural nets in the human brain, do not, by themselves, constitute an argument against (PBP) or the kinds of functionalist theories on which Bringsjord believes (PBP) rests. The conviction of most 'person builders' is merely that the essential properties of person-hood (or cognition or consciousness or whatever) are to be found in some kind of information flow which can be duplicated by some kind of computational substrate, whether that substrate operates according to a Turing model of computation or some superset of it.

7. Along the lines of this caveat, Bringsjord's comment is all the more out of place: "...do discoveries in physics impact work in theoretical computer science concerning the mathematical limits of automata? No, not at all" (p. 271). The two are intimately connected, and in some places in the text (such as pp. 304-308, discussed below), obfuscation of this fact is the only thing which lends the book's arguments even superficial plausibility. David Deutsch (see also Landauer, 1991 and the many references therein) put it well ten years ago, the embedded assumption of materialism notwithstanding:

8. "... there is no a priori reason why physical laws should respect the limitations of the mathematical processes we call 'algorithms' ... there is nothing paradoxical or inconsistent in postulating physical systems which compute functions not in [the set of recursive functions] ... Nor, conversely, is it obvious a priori that any of the familiar recursive functions is in physical reality computable. The reason why we find it possible to construct, say, electronic calculators, and indeed why we can perform mental arithmetic, cannot be found in mathematics or logic. The reason is that the laws of physics 'happen to' permit the existence of physical models for the operations of arithmetic such as addition, subtraction and multiplication. If they did not, these familiar operations would be non-computable functions. We might still know of them and invoke them in mathematical proofs (which would presumably be called 'non-constructive') but we could not perform them." (Deutsch 1985, p. 101)

9. Alternatively, Siegelmann (1995) puts it simply: "Computer models are ultimately based on idealized physical systems, called 'realizable' or 'natural' models" (p. 545).

10. Finally, Robots obscures questions at the heart of the connectionist-logicist debate on each and every occasion it is mentioned. Such questions, often phrased in terms of 'compositionality' or 'systematicity' or 'constituent structure', include the problem of whether connectionist networks can perform symbolic operations on distributed representations without explicit decomposition back into symbols. Robots doesn't acknowledge that this is an issue (nor is there any reference to the original Fodor and Pylyshyn, 1988, although Bringsjord's own 1991 article is cited no fewer than six times in this context or that of analogue computation). There is merely the repeated assertion that because neural nets and Turing machines are computationally equivalent -- an assertion which we have already noted is false -- there is no real debate at all.

III. IN SEARCH OF RIGOUR

11. Space considerations preclude our discussing each of Bringsjord's arguments against (PBP) or (PER-AUT); instead, I offer what I take to be a representative sample of the kinds of arguments (and their errors) around which the bulk of the book is formed.

12. In chapter 3, titled "Arguments Pro, Destroyed", Bringsjord considers arguments for (PBP) or related propositions and then has his way with them. The chapter opens with Nelson's (1982) analogical argument that humans are likely to be automata because of many characteristics apparently held in common. After several pages (pp. 97-105) of exposition, Bringsjord finishes, "Nelson's argument will succeed only if I fail in the rest of this book" (p. 105). That the success of Nelson's argument entails the failure of the rest of the book is singularly unenlightening and offers Robots no argumentative support.

13. We do find an argument immediately after this, however, where Bringsjord considers a position originally due to Dennett (1976) and modified by Nelson (1982). Nelson and Dennett suggest that a psychological theory T ought to explain cognitive phenomena in terms of minimal functional parts which do not themselves require intelligence. These minimal parts, they suggest, will follow Turing computable processes, and therefore the whole theory itself will describe a Turing computable process. Bringsjord replies that the minimal parts could actually be computing something like what is apparently his favourite noncomputable function, the so-called Busy Beaver function. And to make sure such computation doesn't by itself require intelligence, he suggests it could be computed "by a random dance of atoms or, alternatively, by dim busy beavers" (p. 114). The obvious question is why we should suppose a psychological theory T would describe minimal parts computing noncomputable functions -- unless, of course, we already believe persons compute noncomputable functions anyway. For that matter, why not just suppose that everything a person does is a result of "a random dance of atoms"? Robots offers no argument for why we should suppose either noncomputability or random atom dances in the minimal components of a theory meant to explain psychology.
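
As an aside, the noncomputability of the Busy Beaver function rests on a simple reduction which is worth making concrete. The sketch below is my own illustration, not from Robots; the bb oracle and the simulate helper are hypothetical, which is precisely the point, since no computable bb can exist.

    # Hypothetical sketch: if bb(n) computably bounded the running time
    # of every halting n-state Turing machine on blank input (the
    # 'maximum shift' flavour of the Busy Beaver function), the Halting
    # Problem would be decidable. 'simulate' is an assumed helper which
    # runs a machine for at most the given number of steps and returns
    # True if it halted within the bound.

    def decides_halting(machine, n_states, bb, simulate):
        """A machine still running after bb(n_states) steps never halts."""
        return simulate(machine, bb(n_states))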

14. Finally, the chapter finishes with a brief consideration of an argument, similar to that of Dennett and Nelson, attributed to David Cole. Cole's position is that if we grant that the neural constituents of brains have computable transfer functions, and if we grant that "brains produce mentality", then persons must be automata. Bringsjord's reply is to question the inference from the brain's producing mentality to the associated person's being an automaton, but ironically this reply is plausible only because of the obfuscation inherent in (PER-AUT) (see above, paragraph 3). His neglecting to provide an appropriate analysis of identity in (PER-AUT) renders absurd his careful criticism of the issue here.

15. Chapter 4 is the odd one out in Robots, in that it's meant to be a defense of his proposition (ROB), the position that cognitive engineers will eventually build a robot able to excel in the Turing test sequence. The second half of the chapter is a discussion of Bringsjord's own research into computer-generated fiction which, while interesting, contains little that bears philosophically on the rest of the book. The first half of the chapter, by comparison, is more than thirty pages of discussion of Manning (1987), ending with the underwhelming conclusion that a computer meant to solve Sherlock Holmes-style mysteries would face the Frame Problem. Solving a murder mystery is almost a textbook example of an activity which might lead a computer to face the Frame Problem.

16. The next chapter cavalierly brushes off the enormous literature on Searle (1980) with the assertion that Bringsjord's own approach "takes us past nearly all of this dialectic, and puts us at perhaps the final moment in it" (p. 185). Unfortunately, the mechanics of the approach do not bear out quite such a degree of optimism. Bringsjord appeals to an imaginary mono savant called Jonah, who can, without actually understanding a word of the language, quickly compile LISP programs and run them on a Register Machine he visualises in his mind. Bringsjord considers the position, which he takes to be a consequence of (PBP), that running the right program P on a machine M would make it so that there is a person s associated with M who understands Chinese. But this position, he says, means that if Jonah ran such a program on his visualised Register Machine, then Jonah must understand Chinese. Since Jonah doesn't understand Chinese, so the story goes, (PBP) must be false. But obviously neither Jonah nor his physical substrate, if he has one, is identical to the Register Machine he is visualising, so why should we think he is the person s who understands Chinese? (That would be a neat trick: to become the person associated with a machine by imagining the machine running some software.) After some 14 pages of ruminations about psychological identity and multiple persons associated with one body, Bringsjord's reply to a similar objection raised by Cole (1990) boils down to "I think intuition is on my side" (p. 201), while his response to Rapaport's (1990) version of the objection is another pointer forward to discussions in the coming chapter.

17. That next chapter, on ARA, or arbitrary realisation arguments, can be summed up very neatly: Block (1978). Bringsjord tells us:

18. "Though proponents of functionalism will continue to cook up moves to thwart ARA-like arguments ... I think it's pretty clear that they're fighting a losing battle. The dialectic has been going on for nearly fifteen years, and has brought us to where we now stand in this chapter. I, for one, am ready just to take as a starting place that ARA has once and for all demolished AI-functionalism ..." (p. 223)

19. If we are happy just to accept that ARA destroys functionalism in the first place, and given that Bringsjord takes functionalism to be a consequence of (PBP), why bother with the rest of the book? Why not just point to Block (1978)? Moreover, since it isn't clear how this chapter adds anything new to the discussion, redeeming a few of Robots' forward-pointing promissory notes from as far back as Chapter 3 amounts simply to asking whether or not we accept Block's original play on intuition dating back to 1978. It's also interesting to note as an aside that one defense Bringsjord offers for Block is a straightforward appeal to dualism (pp. 215-216).

20. In the next chapter, Bringsjord attempts to reanimate arguments against (PER-AUT) from Gödel's incompleteness theorems, a project which he concedes is nearly universally considered a red herring. By the end of the chapter, unfortunately, Robots offers up no reason to think it anything but.

21. After formulating a singularly unconvincing version of a Gödelian argument -- unconvincing because it posits a Turing machine called Ralf who has "discovered" in some unspecified way the truth of a Gödel sentence -- Bringsjord considers Kirk's (1986) objection that incompleteness results don't apply if we choose to identify persons with finite automata. Kirk's argument includes the reasonable observation that without relying on some infinitely large external aid, the size of computations that persons can perform is limited. To this, Bringsjord replies (p. 246) that persons could get by just fine with finite external storage space because of some capacity for apparently arbitrary compression of representations which he imagines persons to have. The reply blatantly misrepresents basic facts about information compression and complexity theory (Shannon and Weaver, 1949; Deutsch, 1985; Chaitin, 1987; Bennett, 1990; Li and Vitanyi, 1993; Mulhauser, 1995). For instance, elementary cardinality considerations show that fewer than one in a million binary strings are compressible by more than 20 bits.
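
To spell out the counting (my own arithmetic, not anything from Robots): a string compressible by more than k bits needs its own distinct description more than k bits shorter than itself, and there are too few short descriptions to go around.

    # Sketch of the cardinality argument: at most 2**(n-k) - 1 binary
    # strings are short enough (length < n - k) to serve as
    # descriptions, so at most that many of the 2**n strings of length
    # n can be compressed by more than k bits.

    def compressible_fraction_bound(n, k):
        """Upper bound on the fraction of n-bit strings with a
        description shorter than n - k bits."""
        return (2 ** (n - k) - 1) / 2 ** n

    print(compressible_fraction_bound(100, 20))   # about 9.5e-07

The bound is essentially 2**-k regardless of n, so with k = 20 fewer than one string in a million qualifies.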

22. Still arguing against Kirk, Bringsjord shifts quickly from arithmetical abilities to storytelling, where he offers the assertion that "it certainly seems logically possible that a superhuman ... could both grasp and generate each and every story..." (p. 251) in some infinite set of stories. He models this infinite set on an intuitive appeal to well-formed collections of "English words and neologisms indicating novel concepts" (p. 249) or, alternatively, on Chomsky's (1956, 1957) observations about paired correspondences nested to an arbitrary depth. Insofar as generating stories requires keeping track of the stories being generated, however, the assertion begs exactly the same information compression questions as the earlier reply about performing computations on arbitrarily huge problems.

23. Back on the offensive for his Gödelian argument, Bringsjord finishes the chapter supposing that Ralf (who, recall, is our hypothetical super-capable Turing machine person) can, "given enough time and energy" (p. 262), see the truth of any first-order self-referential formula. By appeal to the Fixed Point Theorem, Bringsjord concludes that Ralf can thus solve the Halting Problem, and so Ralf couldn't have been a Turing machine in the first place. But into the phrase "given enough time and energy", Robots already packs everything needed to exceed the limits of Turing computability: being able to 'see the truth' of an arbitrary first-order self-referential formula may take an infinite amount of time and is equivalent to being able to complete an infinite number of Turing computations. And this amounts to being able to solve the Halting Problem anyway. So we can dispense with the Fixed Point Theorem and the rest of it, really: if we are happy to attribute to Ralf the capability to compute noncomputable functions anyway, then the argument goes through. But as to why we should be happy with this -- unless, of course, we already believe persons compute noncomputable functions anyway (see paragraph 13) -- Robots offers neither insight nor argument.
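
The hidden strength of "given enough time and energy" is easy to make vivid with a Zeno-style schedule (my own illustration, not Bringsjord's): perform step k of a computation at time 1 - 2^-k, and infinitely many steps fit before a single second elapses.

    # Infinitely many steps in finite time: if step k occurs at
    # t = 1 - 2**-k seconds, every step of an endless computation
    # happens before t = 1. A machine on this schedule could watch a
    # simulation and report at t = 1 whether it ever halted -- which is
    # to say it could solve the Halting Problem.

    step_time = lambda k: 1 - 2.0 ** -k
    print([step_time(k) for k in (1, 2, 10, 30)])  # approaches, never reaches, 1.0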

24. But if this argument seems to strain credulity about the Turing machine business to its limit, chapter 8 is even more curious. Here, Robots appeals to an incompatibilist, libertarian notion of free will and the infinite regress (dubbed 'iterative agent causation', as if having a name should make it more palatable) which the view entails. Although this sort of position no doubt loses the interest of most readers, it is worth going through Bringsjord's main argument, because it is truly remarkable. First, he suggests that if determinism is true, no one has power over any state of affairs. Next, he says only iterative agent causation allows agents to have power over states of affairs if indeterminism is true. Now the remarkable part: if no one ever has power over any state of affairs, then no one is ever morally responsible for anything that happens, but (presto!) someone is morally responsible for something that happens. Therefore, through some simple implications and the supposition that either determinism or indeterminism is true, Bringsjord concludes that iterative agent causation is true -- and with iterative agent causation comes the capacity of persons to exist in an infinite number of distinct mental states in a finite amount of time. Here the argument enters the surreal with a defense of the proposition that iterative agent causation implies the negation of (PER-AUT).

25. The defense, although stated rather opaquely in the text (pp. 305-308), is really quite simple: equate this infinite sequence of distinct mental states in finite time with a Turing machine allowed to finish infinite sequences of computations in finite time; since such a machine can compute noncomputable functions, it isn't really a Turing machine, and neither are persons possessed of iterative agent causation. Unsurprisingly, however, Robots offers no reason why we should believe that an agent's infinite mental states correspond in any way to actual useful states of computation in a Turing machine. (And to be explicit, proponents of something like Bringsjord's characterisation of functionalism or (PBP) would be quite happy with the suggestion that persons pass through plenty of computationally irrelevant physical states, just like my Macintosh does, so the rationale for such a bizarre equation certainly does not come from them. See Marr, 1982, and Pylyshyn, 1984, for relevant notes on levels of description and computational irrelevance.)

26. In a final set of arguments Robots defends 'hyper-weak incorrigibilism', the view (p. 335) that it is necessarily true that if a person believes himself to seem to be in a particular state (such as feeling pain), then he does seem to be in that state. Here Bringsjord argues that persons possess hyper-weak incorrigibilism but that robots could not, on account of the physical unreliability of their hardware. (Curiously, Bringsjord cites McEliece's 1985 Scientific American article on unreliable hardware but neither of the classic works by von Neumann, 1956, or Winograd and Cowan, 1963.) It's worth noting that Bringsjord concedes both "for the sake of argument" (p. 346) that his position is also an argument against materialism and that the argument itself relies on there being an explicit symbolic representation in a robot of a proposition of the form, "the robot believes the robot seems to be in pain". The first concession precludes Bringsjord's argument being used against (PER-AUT), since a dualist could maintain that a person really was an abstract Turing machine which just couldn't be perfectly instantiated in the material world. (He intends it instead as a direct attack on (PBP).) Bringsjord's defense of the latter requirement, that of explicit symbolic representation, against two objections is inadequate, but the argument itself can be defused easily without rehearsing those objections.

27. We may grant, for the sake of argument, that for the persons we observe in the actual world, if they believe they seem to be in such and such a state, then in fact they do seem to be in such and such a state. But Robots offers no rationale for injecting a modal operator. That is, why should we infer from what is observed actually to be the case to its necessarily being the case? The subtle difference is significant. Under a non-modal interpretation, some person for whom hyper-weak incorrigibilism suddenly fails (that is, who finds he's mistaken about seeming to be in such and such a state) is no longer a person. Perhaps he's gone insane or whatever -- although even this seems needlessly stringent -- but if we hold that hyper-weak incorrigibilism is simply a necessary condition of personhood, then he is no longer a person. But under Robots' extravagant modal interpretation, a person who experiences such a failure is a logical impossibility.
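
The scope distinction can be put formally (a sketch in my own notation, not Bringsjord's), writing P(x) for 'x is a person', B(x) for 'x believes he seems to be in the state', and S(x) for 'x does seem to be in the state':

    % Non-modal reading: in the actual world, persons satisfy the conditional.
    \forall x\,[P(x) \rightarrow (B(x) \rightarrow S(x))]
    % Modal reading: the conditional holds of necessity, in every possible world.
    \Box\,\forall x\,[P(x) \rightarrow (B(x) \rightarrow S(x))]

Under the first reading, a failure of B(x) -> S(x) simply entails that x is not a person; under the second, such a failure cannot occur in any possible world.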

28. The difference it makes to the argument, of course, is that for Bringsjord a person cannot become a non-person through failure of hyper-weak incorrigibilism, but only through failure of some other person-characteristic. That's a bizarre sort of property: once you've got it, you can't get rid of it! On a non-modal interpretation, which avoids this fishiness -- while maintaining the notion that in fact when persons believe themselves to seem to be in certain states, they do seem to be in those states -- Bringsjord's argument is rendered impotent. A robot experiencing hardware failure and a resultant failure of hyper-weak incorrigibilism has ceased to be a person, just as any other person who experiences such a failure ceases to be a person.

IV. CONCLUSION

29. Although we've not rehearsed every single argument which Robots offers against (PBP), we've explored flaws in a wide range of central arguments and exposed many of the underlying shortcomings in terms of methodology and the presentation of basic facts about computation, information, and complexity. The single strongest merit of the book, which does make it worth reading, is the high standard of clarity in argumentation displayed throughout most of the text. While I find the arguments themselves unsatisfying and technically flawed, it is refreshing to see philosophy presented in a clear enough fashion that it is possible quickly and easily to establish where there is an argument and how to evaluate it.

REFERENCES

Bennett, C.H. (1990) Entropy and Information: How to Define Complexity in Physics, and Why. In: W.H. Zurek (ed.) Complexity, Entropy, and the Physics of Information. Redwood City, CA: Addison-Wesley: 137-148.

Block, N. (1978) Troubles with Functionalism. In: Perception and Cognition: Issues in the Foundations of Psychology. Minnesota Studies in the Philosophy of Science, vol. 8. Minneapolis, MN: University of Minnesota Press.

Blum, L.; Shub, M. and Smale, S. (1989) On a Theory of Computation and Complexity over the Real Numbers: NP-Completeness, Recursive Functions and Universal Machines. Bulletin of the American Mathematical Society 21: 1-47.

Bringsjord, S. (1991) Is the Connectionist-Logicist Clash One of AI's Wonderful Red Herrings? Journal of Experimental and Theoretical Artificial Intelligence 3: 319-349.

Bringsjord, S. (1992) What Robots Can and Can't Be. Boston: Kluwer Academic Publishers.

Bringsjord, S. (1994) Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Chaitin, G.J. (1987) Algorithmic Information Theory. Cambridge: Cambridge University Press.

Chomsky, N. (1956) Three Models for the Description of Language. IRE Transactions on Information Theory 2: 113-124.

Chomsky, N. (1957) Syntactic Structures. The Hague: Mouton.

Cole, D. (1990) Artificial Intelligence and Personal Identity. April 1990 APA Central Division Meeting: New Orleans, LA.

Dennett, D. (1976) Why the Law of Effect Will Not Go Away. Journal for the Theory of Social Behaviour 5: 169-187.

Deutsch, D. (1985) Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer. Proceedings of the Royal Society of London A400: 97-117.

Fodor, J.A. and Pylyshyn, Z. (1988) Connectionism and Cognitive Architecture: A Critical Analysis. Cognition 28: 3-71.

Hopcroft, J.E. and Ullman, J.D. (1979) Introduction to Automata Theory, Languages, and Computation. Reading, MA: Addison-Wesley.

Kirk, R. (1986) Mental Machinery and Gödel. Synthese 66: 437-452.

Landauer, R. (1991) Information is Physical. Physics Today 44: 23-29.

Li, M. and Vitanyi, P. (1993) An Introduction to Kolmogorov Complexity and Its Applications. New York: Springer-Verlag.

Manning, R. (1987) Why Sherlock Holmes Can't Be Replaced By an Expert System. Philosophical Studies 51: 19-28.

Marr, D. (1982) Vision. New York: Freeman.

McEliece, R.J. (1985) The Reliability of Computer Memories. Scientific American 252: 88-95.

Mulhauser, G.R. (1992) Computability in Neural Networks. Short paper presented September 1992 at British Society for the Philosophy of Science meeting: Durham, England.

Mulhauser, G.R. (1993) Computability in Chaotic Analogue Systems. Presented July 1993 at the International Congress on Computer Systems and Applied Mathematics: St. Petersburg, Russia.

Mulhauser, G.R. (1995) To Simulate or Not to Simulate: A Problem of Minimising Functional Logical Depth. In: F. Moran, A. Moreno, J.J. Merelo, P. Chacon (eds.) Lecture Notes in Artificial Intelligence 929. Berlin: Springer-Verlag: 530-543.

Nelson, R.J. (1982) The Logic of Mind. Dordrecht: D. Reidel.

Pylyshyn, Z.W. (1984) Computation and Cognition: Towards a Foundation for Cognitive Science. Cambridge, MA: MIT Press.

Rapaport, W.J. (1990) Computer Processes and Virtual Persons. Technical Report 90-13. Department of Computer Science. SUNY at Buffalo.

Searle, J. (1980) Minds, Brains, and Programs. Behavioral and Brain Sciences 3: 417-424.

Shannon, C. and Weaver, W. (1949) The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press.

Siegelmann, H.T. (1995) Computation Beyond the Turing Limit. Science 268: 545-548.

Siegelmann, H.T. and Sontag, E.D. (1994) Analog Computation via Neural Networks. Theoretical Computer Science 131: 331-360.

Vergis, A., Steiglitz, K. and Dickinson, B. (1986) The Complexity of Analog Computation. Mathematics and Computers in Simulation 28: 91-113.

von Neumann, J. (1956) Probabilistic Logic and the Synthesis of Reliable Organisms from Unreliable Components. In: C. Shannon and J. McCarthy (eds.) Automata Studies. Princeton: Princeton University Press: 43-98.

Winograd, S. and Cowan, J.D. (1963) Reliable Computation in the Presence of Noise. Cambridge, MA: MIT Press.

