Kevin B. Korb (1995) Persons and Things: Book Review of Bringsjord on Robot-Consciousness. Psycoloquy: 6(15) Robot Consciousness (10)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

PERSONS AND THINGS:
Book Review of Bringsjord on Robot-Consciousness

Kevin B. Korb
Dept. of Computer Science
Monash University
Clayton, Victoria 3168
Australia

korb@cs.monash.edu.au

Abstract

Selmer Bringsjord's What Robots Can and Can't Be (1992, 1994) reviews many of the arguments and thought experiments current in the foundational literature with the aim of refuting functionalism and thereby the possibility of building persons out of computers. Some of these arguments might work, but only if the functionalism under assault is sufficiently distant from biological functionalism.

Keywords

behaviorism, Chinese Room Argument, cognition, consciousness, finite automata, free will, functionalism, introspection, mind, story generation, Turing machines, Turing Test.
1. What robots might do and what robots might be when they are doing it are importantly different questions. Selmer Bringsjord's thesis is that what robots might do is pretty well unlimited -- passing Turing's original test for intelligence will just be the first step in mounting an open-ended hierarchy of behavioral abilities by robots -- but whatever they do, they won't be persons. They will inevitably fail to be morally responsible or, perhaps more pointedly, conscious. For my part, I agree with Bringsjord that the behavioral tests we might dream up can in principle be met (although I am less sanguine than he: he allows only that they are extremely difficult to meet; I would guess that they are no easier than (even if distinct from) coming to a full accounting of the function of the human brain), but I can only reserve judgment on the question of personhood. Either of these stances will, of course, seem utterly preposterous to a behaviorist, for they both deny that the satisfaction of behavioral criteria -- any behavioral criteria -- can guarantee the identity of mental states of behaving systems. What is more interesting, and to my way of thinking downright odd, is that many functionalists will also find these stances preposterous. What they will draw upon for their sense of outrage is not the arbitrary behaviors posited of a robot, but the arbitrary information-processing structures in virtue of which the robot can behave in the ways postulated. If a robot can be given an arbitrary internal functionality, and if such functionality is all there is to mental structure, then robots can be given arbitrary mental structure. Now that's simply a valid argument. How the premises come unglued is a matter on which Bringsjord and I differ; just how I shall detail below.

2. Bringsjord's treatment has the merit of covering a large fraction of the arguments current in the literature which purport to demonstrate or to rebut the claim that programmed computers can be agents, including those based upon John Searle's Chinese Room, Tim Maudlin's Olympia, arguments from Goedel's incompleteness results, Frank Jackson's knowledge argument, and Ned Block's arbitrary realization thought experiments. These and more are interestingly dealt with, whether or not the treatments will be found altogether congenial. (However, I must note one large omission in Bringsjord's discussion -- which is also, in light of his choice of title, surprising: there is no discussion of perhaps the best known critic of artificial intelligence, Hubert Dreyfus, whose "What Computers Can't Do" has now gone into its third edition.)

3. Given the breadth of treatment, I cannot do justice to it by trying to review the likely impact of the whole. Instead, I shall focus on just two arguments; they are not at all representative (nor would any other pair be). The first is Bringsjord's version of Jackson's argument that computers cannot be persons, which he considers more persuasive than the original (p. 33); the second is the arbitrary realization argument, which Bringsjord considers definitive. I shall differ on both assessments.

4. What Bringsjord sees himself as arguing against, he calls the person-building project (p. 7): That cognitive engineers will succeed in building persons out of the typical computational stuff with which they work (which proposition Bringsjord abbreviates as PBP). PBP is plausibly construed as depending upon (implying) functionalism, since without the notion that mental structure is determined by function, and does not depend upon substance, it would be quite implausible that silicon-based components in any configuration would achieve what genuine neural networks do. What Bringsjord calls AI functionalism will serve as well as any definition (I deviate slightly from p. 14): If the overall flow of information in X and Y is the same (e.g., they instantiate the same Turing machines, in like environments), then if X is in mental state S, then Y is also in mental state S. In order to refute PBP, Bringsjord aims at refuting AI functionalism. Interpreted strictly, it appears that Putnam (1975) has already refuted it (while also being one of its originators, Putnam, 1967), since mental states apparently have wide content; for example, such mental states as the belief that water is refreshing refer to things in the world and their truth conditions cannot be exhausted by any accounting in terms of neurostates alone. However, I shall attempt to ignore this difficulty since there is more going on in these arguments than a mere neglect of wide content. Nevertheless, the issue refuses to go away, as we shall see.
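
Spelled out schematically (the notation is my own gloss, not Bringsjord's; Flow and InState are placeholder predicates for "has the given overall flow of information," i.e., instantiates the given Turing machine in a like environment, and "is in mental state"), the thesis has the form:

    \forall X\,\forall Y\,\forall S\;
      \bigl[\,\mathrm{Flow}(X) = \mathrm{Flow}(Y)\,\bigr]
      \;\rightarrow\;
      \bigl[\,\mathrm{InState}(X,S) \rightarrow \mathrm{InState}(Y,S)\,\bigr]

The antecedent mentions only information flow, never what X and Y are made of.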

5. In response to such notions, Frank Jackson (1982) presented a little thought experiment about Mary the neuroscientist, also known as the knowledge argument against functionalism, which shares some nasty attributes with Searle's Chinese Room; namely, it is an extremely simple story, it is apparently entirely unobjectionable, and it has been thoroughly vexatious for functionalists (1982, p. 128):

    Mary is a brilliant scientist who is, for whatever reason, forced
    to investigate the world from a black-and-white room via a
    black-and-white television monitor. She specializes in the
    neurophysiology of vision and acquires, let us suppose, all the
    physical information there is to obtain about what goes on when we
    see ripe tomatoes, or the sky, and use terms like red, blue, and so
    on. She discovers, for example, just which wavelength combinations
    from the sky stimulate the retina, and exactly how this produces
    via the central nervous system the contraction of the vocal chords
    and expulsion of air from the lungs that results in the uttering of
    the sentence "The sky is blue." ... What will happen when Mary is
    released from her black-and-white room or is given a color
    television monitor? Will she learn anything or not? It seems just
    obvious that she will learn something about the world and our
    visual experience of it. But then it is inescapable that her
    previous knowledge was incomplete. But she had all the physical
    information. Ergo there is more to have than that, and Physicalism
    is false....

In particular, our phenomenologies, the qualitative aspects of subjective experience -- our qualia -- have something to teach us beyond what neurophysiological theories and the like may tell us. I do not have a settled opinion of this story (yet). Dennett's opinion (1991, pp. 399f) is that thought experiments of this kind take advantage of the flexibility and ambiguity of natural language: in particular, he suggests, if we truly attempt to imagine what it would be like for Mary to know everything physical there is to be known about neuroscience, then we will soon find that our imagination fails us. But if our imagination fails us, then how can we justify any opinion about what would or would not be surprising to Mary when she left her room? So, according to Dennett, all that we have here is a failed thought experiment that shows us nothing but the limits of our own imaginations. I am not persuaded that this is a fully adequate response to Jackson's story -- that is, I suspect that one can do better. Nevertheless, Dennett's response is not exactly unreasonable: we can after all describe all kinds of impossible things in English which superficially don't appear to be impossible at all. For example, I can ask you to imagine a function which maps the reals one-to-one onto the integers. Now, such a function is impossible, as Cantor proved last century, but that's no impediment to describing such a function; furthermore, an untutored person, after some prompting, will agree to imagine the function and subsequently claim to have imagined it (indeed, most school children will protest when you then claim that these functions are impossible!). This example suggests that Jackson's thought experiment could well be entirely confused, without our knowing about it (aren't we all school children when it comes to neuroscience?). At the very least we can agree with Dennett that Jackson is asking an awful lot of our imaginations and of our intuitions based upon them.
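
For the record, the impossibility invoked here is Cantor's; a compressed version of the diagonal argument (standard textbook material, added only to make the borrowed example concrete) runs as follows:

    Suppose $f : \mathbb{R} \rightarrow \mathbb{Z}$ were one-to-one. Since
    $\mathbb{Z}$ is countable, the reals in $[0,1)$ could then be listed as
    $r_1, r_2, r_3, \ldots$ Write each $r_n = 0.d_{n1}d_{n2}d_{n3}\ldots$ in
    decimal and define
    $$ d = 0.e_1 e_2 e_3 \ldots, \qquad
       e_n = \begin{cases} 5 & \text{if } d_{nn} \neq 5 \\
                           6 & \text{if } d_{nn} = 5 \end{cases} $$
    Then $d$ lies in $[0,1)$ but differs from every $r_n$ at the $n$-th digit,
    so the list was not exhaustive after all. Hence no one-to-one map from the
    reals into (let alone onto) the integers exists.

The moral relevant to the text is only that the description is perfectly graspable even though what it describes cannot exist.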

6. So let us see what Bringsjord does to strengthen Jackson's argument, in what he calls the "argument from Alvin." First, he asserts (p. 29):

    (1) If AI functionalism is true, then there is some set of
    sentences AIF* such that knowing AIF* implies genuine understanding
    of all of human psychology.

Here AIF* is a set of sentences which "refer to Turing machines, configurations of Turing machines, the input and output to Turing machines, and no other type of thing" (p. 28). This seems like a fair interpretation of what AI functionalism has to say (provided we assume, implausibly, either that there is no wide content to psychology or that AI functionalism can capture such wide content; it is the latter that the Alvin argument requires). Following Bringsjord, let us suppose AI functionalism is correct; then we infer:

    (2) There is such a set AIF*.

Now let us suppose that there is some truly extraordinary computer scientist, named Alvin, who has spent almost all of his life working with computers in a private laboratory, not in total isolation (as Mary was), but in something close to that. Suppose Alvin, a hard worker indeed, has managed to develop an elaborate computer model he calls AIF* and knows it forwards and backwards; furthermore, suppose that Alvin's AIF* is identical with the AIF* in (1) and (2). Then,

    (3) Alvin genuinely understands all of human psychology.

Now most of us know what it feels like to meet a long lost friend for the first time. This is not an easily expressed property, but we may imagine a great story teller capturing it in some moving story. In any case, it is a psychological property and since we have concluded that Alvin understands all of human psychology, he must understand something about this property as well. In particular, Alvin must understand what it would be for Alvin to have this property because of encountering a long lost friend for the first time; calling what is understood here Phi*:

    (4) Alvin genuinely understands Phi*.

(This is not right at all, actually. Since this last proposition makes reference to Alvin and since what Alvin knows by virtue of which he is being said to understand psychology is merely the complicated innards of one or more Turing machines, it cannot follow that Alvin will understand anything about Alvin unless reference can be analyzed into some fact about abstract Turing machines -- which is hardly in the cards. So we find that both (1) and (4) at least depend upon sliding over the matter of wide content. But let us grant, for the sake of continued argument, that Alvin not only has a Turing-machine model of all of human psychology but also knows exactly how it applies to each human in history, including Alvin.)

Bringsjord holds that the following statement is clearly correct:

    (5) If an agent genuinely understands the statement that [when it
        (the agent) enters condition P that causes it to have
        psychological property F], then if the agent enters condition
        P, it will not "have a revelation" about what it's like to
        enter P and as a result have F.

This formulation is undoubtedly not tight enough: we can imagine that an agent may have a revelation about a non-standard way of getting from condition P to having F, for example. But it seems clear enough what Bringsjord is getting at. If Alvin understands all of Alvin's psychology, including what would cause him to be in any mental state under any circumstances and what those states in turn would cause, then it seems that being put in such circumstances cannot reveal anything new to Alvin about his mental states. But, remember, Alvin has been in a fairly isolated environment. As it turns out, he has some long lost friends (so the isolation hasn't been complete), but he's never run into any of them again. That is, until today, when he for the first time encounters a long lost friend. There are apparently two possible outcomes to this story; either,

    (6) Alvin exclaims, "Oh my God! That's what Phi* is all about!"

or

    (6') Alvin yawns and says, "Phi* instantiated; how boring."

Of course, Bringsjord claims it is just obvious that something like (6) would ensue, with Alvin being the subject of a revelation. But in that case we can get a direct contradiction -- it follows from (4) and (5) that Alvin cannot find this revelatory; so by reductio, one of the premises to the argument must be retracted. Bringsjord finds that the original assumption of AI functionalism is more dubious than any of the premises he has brought in and so finds this a persuasive (but admittedly not compelling) argument against functionalism.
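
For readers who like the skeleton laid bare, the reductio can be compressed as follows (the compression is mine; U abbreviates "genuinely understands" and R the claim that Alvin has a revelation upon meeting his friend):

    \begin{align*}
      &\text{(i)}\quad   \mathrm{AIF} \rightarrow U(\mathrm{Alvin},\Phi^{*})
         && \text{[from (1)--(4), given the story about Alvin]}\\
      &\text{(ii)}\quad  U(\mathrm{Alvin},\Phi^{*}) \rightarrow \neg R
         && \text{[from (5)]}\\
      &\text{(iii)}\quad R
         && \text{[the intuition behind (6)]}\\
      &\text{(iv)}\quad  \neg\mathrm{AIF}
         && \text{[from (i)--(iii), by reductio]}
    \end{align*}

My complaints in the next two paragraphs are directed at (i) -- at whether the supposition behind it is even coherent -- and at (iii).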

7. However, the Alvin argument suffers from the same kind of objection we lodged against Jackson's earlier argument. We have been asked to suppose that Alvin can learn enough about psychology that he can no longer learn anything more: Alvin has to have learned so much about himself that he can have no further revelations about his own (or, indeed, any human's) psychology. Presumably, since there is no end to the number of possible circumstances that may induce psychological states (since there is no end to the number of possible circumstances), this implies that Alvin has an infinite mental capacity. Well, maybe some being can learn that much, but we certainly cannot imagine such a thing in any kind of detail, and without the details available we will no more know that the thought experiment described is coherent than we know that one-to-one onto maps from the reals to the integers are coherent. And, of course, if Bringsjord's example is not coherent, conclusions drawn therefrom demonstrate nothing. In particular, we have been given no reason to believe that AIF* is finite or that it makes sense to talk of (3) being true for any Alvin.

8. Another way of registering much the same complaint is to say that Bringsjord's intuition that (6) is obviously the right outcome and (6') wrong is ungrounded (or, perhaps (6'') is the right outcome: having Alvin act like (6) and think like (6')). What we know of Alvin from the story is that Alvin will be able to anticipate all mental consequences of encountering his friend, etc. The premises clearly do not imply that Alvin will stop having new experiences, but rather that the new experiences he does have will have no unanticipated direct mental effects. Now what it is like to have a new experience whose direct effects can be fully anticipated is not the kind of thing any of us know about; in fact, only Alvin knows about this sort of thing. So it is really only Alvin's "intuitions" about how Alvin would respond to the scenario in question that matter a damn -- and, unfortunately, neither Bringsjord nor I can ask Alvin about this. That is, I don't trust Bringsjord's intuitions on this matter and neither should he. Someone might counter here that new experiences are inherently revelatory and so Alvin's having one necessarily defeats the claim that Alvin understands all of human psychology. But such a move trivializes the argument altogether, since it would demand of any psychological theory that it furnish not just an understanding of human experience, whatever that might turn out to be, but in addition all the experiences themselves, in which case it becomes trivial that there are no such theories.

(Richard Holton has suggested to me that Jackson's original story should be understood as supporting ineffable experiences. That is, it follows from the story that whatever theory of experiences is produced and understood, that theory cannot capture everything there is to the understanding of experience, since no theory can be in the business of supporting the pre-experiencing of experiences and there is always something (qualitative) to be learned from any new experience. Whatever it is that is subsequently learned cannot be axiomatized or captured in an explicit theory and is in that sense ineffable. This might be right: that is, if Jackson's story works, then it supports the existence of ineffable experience. But it is worth noting that there is less here than meets the eye: for however ineffable the qualitative flavor of experience, this interpretation of the argument in no way supports dualism over physicalism. We can produce exactly the same line of reasoning supposing that the original theory of experience is dualistic: i.e., imagine that Mary knows not just everything physical about neuroscience but also has a complete theory about human qualia (supposing these to be distinct). No such theory will allow her to pre-experience anything, because no theory supports the pre-experiencing of anything.)

9. Ned Block's arbitrary realization argument (Block, 1980) perhaps makes a more persuasive case that functionalism is in trouble; at least Bringsjord believes so. The initial idea behind such an argument can hardly be denied by functionalists: since functionalism asserts that it is the function instantiated that matters for mentality and not the stuff instantiating it, it has to be granted that it may turn out that weird things instantiate mental function -- that is, have mentality. After all, we may run into weird aliens who have mental lives and yet are made up of stuff as strange as you like. But now taking AI functionalism in particular, according to Block, Searle, and many others, we can instantiate very complex Turing machines by, say, having the population of China follow instructions corresponding to such Turing machines. So, if there is some Turing machine which captures the mental functions of some human -- as there must be according to AI functionalism -- we can get the collective population of China (or of earth or of the Milky Way galaxy or, at any rate, of something) to instantiate that mental function and so to have the corresponding mental states. Searle prefers coordinated beer cans for denouncing functionalism (coordinated with what, he does not say). Note also that the intuition relied upon -- that coordinated beer cans are non-mental -- remains even when objections based upon wide content are dealt with (as Bringsjord notes, p. 229); that is, if we introduce transducers which directly modify some beer can configurations in response to the environment and similarly introduce effectors that allow beer can deliberations to affect the environment, the oddness of supposing that clanking beer cans can adopt mental states remains. (However, such additions to the system strictly speaking violate AI functionalism, which you will recall is strictly limited to making reference to the internals of Turing machines and therefore must eschew transducers and effectors.) The point then is that such systems, whatever they are doing, are obviously not mental; since AI functionalism requires them to be mental, AI functionalism is false.

10. There is a serious flaw with this kind of argument, which is that it and the AI functionalism which it is denouncing do not take functionalism seriously. What is a function? It is certainly not a mere state transition in a Turing machine. The concept surely did arise from human experience with machines, but not abstract machines, rather the kind that actually operate -- physical machines. The functions of gears, levers, pulleys, and what not, can be specified either in the context of a particular machine or generally. A lever, for example, is a device for applying force across a fulcrum by use of a rigid bar or structure. Clearly, we have the possibility of multiple realizations of levers: we can implement them with rocks, metal or humans as fulcrums and wood, metal or maybe ice as the bars, for example. But arbitrary realization is out of the question: it is just absurd (in most contexts) to contemplate snowmen or fragrances as the fulcrum and ice cream or starlight as the bar. So, are combinations of the latter counterexamples to the claim that the lever can be defined in functional terms? They obviously do not constitute levers no matter how the snowmen, etc. may be shaped and arranged; but interpreting the claim that the lever is a functional concept as carrying a commitment to arbitrary realizations is what is fundamentally absurd in this story (for a serious account of functionality, see Wright, 1976).

11. In an exactly parallel way we can point out that the Chinese population cannot instantiate a Turing machine in the same way that silicon chips can (as Pollock, 1989, has noted): no matter what forces you bring to bear upon the people to carry out their instructions, they may simply decide not to do so -- they may go on a sit-down strike, for example. No matter how they are organized, so long as they remain people, their collective organization will not have the same causal structure as any physical realization of a Turing machine (i.e., computers as we know them, with all their limitations, which I will call PTMs). Pollock sums up this point (1989, p. 78): "Insofar as the behavior of the Chinese citizenry is not dictated by laws of nature, it does not follow from [functionalism] that a structure composed of them has mental states" -- alternatively, what is missing from the story is the nomic necessity of the people's actions.
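
To make vivid what a PTM's causal structure buys you, here is a toy deterministic Turing machine in Python (the machine, a trivial bit-flipper, is my own illustrative choice; nothing in Bringsjord or Pollock corresponds to it). Every step is dictated by the transition table, so the relevant counterfactuals come for free: feed it a different tape and the table, not anyone's willingness to cooperate, settles what happens.

    # A toy deterministic Turing machine. The transition table fixes, for every
    # (state, symbol) pair, what happens next; no outside agent plays any role.
    # (state, read symbol) -> (write symbol, head move, next state)
    TABLE = {
        ("flip", "0"): ("1", +1, "flip"),
        ("flip", "1"): ("0", +1, "flip"),
        ("flip", "_"): ("_",  0, "halt"),   # blank cell: stop
    }

    def run(tape, state="flip", head=0):
        """Run the machine to completion; each step is read off TABLE."""
        cells = list(tape) + ["_"]          # append one blank cell
        while state != "halt":
            write, move, state = TABLE[(state, cells[head])]
            cells[head] = write
            head += move
        return "".join(cells).rstrip("_")

    print(run("0110"))   # -> "1001": the table, not luck, determines this

It is this table-like determination of every step, and the counterfactuals it underwrites, that a physical realization must supply -- and that is precisely what is in question for citizens, cherries, and beer cans.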

12. Bringsjord, however, believes he has a fix for the arbitrary realization argument (pp. 221ff). Suppose a mad scientist gains control of the Chinese population and wires their brains together so that their actions are no longer "free" but are dictated by the scientist. In that case, the Chinese system continues to behave in accord with the wishes of some person, rather than independently, but we can fix that as well. Suppose the mad scientist operates the system by punching buttons on a control panel under a cherry tree. Well, the scientist may up and die, but the system could carry on, as cherries dropping at random onto the buttons may fall in just such a way as to force the Chinese system to continue instantiating a particular Turing machine. In such a case, since the dropping cherries and everything else in the system behave the way they do in accord with natural laws, we may have a system which not only instantiates a Turing machine describing the information processing of some mental being, but does so strictly in accord with the laws of nature, and so is, supposedly, a PTM.

13. This rebuttal suffers from more than a few difficulties. One clear difference between the Chinese system and the brain supposedly simulated by it is that the probability that the Chinese system will continue to behave coherently rapidly drops to near zero. The idea that a system whose intelligent-seeming behavior is the result of accident should be counted as intelligent is a perversity which Bringsjord himself denounces when discussing the inadequacies of Turing's original test (p. 12). Bringsjord's attempted resurrection of arbitrary realization just misses the point: A physical realization of a Turing machine is no such thing if the state transitions require the interventions of an outside agent or are the result of random events. The Chinese system does not have the right causal structure to be a PTM: this is not merely a matter of the system's behavior being in accord with the laws of nature (any physicalist would have granted that any of the prior Chinese systems behaved in accord with the laws of nature, after all), rather the counterfactual behavior of the system when presented with an input must disregard cherry (or bird) droppings and simply produce the right output. This shows that Pollock was on the wrong track in supposing that what is lacking in Block's and others' arbitrary realizations is a mere requirement of nomic necessity: it is presumptuous to say that human decision making is not governed by natural law (and also apparently not in the spirit of functionalism), so what is lacking must be causal structure instantiating the right natural laws -- those which determine functionality and support appropriate counterfactuals.

14. "Serendipitous droppings" cannot be what drives mentation as we know it. I claim: every step Bringsjord may take away from the chance effects of droppings playing a role in his story and towards making the Chinese system self-actuating and responsive to an external environment will be a step towards dismantling the intuition that the system described is without intelligence or intentionality. What Bringsjord is surely right about is: since many functionalists have ignored causal structure, what they have been entertaining is a functionalism of the disembodied spirit -- and such functionalism is quite properly the object of the ridicule to which Block, Searle and others have subjected it.

15. There are some obvious ways in which human brains (and, surely, alien "brains" as well, whatever they are made of) differ from any silicon-based PTM. Human brains respond to much more than what are ordinarily counted as "inputs" or sensation; brains respond to whatever causal force makes a difference to them -- which is a truism. This will include the ordinary inputs, but also such things as ambient heat or chemicals ingested or insufficient sleep. No one who is not in the grip of an ideology could deny that heat, chemicals and sleep affect human mental states -- but AI functionalism denies that implicitly because there is apparently no way to represent these causal factors as input to a computing process. This is not to say that we cannot use computers to simulate such processes: we can simulate the effect of heat on neurons, just as we can simulate its effect on ice caps. However, we have no way of having a PTM respond to heat -- physical heat -- in the same way that human brains do. The requirement I'm suggesting here is also not that the PTM must take on physical states in response to heat identical to those of some brain (that would in the end just be requiring that it be a human brain, after all); what I'm suggesting is that the PTM should at least have to respond to heat in its information processing in the same way that human brains do before we grant that it has the same mental structure as any human brain. It would be sufficient then if there were transducers for converting physical heat into some PTM input and if, in consequence of rising heat, for example, the PTM's information processing broke down in the same ways that human information processing does. It may be conceivable that we can arrange for such a relationship between PTM processing and brain processing in response to heat. But now we must go on to arrange for similar correspondence between the PTM and brain for any number of other causal agents, including chemicals, sleep, etc. What is in serious doubt is whether this process can in fact be extended indefinitely or whether instead the materials known to be sufficient for arbitrary computation (silicon chips, etc.), plus whatever transducers and effectors, indeed simply fail to be sufficient for arbitrary mentation.
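
A toy sketch of the kind of coupling I have in mind, in the same Python vein as above (every detail -- the temperature scale, the linear degradation curve, the idea that random faults are the right model of heat stress -- is an assumption made purely for illustration, not a claim about how brains in fact break down):

    import random

    rng = random.Random(0)   # fixed seed so the toy run is reproducible

    def heat_transducer(celsius):
        """Hypothetical transducer: ambient temperature becomes an input
        parameter of the information processor (here, an error rate).
        The linear curve is invented for illustration only."""
        return min(1.0, max(0.0, (celsius - 37.0) / 10.0))

    def fragile_adder(a, b, error_rate):
        """A trivial 'information process' whose reliability degrades
        as the transduced heat rises."""
        result = a + b
        if rng.random() < error_rate:
            result += rng.choice([-1, 1])   # a heat-induced processing fault
        return result

    for temp in (37.0, 40.0, 45.0):
        rate = heat_transducer(temp)
        print(f"{temp:.0f} C: error rate {rate:.1f}, 2 + 2 = {fragile_adder(2, 2, rate)}")

The point is not that this is how to do it, but that the requirement falls on the causal coupling itself: the heat must actually drive the processing, not merely be represented within it; and the open question is whether such couplings can be multiplied indefinitely for chemicals, sleep, and the rest.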

16. It may be objected that the tasks I have in mind, although pertinent to modeling human mental structure, are superfluous to the overall task of building persons. Who cares whether a PTM breaks down in the same way as brains when thrown in a hot soup? I suggest that we contrariwise need to be concerned with just such questions: if we accept that human mentality evolved, then it must have evolved so that the organism can better cope with just such environmental stresses. If the complex mental structures that we have exist in order to cope with environmental insults (and opportunities), and if it is a precondition of agency that a system adopt something like those mental structures, then it is simply of the essence of the person-building project that our PTMs must have the kinds of causal structures I am talking about.

17. In particular, if having qualia and being conscious have functions -- as functionalism by definition demands -- then PTM-agents shall presumably have to support those functions. What this means is that they shall have to support a causal structure corresponding somehow to that which we carry about, and especially allowing that many of the causes of qualia and many of their effects are likewise causes and effects for the PTMs. We have only the vaguest of conceptions of what that causal structure looks like, but even that vague conception makes it painfully clear that nutrition, chemicals, heat and so on are directly connected to conscious processing -- and the connections are just as clearly not incidental, but central to the various functions of consciousness. It would be a remarkable arrogance to suppose that the functions of conscious states can be reduced to their causal relationships to sensory and gross behavioral processing alone. People are often inclined to substitute arrogance for knowledge, but it is less than obvious that progress is multiply realizable in that way.

18. As I've been suggesting, it is not clear (and how could it be in advance of trying?) whether the "computational stuff" of cognitive engineering will suffice for the creation of agents with conscious states. It may be that important causal relations in which consciousness participates just are not computational: the function is there, but the computation isn't. This would not be a failure for functionalism, but only for its pale cousin, AI functionalism. If this turns out to be the case, then the domain of artificial agency shall eventually have to be ceded to the bioengineers. AI will nevertheless have its hands full dealing with the never-ending hierarchy of Turing tests -- that is what it will do; what it will be is the science of intelligence, rather than the science of consciousness. (For the record, I have previously argued that these are distinct, but from a different point of view, treating Searle's Chinese Room in particular, in Korb, 1991.)

19. As I hope my reflections indicate, Bringsjord's book provides a good airing of many of the disputes swirling about the foundations of AI. For the most part, the descriptions of the disputes are accessible and useful. Bringsjord is particularly to be thanked for his efforts at disentangling the issues of what robots may be from how they behave, even if those efforts are incomplete and uneven. What distinguishes these questions is precisely whatever it is that would render passing something on the Turing hierarchy of behavioral tests at least logically insufficient either for the attribution of intelligence or for the attribution of consciousness -- and we have found in Bringsjord's work some reason to be cautious regarding the prospects for consciousness building in particular. All in all, readers interested in an accessible and provocative review of the foundations of artificial intelligence will find Bringsjord's book a likely selection.

REFERENCES

Bringsjord, S. (1992). What Robots Can and Can't Be. Boston: Kluwer Academic.

Bringsjord, S. (1994). Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Block, N. (1980). Troubles with Functionalism, in C.W. Savage (ed.) Perception and Cognition. Minnesota Studies in the Philosophy of Science, vol. 9, pp. 261-325. Minneapolis: University of Minnesota.

Dennett, D. (1991). Consciousness Explained. Boston: Little, Brown.

Jackson, F. (1982). Epiphenomenal Qualia, Philosophical Quarterly, 32: 127-136.

Korb, K.B. (1991). Searle's AI Program, Journal of Experimental and Theoretical Artificial Intelligence, 3: 283-296.

Pollock, J. (1989). How to Build a Person: A Prolegomenon. Cambridge, MA: MIT Press.

Putnam, H. (1967). Psychological Predicates, in W.H. Capitan and D.D. Merrill (eds.) Art, Mind, and Religion. Pittsburgh: University of Pittsburgh.

Putnam, H. (1975). The Meaning of Meaning, in K. Gunderson (ed.) Language, Mind and Knowledge. Minnesota Studies in the Philosophy of Science, vol. 7. Minneapolis: University of Minnesota.

Wright, L. (1976). Teleological Explanation. University of California Press.

