Selmer Bringsjord (1994) What Robots Can and Can't Be. Psycoloquy: 5(59) Robot Consciousness (1)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

WHAT ROBOTS CAN AND CAN'T BE
[Kluwer Academic Publishers, 1992, 10 chapters, 380 pages]
Precis of Bringsjord on Robot-Consciousness

Selmer Bringsjord
Dept. of Philosophy, Psychology & Cognitive Science
Department of Computer Science
Rensselaer Polytechnic Institute
Troy, NY 12180

selmer@rpi.edu

Abstract

This book argues that (1) AI will continue to produce machines with the capacity to pass stronger and stronger versions of the Turing Test but that (2) the "Person Building Project" (the attempt by AI and Cognitive Science to build a machine which is a person) will inevitably fail. The defense of (2) rests in large part on a refutation of the proposition that persons are automata -- a refutation involving an array of issues, from free will to Godel to introspection to Searle and beyond. The defense of (1) brings the reader face to face with Sherlock Holmes and Dr. Watson as they tackle perhaps their toughest case (Silver Blaze); the upshot of this visit with Conan Doyle's duo is an algorithm-sketch for solving murder mysteries. The author's mechanical approach to writing fiction and the philosophical side of computerized story generation are also discussed.

Keywords

behaviorism, Chinese Room Argument, cognition, consciousness, finite automata, free will, functionalism, introspection, mind, story generation, Turing machines, Turing Test.

I. CHAPTER 1: INTRODUCTION

I.1 THE GENERAL POSITION

1. My book (Bringsjord 1992a) argues that Artificial Intelligence will eventually produce robots (or androids) whose behavior is dazzling, but it will not produce robotic persons. Robots will DO a lot, but they won't BE a lot. The reader is asked to evaluate this claim on the basis of my defense of this position (and other arguments in the literature), rather than on the basis of slippery metadisputes about whether or not my position and arguments for it are prima facie plausible.

2. My position accords well with the decline of behaviorism, and specifically with the apparent decline of the behavioristic Turing Test (see Rey, 1986) and any number of the Turing-like tests proposed in the literature [NOTE #1]. Readers familiar only with Turing's original test (Turing, 1964), and not with the variations that have been derived from it, should imagine now an ever more stringent sequence of Turing-like tests T1, T2, T3,..., the first member of which is the original imitation game. How does the sequence arise? In T2 we might allow the judge to observe the physical appearance of the contestants; in T3 we might allow the judge to make requests concerning the sensorimotor behavior of the contestants; in T4 we might allow the judge to take skin samples; in T5 we might allow the judge to run brain scans, then surgical probing, and so on. The point is that we can pretty much rest assured that AI will gradually climb up the sequence: soon we'll have machines passing something approaching T1 (T.75, as it were), eventually T1 itself (though probably not by 2000, as Turing predicted), and so on. I hold that robots will pass, if not all, then at least a goodly number of the tests in the Turing Test sequence, but that they will always lack some of the properties claimed, in "What Robots Can and Can't Be" (henceforth, ROBOTS), to be necessary for personhood. I defend this position with precise deductive arguments which (in my opinion) sometimes border on proofs.

I.2 THE TARGET

3. What group of people, what field or discipline, aims at bringing us these powerful robots? Candidate terms abound: "Strong" AI (Searle, 1980), GOFAI, or Good Old Fashioned Artificial Intelligence (Haugeland, 1986), "Old Hand" AI (Doyle, 1988), "Person Building" AI (Charniak & McDermott, 1985; Pollock, 1989), etc. All these AI researchers envision the building of an "intelligent robot," one who excels in the Turing Test sequence, able not only to checkmate you, but to debate you. If such a vision will sooner or later come to pass, as I'm quite sure it will, then, again, a question worth asking now is: will robots, in the coming age in which they excel in the Turing Test sequence, be persons?

4. Let us call the AI/Cog-Sci project to build a person the "Person Building Project." PBP will denote the proposition that the Person Building Project will succeed. This project has flesh-and-blood proponents, optimistic ones. For example, according to Charniak and McDermott (1985, p. 7), "The ultimate goal of AI research (which we are very far from achieving) is to build a person, or more humbly, an animal." Some of the arguments in ROBOTS, if sound, could, with minor modification, refute Charniak and McDermott's thesis that AI can succeed in building a sophisticated animal. But what I wish to attack directly is the heart of the Person Building Project: the proposition that persons are automata.

5. Here, in a nutshell, is why I think the Person Building Project will fail. (The symbol -> stands for material implication; ~ stands for truth-functional negation):

    (1)  PBP -> Persons are automata.  

    (2)  ~ Persons are automata.

Therefore:

    (3)  ~ PBP.

6. This is a formally valid argument: simple modus tollens. Premise (2) is repeatedly established in ROBOTS through instantiations of a simple schema, viz.,

    (4)  Persons have F.

    (5)  Automata can't have F.

Therefore:

    (6)  Persons can't be automata.

In the book, F stands for such things as free will, the ability to infallibly introspect (over a narrow range of properties), an inner "what it's like to be" experience, etc.
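
Since the book leans repeatedly on these two forms, it may help to see the bare logical skeleton checked mechanically. Here is a minimal sketch in Lean 4; the names (PBP, PersonsAreAutomata, Person, Automaton, HasF) are illustrative placeholders introduced for this sketch, not definitions from ROBOTS.

    -- (1)-(3): simple modus tollens.
    example (PBP PersonsAreAutomata : Prop)
        (h1 : PBP → PersonsAreAutomata)       -- premise (1)
        (h2 : ¬ PersonsAreAutomata) :         -- premise (2)
        ¬ PBP :=                              -- conclusion (3)
      fun hpbp => h2 (h1 hpbp)

    -- (4)-(6): the schema, with F read as a property automata can't have.
    example {α : Type} (Person Automaton HasF : α → Prop)
        (h4 : ∀ x, Person x → HasF x)         -- (4) persons have F
        (h5 : ∀ x, Automaton x → ¬ HasF x) :  -- (5) automata can't have F
        ∀ x, Person x → ¬ Automaton x :=      -- (6) persons aren't automata
      fun x hp ha => h5 x ha (h4 x hp)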

7. Why is (1) true? Those engaged in the Person Building Project are committed to certain well-defined algorithmic techniques which, though hard to enumerate precisely, are used when you construct and program a high-speed computer with sensors and effectors. If someone managed to build a person by stirring up some fertile biological soup in the right way, or by somehow compressing all of evolution into a second of development that could be magically applied to a single-cell creature, this would not spell success for the person builders I have in mind. To affirm PBP is to hold that certain computer techniques will produce people. Nor is the idea that by using these techniques you'll get lucky and bring a person into existence through a side-effect of what you've done. There are those whose ultimate aim is this side-effect -- thinkers who hope to build a computational device whose structure is appropriate for "ensoulment," a device alongside which a person, an immaterial entity, will pop into existence and connect up with the device in some way (e.g., see Turkle, 1984). These thinkers aren't my concern herein. I'm concerned with those who think AI techniques are near the essence of personhood, or mindedness (or mentality, mentation, cognition) itself.

8. For such cognitive engineers, the success of their techniques won't show that people are, essentially and in general, the particular computers they are working on. If a team of cognitive engineers succeeded beyond their wildest dreams and happened to do so by programming a Cray 4, they would not be entitled to hold that persons are Cray 4s. There is no reason to think that the specific physical material (the particular computer and peripheral components) constituting the robots produced by the Person Building Project will be essential. Human persons happen to be made of flesh, not silicon; AI (unlike, say, neurophysiology) does not work with flesh. So behind PBP is "AI-Functionalism," according to which people are idealized computers. This intuition is, as Haugeland (1986) has suggested, captured elegantly by "Persons are automata," or at least by something very close to it.

II. CHAPTER 2: OUR MACHINERY

9. Chapter 2 contains the rough-and-ready ontology presupposed throughout the book, and an account of the logico-mathematical language used. It also characterizes personhood on the basis of some crucial prephilosophical data and gives definitions of finite automata, Turing machines, and cellular automata. Using these definitions, I clarify "Persons are automata," and distinguish between the different versions of this proposition that arise when one specifies the automaton in question. I also say a little about Church's Thesis and, via the "Busy Beaver" function and the Halting Problem, about uncomputability (see also Bringsjord 1993a,b; Bringsjord & Zenzen, forthcoming).

III. CHAPTER 3: ARGUMENTS PRO, DESTROYED

10. Chapter 3 presents refutations of five arguments for the view that persons are automata: the Argument from Analogy (Nelson, 1982), the Argument from What Should Remain Unexplained (Dennett, 1976), the Argument from Natural Functions (Burks, 1973), the How to Build a Person Argument (Pollock, 1989), and the Pretty Much What Everyone Believes Argument (Cole, personal communication).

11. Cole summarizes his argument as follows:

    I suppose that many like myself who believe that persons are
    automata suppose this because we see that neurons appear to be
    finite probabilistic automata, that is to say they have computable
    transfer functions.  From there we note that brains are composed
    merely of neurons (neural nets), along with some supporting
    structures, glial cells, etc., and that brains produce mentality.
    (ROBOTS, p. 126)

12. I show that this argument, tempting though it is, is ultimately untenable. If one assumes agent materialism and jumps beyond what we know to be the case (that brains, with supporting structures, produce mentality), to what many suspect is the case (that brains, with supporting structures, are persons), then Cole's argument does work -- but such a strategy would need independent defense (Pollock 1989).

13. Overall, Cole seems to vote "yes" on all six components of the Contemporary Cognitive Sextet:

    (C1) Token Physicalism,
    (C2) Agent Materialism,
    (C3) Functionalism,
    (C4) Persons are Automata,
    (C5) Person Building will succeed,
    (C6) Robot Building will succeed.

My own vote on each, supported by the arguments in the book, would be:

    (C1) Maybe
    (C2) Maybe
    (C3) No
    (C4) No
    (C5) No
    (C6) Yes

IV. CHAPTER 4: WHAT ROBOTS CAN BE

14. The 4th chapter provides some evidence that robots will ascend the Turing Test sequence. Two questions are addressed: (1) Are mysteries of the sort solved by Sherlock Holmes solvable by an expert system of the future? (2) Is it possible to get a computer to write sophisticated fiction? I argue that both these questions should be answered in the affirmative; my argument provides reason for optimism concerning the powers of future robots. Why address mystery-solving? It's one of the few concrete human abilities that a philosopher (Manning, 1987) has argued cannot be matched by a machine. And computer-generated fiction is my own area of research in Cognitive Engineering (Bringsjord 1992b). The rest of ROBOTS consists of a series of deductive arguments against the Person Building Project.

V. CHAPTER 5: SEARLE

15. My Searlean (Chinese Room) attack on person-building is based on Jonah, an imaginary mono savant. Jonah is blessed with the following powers. He can -- automatically, swiftly, without conscious deliberation -- reduce high-level computer programs (in, say, PROLOG and LISP) to the super-austere language that drives a Register machine (or Turing machine). Once he has carried out this reduction, he can use his incredible powers of mental imagery to visualize a Register machine, and to visualize this machine running the program that results from his reduction. So if you give Jonah a LISP program, he translates it into a Register program, without giving any thought whatever to the MEANING of the LISP program. Jonah, in some attenuated sense, knows the syntax of LISP, but he doesn't have any semantics for the language. He doesn't know, for example, that (DEFUN ...) is a string that defines a function; he doesn't even know that (+ ...) is a built-in function for addition. He DOES know, however, how to "run" his visualized Register machine given a Register program and data put into the first register R0. Jonah is also capable of taking input through his senses and translating it into input to his visualized machine [NOTE #2].

16. It should be obvious how Jonah gives rise to a Chinese Room-like situation. Suppose it's 2040, and that person-builders have produced robots which they herald as persons. Since one of the hallmarks of persons is that they can converse in and understand natural languages, the person-builders will claim that their robots can do the same. If these robots converse in and understand some natural language L, it should be a trivial matter to get them to speak and understand Chinese. So, here we are in 2040: some no doubt super-long computer program P enables robots to speak and understand Chinese. And here is how Jonah enters the picture. We simply give him P, ask him to reduce P to a Register program P', and then ask him to run P' on his visualized Register machine in such a way that input we give him on index cards (strings in Chinese) goes into register R0, and the output, after processing, comes back into R0, whereupon Jonah spits back this output, writing it down for us on an index card. Jonah does not himself speak a word of Chinese.

17. The argument then runs as follows. It appears that Person-Building AI/Cog Sci is committed to

    (7) If the Person Building Project will succeed, then there is a
	computer program P such that when P runs on a computer M there
	is a person s associated with M who understands Chinese.

And

    (8) If there is a computer program P such that when P runs on a
	computer M there is a person s associated with M who
	understands Chinese, then if Jonah reduces P to P' and runs P'
	Jonah understands Chinese.

But

    (9) It's not the case that if Jonah runs P' Jonah understands
        Chinese.

18. Hence, by hypothetical syllogism and modus tollens, it follows from these three propositions that the Person Building Project won't succeed. Proposition (9) is not to be viewed as a premise, but rather as an intermediate conclusion following (by elementary logic) from the following three propositions; both steps are rendered formally in the sketch just after (11).

    (L*) If an agent s understands two natural languages L0 and L1, then
         s can (perhaps only after considerable effort that produces a
	 long-winded translation) translate between L0 and L1.

    (10) Jonah (by hypothesis) understands English.

    (11) Jonah CAN'T translate between English and Chinese.
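
Both steps can be written down in a couple of lines. Here is a minimal Lean 4 sketch; the propositional letters (PBP for the success of the Person Building Project, ChineseProg for the existential claim shared by (7) and (8), JonahUnd for "Jonah understands Chinese", and UndEng, UndChi, CanTrans for the pieces of (L*)-(11)) are illustrative placeholders, not notation from ROBOTS.

    -- (7), (8), (9) entail not-PBP: hypothetical syllogism plus
    -- modus tollens.
    example (PBP ChineseProg JonahUnd : Prop)
        (h7 : PBP → ChineseProg)
        (h8 : ChineseProg → JonahUnd)
        (h9 : ¬ JonahUnd) : ¬ PBP :=
      fun hpbp => h9 (h8 (h7 hpbp))

    -- (9) as an intermediate conclusion from (L*), (10), (11).
    example (UndEng UndChi CanTrans : Prop)
        (hL  : UndEng ∧ UndChi → CanTrans)  -- (L*)
        (h10 : UndEng)                      -- (10)
        (h11 : ¬ CanTrans) :                -- (11)
        ¬ UndChi :=
      fun hchi => h11 (hL ⟨h10, hchi⟩)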

19. Three objections to Searlean arguments appear to be the most up-to-date and promising: one from Churchland & Churchland (1990; cf. Searle, 1990), and two rather more subtle ones, one from Cole and one from Rapaport (personal communication). I rebut all three.

VI. CHAPTER 6: ARBITRARY REALIZATION

20. We start with what should be an uncontroversial conditional, namely,

        If the Person Building Project will succeed, then AI-
        Functionalism is true.

Now, assume that person-builders will manage to build robotic persons. By modus ponens, then, we of course have AI-Functionalism, the "flow chart" version (Dennett, 1978) of which is

    (AI-F) For every two "brains" x and y, possibly constituted by
	   radically different physical stuff, if the overall flow of
	   information in x and y, represented as a pair of flow charts
	   (or a pair of Turing machines, or a pair of Turing machine
	   diagrams,...), is the same, then if "associated" with x
	   there is an agent s in mental state S, there is an agent s'
	   associated with or constituted by y which is also in S.

21. Let 'B' denote the brain of some person s, and let s be in the mental state FEARING PURPLE UNICORNS. Now imagine that a Turing machine M, representing exactly the same flow chart as that which governs B, is built out of 4 billion Norwegians all working on railroad tracks in boxcars with chalk and erasers (etc.) across the state of Texas. From this hypothesis and (AI-F), it follows that there is some agent m constituted by M which also fears purple unicorns. But it seems intuitively obvious that

        There is no agent m constituted by M that fears purple
        unicorns.

We've reached a contradiction. Hence our original assumption, that the Person Building Project will succeed, is wrong. I consider and rebut the best objections I know of to this reasoning.
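
The reductio has a clean shape, which a minimal Lean 4 sketch makes plain; Succeed, AIF, and FearM are illustrative placeholder propositions (FearM abbreviating "there is an agent m constituted by M that fears purple unicorns").

    example (Succeed AIF FearM : Prop)
        (h0 : Succeed → AIF)     -- the uncontroversial conditional
        (h1 : AIF → FearM)       -- (AI-F) instantiated at B and M
        (h2 : ¬ FearM) :         -- the intuitively obvious denial
        ¬ Succeed :=
      fun hs => absurd (h1 (h0 hs)) h2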

VII. CHAPTER 7: GODEL

22. The 7th chapter is an argument that Godelian incompleteness at least threatens the thesis that persons are automata. Godelian attacks on such theses are not new, but they are currently thought by most philosophers and AI researchers to be unsound, perhaps even rather silly. I try to resurrect the Godelian case. Without claiming it is demonstrative, I do try to show that, contrary to current opinion, it should be taken seriously. Lack of space precludes encapsulating the relevant arguments here.

23. The book ends with two arguments without precedent in the literature, one concerning free will and the other concerning introspection.

VIII. CHAPTER 8: FREE WILL

24. Chapter 8 is a rigorous reconstruction of the extremely vague argument that people can't be machines because people enjoy autonomy, while the behavior of machines is causally predetermined by their programs operating in conjunction with laws of nature. The reconstruction hinges on the proposition, defended in the chapter, that people enjoy what I call 'iterative agent causation,' the view that people can directly bring about certain of their own mental events (e.g., decisions), AND bring about the bringing about of these events, ad infinitum. The reasoning is as follows:

    (12) If determinism, the view that all events are causally
	 necessitated, is true, then no one ever has power over any
	 state of affairs.

    (13) If indeterminism, the view that determinism is false, is true,
	 then, unless people enjoy iterative agent causation, no one
	 ever has power over any state of affairs.

    (14) Either determinism or indeterminism is true (a tautology).

Therefore:

    (15) Unless iterative agent causation is true, no one ever has power
	 over any state of affairs. [from (12)-(14)]

    (16) If no one ever has power over any state of affairs, then no one
	 is ever morally responsible for anything that happens.

    (17) Someone is morally responsible for something that happens.

Therefore:

    (18) It's not the case that no one ever has power over any state of
         affairs. [from (16), (17)]

Therefore:

   (19) Iterative agent causation is true. [from (18), (15)]

   (20) If iterative agent causation is true, then people aren't
        automata.

Therefore:

   (21) People aren't automata. [from (19), (20)]

The chapter includes a defense of all the premises in this argument.
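
For readers who want to check the skeleton of (12)-(21) mechanically, here is a minimal Lean 4 sketch; D, A, P, M, and Auto are illustrative placeholder letters, and (14) enters as the law of the excluded middle, so the two reductio steps are classical.

    -- D = determinism, A = iterative agent causation, P = someone has
    -- power over some state of affairs, M = someone is morally
    -- responsible, Auto = people are automata.
    example (D A P M Auto : Prop)
        (h12 : D → ¬ P)
        (h13 : ¬ D → ¬ A → ¬ P)
        (h16 : ¬ P → ¬ M)
        (h17 : M)
        (h20 : A → ¬ Auto) : ¬ Auto :=
      -- (18): P, by modus tollens on (16) and (17).
      have hP : P := Classical.byContradiction fun hnp => h16 hnp h17
      -- (15) and (19): however (14)'s disjunction falls, ¬A rules out P.
      have hA : A := Classical.byContradiction fun hna =>
        (Classical.em D).elim (fun hd => h12 hd hP)
                              (fun hnd => h13 hnd hna hP)
      -- (21): from (19) and (20).
      h20 hA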

IX. CHAPTER 9: INTROSPECTION

25. Chapter 9 revolves around what I call 'hyper-weak incorrigibilism,' the view that humans have, WITH RESPECT TO A RESTRICTED CLASS OF MENTAL PROPERTIES, the ability to ascertain infallibly, via introspection, whether they have these properties. Here is how the first version of the argument, which is aimed at a symbolicist version of the Person Building Project, runs. Suppose that this project will succeed; then it would seem that three things are true of the robots that will be the crowning achievement of this project:

    (22) If there is some significant mental property that persons have,
	 these robots must also have this property;

    (23) The objects of these robots' "beliefs" (hopes, fears, etc.)
	 -- the objects of their propositional attitudes  -- are
	 represented by formulas of some symbol system, and these
	 formulas will be present in these robots' knowledge bases;

    (24) These robots will be physical instantiations of automata
	 (the physical substrate of which will be something like
	 current silicon hardware, but may be something as extravagant
	 as optically based parallel hardware).

26. It follows from the doctrine of hyper-weak incorrigibilism and (22) that the powerful robot (call it 'r') eventually to be produced by Strong AI/Cog Sci will be able to introspect infallibly with respect to a certain privileged set of mental properties C'. That is, it follows that the relevant instantiation of hyper-weak incorrigibilism is the case, viz.,

    (25) For every property F, if it's a member of C', then it is
	 necessarily true that: if r believes r has F, r does indeed
	 have F.

But now, in light of (23), (25) implies that

    (26) For every property F, if it is a member of C', then,
	 necessarily: if the formula corresponding to r's belief
	 that r has F is an element of r's knowledge base, then
	 r does indeed have F.

27. Let's suppose, then, that we have in the picture, along with our robot r, a certain particular property from C', say the property SEEMING TO BE IN PAIN, a property we'll designate 'F*'. It follows that

    (27) It is logically necessary that: if the formula corresponding to
	 r's belief that r has F* is in r's knowledge base, then r does
	 indeed have F*.

28. Now, having arrived at this point, let's turn to a simple and well-known fact about hardware (ANY hardware), namely, that it is physically possible (that is, not contrary to the laws of physics) for hardware to fail. Accordingly, it is physically possible that the substrate of r fails; and it is physically possible, in turn, that this failure is what causes the formula in question (the formula corresponding to r's belief that r has F*) to be in r's knowledge base. Since whatever is physically possible is logically possible:

    (28) It is logically possible that the formula in question is in r's
	 knowledge base while r does NOT have F*.

29. But (27) and (28), by an elementary law of modal logic (in brief: if it's logically necessary that if P then Q, then it's not logically possible that P while not-Q), form a contradiction. Hence, by indirect proof, our original assumption, that symbolicist Person Building will succeed, is wrong. I go on to consider and refute a number of objections to this line of reasoning, including one that is likely to come from connectionists.
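
The elementary modal law doing the work here can itself be checked by giving necessity and possibility their usual reading over possible worlds. Below is a minimal Lean 4 sketch; W is an illustrative placeholder type of worlds, and InKB and HasFstar stand for the two sides of (27) and (28).

    -- Read: box(phi) = phi holds at every world; diamond(phi) = phi
    -- holds at some world. (27) is box(InKB -> HasFstar); (28) is
    -- diamond(InKB and not HasFstar); together they yield absurdity.
    example {W : Type} (InKB HasFstar : W → Prop)
        (h27 : ∀ w, InKB w → HasFstar w)
        (h28 : ∃ w, InKB w ∧ ¬ HasFstar w) : False :=
      h28.elim fun w hw => hw.2 (h27 w hw.1)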

X. CHAPTER 10: CONCLUSION

30. The final chapter offers a retrospective view of the colorful thought-experimental characters visited along the journey and makes some brief concluding remarks about the overall case for the view that robots will largely do what we do, but won't be one of us.

XI. ERRATA

31. I have assembled a list of typos and so on (e.g., p. 19's list is in error: the first two conditionals are supposed to be prefixed by necessity operators, and the S5 derivability claim in line 9 should have its schematic conditional necessitated; Harnad (1991) was inadvertently omitted from the bibliography, etc.), but I should inform my critics that none of these glitches seems to spell genuine trouble for the arguments at the heart of the book. If any of my arguments fail, I'm afraid it will be due to deeper defects.

NOTES

#1. For a lucid discussion of the Turing Test, the Total Turing Test, and the Total Total Turing Test, see (Harnad, 1991). For an argument that even more stringent tests, mathematically speaking, than those considered by Harnad cannot separate the "thinkers" from the "pretenders," see Bringsjord (1994).

#2. For a fascinating account of a real-life idiot savant reminiscent of Jonah, see the case of Christopher, in Blakeslee (1991).

REFERENCES

Blakeslee, S. (1991) "Brain Yields New Clues on Its Organization for Language," Science Times of The New York Times, September 10, pp. C1, C10.

Bringsjord, S. (1994) Could, How Could We Tell If, and Why Should -- Androids Have Inner Lives. Chapter forthcoming in Android Epistemology, JAI Press, Greenwich, CT. Ken Ford & Clark Glymour, eds.

Bringsjord, S. (1993a) Church's Thesis, Contra Mendelson, is Unprovable ... And Worse: It May Be False. Presented at the annual Eastern Division meeting of the American Philosophical Association, December 30, 1993, Atlanta, Georgia.

Bringsjord, S. (1993b) Toward Non-Algorithmic AI. In Ryan, K.T. & Sutcliffe, R.F.E., eds., AI and Cognitive Science '92, Workshops in Computing series (New York, NY: Springer-Verlag), pp. 277-288.

Bringsjord, S. (1992a) What Robots Can and Can't Be. Boston: Kluwer.

Bringsjord, S. (1992b) CINEWRITE: an Algorithm-Sketch for Writing Novels Cinematically, and Two Mysteries Therein. Instructional Science 21: 155-168.

Bringsjord, S. & Zenzen, M. (forthcoming) In Defense of Non-Algorithmic Cognition (The Netherlands: Kluwer).

Burks, A. (1973) Logic, Computers, and Men. Presidential Address, Western Division of the American Philosophical Association, in Proceedings of the American Philosophical Association April: 39-57.

Charniak, E. & McDermott, D. (1985) Introduction to Artificial Intelligence (Reading, MA: Addison-Wesley).

Churchland, P.M. & Churchland, P.S. (1990) Could a Machine Think? Scientific American 262.1: 32-37.

Dennett, D. (1978) Brainstorms (Cambridge, MA: Bradford Books, MIT Press).

Dennett, D. (1976) Why The Law of Effect Will Not Go Away. Journal for the Theory of Social Behaviour 5: 169-187.

Doyle, J. (1988) Big Problems for Artificial Intelligence. AI Magazine, Spring: 19-22.

Harnad, S. (1991) Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem. Minds & Machines 1.1: 43-55.

Haugeland, J. (1986) Artificial Intelligence: the Very Idea (Cambridge, MA: Bradford Books, MIT Press).

Manning, R. (1987) Why Sherlock Holmes Can't Be Replaced By An Expert System. Philosophical Studies 51: 19-28.

Nelson, R.J. (1982) The Logic of Mind (Dordrecht, The Netherlands: D. Reidel).

Pollock, J. (1989) How to Build a Person: A Prolegomenon (Cambridge, MA: Bradford Books, MIT Press).

Rey, G. (1986) What's Really Going on in Searle's 'Chinese Room'. Philosophical Studies 50: 169-185.

Searle, J. (1990) Is the Brain's Mind a Computer Program? Scientific American 262.1: 25-31.

Searle, J. (1980) Minds, Brains, and Programs. Behavioral & Brain Sciences 3: 417-424.

Turkle, S. (1984) The Second Self (New York, NY: Simon & Schuster).

Turing, A. M. (1964) Computing machinery and intelligence. In: Minds and Machines, A. Anderson (ed.), Englewood Cliffs, NJ: Prentice-Hall.
