We argue that Bringsjord (1992, 1994) should be agnostic about his version (ARA1) of Block's Arbitrary Realization Argument, because Bringsjord admits to agnosticism about low-level functionalism and low-level functionalism implies that the key claim of ARA1 is false.
1. There is much to admire (and dispute!) in Bringsjord's stimulating book, What Robots Can and Can't Be (1992). We choose to focus on an argument that is key to Bringsjord's rejection of "Artificial Intelligence Functionalism" (AI-F), and hence to his position that persons are not automata. Our comments are directed to the Arbitrary Realization Argument (ARA), and in particular, to ARA1, Bringsjord's version of Block's "Chinese Nation" argument (Block, 1978).
2. In Chapter IV of his book, Bringsjord "heartily agree[s]" with Pollack's "evidence for functionalism, generically conceived" (p. 125). But he remains unmoved because of the "formidable deductive arguments against (AI-F)" (p. 125) in Chapter VI, which are the various Arbitrary Realization Arguments. Argument ARA1 tells a Block-like story, replacing Block's Chinese population with Norwegians implementing a Turing Machine in Texas. Although the argument is indeed deductive, it hangs entirely on one claim (1VI): that the Turing Machine M implemented by the Norwegians in Texas does not constitute an agent with the appropriate mental state ("fearing purple unicorns" in his example). We would like to argue that, rather than believing that "ARA has once and for all demolished AI-Functionalism" (p. 223), Bringsjord should be agnostic with respect to claim 1VI and therefore not view ARA1 as a refutation of AI-Functionalism.
3. The following is a brief synopsis of our argument:
(1) Bringsjord admits to being agnostic about low-level functionalism.
(2) Low-level functionalism is plausible.
(3) Low-level functionalism, together with a few more plausible assumptions, implies that claim 1VI is false.
(4) So Bringsjord should be agnostic about 1VI.
(5) So Bringsjord should not find ARA1 compelling.
We now detail the argument, referring to the above points as "Step (1)," etc.
4. Step (1): Bringsjord's agnosticism about "low-level functionalism." In his parenthetical mention of the "scenario in which neurons are replaced one at a time by silicon-based work-alikes," Bringsjord says he is "agnostic" about this argument's presuppositions: materialism and low-level functionalism (p. 218).
5. Step (2): Low-level functionalism is plausible. Bringsjord's agnosticism is a reasonable position to hold, but we are more inclined to view low-level functionalism as quite likely true, albeit not yet firmly established. Let us distinguish several levels of functionalism, all varieties of low-level functionalism:
a. Particle-level functionalism. If every electron in a brain were replaced by a particle that is functionally equivalent to an electron, it is reasonable to suppose that the mental states would not change. There is every reason to believe that an anti-matter brain (in an anti-matter body, in an anti-matter universe) would operate just as our brains do.
b. Atom-level functionalism. Neuroscientists study the brain's use of certain substances by labeling them with rare isotopes that can be detected more easily than the naturally occurring isotopes (Kauppinen, Williams, Busza and van Bruggen, 1993). Underlying this procedure is the assumption that isotopes of an element can be substituted for one another without interfering with their function.
c. Molecule-level functionalism. Synthetic molecules with the right shapes and chemical properties can play functional roles in the brain. For example, various synthetic drugs are known to mimic several of the functional roles of opioid peptides.
d. Neuron-level functionalism. A number of authors have considered the scenario of replacing neurons with "silicon-based work-alikes" in Bringsjord's phrase: Cole & Foelber (1984) (whom Bringsjord cites), Cuda (1985), Chalmers (1993). Especially when the scenario is imagined with some detail (as by Chalmers), we find it quite plausible that the overall function of the brain would not be affected by such replacement. It is clear that it is neuron-level functionalism about which Bringsjord is agnostic.
6. Step (3): Neuron-level functionalism implies that 1VI is false. Since Bringsjord is agnostic about neuron-level functionalism, he should be agnostic about its consequences. In the following paragraphs, we will state a few plausible hypotheses and draw conclusions.
7. Hypothesis (a): The mental aspects of the brain are determined by its pattern of neuron activity. This is an assumption underlying much research in neuroscience (e.g., as stated explicitly by Changeux and Dehaene, 1989). Therefore, replacement of all the neurons in the brain by silicon "work-alikes" should maintain the mental aspects of the brain. Now we can view the silicon brain as a particular digital computer.
8. Hypothesis (b): The functionality of a computer design is not dependent on the physical material of which it is composed. Computers have been built out of silicon, brass gears (Babbage's "Analytical Engine"), fiber optics, even Tinkertoys (Dewdney, 1993). Therefore, replacement of the silicon brain-computer with one made of some other material should maintain the mental aspects.
9. Hypothesis (c): The functionality of a computer design is not dependent on its size. Bringsjord is "prepared to admit, for the sake of argument, that size can't make a difference," although he does have "some inchoate reservations" (p. 214). In any case, advances in miniaturization have well established that the size of a computer makes no difference to its functionality. Therefore, replacement of the brain-computer with Norwegians in Texas should maintain the mental aspects.
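The substrate-independence point behind Hypotheses (b) and (c) can be made vivid with a small sketch (our illustration, not Bringsjord's or Block's). A Turing Machine is fully specified by its transition table; any "medium" that follows the table step by step, however it does so, realizes the same machine. Here two deliberately different step-executors, hypothetically standing in for silicon and for Norwegians passing notes in Texas, produce identical behavior from the same table:

```python
# Transition table for a machine that flips each bit of its input and
# halts at the first blank.  Keys: (state, symbol) -> (state', write, move).
FLIP = {
    ("s", "0"): ("s", "1", +1),
    ("s", "1"): ("s", "0", +1),
    ("s", "_"): ("halt", "_", 0),
}

def run(table, tape_string, executor):
    """Run `table` on `tape_string`, delegating each step to `executor`.
    The executor stands in for the physical substrate."""
    tape = dict(enumerate(tape_string))
    state, head = "s", 0
    while state != "halt":
        state, head = executor(table, tape, state, head)
    return "".join(tape[i] for i in sorted(tape)).rstrip("_")

def silicon_step(table, tape, state, head):
    # Direct table lookup, as fast hardware might do it.
    new_state, write, move = table[(state, tape.get(head, "_"))]
    tape[head] = write
    return new_state, head + move

def norwegian_step(table, tape, state, head):
    # The same table consulted by a very different procedure: every rule
    # is polled in turn, as a bureaucracy of rule-followers might do it.
    for (st, sym), (new_state, write, move) in table.items():
        if st == state and sym == tape.get(head, "_"):
            tape[head] = write
            return new_state, head + move

print(run(FLIP, "0110", silicon_step))    # -> 1001
print(run(FLIP, "0110", norwegian_step))  # -> 1001
```

Both executors write "1001": nothing about the machine's input-output behavior depends on how, or at what scale, each step is physically carried out. Of course, the dispute is over whether such behavioral equivalence suffices for sameness of mental states, which is exactly what claim 1VI denies.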
10. Step (4): Since Hypotheses (a), (b), and (c) above are all plausible, Bringsjord should be agnostic about his 1VI. If his "inchoate reservations" about size are a sticking point, then in the absence of (c) he should be agnostic about "same-size" AI-Functionalism, a version of AI-F that demands that the functionally equivalent systems be the same size.
11. Step (5): Rather than finding ARA1 (which rests entirely on 1VI) "formidable" and taking it to have "demolished" AI-Functionalism, Bringsjord should extend his agnosticism about low-level functionalism to ARA1 and be as unmoved by it as we are.
Block, N. (1978) Troubles with Functionalism. In Readings in the Philosophy of Psychology, Vol. 1, Harvard University Press.
Bringsjord, S. (1992) What Robots Can and Can't Be. Boston: Kluwer Academic.
Bringsjord, S. (1994) Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.
Chalmers, D.J. (1993) Toward a theory of consciousness, Center for Research on Concepts and Cognition, Indiana University.
Changeux, J.-P. and Dehaene, S. (1989) Neuronal models of cognitive functions, Cognition 33:63-109.
Cole, D. and Foelber, R. (1984) Contingent materialism, Pacific Philosophical Quarterly 65.1: 74-85.
Cuda, T. (1985) Against neural chauvinism, Philosophical Studies 48: 111-127.
Dewdney, A.K. (1993) The Tinkertoy Computer, W.H. Freeman, 7-15.
Kauppinen, R.A., Williams, S.R., Busza, A.L., and van Bruggen, N. (1993) Applications of magnetic resonance spectroscopy and diffusion-weighted imaging to the study of brain biochemistry and pathology, Trends in Neurosciences Vol. 16, No. 3, 91-92.