Brown & O'Rourke (1994) argue that, in responding to an objection to one version of the Arbitrary Realization Argument (ARA1) against the computational conception of mind -- an argument I specify and defend in Chapter VI of What Robots Can and Can't Be (Bringsjord 1992) -- I unwittingly affirm (by adopting an agnostic attitude toward neuron-level functionalism) the negation of a premise in ARA1. Their argument, as I show below, is provably fallacious.
1. My first version of the Arbitrary Realization Argument (ARA1) runs as follows: In Chapter I of What Robots Can and Can't Be (Bringsjord 1992) it is established that the Person Building Project implies computational functionalism, one provisional version of which is
(F) For every two brains x and y, possibly constituted by radically
different physical stuff, if the overall flow of information in x and y, represented as a pair of flow charts (or a pair of Turing Machines, or a pair of Turing Machine diagrams, etc.) is the same, then if associated with x there is an agent s in mental state S, there is an agent s' associated with or constituted by y which is also in S.
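Put quasi-formally (my own regimentation, intended only as a gloss on the prose above), (F) says that for any brains x and y and any mental state S:

    (F')  If Flow(x) = Flow(y), then: if some agent s associated with
          x is in S, then some agent s' associated with or constituted
          by y is in S.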
So, with "PBP" abbreviating the proposition that the Person Building Project will succeed, we have the conditional
(PBP) -> (F).
Now, for ARA1, assume that (PBP) is the case. By modus ponens, then, we of course have (F).
2. Proceed to let s denote some arbitrary person, let B denote the brain of s, and let s be in mental state S*, fearing purple unicorns, for example. Next, imagine that a Turing machine M, representing exactly the same flow chart as that which governs B, is built out of 4 billion Norwegians all working on railroad tracks in boxcars with chalk and erasers across the state of Texas. From this hypothesis and (F), it follows that there is some agent m -- call it Gigantor -- constituted by M which also fears purple unicorns. But it seems intuitively obvious that (or, if you polled the "man on the street" about Norwegian-ridden Texas, he'd say that):
(1-VI) There is no agent m constituted by M that fears purple unicorns
(or, there ain't no Gigantor).
We've reached a contradiction. Hence our original assumption, (PBP), is wrong.
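In schematic form -- a compression of the reasoning just given, with (G) abbreviating the proposition that there is an agent constituted by M which fears purple unicorns -- ARA1 is simply:

    1. (PBP) -> (F)      established in Chapter I
    2. (PBP)             assumption, for reductio
    3. (F)               from 1, 2 by modus ponens
    4. (F) -> (G)        given the Norwegian scenario
    5. (G)               from 3, 4 by modus ponens
    6. not-(G)           the intuition (1-VI)
    7. not-(PBP)         from 2-6, by reductio ad absurdum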
3. After presenting this argument in the first two pages of Chapter VI: Arbitrary Realization, I spend the rest of the chapter unearthing and destroying objections to ARA1. One of these objections is a clever one from Cole and Foelber (1984) -- an objection which appeals to "neuron-level functionalism," the view that gradually replacing neurons with "silicon-based workalikes" will leave the associated mentation intact. For reasons which needn't detain us here, I'm agnostic about neuron-level functionalism; so I'm unmoved by the Cole-Foelber objection. This agnosticism is what Brown & O'Rourke attack.
4. Brown & O'Rourke offer the following argument against ARA1 (and here I quote their chained inference):
(1) Bringsjord admits to being agnostic about low-level functionalism.
(2) Low-level functionalism is plausible.
(3) Low-level functionalism, with a few more plausible assumptions,
implies claim 1-VI is false.
(4) So Bringsjord should be an agnostic about 1-VI.
(5) So Bringsjord should not find ARA1 compelling.
5. Unfortunately, the inference to (4) is fallacious, as is easily proved. The rule of inference relied upon here is
R-a If s is agnostic about p, and p implies q, then s should be
agnostic about q.
Agnosticism, with respect to an agent s and a proposition p, obtains if and only if it's not the case that s believes p, and it's not the case that s believes not-p. (So, for example, an agnostic with respect to Judaism doesn't believe that Yahweh exists, and doesn't believe that Yahweh fails to exist.) Hence, R-a is equivalent to
R-a' If s neither believes p nor believes not-p, and p implies q,
then s should neither believe q nor believe not-q.
6. Proving R-a' to be a fallacy is effortless: Suppose that Jones is agnostic about the proposition that there is life on planet X. Then by R-a', if this proposition implies some proposition q, Jones ought to be agnostic about q as well. But the proposition in question here, there is life on X, implies there is life simpliciter. And that there is life is surely something Jones needn't be agnostic about!
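Readers who like to see such counterexamples mechanically checked can run the following sketch, which encodes the standard possible-worlds treatment of belief; the two-world model (and the choice of Python) are of course my own illustrative devices, not anything in Brown & O'Rourke:

    # A toy possible-worlds model. An agent believes a proposition iff
    # it holds in every world the agent regards as possible; the agent
    # is agnostic about it iff it holds in some such worlds but not in
    # others. Here p = "there is life on planet X" and q = "there is
    # life". Note that p implies q in both worlds below.
    worlds = [
        {"p": True,  "q": True},   # life on X (hence life somewhere)
        {"p": False, "q": True},   # no life on X, but life elsewhere
    ]

    def believes(prop):
        return all(w[prop] for w in worlds)

    def agnostic(prop):
        return not believes(prop) and not all(not w[prop] for w in worlds)

    assert agnostic("p")   # Jones is agnostic about life on X...
    assert believes("q")   # ...yet properly believes there is life.
    # So R-a' fails: agnosticism about p, with p implying q, does not
    # force agnosticism about q.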
7. Though it may be painful, there really is no way to salvage Brown and O'Rourke's argument. Even if one grants for the sake of argument both that low-level functionalism is plausible and that this doctrine implies the falsity of 1-VI in the strictest sense of "imply," Brown & O'Rourke are in error. For consider this more general rule of inference (also instantiated in Brown & O'Rourke's argument):
R-a'' If s neither believes p nor believes not-p, and p is
plausible, and p formally implies q, then s should neither believe q nor believe not-q.
We can easily refute R-a'' with a simple mathematical counterexample. Suppose that Smith is agnostic about Goldbach's Conjecture (GC: every even number greater than or equal to 4 is the sum of two primes), one of the great open questions in number theory. GC implies, in the strictest, most precise sense, that every even number greater than or equal to 4 and less than 10 is the sum of two primes. But, of course, no one, Smith included, should be agnostic about the proposition that every even number greater than or equal to 4 and less than 10 is the sum of two primes!
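Lest anyone doubt that the implied proposition is checkable, a few lines of code settle it (a trivial brute-force sketch of my own devising):

    # Check: every even n with 4 <= n < 10 is the sum of two primes.
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    for n in range(4, 10, 2):
        pairs = [(a, n - a) for a in range(2, n - 1)
                 if is_prime(a) and is_prime(n - a)]
        assert pairs                    # the restricted claim holds
        print(n, "=", "%d + %d" % pairs[0])

    # Output: 4 = 2 + 2; 6 = 3 + 3; 8 = 3 + 5.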
8. Two final points. (1) It's perhaps worth noting that every formalization of knowledge and belief in use in Cognitive Science and AI implies that R-a, R-a', and R-a'' are invalid. Readers not yet baptized into the formalisms involved can readily verify this, starting, I suggest, with Genesereth & Nilsson's (1987) classic mathematization of belief and knowledge in a computational context, and ending, perhaps, with Moore's (1995) just-released treatment. (2) Brown and O'Rourke were perhaps led astray by confusing the fallacious rules isolated above with one that is quite plausible, viz.,
R-a''' If s is agnostic about p, and some proposition q implies p,
then s should be (at most) agnostic about q as well.
Indeed, it was with this rule in mind that I argued as I did in the chapter in question -- a chapter Brown & O'Rourke may now, given the above analysis, find compelling.
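For the computationally inclined, here is one way to carry out the verification recommended in point (1) above, and to see simultaneously why R-a''' survives. The exhaustive search over tiny two-atom models is my own device, not anything drawn from Genesereth & Nilsson or Moore, though it respects their possible-worlds idealization of belief:

    from itertools import combinations, product

    # Each world assigns truth values to two atoms, p and q. A "model"
    # is a nonempty set of worlds the agent regards as possible; belief
    # is truth in all such worlds, agnosticism is belief in neither an
    # atom nor its negation, and "implies" here means material
    # implication holding throughout the model.
    worlds = list(product([True, False], repeat=2))   # (p, q) pairs
    models = [m for r in range(1, len(worlds) + 1)
              for m in combinations(worlds, r)]

    P, Q = 0, 1

    def believes(m, i):
        return all(w[i] for w in m)

    def agnostic(m, i):
        return not believes(m, i) and not all(not w[i] for w in m)

    def implies(m, i, j):
        return all(not w[i] or w[j] for w in m)

    # R-a' is invalid: some model has an agent agnostic about p, with
    # p implying q, who nonetheless believes q.
    assert any(agnostic(m, P) and implies(m, P, Q) and believes(m, Q)
               for m in models)

    # R-a''' holds here: no model has an agent agnostic about p, with
    # q implying p, who believes q.
    assert not any(agnostic(m, P) and implies(m, Q, P) and believes(m, Q)
                   for m in models)

    print("R-a' refuted; R-a''' verified over all two-atom models")

The second assertion goes through because believing q, when q implies p throughout one's worlds, forces belief in p -- which is precisely what agnosticism about p rules out.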
Bringsjord, S. (1994) Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.
Bringsjord, S. (1992) What Robots Can and Can't Be. Dordrecht, The Netherlands: Kluwer Academic Publishers.
Brown, M. & O'Rourke, J. (1994) Agnosticism About the Arbitrary Realization Argument. PSYCOLOQUY 5(83) robot-consciousness.3.brown.
Cole, D. & Foelber, R. (1984) Contingent Materialism. Pacific Philosophical Quarterly 65(1): 74-85.
Genesereth, M. & Nilsson, N. (1987) Logical Foundations of Artificial Intelligence. San Mateo, CA: Morgan Kaufmann.
Moore, R.C. (1995) Logic and Representation. Stanford, CA: CSLI.