Stevan Harnad (2001) Computation and Minds: Analog is Otherwise. Psycoloquy: 12(043) Symbolism Connectionism (10)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

COMPUTATION AND MINDS: ANALOG IS OTHERWISE
Reply to Dyer on Harnad on Symbolism-Connectionism

Stevan Harnad
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
SO17 1BJ
United Kingdom
http://www.cogsci.soton.ac.uk/~harnad/

harnad@cogsci.soton.ac.uk

Abstract

Like Dietrich (2001), Dyer (2001) seems to think that code can reconfigure a computer into anything, with any properties. It cannot; flying, heating and thinking are three cases in point.

    REPRINT OF: Harnad, S. (1993). Harnad's response to Dyer. Think 2:
    12-78 (Special Issue on "Connectionism versus Symbolism" D.M.W.
    Powers & P.A. Flach, eds.).
    http://cwis.kub.nl/~fdl/research/ti/docs/think/2-1/index.stm

1. Dyer (2001) thinks thinking corresponds to a level of organization in a computer, the differences among the ways this same organization could be implemented being irrelevant. The question I keep asking those who adopt this position is: Why on earth should one believe this? Look at the evidence for every other kind of example one could think of: Compare the respective virtual and real counterparts of planetary systems, fluids, electrons, furnaces, planes, and cells, and consider whether, respectively, movement, liquidity, charge, heat, flight and life might be 'organizational' properties that each pair shares. I see absolutely no reason to think so, so why think so of thought?

2. But, of course, it's possible, because unlike the observable properties I singled out above, thought is unobservable (with one notable exception, which I will return to). So perhaps thought can be correctly attributed to a virtual mind the same way quarks or superstrings (likewise unobservable) can be attributed to (real) matter (Harnad 2001a). Who's to be the wiser?

3. The first intuition to consult would be whether those of us who are not willing to attribute motion in any sense of the word to a virtual universe would be more willing to attribute quarks to it. I wouldn't; at best, there would just be squiggles and squoggles that were interpretable as matter, which was in turn interpretable as a manifestation of quarks: and virtual quarks, though just as unobservable as real quarks, do not thereby become real! Ditto, I would say, for virtual thoughts.

4. But again, it still seems possible (that's part of the equivocality of unobservables: they're much more hospitable to modal fantasies than observables are). This is of course the place to invoke the one exception to the unobservability of thought: The thinker of the thought. HE of course knows whether our attribution is correct (or not, although in that case no knowing is going on). And that is precisely the observation-point where Searle (1980) cleverly positions himself. For the thesis that thinking is just computation -- i.e., just implementation-independent symbol manipulation, with the thinking necessarily 'supervening' on every implementation of the symbol system -- is exquisitely vulnerable to the Chinese Room Argument. And this has nothing to do with 'levels.' Whether Searle manipulates the symbols in a higher-level programming language or all the way down at the binary machine code level, the question is: Is anyone home in there, understanding? Searle says 'no'; the symbols he manipulates are systematically interpretable as saying 'yes.' Whom should you believe?

5. I, for one, see no difference between Searle's implementation of the TT-passing computer and Searle's implementation of the planetary system simulator. In the latter case Searle also manipulates symbols that are systematically interpretable as motion, yet there is no motion in either case. What are the grounds for the special dispensation in the case of the mind simulation? Are we in the habit of thinking that merely memorizing and manipulating a bunch of meaningless symbols gives birth to a second mind?
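
To make the contrast concrete, here is a minimal sketch (in Python, purely for illustration; the particular names and numbers carry no weight) of the kind of planetary-system simulator at issue: it churns out symbols that are systematically interpretable as orbital motion, yet running it sets nothing in motion.

    import math

    def simulate_orbit(radius=1.0, period=365.0, steps=4):
        """Return successive (x, y) positions of a body on a circular orbit.

        The output is just a list of number pairs that are systematically
        interpretable as motion; nothing in the computer actually moves.
        """
        positions = []
        for step in range(steps):
            angle = 2 * math.pi * step / period
            positions.append((radius * math.cos(angle), radius * math.sin(angle)))
        return positions

    for x, y in simulate_orbit():
        print(f"x={x:+.3f}  y={y:+.3f}")  # squiggles interpretable as an orbit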

6. As I've said before, we risk being drawn into the hermeneutic circle here (Dyer 1990, Harnad 1990); such is the power of symbolic oracles that simulate pen-pals instead of planets: We can't resist the interpretation. But a step back (and out of the circle) should remind us that we're just dealing with meaningless squiggles and squoggles in both cases. Does Dyer really think that my readiness to believe that

    [1] "if the TTT could be passed by just a symbol-manipulator and
    some trivial transducers (an antecedent that I really do happen to
    doubt, and one that certainly does not describe the brain, which is
    doing mostly transduction and analogs of it all the way through),
    then, conditional on this unlikely antecedent, ablating the
    transducers would turn the mental lights off"

is all that much more counterintuitive than the belief that

    [2] "if the 'organization' of a computer simulating a planetary
    system is 'reconfigured' so it instead simulates a TT pen-pal, that
    would turn the mental lights on"?

7. And although there is no problem with (i) a real body with a real mind and real TTT capacity, like mine, whether it is interacting with a real world or Virtual Reality [VR], and likewise no problem with (ii) a real robot with real TTT capacity (and hence, by my lights, a real mind), whether it is interacting with a real world or VR, there is definitely a problem if you try to make the equation virtual on both ends -- i.e. (iii) with a virtual robot in a virtual world. For then all you have left is the Cheshire cat's smile and a bunch of squiggles and squoggles. This example makes it even clearer that it is only the (real!) sensorimotor transducer surface and the real energy hitting it that can keep such a system safely out of the hermeneutic circle.

8. I, by the way, do not define mind at all (we all know what it's like to be one) and insist only on real TTT-capacity, no more, no less. I think the science (engineering, actually) ends there, and the only thing left is trust.

REFERENCES

Dyer, M. G. (1990) Intentionality and Computationalism: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2(4): 303-319.

Dyer, M.G. (2001) Computationalism, neural networks and minds, analog or otherwise. PSYCOLOQUY 12(042) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.042

Harnad, S. (1990) Lost in the hermeneutic hall of mirrors. Journal of Experimental and Theoretical Artificial Intelligence 2: 321-327. http://cogprints.soton.ac.uk/documents/disk0/00/00/15/77/index.html

Harnad, S. (2001) Grounding symbols in the analog world with neural nets -- A hybrid model. PSYCOLOQUY 12(034) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.034

Harnad, S. (2001a) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information 9(4): 425-445. (special issue on "Alan Turing and Artificial Intelligence") http://cogprints.soton.ac.uk/documents/disk0/00/00/16/16/index.html

Searle, J. R. (1980) "Minds, brains and programs." Behavioral and Brain Sciences 3: 417-424. http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html http://www.bbsonline.org/documents/a/00/00/04/84/index.html
