No disagreement with Powers (2001) about the power of computation to simulate neural nets, continuity, and parallelism. That is just the Church/Turing Thesis (that computation can simulate just about anything). But nets, continuity and parallel processing are just means, not ends. Real-world performance capacity, in contrast, is an end. And there is no way that a simulated transducer (optical, say) can transduce real light. It is the wrong causality, be it ever so Turing-equivalent to it. So a virtual robot or virtual brain is no more able to think than a virtual plane is able to fly.
REPRINT OF: Harnad, S. (1993) Harnad's response to Powers. Think 2: 12-78 (Special Issue on "Connectionism versus Symbolism", D.M.W. Powers & P.A. Flach, eds.). http://cwis.kub.nl/~fdl/research/ti/docs/think/2-1/index.stm
1. Powers's VTTT (Virtual TTT) is no TTT at all: it is just another TT, a symbols-only oracle. I assume, however, that Powers does not mean to invoke Dyer's (Harnad 2001a) and McDermott's (Harnad 2001b) Cheshire cat with this example, but only the possibility that, in principle, a simulated robot in a simulated world, simulated by a sufficiently ingenious (very nearly omniscient) modeller who had successfully anticipated and encoded everything relevant in both, could yield the complete blueprint from which to build a real TTT-passing robot through virtual-world testing alone. With that I don't disagree, though it seems just about as likely as Bringsjord's (2001) pongid poetry (and Powers seems to agree). But even then it would certainly not be just a matter of 'unplugging' the virtual robot from its virtual world and 'replugging' it into the real world, as Powers seems to suggest: if in doubt, try this first with virtual planets in a real sky.
2. And it would still only be the real TTT robot, successfully built from the principles learned from the VTTT, that was grounded. Virtual grounding is not grounding any more than virtual transduction is transduction. The causal connections between symbols and what they are interpretable as being about must be real. There is no way to break out of the symbolic circle through mere symbol/symbol connections.
3. Powers also seems to think that Searle (1980) can simulate transduction the same way he can simulate symbol manipulation. I'd like to hear this spelled out in a concrete case: say, transducing photons. Short of using his own eyes as add-on peripherals (which would of course be begging the question -- see my reply to McDermott, Harnad 2001b), the only way I can see to 'reconfigure' Searle so as to be able to do this would seem to call for more radical forms of engineering than mere software!
Bringsjord, S. (2001) People are infinitary symbol systems; no sensorimotor necessary. PSYCOLOQUY 12(038) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.038
Harnad, S. (2001) Grounding symbols in the analog world with neural nets -- A hybrid model. PSYCOLOQUY 12(034) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.034
Harnad, S. (2001a) Computation and minds: Analog is otherwise. PSYCOLOQUY 12(043) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.043
Harnad, S. (2001b) Software can't reconfigure a computer into a red herring. PSYCOLOQUY 12(055) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.055
Powers, D.M.W. (2001) A grounding of definition. PSYCOLOQUY 12(056) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.056
Searle, J. R. (1980) "Minds, brains and programs." Behavioral and Brain Sciences 3: 417-424. http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html http://www.bbsonline.org/documents/a/00/00/04/84/index.html