Roitblat (2001) thinks grounding has something to do with "specifying the premises of formal arguments." I think it has to do with causally connecting symbols to what they are about, directly and autonomously, rather than through the mediation of an external interpretation: My candidate is causal grounding via whatever internal resources it takes to make a robot successfully pass the Total Turing Test.
REPRINT OF: Harnad, S. (1993). Harnad's response to Roitblat. Think 2: 12-78 (Special Issue on "Connectionism versus Symbolism" D.M.W. Powers & P.A. Flach, eds.). http://cwis.kub.nl/~fdl/research/ti/docs/think/2-1/index.stm
1. Roitblat's (2001) understanding of all the elements in this discussion differs so radically from my own that space forces me to leave the reader to adjudicate our differences. All I can do is itemize some of the most prominent ones:
i. For me, obeying differential equations is physics and implementing symbol manipulations is computation. They are not the same kind of thing, and indeed, the difference between them, the hardware/software distinction, is what most of this discussion is about. (Nor, as far as I can see, does this have anything to do with the psychologists' hoary example [which is often cited to illustrate one insight or another, but has not yet led to any concrete results] that the dog [somehow] correctly estimates the trajectory of the frisbee without doing a conscious calculation.)
ii. Nor, from the equally venerable platitude that continuous signals can be approximated arbitrarily closely by discrete ones, does it follow that virtual objects are approximately real: tighten the approximation as much as you like, and a simulated waterfall will still never be wet. Roitblat seems to conflate discrete approximation of continuous signals -- a transducer function -- with systematically interpretable symbolic representation (see Harnad 1987).
iii. Roitblat seems to think grounding has something to do with "specifying the premises of formal arguments." I cannot see this at all. It has to do with causally connecting symbols to what they are about, directly and autonomously, rather than through the mediation of an external interpretation: My candidate is causal grounding via whatever internal resources it takes to make a robot pass the Total Turing Test (TTT).
iv. Roitblat seems to be using the words 'syntax', 'systematicity', and especially 'semantics' very differently from the way I do. I think my usage is more standard (e.g., see Fodor & Pylyshyn 1988), but let the reader judge.
v. We also differ in our construal of the Chinese Gym and Chinese Room Arguments, for which I can only refer the reader to the target article and the original sources (Searle 1980; Harnad 1989, 1990) for comparison.
Fodor, J., and Pylyshyn, Z. (1988) 'Connectionism and cognitive architecture: A critical analysis'. Cognition 28: 3--71.
Harnad, S. (1987) (ed.) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press. http://cogprints.soton.ac.uk/documents/disk0/00/00/15/71/index.html http://cogprints.soton.ac.uk/documents/disk0/00/00/15/72/index.html
Harnad, S. (1989) 'Minds, Machines and Searle'. Journal of Theoretical and Experimental Artificial Intelligence 1: 5--25. http://cogprints.soton.ac.uk/documents/disk0/00/00/15/73/index.html
Harnad, S. (1990) 'The Symbol Grounding Problem'. Physica D 42: 335--346. http://cogprints.soton.ac.uk/documents/disk0/00/00/06/15/index.html
Harnad, S. (2001) 'Grounding symbols in the analog world with neural nets -- A hybrid model'. PSYCOLOQUY 12(034). http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.034
Roitblat, H.L. (2001) 'Computational grounding'. PSYCOLOQUY 12(058). http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.058
Searle, J.R. (1980) 'Minds, brains and programs'. Behavioral and Brain Sciences 3: 417--424. http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html http://www.bbsonline.org/documents/a/00/00/04/84/index.html