Herbert L. Roitblat (2001) Computational Grounding. Psycoloquy: 12(058) Symbolism Connectionism (25)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 12(058): Computational Grounding

COMPUTATIONAL GROUNDING
Commentary on Harnad on Symbolism-Connectionism

Herbert L. Roitblat
Department of Psychology
University of Hawaii
Honolulu, HI 96822, USA

roitblat@uhunix.uhcc.hawaii.edu

Abstract

Harnad (2001) defines computation in a conventional way to mean the manipulation of physical symbol tokens on the basis of purely syntactic rules. This definition inappropriately excludes analog systems, which can perform any computation that a symbolic system could (and vice versa). Because they have the same computational power, Harnad is arguably wrong that criticisms that apply to symbolic systems do not apply to analog systems. The real difference lies not in the continuous versus discrete properties of digital versus analog representations, but in what Harnad calls symbol grounding. Purely syntactic systems can transmit truth from the premises of an argument to the conclusion of the argument, but they cannot establish the validity of the premises. Symbol grounding is the process of establishing the premises. Although computers are capable of purely syntactic processing, for them to produce useful results there must be constraints on what the symbols stand for. These constraints are part of what grounds the symbols. Neither Chinese rooms nor Chinese gyms can escape the need to ground symbols for truly functional computation.

    REPRINT OF: Roitblat, H. L. (1993) Computational grounding. Think
    2: 12-78 (Special Issue on "Connectionism versus Symbolism" D.M.W.
    Powers & P.A. Flach, eds.).
    http://cwis.kub.nl/~fdl/research/ti/docs/think/2-1/index.stm

I. DEFINITION OF COMPUTATION

1. Harnad (2001) defines computation to mean the manipulation of physical symbol tokens on the basis of syntactic rules defined over the shapes of the symbols, independent of what, if anything, those symbols represent. He is, of course, free to define terms in any way that he chooses, and he is very clear about what he means by computation, but I am uncomfortable with this definition. It excludes, at least at a functional level of description, much of what a computer is actually used for, and much of what the brain/mind does. When I toss a Frisbee to the neighbor's dog, the dog does not, I think, engage in a symbolic soliloquy about the trajectory of the disc, the wind's effects on it, and formulas incorporating lift and the acceleration due to gravity. There are symbolic formulas for each of these relations, but the dog, insofar as I can tell, does not use any of these formulas. Nevertheless, it computes these factors in order to intercept the disc in the air. I argue that determining the solution to a differential equation is at least as much computation as is processing symbols. The disagreement is over what counts as computation; I think that Harnad and I both agree that the dog solves the trajectory problem implicitly. This definition is important because, although Harnad offers a technical definition for what he means by computation, the folk definition of the term is probably interpreted differently, and I believe this leads to trouble.
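
To make the contrast concrete, the sketch below (in Python, with made-up aerodynamic coefficients that are not meant as real Frisbee aerodynamics) shows the explicit, formula-driven route to the interception point that, on this account, the dog does not take.

    # Explicit, rule-following computation of a disc's flight by Euler
    # integration. The drag and lift coefficients are illustrative
    # assumptions, not measured Frisbee aerodynamics.
    def intercept_point(v0x, v0y, x0=0.0, y0=1.5, g=9.81,
                        drag=0.08, lift=0.05, dt=0.01):
        """Integrate the trajectory until the disc falls back to catch height."""
        x, y, vx, vy = x0, y0, v0x, v0y
        while True:
            ax = -drag * vx                       # horizontal deceleration
            ay = -g + lift * vx * vx - drag * vy  # gravity, lift, vertical drag
            vx, vy = vx + ax * dt, vy + ay * dt
            x, y = x + vx * dt, y + vy * dt
            if vy < 0 and y <= y0:                # descending past catch height
                return x

    print(f"Run to x = {intercept_point(12.0, 3.0):.1f} m")

The dog arrives at something like this answer without manipulating any of these symbols; the question is whether what it does instead still counts as computation.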

II. CRITICISMS OF COMPUTATION APPLY TO ANALOG SYSTEMS

2. Harnad claims that criticisms that apply to symbolic systems do not apply to analog systems. I think that this claim stems in part from his definition of computation as based on arbitrary symbol manipulation. If one accepts that computers can perform operations other than pure symbol manipulation, then, I think, the distinction evaporates. So-called symbolic systems can represent analog properties along various dimensions with arbitrary precision. In other words, a digital system can implement (I argue that it is truly an implementation rather than an emulation) an analog system with whatever precision one wants. Watches are an example of a system that exists in both digital and analog form.

3. An analog watch is an analog of the rotation of the earth (times two, i.e., two revolutions of the hour hand per day). The position of the hour hand, for example, revolves exactly twice for each rotation of the earth. The watch maps continuity between times onto continuity between positions. That is, for every degree the earth rotates, the hour hand rotates two degrees; two times that surround a third time have corresponding hand positions that surround a third position (e.g., 1:00 and 3:00 have 2:00 between them both in time and in hand position), and similarity between times corresponds proportionally to similarity between hand positions (within 12-hour limits).

4. A digital watch also represents the passage of time, but there are no analogs between the parts of the digital display and the passage of time. Symbols appear twice during each 24 hours. The symbols depend on the time of day, but the similarity between two displays does not correspond to the similarity between the two times. For example, 9:58 and 9:59 differ by one minute. 9:59 and 10:00 differ by the same amount, yet the first two times share many display features, while the second two times do not.
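
A small sketch in Python (the helper functions are hypothetical, written only for this illustration) makes the contrast between the two watches concrete: the analog mapping from times to hand angles preserves similarity between times, while a crude measure of similarity between digital displays does not.

    def hour_hand_degrees(h, m):
        """Analog: two revolutions per day, i.e. 0.5 degrees per minute."""
        return ((h % 12) * 60 + m) * 0.5

    def display(h, m):
        """Digital: the symbol string shown on the face."""
        return f"{h:d}:{m:02d}"

    def shared_chars(a, b):
        """Crude display similarity: matching characters, right-aligned."""
        a, b = a.rjust(5), b.rjust(5)
        return sum(x == y for x, y in zip(a, b))

    for (h1, m1), (h2, m2) in [((9, 58), (9, 59)), ((9, 59), (10, 0))]:
        gap = abs(hour_hand_degrees(h1, m1) - hour_hand_degrees(h2, m2))
        print(display(h1, m1), display(h2, m2),
              f"hands {gap:.1f} deg apart,",
              f"{shared_chars(display(h1, m1), display(h2, m2))} shared chars")

Both pairs of times differ by one minute, and hence by half a degree of hand angle, yet the first pair shares four of five display characters and the second pair only one.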

5. These two kinds of watches demonstrate clearly the difference between analog and symbolic representations of a continuous variable, but what does one make of those watches that are entirely computational, yet display an analog watch face on their liquid crystal display (LCD)? Is this watch analog or digital, analog or symbolic? From the point of view of behavioral consequences, the LCD watch might as well be analog, if all we wish is to tell the time. Its inner workings, however, are digital. Similarly, is a binary code for an integer symbolic or analog? If a binary code seems too symbolic, then how about a Gray code, in which the similarity between representations preserves the similarity between the numbers represented? My point is that it is often difficult to decide whether a system is analog or digital/symbolic even if one knows the architecture. Either can implement the other freely with no functional consequence. Hence, it is difficult to argue that one system is prey to certain criticisms while the other is automatically immune.
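
The Gray code point can be made in a few lines of Python; this is the standard reflected binary Gray code, not anything specific to the target article.

    def gray(n):
        """Standard reflected binary Gray code."""
        return n ^ (n >> 1)

    def bit_distance(a, b):
        """Hamming distance between two integers' bit patterns."""
        return bin(a ^ b).count("1")

    for n in [3, 7, 15]:
        print(f"{n} -> {n + 1}: binary differs in {bit_distance(n, n + 1)} bits,",
              f"Gray code differs in {bit_distance(gray(n), gray(n + 1))} bit")

In ordinary binary, neighboring integers can differ in arbitrarily many bits (7 and 8 differ in four), while in a Gray code they always differ in exactly one, so similarity of representation tracks similarity of the numbers represented.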

III. GROUNDING

6. The real issue in considering analog systems, however, lies not in the continuous versus discrete properties of the representation, but in what Harnad calls its symbol grounding. Harnad seems to focus on those analog forms of representation in which the variability of the representation is causally connected to the variability of the object (feature, dimension, etc.) that is being represented. Harnad reflects a common view when he claims that computational accounts of intelligence rest on syntactic symbol manipulation, i.e., that intelligence is implemented as a formal symbol system. A formal system, as Turing and others showed, can be used as a syntactic device for writing deductive relations. Formal systems can transmit truth from the premises of an argument to the conclusion of the argument. That is, if the premises of an argument are true, and the argument is made in the proper form, then the conclusions are guaranteed to be true. Formal systems, however, provide no mechanism for guaranteeing that the premises of the argument are true, because such guarantees are impossible. Rather, establishing the validity of a premise requires an induction, for which there are no infallible rules. By this analysis, symbol grounding is nothing more than specifying the premises of formal arguments. One of Harnad's major contributions is the reminder that symbolic arguments require premises but cannot independently specify them. Some outside mechanism (i.e., other than a purely syntactic process) must establish the premises. Searle (1980) seems to argue that the causal structure of the brain is essential to establishing premises; Harnad uses the concept of transduction for symbol grounding; but any mechanism that can establish the premises will do.
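
A toy forward-chaining rule applier (a Python sketch with invented token names) illustrates this division of labor: the inference step operates purely over the shapes of the tokens, and the derivation is equally valid whether the premises happen to be true or false.

    def derive(facts, rules):
        """Apply modus ponens over token shapes until nothing new follows."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedent, consequent in rules:
                if antecedent in facts and consequent not in facts:
                    facts.add(consequent)
                    changed = True
        return facts

    # The same purely syntactic machinery endorses both conclusions; which
    # premise set is true is settled entirely outside the system.
    print(derive({"socrates_is_a_man"}, [("socrates_is_a_man", "socrates_is_mortal")]))
    print(derive({"socrates_is_a_god"}, [("socrates_is_a_god", "socrates_is_immortal")]))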

IV. SYNTACTIC SYMBOLS AS COMPUTATION

7. Two characteristics that Harnad mentions as critical to the definition of computation are that computational systems (in his usage) employ only syntactic processes, and that computational systems are systematic. It seems to me that these characteristics are incompatible with one another. A syntactic process operates according to rules that are defined relative to the form of the symbol, rather than relative to its content. A systematic process is (briefly) one in which the symbols can be combined in different ways to represent different operations or expressions, and the meaning of the symbols, if they have any, remains intact throughout their usage. These two properties are incompatible in the trivial sense that one specifies the syntax of the system whereas the other specifies its semantics, but it seems to me that requiring a system to be systematic prevents it from being wholly syntactic. I will try to make this idea clear with the example of an ordinary digital computer.

8. At some level of operation a computer is entirely syntactic. Its memory registers contain binary numbers, its operation registers perform different functions depending on the instruction that is numerically coded in specific locations, and so on; it does not matter what the numbers it manipulates represent. At this level of description, the patterns of numbers in the registers, the number of registers, the operations to be performed in response to the various codes, etc., are highly machine specific, so by Harnad's definition these operations may not be computation.

9. At another level the computer implements programming language instructions that are more generalizable across machines (we can neglect machine-specific dialect differences), and symbols can be interpreted. At this level, however, the purity of the syntactic process is, I believe, lost. Symbols in a computer are represented as numbers -- patterns of 1s and 0s. There is nothing else to serve the symbolic function. A given number could stand for anything, but within the context of a computer program the meaning of the number is constrained, or the program is nonfunctional. The number 613, for example, could stand for the speed of a jet, the weight of the fuel, the number of its current mission, etc. The shape of the symbol in each case is arguably the same, but the way in which it is used is not determined by the shape of the symbol; rather, it is determined by its function relative to the instructions of the program. It is used in different ways depending on its meaning and its context. For example, if the program were to determine a safe route from the plane's current position to its designated landing field, then substituting the number of its current mission for the distance to the landing field could be disastrous if the two did not happen by accident to agree.
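
A short sketch in Python (the role names are hypothetical) makes the point: the bare value 613 has the same shape in every role, so nothing purely syntactic blocks the disastrous substitution; it is the extra role-marking constraints, a stand-in for meaning, that do the work.

    from typing import NewType

    # Role tags for otherwise identically shaped numbers.
    AirspeedKnots = NewType("AirspeedKnots", int)
    MissionNumber = NewType("MissionNumber", int)
    DistanceNm = NewType("DistanceNm", int)

    speed = AirspeedKnots(613)
    mission = MissionNumber(613)
    distance = DistanceNm(613)

    def eta_hours(dist: DistanceNm, spd: AirspeedKnots) -> float:
        """Hours to the landing field; meaningful only for the right roles."""
        return dist / spd

    print(eta_hours(distance, speed))  # the intended use: 1.0 hour
    print(eta_hours(mission, speed))   # a type checker rejects this, but at
                                       # runtime the shapes are identical and
                                       # the wrong answer looks just as good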

10. A counterargument to this assertion is that the shape of the symbol is given not by its value alone, but also by its location, etc. This is a plausible counterargument, but it would appear to demand that part of the shape of the symbol is determined by the meaning attached to it, because its meaning is the means by which it is entered into a particular position (the computer might have access to the inertial guidance system, which places the calculated location of the plane into a specific location in memory) and not into locations reserved for other kinds of information. Acceptance of this counterargument is acceptance that the relation between symbols and their meaning is not arbitrary. Hence, it implies that a purely syntactic system is not an adequate model of computation.

11. To continue this line of argument, one might then claim that the plane uses grounded symbols because of the causal connections between the inertial guidance system and the computer. One might claim that the computer in the plane passes a kind of limited total Turing test (pardon the apparent oxymoron) as a synthetic creature capable of navigating through the sky. The same argument, however, applies to other computer programs that do not have direct access to transducers. A computer program cannot behave systematically if it is provided with inconsistent data, no matter what the program is intended to compute. My argument is that all effective programs must have grounded symbols, that is, established premises, if they are to function systematically.

12. Some people argue that when the inputs are themselves verbal and symbolic, as in the standard Turing test, the grounding may be in the head of the interviewer, not in the computer program. Although Searle tells a good story about a Chinese room, it is not clear to me that a computer program could actually implement the kind of system that Searle discusses without some notion of the semantics. The notion of semantics could come from having an infinite code book that represents each word in each context (i.e., a unique symbol for each utterance), or it could come in some more compact form, but it must include semantic information, for example, to know that the bird flew out to the right fielder and that the batter flied out to the right fielder. Many other examples are available to demonstrate that semantics plays an important role even in determining appropriate syntax. The difficulty of machine translation, even given an attempt at a rich semantic lexicon, also suggests that semantics plays an ineluctable role in communication.
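
A toy code book (the entries are invented for illustration) shows why the lookup-table route is so costly: the right verb form has to be stored separately for each context, because the choice between "flew" and "flied" turns on what the subject is, not on the shapes of the symbols.

    # One entry per (subject kind, verb, tense): semantics smuggled in as
    # sheer enumeration.
    code_book = {
        ("bird", "fly", "past"): "flew",
        ("batter", "fly", "past"): "flied",  # the baseball sense of "fly out"
    }

    def past_tense(subject, verb):
        return code_book[(subject, verb, "past")]

    print(f"The bird {past_tense('bird', 'fly')} out to the right fielder.")
    print(f"The batter {past_tense('batter', 'fly')} out to the right fielder.")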

V. THE CHINESE ROOM

13. Although Harnad accepts Searle's Chinese room argument as compelling, I do not. It is simply not clear to me what more is required, beyond the ability to deal effectively with the language, to demonstrate that the system knows the language (the intrusion of semantics explains part of why I believe this). The same problem occurs, by the way, in second-language acquisition. Teachers of second languages have difficulty assessing the competence of their students, because the students can often produce substantial utterances with rather complicated syntax without being able to produce other similar utterances. Second-language teachers are often hard pressed to decide whether their students actually know the language they are studying. Because the students are human, however, and because they possess first-language skills (which sometimes are incompatible with second-language performance), observers and instructors are often willing to attribute knowing to the students, using standards no stricter than those applied to the Chinese room. Harnad's total Turing test is more stringent than Turing's original test, which Turing conceived as a substitute that covered for his difficulty in defining thinking, but it is unclear what conceptual criteria it adds. Further, although Harnad accepts Searle's Chinese room as compelling, he claims that the multiple-personality version of the same mechanism, the Chinese gym, actually does (or at least might) understand. Other than the fact that there are boys rather than Searle in the room, I fail to see how either Gedanken experiment is compelling. Searle could, for example, run around to take the place of each boy in turn and pass symbols to the other virtual boys without any loss of functionality.

14. I certainly cannot see that one person alone with the whole code book could possibly understand less than a room full of people, each of whom had only a part of the code book. A related question that has been lurking in my thoughts recently is this: what do we say if we replace Searle with someone who does understand Chinese, but we use some kind of cipher to prevent that person from recognizing that it is Chinese that is being passed? Can the person understand Chinese while the system that includes the person does not?

15. If I am correct that semantics plays an essential role both in computation and in language usage, then this position does not undermine Harnad's claim that symbol grounding is necessary; rather, it suggests the variety of ways in which symbols may be grounded, and it suggests that such grounding is necessary not only to minds, but to basic computer programs as well. The total Turing test may be a sufficient condition to establish that we have no reason to doubt the presence of mind in the robot, but it is far from guaranteeing that the robot has a mind. In any event, the tests we devise for ourselves and our systems are of no use unless we have the theoretical and technological bases to attempt to pass those tests. Lacking such theories, these tests are nothing more than operational definitions. A mind, then, is what a total Turing test tests. Without an underlying conceptualization, alternative operational definitions are incommensurable, because each is a definition of the term. What we need is a theory of mind. Harnad's investigations of these issues are likely to help the development of such theories.

REFERENCES

Harnad, S. (2001) Grounding symbols in the analog world with neural nets -- A hybrid model. PSYCOLOQUY 12(034) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.034

Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424. http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html http://www.bbsonline.org/documents/a/00/00/04/84/index.html

