Ariella V. Popple (1996) The Existence-in-principle of the Rosetta Stone: Implications for Consciousness. Psycoloquy 7(37) Turing Test (5)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 7(37): The Existence-in-principle of the Rosetta Stone: Implications for Consciousness

THE EXISTENCE-IN-PRINCIPLE OF THE ROSETTA STONE:
IMPLICATIONS FOR CONSCIOUSNESS
Commentary on Bringsjord on Turing-Test

Ariella V. Popple
Department of Psychology
University of Durham
Science Laboratories
Durham DH1 3LE
United Kingdom

A.V.Popple@dur.ac.uk

Abstract

Bringsjord (1996) suggests that naive psychology will vanish once we become unsure whether our Netscape correspondents are human or just a bag of tricks. This pessimism stems from his assertion that humans are not automata, an assertion that may require allowing humans to contravene the laws of physics. Recent developments associating thermodynamics with information theory anticipate that contemporary quandaries about mind and brain may one day seem as incomprehensible as the idea that Egyptologists once argued about the existence-in-principle of the Rosetta Stone.

Keywords

False belief tests, folk psychology, naive psychology, the "other minds" problem, theory of mind, the Turing test.

1. The problem of studying consciousness scientifically is one of finding an appropriate framework in which to do so. This problem has arisen in the domain of artificial intelligence, within the framework of the Turing Test.

2. The Turing Test is the most general behavioural test of intelligence. The tester simply decides which of a human-machine pair is intelligent on the basis of their behaviour. Any subvariants of this test which examine a particular aspect of behaviour are simply special cases of the general Turing Test (Watt, 1996a; Popple, 1996; Bringsjord, 1996).

3. As Descartes was the first to point out, it is impossible to be certain (on the basis of their behaviour) that even your closest friends are not zombies or automata. Passing the Turing Test doesn't solve the "other minds" problem (Watt, 1996a, 1996b; Bringsjord, 1994).

4. The Turing Test therefore isolates two of the problems associated with consciousness. The first question, "Can mind appear in a physical thing like a computer?", is answered by the test. The second question, which the Turing Test does not address, is: "What is (the appearance of) mind?"

5. One thing a computer clearly doesn't do is violate the principle of causality, and this represents a third (almost discredited) problem of consciousness, not addressed by the Turing Test: "Why (appear to) have mind if it doesn't make any difference to things in the world?".

6. Marr's (1982) theory of three levels of analysis clarifies this existing framework. The computational level describes the INPUT-OUTPUT function. The Turing Test matches output (answers) to input (questions) and is therefore a test at the computational level. Marr's second level is the representational level, or level of algorithm. At this level input is transformed into output via a number of intermediate representations, by a process of logical necessity. A set of different algorithms may be computationally equivalent. In the same way, a computer and a person might both have the appearance of mind, one using an actual mind algorithm and the other using a different algorithm. The third level is the level of hardware implementation. Marr stated that "the same algorithm may be implemented in quite different technologies", and indeed this follows from the notion of a universal Turing machine. Any computer (given the right software) can mimic any other computer; indeed, a program or algorithm can be seen as a virtual machine, a description of a computer to be mimicked, rather than as a set of instructions for symbolic transformation.
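
To make the point concrete, here is a minimal sketch (an illustration of mine, not part of the original argument): two sorting procedures that use different algorithms and different intermediate representations, yet are indistinguishable at the computational level because they realise the same input-output function.

    # Two computationally equivalent algorithms in Marr's sense: the same
    # INPUT-OUTPUT function realised via different intermediate representations.

    def insertion_sort(xs):
        """Builds the output by repeatedly inserting into a sorted prefix."""
        out = []
        for x in xs:
            i = len(out)
            while i > 0 and out[i - 1] > x:
                i -= 1
            out.insert(i, x)
        return out

    def merge_sort(xs):
        """Builds the output by recursively splitting and merging halves."""
        if len(xs) <= 1:
            return list(xs)
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
        merged = []
        while left and right:
            merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
        return merged + left + right

    data = [3, 1, 4, 1, 5, 9, 2, 6]
    # At the computational level the two are indistinguishable:
    assert insertion_sort(data) == merge_sort(data)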

7. So where does this leave the scientific enquiry into consciousness? The brain's physiology can be unravelled to the minutest detail. The complexities of human behaviour can be recorded to the greatest psychophysical and psychosocial accuracy. Nevertheless, if hardware (i.e., neurophysiology) does not constrain process, and different algorithms may be computationally equivalent (i.e., produce the same psychophysical functions), then (provided mind IS software) consciousness lies outside what can be addressed by science.

8. Imagine a society in which irrational numbers have not been conceived of. The society is divided into "circumdualists", who believe that the circumferences of circles and their radii are made of different stuff, and "radiists", who believe that anything about circles that can't be measured on a rational scale of the radius either doesn't exist or makes no difference in the real world. From our enlightened position we can see that their rational scale of measurement has gaps which would be filled if they were to adopt a real-number scale.
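
The gaps are easy to exhibit (a worked aside of my own, not the commentary's): the best rational approximations to pi, taken from its continued fraction, get ever closer but never land on it.

    from fractions import Fraction
    import math

    # Successive best rational approximations to pi: each narrows the
    # error, but no fraction ever closes the gap entirely.
    for p, q in [(22, 7), (333, 106), (355, 113)]:
        err = abs(float(Fraction(p, q)) - math.pi)
        print(f"{p}/{q} misses pi by {err:.2e}")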

9. Resorting to esoteric theories of mind which invoke the weird properties of subatomic particles to explain phenomena at the level of consciousness would be a bit like the radiists discovering "e" and then proposing that "pi" might be measured on some rational scale of it.

10. Before the Rosetta Stone was discovered, the meaning of Ancient Egyptian hieroglyphics was sealed. Nevertheless, it is unlikely that Egyptologists debated its existence-in-principle. Although it might appear that, without the Rosetta Stone, there would be no way of choosing between the potential mappings of hieroglyphics onto any known language, the suggestion that no true mapping exists is one more likely to have been entertained by occultists.

11. What cognitive science needs is a framework which not only combats dualism, but also obliterates the distinction between dualism and behaviourism. This is something that artificial intelligence fails to provide.

12. The obvious approach is to tackle the software-hardware distinction. Connectionism takes up the slack in Marr's theory (the three levels are "logically and causally related") to argue that formulating algorithms from the bottom up, based on known properties of brain architecture, is as valid an approach as the traditional top-down one (Rumelhart and McClelland, 1986). The computational level is seen as the arena in which connectionist algorithms compete with the traditional artificial intelligence approach. Where traditionalists emphasize the relative autonomy of the three levels, connectionists emphasize their interactions.
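
As a minimal sketch of the bottom-up style (my illustration, not the commentary's): a tiny connectionist unit whose input-output function is not written down in advance but emerges from weight adjustments driven by error.

    import random

    # A one-unit perceptron learning OR: the algorithm is formulated from
    # the bottom up, as a rule for adjusting connection weights, rather
    # than as a top-down symbolic recipe.
    random.seed(0)
    w1, w2, b = random.uniform(-1, 1), random.uniform(-1, 1), 0.0
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    for _ in range(20):  # a few passes suffice for this separable task
        for (x1, x2), target in data:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out
            w1, w2, b = w1 + 0.1 * err * x1, w2 + 0.1 * err * x2, b + 0.1 * err

    # The learned input-output function now matches the training data.
    print([(x, 1 if w1 * x[0] + w2 * x[1] + b > 0 else 0) for x, _ in data])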

13. "Maxwell's demon" is an imaginary creature who violates the second law of thermodynamics by sorting fast from slow air molecules in a container and creating a heat (or pressure) difference between two sides of a partition. The entropy (or disorder) in the container is lowered at a minimal energy cost to the demon, who merely opens and closes a shutter in the partition. This violates the second law of thermodynamics which states that energy must be expended to decrease the entropy in a closed system. If the "demon" is considered an "intelligent being" it must expend energy while processing information about the air molecules to redeem the second law (Szilard, 1929).

14. Recent theories have emphasized the continuity between physical and algorithmic entropy. Zurek (1989a, 1989b) extended the definition of algorithmic entropy to physical systems, distinguishing between entropy that represents the randomness of the known aspects of the system and entropy that represents the observer's remaining ignorance about the system's actual state. The distinction resembles being able to specify an irrational number only to a given number of digits. Algorithmic complexity is adopted as the measure of randomness, and sets limits on the thermodynamic cost of computation. As long as Maxwell's demon is a universal Turing machine, it cannot violate the second law.
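
Algorithmic complexity itself is uncomputable, but compressed length gives a crude, computable upper bound; the sketch below (my illustration, not Zurek's method) shows a patterned string registering as far less random than a scrambled one by this measure.

    import random
    import zlib

    # Compressed length as a rough stand-in for algorithmic complexity:
    # the patterned string admits a short description, the random one does not.
    patterned = b"01" * 500
    random.seed(1)
    scrambled = bytes(random.getrandbits(8) for _ in range(1000))

    print(len(zlib.compress(patterned)))   # short: the regularity is captured
    print(len(zlib.compress(scrambled)))   # close to 1000: little regularity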

15. This yields two conclusions relevant to cognitive science. First, intelligent beings' processing of information must be subject to the same laws as universal Turing machines, unless they are to be potentially capable of violating the laws of physics. Second, if there is continuity between physical and algorithmic entropy, there must be continuity between causal and logical necessity. Perhaps naive psychology is buried in the brain after all.

REFERENCES

Bringsjord, S. (1994) Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Bringsjord, S. (1996) The Inverted Turing Test Is Provably Redundant. PSYCOLOQUY 7(29) turing-test.4.bringsjord.

Leff, H. S. and Rex, A. F. (1990) Overview. In Leff, H. S. and Rex, A. F. (Eds.) Maxwell's Demon. Entropy, Information, Computing. Adam Hilger: Bristol.

Marr, D. (1982) Vision. San Francisco: Freeman.

Popple, A.V. (1996) The Turing Test as a Scientific Experiment. PSYCOLOQUY 7(15) turing-test.2.popple.

Rumelhart, D. E. and McClelland, J. L. (1986) PDP Models and General Issues in Cognitive Science. in Rumelhart, D. E. and McClelland, J. L. (Eds.). Parallel Distributed Processing. Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press: Cambridge.

Szilard, L. (1929) On the decrease of entropy in a thermodynamic system by the intervention of intelligent beings. Zeitschrift fuer Physik, 53, 840-856. English translation in Leff, H. S. and Rex, A. F. (Eds.) Maxwell's Demon. Entropy, Information, Computing. Adam Hilger: Bristol.

Watt, S. (1996a) Naive Psychology and the Inverted Turing Test. PSYCOLOQUY 7(14) turing-test.1.watt.

Watt, S. (1996b) A Scientific Turing test? PSYCOLOQUY 7(20) turing-test.3.watt.

Zurek, W. H. (1989a) Algorithmic randomness and physical entropy. Physical Review. A 40, 4731-4751.

Zurek, W. H. (1989b). Thermodynamic cost of computation, algorithmic complexity and the information metric. Nature, 341, 119-124.

