Christopher D. Green (2000) Can We Have a Discovery We Don't Understand? Psycoloquy: 11(065) AI Cognitive Science (5)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(065): Can We Have a Discovery We Don't Understand?

Reply to Chiappe & Kukla on Green on AI-Cognitive-Science

Christopher D. Green
Department of Psychology,
York University
Toronto, Ontario M3J 1P3


Chiappe & Kukla argue that "saving the appearances" is not a sufficient basis for scientific success. In the main, I would agree, but I do not think we all need to "down tools" and develop an adequate conceptual scheme for cognitive science prior to doing any empirical or computational research in the field. Doing this and doing research go hand in hand, each informing the other.


KEYWORDS: artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.
1. I find myself having relatively little to say by way of response to Chiappe and Kukla's (2000) commentary. In the main, I am in agreement with their, and Fodor's, opinions on the matter. From a certain vantage point, it seems little short of amazing that one would have to argue that the point of science is knowledge, and that predictions, far from being the main goal, are simply a part of the mechanism by which one tests whether one has acquired the knowledge one sought from the start. But that seems to be what things have come to in the late 20th century. Logical positivism, and its legacy, seem to have confused us quite deeply about what science is about.

2. To keep this from becoming too much of a love-in, however, I do have a couple of critical comments. I think that the story of the hacker who accidentally stumbles upon a Turing-test-passing program actually confuses the issue somewhat. I expect that the response from many AI-ists would be, "Of course this is a scientific discovery! Even if the hacker is some sort of idiot-savant, who doesn't understand why or how his program works--an assumption that itself stretches the boundaries of belief--now that we have the artefact we need, we can have experts study its code and figure out the principles underlying its operation." This response would, I suspect, strike Chiappe and Kukla as being entirely misguided, and I think I know why.

3. Philosophers often couch a priori arguments in everyday, or even science-fiction-like, terms, primarily for entertainment value. This is usually harmless enough, as other philosophers are able to "read through" the surface details to the core argument. When such arguments are circulated in the scientific community, however, something funny happens. Scientists often mistake the argument for a parable--a kind of argument-by-example--and then try to bring practical concerns to bear against it that don't really touch on the central issue. The philosophers, astounded at the misinterpretation, typically reply with something like, "No, you missed the point entirely...", to which the scientists, now insulted, fire back something along the lines of, "You philosophers, holed up in your ivory towers, don't understand the day-to-day facts of the matter...", and off we go. This is a common, if not prevalent, mode of discourse in discussions of Searle's Chinese Room and Putnam's Twin Earth.

4. In an effort to evade this trap of talking past each other, allow me to (attempt to) explicate. I take Chiappe and Kukla's point to be that the artefact alone does not constitute a scientific discovery because it does not, in itself, add to our understanding of cognition. Fascinating as it might be in its own right, if the serendipitous inventor literally has no idea why it works, then it poses just as many problems as does the human head. In this regard, at least, the project of science and the project of technology, so often conflated, depart from one another. One can have a technology one doesn't understand (aspirin, as I understand it, is such a technology; we don't know how it kills pain, we only know that it does), but one cannot have a scientific discovery that one does not understand because scientific discovery just is an act of understanding.

5. A second comment I have on Chiappe and Kukla is more of a hope than a critique. It involves the implications of cognitive holism. I can only hope, contra Fodor, that if general cognitive processes ("central systems", as Fodor calls them) are holistic, this does not make the science of cognition impossible in principle, as they suggest. My only argument is admittedly a weak one: other sciences have been found to contain holistic problems that are seemingly intractable, but this has not prevented these sciences from proceeding. The classic three-body problem in mechanics is among these, I believe. The motion of any one body cannot be determined without knowing the motions and masses of the other two bodies, but this doesn't stop physics dead in its tracks. Of course, the bodies themselves are not holistically defined--only their motions--and so psychology may have an extra problem in this regard. But if cognitive holism blocks psychology, the repercussions will be nothing short of shattering (viz., we may be forced to follow the Churchlands' lead, not because they are right about there not being any beliefs, but because there is no way to study them scientifically). Worse still, the impossibility of a science is a hard thing to prove, and an even harder thing to convince people of. So, if "higher" cognitive science were impossible in principle, I suspect that many of us would continue on blithely and futilely to the bitter end.
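The three-body point can be made concrete. Although the system admits no general closed-form solution, physics proceeds anyway by integrating Newton's equations numerically. The sketch below is purely illustrative and not from the text: it uses plain Python, units with G = 1, and the well-known "figure-eight" equal-mass initial conditions, stepping the motion forward and checking that total energy stays conserved even though the trajectories cannot be written down analytically.

```python
# Planar three-body problem, integrated numerically (G = 1).
# Illustrative only: equal masses on the "figure-eight" periodic orbit.

def accelerations(masses, pos):
    """Newtonian gravitational acceleration on each body."""
    acc = [[0.0, 0.0] for _ in masses]
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += masses[j] * dx / r3
            acc[i][1] += masses[j] * dy / r3
    return acc

def step(masses, pos, vel, dt):
    """One velocity-Verlet (leapfrog) step: good long-run energy behaviour."""
    a0 = accelerations(masses, pos)
    pos = [[p[0] + v[0] * dt + 0.5 * a[0] * dt * dt,
            p[1] + v[1] * dt + 0.5 * a[1] * dt * dt]
           for p, v, a in zip(pos, vel, a0)]
    a1 = accelerations(masses, pos)
    vel = [[v[0] + 0.5 * (b[0] + c[0]) * dt,
            v[1] + 0.5 * (b[1] + c[1]) * dt]
           for v, b, c in zip(vel, a0, a1)]
    return pos, vel

def total_energy(masses, pos, vel):
    """Kinetic plus pairwise gravitational potential energy."""
    kin = sum(0.5 * m * (v[0] ** 2 + v[1] ** 2)
              for m, v in zip(masses, vel))
    pot = 0.0
    for i in range(len(masses)):
        for j in range(i + 1, len(masses)):
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            pot -= masses[i] * masses[j] / (dx * dx + dy * dy) ** 0.5
    return kin + pot

masses = [1.0, 1.0, 1.0]
pos = [[0.97000436, -0.24308753], [-0.97000436, 0.24308753], [0.0, 0.0]]
v3 = [-0.93240737, -0.86473146]
vel = [[-v3[0] / 2, -v3[1] / 2], [-v3[0] / 2, -v3[1] / 2], v3]

e0 = total_energy(masses, pos, vel)
for _ in range(1000):
    pos, vel = step(masses, pos, vel, 0.001)
e1 = total_energy(masses, pos, vel)
# Energy is approximately conserved across the run, despite the absence
# of any analytic solution for the individual trajectories.
```

The point of the sketch is only that analytic intractability did not halt celestial mechanics; prediction simply shifted to numerical methods.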

6. Finally, as I mentioned in the introduction, I do not think that AI researchers must "down tools and become philosophers," but I do hope that people in cognitive science learn to be AI researchers AND philosophers (and psychologists and linguists and anthropologists, etc.). Philosophy, like science, is not so much a topic as a way of doing business; a way to which everyone must resort from time to time.


REFERENCES

Chiappe, D.L. & Kukla, A. (2000) Artificial Intelligence and Scientific Understanding. PSYCOLOQUY 11(064)

Green, C.D. (2000) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061)
