One might consider artificial hearts to be real hearts in the sense that they carry out all the "essential" functions of real hearts--viz., pumping blood through the body. If so, might not AI programs be considered to be really intelligent, regardless of how similar their internal processes are to those of a real mind? I argue that this is to confuse technology with science. Useful as artificial hearts are, they are not cardiological per se. Similarly, AI is useful, but this does not necessarily make it cognitive science, which is interested in how cognition is generated IN US, among other things.
2. What is fascinating here is that, although an artificial heart implements the actual function of a biological heart (i.e., pumping blood at the right speed), it implements that function in a manner very different from the biological one. In a biological heart, the dynamics of the fluid flowing through the muscle are crucial to its proper functioning (viz., to the valves opening and closing at precisely the correct times). If there is a significant reduction in the volume of blood, the dynamics change, the heart goes into arrhythmia, the remaining blood fails to be pumped properly, and the organism dies. In an artificial heart, by contrast, timing is controlled by an electronic timing device. If there is a reduction in blood volume, the timer is unaffected and the heart keeps on pumping regardless (even if there is no longer any blood to be pumped); if the organism dies, the heart must actually be turned off.
3. It seems to me that this is analogous to having a computer program that computes the right function (i.e., the one that results in real cognition), but employs an utterly different implementational scheme from that used in the mind or brain. The obvious questions here open onto a minefield of foundational issues. (1) Do we consider the artificial heart to be a real heart because it does what real hearts do? One might be inclined to say, "Yes, because hearts are functionally defined as blood-pumpers." (Of course, "blood" itself also requires some sort of definition here, probably a functional one as well.) (2) Does the actual manner of implementation of the "cardiac function" then matter? One who answers "yes" to question (1) would seem committed to the view, at least with regard to hearts, that it does not, as long as the correct function is instantiated. (3) And for minds, does it matter HOW the mental function is computed, as long as it IS computed? Again, one who answers "yes" to question (1) would seem committed to the answer that it does not, and thereby to "strong" AI.
4. I have made a crucial error along the way, however--one that I alluded to earlier in my reply to Chiappe and Kukla. Artificial hearts are great technology, but they are not cardiology itself; they are the technological RESULT of the science of cardiology. That they pump blood correctly is a great boon to cardiac patients, but they are of little interest to the cardiological researcher, who could learn nothing more from their study than was already known to their developer. In other words, just as I argued at the end of the target article, even if CF is right, the proper place for intelligent-program-building is after (at least some of) the principles of cognition are understood, and not so much as a way of figuring out what those principles are in the first place.
Green, C.D. (2000) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061
Lockhart, R.S. (2000) Modularity, Cognitive Penetrability and the Turing Test. PSYCOLOQUY 11(068) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.068.ai-cognitive-science.8.lockhart http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.068