The Turing Test blurs the distinction between a model and (irrelevant) instantiation details. Modeling only functional modules is problematic if these are interconnected and cognitively penetrable.
2. I like Green's analysis of the Turing Test; I have never liked the test, especially in its popularly expressed form. But I would still defend functionalism. Science does not advance by producing indistinguishable replicas of reality but by producing models (representations of reality) in which the level of indistinguishability is functional. A model of DNA made out of chopsticks and ping-pong balls may be true (it makes all the correct predictions) but would never be confused with a real DNA molecule; an artificial heart may instantiate a theory of real heart function yet be made of very different stuff; and so on. The point is that in such cases there is a very clear sense of where to draw the line between what is to be imitated (modelled) and what are the largely irrelevant details of instantiation.
3. With the Turing Test, this line of demarcation is less than clear. Gross physical appearance is obviously one irrelevancy, but only one (and a trivial one); doing everything by keyboard does not go anywhere near far enough. The problem, it seems to me, is to a large extent the old one of interconnectedness, which is why modularity is so appealing and the fact of cognitive penetrability so worrying to folk like Pylyshyn. That is, no scientific model (or work of art) tries to capture all of reality; there is an agreed perspective that sets the rules (conventions) about what is to be preserved and what may be distorted. (Sorry to sound like Nelson Goodman.) But in studying mind it is difficult to draw up these rules, and the Turing Test says that, apart from the obvious ones, there are not to be any. Perhaps this is why some people think that the role of AI/CogSci is to produce the software equivalent of a humanoid robot. The Turing Test would be more plausible if the task were one in which the Turing Examiner attempted to distinguish between the human visual system and a computer's AI program in their efforts to discriminate cats from dogs. The problem, of course, is that for the human, the discrimination task is likely to be influenced by the state of other parts of the system (internal and external context: cognitive penetrability).
4. I thought the ideas would be strengthened if you tackled this issue of modularity in relation to the Turing Test more explicitly.
Green, C.D. (2000) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061)
ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green
http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061