Oatley's main tactic seems to be to call into question the computational competence of those who doubt the progress of AI-based cognitive science. By contrast, however, getting "down and dirty" with computer programs was instrumental in convincing me that there are many questions about cognition that cannot be answered simply by more programming. The programs themselves cannot (or should not) set the whole of our intellectual agenda.
2. Few would doubt that "AI is a method that allows us to get a firmer theoretical grip on certain problems in vision, reasoning, planning, language, and so on" (para. 3). I say as much myself ("Ontology rules!", para. 5). Nor would many doubt that computer simulations allow for a "firm theoretical grip" in meteorology, demographics, population genetics, and a host of other sciences of complex phenomena. Unfortunately, that alone shows little more than that the computer is a better pencil with which to write down theories than the traditional one (viz., one that automatically checks for logical consistency and generates empirical predictions, among other things). It has little bearing, however, on the question of whether the mind is, literally, a computer. Oatley's insistence that "the issue is COMPARISON and of functional principles, not whether a biological and artificial system are the same" (para. 6) suggests that he rejects this important distinction altogether. In short, as far as I can see, everything Oatley says in favor of AI supports "weak" AI, but is relatively independent of the question of the truth of CF.
3. As for "Fodor's sneer", I would rather concern myself with the truth of what Fodor says than with its tone. For what it's worth, I find Fodor amusing (but that may be a function of my "barbarian" American upbringing). It may be worth observing that Oatley's repeated insinuations to the effect that the original article, and the ensuing discussion of it, could only be of interest to armchair lollygaggers--not to serious scientists who "write programs" (which I have, by the way) and give lectures on "some basic AI concepts" (which I also have)--bear a bit of a sneering tone themselves. It may be true, as Oatley says, that the debate is not about "legislating for what is and what is not good science" (para. 3), but there is certainly room for discussion about just what would be sufficient to constitute support--empirical or rational--for CF. In fact, this question might not be a bad place to start one of those courses on AI and its applications to cognitive science.
Green, C.D. (2000) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061
Oatley, K. (2000) Fodor's sneer. PSYCOLOQUY 11(080) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.080.ai-cognitive-science.20.oatley http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.080