Appropriateness of "restricting access to the inside of the black box" depends on context and goals. For judging whether an artifact exhibits "intelligent" behavior (which AI workers occasionally like to do), it may be appropriate to restrict access. However, if the goal is to determine whether a psychological model has the same underlying mechanisms as the human brain, it is appropriate to peer deep inside. Concerns about definitions of psychological entities have some validity but need not stop the work of cognitive scientists: if entities such as "thought" are useful explanatory concepts, then experimentation and modeling are as likely as philosophizing to help find good definitions of them.
2. A system whose behavior is indistinguishable from that of people must have captured something of the essence of intelligent behavior.
3. This is not a controversial claim, especially if one is willing to separate intelligence from (other) elements that constitute the essence of being human. A more troublesome claim seems to have been deduced from it: that cognitive scientists who use AI methods in computational models must believe the following:
4. A system whose behavior is indistinguishable from that of people must have the same mechanisms underlying its behavior as people.
5. This claim does not follow from the first; there is no contradiction in maintaining that two systems can have the same behavior while having different mechanisms. Furthermore, cognitive scientists who use AI methods are not prohibited from peering into the mechanisms and attempting to match aspects of those mechanisms against observations of people performing the same task.
6. To chastise "AI-ists" for "restricting access to the inside of the black box" is to confound the goals of AI with those of cognitive science. The restriction is not problematic when judging whether the black box can exhibit intelligent behavior. It is inappropriate when judging whether the mechanisms underlying the intelligent behavior of the black box are the same as those underlying the intelligent behavior of people.
7. The importance of the "Turing test" is overrated. Its usefulness does not lie in serving as a test of intelligence; rather, it is useful as a thought experiment that makes people think about what the essence of intelligence really is.
8. The difficulties with defining the entities of psychology are exaggerated. To be sure, there are difficulties with some mediating entities such as thoughts, but many entities are well defined. Two important classes are objectively measurable: stimuli (e.g. visual or aural stimuli, or direct neural excitation) and observable behavior (e.g. reactions, reaction times, and perhaps NMR imaging).
9. Reductionist explanations always bottom out at some level where the "explanation" is pure description of observed (or postulated) behavior. For example, highly successful and accurate explanations of the motions of the planets, and eventually of the precession of Mercury's orbit, were available well before any reductionist explanation of gravity was. Physicists made much progress without waiting for philosophers to sort out an appropriate ontology for them.
10. Cognitive psychologists can go on using AI methods and examining the "insides", while working with the entities which are widely considered to be well-defined. Progress on the definitions of more troublesome entities may be made in alliance with philosophers, but there is no necessity to wait until philosophers have it all worked out by themselves.
Green, C.D. (2000) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061). ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061