Selmer Bringsjord (1998) Domain-independent Abstract Mediating States and AI. Psycoloquy: 9(53) Representation Mediation (2)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 9(53): Domain-independent Abstract Mediating States and AI

DOMAIN-INDEPENDENT ABSTRACT MEDIATING STATES AND AI
Commentary on Markman & Dietrich on Representation-Mediation

Selmer Bringsjord
Dept. of Philosophy, Psychology & Cognitive Science
Rensselaer Polytechnic Institute
Troy, NY 12180

http://www.rpi.edu/~brings

selmer@rpi.edu

Abstract

Markman & Dietrich propose a compromise between representationalist and anti-representationalist approaches to the mind: a framework based on the intended-to-please-all and presumably-impossible-to-deny notion of a generic mediating state falling between a cognizer and its external environment. M & D offer an argument for a framework that explicitly excludes a specific kind of mediating state, viz., an abstract, domain-independent one based on deductive logic. Aided by an informal experiment I have been performing for the last two months, I explain why this argument is anemic. Along the way I assemble evidence for two claims: (i) abstract, deductive states are undeniably present in some human minds, and (ii) their presence or absence in a given mind is a matter of what sort of education that mind has received. I conclude by explaining that (i) and (ii) support a pre-M & D framework for mediating states that is at the heart of contemporary "agent-based" AI.

Keywords

compositionality, computation, connectionism, discrete states, dynamic systems, explanation, information, meaning, mediating states, representation, rules, semantic content, symbols
1. The main argument provided by Markman & Dietrich (1998) (M & D) in their "Defense of Representation" for the view that there are few if any domain-independent abstract mediating states is tied to the psychology of reasoning. The argument can be summarized as follows (based on a reading of paras. 49 and 50):

    Argument 1

    (A) If human cognizers have and use abstract, domain-independent
    mediating states specified by deductive logic, then humans do not
    perform poorly on problems P1, P2, ..., Pn.

    (B) Humans perform poorly on problems P1, P2, ..., Pn.

Therefore (by modus tollens and universal instantiation):

    (C) Human cognizers don't have and use abstract, domain-independent
    mediating states specified by deductive logic.

2. (The other [enthymematic] arguments given by M & D against abstract, domain-independent deductive schemas are simply appeals to the fact that many thinkers believe that reasoning is concrete and domain-dependent. Such thinkers come from computer science, robotics, cognitive linguistics, and "mainstream research in cognitive science" [paras. 51-54]. Of course, in and of itself, M & D's reasoning here is nothing more than an appeal to authority. From the fact that X believes P it hardly follows that P.)

3. What are the Pi in premises (A) and (B) to be set to? In para. 49 M & D give us two specific kinds of problems, viz., Wason's selection task, and syllogisms. M & D give a version of the former; here is a sample problem involving syllogisms, given by Oakhill et al. (1989):

    Problem 1

    What follows from the following two premises?

    All the Frenchmen in the room are wine-drinkers.

    Some of the wine-drinkers in the room are gourmets.

Many subjects faced with this problem infer:

    Some of the Frenchmen in the room are gourmets.

But this doesn't follow. (Do you see precisely why? I provide a diagnosis of Problem 1 below.)

4. In both cases (the selection task and problems like Problem 1) there is no denying that the majority of human subjects perform poorly; so (B), suitably instantiated, is apparently true.

5. Because the selection task (in many different forms) and syllogisms have been discussed ad nauseam in the psychology of reasoning literature, let's focus instead on a recently published problem from Johnson-Laird and Savary (1995):

    Problem 2

    What can you infer about the hand in question from the following
    statement?

    If there is a king in the hand, then there is an ace, or else if
    there isn't a king in the hand, then there is an ace.

The vast majority of subjects infer that there must be an ace in the hand. But this, alas, is wrong (for formal reasons to be explained shortly). Once again, because very few subjects solve Problem 2, premise (B), with appropriate instantiation, should presumably be granted.

6. But in granting (B) we'd be presupposing a particular construal of a key term in Argument 1. The term 'humans' in this argument is ambiguous between at least 'all humans,' 'some humans,' and 'most humans.' No doubt we should understand 'humans' as 'most humans'; here's why. With 'all humans' supplanting 'humans' premise (B) is false; with 'some humans' supplanting 'humans' premise (A) is presumably false. Furthermore, the experiments involving Problems 1 and 2 (and classics like the selection task) in fact demonstrate only that most_ humans perform poorly on such tasks (there is always a small group that performs well). In light of this, Argument 1 needs to be modified:

    Argument 1'

    (A') If human cognizers have and use abstract, domain-independent
    mediating states specified by deductive logic, then most humans do
    not perform poorly on problems P1, P2, ..., Pn.

    (B') Most humans perform poorly on problems P1, P2, ..., Pn.

Therefore:

    (C) Human cognizers don't have and use abstract, domain-independent
    mediating states specified by deductive logic.

7. It's important to face up to the brute fact that some humans breeze through problems such as Problems 1 and 2. As a case in point, I have given Problem 2 to over 20 subjects with formal training in mathematical logic. All responded essentially with this proof, which supports the correct answer that there is not_ (surprised?) an ace in the hand:

    Proof 1

    The statement here says that one of the conditionals is true, but
    not both. (The phrase 'or else' indicates so-called exclusive_
    disjunction.) So it's either K -> A or ~K -> A, but not both. This
    means that one conditional is false. If K -> A is false, K is true
    but A isn't. If ~K -> A is false, ~K is true, but A is false.
    Either way, A is false. QED
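
Proof 1 can also be checked mechanically. Here is a brute-force sketch (my own illustration, not from M & D or Johnson-Laird & Savary) that enumerates the four truth assignments to K and A under the exclusive-disjunction reading and confirms that the ace must be absent:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

# Exclusive-or reading of Problem 2: exactly one of (K -> A) and
# (~K -> A) is true. Enumerate all truth assignments to K and A
# and keep the assignments that satisfy the statement.
models = [(k, a)
          for k, a in product([True, False], repeat=2)
          if implies(k, a) != implies(not k, a)]

# A is false in every satisfying assignment: there is no ace.
print(models)  # [(True, False), (False, False)]
```

The two surviving assignments both make A false, which is exactly the "surprising" conclusion of Proof 1.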

8. Problem 1 is easier. Subjects with a modicum of exposure to first-order logic almost invariably succeed on this problem: they declare that nothing non-trivial can be inferred. When presented with the purported syllogism in which 'Some of the Frenchmen are gourmets' is derived, and asked whether or not this specific inference is valid, they respond correctly via the following reasoning. (For a screen shot of a more explicit proof, carried out in the courseware for teaching deductive logic known as Hyperproof (Barwise & Etchemendy 1994), see Bringsjord et al. (1998), an on-line version of which is on my web site at http://www.rpi.edu/~brings/select.html#Technical Papers.)

    Proof 2

    'All Frenchmen in the room are wine-drinkers' becomes 'For all x,
    if x is an F, then x is a W,' i.e., in first-order logic (with 'Vx'
    for 'for all x...' and 'Ex' for 'there exists at least one x such
    that'...) this is:

     (a) Vx(Fx -> Wx)

    The second premise becomes

     (b) Ex(Wx & Gx)

    The purported conclusion becomes

     (c) Ex(Fx & Gx)

    A case in which (a) and (b) are true, but (c) is false, provides a
    counter-example to the purported syllogism; here's such a case.
    Suppose there are 5 objects. Let 2 of them be at once W and G; then
    (b) is true. Let none of the objects be F's. Then (a) is true, but
    (c) is not. QED
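
The counter-model in Proof 2 can likewise be verified by brute force. The following sketch (mine, for illustration) encodes the three predicates as sets over the five-object domain and checks the premises against the purported conclusion:

```python
# The five-object counter-model from Proof 2: no object is F
# (Frenchman); objects 0 and 1 are both W (wine-drinker) and
# G (gourmet).
domain = range(5)
F, W, G = set(), {0, 1}, {0, 1}

premise_a = all(x not in F or x in W for x in domain)   # Vx(Fx -> Wx)
premise_b = any(x in W and x in G for x in domain)      # Ex(Wx & Gx)
conclusion = any(x in F and x in G for x in domain)     # Ex(Fx & Gx)

# Both premises hold, yet the purported conclusion fails.
print(premise_a, premise_b, conclusion)  # True True False
```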

9. Oddly enough, Johnson-Laird has declared that Problem 2 has drawn a blank from all but one cognitive scientist he has presented it to from "Seattle to Stockholm." When in personal communication I pointed out to him that in my experiments Problem 2 is easily solved by suitably trained thinkers, he responded by giving me a new, more difficult problem which he said would be a challenge unlikely to be routinely met even by those schooled in formal logic, viz.,

    Problem 3

    If one of the following assertions is true then so is the other:

    (1) There is a king in the hand if and only if there is an
      ace in the hand.

    (2) There is a king in the hand.

    Which is more likely to be in the hand, if either: the king or
    the ace?

10. I've been carrying out an informal experiment with Problem 3 for about two months now: I've given it to a number of logicians and technical philosophers, and to deductively skilled student subjects as well. Each and every one has solved the problem within 10 minutes, some in a good deal less time. [Stop reading at this point for a bit if you want to try your hand at Problem 3.] Here's a typical solution:

    Proof 3

    The condition on (1) and (2) implies that either (1) and (2) are
    both true, or both false. But this matches the truth-table for
    biconditionals, i.e., formulas of the form P <-> Q are true just in
    case either P and Q are both true, or P and Q are both false. So we
    therefore know that (K <-> A) <-> K. But from this formula A can be
    proved, whereas K cannot. Therefore the answer is A. QED
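
Proof 3, too, reduces to a four-row truth table. A brute-force sketch of the check (again my own illustration), encoding the biconditional '<->' as Boolean equality:

```python
from itertools import product

# Satisfying assignments of (K <-> A) <-> K, with <-> encoded as ==.
models = [(k, a)
          for k, a in product([True, False], repeat=2)
          if (k == a) == k]

# A holds in every satisfying assignment; K holds in only one of
# them. So the ace is more likely to be in the hand.
print(models)  # [(True, True), (False, True)]
```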

11. Though my experiment is far from systematic, it should be obvious that my results have implications for Argument 1'. The clear upshot is that some human cognizers are in command of abstract, domain-independent mediating states based on deductive logic. This is established by the fact that subjects produce Proofs 1-3. These results highlight an ambiguity in premise (A'), one that parallels the ambiguity we detected above in (B), viz., the phrase 'human cognizers' in (A'), (A), and (C) is ambiguous between 'all human cognizers,' 'some human cognizers,' and 'most human cognizers.' My results demonstrate that the first of these three is unacceptable. The second can also be ruled out, because M & D, as well as, of course, all anti-representationalists, would hardly be content with the conclusion that some_ human cognizers aren't bearers of abstract mediating states based on domain-independent deductive logic. This yields the following rather weak argument.

    Argument 1''

    (A'') If most human cognizers have and use abstract,
    domain-independent mediating states specified by deductive logic,
    then most humans do not perform poorly on problems P1, P2, ...,
    Pn.

    (B') Most humans perform poorly on problems P1, P2, ..., Pn.

Therefore:

    (C') Most human cognizers don't have and use abstract,
    domain-independent mediating states specified by deductive logic.

12. The consequences of Argument 1'' for M & D's paper can be traced out from two points that we seem to have uncovered. Point 1 is that there are abstract, domain-independent, deductive states in some minds. In light of this, M & D should answer the question that forms the heading for their section C., "Are There Abstract Mediating States?", with a resounding "Yes". Such an answer would mark a significant change in this section of the paper (and would seem to imply that anti-representationalists cited in this section ought to change their views as well). Point 2 seems to imply that a much more serious adjustment of M & D's overall view may be in order. The point is that the nature of mediating states in humans, at least along the axis of concrete to abstract, may simply be determined by education. I see nothing in M & D's scheme that reflects this point.

13. Contemporary "agent-based" AI (Russell & Norvig 1995) seems to have long provided a perfect framework for both satisfying M & D's main desideratum (find a middle-ground between representationalists and anti-representationalists), and_ for accommodating Points 1 and 2. Such agents are defined (Russell & Norvig, Chapter 2) as supersets of the 'entities' referred to in the definition of mediating states given by M & D in their para. 12. So that this is clear, I assemble here a definition of agents from Russell & Norvig (1995):

    An agent is an entity which takes in information from the
    environment outside it via sensors, processes that information, and
    returns information to the environment in the form of actions taken
    via effectors. The agent must have goal states that it seeks to
    reach. The nature of the agent's internal states, and the
    processing involved with them, can vary across a spectrum:
    Sometimes the internal states can be based on deductive logic (as
    would be the case in an agent that solves Problems 1-3; see
    Bringsjord & Ferrucci, forthcoming); sometimes the internal states
    can be non-symbolic (as would be the case in an agent whose
    internal processing is based exclusively on artificial neural
    networks); sometimes the internal states can correspond very
    closely to the external environment (as when a visual scene of the
    environment is represented in a nearly isomorphic internal array);
    sometimes the internal states of the agent can have little or no
    correspondence to the outside environment (as in the case of an
    agent that computes the square root operator).
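
The sense-process-act loop in this definition can be rendered as a minimal sketch. The class and method names below are hypothetical illustrations of the scheme, not Russell & Norvig's actual code:

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """An entity that senses its environment, updates internal
    (mediating) states, and acts through effectors toward goal
    states. The nature of the internal states is left open."""

    @abstractmethod
    def process(self, percept):
        """Update internal states from a sensor reading."""

    @abstractmethod
    def act(self):
        """Return an action for the effectors, guided by goal states."""

    def step(self, percept):
        """One pass of the sense-process-act loop."""
        self.process(percept)
        return self.act()

class ThermostatAgent(Agent):
    """Toy agent whose internal state closely mirrors the environment
    (one end of the spectrum described above)."""

    def __init__(self, goal_temp):
        self.goal_temp = goal_temp   # goal state
        self.current = None          # mediating state

    def process(self, percept):
        self.current = percept

    def act(self):
        return "heat" if self.current < self.goal_temp else "idle"
```

As the quoted definition stresses, the internal states here could just as well be a theorem prover or an artificial neural network; only the sensor/effector interface is fixed by the scheme.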

The peculiar thing is that here is what M & D say about their framework and AI:

    This definition [para. 12] of a mediating state is quite general.
    It is intended to capture something that all cognitive scientists
    can agree to, namely, that there is INTERNAL information used by
    organisms or systems that mediates between environmental
    information coming in and behavior going out. Interestingly, most
    AI systems to date do not use actual mediating states, because the
    internal states do not actually bear any correspondence to entities
    outside the system.  (para. 12)

14. But of course the definition of the agent scheme I gave above is a striking parallel to the definition of mediating states given by M & D in their para. 12. The virtue of the agent scheme is not only that it comes with parameters that can be adjusted to accommodate Points 1 and 2 from above; it also comes replete with parameters that can be adjusted to include or exclude the five properties of mediating states (enduring, discrete, compositional, abstract, rule-governed) M & D discuss in their paper. In fact, in Russell & Norvig's book (1995), each of these five properties is explicitly parameterized. M & D, and readers of their paper, would do well to study this material. Cog Sci may well find that AI, its supposedly narrower cousin, offers the best consensus-building scheme for studying minds.

REFERENCES

Barwise, J. & Etchemendy, J. (1994) Hyperproof (Stanford, CA: CSLI).

Bringsjord, S., Bringsjord, E. and Noel, R. (1998) "In Defense of Logical Minds," in Proceedings of the 20th Annual Conference of the Cognitive Science Society (Hillsdale, NJ: Lawrence Erlbaum Associates), pp. 173-178.

Bringsjord, S. & Ferrucci, D. (forthcoming) "Logic and Artificial Intelligence: Divorced, Separated, Still Married..." Minds and Machines.

Johnson-Laird, P. and Savary, F. (1995) "How to Make the Impossible Seem Probable," in Proceedings of the 17th Annual Conference of the Cognitive Science Society (Hillsdale, NJ: Lawrence Erlbaum Associates), pp. 381-384.

Markman, A.B. & Dietrich, E. (1998) "In Defense of Representation as Mediation," PSYCOLOQUY 9 (48) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.48.representation-mediation.1.markman

Oakhill, J.V., Johnson-Laird, P.N. and Garnham, A. (1989) "Believability and Syllogistic Reasoning," Cognition 31: 117-140.

Russell, S. & Norvig, P. (1995) Artificial Intelligence: A Modern Approach (New York, NY: Prentice-Hall).

