Peter Ayton (1993) Base Rate Neglect: an Inside View of Judgment? Psycoloquy: 4(63) Base Rate (5)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 4(63): Base Rate Neglect: an Inside View of Judgment?

Commentary on Koehler on Base-Rate

Peter Ayton
Psychology Department
City University
Northampton Square
London EC1V 0HB
United Kingdom



Koehler (1993) shows that experiments purporting to demonstrate base rate neglect do not in fact do so. I consider the implications for psychological accounts of human judgment and reflect on whether base-rate neglect might nonetheless occur in some real-world situations.


Base rate fallacy, Bayes' theorem, decision making, ecological validity, ethics, fallacy, judgment, probability.


1. An early conclusion from research carried out in the 1960s was that judgments of subjective probability were deficient because they diverged from the probabilities prescribed by Bayes' theorem (Edwards, 1968). When confronted with evidence that required a revision of opinion, subjects typically gave judgments which implied that the evidence had insufficient influence on their judgments - they were "conservative."
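The Bayesian standard against which those "conservative" judgments were measured can be made concrete with the classic two-urn (bookbag-and-poker-chips) task. The following is a minimal sketch only; the urn compositions, the sample, and the "typical judgment" figure in the comment are illustrative assumptions, not data from Edwards's studies:

```python
def bayes_posterior(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H | evidence) via Bayes' theorem."""
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

# Hypothetical task: Urn A holds 70% red chips, Urn B holds 30% red.
# One urn is chosen at random (equal priors); 8 red and 4 blue chips
# are then drawn with replacement.
p_data_given_a = 0.7 ** 8 * 0.3 ** 4
p_data_given_b = 0.3 ** 8 * 0.7 ** 4
posterior_a = bayes_posterior(0.5, p_data_given_a, p_data_given_b)
print(round(posterior_a, 3))  # ~0.967
```

The normatively required revision (about 0.97) is far more extreme than the judgments subjects in such tasks typically gave (often nearer 0.7), which is what the "conservatism" label referred to.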

2. After a decade or so of research investigating conservatism, it was dropped (see Fischhoff and Beyth-Marom, 1983). Ironically, the discovery of base-rate neglect -- the antithesis of conservatism -- with its compelling heuristic explanation, was one cause of this. Less attention was given to critics of conservatism research, who argued that the experimental tasks were not appropriate measures of the inferences that people need to make in the real world. For example, Winkler and Murphy (1973) complained that "conservatism may be an artifact caused by dissimilarities between the laboratory and the real world." Subjects might not interpret the evidence in the task as Bayes' theorem prescribes because, in the real world, information is usually ambiguous or unreliable and not often conditionally independent. Koehler's (1993) target article argues, quite persuasively I feel, that the same sort of mistake has been made again. Once again, after a period of holding the subject responsible for observed disparities between judgment and the Bayesian normative standard, a curious reversal of this conclusion is reached; the "mechanical application" of Bayes' theorem to assess performance of judgmental tasks is considered inappropriate -- the normative standard is wrong -- and, consequently, the subject may be reprieved.


3. The main concern of most psychological research, it seems to me, is how particular competencies operate -- not how well. For example, in spite of the fact that there are many documented visual illusions, we do not worry about the competence of human vision. Studies of forgetting do not trouble us; rather, they cast light on memory processes. Unfortunately, perhaps because judgment research has often been motivated by comparing judgment with normative models of "rationality," there has been a tendency to treat failures of the models as failures of the subjects. Yet comparing judgment with normative models was originally proposed as a strategy for research -- not a test of the idea that people are rational. (Think how odd it would be if, for example, we heard of someone commencing research into digestion who, in the absence of any real knowledge of the topic, proposed to put forward a normative model and then to look for discrepancies from it. It would be even odder if it was claimed that discrepancies implied that digestion was incompetent.)

4. The research into heuristics and biases provided a methodology, a very vivid explanatory framework and a strong suggestion that judgment is not as good as it might be. However, the idea that all of this should be taken for granted was denied by the proponents of the research some time ago. For example, Kahneman and Tversky (1982) wrote: "The presence of an error of judgment is demonstrated by comparing people's responses either with an established fact... or with an accepted rule of arithmetic, logic or statistics. However, not every response that appears to contradict an established fact or an accepted rule is a judgmental error. ...The student of judgment should avoid overly strict interpretations, which treat reasonable answers as errors" (pp. 493-494). Nonetheless, by ways that Koehler plausibly explains, the myth of the base-rate fallacy -- and indeed of the general fallibility of judgment -- grew into "established fact."

5. The principal reason for interest in base-rate neglect was not merely that subjects made errors, but that it supported the notion that people made use of relatively simple but error-prone heuristics for making judgments. The idea, spelled out in Kahneman, Slovic and Tversky (1982), is that, due to limited mental processing capacity, strategies of simplification are required to reduce the complexity of judgment tasks and make them tractable for the kind of mind that people happen to have. However, if base-rate neglect does not occur then we must ask how that affects the evidence for judgmental heuristics in general. Some have argued (Gigerenzer, 1991; in press) that as the evidence for the effects is not valid, we should conclude that judgment is not made by heuristics and that, insofar as the assumption concerning mental heuristics is concerned, people don't happen to have that kind of mind. Koehler's target article strongly suggests to me that, as people do respond to base rates, they are not using a simple representativeness heuristic.


6. Others have also argued that the typical judgment tasks are contrived, unrepresentative of real world problems, and that subjects may legitimately view the tasks differently from the way experimenters do. Beach, Barnes and Christensen-Szalanski (1987) discuss a hypothetical experiment in which a wedding is randomly selected from all those occurring that day and the best man is asked for the probability that a randomly selected couple getting married that day will still be married in ten years' time. Beach et al. assume that if he knew the (rather depressing) base rate he would probably give it. However, if he was asked about the chances for the couple for whom he was best man then: "The base rate may influence his answer but only if he is particularly cynical" (p. 56). They argue that for the experimenter the two questions are really the same, but for the best man, the question specific to his friends is properly based on his knowledge about them and his theories about what leads to successful and unsuccessful marriages.

7. Of course, if you are asked about a wedding where you know the couple then you may properly use theories and information about them. However, if I were the best man (or even the groom) confronted with this question I would want to reflect the influence of base rates. Charges of cynicism might hurt a little (especially if it was my wedding day), but I know couples for whom my estimate of their chances of success would be considerably improved by reflecting on the base rate for divorce.

8. Strong advocacy of the reality of base-rate neglect in such situations, if not in the lawyer-engineer experiments, is still offered. Kahneman and Lovallo (1993) argued that people have a strong tendency to see problems as unique when they would be more advantageously viewed as instances of a broader class. They claim that the natural tendency in thinking about a particular problem, such as the likelihood of success of a business venture, is to take the "inside" rather than the "outside" view. People pay particular attention to the distinguishing features of a particular case and reject analogies to other instances of the same general type as crudely superficial and unappealing. Consequently, they will fall prey to fallacies of planning -- anchoring their estimates on present values or extrapolations of current trends. Once forecasters take the inside view, they will not seek out relevant statistical knowledge, will be less likely to formulate a realistic estimate, and will be overconfident about their forecasts.

9. Kahneman and Lovallo reviewed evidence which suggests that, because they take an inside view, people can be unrealistically optimistic (or, if failure is easier to imagine, pessimistic). They cite Cooper, Woo and Dunkelberg (1988), who showed that entrepreneurs interviewed about their chances of business success produced assessments that were unrelated to objective predictors such as college education, prior supervisory experience and initial capital. Moreover, more than 80% of them described their chances as 70% or better whilst the survival rate for new businesses is as low as 33%. Assuming that such bold forecasts are acted upon (though Kahneman and Lovallo argue that they are not), there is evidence for some real-world base-rate neglect.
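The arithmetic behind this charge of neglect can be made explicit. Suppose, purely for illustration, that an entrepreneur possesses individuating evidence that is twice as likely among surviving firms as among failures (a likelihood ratio of 2 -- an assumed figure, not one from Cooper et al.). Bayes' theorem in odds form then lifts the 33% base rate only to about 50%, nowhere near the 70%-plus forecasts the entrepreneurs reported:

```python
def update(base_rate, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds x likelihood ratio."""
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# 33% survival base rate; hypothetical favourable evidence (likelihood ratio 2).
print(round(update(0.33, 2.0), 3))  # ~0.496
```

On these (illustrative) numbers, even genuinely diagnostic evidence about one's own venture should leave the forecast heavily anchored on the base rate.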


10. Koehler argues that because the story is a beguiling and vivid one, the myth of base-rate neglect has persisted long after it should have been abandoned. I felt a (slight) pang of guilt on reading that; fallacies of judgment and reasoning make up a large part of my teaching. The students' reaction to the material, however, is perhaps more insightful than I had thought. When students are shown visual illusions for the first time, they are often quite fascinated. They usually want to see more. They express delight when they "see" the effects and some grumble their disappointment when they fail to experience certain effects that are dependent on specific lighting or viewing conditions. That this is not just because the demonstrations are a welcome break from the usual drudge of lectures is borne out by their quite different reactions to demonstrations of what some call "cognitive illusions." In my experience, those students who fall prey to base-rate neglect are more likely to grumble than those who don't. They often argue that they were misled, didn't understand the problem properly or that it was all just some silly trick. The stark contrast with the look of smug glee on the faces of the few who didn't give the "wrong answer" is quite the inverse of the reactions of those confronted with visual illusions.

11. No doubt students' reactions to the cognitive illusions are exacerbated by my reaction to the students. In the manner of a conjurer, I revel in their bafflement and celebrate their failure to solve problems as a vivid demonstration of the strength and pervasiveness of the effect. I am beginning to suspect, however, that my enjoyment of the situation is rather closely analogous to their enjoyment of visual illusions. Could it be that my enjoyment is caused by MY (unwittingly) suffering an illusion, rather than their doing so? Perhaps the only people who suffer any illusion in relation to cognitive illusions are cognitive psychologists; is there such a thing as the illusion illusion?


12. To the extent that base rates are utilised by subjects in the lawyer-engineer problems the evidence for the existence of simple heuristics in judgment is weakened. Perhaps, in reality, base rates are not properly used in judgments, but we lack strong evidence of this.


Beach, L.R., Barnes, V. and Christensen-Szalanski, J.J.J. (1987) Assessing human judgment: Has it been done, can it be done, should it be done? In Wright, G. and Ayton, P. (Eds) Judgmental Forecasting. Chichester: Wiley.

Cooper, A., Woo, C. and Dunkelberg, W. (1988) Entrepreneurs' perceived chances for success. Journal of Business Venturing, 3, 97-108.

Edwards, W. (1968) Conservatism in human information processing. In B. Kleinmuntz (Ed) Formal representation of human judgment. New York: Wiley.

Fischhoff, B. and Beyth-Marom, R. (1983) Hypothesis evaluation from a Bayesian perspective. Psychological Review, 90, 239-260.

Gigerenzer, G. (1991) How to make cognitive illusions disappear: Beyond heuristics and biases. In Stroebe, W. and Hewstone, M. (Eds) European review of Social Psychology, Vol 2. Chichester: Wiley.

Gigerenzer, G. (in press) Why the distinction between single event probabilities and frequencies is important for Psychology and vice-versa. In Wright, G. and Ayton, P. (Eds) Subjective Probability. Chichester: Wiley.

Kahneman, D. and Lovallo, D. (1993) Timid choices and bold forecasts: A cognitive perspective on risk taking. Management Science, 39, 17-31.

Kahneman, D. and Tversky, A. (1982) On the study of statistical intuitions. In: Kahneman, D., Slovic, P. and Tversky, A. (Eds) Judgment under uncertainty: Heuristics and Biases. Cambridge: C.U.P.

Kahneman, D., Slovic, P. and Tversky, A. (Eds) (1982) Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press.

Koehler, J.J. (1993) The Base Rate Fallacy Myth. PSYCOLOQUY 4(49) base-rate.1.koehler.

Winkler, R.L. and Murphy, A.M. (1973) Experiments in the laboratory and the real world. Organizational Behavior and Human Performance, 20, 252-270.
