Robert M. Hamm (1998) Characterizing Individual Strategies Illuminates Nonoptimal Behavior. Psycoloquy: 9(49) Social Bias (2)


CHARACTERIZING INDIVIDUAL STRATEGIES ILLUMINATES NONOPTIMAL BEHAVIOR
Commentary on Krueger on Social-Bias

Robert M. Hamm
Department of Family and Preventive Medicine
University of Oklahoma Health Sciences Center
900 NE 10th St.
Oklahoma City OK 73104 USA
http://www.fammed.ouhsc.edu/robhamm/index.html

robert-hamm@ouhsc.edu

Abstract

It may be good, as Krueger proposes, to test two theories with specific predictions against one another, rather than testing "people reason ideally" (one specific point) against "people are biased" (all other points). But the kind of theory chosen matters greatly. An example involving multiple specific predictions is described; because of its theoretical framework, it was able to yield useful conclusions.

Keywords

Bayes' rule, bias, hypothesis testing, individual differences, probability, rationality, significance testing, social cognition, statistical inference

1. The negative tone of research on the shortcomings of people's reasoning is just a question of style. Krueger (1998) is right to point out that we could adopt a more positive style, and perhaps our subjects, our students, and our funders would be happier to listen to us. Even so, it is still true that as science's understanding of optimal reasoning develops, we will frequently discover situations in which the average person's (or the expert's) reasoning neither equals nor approximates the optimal. Some of these demonstrations may seem trivial or contrived, but it is important to understand how we might improve reasoning in those situations that matter.

2. Krueger suggests that we are trapped in that negative tone because of the methodological imperative of the null hypothesis, which we don't really need to follow. Changing our methods might enable us to view human performance more positively. More important, changing our methods might enable us to more effectively improve people's reasoning.

3. One approach Krueger suggests is to pit the predictions of competing positively stated theories against one another, instead of setting one possible degree of belief, labeled "people make inferences right," against all else, "people make inferences wrong." Progress with this approach would depend on the theory we picked to test. What should such a theory look like? Furthermore, setting up competing theories does not necessarily deal with another feature of people's performance, namely the wide between-subject variability. Should that simply be interpreted as "error" around the predictions provided by the two competing theories?

4. The most useful theories would assert how people represent the situation and the task they must perform in that situation, and specify the strategies they use, within that representation, to accomplish the task (Cohen, 1993; Lipshitz, 1993; Anderson, 1993; Abernathy and Hamm, 1995, Chapters 4 and 5). Such a description would still offer us the opportunity to observe that people do not make judgments or inferences accurately. But now we would be able to describe the source of the inaccuracy more specifically: people either do not accurately represent the situation they find themselves in or the task we ask them to do in that situation, or they use an erroneous strategy when attempting that task. This approach might also help us identify the sources of the variability in subjects' responses. Insofar as this type of theory provides an accurate description, it could also offer suggestions for improving the accuracy of people's judgments or inferences, either by increasing the validity of their representation of the situation and the task, or by guiding people to choose more appropriate strategies for judging or inferring and to execute those strategies with more control.

5. I would like to give an example of a method, related to such a theory, which compared not two but a large number of hypotheses. I did a study (Hamm, 1987) of how people do probabilistic inference. College students stated their degree of belief in a hypothesis about an everyday situation or a medical diagnosis, in problems such as the Cab problem (Bar-Hillel, 1980). For each of three problems, subjects were given evidence E, a base rate p(H), and a conditional probability reflecting the accuracy of the evidence p(E|H) -- one piece of information at a time -- and asked to estimate the probability of the hypothesis given the evidence p(H|E). Because they were provided with numerical stimuli, one could make a reasonable guess about the strategy they used to combine the given numbers and produce their answer. (It is only a guess, because there is more than one way to combine a set of given numbers to produce an output number.) After the last piece of information had been provided, nine strategies for responding using either available numbers (including 0, 0.5, and 1 and their earlier answers to the problem) or combinations of available numbers accounted for 62%, 73%, and 79% of the responses to the Cab, Doctor, and Twins problems, respectively. (One particular strategy alone, to respond with the p(E|H) number when asked for p(H|E), accounted for 40%, 60%, and 9% of the responses, respectively.) Calculation of Bayes' theorem never occurred.
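
To make the normative benchmark concrete, here is a minimal sketch in Python (my illustration, not part of the study; it assumes the standard Cab problem figures, in which 15% of the cabs are Blue and the witness identifies colors correctly 80% of the time) of the Bayesian calculation the subjects never performed, next to the dominant observed strategy of answering with p(E|H):

    # Standard Cab problem figures (assumed for illustration).
    p_h = 0.15              # p(H): base rate; prior that the cab was Blue
    p_e_given_h = 0.80      # p(E|H): witness says "Blue" given a Blue cab
    p_e_given_not_h = 0.20  # p(E|~H): witness says "Blue" given a Green cab

    # Bayes' theorem: p(H|E) = p(E|H)p(H) / [p(E|H)p(H) + p(E|~H)p(~H)]
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    p_h_given_e = p_e_given_h * p_h / p_e

    print(f"Normative answer p(H|E): {p_h_given_e:.2f}")          # ~0.41
    print(f"Dominant observed answer p(E|H): {p_e_given_h:.2f}")  # 0.80

A subject who answers 0.80 is echoing the accuracy of the evidence rather than integrating it with the base rate, which pulls the normative answer down to about 0.41.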

6. This approach gave a way to account for the variability in subjects' responses other than simply assuming that it was due to noise added to a mean. Subjects gave different numerical answers because they had interpreted the task differently and had chosen strategies for accomplishing that task which made use of different numbers or combinations of numbers. Thus, the approach gave an account of most of the answers, not just the mean.
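
This strategy-level account can be made mechanical. The sketch below (again my illustration; the candidate list is a small subset of the nine strategies coded in the study, and the matching tolerance is an assumption) assigns each response to the closest candidate strategy built from the numbers available to the subject:

    def bayes_posterior(p_h, p_e_given_h, p_e_given_not_h):
        """Normative answer via Bayes' theorem."""
        p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
        return p_e_given_h * p_h / p_e

    def classify_response(response, p_h, p_e_given_h, p_e_given_not_h,
                          tol=0.02):
        """Label a p(H|E) estimate with the closest candidate strategy,
        or 'unclassified' if none comes within the tolerance."""
        candidates = {
            "echo p(E|H)": p_e_given_h,  # the dominant observed strategy
            "echo base rate p(H)": p_h,
            "respond 0": 0.0,
            "respond 0.5": 0.5,
            "respond 1": 1.0,
            "average the givens": (p_h + p_e_given_h) / 2,
            "Bayes' theorem": bayes_posterior(p_h, p_e_given_h,
                                              p_e_given_not_h),
        }
        name, predicted = min(candidates.items(),
                              key=lambda kv: abs(kv[1] - response))
        return name if abs(predicted - response) <= tol else "unclassified"

    # A subject answering 0.80 on the Cab problem is classified as
    # echoing p(E|H), not as computing the ~0.41 Bayesian answer.
    print(classify_response(0.80, 0.15, 0.80, 0.20))

Classifying each response this way, rather than measuring its distance from a single normative mean, is what lets the between-subject variability be read as a mixture of distinct strategies rather than as noise.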

7. The description of the set of strategies subjects adopted provides additional information. It suggests that the subjects did not represent the problem and their task in the same way our "optimal analysis" did. They evidently did not understand what the task required of them. This type of conclusion goes further than just "college students are biased in their probabilistic inference." It highlights an area where most people have no idea how to use the information provided to produce the requested answer. It tells us that to improve their reasoning, we need to attend to how they represent the situation and the task before we worry about their accuracy in selecting strategies for revising degrees of belief or in carrying those strategies out. It may also invite us to turn the interpretation around, to say "the problem is not phrased in a way that lets their ability to think about such things become manifest" (Gigerenzer, 1996).

8. This particular method may not be generally applicable, because it focuses on what the subjects did with numbers, and numbers may not play such a central role in other judgments. Nonetheless, it affirms Krueger's theme that it is helpful to use a variety of methods, and it supports the claim that it is useful to consider both the representation of the task and the strategies for accomplishing it. Psychologists should work to form and test theories about how people think, not just to evaluate whether their answers conform to a rule (Hammond, 1990). Rather than arguing about whether people are or are not biased, we can interpret studies as demonstrating a lack of knowledge about how to solve a problem, and we can address how to provide that knowledge effectively (von Winterfeldt and Edwards, 1986).

REFERENCES

Abernathy, C. M., & Hamm, R. M. (1995). Surgical Intuition. Philadelphia, PA: Hanley and Belfus.

Anderson, J. R. (1993). Rules of the Mind. Hillsdale, NJ: Erlbaum.

Bar-Hillel, M. (1980). The base rate fallacy in probability judgments. Acta Psychologica, 44, 211-233.

Cohen, M.S. (1993). The naturalistic basis of decision biases. In G.A. Klein, J. Orasanu, R. Calderwood, and C.E. Zsambok (Eds.), Decision Making in Action: Models and Methods (pp. 51-99). Norwood, NJ: Ablex Publishing Corporation.

Gigerenzer, G. (1996). The psychology of good judgment: Frequency formats and simple algorithms. Medical Decision Making, 16, 273-280.

Hamm, R.M. (1987). Diagnostic Inference: People's Use of Information in Incomplete Bayesian Word Problems (ICS Publication # 87-11). Boulder, CO: Institute of Cognitive Science, University of Colorado.

Hammond, K. R. (1990). Functionalism and illusionism: Can integration be usefully achieved? In R. M. Hogarth (Ed.), Insights in decision making: A tribute to Hillel J. Einhorn, (pp. 227-261). Chicago: University of Chicago Press.

Krueger, J. (1998). The bet on bias: A foregone conclusion? Psycoloquy 9(46) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.46.social-bias.1.krueger.

Lipshitz, R. (1993). Converging themes in the study of decision making in realistic settings. In G.A. Klein, J. Orasanu, R. Calderwood, and C.E. Zsambok (Eds.), Decision Making in Action: Models and Methods (pp. 103-137). Norwood, NJ: Ablex Publishing Corporation.

von Winterfeldt, D., & Edwards, W. (1986). Decision Analysis and Behavioral Research. New York, NY: Cambridge University Press.

