Joachim Krueger (2001) Social Bias Engulfs the Field. Psycoloquy: 12(009) Social Bias (22)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

SOCIAL BIAS ENGULFS THE FIELD
Reply to Ward on Krueger on Social-Bias

Joachim Krueger
Department of Psychology
Brown University, Box 1853
Providence, RI 02912
http://www.brown.edu/Departments/Psychology/faculty/krueger.html

Joachim_Krueger@Brown.edu

Abstract

Ward (2000) justifies contemporary research on social-perceptual biases by suggesting that biases are rare and that, because of their rarity, they reveal the properties of the social-perceptual apparatus. I take this argument to mean that social biases are analogous to visual illusions: odd but informative. Sometimes this analogy works, but as a general theoretical platform it is inadequate. I address this epistemic disagreement by disputing three of Ward's specific claims. Pragmatically, however, I agree with Ward that some biases demand attention because they yield large effects and undesirable social consequences.

Keywords

Bayes' rule, bias, hypothesis testing, individual differences, probability, rationality, significance testing, social cognition, statistical inference

I. EPISTEMIC DISAGREEMENTS

1. The errors-and-biases approach to social cognition has constructed a contemporary psychopathology of everyday life. In my target article, I argued that current research practices virtually guarantee the detection of social-perceptual biases because norms for rational (i.e., unbiased) responding are narrowly identified with null hypotheses, with ranges of bias lying on either side of the point of no difference (Krueger 1998a). I agree with Ward (2000) on the power of some of these biases to attract attention and to stimulate the imagination, but I disagree on how much can be learned from this attention-grabbing property. Whereas Ward hopes that this property is an honest cue to scientific merit, I suspect that it tells us more about researchers' preconceptions than about social perception. Ward presents three specific arguments in defense of bias research. It is my impression that these arguments are widely shared presumptions in the research community. Therefore, I will address them in some detail.

2. The first argument is that a bias is a figure set against a ground of accurate and adaptive judgment. Biases are "inherently interesting topics of investigation, perhaps because they are NOT the norm" (Ward 2000, paragraph 3). The term "norm" usually refers to the prescriptive standard against which human performance is evaluated. Here, however, Ward suggests that biases are also non-normative in the sense of being rare. This is a surprising claim because the very success of the errors-and-biases paradigm has led to the view that errors are endemic and ubiquitous (see Piattelli-Palmarini 1994 for a particularly zealous exposition). As I argued in the target article, the use of NHST (Null Hypothesis Significance Testing) has been part and parcel of this development. NHST produces cumulative evidence of errors, whereas rational judgment is a non-finding (p > .05) and thus remains unaggregated (Krueger in press). By lumping individual differences with random error, the errors-and-biases paradigm suggests that everyone is systematically biased. The suggestion that biases are rare is contradicted by the archive of biases erected under the paradigm itself. To believe that biases are rare is to believe that the studies conducted are risky (i.e., that p(H0) is high), when in fact, given adequate statistical power, they are safe bets.
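
To make this point concrete, consider a minimal simulation in Python (my own illustration with arbitrary numbers; it is not drawn from the target article or from Ward's commentary). When the null hypothesis is a point value, even a trivially small systematic deviation is certified as a significant "bias" once samples are large enough:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_bias = 0.05  # a trivially small true deviation from the point null of 0
    for n in (20, 200, 2000, 20000):
        sample = rng.normal(loc=true_bias, scale=1.0, size=n)
        t, p = stats.ttest_1samp(sample, popmean=0.0)
        print(f"n={n:6d}  t={t:7.2f}  p={p:.4f}")

As n grows, p falls below .05 with near certainty. In this sense, a well-powered test of a point null is a safe bet rather than a risky one.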

3. The second argument is that a bias, when detected, enables us to "learn about both: We learn how social perception usually works and we learn how it is fallible" (Ward 2000, paragraph 4). Ward refers again to the 'novelty' of biases, but he does not explain how the conjunction of novelty and norm violation helps us learn about the nature of both biased and unbiased judgment. Others have postulated the revealing power of errors more explicitly. One version of this argument is that social-perceptual biases are "cognitive illusions" analogous to visual illusions (Kahneman & Tversky 1996). Visual illusions are both rare and revealing. They emerge when clever displays trick the visual system into disclosing the secrets of its everyday success (Gregory 1991). In contrast, there is no simple way in which judgmental biases reveal the effective functioning of human inference under ordinary circumstances (Funder 1987; Krueger 1998b). How does the evidence for the fundamental attribution error, for example, reveal that most inferences are accurate? When insufficient adjustment for situational causes is cast as the finding of interest, the magnitude of the adjustment that did occur is overlooked or considered trivial. A bias toward dispositional inferences might be acceptable if indeed most actions were caused by the person rather than the situation. However, the same research tradition that documents the attribution bias also insists that social behavior is overwhelmingly controlled by the situation (Ross & Nisbett 1991). To learn more about how social perception usually works, it seems necessary to also measure its inferential successes, specifically those that are realized with minimal effort (Gigerenzer & Goldstein 1996). Then, some biases can be understood as overgeneralized ways of thinking that usually work well (McKenzie 1994). [1]

4. The third argument addresses the difficulties researchers have had finding consensus on norms for rational judgment. In the target article, I provided examples of such disagreements for the three exemplary biases in consensus estimation, self-perception, and attribution (Krueger 1998a). Others have addressed the normative question in, for example, the areas of confidence judgments (Dawes & Mulford 1996; Erev, Wallsten & Budescu 1994) and hypothesis testing (Oaksford & Chater 1994). To get past these disagreements, Ward suggests that criteria for bias might incorporate the research participants' own perspectives on rationality. "Perhaps researchers should [ask whether] participants themselves admit they have made an error" (Ward 2000, paragraph 7). Sometimes, this approach yields interesting results. Baron and Hershey (1988), for example, found that many participants both showed an outcome bias and realized that they should not have done so. Their evaluations of the quality of a decision depended in part on its consequences, which participants agreed should be ignored because the decision-maker did not know them at the moment of choice. Yet, had participants not meta-cognitively realized the irrelevance of the outcomes, their evaluations of the decisions would still have departed from the normative model. In some judgment domains, a separation of bias and knowledge of bias may not be feasible at all. Try, for example, to demonstrate an overconfidence bias using the criterion that participants must know that their own confidence levels are exaggerated. It is equally hard to imagine how participants in a Wason study could know that they can test the rule 'if p, then q' by turning over the not-q card, but then turn over the q card instead (the logic is spelled out in the sketch following this paragraph). Often, participants passionately defend non-normative judgments, as I had occasion to observe when trying to persuade students of the irrationality of honoring sunk costs. In other cases, participants accept the normative model but see no reason to apply it to themselves. In self-enhancement research, for example, individuals may agree with the normative rule that only half of them can be better than average, and yet most of them can maintain that they are among that half. What is true in the aggregate need not be true for the individual. In still other cases, participants do not even know that they are doing what some investigators consider to be irrational. Social projection (i.e., false consensus) appears to occur without much insight (Krueger 1998c). One student managed to project and deny projection in the same breath: "I, like most people, do not generalize from myself to others" (Clement, Sinha & Krueger 1997, p. 134).
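
For readers unfamiliar with the Wason task, the following sketch (my own illustration with hypothetical card labels, not anyone's original materials) makes the logic explicit: the rule 'if p, then q' can be refuted only by a card that pairs p with not-q, so only the visible p and not-q cards are informative.

    def can_falsify(visible_side):
        # Only a card combining p with not-q can refute 'if p, then q'.
        if visible_side == "p":      # hidden side might show not-q
            return True
        if visible_side == "not-q":  # hidden side might show p
            return True
        return False                 # the not-p and q cards cannot refute the rule

    for card in ("p", "not-p", "q", "not-q"):
        print(card, "->", "informative" if can_falsify(card) else "uninformative")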

II. PRAGMATIC AGREEMENTS

5. Ward suggests that "sometimes the numbers themselves tell the story" (Ward 2000, paragraph 8). I agree that some effect sizes are so large that NHST does not play a critical part in the evaluation of the evidence. Ward's studies on the reactive devaluation of negotiated settlements and on attitudinal contrast effects are good examples. The normative principle violated by reactive devaluation appears to be coherence. If people reject whichever alternative they are offered but accept an alternative as soon as it becomes unavailable, the impediments to conflict resolution can indeed be serious. Judgmental coherence is a fundamental (and minimal) property of rational choice (Dawes 1998; Krueger 2000). Preference reversals, framing effects, and violations of transitivity are well-known examples of incoherence. Ironically, the search for consistency (or coherence) was a central topic in social perception research before the cognitive revolution. But even then, the dim view of the social perceiver was common (Heider 1958 dissenting). According to the theory of cognitive dissonance, for example, people are motivated to establish consonance among their beliefs even if they can reach consonance only by irrational means (e.g., the denial of a prior attitude; Festinger 1957).
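
The notion of incoherence invoked here can be stated precisely. The sketch below (with hypothetical options A, B, and C; it does not reproduce Ward's studies) checks whether a set of pairwise preferences admits any consistent ranking. A cyclic pattern, such as preferring A to B, B to C, and C to A, admits none and is in that sense incoherent.

    from itertools import permutations

    # Hypothetical pairwise preferences: (x, y) means "x is preferred to y".
    prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # cyclic, hence intransitive

    def consistent_ranking_exists(options, prefers):
        # True if some strict ordering of the options agrees with every preference.
        for order in permutations(options):
            rank = {opt: i for i, opt in enumerate(order)}
            if all(rank[a] < rank[b] for a, b in prefers):
                return True
        return False

    print(consistent_ranking_exists(("A", "B", "C"), prefers))  # False: no coherent ranking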

6. As Ward suggests, it is important to study the consequences of various social-perceptual judgment patterns. To be sure, sometimes what we call bias is associated with poor consequences and can "lead to deadly outcomes" (Ward 2000, paragraph 11). Again, however, these consequences cannot necessarily serve as criteria for whether the judgment was poor. Such an inference itself could be a case of outcome bias. Each of the three exemplary biases (consensus, enhancement, attribution) has been shown to yield both desirable and undesirable consequences. Therefore, it is essential to study individual differences in judgment and the conditions under which consequences vary (Stanovich & West 1998).

POSTSCRIPT ONE

7. I am not persuaded by the suggestion that the fathers of the false consensus effect never meant to imply that projection is erroneous (Ward 2000, paragraphs 5 and 6). They conceptualized this bias without reference to actual consensus and thus left the departures of consensus estimates from that reality benchmark unexamined (Ross, Greene & House 1977). Nevertheless, they referred to the difference between the consensus estimates provided by item endorsers and nonendorsers as "distortions" and "errors" (pp. 298-299). Ross and Anderson (1982) reiterated this view verbatim (pp. 143-144), and Nisbett and Ross (1980) explicitly equated consensus bias with inaccuracy. "People presume that a larger fraction of others behave as they themselves behave and hold opinions that they themselves hold, than is actually the case" (p. 76, emphasis added). Remaining convinced that consensus bias had to be false, Ross and Nisbett (1991) concluded a decade later that "people fail to recognize the degree to which their interpretations of the situation are just that -- constructions and inferences rather than faithful reflections of some objective and invariant reality" (p. 85).

POSTSCRIPT TWO

8. Many studies of social-perceptual biases are flawed in that they set up rational judgment as a straw-man hypothesis. The question of environmental determinism versus human agency is an instructive case for comparison. Successful studies demonstrate significant effects of experimentally manipulated environmental stimuli. Such studies extend the reach of deterministic external causes of human behavior ever further, chipping away at a remainder that, as we already know, cannot itself be demonstrated. With the success of this research paradigm (see Bargh & Chartrand 1999 for an excellent example), the range of the unexplained is condemned to perpetual shrinkage. Because that range confounds the uninteresting (random variation) with the metaphysical ("Free Will"), it remains scientifically intractable. In contrast, I hope that the contributions to the thread on social bias have shown that rational thought can be demonstrated with appropriate methods. Rational thought need not be what is left over when all irrationalities have been revealed.

NOTE

[1] Arkes and Ayton (1999), for example, attributed the failure to ignore sunk costs in decision making to the overgeneralization of the reasonable injunction against wastefulness. Similarly, Baron and Hershey (1988) emphasized that outcome bias may arise, in part, from people's knowledge that good decisions typically yield good results. The founders of the heuristics and biases paradigm themselves acknowledged that heuristic inferences are often correct (Tversky & Kahneman 1973). Frequency estimates by availability, for example, are correct inasmuch as actual observed frequencies are associated with stronger memory traces (and they are).
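
As a toy illustration of this last point (my own, with arbitrary numbers): if the strength of a memory trace increases with an event's actual frequency, then even noisy availability-based estimates will correlate substantially with the true frequencies.

    import numpy as np

    rng = np.random.default_rng(1)
    true_freq = rng.integers(1, 100, size=50).astype(float)  # actual event frequencies
    trace = true_freq + rng.normal(0.0, 15.0, size=50)       # trace strength tracks frequency, noisily
    r = np.corrcoef(true_freq, trace)[0, 1]                  # availability estimates inherit this link
    print(f"r = {r:.2f}")  # substantially positive: availability is often a valid cue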

REFERENCES

Arkes, H. R. & Ayton, P. (1999). The sunk cost and Concorde effects: Are humans less rational than lower animals? Psychological Bulletin 125: 591-600.

Bargh, J. A. & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist 54: 462-479.

Baron, J. & Hershey, J. C. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology 54: 569-579.

Clement, R. W., Sinha, R. R. & Krueger, J. (1997). A computerized demonstration of the false consensus effect. Teaching of Psychology 24: 131-135.

Dawes, R. M. (1998). Behavioral decision making. In D. T. Gilbert, S. T. Fiske & G. Lindzey (Eds.) Handbook of social psychology (4th ed., Vol. 1, pp. 497-548). Boston: McGraw-Hill.

Dawes, R. M. & Mulford, M. (1996). The false consensus effect and overconfidence: Flaws in judgment or flaws in how we study judgment? Organizational Behavior and Human Decision Processes 65: 201-211.

Erev, I., Wallsten, T. S. & Budescu, D. V. (1994). Simultaneous over- and underconfidence: The role of error in judgment processes. Psychological Review 101: 519-527.

Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.

Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgment. Psychological Bulletin 101: 75-90.

Gigerenzer, G. & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review 103: 650-669.

Gregory, R. L. (1991). Putting illusions in their place. Perception 20: 14.

Heider, F. (1958). The psychology of interpersonal relations. Hillsdale: Erlbaum.

Kahneman, D. & Tversky, A. (1996). On the reality of cognitive illusions. Psychological Review 103: 582-591.

Krueger, J. (1998a). The bet on bias: A foregone conclusion? PSYCOLOQUY 9(46). ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.46.social-bias.1.krueger http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?9.46

Krueger, J. (1998b). Enhancement bias in the description of self and others. Personality and Social Psychology Bulletin 24: 505-516.

Krueger, J. (1998c). On the perception of social consensus. Advances in Experimental Social Psychology 30: 163-240.

Krueger, J. (2000). Distributive judgments under uncertainty: Paccioli's game revisited. Journal of Experimental Psychology: General 129 (4).

Krueger, J. (in press). Null hypothesis significance testing: On the survival of a flawed method. American Psychologist.

McKenzie, C. R. M. (1994). The accuracy of intuitive judgment strategies: Covariation assessment and Bayesian inference. Cognitive Psychology 26: 209-239.
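
Nisbett, R. E. & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.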

Oaksford, M. & Chater, N. (1994). A rational analysis of the selection task as optimal data selection. Psychological Review 101: 608-631.

Piattelli-Palmarini, M. (1994). Inevitable illusions: How mistakes of reason rule our minds. New York: Wiley.

Ross, L. & Anderson, C. (1982). Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments. In D. Kahneman, P. Slovic & A. Tversky (Eds.) Judgment under uncertainty: Heuristics and biases (pp. 129-152). Cambridge University Press.

Ross, L., Greene, D. & House, P. (1977). The "false consensus effect": An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology 13: 279-301.

Ross, L. & Nisbett R. E. (1991). The person and the situation. New York: McGraw-Hill.

Stanovich, K. E. & West, R. F. (1998). Individual differences in rational thought. Journal of Experimental Psychology: General 127: 161-188.

Tversky, A. & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology 5: 207-232.

Ward, A. (2000). Why the bias to study biases? PSYCOLOQUY 11(123). ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.123.social-bias.22.ward http://www.cogsci.soton.ac.uk/psyc-bin/newpsy?11.123

