Andrew Ward (2000) Why the Bias to Study Biases? Psycoloquy: 11(009) Social Bias (22)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(009): Why the Bias to Study Biases?

WHY THE BIAS TO STUDY BIASES?
Commentary on Krueger on Social-Bias

Andrew Ward
Department of Psychology
Swarthmore College
500 College Ave.
Swarthmore, PA 19081

award1@swarthmore.edu

Abstract

I agree with Krueger (1998) that social psychologists place disproportionate emphasis on errors and biases in social perception, often neglecting instances in which lay perceivers offer appropriate and reasonable responses. Yet even if Krueger is correct in asserting that such errors are rarer than portrayed in the social psychological literature, there are still valid reasons for studying them.

Keywords

Bayes' rule, bias, hypothesis testing, individual differences, probability, rationality, significance testing, social cognition, statistical inference
1. Several months ago I passed a house that wasn't there. At least I assumed it had been a house -- now it was a vacant lot between two other homes that had obviously stood for many decades. I had passed those houses dozens of times but never noticed them or the one that (apparently) was no longer standing between them. The only reason I noticed them this time was because, in a sense, the usual perceptual pattern had been disrupted: Against a "ground" of homes a "figure" of a vacant lot stood out and drew my attention both to itself and the surrounding environs, and it caused me to pause and consider what had been an otherwise unremarkable piece of landscape.

2. Why do we social psychologists expend so much effort studying errors, biases, and flaws in social perception? In short, why are we so biased toward studying biases? To be sure, as McCauley (1999) has argued, one reason stems from a perhaps overly idealistic and even naive conviction that by studying what's "wrong" with everyday social judgments, we can make progress toward alleviating human suffering. Just delineate the psychological processes that contribute to conflict and misunderstanding, to take one example where psychologists' optimism may be unjustified, and individuals will come to appreciate how their own misperceptions and fallacious assumptions produce a picture of others that is too extreme, too uncharitable, too likely to produce needless animosity. A noble goal -- and one in which we can claim at least some success (e.g., Kelman & Cohen, 1986). But when it comes to disputes, to continue the example (and McCauley's point), psychologists have much to be modest about. As interdependent beings, humans are bound to experience conflict; perhaps the best we can hope for is that, with the right sort of scholarly wisdom and practical intervention, such conflict can, at the very least, be more constructive, productive, and, most importantly, nonviolent than it would be were we psychologists not to intervene.

3. Why else do we study human errors and biases? Admittedly, they are inherently interesting topics of investigation, perhaps especially because they are NOT the norm. "If it bleeds, it leads" is undoubtedly the dictum of many newsgathering organizations. But the very fact that murders and robberies are still considered "news" (that is, "novel") suggests that they are not common, that they have the capacity to grab our attention because they represent something that does not normally occur. "Our top story: 33,332 airplanes took off and landed in the U.S. today without incident" is not considered news; that a single commercial airliner had to turn back and make an unscheduled landing because of a suspicious smell in the cabin is. Both stories hold implications for our safety when we fly -- only one is considered newsworthy. And perhaps so too with errors in social perception: That most of the time perceivers make accurate assessments and judgments is not considered "news" (read "worthy of research attention") -- though, to be fair, surely errors are more prevalent than plane crashes (and it should be acknowledged that, in addition to novelty, negativity itself may be especially attention-grabbing [Pratto & John, 1991]).

4. But there is another reason to study what is wrong with social perception. Like the missing house, when we study what is wrong against a sea of what is considered to be "right," we learn about both: We learn how social perception usually works and we learn how it is fallible. Could we learn as much by focusing on the "right"? Perhaps, especially if some force acts to draw our collective research efforts to the appropriate "positive" stimulus and provides a compelling rationale for how documenting human strengths and wisdom can benefit society (e.g., Seligman & Csikszentmihalyi, 2000). But it may be hard to beat novelty when it comes to getting researchers' (and readers') attention, unless, of course, one could claim that studying the positive has now become a novel act -- defensible perhaps in the case of some investigators -- somewhat less tenable, I fear, when it comes to the intuitions of the average consumer of psychological research.

5. Of course "right" and "wrong" and "positive" and "negative" are tricky terms, and ripe for debate. "False consensus," to address a specific example cited by Krueger (1998), is another one -- especially because it's a misnomer. According to the original definition provided by Ross, Greene, and House (1977) and elaborated by Ross and Anderson (1982), there is nothing inherently "false" or incorrect in using one's own choice or judgment to estimate the choices or judgments of others. Indeed, as Krueger (1998) suggests, most of the time, relying on such a strategy will prove fairly successful for most people. After all, if everyone uses his or her own choice to, for example, estimate whether fellow study participants will agree to walk around a college campus wearing an unwieldy sandwichboard sign advertising a restaurant, then most people will be "right." That is, if 70% agree to the request and therefore assume that most others will as well, they will provide a more accurate estimate of what most people actually do than will the 30% who refuse and accordingly assume that most others will also refuse. It helps to be in the majority when you are asked what the majority will do. Indeed, at the limit, I as a perceiver will be perfectly accurate: If everyone shares my view, then estimating the responses of others based on my own response is not only appropriate, it's infallible (L. Ross, personal communication).
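
To make the arithmetic concrete, here is a minimal simulation sketch (in Python; the 70/30 split and sample size are illustrative assumptions, not data from any study) of the point that perceivers who project their own choice onto the majority are correct exactly when they happen to be in the majority:

    import random

    def projection_accuracy(p_agree=0.70, n=10_000, seed=1):
        """Fraction of respondents whose projected guess about the majority is correct.

        Each respondent makes a binary choice (agree/refuse) and "projects,"
        predicting that most others will choose as he or she did.  A projecting
        respondent is right exactly when his or her own choice matches the
        actual majority choice.
        """
        random.seed(seed)
        choices = [random.random() < p_agree for _ in range(n)]
        majority_agrees = sum(choices) > n / 2
        correct = sum(1 for c in choices if c == majority_agrees)
        return correct / n

    # With a 70/30 split, roughly 70% of projecting perceivers are "right":
    # the agreeing majority estimates the consensus better than the refusing minority.
    print(f"Accurate projectors: {projection_accuracy():.2f}")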

6. But notice two points: (1) Ross et al. (1977) were careful NOT to invoke "accuracy" in their definition (hence my contention that "false" consensus is somewhat misleading, for it implies that there is a "true" consensus that folks are somehow failing to appreciate). They simply claimed that, in many (but certainly not all) domains, perceivers who make a certain choice will provide a higher estimate of the number of others who make that same choice than will folks who make a different (e.g., opposite) choice; (2) Although such a strategy for generating estimates will generally be successful in the aggregate (i.e., across perceivers; J. A. L. Smith, personal communication), any individual perceiver can be led far astray by relying solely or almost exclusively on his or her own choices and judgments when called upon to predict how others will respond. For example, in the sandwichboard sign case, my estimate is likely to depend on what events I think will befall anyone who is so "generous" (or "foolish") as to agree to the request (Ross, 1990). Surely sometimes I will imagine different events (or even the "same" event differently [Asch, 1940]) than will most others (of course, it's also critical to consider what group of respondents I'm being asked to make judgments about). If, for example, I assume most respondents will construe the task as a harmless favor to an experimenter, but in fact most people see it as a humiliating ordeal, then I am probably more likely than most others to "incorrectly" believe that people will generally agree to the request and shoulder the burdensome sign.

7. Sometimes, then, we are "wrong" in our judgments -- not only in the case of false consensus but in many domains involving social perception. That is, our responses differ from those provided by most others, or we depart from some "accepted" notion of rationality or reason (which, too, at some level, relies on social consensus), or, importantly, we diverge from what we ourselves would (usually some time later) acknowledge to be the most acceptable and appropriate response. Indeed, perhaps more researchers should attempt to employ this last criterion: Would participants themselves admit they have made an error? (self-enhancing motivations and biases notwithstanding). But the question becomes, How wrong is wrong? How far does one have to depart from some "acceptable" response before one is legitimately accused of making an error? As Krueger (1998) points out, the typical answer from psychologists has enjoyed many incarnations, and may again be in need of revision.

8. In my own collaborative work, we've tried to answer the question in a number of different ways, and I agree with Krueger that a multi-pronged approach may generally be what's called for. Admittedly, some of the answers my colleagues and I have provided are almost certainly less compelling than others. Sometimes we rely on statistics: The magnitude of "reactive devaluation" effects, for example, where the recipient of a settlement offer in a negotiation is likely to devalue that offer in favor of an alternative proposal (even though the offer would almost certainly have been acceptable if it had instead represented the unavailable alternative rather than the offered proposal), routinely yields an F statistic greater than 10 (with relatively modest sample sizes, yielding an effect size on the order of d = .80), and sometimes greater than 70 (d = 1.74; Lepper, Ross, Ward, & Tsai, 2000). Sometimes the "numbers themselves" tell the story. In my work on naive realism and so-called "false polarization" effects, to take another example, not a single politically conservative Stanford student in our sample responded to a racial incident with the degree of ideological extremity that was predicted to be the response of the "average conservative Stanford student" -- all the more striking when one considers that the predictions were provided by the sample themselves (i.e., conservatives at Stanford; Robinson, Keltner, Ward, & Ross, 1995; see also Ross & Ward, 1996). But most studies (at least most of my studies) do not provide such "clear-cut" indications of accuracy and inaccuracy. The effects are not that large (even given, as Krueger [1998] might argue, a null hypothesis of questionable validity), or the sample is too small, or (most of the time) both.
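
For readers who wish to check the relation between those F statistics and the reported effect sizes, the sketch below uses the standard two-group conversion d = 2 * sqrt(F / N). The total sample sizes in the example are hypothetical values chosen only to illustrate the conversion; they are not the cell sizes from Lepper et al. (2000).

    import math

    def f_to_d(f_stat, n_total):
        """Approximate Cohen's d from a one-way, two-group F statistic.

        For two independent groups, F = t**2 and d is approximately
        2 * t / sqrt(N), so d is approximately 2 * sqrt(F / N),
        where N is the total sample size.
        """
        return 2 * math.sqrt(f_stat / n_total)

    # Hypothetical total Ns, chosen only to show how F maps onto d.
    print(round(f_to_d(10, 62), 2))   # ~0.80
    print(round(f_to_d(70, 92), 2))   # ~1.74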

9. In those cases, and indeed, in cases where an "error" is supposedly documented by only one or a few studies, we need to resist the ever-present pressure to generalize, to make too much from too little. But the "we" in that sentence must be readers as much as (or more than) researchers. One principal benefit of Krueger's argument (and suggestions for improvement to analytical strategies that currently rely on null hypothesis statistical testing) may well be its capacity to caution readers and students, not just investigators, against assuming that we as psychologists have definitively demonstrated that humans are hopelessly flawed creatures, incapable of making even the simplest social judgment or decision with a level of accuracy surpassing that of a coin toss.

10. And so we return to our somewhat-maligned studies of so-called errors and biases in social perception. Aside from their inherent interest (i.e., their "newsworthiness"), what do findings from these studies tell us about our own abilities? If errors are not the norm (and at least sometimes, they are not), why should we pay attention to them? Again, I think because ultimately they tell us at least as much about what we do right as what we do wrong. In the case of false consensus, the findings surely indicate that oftentimes, when we as social perceivers try to imagine the choices or judgments of others, we look to ourselves first. "What would I do? Well, then, that's what most people will do." And most of the time, that's fine -- especially if our subsequent social interactions occur within a sphere largely limited to individuals who are like us -- perhaps not so fine when we try to bridge cultural, ethnic, racial, or national divides, in which case differences that we were previously unaware of may become salient and perhaps even overemphasized (Ross & Ward, 1995).

11. And maybe that's the point: The documentation of a (putative) error or bias is not so useful in telling us that we usually get things wrong; rather, it tells us that we CAN get them wrong (cf. Mook, 1983). And if the consequences of such an error are sufficiently pernicious, then even if a negative outcome is rare, we might be well served to attempt to redress it in some way. Most Firestone tires have not (as of this writing) been linked to fatal auto accidents, most Tylenol capsules have not been laced with poison, most nuclear power plants have not leaked radiation, but the fact that these rare events do happen is enough to focus our attention on preventing them. In terms of errors in social perception, fatal consequences are not the norm, though surely some biases, even if rare (and even if some would dispute their status as "errors"), sometimes lead to deadly outcomes (e.g., judgment and decision-making biases that lead companies to eschew safety recalls; or cognitive biases that cause ethnic groups to despise each other; or perceptual shortcomings that result in national leaders successfully fomenting genocide). But even if a negative outcome does not typically follow from an admittedly ephemeral bias, if by documenting an error we learn more about how we normally make decisions or judgments -- and learn it in a way that we would not have, had we not been prompted by the "novel" error to examine the typical processes involved in social perception and judgment -- then surely the value of such a research approach is affirmed.

12. More could be said, but I am constrained by space limitations. Besides, there's now a new house on that formerly vacant lot and I want to check it out while it's still new. Otherwise, I'm liable to miss it.

    POSTSCRIPT: After attending a recent open house held by a realtor,
    I learned that my assumption was wrong: The new house did not
    replace an old one -- it replaced a secluded ornamental pond that
    until recently had been the property of the next-door neighbors. I
    would submit, however, that here too is a case where a (hopefully)
    rare error nevertheless reveals something about normally
    unremarkable everyday judgments (such as my previously unquestioned
    assumption that every piece of available property in my town must
    include a house).

REFERENCES

Asch, S. E. (1940). Studies in the principles of judgments and attitudes: II. Determination of judgments by group and by ego standards. Journal of Social Psychology, 12, 433-465.

Kelman, H. C., & Cohen, S. P. (1986). Resolution of international conflict: An interactional approach. In S. Worchel & W. G. Austin (Eds.), Psychology of intergroup relations (pp. 323-342). Chicago: Nelson-Hall.

Krueger, J. (1998). The Bet On Bias: A Foregone Conclusion? PSYCOLOQUY 9(46) Fri Oct 2 1998 http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?9.46 ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.46.social-bias.krueger

Lepper, M., Ross, L., Ward, A., & Tsai, J. (2000). The Grass Is Always Greener: "Reactive Devaluation" of Proffered Concessions. Manuscript in preparation.

McCauley, C. R. (1999). The bet on bias is cockeyed optimism. Commentary on Krueger on Social-Bias. PSYCOLOQUY 9(71) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?9.71 ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.71.social-bias.9.krueger

Mook, D. G. (1983). In defense of external invalidity. American Psychologist, 38, 379-387.

Pratto, F., & John, O. P. (1991). Automatic vigilance: The attention-grabbing power of negative social information. Journal of Personality and Social Psychology, 61, 380-391.

Robinson, R. J., Keltner, D., Ward, A., & Ross, L. (1995). Actual versus assumed differences in construal: "Naive realism" in intergroup perception and conflict. Journal of Personality and Social Psychology, 68, 404-417.

Ross, L., & Anderson, C. A. (1982). Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 129-152). New York: Cambridge University Press.

Ross, L., Greene, D., & House, P. (1977). The "false consensus effect": An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13, 279-301.

Ross, L., & Ward, A. (1995). Psychological barriers to dispute resolution. In M. Zanna (Ed.), Advances in Experimental Social Psychology, Volume 27 (pp. 255-304). San Diego: Academic Press.

Ross, L., & Ward, A. (1996). Naive realism: Implications for social conflict and misunderstanding. In T. Brown, E. Reed, & E. Turiel (Eds.), Values and Knowledge (pp. 103-135). Hillsdale, NJ: Lawrence Erlbaum Associates.

Seligman, M. E. P. & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55, 5-14.

psyc.01.12.009.social-bias.22.krueger Thu Apr 12 2001 ISSN 1055-0143 (8 paras, 27 refs, 1 note, 318 lines) PSYCOLOQUY is sponsored by the American Psychological Association (APA)

        Copyright 2001 Joachim Krueger.

        SOCIAL BIAS ENGULFS THE FIELD
        Reply to Ward on Krueger on Social-Bias

        Joachim Krueger
        Department of Psychology
        Brown University, Box 1853
        Providence, RI 02912
        JoachimKrueger@Brown.edu
        http://www.brown.edu/Departments/Psychology/faculty/krueger.hml

    ABSTRACT: Ward (2000) justifies contemporary research on
    social-perceptual biases by suggesting that biases are rare and
    that they, because of their rarity, reveal the properties of the
    social-perceptual apparatus. I take this argument to mean that
    social biases are analogous to visual illusions: odd but
    informative. Sometimes, this analogy works, but as a general
    theoretical platform, it is inadequate. I address this epistemic
    disagreement by disputing three of Ward's specific claims.
    Pragmatically, however, I agree with Ward that some biases
    demand attention because they yield large effects and undesirable
    social consequences.

I. EPISTEMIC DISAGREEMENTS

1. The errors-and-biases approach to social cognition has constructed a contemporary psychopathology of everyday life. In my target article, I argued that current research practices virtually guarantee the detection of social-perceptual biases because norms for rational (i.e., unbiased) responding are narrowly identified with null hypotheses, with ranges of bias lying on either side of the point of no difference (Krueger 1998a). I agree with Ward (2000) on the power of some of these biases to attract attention and to stimulate the imagination, but I disagree on how much can be learned from this attention-grabbing property. Whereas Ward hopes that this property is an honest cue towards scientific merit, I suspect that it tells us more about the researchers' preconceptions than about social perception. Ward presents three specific arguments in defense of bias research. It is my impression that these arguments are widely shared presumptions in the research community. Therefore, I will address them in some detail.

2. The first argument is that a bias is a figure set against a ground of accurate and adaptive judgment. Biases are "inherently interesting topics of investigation, perhaps especially because they are NOT the norm" (Ward 2000, paragraph 3). The term "norm" usually refers to the prescriptive standard against which human performance is evaluated. Here, however, Ward suggests that biases are also non-normative in the sense of being rare. This is a surprising claim because the very success of the errors-and-biases paradigm has led to the view that errors are endemic and ubiquitous (see Piattelli-Palmarini 1994 for a particularly zealous exposition). As I argued in the target article, the use of NHST (Null Hypothesis Significance Testing) has been part and parcel of this development. NHST produces cumulative evidence of errors, whereas rational judgment is a non-finding (p > .05) and thus remains unaggregated (Krueger in press). By lumping individual differences with random error, the errors-and-biases paradigm suggests that everyone is (systematically) biased. The suggestion that biases are rare is contradicted by the archive of biases erected under the paradigm itself. To believe that biases are rare is to believe that the studies conducted are risky (i.e., that p(H0) is high), when in fact, given adequate statistical power, they are safe.
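
The "safe bet" point can be illustrated with a small simulation (an illustrative sketch only, not an analysis from the target article or this reply; it assumes normally distributed judgment errors with a trivially small true bias): as the sample grows, a test against the point null of zero bias is all but guaranteed to reject, so even negligible departures enter the archive as "biases."

    import random
    import statistics
    from statistics import NormalDist

    def test_point_null(true_bias=0.05, sd=1.0, n=5_000, seed=1):
        """Two-sided test of H0: mean judgment error = 0 (normal approximation).

        The null is false only by a trivial amount, yet with enough
        observations its rejection becomes nearly certain.
        """
        random.seed(seed)
        data = [random.gauss(true_bias, sd) for _ in range(n)]
        mean = statistics.fmean(data)
        se = statistics.stdev(data) / n ** 0.5
        z = mean / se
        p = 2 * (1 - NormalDist().cdf(abs(z)))
        return mean, z, p

    for n in (50, 500, 5_000, 50_000):
        mean, z, p = test_point_null(n=n)
        print(f"n={n:>6}  mean error={mean:+.3f}  z={z:+.2f}  p={p:.4f}")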

3. The second argument is that a bias, when detected, enables us to "learn about both: We learn how social perception usually works and we learn how it is fallible" (Ward 2000, paragraph 4). Ward refers again to the 'novelty' of biases, but he does not explain how the conjunction of novelty and norm violation helps us learn about the nature of both biased and unbiased judgment. Others have postulated the revealing power of errors more explicitly. One version of this argument is that social-perceptual biases are "cognitive illusions" analogous to visual illusions (Kahneman & Tversky 1996). Visual illusions are both rare and revealing. They emerge when clever displays trick the visual system into disclosing the secrets of its everyday success (Gregory 1991). In contrast, there is no simple way in which judgmental biases reveal the effective functioning of human inference under ordinary circumstances (Funder 1987; Krueger 1998b). How does the evidence for the fundamental attribution error, for example, reveal that most inferences are accurate? When insufficient adjustment for situational causes is cast as the finding of interest, the magnitude of the adjustment that did occur is overlooked or considered trivial. A bias toward dispositional inferences might be acceptable if indeed most actions were caused by the person rather than the situation. However, the same research tradition that documents the attribution bias also insists that social behavior is overwhelmingly controlled by the situation (Ross & Nisbett 1991). To learn more about how social perception usually works, it seems necessary to also measure its inferential successes, specifically those that are realized with minimum effort (Gigerenzer & Goldstein 1996). Then, some biases can be understood as overgeneralized ways of thinking that usually work well (McKenzie 1994). [1]

4. The third argument addresses the difficulties researchers have had finding consensus on norms for rational judgment. In the target article, I provided examples of such disagreements for the three exemplary biases in consensus estimation, self-perception, and attribution (Krueger 1998a). Others have addressed the normative question in, for example, the areas of confidence judgments (Dawes & Mulford 1996; Erev, Wallsten & Budescu 1994) and hypothesis testing (Oaksford & Chater 1994). To get past these disagreements, Ward suggests that criteria for bias might incorporate the research participants' own perspectives on rationality. "Perhaps researchers should [ask whether] participants themselves admit they have made an error" (Ward 2000, paragraph 7). Sometimes, this approach yields interesting results. Baron and Hershey (1988), for example, found that many participants both showed an outcome bias and realized that they should not have done so. Their evaluations of the quality of a decision depended in part on its consequences, which participants agreed should be ignored because the decision-maker did not know them at the moment of choice. Yet, had participants not meta-cognitively realized the irrelevance of the outcomes, their evaluations of the decisions would still have departed from the normative model. In some judgment domains, a separation of bias and knowledge of bias may not be feasible at all. Try, for example, to demonstrate an overconfidence bias using the criterion that participants must know that their own confidence levels are exaggerated. It is equally hard to imagine how participants in a Wason study could know that they can test the rule 'if p, then q' by turning over the -q card, but then turn over the q card instead. Often, participants passionately defend non-normative judgments, as I had occasion to observe when trying to persuade students of the irrationality of honoring sunk costs. In other cases, participants accept the normative model, but see no reason to apply it to themselves. In self-enhancement research, for example, individuals may agree with the normative rule that only half of them can be better than average, and yet, most of them can maintain that they are among that half. What is true in the aggregate need not be true for the individual. In still other cases, participants do not even know that they are doing what some investigators consider to be irrational. Social projection (i.e., false consensus) appears to occur without much insight (Krueger 1998c). One student managed to project and deny projection in the same breath: "I, like most people, do not generalize from myself to others" (Clement, Sinha & Krueger 1997, p. 134).
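
As an aside on the Wason example, the card logic itself can be stated mechanically. The short sketch below is only a logic check, not a model of how participants reason: it enumerates which visible card faces could ever falsify the rule 'if p, then q'. Only the p card and the -q card can do so; the commonly chosen q card never can.

    def falsifying_cards():
        """Which visible card faces can falsify 'if p, then q'?

        A card is informative only if SOME hidden value on its other
        side would make the rule (not p) or q come out false.
        """
        rule = lambda p, q: (not p) or q
        faces = {"p": ("p", True), "-p": ("p", False),
                 "q": ("q", True), "-q": ("q", False)}
        result = {}
        for name, (side, value) in faces.items():
            outcomes = []
            for hidden in (True, False):
                p, q = (value, hidden) if side == "p" else (hidden, value)
                outcomes.append(rule(p, q))
            result[name] = not all(outcomes)  # some hidden value falsifies the rule
        return result

    # Expected output: {'p': True, '-p': False, 'q': False, '-q': True}
    print(falsifying_cards())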

II. PRAGMATIC AGREEMENTS

5. Ward suggests that "sometimes the numbers themselves tell the story" (Ward 2000, paragraph 8). I agree that some effect sizes are so huge that NHST does not play a critical part in the evaluation of the evidence. Ward's studies on the reactive devaluation of negotiated settlements and attitudinal contrast effects are good examples. The normative principle violated by reactive devaluation appears to be coherence. If people reject whichever alternative they are offered but accept an alternative as soon as it becomes unavailable, the impediments to conflict resolution can indeed be serious. Judgmental coherence is a fundamental (and minimal) property of rational choice (Dawes 1998; Krueger 2000). Preference reversals, framing effects, and violations of transitivity are well-known examples of incoherence. Ironically, the search for consistency (or coherence) was a central topic in social perception research before the cognitive revolution. But even then the dim view of the social perceiver was common (Heider 1958 dissenting). According to the theory of cognitive dissonance, for example, people are motivated to establish consonance among their beliefs even if they can reach consonance only by irrational means (e.g., the denial of a prior attitude; Festinger 1957).

6. As Ward suggests, it is important to study the consequences of various social-perceptual judgment patterns. To be sure, sometimes what we call bias is associated with poor consequences and can "lead to deadly outcomes" (Ward 2000, paragraph 11). Again, however, these consequences cannot necessarily serve as criteria for whether the judgment was poor. Such an inference itself could be a case of outcome bias. Each of the three exemplary biases (consensus, enhancement, attribution) has been shown to yield both desirable and undesirable consequences. Therefore, it is essential to study individual differences in judgment and the conditions under which consequences vary (Stanovich & West 1998).

POSTSCRIPT ONE

7. I am not persuaded by the suggestion that the fathers of the false consensus effect never meant to imply that projection is erroneous (Ward 2000, paragraphs 5 and 6). They conceptualized this bias without reference to actual consensus and thus left the departures of consensus estimates from that reality benchmark unexamined (Ross, Greene & House 1977). Nevertheless, they referred to the difference between the consensus estimates provided by item endorsers and nonendorsers as "distortions" and "errors" (pp. 298-299). Ross and Anderson (1982) reiterated this view verbatim (pp. 143-144), and Nisbett and Ross (1980) explicitly equated consensus bias with inaccuracy. "People presume that a larger fraction of others behave as they themselves behave and hold opinions that they themselves hold, than is actually the case" (p. 76, emphasis added). Remaining convinced that consensus bias had to be false, Ross and Nisbett (1991) concluded a decade later that "people fail to recognize the degree to which their interpretations of the situation are just that -- constructions and inferences rather than faithful reflections of some objective and invariant reality" (p. 85).

POSTSCRIPT TWO

8. Many studies on social-perceptual biases are flawed in that they set up rational judgment as a strawman hypothesis. The question of environmental determinism versus human agency is an instructive case for comparison. Successful studies demonstrate significant effects of experimentally manipulated environmental stimuli. Such studies extend the reach of deterministic external causes of human behavior ever further, chipping away at what we already know is not demonstrable. With the success of this research paradigm (see Bargh & Chartrand 1999 for an excellent example), the range of the unexplained is condemned to perpetual shrinkage. Because that range confounds the uninteresting (random variation) with the metaphysical ("Free Will"), it remains scientifically intractable. In contrast, I hope that the contributions to the thread on social bias have shown that rational thought can be demonstrated with appropriate methods. Rational thought need not be what is left over when all irrationalities have been revealed.

NOTE

[1] Arkes and Ayton (1999), for example, attributed the failure to ignore sunk costs in decision making to the overgeneralization of the reasonable injunction against wastefulness. Similarly, Baron and Hershey (1988) emphasized that outcome bias may arise, in part, from people's knowledge that good decisions typically yield good results. The founders of the heuristics and biases paradigm themselves acknowledged that heuristic inferences are often correct (Tversky & Kahneman 1973). Frequency estimates by availability, for example, are correct inasmuch as actual observed frequencies are associated with stronger memory traces (and they are).

REFERENCES

Arkes, H. R. & Ayton, P. (1999). The sunk cost and Concorde effects: Are humans less rational than lower animals? Psychological Bulletin 125: 591-600.

Bargh, J. A. & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist 54: 462-479.

Baron, J. & Hershey, J. C. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology 54: 569-579.

Clement, R. W., Sinha, R. R. & Krueger, J. (1997). A computerized demonstration of the false consensus effect. Teaching of Psychology 24: 131-135.

Dawes, R. M. (1998). Behavioral decision making. In D. T. Gilbert, S. T. Fiske & G. Lindzey (Eds.) Handbook of social psychology (4th ed., Vol. 1, pp. 497-548). Boston: McGraw-Hill.

Dawes, R. M. & Mulford, M. (1996). The false consensus effect and overconfidence: Flaws in judgment or flaws in how we study judgment? Organizational Behavior and Human Decision Processes 65: 201-211.

Erev, I., Wallsten, T. S. & Budescu, D. V. (1994). Simultaneous over- and underconfidence: The role of error in judgment processes. Psychological Review 101: 519-527.

Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.

Gigerenzer, G. & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review 103: 650-669.

Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgment. Psychological Bulletin 101: 75-90.

Gregory, R. L. (1991). Putting illusions in their place. Perception 20: 14.

Heider, F. (1958). The psychology of interpersonal relations. Hillsdale: Erlbaum.

Kahneman, D. & Tversky, A. (1996). On the reality of cognitive illusions. Psychological Review 103: 582-591.

Krueger, J. (1998a). The bet on bias: A foregone conclusion? PSYCOLOQUY 9(46) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/ psyc.98.9.46.social-bias.1.krueger http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?9.46

Krueger, J. (1998b). Enhancement bias in the description of self and others. Personality and Social Psychology Bulletin 24: 505-516.

Krueger, J. (1998c). On the perception of social consensus. Advances in Experimental Social Psychology 30: 163-240.

Krueger, J. (2000). Distributive judgments under uncertainty: Paccioli's game revisited. Journal of Experimental Psychology: General 129 (4).

Krueger, J. (in press). Null hypothesis significance testing: On the survival of a flawed method. American Psychologist.

McKenzie, C. R. M. (1994). The accuracy of intuitive judgment strategies: Covariation assessment and Bayesian inference. Cognitive Psychology 26: 209-239.

Nisbett, R. E. & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.

Oaksford, M. & Chater, N. (1994). A rational analysis of the selection task as optimal data selection. Psychological Review 101: 608-631.

Piattelli-Palmarini, M. (1994). Inevitable illusions: How mistakes of reason rule our minds. New York: Wiley.

Ross, L. & Anderson, C. (1982). Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments. In D. Kahneman, P. Slovic & A. Tversky (Eds.) Judgment under uncertainty: Heuristics and biases (pp. 129-152). Cambridge University Press.

Ross, L., Greene, D. & House, P. (1977). The "false consensus effect": An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology 13: 279-301.

Ross, L. & Nisbett R. E. (1991). The person and the situation. New York: McGraw-Hill.

Stanovich, K. E. & West, R. F. (1998). Individual differences in rational thought. Journal of Experimental Psychology: General 127: 161-188.

Tversky, A. & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology 5: 207-232.

Ward, A. (2000). Why the bias to study biases? PSYCOLOQUY 11(123). ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/ psyc.00.11.123.social-bias.22.ward http://www.cogsci.soton.ac.uk/psyc-bin/newpsy?11.123

