Andrew Ward (2000) Why the Bias to Study Biases? Psycoloquy: 11(123) Social Bias (22)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(123): Why the Bias to Study Biases?

WHY THE BIAS TO STUDY BIASES?
Commentary on Krueger on Social-Bias

Andrew Ward
Department of Psychology
Swarthmore College
500 College Ave.
Swarthmore, PA 19081

award1@swarthmore.edu

Abstract

I agree with Krueger (1998) that social psychologists place disproportionate emphasis on errors and biases in social perception, often neglecting instances in which lay perceivers offer appropriate and reasonable responses. Yet even if Krueger is correct in asserting that such errors are rarer than portrayed in the social psychological literature, there are still valid reasons for studying them.

Keywords

Bayes' rule, bias, hypothesis testing, individual differences, probability, rationality, significance testing, social cognition, statistical inference
1. Several months ago I passed a house that wasn't there. At least I assumed it had been a house -- now it was a vacant lot between two other homes that had obviously stood for many decades. I had passed those houses dozens of times but never noticed them or the one that (apparently) was no longer standing between them. The only reason I noticed them this time was that, in a sense, the usual perceptual pattern had been disrupted: Against a "ground" of homes, a "figure" of a vacant lot stood out, drawing my attention both to itself and to the surrounding environs and causing me to pause and consider what had otherwise been an unremarkable piece of landscape.

2. Why do we social psychologists expend so much effort studying errors, biases, and flaws in social perception? In short, why are we so biased toward studying biases? To be sure, as McCauley (1999) has argued, one reason stems from a perhaps overly idealistic, even naive, conviction that by studying what's "wrong" with everyday social judgments, we can make progress toward alleviating human suffering. Just delineate the psychological processes that contribute to conflict and misunderstanding, to take one example where psychologists' optimism may be unjustified, and individuals will come to appreciate how their own misperceptions and fallacious assumptions produce a picture of others that is too extreme, too uncharitable, too likely to produce needless animosity. A noble goal -- and one in which we can claim at least some success (e.g., Kelman & Cohen, 1986). But when it comes to disputes, to continue the example (and McCauley's point), psychologists have much to be modest about. As interdependent beings, humans are bound to experience conflict; perhaps the best we can hope for is that, with the right sort of scholarly wisdom and practical intervention, such conflict can at least be more constructive, more productive, and, most importantly, less violent than it would be were we psychologists not to intervene.

3. Why else do we study human errors and biases? Admittedly, they are inherently interesting topics of investigation, perhaps especially because they are NOT the norm. "If it bleeds, it leads" is undoubtedly the dictum of many newsgathering organizations. But the very fact that murders and robberies are still considered "news" (that is, "novel") suggests that they are not common, that they have the capacity to grab our attention because they represent something that does not normally occur. "Our top story: 33,332 airplanes took off and landed in the U.S. today without incident" is not considered news; that a single commercial airliner had to turn back and make an unscheduled landing because of a suspicious smell in the cabin is. Both stories hold implications for our safety when we fly -- only one is considered newsworthy. And perhaps so too with errors in social perception: That most of the time perceivers make accurate assessments and judgments is not considered "news" (read "worthy of research attention") -- though, to be fair, surely errors are more prevalent than plane crashes (and it should be acknowledged that, in addition to novelty, negativity itself may be especially attention-grabbing [Pratto & John, 1991]).

4. But there is another reason to study what is wrong with social perception. Like the missing house, when we study what is wrong against a sea of what is considered to be "right," we learn about both: We learn how social perception usually works and we learn how it is fallible. Could we learn as much by focusing on the "right"? Perhaps, especially if some force acts to draw our collective research efforts to the appropriate "positive" stimulus and provides a compelling rationale for how documenting human strengths and wisdom can benefit society (e.g., Seligman & Csikszentmihalyi, 2000). But it may be hard to beat novelty when it comes to getting researchers' (and readers') attention, unless, of course, one could claim that studying the positive has now become a novel act -- defensible perhaps in the case of some investigators -- somewhat less tenable, I fear, when it comes to the intuitions of the average consumer of psychological research.

5. Of course "right" and "wrong" and "positive" and "negative" are tricky terms, and ripe for debate. "False consensus," to address a specific example cited by Krueger (1998), is another one -- especially because it's a misnomer. According to the original definition provided by Ross, Greene, and House (1977) and elaborated by Ross and Anderson (1982), there is nothing inherently "false" or incorrect in using one's own choice or judgment to estimate the choices or judgments of others. Indeed, as Krueger (1998) suggests, most of the time, relying on such a strategy will prove fairly successful for most people. After all, if everyone uses his or her own choice to, for example, estimate whether fellow study participants will agree to walk around a college campus wearing an unwieldy sandwichboard sign advertising a restaurant, then most people will be "right." That is, if 70% agree to the request and therefore assume that most others will as well, they will provide a more accurate estimate of what most people actually do than will the 30% who refuse and accordingly assume that most others will also refuse. It helps to be in the majority when you are asked what the majority will do. Indeed, at the limit, I as a perceiver will be perfectly accurate: If everyone shares my view, then estimating the responses of others based on my own response is not only appropriate, it's infallible (L. Ross, personal communication).
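To make the arithmetic of the sandwichboard example concrete, the following minimal sketch (my illustration, not part of the studies cited; the specific "most people" estimates of 90% and 10% are stylized assumptions) works through the 70/30 case in Python:

    # Sketch of the projection arithmetic in the sandwichboard example.
    # The 70/30 split comes from the hypothetical case in the text; the
    # specific estimates an agreer or refuser gives are stylized guesses.

    AGREE_RATE = 0.70  # proportion who agree to wear the sign

    def projected_estimate(own_choice_agrees: bool) -> float:
        """Naive projection: assume most others respond the way I did."""
        return 0.9 if own_choice_agrees else 0.1

    # Is each group's projection directionally right about the majority?
    majority_agrees = AGREE_RATE > 0.5
    agreers_correct = (projected_estimate(True) > 0.5) == majority_agrees    # True
    refusers_correct = (projected_estimate(False) > 0.5) == majority_agrees  # False

    # Aggregate accuracy across perceivers: 70% project correctly, 30% do not.
    accuracy = AGREE_RATE * agreers_correct + (1 - AGREE_RATE) * refusers_correct
    print(f"Perceivers whose projection is directionally right: {accuracy:.0%}")
    # -> 70%. Projection serves the majority and fails the minority; at the
    #    limit (everyone shares my view), it is infallible.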

6. But notice two points: (1) Ross et al. (1977) were careful NOT to invoke "accuracy" in their definition (hence my contention that "false" consensus is somewhat misleading, for it implies that there is a "true" consensus that folks are somehow failing to appreciate). They simply claimed that, in many (but certainly not all) domains, perceivers who make a certain choice will provide a higher estimate of the number of others who make that same choice than will folks who make a different (e.g., opposite) choice; (2) Although such a strategy for generating estimates will generally be successful in the aggregate (i.e., across perceivers; J. A. L. Smith, personal communication), any individual perceiver can be led far astray by relying solely or almost exclusively on his or her own choices and judgments when called upon to predict how others will respond. For example, in the sandwichboard sign case, my estimate is likely to depend on what events I think will befall anyone who is so "generous" (or "foolish") as to agree to the request (Ross, 1990). Surely sometimes I will imagine different events (or even the "same" event differently [Asch, 1940]) than will most others (of course, it's also critical to consider what group of respondents I'm being asked to make judgments about). If, for example, I assume most respondents will construe the task as a harmless favor to an experimenter, but in fact most people see it as a humiliating ordeal, then I am probably more likely than most others to "incorrectly" believe that people will generally agree to the request and shoulder the burdensome sign.

7. Sometimes, then, we are "wrong" in our judgments -- not only in the case of false consensus but in many domains involving social perception. That is, our responses differ from those provided by most others, or we depart from some "accepted" notion of rationality or reason (which, too, at some level, relies on social consensus), or, importantly, we diverge from what we ourselves would (usually some time later) acknowledge to be the most acceptable and appropriate response. Indeed, perhaps more researchers should attempt to employ this last criterion: Would participants themselves admit they have made an error (self-enhancing motivations and biases notwithstanding)? But the question becomes, How wrong is wrong? How far does one have to depart from some "acceptable" response before one is legitimately accused of making an error? As Krueger (1998) points out, the typical answer from psychologists has enjoyed many incarnations, and may again be in need of revision.

8. In my own collaborative work, we've tried to answer the question in a number of different ways, and I agree with Krueger that a multi-pronged approach may generally be what's called for. Admittedly, some of the answers my colleagues and I have provided are almost certainly less compelling than others. Sometimes we rely on statistics: The magnitude of "reactive devaluation" effects, for example, where the recipient of a settlement offer in a negotiation is likely to devalue that offer in favor of an alternative proposal (even though the offer would almost certainly have been acceptable had it represented the unavailable alternative rather than the offered proposal), routinely yields an F statistic greater than 10 (with relatively modest sample sizes, yielding an effect size on the order of d = .80), and sometimes greater than 70 (d = 1.74; Lepper, Ross, Ward, & Tsai, 2000). Sometimes the "numbers themselves" tell the story. In my work on naive realism and so-called "false polarization" effects, to take another example, not a single politically conservative Stanford student in our sample responded to a racial incident with the degree of ideological extremity that was predicted to be the response of the "average conservative Stanford student" -- all the more striking when one considers that the predictions were provided by the sample themselves (i.e., conservatives at Stanford; Robinson, Keltner, Ward, & Ross, 1995; see also Ross & Ward, 1996). But most studies (at least most of my studies) do not provide such "clear-cut" indications of accuracy and inaccuracy. The effects are not that large (even given, as Krueger [1998] might argue, a null hypothesis of questionable validity), or the sample is too small, or (most of the time) both.
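As a check on how those F statistics and d values relate, one standard conversion for a one-degree-of-freedom, two-group comparison with equal group sizes is d = 2 * sqrt(F/N), where N is the total sample size. The sketch below is mine, not from the studies cited, and the sample sizes are guesses chosen only to reproduce the reported effect sizes:

    import math

    def f_to_d(F: float, N: int) -> float:
        """Cohen's d from a one-df, two-group F statistic (equal-n design).

        For two independent groups, t = sqrt(F) and d = 2 * t / sqrt(N),
        where N is the total sample size.
        """
        return 2 * math.sqrt(F) / math.sqrt(N)

    # Sample sizes here are hypothetical, picked to match the reported d's.
    print(round(f_to_d(10, 62), 2))  # ~0.80, the "F greater than 10" case
    print(round(f_to_d(70, 92), 2))  # ~1.74, the "F greater than 70" case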

9. In those cases, and indeed, in cases where only one or a few studies supposedly document an "error," we need to resist the ever-present pressure to generalize, to make too much from too little. But the "we" in that sentence must be readers as much as (or more than) researchers. One principal benefit of Krueger's argument (and suggestions for improving analytical strategies that currently rely on null hypothesis statistical testing) may well be its capacity to caution readers and students, not just investigators, against assuming that we as psychologists have definitively demonstrated that humans are hopelessly flawed creatures, incapable of making even the simplest social judgment or decision with a level of accuracy surpassing that of a coin toss.
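As a toy illustration of this caution (my own, not Krueger's analysis), consider that under null hypothesis statistical testing with a point null, even a trivial bias becomes "significant" once the sample is large enough; a rejected null therefore says little, by itself, about how flawed perceivers actually are. The bias and noise parameters below are assumptions:

    # Simulation: a tiny, fixed bias is "detected" ever more decisively as n
    # grows, even though perceivers are barely wrong at all.
    import math
    import random
    import statistics

    random.seed(0)
    TRUE_BIAS = 0.02  # perceivers overestimate consensus by 2 points on average
    NOISE_SD = 0.15   # spread of individual estimation errors (assumed)

    for n in (30, 300, 30_000):
        errors = [random.gauss(TRUE_BIAS, NOISE_SD) for _ in range(n)]
        mean = statistics.fmean(errors)
        se = statistics.stdev(errors) / math.sqrt(n)
        print(f"n={n:>6}: mean error={mean:+.3f}, t={mean / se:6.2f}")
    # The t statistic grows with sqrt(n) while the underlying bias stays tiny:
    # rejecting the null does not show that judgments are badly wrong.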

10. And so we return to our somewhat-maligned studies of so-called errors and biases in social perception. Aside from their inherent interest (i.e., their "newsworthiness"), what do findings from these studies tell us about our own abilities? If errors are not the norm (and at least sometimes, they are not), why should we pay attention to them? Again, I think because ultimately they tell us at least as much about what we do right as what we do wrong. In the case of false consensus, the findings surely indicate that oftentimes, when we as social perceivers try to imagine the choices or judgments of others, we look to ourselves first. "What would I do? Well, then, that's what most people will do." And most of the time, that's fine -- especially if our subsequent social interactions occur within a sphere largely limited to individuals who are like us -- perhaps not so fine when we try to bridge cultural, ethnic, racial, or national divides, in which case differences that we were previously unaware of may become salient and perhaps even overemphasized (Ross & Ward, 1995).

11. And maybe that's the point: The documentation of a (putative) error or bias is not so useful in telling us that we usually get things wrong; rather, it tells us that we CAN get them wrong (cf. Mook, 1983). And if the consequences of such an error are sufficiently pernicious, then even if a negative outcome is rare, we might be well served to attempt to redress it in some way. Most Firestone tires have not (as of this writing) been linked to fatal auto accidents, most Tylenol capsules have not been laced with poison, most nuclear power plants have not leaked radiation, but the fact that these rare events do happen is enough to focus our attention on preventing them. In terms of errors in social perception, fatal consequences are not the norm, though surely some biases, even if rare (and even if some would dispute their status as "errors"), sometimes lead to deadly outcomes (e.g., judgment and decision-making biases that lead companies to eschew safety recalls; or cognitive biases that cause ethnic groups to despise each other; or perceptual shortcomings that result in national leaders successfully fomenting genocide). But even if a negative outcome does not typically follow from an admittedly ephemeral bias, if by documenting an error we learn more about how we normally make decisions or judgments -- and learn it in a way that we would not have, had we not been prompted by the "novel" error to examine the typical processes involved in social perception and judgment -- then surely the value of such a research approach is affirmed.

12. More could be said, but I am constrained by space limitations. Besides, there's now a new house on that formerly vacant lot and I want to check it out while it's still new. Otherwise, I'm liable to miss it.

    POSTSCRIPT: After attending a recent open house held by a realtor,
    I learned that my assumption was wrong: The new house did not
    replace an old one -- it replaced a secluded ornamental pond that
    until recently had been the property of the next-door neighbors. I
    would submit, however, that here too is a case where a (hopefully)
    rare error nevertheless reveals something about normally
    unremarkable everyday judgments (such as my previously unquestioned
    assumption that every piece of available property in my town must
    include a house).

REFERENCES

Asch, S. E. (1940). Studies in the principles of judgments and attitudes: II. Determination of judgments by group and by ego standards. Journal of Social Psychology, 12, 433-465.

Kelman, H. C., & Cohen, S. P. (1986). Resolution of international conflict: An interactional approach. In S. Worchel & W. G. Austin (Eds.), Psychology of intergroup relations (pp. 323-342). Chicago: Nelson-Hall.

Krueger, J. (1998). The bet on bias: A foregone conclusion? PSYCOLOQUY 9(46) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?9.46 ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.46.social-bias.krueger

Lepper, M., Ross, L., Ward, A., & Tsai, J. (2000). The grass is always greener: "Reactive devaluation" of proffered concessions. Manuscript in preparation.

McCauley, C. R. (1999). The bet on bias is cockeyed optimism. Commentary on Krueger on Social-Bias. PSYCOLOQUY 9(71) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?9.71 ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.71.social-bias.9.krueger

Mook, D. G. (1983). In defense of external invalidity. American Psychologist, 38, 379-387.

Pratto, F., & John, O. P. (1991). Automatic vigilance: The attention-grabbing power of negative social information. Journal of Personality and Social Psychology, 61, 380-391.

Robinson, R. J., Keltner, D., Ward, A., & Ross, L. (1995). Actual versus assumed differences in construal: "Naive realism" in intergroup perception and conflict. Journal of Personality and Social Psychology, 68, 404-417.

Ross, L., & Anderson, C. A. (1982). Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 129-152). New York: Cambridge University Press.

Ross, L., Greene, D., & House, P. (1977). The false consensus effect: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13, 279-301.

Ross, L., & Ward, A. (1995). Psychological barriers to dispute resolution. In M. Zanna (Ed.), Advances in Experimental Social Psychology, Volume 27 (pp. 255-304). San Diego: Academic Press.

Ross, L., & Ward, A. (1996). Naive realism: Implications for social conflict and misunderstanding. In T. Brown, E. Reed, & E. Turiel (Eds.), Values and Knowledge (pp. 103-135). Hillsdale, NJ: Lawrence Erlbaum Associates.

Seligman, M. E. P., & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55, 5-14.

