
for the plaintiff often initially ask for unreasonably large dollar amounts in a legal settlement. Similar effects have been found with regard to the valuation of homes in real estate (Northcraft & Neale, 1987), estimates of how long Gandhi lived (F. Strack & Mussweiler, 1997), and guesses of the year in which George Washington was elected U.S. president (Epley & Gilovich, 2001).

      Anchoring and Adjustment Heuristic: Mental shortcut in which people base an initial estimate on readily available information and then adjust that estimate up or down to arrive at a final judgment

      Research Box 3.1

      Anchoring and Adjustment

      Hypothesis: Estimates of the value of goods would diverge more from the anchor when people are given imprecise or rounded anchors than when they are given more precise anchors.

      Research Method: Participants were randomly assigned to receive either imprecise or precise anchor values for a variety of consumer goods, including a plasma TV, a beverage, and a chunk of cheese. For example, some participants were given the anchor values $5,000, $10, and $5, respectively, whereas others were given $4,998, $9.80, and $4.85. Participants were asked to estimate the actual costs of these items.

      Results: Estimates in the imprecise condition diverged significantly more from their respective anchors than did estimates in the precise condition.

      Conclusion: People seem to have greater confidence in the validity of precise values than rounded values and consequently make smaller adjustments to them. This finding has real-world implications. For example, people who are negotiating the price of a home or the amount of a legal settlement may gain a more favorable outcome if they initially offer a specific price or settlement amount.

      Source: Janiszewski, C., & Uy, D. (2008). Precision of the anchor influences the amount of adjustment. Psychological Science, 19, 121–127.

      Think Again!

      1 What are heuristics and why do we rely on them?

      2 Give an explanation and your own example of the availability, representativeness, and anchoring and adjustment heuristics.

      Doing Research: Reliability and Validity

      In Chapter 1 we discussed how experiments can help determine whether one variable is causally related to another. Specifically, we saw how using multiple conditions (typically at least one experimental and one control) and random assignment to condition can enhance our confidence in the experimental results. Now let’s turn to a couple of other features of experiments—and other types of studies—that can similarly impact our confidence in social psychological research: reliability and validity.

      Say you jumped on the scale to weigh yourself one morning and the digital readout displayed 142 pounds, which is a bit more than you hoped for. So you step off and then on again and are surprised to see that now the scale shows 133, which is closer to, perhaps a little lower than, what you’d be happy with. But just to be sure, you reweigh yourself, and this time the scale reads 148! Confused, you try a few more times, with results of 146, 136, 142, 139, and 137 pounds. Okay, so what do I really weigh? you wonder. Using that scale and those results, it is impossible to answer that question with much confidence. Because the results of your measurement attempts fluctuate so widely, you cannot determine your true weight. In order to feel more certain that your measurement tool—your scale—is reasonably accurate, it first needs to be reliable.

      Reliability of a given measurement method is how consistently each measurement of the same phenomenon produces approximately the same result under the same conditions. In other words, a reliable measure should provide the same result across multiple measurement occasions of the same phenomenon. This of course assumes that the thing being measured has not changed between measurement attempts. If your scale gave you the same or close to the same weight—such as 138 pounds—each time you stepped on it, then it would be reliable.
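      To make this notion of consistency concrete, here is a minimal sketch in Python (not part of the original text) that summarizes the eight bathroom-scale readings from the example above. The 1-pound cutoff used to label the scale “reliable” is an arbitrary illustration, not a formal standard.

import statistics

# The eight readings from the bathroom-scale example (in pounds).
readings = [142, 133, 148, 146, 136, 142, 139, 137]

mean_reading = statistics.mean(readings)   # about 140.4 lb
spread = statistics.stdev(readings)        # how much the readings fluctuate

print(f"Mean reading: {mean_reading:.1f} lb")
print(f"Spread (standard deviation): {spread:.1f} lb")

# A reliable scale would show very little spread across repeated weighings of the
# same (unchanged) person; the 1-pound cutoff here is purely illustrative.
if spread < 1.0:
    print("Readings are consistent: the scale appears reliable.")
else:
    print("Readings fluctuate widely: the scale is unreliable.")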

      Note, however, that simply because the scale gives the same result each time, it does not necessarily mean that that weight is the correct one. That is, the scale might indicate that you weigh 142 pounds on four consecutive measurements, but your actual weight may be 138. The scale could be systematically providing results that are too high (or too low) every time it is used: Perhaps it is not properly calibrated, and the “0” is in fact “4” pounds. In addition to being reliable, then, the ideal measurement tool is also valid: It indicates your true weight. Validity is the extent to which a particular measurement tool provides accurate results. If you actually weigh 138, then a valid scale would report this.
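      Continuing the sketch above, a scale can be perfectly consistent and still wrong. The few lines below assume a hypothetical constant 4-pound miscalibration, so the readings are reliable (no spread) but not valid (systematically 4 pounds too high).

import statistics

true_weight = 138          # your actual weight in pounds
offset = 4                 # hypothetical miscalibration: the scale's "0" really sits at 4 lb

# Four consecutive readings from a reliable but miscalibrated scale.
readings = [true_weight + offset for _ in range(4)]      # [142, 142, 142, 142]

spread = statistics.stdev(readings)               # 0.0 -> perfectly consistent (reliable)
bias = statistics.mean(readings) - true_weight    # +4.0 -> systematically off (not valid)

print(f"Readings: {readings}")
print(f"Spread: {spread:.1f} lb (reliable); bias: {bias:+.1f} lb (not valid)")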

      Social psychologists seek to develop methods of measurement that are both reliable and valid. If we would like to measure attitudes toward, say, the environment, then we want to construct the scale so that it is reliable. For instance, if the wording of the questions were ambiguous—and could be interpreted in different ways—then the scale is unlikely to be reliable because it might give different results at different times. Say we want to assess a person’s need for cognition, or the extent to which she tends to enjoy and engage in careful thinking (Cacioppo & Petty, 1982; Cacioppo, Petty, & Kao, 1984). To do so we administer the 18-item Need for Cognition Scale (NCS), in which responses are recorded on a nine-point scale (–4 to +4). Let’s say that Gisele completes the scale with a mean response of 3.2, suggesting a high need for cognition. A couple of weeks later, Gisele completes the scale again, and this time the mean response is –1.4. With two widely divergent results, it is impossible to know what Gisele’s “true” score is, and as a result, the scale would be considered unreliable. Valid methods provide accurate or correct results—in the case of the NCS, the mean response for a given person would be fairly close to her “true” need for cognition.
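      As a minimal sketch of why Gisele’s two administrations are a problem, the Python lines below score two hypothetical sets of 18 NCS responses (each item rated on the nine-point –4 to +4 scale described above) and compare the means. The item values are invented for illustration, and the scoring is simplified (reverse-scored items are omitted here).

from statistics import mean

# Two hypothetical administrations of the 18-item Need for Cognition Scale,
# each item rated from -4 to +4 (values invented purely for illustration).
time_1 = [4, 3, 4, 2, 3, 4, 3, 2, 4, 3, 3, 4, 3, 2, 4, 3, 4, 3]                 # mean ~ 3.2
time_2 = [-2, -1, -3, 0, -2, -1, -2, -1, -2, 0, -1, -2, -1, -3, -1, 0, -2, -1]  # mean ~ -1.4

score_1 = mean(time_1)
score_2 = mean(time_2)

print(f"Time 1 mean: {score_1:.1f}")   # 3.2
print(f"Time 2 mean: {score_2:.1f}")   # -1.4

# With the same person and no real change between sessions, a test-retest gap
# this large suggests the measure, as administered here, is not reliable.
print(f"Test-retest gap: {abs(score_1 - score_2):.1f} scale points")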

      Social psychologists are most concerned with two types of validity. One type, called internal validity, refers to the extent to which we can be sure that the purported cause—the IV—is the only factor influencing the purported effect—the DV (Campbell & Stanley, 1963). If, as we described in Chapter 1, the researcher successfully controlled all extraneous variables and confounds and used random assignment, then the experiment can be said to have internal validity. In our earlier example of the effects of playing a violent video game on aggression, if we are confident that only the manipulation of the video game led to the different amounts of aggression in the subsequent game, then the study has good internal validity.

      The second type, external validity, indicates how well the results of the study can be generalized or applied to other settings and populations (Campbell & Stanley, 1963). For instance, a laboratory study is said to have external validity if the effects can be generalized to real-life situations. In the video game example, if real-world aggression increased as a result of playing violent games, then we can say that the study has external validity. In Chapter 8 on prosocial behavior, we’ll elaborate on external validity and generalizability.

      Reliability: How consistently each measurement of the same phenomenon using the same measurement tool produces approximately the same result under the same conditions

      Validity: Extent to which a particular measurement tool provides accurate results

      Internal Validity: Extent to which an experimenter can be sure that the purported cause—the IV—is the only factor influencing the purported effect—the DV

      External Validity: Extent to which the results of a study can be generalized or applied to other settings and populations (also called generalizability)

      Motivated Reasoning

      The errors that we have discussed so far are rooted in “cold” mental processes that reflect simple overreliance on shortcuts and misperceptions of reality. That is, the perceiver has no particular
