
of particular concern in countries such as the United States, where substantial immigration from different cultures has led to an increasingly diverse population. At the same time, increases in the numbers of women and of individuals from different cultures becoming mental health professionals have significantly changed the diversity of those offering health and mental health services.

      For some researchers, there is a dynamic tension between cultural considerations, which emphasize the individual client and his or her way of expressing and experiencing mental illness, and empirically based principles, which emphasize treating all clients in a consistent manner (Delvecchio Good & Hannah, 2015). That is, there is a tension between flexibility and consistency. Other researchers suggest this tension can be overcome by beginning with a particular cultural group and developing an intervention based on the cultural factors found in that group (Weisner & Hay, 2015).

      One alternative is to classify treatments in terms of culture (Evans, 2009). Transcultural concepts and treatments would be appropriate to individuals in all cultures. Multicultural concepts and treatments would be appropriate for individuals from groups that have similar worldviews, practices, and traditions. Culturally adapted and culture-specific concepts and treatments would be designed for individuals from a specific group. At this point, however, there has been limited research that fully integrates cultural factors with empirically supported approaches to treatment (Helms, 2015; V. H. Jackson, 2015).

      Thought Question: What are some particular benefits that each of these two approaches—empirically based principles and cultural competence—bring to psychological treatment? If you were a mental health professional, how would you bring the benefits of the two approaches to your clients?

      Reliability and Validity in Relation to Psychopathology

      Concerns about the accuracy of assessment and classification of psychopathology require us to consider two very different questions. The first has to do with the person who is being interviewed. We need to know whether the person is giving us accurate information. Sometimes, individuals will “fake bad” if there is some advantage to doing so, such as receiving a larger disability payout. Other times, individuals will “fake good” and deny that there are any problems.

      The second question is which assessment instruments to use. An assessment instrument can be an interview, an inventory, a mood scale, and so forth. In considering instruments, we think about measurement. Measurement considerations help to define the variety of instruments that we use and the theoretical variables that these reflect.

      Traditionally, the two key measurement issues are reliability and validity. That is, does an instrument measure the construct consistently (reliability) and accurately (validity)? The measurement of temperature, for example, is based on the kinetic theory of heat, which helped define the type of devices used. With psychopathology, however, we lack formal definitions that tell us exactly how to make measurements. In fact, we are simultaneously trying to learn about disorders and creating techniques for making diagnoses. This makes reliability and validity considerations both more difficult and more important.

      reliability: consistency of the measurement by an assessment instrument

      Reliability

      Reliability asks the question of whether the instrument is consistent. We would expect, for example, that the odometer in our car would reflect that we drove a mile each time we drove 5,280 feet. We would also expect our bathroom scale to show the same reading if our weight had not changed. Researchers interested in questions of measurement discuss a number of types of reliability:

       Internal reliability—Internal reliability assesses whether different questions on an instrument relate to one another. If we were seeking a general measure of depression, for example, we would want to use questions that relate to one another. Questions related to feeling sad, not having energy, and wanting to stay in bed would be expected to show internal reliability.

       Test–retest reliability—Test–retest reliability determines whether two measurement opportunities result in similar scores. A key consideration with test–retest reliability is the nature of the underlying construct. Constructs seen as stable, such as intelligence or hypnotizability, would be expected to show similar scores if the same instrument was given on more than one occasion. In psychopathological research, measures of long-term depression or trait anxiety would be expected to show a higher index of test–retest reliability than measures that reflect momentary feelings of mood.

       Alternate-form reliability—As the name implies, alternate-form reliability asks whether different forms of an instrument give similar results. If you were giving an IQ test, for example, you would not want to ask the same question each time, since the individual could learn the answers from taking the test. Thus, it would be important to create alternate forms that reflect the same underlying construct.

       Inter-rater reliability—Inter-rater reliability asks how similar two or more individuals are when they observe and rate specific behaviors. Psychopathology researchers often rate the emotional responses of children as they engage in various activities. An index of inter-rater reliability would measure how consistent different observers are in rating the same situation. Historically, one of the motivating factors for developing the DSM classification system was the discovery that clinicians in different locations who watched the same film of a person with a mental health disorder diagnosed it in different ways. (A brief computational sketch of these reliability indices appears after this list.)
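      To make these ideas concrete, the following sketch computes three commonly used reliability indices on small sets of invented scores: Cronbach's alpha for internal reliability, a Pearson correlation for test–retest reliability, and Cohen's kappa for inter-rater reliability. The data, scale names, and choice of indices are illustrative assumptions rather than material from the text; the code is a minimal Python example using only NumPy.

# A minimal sketch of three common reliability indices computed on
# hypothetical questionnaire data; all scores are invented for illustration.
import numpy as np

# Internal reliability: Cronbach's alpha for a 4-item depression scale
# (rows = respondents, columns = items scored 0-3).
items = np.array([
    [3, 2, 3, 2],
    [1, 1, 0, 1],
    [2, 2, 2, 3],
    [0, 1, 0, 0],
    [3, 3, 2, 3],
])
k = items.shape[1]
item_variances = items.var(axis=0, ddof=1).sum()
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
print(f"Cronbach's alpha (internal reliability): {alpha:.2f}")

# Test-retest reliability: Pearson correlation between the same trait-anxiety
# measure given to the same people two weeks apart.
time1 = np.array([14, 22, 9, 30, 18, 25])
time2 = np.array([15, 20, 11, 28, 19, 27])
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability (Pearson r): {r:.2f}")

# Inter-rater reliability: Cohen's kappa for two observers classifying the
# same 8 children's emotional responses as "distressed" (1) or "calm" (0).
rater_a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
rater_b = np.array([1, 0, 1, 0, 0, 0, 1, 1])
observed_agreement = np.mean(rater_a == rater_b)
expected_agreement = (rater_a.mean() * rater_b.mean()
                      + (1 - rater_a.mean()) * (1 - rater_b.mean()))
kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
print(f"Inter-rater reliability (Cohen's kappa): {kappa:.2f}")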


      We expect our bathroom scale to show the same reading if our weight has not changed. Likewise, researchers are concerned with the reliability, or consistency of measurement, by assessment instruments.

      © iStockphoto.com/tetmc

      Assessment Validity

      Validity asks whether the instrument we are using is accurate. A clock that is always 5 minutes fast, for example, could be reliable, but it would not be accurate. Unlike time, for which there is a definition in terms of atomic clocks, psychopathological disorders lack exact, unchanging definitions. Although measures such as neuropsychological tests, brain images, and molecular and genetic changes suggest possible variables to be considered, there is currently no exact measure by which to diagnose psychopathology. This makes validity an important but complex concept. Partly for this reason, we consider a number of types of validity.

       Content validity—the degree to which an instrument measures all aspects of the phenomenon. If a final exam only had questions from 1 week of the course, it would not be representative of what the students had learned. A variety of psychopathological disorders, such as depression, for example, have cognitive, emotional, and motor components. A measure that just asks if a person felt negative about the future would be seen as a less useful measure of depression than one that also asks about feeling sad and thoughts about suicide and self-worth.

       Predictive validity—the degree to which an instrument can predict cognitions, emotions, or actions that a person will experience in the future. If an IQ test in high school predicted college performance, then it would be seen to have predictive validity. Many medical tests such as cholesterol measurements are designed to predict who is at risk for later medical conditions such as cardiovascular problems.

       Concurrent validity—the ability of an instrument to show results similar to those of other established measures of the construct (a computational sketch of concurrent and predictive validity appears after this list).

       Construct validity—the extent to which an instrument measures what it was designed to measure (Cronbach & Meehl, 1955). If a test was designed to measure what students learned in a course, then it would be a problem if the test were also sensitive to other factors, such as intelligence or the ability to understand test questions phrased as double negatives.

       Ecological validity—the degree to which data collected in one context can be considered beyond that local context. For example, considering which
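      Concurrent and predictive validity are often evaluated as correlations between an instrument's scores and an external criterion, either an established measure given at the same time or an outcome measured later. The short Python sketch below illustrates this with invented scores; the scales, numbers, and variable names are hypothetical and are not drawn from the text.

# A minimal, hypothetical sketch of concurrent and predictive validity
# expressed as correlations; all scores below are invented for illustration.
import numpy as np

# Concurrent validity: scores on a new (hypothetical) depression inventory
# compared with scores on an established measure given at the same time.
new_scale = np.array([12, 30, 7, 22, 18, 26, 9])
established_scale = np.array([10, 33, 9, 20, 17, 29, 8])
concurrent_r = np.corrcoef(new_scale, established_scale)[0, 1]
print(f"Concurrent validity (r with established measure): {concurrent_r:.2f}")

# Predictive validity: high school IQ scores used to predict later college GPA.
iq_scores = np.array([98, 110, 123, 105, 131, 94, 117])
college_gpa = np.array([2.6, 3.1, 3.6, 2.9, 3.8, 2.4, 3.3])
predictive_r = np.corrcoef(iq_scores, college_gpa)[0, 1]
print(f"Predictive validity (r with later outcome): {predictive_r:.2f}")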
