As another example of a model, consider the number of O‐ring incidents on NASA's space shuttle (the fleet is officially, and sadly, retired now) as a function of temperature (Figure 1.4). At very low or high temperatures, the number of incidents appears to be elevated. A quadratic function seems to model the relationship adequately. Does it account for all points? No. Nonetheless, it provides a fairly good summary of the available data. Some have argued that had NASA had such a model (i.e., essentially the line joining the points) available before Challenger was launched on January 28, 1986, the launch might have been delayed and the shuttle and crew saved from disaster.2 We feature these data in our chapter on logistic regression.
Figure 1.4 Number of O‐ring incidents on boosters as a function of temperature.
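To make the curve‐fitting idea concrete, the sketch below fits a second‐degree (quadratic) polynomial to a small set of temperature and incident‐count values. This is a minimal illustration in Python; the numbers are hypothetical stand‐ins, not the actual shuttle data, which we treat properly in the logistic regression chapter.

```python
# A minimal sketch of fitting a quadratic curve, assuming numpy is available.
# The temperature and incident values below are hypothetical, not the actual shuttle data.
import numpy as np

temp = np.array([53, 57, 63, 66, 70, 72, 75, 79, 81])  # launch temperature (degrees F), hypothetical
incidents = np.array([3, 1, 1, 0, 1, 0, 2, 0, 0])      # O-ring incidents, hypothetical

# Fit incidents ~ b0 + b1*temp + b2*temp^2 (a quadratic function of temperature)
coefs = np.polyfit(temp, incidents, deg=2)   # coefficients, highest degree first
fitted = np.polyval(coefs, temp)             # model-implied incident counts at each temperature

print("quadratic coefficients:", np.round(coefs, 4))
print("fitted values:", np.round(fitted, 2))
```

As with the figure, such a fit summarizes the data; it does not account for every point, nor does it establish that a quadratic is the only curve that could do so.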
Why did George Box say that all models are wrong, but some are useful? The reason is that even if we obtain a perfectly fitting model, there is nothing to say that it is the only model that will account for the observed data. Some, such as Fox (1997), even encourage divorcing statistical modeling from the idea that it accounts for deterministic processes. In discussing the determinants of one's income, for instance, Fox remarks:
I believe that a statistical model cannot, and is not literally meant to, capture the social process by which incomes are “determined” … No regression model, not even one including a residual, can reproduce this process … The unfortunate tendency to reify statistical models – to forget that they are descriptive summaries, not literal accounts of social processes – can only serve to discredit quantitative data analysis in the social sciences. (p. 5)
Indeed, psychological theory, for instance, has advanced numerous models of behavior, just as biological theory has advanced numerous theories of human functioning. Two or more competing models may each explain observed data quite well. Sometimes, and unfortunately, the model we adopt may have more to do with our sociological (and even political) preferences than with whether one is more “correct” than the other. Science (and mathematics, for that matter) is a human activity, and often theories that are deemed valid or true have much to do with the spirit of the times (the so‐called Zeitgeist) and with what the scientific community will actually accept and tolerate as being true.3 Of course, this is not true in all circumstances, but you should be aware of the factors that make theories popular, especially in fields such as social science where “hard evidence” can be difficult to come by. The reason the experiment is often considered the “gold standard” for evidence is that it often (but not always) helps us narrow down narratives to a few compelling possibilities. In strictly correlational research, isolating the correct narrative can be exceedingly difficult or nearly impossible, regardless of which narrative we most wish upon our data. Good science requires a very critical eye. Whether the theory is that of the Big Bang, the determinants of cancer, or theories of bystander intervention, all of these are narratives that help account for observed data.
1.3 SOCIAL SCIENCES VERSUS HARD SCIENCES
A distinction is often drawn between the so‐called “soft” sciences and the “hard” sciences (Meehl, 1967). The distinction, as is true of so many things, is fuzzy and requires deeper analysis to fully understand. The difference between what is “soft” and what is “hard” science usually has to do only with the object of study, not with the method of analytical inquiry.
For example, consider what distinguishes the scientist who studies the temperature of a human organism from the scientist who studies the self‐esteem of adolescents. Their analytical approaches, at their core, will be remarkably similar. They will both measure, collect data, and subject those data to curve‐fitting or probabilistic analysis (i.e., statistical modeling). Their objects, however, are quite different. Indeed, some may even doubt the measurability of something called “self‐esteem” in the first place. Is self‐esteem real? Does it actually exist? At the heart of the distinction, really, is measurement. Once measurement of an object is agreed upon, the debate between the hard and soft sciences usually vanishes. Both scientists, natural and social, are generally aiming to do the same thing: to understand and document phenomena, and to identify relations among these phenomena. As Hays (1994) put it so well, the overarching goal of science, at its core, is to determine what goes with what. Virtually every scientific investigation you read about has this underlying goal but may operationalize and express it in a variety of different ways.
Social science is a courageous attempt. Hard sciences are, in many respects, much easier than the softer social sciences, not necessarily in their subject matter (organic chemistry is difficult), but rather in what they attempt to accomplish. Studying beats‐per‐minute in an organism is relatively easy. It is not that difficult to measure. Studying something called intelligence is much, much harder. Why? Because even arriving at a suitable and agreeable operational definition of what constitutes intelligence is difficult. Most more or less agree on what “heart rate” means. Fewer people agree on what intelligence really means, even if everyone can agree that some people have more of the mysterious quality than do others. But the study of an object of science should imply that we can actually measure it. Intelligence, unlike heart rate, is not easily measured largely because it is a construct open to much scientific criticism and debate. Even if we acknowledge its existence, it is a difficult thing to “tap into.”
Given the difficulty in measuring social constructs, should this mean that the social scientist gives up and does not study the objects of his or her craft? Of course not. But what it does mean is that she must be extremely cautious, conservative, and tentative regarding conclusions drawn from empirical observations. The social scientist must be up front about the weaknesses of her research and must be very careful not to overstate conclusions. For instance, we can measure the extent to which melatonin, a popular sleep aid, reduces the time to sleep onset (i.e., the time it takes to fall asleep). We can perform experimental trials in which we give some subjects melatonin and others none and record who falls asleep faster. If we keep getting the same results time and time again across a variety of experimental settings, we begin to draw the conclusion that melatonin has a role in decreasing sleep onset. We may not know why this is occurring (maybe we do, but I am pretending for the moment we do not), but we can be reasonably sure the phenomenon exists, that “something” is happening.
Now, contrast the melatonin example to the following question: Do people of greater intelligence, on average, earn more money than those of lesser intelligence? We could correlate a measure of intelligence with income, and in this way we would be proceeding in a similar empirical (even if not experimental, in this case) fashion as would the natural scientist. However, there is a problem. There is a big problem. Since few consistently agree on what intelligence is, how to actually measure it, or even whether it “exists” in the first place, we are unsure of where to even begin. Once we agree on what IQ is, how it is measured, and how we will identify and name it, the correlation between IQ and income is as reputable and respectable as the correlation between variables such as height and weight. It is getting to the very measurement of IQ that is the initially hard, and skeptics would argue impossible, part. But we know this already.
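Once the measurement question is settled, the correlational arithmetic itself is routine. The minimal sketch below computes a Pearson correlation between two hypothetical measured variables in Python; the scores are invented purely for illustration and carry no substantive claim about intelligence and income.

```python
# A minimal sketch of computing a Pearson correlation, assuming numpy is available.
# Both score vectors are hypothetical, invented purely for illustration.
import numpy as np

measure_a = np.array([95, 100, 104, 110, 118, 124, 130])  # hypothetical scores on one measure
measure_b = np.array([38, 42, 40, 55, 61, 58, 72])         # hypothetical scores on a second measure

r = np.corrcoef(measure_a, measure_b)[0, 1]  # off-diagonal entry of the 2 x 2 correlation matrix
print(f"Pearson r = {r:.2f}")
```

The hard scientific work, as argued above, lies not in this computation but in defending the claim that the numbers fed into it actually measure what we say they measure.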