2 The significance level, or type I error rate (α), at which you set your test. All else equal, a more liberal setting such as 0.05 or 0.10 affords more statistical power than a more conservative setting such as 0.01 or 0.001. It is easier to detect a false null if you allow yourself more risk of committing a type I error. Since we usually want to minimize type I error, we typically regard α as fixed at a nominal level (e.g., 0.05 or 0.01) and consider it not amenable to adjustment for the purpose of increasing power. Hence, when it comes to boosting power, researchers usually do not want to “mess with” the type I error rate.
3 Population variability, σ², often unknown but estimated by s². All else equal, the greater the variance of objects studied in the population, the less sensitive the statistical test, and the less power you will have. Why is this so? As an analogy, consider a rock thrown into the water. The rock will make a particular “splash” in that it will displace a certain amount of water when it hits the surface. This can be considered the “effect size” of the splash. If the water is noisy with wind and waves (i.e., high population variability), it will be difficult to detect the splash. If, on the other hand, the water is calm and serene (i.e., low population variability), you will more easily detect the splash. Either way, the rock made a splash of a given size; the magnitude of the splash is the same regardless of whether the waters are calm or turbulent. Whether we can detect the splash is in part a function of the variance in the population.
Applying this concept to research settings, if you are sampling from “noisy” populations, it is harder to see the effect of your independent variable than if you are sampling from less noisy, and thus less variable, populations. This is why research using lab rats or other equally controllable objects can usually detect effects with relatively few animals in a sample, whereas research studying humans on variables such as intelligence, anxiety, attitudes, etc., usually requires many more subjects to detect effects. A good way to boost power is to study populations that have relatively low variability before your treatment is administered. If your treatment works, you will be able to detect its efficacy with fewer subjects than if dealing with a highly variable population. Another approach is to covary out one or two factors thought to be related to the dependent variable through a technique such as the analysis of covariance (Keppel and Wickens, 2004), discussed and demonstrated later in the book.
4 Sample size, n. All else equal, the greater the sample size, the greater the statistical power. Boosting sample size is a common strategy for increasing power. Indeed, as will be discussed at the conclusion of this chapter, for any significance test in which there is at least some effect (i.e., some distance between the null and alternative), statistical significance is assured for a large‐enough sample size. Obtaining large samples is a good thing (since, after all, the most ideal goal would be to have the actual population), but as sample size increases, the p‐value becomes an increasingly poor indicator of experimental effect. Effect sizes should always be reported alongside any significance test. A brief R sketch following this list illustrates how α, σ², and n each move power.
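To make these factors concrete, the following is a minimal sketch using power.t.test() from base R’s stats package. The function and its arguments are real; the particular values of n, delta, and sd are illustrative assumptions, not taken from the text.

# power.t.test() solves for whichever quantity is omitted.
# Factor 2: loosening alpha raises power (two-sample t-test, d = 0.5).
power.t.test(n = 30, delta = 0.5, sd = 1, sig.level = 0.01)$power  # approx. 0.25
power.t.test(n = 30, delta = 0.5, sd = 1, sig.level = 0.05)$power  # approx. 0.48
# Factor 3: greater population variability lowers power for the same raw effect.
power.t.test(n = 30, delta = 0.5, sd = 2, sig.level = 0.05)$power  # approx. 0.15
# Factor 4: solving instead for n, the sample size needed for power = 0.90.
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.90)$n  # approx. 85 per group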
2.21.1 Visualizing Power
Figure 2.12, adapted from Bollen (1989), depicts statistical power under competing alternative values of the population parameter θ. Note carefully in the figure that the critical value for the test remains constant, a consequence of our desire to keep the type I error rate fixed. It is the distance from θ = 0 to θ = C1 or θ = C2 that determines power (the shaded region in distributions (b) and (c)).
Statistical power matters so long as we have the inferential goal of rejecting null hypotheses. A study that is underpowered risks not being able to reject null hypotheses even if such null hypotheses are in reality false. A failure to reject a null hypothesis under the condition of minimal power could either mean a lack of inferential support for the obtained finding, or it could simply suggest an underpowered (and consequently poorly designed) experiment or study. Ensuring adequate statistical power before one engages in a research study or experiment is mandatory (Cohen, 1988).
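To convey the same idea numerically, here is a minimal base-R sketch, an illustration under assumed values rather than a reproduction of Bollen’s figure: a one-tailed z-test at α = 0.05, with the alternative distribution centered at θ = 2.5.

# Null N(0, 1) versus alternative N(2.5, 1); power is the area of the
# alternative distribution beyond the critical value (the shaded region).
crit  <- qnorm(0.95)                         # one-tailed critical value under H0
theta <- seq(-4, 6, length.out = 400)
plot(theta, dnorm(theta, 0, 1), type = "l",
     xlab = expression(theta), ylab = "Density")
lines(theta, dnorm(theta, 2.5, 1), lty = 2)  # alternative distribution
abline(v = crit, col = "gray")
x <- seq(crit, 6, length.out = 200)
polygon(c(crit, x, 6), c(0, dnorm(x, 2.5, 1), 0), col = "gray80", border = NA)
1 - pnorm(crit, mean = 2.5, sd = 1)          # power, approx. 0.80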
2.22 POWER ESTIMATION USING R AND G*POWER
To demonstrate the estimation of power using software, we first use pwr.r.test (Champely, 2014) in R to estimate required sample size for a Pearson r correlation coefficient. As an example, we estimate required sample size for a population correlation coefficient of ρ = 0.10 at a significance level set to 0.05, with desired power equal to 0.90. Note that in the code that follows, we purposely leave n empty so R can estimate this figure for us:
> install.packages("pwr")
> library(pwr)
> pwr.r.test(n = , r = .10, sig.level = .05, power = .90)

     approximate correlation power calculation (arctangh transformation)

              n = 1046.423
              r = 0.1
      sig.level = 0.05
          power = 0.9
    alternative = two.sided
Figure 2.12 Power curves for detecting parameters C1 and C2.
Source: Bollen (1989). Reproduced with permission from John Wiley & Sons, Inc.
We see that to detect a correlation coefficient of 0.10 at a desired level of power equal to 0.90, a sample size of 1046 is required. We could round up to 1047 for a slightly more conservative estimate. It is more conservative because 1047 is a slightly more “generous” sample than the one R reports as necessary (1046). In this case the difference is extremely slight, but in general, when you provide your analysis with more subjects than may be necessary for a given level of power, you are guarding against the possibility of obtaining smaller effects than what you believe are “out there” in your population. If in doubt, larger samples are always preferable to smaller ones, and thus rounding “up” on sample size requirements is usually a good idea.
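In R, the rounding up can be done directly on the returned object; a small sketch (pwr.r.test() and its n component are real, the variable name is ours):

n_est <- pwr.r.test(r = .10, sig.level = .05, power = .90)$n
ceiling(n_est)  # 1047: rounding up guarantees power of at least 0.90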
Estimating in G*Power, we obtain the output given in Figure 2.13.
Note that our power estimate using G*Power is identical to that using R (i.e., power of 0.90 requires a sample size of 1046 for an effect size of ρ = 0.10). G*Power also allows us to draw the corresponding power curve. A power curve is a simple depiction of required sample size as a function of power and estimated effect size. What is nice about power curves is that they allow one to see how estimated sample size requirements and power increase or decrease as a function of effect size. For the estimation of required sample size for detecting ρ = 0.10, G*Power generates the curve in Figure 2.14 (top curve).
Figure 2.13 G*Power output for estimating required sample size for r = 0.10.
Figure 2.14 Power curves generated by G*Power for detecting correlation coefficients of ρ = 0.10 to 0.50.
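A comparable curve can be sketched in R with the pwr package; this reproduces the logic of Figure 2.14, not G*Power’s output, and the object names are ours:

# Required n for power = 0.90 across hypothesized correlations of .10 to .50.
library(pwr)
rho   <- seq(0.10, 0.50, by = 0.05)
n_req <- sapply(rho, function(r)
  ceiling(pwr.r.test(r = r, sig.level = .05, power = .90)$n))
plot(rho, n_req, type = "b",
     xlab = expression(rho), ylab = "Required sample size (power = .90)")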
Especially for small hypothesized values of ρ, the required sample