
href="#ulink_475f5fba-be26-5d8e-8924-25e5341535b1">Example 2.1. The upper portion of the table contains the fitted regression model. Notice that before rounding the regression coefficients agree with those we calculated manually. Table 2.3 also contains other information about the regression model. We return to this output and explain these quantities in subsequent sections.

      The least-squares estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ have several important properties. First, note from Eqs. (2.6) and (2.7) that $\hat{\beta}_0$ and $\hat{\beta}_1$ are linear combinations of the observations $y_i$. For example,

$\hat{\beta}_1 = \sum_{i=1}^{n} c_i y_i$

      where $c_i = (x_i - \bar{x})/S_{xx}$ for $i = 1, 2, \ldots, n$.
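      A short numerical sketch (with placeholder data, not the Example 2.1 measurements) verifying that the slope written as the linear combination $\sum_{i=1}^{n} c_i y_i$ agrees with the ratio form $S_{xy}/S_{xx}$ of Eq. (2.7):

```python
# Slope as a linear combination of the observations y_i.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # placeholder regressor values
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # placeholder response values

S_xx = np.sum((x - x.mean()) ** 2)
S_xy = np.sum((x - x.mean()) * y)
c = (x - x.mean()) / S_xx                 # the weights c_i

beta1_ratio = S_xy / S_xx                 # slope from Eq. (2.7)
beta1_linear = np.sum(c * y)              # slope as sum of c_i * y_i
print(beta1_ratio, beta1_linear)          # the two values coincide
```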

      The least-squares estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ are unbiased estimators of the model parameters $\beta_0$ and $\beta_1$. To show this for $\hat{\beta}_1$, consider

$E(\hat{\beta}_1) = E\left(\sum_{i=1}^{n} c_i y_i\right) = \sum_{i=1}^{n} c_i E(y_i) = \sum_{i=1}^{n} c_i (\beta_0 + \beta_1 x_i) = \beta_0 \sum_{i=1}^{n} c_i + \beta_1 \sum_{i=1}^{n} c_i x_i = \beta_1$

since $\sum_{i=1}^{n} c_i = 0$ and $\sum_{i=1}^{n} c_i x_i = 1$ by direct substitution.

      That is, if we assume that the model is correct [$E(y_i) = \beta_0 + \beta_1 x_i$], then $\hat{\beta}_1$ is an unbiased estimator of $\beta_1$. Similarly we may show that $\hat{\beta}_0$ is an unbiased estimator of $\beta_0$, or

$E(\hat{\beta}_0) = \beta_0$
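      Unbiasedness can also be checked empirically. The following Monte Carlo sketch uses assumed true values ($\beta_0 = 2$, $\beta_1 = 1.5$, $\sigma = 1$, chosen for illustration only) and shows that the averages of the least-squares estimates over many simulated samples are close to the true parameters.

```python
# Monte Carlo check of unbiasedness of the least-squares estimators.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)                 # fixed regressor values
beta0, beta1, sigma = 2.0, 1.5, 1.0        # assumed true parameters
S_xx = np.sum((x - x.mean()) ** 2)

slopes, intercepts = [], []
for _ in range(20000):
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, size=x.size)
    b1 = np.sum((x - x.mean()) * y) / S_xx
    b0 = y.mean() - b1 * x.mean()
    slopes.append(b1)
    intercepts.append(b0)

# Averages of the estimates are close to the true beta1 and beta0.
print(np.mean(slopes), np.mean(intercepts))
```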

      The variance of $\hat{\beta}_1$ is found as

      (2.13) $\operatorname{Var}(\hat{\beta}_1) = \operatorname{Var}\left(\sum_{i=1}^{n} c_i y_i\right) = \sum_{i=1}^{n} c_i^2 \operatorname{Var}(y_i)$

      because the observations $y_i$ are uncorrelated, and so the variance of the sum is just the sum of the variances. The variance of each term in the sum is $c_i^2 \operatorname{Var}(y_i)$, and we have assumed that $\operatorname{Var}(y_i) = \sigma^2$; consequently,

      (2.14) $\operatorname{Var}(\hat{\beta}_1) = \sigma^2 \sum_{i=1}^{n} c_i^2 = \dfrac{\sigma^2 \sum_{i=1}^{n} (x_i - \bar{x})^2}{S_{xx}^2} = \dfrac{\sigma^2}{S_{xx}}$

      The variance of $\hat{\beta}_0$ is

$\operatorname{Var}(\hat{\beta}_0) = \operatorname{Var}(\bar{y} - \hat{\beta}_1 \bar{x}) = \operatorname{Var}(\bar{y}) + \bar{x}^2 \operatorname{Var}(\hat{\beta}_1) - 2\bar{x} \operatorname{Cov}(\bar{y}, \hat{\beta}_1)$

      Now the variance of $\bar{y}$ is just $\operatorname{Var}(\bar{y}) = \sigma^2/n$, and the covariance between $\bar{y}$ and $\hat{\beta}_1$ can be shown to be zero (see Problem 2.25). Thus,

      (2.15) $\operatorname{Var}(\hat{\beta}_0) = \operatorname{Var}(\bar{y}) + \bar{x}^2 \operatorname{Var}(\hat{\beta}_1) = \sigma^2\left(\dfrac{1}{n} + \dfrac{\bar{x}^2}{S_{xx}}\right)$
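      A simulation sketch comparing the empirical variances of $\hat{\beta}_1$ and $\hat{\beta}_0$ with the theoretical values in Eqs. (2.14) and (2.15); the true parameter values below are illustrative assumptions, not taken from the text.

```python
# Check Var(beta1_hat) = sigma^2/S_xx and Var(beta0_hat) = sigma^2*(1/n + xbar^2/S_xx).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 20)
n, beta0, beta1, sigma = x.size, 2.0, 1.5, 1.0
S_xx = np.sum((x - x.mean()) ** 2)

b1s, b0s = [], []
for _ in range(20000):
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, size=n)
    b1 = np.sum((x - x.mean()) * y) / S_xx
    b0 = y.mean() - b1 * x.mean()
    b1s.append(b1)
    b0s.append(b0)

print(np.var(b1s), sigma**2 / S_xx)                          # Eq. (2.14)
print(np.var(b0s), sigma**2 * (1/n + x.mean()**2 / S_xx))    # Eq. (2.15)
```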

      Another important result concerning the quality of the least-squares estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ is the Gauss-Markov theorem, which states that for the regression model (2.1) with the assumptions $E(\varepsilon) = 0$, $\operatorname{Var}(\varepsilon) = \sigma^2$, and uncorrelated errors, the least-squares estimators are unbiased and have minimum variance when compared with all other unbiased estimators that are linear combinations of the $y_i$. We often say that the least-squares estimators are best linear unbiased estimators, where “best” implies minimum variance. Appendix C.4 proves the Gauss-Markov theorem for the more general multiple linear regression situation, of which simple linear regression is a special case.
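      The following sketch is only an illustration of the Gauss-Markov property, not the Appendix C.4 proof. It compares the least-squares slope with another linear unbiased slope estimator, the two-endpoint slope $(y_n - y_1)/(x_n - x_1)$ (a choice made here for illustration); both are unbiased, but the least-squares estimator has the smaller variance.

```python
# Illustration: least squares vs. another linear unbiased slope estimator.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 20)
beta0, beta1, sigma = 2.0, 1.5, 1.0        # assumed true parameters
S_xx = np.sum((x - x.mean()) ** 2)

ls_slopes, endpoint_slopes = [], []
for _ in range(20000):
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, size=x.size)
    ls_slopes.append(np.sum((x - x.mean()) * y) / S_xx)
    endpoint_slopes.append((y[-1] - y[0]) / (x[-1] - x[0]))

print(np.mean(ls_slopes), np.mean(endpoint_slopes))   # both are close to beta1
print(np.var(ls_slopes), np.var(endpoint_slopes))     # least squares is smaller
```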

      1 The sum of the residuals in any regression model that contains an intercept $\beta_0$ is always zero, that is, $\sum_{i=1}^{n} e_i = 0$. This property follows directly from the first normal equation in Eqs. (2.5) and is demonstrated in Table 2.2 for the residuals from Example 2.1. Rounding errors may affect the sum.

      2 The sum of the observed values $y_i$ equals the sum of the fitted values $\hat{y}_i$, or $\sum_{i=1}^{n} y_i = \sum_{i=1}^{n} \hat{y}_i$. Table 2.2 demonstrates this result for Example 2.1.

      3 The least-squares regression line always passes through the centroid [the point $(\bar{x}, \bar{y})$] of the data.

      4 The sum of the residuals weighted by the corresponding value of the regressor variable always equals zero, that is, $\sum_{i=1}^{n} x_i e_i = 0$.

      5 The sum of the residuals weighted by the corresponding fitted value always equals zero, that is, $\sum_{i=1}^{n} \hat{y}_i e_i = 0$. A numerical check of these five properties is sketched after this list.
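      A short sketch checking properties 1 through 5 numerically for a least-squares fit (placeholder data; the sums are zero up to floating-point rounding):

```python
# Numerical check of the five residual properties.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # placeholder regressor values
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # placeholder response values

S_xx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * y) / S_xx
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x
e = y - y_hat                              # residuals

print(np.sum(e))                           # property 1: ~0
print(np.sum(y), np.sum(y_hat))            # property 2: equal sums
print(b0 + b1 * x.mean(), y.mean())        # property 3: line passes through centroid
print(np.sum(x * e))                       # property 4: ~0
print(np.sum(y_hat * e))                   # property 5: ~0
```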

      2.2.3 Estimation of $\sigma^2$

      In addition to estimating $\beta_0$ and $\beta_1$, an estimate of $\sigma^2$ is required to test hypotheses and construct interval estimates pertinent to the regression model. Ideally we would like this estimate not to depend on the adequacy of the fitted model. This is only possible when there are several observations on $y$ for at least one value of $x$ (see Section 4.5) or when prior information concerning $\sigma^2$ is available. When this approach cannot be used, the estimate of $\sigma^2$ is obtained from the residual or error sum of squares,

$SS_{\text{Res}} = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$
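      A minimal sketch (placeholder data) computing the residual sum of squares and the usual residual mean square $SS_{\text{Res}}/(n - 2)$, which is the standard estimator of $\sigma^2$ in simple linear regression; the divisor $n - 2$ is developed in the continuation of this section, which is not shown here.

```python
# Residual sum of squares and residual mean square as an estimate of sigma^2.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # placeholder regressor values
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # placeholder response values

b1 = np.sum((x - x.mean()) * y) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
e = y - (b0 + b1 * x)                      # residuals

SS_res = np.sum(e ** 2)                    # residual (error) sum of squares
sigma2_hat = SS_res / (len(x) - 2)         # residual mean square
print(SS_res, sigma2_hat)
```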
