
L = Σ_{j=1}^{N} Σ_{g=1}^{G} y_gj ln(e^{X_j B_g} / Σ_{s=1}^{G} e^{X_j B_s}) = Σ_{j=1}^{N} [Σ_{g=1}^{G} y_gj X_j B_g − ln(Σ_{g=1}^{G} e^{X_j B_g})]

      Because of the nonlinear nature of the parameters, there is no closed‐form solution to these equations, and they must be solved iteratively. The Newton–Raphson [4–7] method is used to solve these equations. This method makes use of the information matrix, I(β), which is formed from the matrix of second partial derivatives.
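Concretely, the Newton–Raphson iteration for the binary (two-group) case can be sketched in a few lines. This is a minimal sketch under stated assumptions: X is an (N, k) design matrix whose first column is ones for the intercept, y is a 0/1 response vector, and the function and variable names are illustrative, not from the text.

```python
import numpy as np

def fit_logistic_newton(X, y, tol=1e-10, max_iter=25):
    """Fit a binary logistic regression by Newton-Raphson.

    Each step solves I(beta) * step = gradient, where I(beta) is the
    information matrix built from the second partial derivatives.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # fitted probabilities
        gradient = X.T @ (y - p)                   # score vector
        info = X.T @ (X * (p * (1 - p))[:, None])  # information matrix I(beta)
        step = np.linalg.solve(info, gradient)
        beta = beta + step
        if np.max(np.abs(step)) < tol:             # stop when the step is tiny
            break
    return beta, info

# Small simulated example (the data-generating values are assumptions)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * X[:, 1])))).astype(float)
beta_hat, info = fit_logistic_newton(X, y)
```

At convergence the score vector X′(y − p̂) is essentially zero, which is the defining condition of the maximum likelihood estimates.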

      The elements of the information matrix are given by ∂²L/∂β_ik ∂β_ik′ = −Σ_{j=1}^{N} x_kj x_k′j π_ij(1 − π_ij)

and ∂²L/∂β_ik ∂β_i′k′ = Σ_{j=1}^{N} x_kj x_k′j π_ij π_i′j.

      The information matrix is used because the asymptotic covariance matrix of the maximum likelihood estimates is equal to the inverse of the information matrix. That is, V(β̂) = I(β)⁻¹. This covariance matrix is used in the calculation of confidence intervals for the regression coefficients, odds ratios, and predicted probabilities.
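For instance, given estimated coefficients and an information matrix from a fit, the covariance matrix, standard errors, and 95% Wald confidence intervals follow directly. The numbers below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Hypothetical fitted values (not from the text):
beta_hat = np.array([-1.2, 0.8])       # estimated intercept and slope
info = np.array([[40.0, 5.0],          # information matrix I(beta-hat)
                 [5.0, 12.0]])

cov = np.linalg.inv(info)              # V(beta-hat) = I(beta-hat)^(-1)
se = np.sqrt(np.diag(cov))             # standard errors of the estimates

z = 1.96                               # approximate 97.5th normal percentile
lo, hi = beta_hat - z * se, beta_hat + z * se   # 95% Wald CIs for coefficients
or_lo, or_hi = np.exp(lo[1]), np.exp(hi[1])     # CI for the odds ratio e^beta1
```

Exponentiating the endpoints of the coefficient interval gives the interval for the odds ratio, since the transformation is monotone.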

      The interpretation of the estimated regression coefficients is not straightforward. In logistic regression, not only is the relationship between X and Y nonlinear, but also, if the dependent variable has more than two unique values, there are several regression equations. Consider the usual case of a binary dependent variable, Y, and a single independent variable, X. Assume that Y is coded so it takes on the values 0 and 1. In this case, the logistic regression equation is ln(p/(1 − p)) = β0 + β1X. Now consider the impact of a unit increase in X. The logistic regression equation becomes ln(p′/(1 − p′)) = β0 + β1(X + 1) = β0 + β1X + β1. We can isolate the slope by taking the difference between these two equations. We have

      (2.9) β1 = [β0 + β1(X + 1)] − (β0 + β1X) = ln(p′/(1 − p′)) − ln(p/(1 − p)) = ln[(p′/(1 − p′)) / (p/(1 − p))] = ln(odds′/odds)

      That is, β1 is the log of the ratio of the odds at X + 1 and at X. Removing the logarithm by exponentiating both sides gives e^{β1} = odds′/odds. The regression coefficient β1 is thus interpreted as the log of the odds ratio comparing the odds after a one-unit increase in X to the original odds. Note that this interpretation does not depend on the particular value of X: the probabilities p and p′ change as X changes, but the odds ratio e^{β1} stays constant.
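A quick numerical check of the identity e^{β1} = odds′/odds, using assumed coefficient values (b0 and b1 below are illustrative, not from the text):

```python
import math

# Illustrative coefficients (assumed, not from the text)
b0, b1 = -2.0, 0.7

def prob(x):
    """P(Y = 1 | X = x) under the fitted logistic model."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

def odds(x):
    p = prob(x)
    return p / (1.0 - p)

# The probabilities change with X, but the odds ratio for a one-unit
# increase equals e^b1 at every X.
ratios = [odds(x + 1.0) / odds(x) for x in (-3.0, 0.0, 2.5)]
```

All three ratios agree with e^{b1} to machine precision, because p/(1 − p) = e^{b0 + b1 x} exactly under the model.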

      Inferences about individual regression coefficients, groups of regression coefficients, goodness of fit, mean responses, and predictions of group membership of new observations are all of interest. These inference procedures can be treated by considering hypothesis tests and/or confidence intervals. The inference procedures in logistic regression rely on large sample sizes for accuracy. Two procedures are available for testing the significance of one or more independent variables in a logistic regression: likelihood ratio tests and Wald tests. Simulation studies usually show that the likelihood ratio test performs better than the Wald test. However, the Wald test is still used to test the significance of individual regression coefficients because of its ease of calculation.
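The two test statistics can be compared numerically. The sketch below fits the model with and without a single predictor, using a compact Newton–Raphson fitter as described above, on simulated data; the simulation setup and all names are assumptions, not from the text. The likelihood ratio statistic is 2(L_full − L_reduced) and the Wald statistic is (β̂1/SE)², both chi-square with 1 df here.

```python
import numpy as np

def fit(X, y, iters=30):
    """Newton-Raphson fit returning estimates, information matrix, and
    log likelihood.  A bare-bones sketch; names are illustrative."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        info = X.T @ (X * (p * (1 - p))[:, None])
        b = b + np.linalg.solve(info, X.T @ (y - p))
    p = 1 / (1 + np.exp(-X @ b))
    return b, info, np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Simulated data (an assumption for illustration)
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = (rng.random(300) < 1 / (1 + np.exp(-(0.2 + 0.9 * x)))).astype(float)

b_full, info_full, ll_full = fit(np.column_stack([np.ones(300), x]), y)
_, _, ll_null = fit(np.ones((300, 1)), y)       # intercept-only model

lr_stat = 2 * (ll_full - ll_null)               # likelihood ratio chi-square (1 df)
se1 = np.sqrt(np.linalg.inv(info_full)[1, 1])   # SE of the slope from I(beta)^-1
wald_stat = (b_full[1] / se1) ** 2              # Wald chi-square (1 df)
```

The two statistics are asymptotically equivalent and typically close in large samples, though they can diverge in small samples, which is why the likelihood ratio test is usually preferred.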

      (2.10) D = 2 Σ_{j=1}^{J} Σ_{g=1}^{G} w_gj ln(w_gj / (n_j p_gj))

      This expression may be used to calculate the log likelihood of the saturated model without actually fitting a saturated model. The formula is L_Saturated = L_Reduced + D/2.
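Equation (2.10) is easy to evaluate directly for grouped data. In this sketch the counts w_gj and fitted probabilities p_gj are made up for illustration:

```python
import numpy as np

# Hypothetical grouped data: J = 2 covariate patterns, G = 2 outcome groups.
# w[g, j] is the observed count of group g in pattern j; p[g, j] is the
# model's fitted probability of group g in pattern j.
w = np.array([[30.0, 20.0],
              [10.0, 40.0]])
p = np.array([[0.55, 0.35],
              [0.45, 0.65]])
n = w.sum(axis=0)          # n_j: total count in each covariate pattern

# Equation (2.10): D = 2 * sum_j sum_g  w_gj * ln( w_gj / (n_j * p_gj) )
D = 2.0 * np.sum(w * np.log(w / (n * p)))
```

Each term compares an observed count w_gj with its expected count n_j p_gj under the model; D is zero only when the fitted probabilities reproduce the observed proportions exactly.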

      The deviance in logistic regression is analogous to the residual sum of squares in multiple regression. In fact, when the deviance is calculated in multiple regression, it is equal to the sum of the squared residuals. Deviance residuals, to be discussed later, may be squared and summed as an alternative way to calculate the deviance D.
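In the ungrouped binary case the saturated log likelihood is zero (each observation is fitted perfectly with p̂ = y ∈ {0, 1}), so the deviance reduces to −2L, and the sum of squared deviance residuals reproduces it. A minimal check with hypothetical fitted probabilities:

```python
import numpy as np

# Ungrouped binary case: with 0/1 responses the saturated log likelihood
# is zero, so D = -2 * L.  The fitted probabilities below are hypothetical.
y = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
p = np.array([0.8, 0.3, 0.6, 0.9, 0.2])

loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
D = -2.0 * loglik

# Deviance residuals: sign(y - p) times the square root of each
# observation's -2 * log-likelihood contribution; their squares sum to D.
d = np.sign(y - p) * np.sqrt(-2.0 * (y * np.log(p) + (1 - y) * np.log(1 - p)))
```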

      The change in deviance, ΔD, due to excluding (or including) one or more variables is used in logistic regression just as the partial F test is used in multiple regression. Many texts use the letter G to represent ΔD, but we have already used G to represent the number of groups in Y. Instead of using the F
