2.6 Residuals and Model Checking
Recall that for linear models, we define the residuals to be the differences between the observed and fitted values \(y_i-\hat{\mu}^{(0)}_i,\; i = 1, \ldots, n\). In fact, both the scaled deviance and Pearson \(X^2\) statistic for a normal GLM are the sum of the squared residuals divided by \(\sigma^2\). We can generalise this to define residuals for other generalised linear models in a natural way.
For any GLM, we define the Pearson residuals to be \[ e^P_i={{y_i-\hat{\mu}_i^{(0)}}\over{\widehat{Var}(Y_i)^{1\over 2}}},\qquad i = 1, \ldots, n. \] Then, from (2.12), \(X^2\) is the sum of the squared Pearson residuals.
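As an illustrative sketch, suppose a Poisson log-linear model is fitted to simulated data using the Python package \texttt{statsmodels} (the data, the model and the choice of package are assumptions made purely for this example). The Pearson residuals can then be computed directly from the definition, and their sum of squares recovers \(X^2\).

\begin{verbatim}
# Minimal sketch: Pearson residuals for a Poisson log-linear GLM,
# computed from the definition and checked against statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 2, size=50)
y = rng.poisson(np.exp(0.5 + 0.8 * x))     # simulated counts (illustrative)
X = sm.add_constant(x)

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = fit.fittedvalues                  # fitted means

# Pearson residual: (y_i - mu_i) / sqrt(Var(Y_i)); for Poisson, Var(Y_i) = mu_i
e_P = (y - mu_hat) / np.sqrt(mu_hat)

print(np.allclose(e_P, fit.resid_pearson))            # matches statsmodels
print(np.allclose(np.sum(e_P**2), fit.pearson_chi2))  # sum of squares = X^2
\end{verbatim}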
For any GLM, we define the deviance residuals to be \[ e^D_i={\rm sign}(y_i-\hat{\mu}_i^{(0)}) \left[{{2\left\{y_i[\hat{\theta}^{(s)}_i-\hat{\theta}^{(0)}_i] -[b(\hat{\theta}^{(s)}_i)-b(\hat{\theta}^{(0)}_i)]\right\}} \over{\phi_i}}\right]^{1\over 2},\qquad i = 1, \ldots, n, \] where \({\rm sign}(x)=1\) if \(x>0\), \(-1\) if \(x<0\) and \(0\) if \(x=0\). Then, from (2.10), the scaled deviance, \(L_{0}\), is the sum of the squared deviance residuals.
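Continuing the same illustrative Poisson example, the unit deviances are \(d_i = 2[y_i\log(y_i/\hat{\mu}_i)-(y_i-\hat{\mu}_i)]\) (with \(y_i\log y_i\) taken as zero when \(y_i=0\)), and the deviance residuals attach the sign of \(y_i-\hat{\mu}_i\) to \(d_i^{1/2}\); their sum of squares recovers the deviance.

\begin{verbatim}
# Sketch (continuing the simulated Poisson example): deviance residuals
# built from the unit deviances and checked against statsmodels.
d_unit = 2 * (y * np.log(np.where(y > 0, y / mu_hat, 1.0)) - (y - mu_hat))
e_D = np.sign(y - mu_hat) * np.sqrt(d_unit)

print(np.allclose(e_D, fit.resid_deviance))       # matches statsmodels
print(np.allclose(np.sum(e_D**2), fit.deviance))  # sum of squares = deviance
\end{verbatim}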
When \(\phi_i=\sigma^2/m_i\) and \(\sigma^2\) is unknown, as in Section 2.5, the residuals above should be multiplied through by \(\sigma\) (equivalently, their squares by \(\sigma^2\)) to eliminate dependence on the unknown dispersion parameter. With this convention, for a normal GLM, the Pearson and deviance residuals are both equal to the usual residuals, \(y_i-\hat{\mu}^{(0)}_i,\; i = 1, \ldots, n\).
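As a quick check in the same illustrative setting, a Gaussian (identity-link) fit gives Pearson and deviance residuals identical to the raw residuals \(y_i-\hat{\mu}_i\) (in the version of \texttt{statsmodels} assumed here, these residuals are reported without dividing by an estimate of \(\sigma\), so no rescaling is needed).

\begin{verbatim}
# Sketch: for a normal (Gaussian, identity-link) GLM both kinds of residual
# reduce to the ordinary residuals y_i - mu_i.
y_n = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=50)   # simulated responses
fit_n = sm.GLM(y_n, X, family=sm.families.Gaussian()).fit()
raw = y_n - fit_n.fittedvalues

print(np.allclose(raw, fit_n.resid_pearson))
print(np.allclose(raw, fit_n.resid_deviance))
\end{verbatim}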
Both the Pearson and deviance residuals can be standardised by dividing through by \((1-h_{ii})^{1/2}\), as in Section 1.4. The derived residuals \[ r_i^*=r^D_i+\frac{1}{r^D_i}\log(r^P_i/r^D_i) \] are approximately standard normal for a wide range of models, where \(r^D_i\) and \(r^P_i\) are the standardised deviance and Pearson residuals, respectively.
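As a sketch of the computations (continuing the illustrative Poisson example), one standard choice of leverage for a GLM takes \(h_{ii}\) as the \(i\)th diagonal element of the weighted hat matrix \(H=W^{1/2}X(X^{\rm T}WX)^{-1}X^{\rm T}W^{1/2}\), where \(W\) is the diagonal matrix of iterative weights; for a log-link Poisson model these weights are simply \(\hat{\mu}_i\).

\begin{verbatim}
# Sketch (Poisson example continued): leverages from the weighted hat
# matrix, then standardised residuals and r*.
W_half = np.sqrt(mu_hat)                   # log-link Poisson: w_i = mu_i
XW = X * W_half[:, None]                   # W^{1/2} X
H = XW @ np.linalg.solve(XW.T @ XW, XW.T)  # W^{1/2} X (X'WX)^{-1} X' W^{1/2}
h = np.diag(H)                             # leverages h_ii

r_P = e_P / np.sqrt(1 - h)                 # standardised Pearson residuals
r_D = e_D / np.sqrt(1 - h)                 # standardised deviance residuals
r_star = r_D + np.log(r_P / r_D) / r_D     # approx. N(0,1); undefined if a
                                           # residual is exactly zero
\end{verbatim}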
Generalised linear model checking using residuals is based on the same kinds of diagnostic plots as were suggested for linear models in Section 1.7. Similarly, the Cook’s distance \(C_j\) for linear models can be adapted for GLMs by using Pearson residuals.
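The sketch below, again purely illustrative, computes one common GLM version of Cook's distance, \(C_j=(r^P_j)^2h_{jj}/\{p(1-h_{jj})\}\) with \(p\) the number of regression parameters, and draws two typical diagnostic plots: standardised deviance residuals against the linear predictor, and a normal quantile plot of \(r^*_i\).

\begin{verbatim}
# Sketch (continued): Cook's distance via standardised Pearson residuals,
# plus two simple diagnostic plots.
import matplotlib.pyplot as plt
import scipy.stats as stats

p = X.shape[1]
cooks = r_P**2 * h / (p * (1 - h))
print(np.argsort(cooks)[-3:])              # three most influential cases

eta = X @ fit.params                       # linear predictor
fig, ax = plt.subplots(1, 2, figsize=(8, 3))
ax[0].scatter(eta, r_D)                    # residuals vs linear predictor
ax[0].set(xlabel="linear predictor", ylabel="standardised deviance residual")
stats.probplot(r_star, dist="norm", plot=ax[1])   # normal QQ-plot of r*
plt.tight_layout()
plt.show()
\end{verbatim}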