
1.4 Inference

It follows from the distribution of \(\hat\beta\) that \[ {{\hat\beta_k-\beta_k}\over{\sigma[(X^TX)^{-1}]_{kk}^{1/2}}}\sim N(0,1),\qquad k=0,\ldots ,p. \] The dependence on the unknown \(\sigma\) can be eliminated by replacing \(\sigma\) with its estimate \(s\), in which case it can be shown that \[ {{\hat\beta_k-\beta_k}\over{s.e.(\hat\beta_k)}}\sim t_{n-p-1}, \] where the standard error \(s.e.(\hat\beta_k)\) is given by \[ s.e.(\hat\beta_k)=s[(X^TX)^{-1}]_{kk}^{1/2}. \]
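To see how these quantities fit together numerically, here is a minimal numpy sketch (not from the notes; the simulated design matrix, response and coefficient values are illustrative assumptions) that computes \(\hat\beta\), \(s\), the standard errors and the resulting \(t\) statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 100, 3                        # n observations, p predictors
X = np.column_stack([np.ones(n),     # intercept column
                     rng.normal(size=(n, p))])
beta_true = np.array([1.0, 2.0, 0.0, -1.5])   # assumed "true" coefficients
y = X @ beta_true + rng.normal(scale=0.5, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y         # least-squares estimate

e = y - X @ beta_hat                 # residuals
s2 = e @ e / (n - p - 1)             # estimate of sigma^2
se = np.sqrt(s2 * np.diag(XtX_inv))  # s.e.(beta_hat_k)

t_stats = beta_hat / se              # t statistic for H0: beta_k = 0
print(np.round(beta_hat, 3), np.round(se, 3), np.round(t_stats, 2))
```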

This enables a confidence interval for any \(\beta_k\) to be calculated, or a hypothesis of the form H\(_0\): \(\beta_k=0\) to be tested.
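To spell out the standard construction, a \(100(1-\alpha)\%\) confidence interval for \(\beta_k\) is \[ \hat\beta_k \pm t_{n-p-1}(\alpha/2)\, s.e.(\hat\beta_k), \] where \(t_{n-p-1}(\alpha/2)\) denotes the upper \(\alpha/2\) quantile of the \(t_{n-p-1}\) distribution, and H\(_0\): \(\beta_k=0\) is rejected at level \(\alpha\) when \(|\hat\beta_k|/s.e.(\hat\beta_k) > t_{n-p-1}(\alpha/2)\).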

The sampling distributions of the fitted values and residuals can be obtained straightforwardly as \[ \hat y\sim N(X\beta,\sigma^{2}H) \] and \[ e\sim N(0,\sigma^{2}[I_n-H]). \] The latter expression allows us to calculate standardised residuals, for comparison purposes, as \[ r_i={{e_i}\over{s(1-h_{ii})^{1/2}}}, \] where \(h_{ii}\) is the \(i\)th diagonal element of the hat matrix \(H\).
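The following sketch (under the same simulated-data assumptions as the earlier example) illustrates the hat matrix, fitted values and standardised residuals with numpy.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ np.array([1.0, 2.0, 0.0, -1.5]) + rng.normal(scale=0.5, size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
y_hat = H @ y                          # fitted values
e = y - y_hat                          # residuals
s = np.sqrt(e @ e / (n - p - 1))       # residual standard error
h = np.diag(H)                         # leverages h_ii
r = e / (s * np.sqrt(1 - h))           # standardised residuals
print(np.round(r[:5], 2))
```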