
2.4 Confidence and prediction intervals

As mentioned in the previous section, we always work with samples, and because of that we inevitably deal with randomness, even when there are no other sources of uncertainty in the data. For example, if we want to estimate the mean of a variable based on the observed data, the value we get will differ from one sample to another. This should have become apparent from the examples we discussed earlier. And if the LLN and CLT hold, then we know that the estimate of our parameter will have its own distribution and will converge to the population value as the sample size increases. This is the basis for the construction of confidence and prediction intervals, discussed in this section. Depending on our needs, we can focus on the uncertainty of either the estimate of a parameter or the random variable \(y\) itself. In the former case, we typically work with the confidence interval - the interval constructed for the estimate of a parameter - while in the latter case we are interested in the prediction interval - the interval constructed for the random variable \(y\).

In order to simplify further discussion in this section, we will take the population mean and its in-sample estimate as an example. In this case we have:

  1. A random variable \(y\), which is assumed to follow some distribution with finite mean \(\mu\) and variance \(\sigma^2\);
  2. A sample of size \(T\) from the population of \(y\);
  3. Estimates of mean \(\hat{\mu}=\bar{y}\) and variance \(\hat{\sigma}^2 = s^2\), obtained based on the sample of size \(T\).
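To make this setup concrete, the following minimal R sketch generates such a sample and computes the two estimates (the Normal population and the specific values of \(T\), \(\mu\), and \(\sigma^2\) are arbitrary choices for illustration):

```r
# Generate a sample of size T from a population with mean mu and variance sigma^2.
# We pick a Normal population here purely for illustration.
set.seed(41)
T <- 100       # sample size, mirroring the notation in the text
               # (note: this masks R's shorthand T for TRUE)
mu <- 10
sigma <- 2
y <- rnorm(T, mean=mu, sd=sigma)

# In-sample estimates of the mean and variance
yBar <- mean(y)    # estimate of mu
s2 <- var(y)       # estimate of sigma^2 (uses the T-1 denominator)
c(yBar, s2)
```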

2.4.1 Confidence interval

What we want to get by doing this is an idea about the population mean \(\mu\). The value \(\bar{y}\) does not tell us much on its own due to randomness, and if we do not capture its uncertainty, we will not know where the true value \(\mu\) can be. But using the LLN and CLT, we know that the sample mean should converge to the true one and should follow the Normal distribution. So, the distribution of the sample mean would look like this (Figure 2.23).

Figure 2.23: Distribution of the sample mean.

On its own, this distribution just tells us that the variable is random around the true mean \(\mu\) and that its density function has a bell-like shape. To make this more useful, we can construct a confidence interval for it, which would tell us where the true parameter is most likely to lie. We can cut the tails of this distribution to determine the width of the interval, expecting it to cover \((1-\alpha)\times 100\)% of cases. In the ideal world, asymptotically, the confidence interval would be constructed based on the true value, like this:

Figure 2.24: Distribution of the sample mean and the confidence interval based on the population data.

Figure 2.24 shows the classical Normal distribution curve around the population mean \(\mu\), the confidence interval of the level \(1-\alpha\), and the cut-off tails, the total area of which corresponds to \(\alpha\). The value \(1-\alpha\) is called the confidence level, while \(\alpha\) is the significance level. By constructing the interval this way, we expect that in \((1-\alpha)\times 100\)% of cases the value will be inside the bounds, and in \(\alpha\times 100\)% of cases it will not.
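As a quick numerical check of this tail-cutting logic, the quantiles of the standard Normal distribution that leave \(\alpha/2\) in each tail can be obtained directly (a minimal sketch; \(\alpha=0.05\) is just an example value):

```r
# Quantiles of the standard Normal that cut off alpha/2 in each tail
alpha <- 0.05
qnorm(c(alpha/2, 1-alpha/2))
# [1] -1.959964  1.959964

# The area between these two points is exactly 1-alpha
pnorm(qnorm(1-alpha/2)) - pnorm(qnorm(alpha/2))
# [1] 0.95
```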

In reality we do not know the true mean \(\mu\), so we do a slightly different thing: we construct a confidence interval based on the sample mean \(\bar{y}\) and the sample variance \(s^2\), hoping that due to the LLN they will converge to the true values. We use the Normal distribution because we expect the CLT to work. This process looks something like Figure 2.25, with the bell curve in the background representing the true distribution of the sample mean and the curve in the foreground representing the assumed distribution based on our sample:

Figure 2.25: Distribution of the sample mean and the confidence interval based on a sample.

So, what the confidence interval does in reality is try to cover the unknown population mean, based on the sample values of \(\bar{y}\) and \(s^2\). If we construct the confidence interval of the width \(1-\alpha\) (e.g. 0.95) for thousands of random samples (thousands of trials), then in \((1-\alpha)\times 100\)% of cases (e.g. 95%) the true mean will be covered by the interval, while in \(\alpha \times 100\)% of cases it will not be. The interval itself is random, and we rely on the LLN and CLT when constructing it, expecting it to work asymptotically, with the increase of the number of trials.

Mathematically, the red bounds in Figure 2.25 are represented using the following well-known formula for the confidence interval: \[\begin{equation} \mu \in (\bar{y} + t_{\alpha/2}(df) s_{\bar{y}}, \bar{y} + t_{1-\alpha/2}(df) s_{\bar{y}}), \tag{2.4} \end{equation}\] where \(t_{\alpha/2}(df)\) is Student's t-statistic for \(df=T-k\) degrees of freedom (\(T\) is the sample size and \(k\) is the number of estimated parameters, e.g. \(k=1\) in our case) and level \(\alpha/2\), and \(s_{\bar{y}}=\frac{1}{\sqrt{T}}s\) is the estimate of the standard deviation of the sample mean. If for some reason we knew the true variance \(\sigma^2\), then we could use the z-statistic instead of t, but we typically do not, so we need to take the uncertainty about the variance into account as well, hence the use of the t-statistic.
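In R, formula (2.4) can be computed directly via the quantiles of Student's distribution (a sketch, reusing the illustrative sample from earlier with \(\alpha=0.05\)):

```r
# Confidence interval (2.4) for the population mean
set.seed(41)
y <- rnorm(100, mean=10, sd=2)    # the same illustrative sample as before
T <- length(y)
k <- 1                            # only the mean is estimated
alpha <- 0.05

yBar <- mean(y)
sYBar <- sd(y) / sqrt(T)          # estimated standard deviation of the sample mean
yBar + qt(c(alpha/2, 1-alpha/2), df=T-k) * sYBar
```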

Note that in order to construct a confidence interval, we do not care what distribution \(y\) follows, as long as the LLN and CLT hold.
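The coverage property discussed above can be verified by simulation: construct interval (2.4) on thousands of independent samples and count how often it covers the true \(\mu\) (a minimal sketch with arbitrary parameter values; the resulting proportion should be close to \(1-\alpha\)):

```r
# Empirical coverage of the 95% confidence interval over many trials
set.seed(42)
nTrials <- 10000
T <- 100
mu <- 10
sigma <- 2
alpha <- 0.05

covered <- logical(nTrials)
for(i in 1:nTrials){
  y <- rnorm(T, mean=mu, sd=sigma)
  bounds <- mean(y) + qt(c(alpha/2, 1-alpha/2), df=T-1) * sd(y)/sqrt(T)
  covered[i] <- (bounds[1] <= mu) & (mu <= bounds[2])
}
mean(covered)    # should be close to 1-alpha = 0.95
```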

2.4.2 Prediction interval

If we are interested in capturing the uncertainty about the random variable \(y\) itself, then we should use a prediction interval. In this case, we typically rely on the LLN and the assumed distribution of the random variable \(y\). For example, if we know that \(y \sim \mathcal{N}(\mu, \sigma^2)\), then based on our sample we can construct a prediction interval of the width \(1-\alpha\): \[\begin{equation} y \in (\bar{y} + z_{\alpha/2} s, \bar{y} + z_{1-\alpha/2} s), \tag{2.5} \end{equation}\] where \(z_{\alpha/2}\) is the z-statistic (quantile of the standard Normal distribution) for the level \(\alpha/2\), \(\bar{y}\) is the sample estimate of \(\mu\), and \(s\) is the sample estimate of \(\sigma\). If we assumed some other distribution for the random variable, the formula would change. In a way, constructing a prediction interval comes down to getting the quantiles of the assumed distribution based on the estimated parameters. In some cases, when some of the assumptions do not hold, we might switch to more advanced methods for prediction interval construction.
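Formula (2.5) translates into R as follows (a sketch under the Normality assumption, reusing the illustrative sample; note that it uses \(s\) itself rather than \(s/\sqrt{T}\), because the interval targets the random variable \(y\) and not its mean):

```r
# Prediction interval (2.5) for the random variable y
set.seed(41)
y <- rnorm(100, mean=10, sd=2)    # the same illustrative sample as before
alpha <- 0.05

mean(y) + qnorm(c(alpha/2, 1-alpha/2)) * sd(y)
# This is much wider than the confidence interval for the mean,
# whose half-width shrinks with sqrt(length(y))
```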