
5.3 Conditional expectation and variance

Now, why is the recursion (5.10) important? It is because we can take the expectation and variance of (5.10) conditional on the values of the state vector \(\mathbf{v}_{t}\) at observation \(t\) and on all the matrices and vectors (\(\mathbf{F}\), \(\mathbf{w}\), and \(\mathbf{g}\)), assuming that the basic model assumptions hold (the error term is homoscedastic, uncorrelated, and has zero expectation, Subsection 1.4.1), in order to get:

\[\begin{equation} \begin{aligned} \mu_{y,t+h} = \text{E}(y_{t+h}|t) = & \sum_{i=1}^d \left(\mathbf{w}_{m_i}^\prime \mathbf{F}_{m_i}^{\lceil\frac{h}{m_i}\rceil-1} \right) \mathbf{v}_{t} \\ \sigma^2_{h} = \text{V}(y_{t+h}|t) = & \left( \sum_{i=1}^d \left(\mathbf{w}_{m_i}^\prime \sum_{j=1}^{\lceil\frac{h}{m_i}\rceil-1} \mathbf{F}_{m_i}^{j-1} \mathbf{g}_{m_i} \mathbf{g}^\prime_{m_i} (\mathbf{F}_{m_i}^\prime)^{j-1} \mathbf{w}_{m_i} \right) + 1 \right) \sigma^2 \end{aligned}, \tag{5.12} \end{equation}\]

where \(\sigma^2\) is the variance of the error term. The formulae in (5.12) are cumbersome, but they give the analytical solutions for the two moments of any model that can be formulated in the pure additive form (5.5). Having obtained both of them, we can construct prediction intervals, assuming, for example, that the error term follows the Normal distribution (see Section 18.3 for details):

\[\begin{equation} y_{t+h} \in \left( \text{E}(y_{t+h}|t) + z_{\frac{\alpha}{2}} \sqrt{\text{V}(y_{t+h}|t)}, \text{E}(y_{t+h}|t) + z_{1-\frac{\alpha}{2}} \sqrt{\text{V}(y_{t+h}|t)} \right), \tag{5.13} \end{equation}\]

where \(z_{\frac{\alpha}{2}}\) is the quantile of the standardised Normal distribution for the level \(\frac{\alpha}{2}\). When it comes to other distributions, in order to get the conditional \(h\) steps ahead scale parameter, we can first calculate the variance using (5.12) and then use the relation between the scale and the variance for the specific distribution to obtain the necessary value (this is discussed in Section 5.5).
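To make the formulae more tangible, here is a minimal numerical sketch of (5.12) and (5.13) in Python. The function names (`conditional_moments`, `prediction_interval`) and the block-wise representation of the model as a list of \((\mathbf{w}_{m_i}, \mathbf{F}_{m_i}, \mathbf{g}_{m_i}, m_i, \mathbf{v}_t)\) tuples are illustrative assumptions for this sketch, not part of any particular package:

```python
import math
import numpy as np
from statistics import NormalDist

def conditional_moments(components, sigma2, h):
    """Conditional h-steps-ahead mean and variance from (5.12).

    components : list of (w, F, g, m, v) tuples, one per frequency m_i,
                 where w, g, v are 1-D arrays and F is a square matrix
                 acting on the respective block of the state vector.
    sigma2     : variance of the error term.
    h          : forecast horizon (h >= 1).
    """
    mu, var_sum = 0.0, 0.0
    for w, F, g, m, v in components:
        k = math.ceil(h / m) - 1                    # ceil(h/m_i) - 1 transitions
        mu += w @ np.linalg.matrix_power(F, k) @ v  # expectation part of (5.12)
        S = np.zeros((len(w), len(w)))
        for j in range(1, k + 1):                   # variance part of (5.12)
            Fj = np.linalg.matrix_power(F, j - 1)
            S += Fj @ np.outer(g, g) @ Fj.T
        var_sum += w @ S @ w
    return mu, (var_sum + 1) * sigma2

def prediction_interval(mu, var, level=0.95):
    """Normal prediction interval (5.13)."""
    z = NormalDist().inv_cdf(0.5 + level / 2)       # z_{1-alpha/2}
    return mu - z * math.sqrt(var), mu + z * math.sqrt(var)
```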

5.3.1 Example with ETS(A,N,N)

For example, for the ETS(A,N,N) model, we get:

\[\begin{equation} \begin{aligned} \text{E}(y_{t+h}|t) = & \mathbf{w}_{1}^\prime \mathbf{F}_{1}^{h-1} \mathbf{v}_{t} \\ \text{V}(y_{t+h}|t) = & \left(\mathbf{w}_{1}^\prime \sum_{j=1}^{h-1} \mathbf{F}_{1}^{j-1} \mathbf{g}_{1} \mathbf{g}^\prime_{1} (\mathbf{F}_{1}^\prime)^{j-1} \mathbf{w}_{1} + 1 \right) \sigma^2 \end{aligned}, \tag{5.14} \end{equation}\]

or, by substituting \(\mathbf{F}=1\), \(\mathbf{w}=1\), \(\mathbf{g}=\alpha\), and \(\mathbf{v}_t=l_t\):

\[\begin{equation} \begin{aligned} \mu_{y,t+h} = & l_{t} \\ \sigma^2_{h} = & \left((h-1) \alpha^2 + 1 \right) \sigma^2 \end{aligned}, \tag{5.15} \end{equation}\]

which are the same conditional expectation and variance as those given on page 81 of the Hyndman et al. (2008) monograph.
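As a quick sanity check, continuing the illustrative Python sketch above (with arbitrary example values for \(l_t\), \(\alpha\), and \(\sigma^2\)), plugging the ETS(A,N,N) matrices into the general formulae reproduces the closed form (5.15):

```python
# Arbitrary example values; the single component uses F=1, w=1, g=alpha, v=l_t.
l_t, alpha, sigma2, h = 100.0, 0.3, 2.5, 5
components = [(np.array([1.0]), np.array([[1.0]]),
               np.array([alpha]), 1, np.array([l_t]))]

mu, var = conditional_moments(components, sigma2, h)
print(mu, var)                                  # via the general formulae (5.14)
print(l_t, ((h - 1) * alpha**2 + 1) * sigma2)   # closed form (5.15): same values
```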

References

• Hyndman, R.J., Koehler, A.B., Ord, J.K., Snyder, R.D., 2008. Forecasting with Exponential Smoothing: The State Space Approach. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-71918-2