Fathoming the changes to the Lee-Carter model

Ancient Greek philosophers had a paradox called "The Ship of Theseus"; if pieces of a ship are replaced over time as they wear out until every one of the original components is gone, is it still the same ship?  At this point you could be forgiven for thinking (a) that this couldn't possibly be further removed from mortality modelling, and (b) that I had consumed something a lot more potent than tea at breakfast.  However, this philosophical parable is relevant to the granddaddy of all stochastic projection models: the one proposed by Lee & Carter (1992).

In their original paper Lee & Carter (1992) proposed the following model:

\[\log m_{x,y} = \alpha_x+\beta_x\kappa_y+\epsilon_{x,y}\]

where \(m_{x,y}\) is the central rate of mortality at age \(x\) in year \(y\) and \(\alpha_x\), \(\beta_x\) and \(\kappa_y\) are parameters to be estimated, subject to the selection of two suitable identifiability constraints. \(\epsilon_{x,y}\) is an error term with mean zero and constant variance, \(\sigma^2_\epsilon\). Lee & Carter estimated the parameters using singular-value decomposition (SVD), and forecast \(\kappa_y\) as a random walk with drift.
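To make the SVD approach concrete, here is a minimal sketch using entirely synthetic data: \(\alpha_x\) is taken as the age-specific average of \(\log m_{x,y}\), the leading singular vectors of the centred matrix supply \(\beta_x\) and \(\kappa_y\), and the usual identifiability constraints \(\sum_x\beta_x=1\) and \(\sum_y\kappa_y=0\) are then imposed. The parameter values are illustrative, not real mortality data:

```python
import numpy as np

# Synthetic log-mortality surface: rows are ages, columns are years.
rng = np.random.default_rng(42)
n_ages, n_years = 5, 20
alpha_true = np.linspace(-5.0, -2.0, n_ages)
beta_true = np.full(n_ages, 1.0 / n_ages)
kappa_true = np.linspace(1.0, -1.0, n_years)
log_m = alpha_true[:, None] + np.outer(beta_true, kappa_true)
log_m += rng.normal(scale=0.01, size=log_m.shape)

# alpha_x is the age-specific average of log m_{x,y} over the years.
alpha = log_m.mean(axis=1)

# The leading singular vectors of the centred matrix give beta_x and
# kappa_y up to scale and sign.
U, s, Vt = np.linalg.svd(log_m - alpha[:, None], full_matrices=False)
beta, kappa = U[:, 0], s[0] * Vt[0]

# Identifiability constraints: sum(beta_x) = 1 and sum(kappa_y) = 0.
# Rescaling beta and kappa in opposite directions leaves the fitted
# surface alpha_x + beta_x * kappa_y unchanged.
kappa = kappa - kappa.mean()
scale = beta.sum()
beta, kappa = beta / scale, kappa * scale
```

Note that the rank-1 SVD fit minimizes an unweighted sum of squares on the log scale, which is exactly the equal-variance assumption criticized below.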

One issue with SVD is that it assumes that all mortality rates have equal variance on the logarithmic scale. This isn't the case when population sizes change over time, and it certainly cannot be the case when you consider the dramatic difference in scale between the population at age 60 and the population at age 100. Instead, we need an estimation procedure which specifies the number of deaths as a random variable, and thus allows for the differing variances of the mortality rates. This was provided by Brouhns et al (2002), who proposed a Poisson distribution for the death counts and estimated the parameters by the method of maximum likelihood. Brouhns et al (2002) recast the Lee-Carter model in terms of the force of mortality, \(\mu_{x,y}\):
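The Poisson likelihood can be maximized with simple one-parameter-at-a-time Newton updates of the kind described by Brouhns et al (2002). The sketch below uses synthetic deaths and exposures, and omits the convergence checks a production fit would need:

```python
import numpy as np

# Synthetic deaths and exposures; the parameter values are illustrative.
rng = np.random.default_rng(1)
n_ages, n_years = 5, 15
E = np.full((n_ages, n_years), 1.0e5)            # central exposures
alpha_t = np.linspace(-5.0, -2.5, n_ages)
beta_t = np.full(n_ages, 1.0 / n_ages)
kappa_t = np.linspace(2.0, -2.0, n_years)
mu = np.exp(alpha_t[:, None] + np.outer(beta_t, kappa_t))
D = rng.poisson(E * mu)                          # simulated death counts

def fitted(alpha, beta, kappa):
    """Expected deaths E * exp(alpha + beta * kappa)."""
    return E * np.exp(alpha[:, None] + np.outer(beta, kappa))

# Starting values: crude rates for alpha, flat beta, zero kappa.
alpha = np.log((D.sum(axis=1) + 0.5) / E.sum(axis=1))
beta = np.full(n_ages, 1.0 / n_ages)
kappa = np.zeros(n_years)

# Alternating Newton steps: each parameter vector is updated with the
# other two held fixed, increasing the Poisson log-likelihood.
for _ in range(200):
    Dhat = fitted(alpha, beta, kappa)
    alpha += (D - Dhat).sum(axis=1) / Dhat.sum(axis=1)
    Dhat = fitted(alpha, beta, kappa)
    kappa += (((D - Dhat) * beta[:, None]).sum(axis=0)
              / (Dhat * beta[:, None] ** 2).sum(axis=0))
    Dhat = fitted(alpha, beta, kappa)
    beta += ((D - Dhat) * kappa).sum(axis=1) / (Dhat * kappa ** 2).sum(axis=1)

# Impose sum(beta) = 1 and sum(kappa) = 0 without changing the fit.
alpha += beta * kappa.mean()
kappa = (kappa - kappa.mean()) * beta.sum()
beta = beta / beta.sum()
```

At the maximum-likelihood solution the fitted deaths reproduce the actual deaths at each age when summed over years, a property the SVD fit does not share.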

\[\log\mu_{x,y}=\alpha_x+\beta_x\kappa_y\]

By making the switch to the Poisson model and the force of mortality, Brouhns et al (2002) made a major improvement in turning the Lee-Carter model into a fully statistical model. However, another issue is that the Lee-Carter model is over-parameterized: patterns of \(\beta_x\) are rather regular and therefore don't need an individual parameter for each age; \(\beta_x\) could therefore be replaced with a smooth curve to reduce the dimensionality of the model. This is not an academic point, either: Delwarde et al (2007) showed that smoothing \(\beta_x\) using penalized splines reduced the tendency for mortality rates at adjacent ages to cross over in the forecast. Smoothing \(\beta_x\) thus improved the quality of the forecast; Currie (2014) took this a step further and smoothed \(\alpha_x\) as well.
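The essence of the penalized-spline idea is a trade-off between fidelity to the raw estimates and roughness. A minimal sketch of that trade-off is a second-difference penalty applied to a hypothetical noisy \(\beta_x\) vector; note that this smooths after the fact for illustration, whereas Delwarde et al (2007) penalize the log-likelihood itself during fitting, and the value of \(\lambda\) here is arbitrary:

```python
import numpy as np

# Hypothetical raw beta_x estimates at ages 60-89, with sampling noise.
rng = np.random.default_rng(7)
ages = np.arange(60, 90)
n = ages.size
beta_raw = np.exp(-0.05 * (ages - 60)) / 10 + rng.normal(scale=0.005, size=n)

# Second-difference matrix: its penalty punishes departures from linearity.
D2 = np.diff(np.eye(n), n=2, axis=0)

# Minimize ||beta_raw - b||^2 + lam * ||D2 b||^2; the normal equations
# are (I + lam * D2'D2) b = beta_raw.
lam = 100.0
beta_smooth = np.linalg.solve(np.eye(n) + lam * (D2.T @ D2), beta_raw)
```

The smoothed vector is guaranteed to be no rougher than the raw one, which is what suppresses the erratic age-to-age variation that causes crossing forecasts.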

So much for the fitting, what about the forecasting of \(\kappa_y\)? Lee & Carter (1992) used a simple random walk with drift, but this fits past data in the UK poorly. A better alternative for both modelling and forecasting \(\kappa_y\) is to use an ARIMA model, of which the random walk with drift is a special case (ARIMA(0,1,0) with a constant). ARIMA models for \(\kappa_y\) in the Lee-Carter model have been used by Richards et al (2014) and many others.
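Because the random walk with drift is ARIMA(0,1,0) with a constant, it needs only the mean and variance of the first differences of \(\kappa_y\), as the sketch below shows with a purely illustrative \(\kappa_y\) path. Fitting a richer ARIMA model would normally be done with a time-series library such as statsmodels:

```python
import numpy as np

# Illustrative kappa_y path with a broadly downward trend.
kappa = np.array([5.0, 4.2, 3.1, 2.5, 1.4, 0.9, -0.2, -1.1, -1.8, -2.6])

# Random walk with drift: kappa_y = kappa_{y-1} + drift + innovation.
diffs = np.diff(kappa)
drift = diffs.mean()                 # estimated drift term
sigma = diffs.std(ddof=1)            # innovation standard deviation

# Central forecast and standard errors over a 10-year horizon.
h = np.arange(1, 11)
forecast = kappa[-1] + drift * h
se = sigma * np.sqrt(h)              # uncertainty widens with horizon
```

A quirk of this model is that the estimated drift depends only on the first and last observations of \(\kappa_y\), one reason richer ARIMA structures often fit past data better.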

Compared to the original Lee-Carter paper, modern analysts have therefore made the following changes:

  • Switched from modelling the central rate of mortality to the force of mortality.
  • Dropped SVD in favour of the method of maximum likelihood.
  • Smoothed both \(\alpha_x\) and \(\beta_x\) to avoid crossover.
  • Forecast \(\kappa_y\) using ARIMA models instead of a simple random walk with drift.

Is it then still the same Lee-Carter model? Much has changed, but the central feature is still the same: an age term (\(\alpha_x\)), plus a period term (\(\kappa_y\)) modulated by another age term (\(\beta_x\)). Arguably it is this structure which captures the essence of the Lee-Carter model, and this has not changed. We have improved upon some of the time-worn components, but it is still the same model at heart.

References

Brouhns, N., Denuit, M. and Vermunt, J. (2002), A Poisson log-bilinear regression approach to the construction of projected lifetables, Insurance: Mathematics and Economics, 31, pages 373–393.

Currie, I. D. (2014), On fitting generalized linear and nonlinear models of mortality, Scandinavian Actuarial Journal, 2014, pages 1–28.

Delwarde, A., Denuit, M. and Eilers, P. (2007), Smoothing the Lee–Carter and Poisson log-bilinear models for mortality forecasting: A penalized log-likelihood approach, Statistical Modelling, 7 (1), pages 29–48.

Lee, R. D. and Carter, L. (1992), Modeling and forecasting US mortality, Journal of the American Statistical Association, 87, pages 659–671.

Richards, S. J., Currie, I. D. and Ritchie, G. P. (2014), A value-at-risk framework for longevity trend risk, British Actuarial Journal, 19 (1), pages 116–167 (with discussion).

 

Comments

Stuart McDonald
(Mar 12, 2018)
The Ship of Theseus? To those of us who have a less literary bent this will always be Trigger's Broom!

Stephen Richards
Stephen Richards is the Managing Director of Longevitas
Lee-Carter models in the Projections Toolkit
The Projections Toolkit contains models within the Lee-Carter family, including those with both smoothed and time-series projections. All models are for the force of mortality, and are fitted using the method of maximum likelihood. Users can optionally smooth certain parameters, and can choose between forecasting with a simple drift model and a more sophisticated ARIMA model.