### Analysis of VaR-iance

#### (Mar 13, 2018)

In recent years we have published a number of papers on stochastic mortality models.  A particular focus has been on the application of such models to longevity trend risk in a one-year, value-at-risk (VaR) framework for Solvency II.  However, while a small group of models has been common to each paper, there have been changes in the calculation basis, most obviously where updated data have been used.  Sometimes these changes stemmed from more data being available, but, as Richard Willets covered in his blog, the ONS also restated the population estimates following the 2011 census.  This makes it tricky to compare results between papers.  We therefore thought it would be instructive to do a step-by-step analysis…

### Fathoming the changes to the Lee-Carter model

#### (Feb 19, 2018)

Ancient Greek philosophers had a paradox called "The Ship of Theseus"; if pieces of a ship are replaced over time as they wear out until every one of the original components is gone, is it still the same ship?  At this point you could be forgiven for thinking (a) that this couldn't possibly be further removed from mortality modelling, and (b) that I had consumed something a lot more potent than tea at breakfast.  However, this philosophical parable is relevant to the granddaddy of all stochastic projection models: the one proposed by Lee & Carter (1992).

In their original paper Lee & Carter (1992) proposed the following model:

$\log m_{x,y} = \alpha_x+\beta_x\kappa_y+\epsilon_{x,y}$

where $$m_{x,y}$$…
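The structure of the model is easy to sketch numerically.  Below is a minimal Python/NumPy illustration of how the age and period terms combine; the ages, years and parameter values are entirely made up for illustration, not fitted estimates:

```python
import numpy as np

# Illustrative (made-up) parameter vectors for five ages and five years
alpha = np.array([-4.0, -3.9, -3.8, -3.7, -3.6])   # age-specific level
beta  = np.array([0.25, 0.22, 0.20, 0.18, 0.15])   # age-specific sensitivity
kappa = np.array([0.5, 0.2, 0.0, -0.3, -0.6])      # period (year) index

# log m_{x,y} = alpha_x + beta_x * kappa_y  (error term omitted)
log_m = alpha[:, None] + beta[:, None] * kappa[None, :]
m = np.exp(log_m)   # central mortality rates: ages in rows, years in columns
```

Note how the two-dimensional surface of rates is driven by a single one-dimensional time index $$\kappa_y$$, which is what makes the forecasting problem tractable.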

Tags: Lee-Carter, P-splines, ARIMA

### Signal or noise?

#### (Nov 25, 2016)

Each year since 2009 the CMI in the UK has released a spreadsheet tool for actuaries to use for mortality projections.  I have written about this tool a number of times, including how one might go about setting the long-term rate.  The CMI now wants to change how the spreadsheet is calibrated and has proposed the following model in CMI (2016a):

$\log m_{x,y} = \alpha_x + \beta_x(y-\bar y) + \kappa_y + \gamma_{y-x}\qquad (1)$

which the CMI calls the APCI model.  $$m_{x,y}$$ is the central rate of mortality at age $$x$$ in year $$y$$ and $$\alpha_x$$, $$\beta_x$$, $$\kappa_y$$ and $$\gamma_{y-x}$$ are vectors of parameters to be estimated.  $$\bar y$$ is the average year, which is used to centre the time index around…
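Once the parameter vectors are known, the predictor in equation (1) can be evaluated directly.  The Python sketch below uses made-up parameter values purely to show how the four terms combine; the cohort index is $$y-x$$:

```python
import numpy as np

ages  = np.arange(60, 65)        # x
years = np.arange(2000, 2005)    # y
ybar  = years.mean()             # centres the time index

# Illustrative (made-up) parameter values; cohort terms set to zero here
alpha = {x: -4.0 + 0.05 * (x - 60) for x in ages}
beta  = {x: -0.02 for x in ages}
kappa = {y: 0.1 * (y - 2002) for y in years}
gamma = {y - x: 0.0 for x in ages for y in years}

def log_m(x, y):
    """APCI predictor: alpha_x + beta_x*(y - ybar) + kappa_y + gamma_{y-x}."""
    return alpha[x] + beta[x] * (y - ybar) + kappa[y] + gamma[y - x]
```

With $$\bar y = 2002$$ here, the $$\beta_x$$ term vanishes in the central year, which is exactly the point of centring the time index.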

Tags: CMI, APCI, APC, Lee-Carter, Age-Period, smoothing

### Parameterising the CMI projection spreadsheet

#### (May 18, 2016)

The CMI is the part of the UK actuarial profession which collates mortality data from UK life offices and pension consultants.  Amongst its many outputs is an Excel spreadsheet used for setting deterministic mortality forecasts.  This spreadsheet is in widespread use throughout the UK at the time of writing, not least for the published reserves for most insurers and pension schemes.

Following Willets (1999), the basic unit of the CMI spreadsheet is the mortality-improvement rate:

$1 - \frac{q_{x,t}}{q_{x,t-1}}\qquad(1)$

where $$q_{x,t}$$ is the probability of death aged $$x$$ in year $$t$$, assuming a life is alive at the start of the year.
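Equation (1) is straightforward to compute from a table of death probabilities.  A minimal Python sketch, using illustrative (made-up) values of $$q_{x,t}$$ for a single age over consecutive years:

```python
import numpy as np

# Illustrative death probabilities q_{x,t} for one age in years t, t+1, t+2, t+3
q = np.array([0.0100, 0.0098, 0.0095, 0.0093])

# Mortality-improvement rate between successive years: 1 - q_{x,t} / q_{x,t-1}
improvement = 1.0 - q[1:] / q[:-1]
```

A positive value indicates mortality falling from one year to the next; for example, a drop from 0.0100 to 0.0098 is a 2% improvement.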

### Working with constraints

#### (Feb 9, 2016)

Regular readers of this blog will be aware of the importance of stochastic mortality models in insurance work.  Of these models, the best-known is that from Lee & Carter (1992):

$\log \mu_{x,y} = \alpha_x + \beta_x\kappa_y\qquad(1)$

where $$\mu_{x,y}$$ is the force of mortality at age $$x$$ in year $$y$$ and $$\alpha_x$$, $$\beta_x$$ and $$\kappa_y$$ are parameters to be estimated.  Lee & Carter used singular value decomposition (SVD) to estimate their parameters, but the modern approach is to use the method of maximum likelihood - by making an explicit distributional assumption for the number of deaths, the fitting process can make proper allowance for the amount of information available…
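The SVD recipe can be sketched in a few lines.  The example below simulates a log-mortality matrix with a Lee-Carter structure (all dimensions and values made up for illustration), recovers the parameters from the leading singular triplet, and then imposes the usual identifiability constraints $$\sum_x\beta_x=1$$ and $$\sum_y\kappa_y=0$$:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a log-mortality matrix with a Lee-Carter structure
# (ages in rows, years in columns; all values made up for illustration)
ages, years = np.arange(60, 70), np.arange(1990, 2011)
alpha_true = -8.0 + 0.09 * (ages - 60)
beta_true = np.full(ages.size, 1.0 / ages.size)
kappa_true = -0.5 * (years - years.mean())
A = (alpha_true[:, None] + np.outer(beta_true, kappa_true)
     + rng.normal(0.0, 0.01, (ages.size, years.size)))

# SVD recipe: alpha_x is the row mean of the log rates;
# beta_x and kappa_y come from the leading singular triplet of the residual
alpha = A.mean(axis=1)
U, s, Vt = np.linalg.svd(A - alpha[:, None], full_matrices=False)
beta, kappa = U[:, 0], s[0] * Vt[0, :]

# Impose the identifiability constraints sum(kappa)=0 and sum(beta)=1
alpha = alpha + beta * kappa.mean()
kappa = kappa - kappa.mean()
c = beta.sum()
beta, kappa = beta / c, kappa * c
```

The constraints do not change the fitted rates at all - they simply pin down one representative from the family of equivalent parameterisations.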

### Excel's Limits

#### (Jun 27, 2014)

We have written in the past about some of the reasons why we don't use Excel to fit our models.  However, we do use Excel for validation purposes - fitting models using two entirely separate tools is a good way of checking production code.  That said, there are some important limits to Excel, especially when it comes to fitting projection models.  Some of these limits are rather subtle, so it is important that an analyst is aware of all of Excel's limitations.

The first issue is that Excel's standard Solver feature won't work with more than 200 variables, i.e. parameters which have to be optimised in order to fit the model.  This is a problem for a number of important stochastic projection models, as shown in Table 1…

Tags: Excel, Lee-Carter, APC, CBD

### (Un)Fit for purpose

#### (May 26, 2014)

Academics lay great store by anonymous peer review and in openly publishing their results.  There are good reasons for this - anonymous peer review allows expert third parties (usually two) to challenge assumptions without fear of retribution, while open publishing allows others to test things and find their limitations.  For example, the model from Lee & Carter (1992) has been thoroughly researched over the past two decades, and its limitations are well known.  However, the Lee-Carter model has stood the test of time in no small part because these limitations have been publicly documented by other researchers.

One advantage of all this open publication is that major problems can be spotted and brought�

### The perils of parameter interpretation

#### (Dec 8, 2013)

With some notable exceptions, such as the Kaplan-Meier estimator, most mortality models contain parameters.  In a statistical model these parameters need to be estimated, and it is a natural thing for people to want to place interpretations on those parameter estimates.  However, this can be tricky, as parameters in a multi-parameter model are dependent on each other.

We will illustrate this with the Lee-Carter model, which is perhaps the most durable stochastic projection model in use today.  It is structured as follows:

$\log \mu_{x,y} = \alpha_x + \beta_x\kappa_y\qquad(1)$

where $$\mu_{x,y}$$ is the force of mortality at age $$x$$ in year $$y$$, and $$\alpha_x$$, $$\beta_x$$ and $$\kappa_y$$ are parameters to be estimated.  The Lee-Carter…
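This interdependence is easy to demonstrate: in the Lee-Carter model, scaling $$\beta_x$$ up and $$\kappa_y$$ down by the same factor leaves every fitted rate unchanged, so neither parameter can be interpreted in isolation.  A small Python illustration with made-up values:

```python
import numpy as np

# Made-up Lee-Carter parameters for three ages and three years
alpha = np.array([-4.0, -3.8, -3.6])
beta  = np.array([0.3, 0.4, 0.3])
kappa = np.array([0.6, 0.0, -0.6])

# Doubling beta while halving kappa leaves the fitted
# log mu_{x,y} = alpha_x + beta_x * kappa_y completely unchanged
fit1 = alpha[:, None] + np.outer(beta, kappa)
fit2 = alpha[:, None] + np.outer(2.0 * beta, kappa / 2.0)
```

This is why identifiability constraints are needed before any meaning can be attached to the individual estimates.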

### Volatility v. Trend Risk

#### (Oct 8, 2010)

The year 1992 was important in the development of forecasting methods: Ronald Lee and Lawrence Carter published their highly influential paper on forecasting US mortality.  The problem is difficult: given matrices of deaths and exposures (rows indexed by age and columns by year) can we forecast future death rates?  Lee and Carter designed a model specifically to solve this problem:

$\log \mu_{x,y} = \alpha_x + \beta_x\kappa_y\qquad(1)$

where $$\alpha_x$$ measures the average mortality at age $$x$$ and $$\kappa_y$$ measures the effect of year $$y$$; this year effect is modulated by an age-dependent coefficient, $$\beta_x$$.  Lee and Carter used US data up to 1989 and here I've followed them by using data on US males aged 60-90 between 1933-1989,…

#### (Jul 12, 2010)

One of the most written-about models for stochastic mortality projections is that from Lee & Carter (1992).  As Iain described in an earlier post, the genius of the Lee-Carter model lies in reducing a two-dimensional forecasting problem (age and time) to a simpler one-dimensional problem (time only).

A little-appreciated fact is that there are two ways of approaching the time-series projection of future mortality rates.  A simple method is to treat the future mortality index as a simple random walk with drift.  This makes the strong simplifying assumption that the mortality trend changes at a constant rate (apart from the random noise).  Figure 1 shows an example projection for males in England &…
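A random walk with drift for the mortality index takes only a few lines to project.  The sketch below estimates the drift and volatility from an illustrative (made-up) fitted index $$\kappa_y$$, then produces a central projection and one simulated sample path:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative fitted period index kappa_y (declining = improving mortality)
kappa = np.array([2.0, 1.7, 1.5, 1.1, 0.9, 0.6, 0.3, 0.0])

# Random walk with drift: drift is the mean annual change in kappa,
# sigma the standard deviation of those changes
steps = np.diff(kappa)
drift, sigma = steps.mean(), steps.std(ddof=1)

# Central projection and one simulated sample path, 10 years ahead
horizon = 10
central = kappa[-1] + drift * np.arange(1, horizon + 1)
sample = kappa[-1] + np.cumsum(drift + sigma * rng.standard_normal(horizon))
```

The constant-drift assumption is exactly the strong simplification mentioned above: the central projection is a straight line, and only the noise around it varies.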