# Information Matrix

### M is for Estimation

In earlier blogs I discussed two techniques for handling outliers in mortality forecasting models:

Written by: Stephen Richards

### Measuring liability uncertainty

Pricing block transactions is a high-stakes business.  An insurer writing a bulk annuity has one chance to assess the price to charge for taking on pension liabilities.  There is a lot to consider, but at least there is data to work with: for the economic assumptions like interest rates and inflation, the insurer has market prices.  For the mortality basis, the insurer usually gets several years of mortality-experience data from the pension scheme.

Written by: Stephen Richards

### Normal behaviour

One interesting aspect of maximum-likelihood estimation is the common behaviour of estimators, regardless of the nature of the data and model.  Recall that the maximum-likelihood estimate, $$\hat\theta$$, is the value of a parameter $$\theta$$ that maximises the likelihood function, $$L(\theta)$$, or the log-likelihood function, $$\ell(\theta)=\log L(\theta)$$.  By way of example, consider the following three single-parameter distributions:

Written by: Stephen Richards
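The maximisation described above can be sketched numerically for one such single-parameter distribution, the exponential with rate $$\lambda$$.  The data below are invented purely for illustration; a crude grid search recovers the analytic MLE, which for the exponential rate is the reciprocal of the sample mean:

```python
import math

def exp_loglik(lam, data):
    # Log-likelihood of an exponential distribution with rate lam:
    # ell(lam) = n * log(lam) - lam * sum(x)
    n = len(data)
    return n * math.log(lam) - lam * sum(data)

# Invented sample for illustration only
data = [0.5, 1.2, 0.3, 2.1, 0.8]

# Crude grid search for the value of lam maximising the log-likelihood
grid = [i / 1000 for i in range(1, 5000)]
lam_hat = max(grid, key=lambda lam: exp_loglik(lam, data))

# The analytic MLE for the exponential rate is 1 / sample mean
print(lam_hat, len(data) / sum(data))
```

The grid search is deliberately naive; in practice a root of $$\ell'(\theta)=0$$ would be found with a proper optimiser, but the principle — pick the $$\hat\theta$$ with the largest log-likelihood — is the same.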

### Lost in translation (reprise)

Late last year I drew up a table of actuarial terms and their translation for statisticians.  I had thought that it was a uniquely actuarial trait to use different names compared to other disciplines.  It turns out that statisticians are almost as guilty.

Written by: Stephen Richards

### Laying down the law

In actuarial terminology, a mortality "law" is simply a parametric formula used to describe the risk. A major benefit of this is automatic smoothing and in-filling for areas where data is sparse. A common example in modern annuity portfolios is that there is often plenty of data up to age 75 (say), but relatively little data above age 90.

For example, if we use a parametric formula like the Gompertz law:

Written by: Stephen Richards
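The Gompertz law mentioned above is commonly written $$\mu_x = e^{\alpha + \beta x}$$.  A minimal sketch of how such a parametric formula smooths and extrapolates the hazard, with purely illustrative parameter values rather than any fitted basis:

```python
import math

# Gompertz law: hazard mu(x) = exp(alpha + beta * x).
# alpha and beta are illustrative values only, not a fitted mortality basis.
alpha, beta = -12.0, 0.12

def gompertz_hazard(x, alpha, beta):
    return math.exp(alpha + beta * x)

# The parametric form gives a smooth hazard at every age, including
# ages above 90 where an annuity portfolio typically has little data.
for age in (75, 90, 100):
    print(age, gompertz_hazard(age, alpha, beta))
```

Because the formula is defined at all ages, the sparse data above age 90 borrows strength from the well-populated ages below 75 through the shared parameters.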

### One small step

When fitting mortality models, the foundation of modern statistical inference is the log-likelihood function. The point at which the log-likelihood has its maximum value gives you the maximum-likelihood estimates of your parameters, while the curvature of the log-likelihood tells you about the standard errors of those parameter estimates.

Written by: Stephen Richards
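The curvature point can be sketched with an exponential log-likelihood and invented data: differentiating numerically at the maximum gives the observed information, whose inverse square root is the standard error of the estimate.

```python
import math

def loglik(lam, data):
    # Exponential log-likelihood: n * log(lam) - lam * sum(x)
    return len(data) * math.log(lam) - lam * sum(data)

# Invented sample for illustration only
data = [0.5, 1.2, 0.3, 2.1, 0.8]
lam_hat = len(data) / sum(data)   # analytic MLE: 1 / sample mean

# Curvature at the maximum via a central finite difference
h = 1e-4
curv = (loglik(lam_hat + h, data) - 2 * loglik(lam_hat, data)
        + loglik(lam_hat - h, data)) / h**2

# Standard error = sqrt of the inverse observed information (-curvature)
se = math.sqrt(-1.0 / curv)
print(lam_hat, se)   # se should be close to lam_hat / sqrt(n)
```

A sharply curved log-likelihood pins the parameter down tightly (small standard error); a flat one leaves it poorly determined.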

### A likely story

The foundation for most modern statistical inference is the log-likelihood function.  By maximising the value of this function, we find the maximum-likelihood estimate (MLE) for a given parameter, i.e. the most likely value given the model and data.  For models with more than one parameter, we find the set of values which jointly maximise the log-likelihood.

Written by: Stephen Richards
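The multi-parameter case can be sketched with the normal distribution, whose two parameters are maximised jointly.  The data below are invented; the joint MLEs are the sample mean and the (biased) root-mean-square deviation, and perturbing either parameter away from them lowers the log-likelihood:

```python
import math

def normal_loglik(mu, sigma, data):
    # Joint log-likelihood of N(mu, sigma^2) over the sample
    n = len(data)
    ss = sum((x - mu) ** 2 for x in data)
    return (-n * math.log(sigma)
            - 0.5 * n * math.log(2 * math.pi)
            - ss / (2 * sigma ** 2))

# Invented sample for illustration only
data = [1.1, 0.4, 2.3, 1.8, 0.9, 1.5]

# Analytic joint MLEs for the normal distribution
mu_hat = sum(data) / len(data)
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in data) / len(data))

# Moving either parameter off its MLE reduces the log-likelihood
best = normal_loglik(mu_hat, sigma_hat, data)
assert best > normal_loglik(mu_hat + 0.1, sigma_hat, data)
assert best > normal_loglik(mu_hat, sigma_hat + 0.1, data)
```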

### Choosing between models

In any model-fitting exercise you will be faced with choices. What shape of mortality curve to use? Which risk factors to include? How many size bands for benefit amount? In each case there is a balance to be struck between improving the model fit and making the model more complicated.

Written by: Stephen Richards
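One standard way to strike that balance is Akaike's Information Criterion, $$\mathrm{AIC} = 2k - 2\hat\ell$$, which penalises the maximised log-likelihood $$\hat\ell$$ by the parameter count $$k$$.  A minimal sketch with made-up log-likelihoods for two candidate models:

```python
def aic(loglik_hat, n_params):
    # Akaike Information Criterion: penalises fit by parameter count
    return 2 * n_params - 2 * loglik_hat

# Illustrative maximised log-likelihoods; the numbers are invented
simple_model = aic(loglik_hat=-1040.0, n_params=2)   # e.g. a basic curve
richer_model = aic(loglik_hat=-1036.5, n_params=4)   # e.g. extra risk factors

# The model with the smaller AIC is preferred: each extra parameter
# must improve the log-likelihood by more than 1 unit to pay its way
print(simple_model, richer_model)  # 2084.0 vs 2081.0
```

Here the richer model's extra two parameters buy 3.5 units of log-likelihood, more than the 2-unit penalty, so it wins on AIC.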

### Choosing between models - a business view

We discussed how we use the AIC to choose between models.

Written by: Stephen Richards