# Information Matrix

### M is for Estimation

In earlier blogs I discussed two techniques for handling outliers in mortality forecasting models:

**Written by:** Stephen Richards

**Tags:** outliers, robustness, log-likelihood

### Measuring liability uncertainty

Pricing block transactions is a high-stakes business. An insurer writing a bulk annuity has one chance to assess the price to charge for taking on pension liabilities. There is a lot to consider, but at least there is data to work with: for economic assumptions like interest rates and inflation, the insurer has market prices. For the mortality basis, the insurer usually gets several years of mortality-experience data from the pension scheme.

**Written by:** Stephen Richards

**Tags:** mis-estimation risk, covariance matrix, log-likelihood

### Normal behaviour

One interesting aspect of maximum-likelihood estimation is the common behaviour of estimators, regardless of the nature of the data and model. Recall that the maximum-likelihood estimate, \(\hat\theta\), is the value of a parameter \(\theta\) that maximises the likelihood function, \(L(\theta)\), or the log-likelihood function, \(\ell(\theta)=\log L(\theta)\). By way of example, consider the following three single-parameter distributions:
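
As a minimal sketch of this idea for a single parameter (using illustrative Poisson-distributed count data, not an example from the post), the maximum-likelihood estimate \(\hat\theta\) can be located by direct search over the log-likelihood:

```python
import math

# Illustrative claim counts, assumed Poisson-distributed.
data = [3, 1, 4, 2, 0, 3, 2, 5, 1, 2]

def log_likelihood(lam, xs):
    """Poisson log-likelihood in lambda, dropping the constant -log(x!) terms
    (constants do not affect the location of the maximum)."""
    return sum(x * math.log(lam) - lam for x in xs)

# Grid search for the value of lambda maximising the log-likelihood.
grid = [0.01 * i for i in range(1, 1001)]
lam_hat = max(grid, key=lambda lam: log_likelihood(lam, data))

# The Poisson MLE is the sample mean, sum(data)/len(data) = 2.3,
# so the grid search should land on (or next to) 2.3.
```

In practice one would use a proper optimiser rather than a grid, but the grid makes the definition concrete: \(\hat\theta\) is simply the argument maximising \(\ell(\theta)\).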

**Written by:** Stephen Richards

**Tags:** mis-estimation risk, log-likelihood

### Lost in translation (reprise)

**Written by:** Stephen Richards

**Tags:** hazard function, information matrix, score function, log-likelihood

### Laying down the law

In actuarial terminology, a mortality "law" is simply a parametric formula used to describe the risk. A major benefit of this is automatic smoothing and in-filling for areas where data is sparse. A common example in modern annuity portfolios is that there is often plenty of data up to age 75 (say), but relatively little data above age 90.

For example, if we use a parametric formula like the Gompertz law:
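
The Gompertz law referred to here is conventionally written (standard notation; the formula itself is cut off in this excerpt) as:

```latex
\mu_x = e^{\alpha + \beta x}
```

where \(\mu_x\) is the mortality hazard at age \(x\), \(\alpha\) sets the overall level and \(\beta\) the exponential rate of increase with age. Fitting these two parameters to the well-populated ages then smooths and extrapolates into the sparse data above age 90.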

**Written by:** Stephen Richards

**Tags:** log-likelihood, mortality law, CMI, Gompertz-Makeham family

### One small step

**Written by:** Stephen Richards

**Tags:** log-likelihood, numerical approximation, derivatives

### A likely story

The foundation for most modern statistical inference is the log-likelihood function. By maximising the value of this function, we find the maximum-likelihood estimate (MLE) for a given parameter, i.e. the most likely value given the model and data. For models with more than one parameter, we find the set of values which jointly maximise the log-likelihood.
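
A minimal sketch of joint maximisation over two parameters (using illustrative normally-distributed data rather than the Makeham model discussed in the post):

```python
import math

# Illustrative observations, assumed drawn from a normal distribution.
data = [4.8, 5.1, 5.3, 4.9, 5.0, 5.2, 4.7, 5.0]

def log_lik(mu, sigma, xs):
    """Normal log-likelihood in two parameters, mu and sigma
    (additive constants dropped, as they do not move the maximum)."""
    n = len(xs)
    return (-n * math.log(sigma)
            - sum((x - mu) ** 2 for x in xs) / (2 * sigma ** 2))

# Joint grid search: find the (mu, sigma) pair that together
# maximise the log-likelihood, at a resolution of 0.01.
mu_hat, sigma_hat = max(
    ((m / 100, s / 100) for m in range(400, 601) for s in range(5, 101)),
    key=lambda p: log_lik(p[0], p[1], data),
)

# Analytic MLEs for comparison: mu = sample mean = 5.0,
# sigma = sqrt(mean squared deviation) = sqrt(0.28/8) ~ 0.187.
```

The point is that with several parameters the MLE is the *set* of values maximising \(\ell\) jointly, not each parameter maximised in isolation.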

**Written by:** Stephen Richards

**Tags:** Makeham, log-likelihood

### Choosing between models

**Written by:** Stephen Richards

**Tags:** AIC, log-likelihood, model fit

### Choosing between models - a business view

**Written by:** Stephen Richards

**Tags:** AIC, log-likelihood, model fit