### A likely story

#### (Nov 27, 2008)

The foundation for most modern statistical inference is the log-likelihood function.  By maximising the value of this function, we find the maximum-likelihood estimate (MLE) for a given parameter, i.e. the most likely value given the model and data.  For models with more than one parameter, we find the set of values which jointly maximise the log-likelihood.

This much is basic statistics.  However, the log-likelihood function can give you more insight than just yielding MLEs.  In particular, the shape and curvature of the log-likelihood tell you how much confidence you can have in a particular MLE: a sharply peaked log-likelihood pins the estimate down tightly, while a flat one leaves it poorly determined.  By way of example, consider fitting a simple Makeham model for the force of mortality, μx:

μx = ε + eα + βx
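To make this concrete, here is a minimal sketch of fitting such a model by maximum likelihood, assuming the standard Makeham hazard μx = ε + exp(α + βx) and a Poisson log-likelihood for grouped deaths and central exposures.  The data here are simulated purely for illustration; the parameter names (`eps`, `alpha`, `beta`) and starting values are my own choices, not from the original post.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated grouped mortality data (illustrative only):
# ages, central exposures E_x, and Poisson death counts d_x
ages = np.arange(60, 90)
exposures = np.full(ages.shape, 1000.0)
true_mu = 0.0005 + np.exp(-10.0 + 0.11 * ages)  # assumed "true" Makeham hazard
rng = np.random.default_rng(1)
deaths = rng.poisson(exposures * true_mu)

def neg_loglik(params):
    """Negative Poisson log-likelihood (additive constants dropped)."""
    eps, alpha, beta = params
    mu = eps + np.exp(alpha + beta * ages)
    if eps <= 0 or np.any(mu <= 0):
        return np.inf  # keep the hazard positive
    return -np.sum(deaths * np.log(mu) - exposures * mu)

# Maximise the log-likelihood by minimising its negative
res = minimize(neg_loglik, x0=[0.001, -10.0, 0.1], method="Nelder-Mead")
eps_hat, alpha_hat, beta_hat = res.x

# Curvature of the log-likelihood at the MLE indicates precision:
# a numerical second derivative in beta gives an approximate standard error
h = 1e-4
nll = lambda b: neg_loglik([eps_hat, alpha_hat, b])
curvature = (nll(beta_hat + h) - 2 * nll(beta_hat) + nll(beta_hat - h)) / h**2
se_beta = 1.0 / np.sqrt(curvature)
```

The final lines illustrate the point about curvature directly: the sharper the peak of the log-likelihood (the larger the second derivative at the maximum), the smaller the standard error of the estimate.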

Tags: Makeham, log-likelihood