Deterministics Anonymous
In Macdonald & Richards (2025), Stephen and I pointed out some benefits of models built up from instantaneous Bernoulli trials by product-integration (both of which have featured in previous blogs).
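To make the product-integration idea concrete, here is a minimal sketch (mine, not taken from the paper): it approximates a survival probability as a product of instantaneous Bernoulli survival probabilities \(1-\mu_x\,dx\) over a fine grid, and checks the result against the closed form \(\exp(-\int\mu)\). The Gompertz parameters and function names are illustrative assumptions.

```python
import numpy as np

ALPHA, BETA = -12.0, 0.12  # illustrative Gompertz parameters, not from the paper

def gompertz_hazard(x):
    """Gompertz hazard mu_x = exp(alpha + beta * x)."""
    return np.exp(ALPHA + BETA * x)

def survival_product_integral(x0, t, n=100_000):
    """Approximate the t-year survival probability from age x0 as a
    product of instantaneous Bernoulli survival probabilities (1 - mu*dx)."""
    dx = t / n
    ages = x0 + dx * np.arange(n)
    return np.prod(1.0 - gompertz_hazard(ages) * dx)

def survival_closed_form(x0, t):
    """Exact survival exp(-H), using the Gompertz integrated hazard H."""
    H = (np.exp(ALPHA + BETA * (x0 + t)) - np.exp(ALPHA + BETA * x0)) / BETA
    return np.exp(-H)

# As the grid is refined, the product-integral converges to the exponential form.
print(survival_product_integral(60, 10))  # approx 0.853
print(survival_closed_form(60, 10))       # approx 0.853
```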
If this and other recent blogs have a historical flavour, the reason is the 200th anniversary of the 1825 paper by Benjamin Gompertz that introduced his eponymous law of mortality. In the course of his own researches, Stephen drew to my attention a short letter by Wilhelm Lazarus published in the Journal of the Institute of Actuaries in 1862. It is a remarkable document.
Our paper on the practical aspects of contemporary mortality modelling for actuaries was published today by the British Actuarial Journal. The article is free to access at https://www.doi.org/10.1017/S1357321725000121. The preprint is also available.
In 1860 William Makeham published a famous paper. In it he extended Gompertz's mortality law to include a constant term:
\[\mu_x=e^\epsilon+e^{\alpha+\beta x}\qquad(1),\]
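As a quick illustration (mine, not Makeham's), the sketch below evaluates the hazard in equation (1) at a few ages. The parameter values are assumptions chosen only to show the shape: the constant term \(e^\epsilon\) dominates at younger ages, the Gompertz term at older ages.

```python
import numpy as np

def makeham_hazard(x, epsilon=-6.0, alpha=-12.0, beta=0.12):
    """Makeham's law: mu_x = exp(epsilon) + exp(alpha + beta * x).
    Dropping the constant exp(epsilon) recovers Gompertz's law.
    Parameter values here are illustrative, not fitted."""
    return np.exp(epsilon) + np.exp(alpha + beta * x)

for age in (20, 40, 60, 80):
    print(age, makeham_hazard(age))
# The constant exp(-6) ~ 0.0025 dominates at age 20;
# by age 80 the age-dependent Gompertz term is far larger.
```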
Today is the 200th anniversary of Benjamin Gompertz's reading of his famous paper before the Royal Society of London. Generations of actuaries and demographers are familiar with his law of mortality:
\[\mu_x = e^{\alpha+\beta x}\qquad(1),\]
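Because the law in equation (1) is log-linear in age, a quick way to see it in action is to regress log crude hazards on age. The sketch below does this with synthetic data; the parameters, exposures and seed are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
ages = np.arange(60, 91)
alpha_true, beta_true = -12.0, 0.12   # illustrative, not fitted to real data
exposure = 10_000.0                   # person-years of exposure at each age

# Simulate Poisson death counts under the Gompertz hazard, then
# recover (alpha, beta) from a linear fit to the log crude rates.
mu = np.exp(alpha_true + beta_true * ages)
deaths = rng.poisson(mu * exposure)
crude = deaths / exposure

beta_hat, alpha_hat = np.polyfit(ages, np.log(crude), 1)
print(alpha_hat, beta_hat)  # close to the true values (-12, 0.12)
```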
As discussed in earlier blogs, trailblazing actuaries Benjamin Gompertz and William Makeham used parametric models for the mortality hazard. However, the data they worked with were typically grouped into wide age ranges, which entails a loss of information when mortality increases continuously with age.
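To illustrate that loss of information (a sketch under assumed Gompertz parameters, not the data Gompertz or Makeham actually used), compare a single average rate over a ten-year age band with the hazard at the band's endpoints:

```python
import numpy as np

alpha, beta = -12.0, 0.12  # assumed Gompertz parameters, illustration only

def mu(x):
    return np.exp(alpha + beta * x)

# Average hazard over the band [70, 80): integrated hazard divided by width.
band_avg = (np.exp(alpha + beta * 80) - np.exp(alpha + beta * 70)) / (beta * 10)
print(mu(70), band_avg, mu(80))  # ~0.027, ~0.053, ~0.091

# A single grouped rate overstates mortality at 70 and understates it at 80:
# that within-band variation is exactly what grouping throws away.
```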
Stephen recently questioned whether the hype around AI models for Life Insurance might be a case of The Emperor's New Clothes. In this blog we discuss an important point of difference: whereas in the fable a youth reveals that the expensive "invisible" new clothes have no substance at all, in our scenario we find precisely the opposite. AI models utilising machine learning, far from being see-through, are simply not transparent enough.
When we first wrote our survival-modelling software in late 2005, we had to decide how to represent dates for the purpose of calculating exposure times. We decided to adopt a real-valued approach, e.g. 14th March 1968 would be represented as 1968.202186 (the fractional part is \(\frac{31+29+14}{366}\), since 1968 is a leap year).
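Here is a minimal sketch of that convention in Python; the function name and the exact day-counting convention are my assumptions for illustration, not our production code.

```python
from datetime import date
import calendar

def date_to_real_year(d: date) -> float:
    """Represent a calendar date as a real-valued year, the fractional part
    being the day-of-year count divided by the length of the year in days."""
    days_in_year = 366 if calendar.isleap(d.year) else 365
    return d.year + d.timetuple().tm_yday / days_in_year

print(date_to_real_year(date(1968, 3, 14)))  # 1968 + 74/366 = 1968.2021...
```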
In my previous blog I described a real case where so-called artificial intelligence (AI) would have struggled to spot data problems that a (suspicious) human could find. But what if the input data are clean and reliable?
There is emerging hype about the application of artificial intelligence (AI) to mortality analysis, specifically the use of machine learning via neural networks. In this blog I provide a counter-example that illustrates why the human element is an absolutely indispensable part of actuarial work, and why I think it always will be.