Actuarial cycle time

The ongoing COVID-19 pandemic has introduced millions of ordinary people to some basic aspects of epidemiology, such as the R number, which measures the reproductive ability of a virus. However, there is another aspect of virus reproduction that is possibly even more important: cycle time. This was pointed out recently by the epidemiologist Larry Brilliant, when he described the Delta variant of SARS-CoV-2 as "the most infectious disease of our lifetimes":

Sometimes we think of measles or chickenpox as the most explosive. What people get wrong, including a lot of my friends, is that they're forgetting about the cycle time. The incubation period of measles and chickenpox is approximately two weeks. The Delta variant has a cycle of about three and a half days.

Brilliant (2021)

If a virus has half the R number but only a quarter of the cycle time, it still generates twice as many new infections per unit of time, making it the more infectious virus overall. The very short cycle time of the Delta variant makes it a very productive virus indeed, as demonstrated in an earlier blog.
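To see the arithmetic of cycle time at work, consider the short Python sketch below. The R numbers and cycle times are hypothetical round figures chosen purely for illustration, not estimates from Brilliant or any other source.

def descendants(r_number, cycle_days, horizon_days):
    """Total new cases traceable to one index case over the horizon,
    assuming unchecked growth and counting whole generations only."""
    generations = int(horizon_days // cycle_days)
    # Geometric series: R + R^2 + ... + R^generations
    return sum(r_number ** g for g in range(1, generations + 1))

# A measles-like virus: high R number but a two-week cycle time.
print(descendants(r_number=15, cycle_days=14, horizon_days=28))   # 240 cases
# A Delta-like virus: lower R number but a 3.5-day cycle time.
print(descendants(r_number=6, cycle_days=3.5, horizon_days=28))   # 2,015,538 cases

Over the same four weeks the virus with the shorter cycle time wins by several orders of magnitude, despite its lower R number.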

Productivity is of course relative: what's good for the virus isn't good for us.  That said, the concept of cycle time will be familiar to many actuaries.  A lot of actuarial work involves iterative models and "what if?" scenarios, and often answering one question leads to another.  Work can therefore proceed no faster than the time taken to perform a single calculation cycle. If a particular task takes two hours to run, then at most four such "what if?" questions can be answered in an eight-hour working day.

Since actuaries are expensive, any investment that shortens run-times is worth considering to boost productivity.  In the 1990s and 2000s the answer was to buy desktop PCs with faster processors (CPUs) and more memory. Nowadays the answer lies in parallel computing; ignoring the overhead of setting up the calculations, doubling the number of executing CPU threads can halve the time it takes to get your results.
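As a sketch of the idea (and not of the Longevitas architecture), the Python fragment below farms a batch of independent "what if?" scenarios out to a pool of worker processes; the scenario function and worker count are invented for illustration.

from concurrent.futures import ProcessPoolExecutor
import math

def run_scenario(seed):
    """Stand-in for one expensive, independent 'what if?' calculation."""
    total = 0.0
    for i in range(1, 2_000_000):
        total += math.log(i + seed)
    return total

if __name__ == "__main__":
    scenarios = range(16)
    # Doubling max_workers roughly halves the elapsed time, up to the number
    # of available cores and ignoring the overhead of distributing the work.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_scenario, scenarios))
    print(len(results), "scenarios completed")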

However, there is another way to reduce actuarial cycle time, namely to optimise your software for speed.  There are many levels to program optimisation, and it is usually a separate stage of software maturation as it requires a different mindset and skills from the original development.  Optimisation can involve a lot of extra code, too.  To pick a recent example here at Longevitas, consider the following summation for modelling mortality shocks in portfolio data:


\[\displaystyle\sum_{j\ge 1} \kappa_{0,j} B_j(y)\qquad (1)\]

where \(B_j(y)\) is the \(j^{\rm th}\) \(B\)-spline evaluated at time \(y\), and \(\kappa_{0,j}\) is the coefficient of \(B_j\).  When programmed originally, the function for equation (1) had 18 lines of code.  However, we recently developed an equivalent function running ten times faster, albeit with 118 lines of code.
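For concreteness, here is a minimal, unoptimised Python sketch of equation (1) using the Cox-de Boor recursion; the knots, spline degree and coefficient values are invented placeholders, and this is not the Longevitas code.

def bspline(j, k, knots, y):
    """Cox-de Boor recursion: the j-th B-spline of degree k evaluated at y."""
    if k == 0:
        return 1.0 if knots[j] <= y < knots[j + 1] else 0.0
    left = right = 0.0
    if knots[j + k] > knots[j]:
        left = (y - knots[j]) / (knots[j + k] - knots[j]) * bspline(j, k - 1, knots, y)
    if knots[j + k + 1] > knots[j + 1]:
        right = (knots[j + k + 1] - y) / (knots[j + k + 1] - knots[j + 1]) * bspline(j + 1, k - 1, knots, y)
    return left + right

def shock_term(kappa, knots, degree, y):
    """Naive evaluation of equation (1): the sum over j of kappa_j * B_j(y)."""
    return sum(kappa[j] * bspline(j, degree, knots, y) for j in range(len(kappa)))

# Placeholder cubic B-splines on [0, 4] with invented coefficients.
knots = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 4.0, 4.0, 4.0]
kappa = [0.10, -0.30, 0.20, 0.05, -0.10, 0.40, -0.20]   # one per B-spline
print(shock_term(kappa, knots, degree=3, y=2.5))

One obvious avenue for speeding this up is that only degree + 1 of the \(B_j\) are non-zero at any given \(y\), so most terms in the sum can be skipped; optimisations of this kind tend to add code rather than remove it.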

The above example explains why optimisation is a separate task from the original development.  The first priority is to get the answers correct, for which the simplicity of the 18 lines of code is essential.  The second priority is to get the software into the hands of users — the need to handle COVID-19 shocks in mortality analysis is pressing for actuaries.  The third priority is to run these calculations as fast as possible to reduce the cycle time and improve productivity.

One might ask why such optimisations cannot be part of the initial development.  In the example above, the optimised function for handling mortality shocks is over six times longer than the simple version.  Moreover, rewriting working code is an incremental task with many test cycles, so getting to those optimised lines is more painful than the bare line count implies.  Indeed, trying to optimise a program too early has long been recognised as a major risk in software development:

premature optimization is the root of all evil.

Knuth (1974)


Optimisation is a lot of work that would otherwise delay the software getting into the hands of users.  (Such extra effort can easily be overlooked in the decision whether to buy or build.)  Of course, eventually we approach the limit of performance achievable by rewriting code.  Fortunately, one can always use additional threads — when optimised code is combined with more parallel processing, run-times can be radically reduced.
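To put rough numbers on that, take the two-hour task from earlier, apply the ten-fold speed-up from rewriting the code, and assume a further (hypothetical) eight threads:

\[\frac{2\ {\rm hours}}{10\times 8} = \frac{7{,}200\ {\rm seconds}}{80} = 90\ {\rm seconds}\]

The overhead of setting up the parallel calculation means the true figure would be somewhat higher, but the order of magnitude is the point: a calculation that once permitted only a handful of "what if?" questions per day now permits hundreds.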

References

Brilliant, L. (2021), quoted in "The ‘Forever Virus’ Won't Go Away Until Kids Get Vaccinated", Wired magazine.

Knuth, D. E. (1974), "Structured Programming with \(\tt go\ to\) Statements", ACM Computing Surveys, 6(4): 268.

Parallel processing in Longevitas

Longevitas is designed to use multi-core processors for various operations, including data audit, model optimisation and run-off simulations. Users with dedicated servers will automatically have their work distributed over the cores available. 
