Parallel (Va)R

One of our services, the Projections Toolkit, is a collaboration with Heriot-Watt University.  Implementing stochastic projections can be a tricky business, so it is good to have the right people on the job.  Although our development platform is traditionally Java and C++ based, one consequence of our collaboration is that parts of the system are now written in R.

R has a number of notable positive attributes, including thriving development and user communities, powerful graphics capabilities, expressive language features and a comprehensive module library.  However, blistering raw performance, and more specifically multi-processor performance, are not features standard R would lay claim to.  Standard R is neither multi-threaded nor fully thread-safe.  When we run a stochastic projection in R (which can take from seconds to minutes, depending on the model type), it will exploit one CPU no matter how many the server has available.  It isn't possible to make R any greedier in this area by splitting the projection over multiple threads.

This might not be a concern if it weren't for value-at-risk (VaR) calculations. The nature of VaR is that we must run many stochastic projections against different simulations of next year's population: one thousand is a realistic minimum to calculate a 1-in-200 level, and even higher simulation counts are desirable. Here the single-threaded nature of those R projections becomes quite painful, with elapsed run times stretching over many hours (and even days in the most extreme instances).
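By way of illustration only (the numbers and names below are made up, not taken from our system), the 1-in-200 figure is simply an empirical 99.5th percentile of the simulated results:

## Hypothetical example: the 1-in-200 level is the 99.5th percentile
## of the simulated outcomes (placeholder values used here).
sim_results  <- rnorm(1000, mean = 100, sd = 10)      # stand-in for 1,000 simulation results
var_1_in_200 <- quantile(sim_results, probs = 0.995)  # 1-in-200 capital figure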

The solution is to dip back into our kit-bag and exploit something other than threads: multi-process parallelism.  Since we cannot use threads within R to speed up a single projection, we instead spawn multiple independent R processes, each running a different simulation.  Because the simulations are entirely independent of one another, this kind of workload is often described as embarrassingly parallel, and the approach is very effective.  Our biggest problem is that processes are much heavier than threads: the start-up costs and interfacing overheads bite when individual simulations complete quickly.  Slower model fits place less stress on the controlling routine and so scale better, which is ideal, since they are where we need parallel VaR most!  You can see this limiting effect as we increase the process count on one of our test servers.  Table 1 shows the results of applying multi-process parallelism to 1,000 VaR simulations, where the average simulation runtime is under five seconds.
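As a rough sketch of the approach (and not our production code), the same idea can be expressed with R's built-in parallel package; run_one_simulation() below is a hypothetical stand-in for a full stochastic projection:

## Minimal sketch: spread independent simulations over separate R processes.
library(parallel)

run_one_simulation <- function(seed) {
  set.seed(seed)          # each simulation gets its own seed
  sum(rnorm(1e6))         # placeholder work in place of a real projection
}

n_sims  <- 1000           # at least 1,000 simulations for a 1-in-200 level
n_procs <- detectCores()  # one worker process per available core

## mclapply forks independent R processes (on Unix-like systems) and
## distributes the simulations across them.
results <- mclapply(seq_len(n_sims), run_one_simulation, mc.cores = n_procs)

Since mclapply relies on forking, a Windows server would use a socket cluster instead, via makeCluster() and parLapply(); either way, each simulation runs in its own process and the results are collected at the end.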

Table 1. Execution times for 1,000 VaR simulations of a Lee-Carter DDE model. Source: Longevitas Ltd.

Number of processes    Time taken     Performance factor relative to one process
1                      1 hr 21 min    1.00x
4                      20.0 min       4.05x
7                      12.5 min       6.48x

As you can see, this works and it works well.  While VaR remains a resource-intensive calculation, parallel processing makes slow VaR runs fast and infeasible VaR runs achievable.  And we can't ask for more than that.

Written by: Gavin Ritchie

Parallel processing in Longevitas

Longevitas is designed to use multi-core processors for various operations, including data audit, model optimisation and run-off simulations. Users with dedicated servers will automatically have their work distributed over the cores available. 
