
Model validation and model selection

Minh Ho

Very Active Member
Exam ST8 in September 2015 had two distinct questions about model validation and model (factor) selection, in Questions 10 and 11. A summary:
Model (factor) selection

- Test statistics (AIC, BIC, chi-squared test, F-test); see the sketch after this list

- Hat matrix; curvature of the log-likelihood (how quickly it falls away from the optimum solution), i.e. steep curvature means the parameter is tightly defined

- Compare model relativities with expert judgement, e.g. plot the predicted relativities with a ±2 standard deviation band and check whether the observed values fall inside it

- Consistency checks on factors and interactions
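
For illustration, here is a minimal sketch of comparing two nested GLMs with AIC, BIC and a likelihood-ratio (chi-squared) test, using Python's statsmodels. The data, factor names (age_band, region, exposure, claims) and model formulas are entirely hypothetical, not from the exam questions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical claims-frequency data
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age_band": rng.choice(["17-21", "22-30", "31-60", "60+"], size=n),
    "region": rng.choice(["urban", "rural"], size=n),
    "exposure": rng.uniform(0.5, 1.0, size=n),
})
df["claims"] = rng.poisson(0.1 * df["exposure"])

# Candidate models: with and without the region factor
m1 = smf.glm("claims ~ age_band", data=df,
             family=sm.families.Poisson(),
             offset=np.log(df["exposure"])).fit()
m2 = smf.glm("claims ~ age_band + region", data=df,
             family=sm.families.Poisson(),
             offset=np.log(df["exposure"])).fit()

# Information criteria: lower is better
print("AIC:", m1.aic, m2.aic)
print("BIC (log-likelihood based):", m1.bic_llf, m2.bic_llf)

# Likelihood-ratio (chi-squared) test for the nested comparison
lr = 2 * (m2.llf - m1.llf)
dof = m2.df_model - m1.df_model
print(f"LR stat = {lr:.2f}, p-value = {stats.chi2.sf(lr, dof):.3f}")
```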


Model validation

- Actual against expected

- Residual plots

- Gain curves

- Lift curves (see the sketch after this list)
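
As a sketch of an actual-versus-expected / lift-style check: band policies by predicted frequency and compare observed against expected frequency in each band. The helper below (lift_table) and the commented usage line are illustrative only and assume the hypothetical data and model from the first sketch.

```python
import numpy as np
import pandas as pd

def lift_table(actual_claims, predicted_freq, exposure, n_bands=10):
    """Band policies by predicted frequency and compare actual vs
    expected claim frequency in each band (a simple lift / AvE check)."""
    d = pd.DataFrame({
        "actual": np.asarray(actual_claims),
        "expected": np.asarray(predicted_freq) * np.asarray(exposure),
        "exposure": np.asarray(exposure),
        "pred_freq": np.asarray(predicted_freq),
    })
    # Rank policies from lowest to highest predicted risk
    d = d.sort_values("pred_freq")
    d["band"] = pd.qcut(np.arange(len(d)), n_bands, labels=False)
    g = d.groupby("band")[["actual", "expected", "exposure"]].sum()
    g["actual_freq"] = g["actual"] / g["exposure"]
    g["expected_freq"] = g["expected"] / g["exposure"]
    return g[["actual_freq", "expected_freq"]]

# e.g. with the fitted GLM from the previous sketch:
# print(lift_table(df["claims"], m2.fittedvalues / df["exposure"], df["exposure"]))
```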

To me, the difference between the two is small: both use statistical methods to compare models.
Many of the methods used in model validation can also be used in model selection, and vice versa. After all, it is just the trade-off between overfitting and underfitting (e.g. AIC can be used in model selection as well).
On Kaggle, some machine learning practitioners use AIC on the training, validation and test sets.
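
As a rough sketch of that train/holdout idea, reusing the hypothetical data and models above: fit on a training split, then compare the training AIC with an out-of-sample Poisson log-likelihood on a holdout set (the helper holdout_poisson_ll is my own illustrative function, not a library routine).

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# Reuse the hypothetical data frame `df` from the first sketch
train = df.sample(frac=0.7, random_state=1)
test = df.drop(train.index)

def holdout_poisson_ll(result, data):
    """Poisson log-likelihood of a fitted GLM evaluated on new data."""
    mu = result.predict(data, offset=np.log(data["exposure"]))
    return stats.poisson.logpmf(data["claims"], mu).sum()

fit1 = smf.glm("claims ~ age_band", data=train,
               family=sm.families.Poisson(),
               offset=np.log(train["exposure"])).fit()
fit2 = smf.glm("claims ~ age_band + region", data=train,
               family=sm.families.Poisson(),
               offset=np.log(train["exposure"])).fit()

print("train AIC:", fit1.aic, fit2.aic)
print("holdout log-likelihood:",
      holdout_poisson_ll(fit1, test), holdout_poisson_ll(fit2, test))
```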
 
True enough.

In the exam, I imagine it's worth making the distinction between the two. The marking scheme is likely to do the same, and after all it's all about maximising your chances of picking up marks.

In practice though, an actuary is likely to use a wide variety of tools to optimise the model and analyse results.

At the end of the day, an actuary's job is not particularly to give 'the right answer' but to understand (and communicate) the model results, the uncertainties in the model and the limitations of the methods used.
 