Predictive Analytics Memo Worked Example

This worked example focuses on the part many candidates underpractice: choosing between plausible predictive models and then writing a recommendation that sounds like advice, not like a statistics worksheet.

Problem

A pricing team wants a model to flag the policies most likely to generate above-expected claim frequency in the next renewal cycle, so that underwriters can review the riskiest 15% of the book. Two candidate models are tested on validation data.

Model A is a GLM with slightly lower discrimination than Model B but better calibration and clearer variable interpretation. Model B is a tree-based model with the highest lift in the top-risk decile, but it is less stable across validation slices and harder to explain to underwriting leadership. Write a concise recommendation memo: which model should the team use now, and what follow-up would you require before broader rollout?

What This Example Is Testing

This tests the judgment both PA and PCPA care about: not just whether you can compare models, but whether you can turn that comparison into a recommendation that respects business use, interpretability, and model risk.

Step-By-Step Solution

Start with the business objective. The team is not trying to publish a benchmark leaderboard. It needs a defensible model for underwriting review of the riskiest part of the portfolio. That means calibration, stability, and explainability matter alongside raw lift.
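
It helps to pin down what "better calibration" means operationally before comparing the candidates. Here is a minimal sketch, assuming hypothetical y_true and scores arrays in place of the team's real validation data: bin the predictions into deciles and compare mean predicted frequency against mean observed frequency in each bin.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical validation data (simulated stand-ins, not from the prompt):
# a binary "above-expected frequency" outcome and predicted probabilities.
y_true = rng.binomial(1, 0.10, size=10_000)
scores = np.clip(rng.normal(0.10, 0.05, size=10_000), 0.001, 0.999)

df = pd.DataFrame({"actual": y_true, "predicted": scores})
df["decile"] = pd.qcut(df["predicted"], q=10, labels=False, duplicates="drop")

# A well-calibrated model has mean predicted close to mean actual
# in every decile, not just in aggregate.
calibration = df.groupby("decile").agg(
    mean_predicted=("predicted", "mean"),
    mean_actual=("actual", "mean"),
)
print(calibration.round(3))
```

A table like this is also something underwriting leadership can read directly, which is part of why calibration and explainability travel together in this scenario.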

Model B has the stronger top-decile lift, so it concentrates more of the truly high-frequency policies among its highest-ranked risks. But the prompt also says it is less stable across validation slices and harder to explain. In a real actuarial workflow, those are not side notes; they are deployment risks.
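
To make the stability concern concrete, a slice-level check can recompute top-decile lift on each validation cohort. The sketch below uses simulated quarterly cohorts as placeholders; the real slices would come from the team's holdout data.

```python
import numpy as np

def top_decile_lift(scores: np.ndarray, actual: np.ndarray) -> float:
    """Event rate among the top 10% of scores, relative to the overall rate."""
    cutoff = np.quantile(scores, 0.90)
    return actual[scores >= cutoff].mean() / actual.mean()

# Hypothetical validation slices (e.g., quarterly cohorts).
rng = np.random.default_rng(1)
for cohort in ["2023Q1", "2023Q2", "2023Q3", "2023Q4"]:
    actual = rng.binomial(1, 0.10, size=5_000).astype(float)
    # Simulated scores loosely correlated with the outcome.
    scores = 0.5 * actual + rng.uniform(size=5_000)
    print(cohort, f"lift = {top_decile_lift(scores, actual):.2f}")
```

A champion whose lift holds, say, near 2.0 in a narrow band across cohorts is easier to defend than a challenger that swings widely, even if the challenger's average lift is higher.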

Model A is the safer immediate recommendation because its calibration is better and its variables are easier to explain to underwriting leadership. It is more likely to support a controlled operational rollout. The right follow-up is not to ignore Model B, but to keep testing it: check whether the extra lift persists over time, review its stability on new cohorts, and evaluate whether the explainability gap can be narrowed enough for governance use.
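
One cheap diagnostic for that follow-up, not required by the prompt and shown here with simulated data, is to measure how much the two models' top-15% referral lists actually overlap. If the challenger flags a very different set of policies, its extra lift also represents a bigger operational change for underwriters.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Simulated stand-ins for each model's scores on the same policies.
scores_a = rng.uniform(size=n)                           # "GLM" scores
scores_b = 0.7 * scores_a + 0.3 * rng.uniform(size=n)    # correlated challenger

# Each model's riskiest 15% of the book, as underwriters would see it.
k = int(0.15 * n)
flagged_a = set(np.argsort(scores_a)[-k:])
flagged_b = set(np.argsort(scores_b)[-k:])

print(f"top-15% overlap: {len(flagged_a & flagged_b) / k:.0%}")
```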

Final Answer

Recommend Model A for the initial rollout because it better matches the business need for a stable and explainable underwriting-review tool. Note that Model B remains a strong challenger and should continue through additional validation, especially stability checks, before any broader deployment decision.

Common Wrong Answer

A common wrong answer recommends Model B immediately on the strength of its lift alone, without discussing calibration, stability, or communication risk. That misses the point of actuarial predictive analytics: model performance has to survive operational and governance reality.
