Definition: Frequency-severity method

📊 Frequency-severity method is an actuarial technique that models total insurance losses by separately estimating two components: how often claims occur (frequency) and how large each claim is when it does occur (severity). Rather than projecting aggregate losses as a single number, this approach treats the number of claims and the cost per claim as independent random variables, each with its own probability distribution. The product of mean frequency and mean severity yields expected losses, but the real power lies in the ability to analyze each component's drivers, variability, and trends on its own terms.
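The "product of means" idea above can be sketched in a few lines. This is a minimal illustration with hypothetical observed data (the claim counts and amounts are made up for the example), not a fitted model:

```python
# Minimal sketch: expected loss as mean frequency times mean severity.
# The claim counts and amounts below are hypothetical illustration data.
claim_counts_per_year = [12, 9, 15, 11, 13]       # observed annual claim counts
claim_amounts = [4200.0, 3800.0, 5100.0, 4600.0]  # observed individual claim costs

mean_frequency = sum(claim_counts_per_year) / len(claim_counts_per_year)
mean_severity = sum(claim_amounts) / len(claim_amounts)

# Under the independence assumption, E[total loss] = E[frequency] * E[severity]:
expected_annual_loss = mean_frequency * mean_severity
print(expected_annual_loss)  # → 53100.0 (12 claims/year * $4,425 per claim)
```

Note that this identity holds exactly only when claim counts and claim sizes are independent; in practice actuaries test that assumption before relying on it.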

🔬 Actuaries typically fit frequency data to discrete distributions — Poisson or negative binomial — and severity data to continuous distributions such as lognormal, Pareto, or gamma. By simulating thousands of outcomes from these fitted distributions, they can generate a full loss distribution that reveals not only the expected loss but also the probability of extreme outcomes, which is essential for setting reserves, pricing excess-of-loss layers, and calibrating catastrophe models. The method is widely applied across property, casualty, and workers' compensation lines. When frequency and severity are trending in different directions — for example, claim counts declining due to safer technology but average claim costs rising due to medical inflation — this separation provides clarity that aggregate methods would obscure.
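The simulation step described above can be sketched with a Poisson frequency and lognormal severity. The parameters here (8 claims per year on average, lognormal with a median around $8,100) are hypothetical, and Poisson sampling is done with Knuth's algorithm to keep the example standard-library only:

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw a Poisson-distributed claim count via Knuth's algorithm."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_aggregate_losses(lam, mu, sigma, n_sims, seed=42):
    """One simulated year = Poisson claim count, each claim lognormal-sized."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        n_claims = poisson_sample(lam, rng)
        totals.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n_claims)))
    return totals

# Hypothetical fitted parameters: mean frequency 8, lognormal(mu=9, sigma=0.7).
losses = sorted(simulate_aggregate_losses(lam=8.0, mu=9.0, sigma=0.7, n_sims=10_000))
expected_loss = sum(losses) / len(losses)
tail_99 = losses[int(0.99 * len(losses))]  # 99th-percentile annual aggregate loss
print(f"expected ≈ {expected_loss:,.0f}, 99th percentile ≈ {tail_99:,.0f}")
```

The sorted simulation output is the full loss distribution the text refers to: its mean gives the expected loss, while high quantiles like the 99th percentile inform reserves and excess-layer pricing.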

💡 The frequency-severity method gives underwriters and risk managers a far more granular view of what is driving loss experience, enabling more precise pricing adjustments and portfolio decisions. If a book of auto insurance shows stable frequency but escalating severity, the appropriate response might focus on claims management and medical cost containment rather than on tightening eligibility criteria. Similarly, reinsurers use the technique to price treaties where the attachment point depends critically on the tail behavior of severity, not just on average outcomes. Its versatility and analytical transparency have made it one of the foundational tools of quantitative insurance practice.
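The reinsurance point above can be made concrete with a small excess-of-loss sketch. The layer terms ($200k excess of $50k) and the heavy-tailed lognormal severities are illustrative assumptions, not market figures:

```python
import random

def layer_loss(claim, attachment, limit):
    """Reinsurer's share of one claim in an excess-of-loss layer:
    nothing below the attachment, capped at the layer limit."""
    return min(max(claim - attachment, 0.0), limit)

rng = random.Random(7)
# Hypothetical heavy-tailed severities; only the tail matters to the layer.
claims = [rng.lognormvariate(9.0, 1.2) for _ in range(50_000)]
attachment, limit = 50_000.0, 200_000.0  # layer: 200k xs 50k

ceded = [layer_loss(c, attachment, limit) for c in claims]
expected_ceded_per_claim = sum(ceded) / len(ceded)
share_hitting_layer = sum(c > attachment for c in claims) / len(claims)
print(f"expected ceded per claim ≈ {expected_ceded_per_claim:,.0f}, "
      f"claims reaching layer: {share_hitting_layer:.1%}")
```

Because most claims never reach the attachment, the layer price is driven almost entirely by the severity distribution's tail, which is exactly why the aggregate mean alone is insufficient for treaty pricing.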

Related concepts: