Machine learning for UBI: An optimal path to insurance ratemaking?

By Jim Weiss | November 18, 2016
Anyone who relies on automated vehicle navigation systems while driving in congested traffic knows to disregard the principle that the shortest distance between two points is a straight line. One-way streets, missed turns, and dubious directions often make reaching a destination by car seem to take far longer than by foot.
The same could be said of traditional insurance ratemaking approaches. Like a car navigating heavy traffic, the traditional ratemaking approach, typically a multivariate regression, considers numerous rating variables to estimate risk. Sure, these variables may accurately reflect a policyholder’s loss propensity. But at times, the path seems more circuitous than necessary.
Sophisticated data science
Now consider applying an alternative approach to usage-based insurance (UBI) programs, which typically offer discounts for safe driving behaviors confirmed by vehicle-generated data.
UBI programs can be fertile ground to test various data science techniques that constitute the field of advanced pattern recognition, commonly known as machine learning.
By using an alternative approach that relies on sophisticated data science techniques and considers potentially more powerful variable combinations, one could conceivably arrive at similar, if not better, results — and avoid unnecessary twists and turns in the process. Think of the swift-footed traveler.
This approach draws on decision-making tools such as decision trees, random forests, and neural networks, which help the inquirer quickly discern the set of variables that most closely “maps” to each policyholder’s estimated loss propensity. That reduces the need to apply every significant variable to every policyholder, as traditional approaches do even when some variables are not relevant for a given case.
To see how machine learning is applied, in contrast to a more conventional method, consider the following example involving a hypothetical auto risk:
- An insured vehicle is garaged in a traffic-congested metropolitan area.
- Its primary operator is a 19-year-old who recently obtained a driver’s license.
- The vehicle is usually operated between midnight and 4 a.m.
- The operator makes several sharp right turns during each trip.
It’s easy to see from a traditional underwriting perspective why an insurer might assess a high premium for this hypothetical risk. Typically, the territory base rate may be elevated due to an increased likelihood of crashes and higher cost of living in some metropolitan areas. Also, younger or inexperienced operators may exhibit higher accident rates, justifying a surcharge. Finally, driving during hours when there’s lower visibility and taking turns quickly may both demonstrably amplify risk and potentially disqualify policyholders from more significant UBI savings. Considering each risk factor more or less in isolation, an almost “perfect storm” of risk surfaces. This analysis reflects what data scientists might refer to as multivariate regression and resembles the way many insurers set rates.
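The multiplicative rating logic described above can be sketched in a few lines. The base rate and relativities below are invented for illustration, not actual filed rates; real rating plans involve many more variables and regulatory constraints.

```python
# A minimal sketch of a traditional multiplicative rating plan.
# All figures below are hypothetical, not actual filed rates.

def traditional_premium(base_rate, relativities):
    """Multiply a territory base rate by a relativity for each rating variable."""
    premium = base_rate
    for factor in relativities.values():
        premium *= factor
    return round(premium, 2)

# Hypothetical risk from the example: each factor surcharges in isolation.
risk = {
    "youthful_operator": 1.60,   # 19-year-old, newly licensed
    "late_night_use":    1.25,   # typically driven midnight to 4 a.m.
    "harsh_cornering":   1.15,   # frequent sharp turns
}

premium = traditional_premium(base_rate=900.00, relativities=risk)
print(premium)  # 900 * 1.60 * 1.25 * 1.15 = 2070.0
```

Because each relativity applies independently, the surcharges compound regardless of how the factors interact, which is exactly the "perfect storm" effect described above.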
The same insured risk characteristics can be viewed differently through the lens of machine learning, where we attempt to create the optimal map to the policyholder's estimated risk rather than taking a series of winding streets. This perspective suggests that since the vehicle is garaged in a traffic-congested area, it’s best to drive at night, when fewer cars are on the road. The policyholder’s youth could even be viewed favorably for nighttime driving, to the extent it correlates with sharper eyesight and faster reaction time.
Finally, making sharp right turns might be expected on narrow city streets that intersect at tricky angles — and also far preferable to left turns, which some generally regard as riskier. Looking at a homogeneous cohort of policyholders who exhibit these same risk factors, it might be observed that this hypothetical policyholder, on average, experiences lower losses than the traditional approach would predict.
Decision tree application
If we were to describe the previous paragraph as an application of machine learning, it would essentially be a decision tree application. A decision tree algorithm generally helps the inquirer to discern which risk factor, such as local traffic density, best explains losses for a particular cohort and then divides the sample into two (or more) groups — such as high versus low.
Within each subgroup, the method similarly identifies the most salient explanatory factor, such as whether the primary operator is young. Then within the youthful subgroups, the method may again identify the most salient explanatory factor, such as whether the vehicle is typically operated late at night. Note that the risk factor need not be the same for each subgroup. The automated process continues in this manner until a human intervenes or until no further meaningful splits can be made.
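The greedy splitting loop just described can be sketched in pure Python. The cohort data, feature names, and loss amounts below are invented for illustration; production tools (for example, CART implementations) add pruning, continuous splits, and other refinements.

```python
# A minimal sketch of recursive decision-tree splitting on binary risk factors,
# choosing at each step the factor that most reduces loss variance.
# All data below are hypothetical.

def variance(losses):
    m = sum(losses) / len(losses)
    return sum((x - m) ** 2 for x in losses) / len(losses)

def best_split(rows, features):
    """Return the feature whose binary split most reduces loss variance."""
    base = variance([r["loss"] for r in rows])
    best, best_gain = None, 0.0
    for f in features:
        yes = [r["loss"] for r in rows if r[f]]
        no = [r["loss"] for r in rows if not r[f]]
        if not yes or not no:
            continue  # factor is constant in this subgroup; cannot split on it
        child = (len(yes) * variance(yes) + len(no) * variance(no)) / len(rows)
        if base - child > best_gain:
            best, best_gain = f, base - child
    return best

def grow(rows, features, depth=0):
    f = best_split(rows, features)
    if f is None:  # stop: no split reduces variance further
        mean = sum(r["loss"] for r in rows) / len(rows)
        print("  " * depth + f"leaf: mean loss {mean:.0f}")
        return
    print("  " * depth + f"split on {f}")
    for value in (True, False):
        grow([r for r in rows if r[f] == value], features, depth + 1)

# Hypothetical cohort: late-night driving raises losses in light traffic
# but lowers them in dense traffic, as in the article's example.
rows = [
    {"dense_traffic": t, "late_night": n, "loss": loss}
    for t, n, loss in [
        (False, False, 400), (False, False, 420),
        (False, True, 900), (False, True, 880),
        (True, False, 1000), (True, False, 1040),
        (True, True, 500), (True, True, 520),
    ]
]
grow(rows, ["dense_traffic", "late_night"])
```

On this toy cohort the tree splits first on traffic density and then on late-night use within each branch, and the late-night leaves end up on opposite sides of their siblings: higher mean loss in light traffic, lower mean loss in dense traffic.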
The elegance of this approach is that, in a low-traffic-density subgroup, predictions may hold true about youthful inexperience amplifying risk, with risk increased by late-night driving — corresponding with the predictions of the traditional multivariate regression. But for the higher-traffic-density subgroup, youthful inexperience may prove less of a factor, and (when combined with late-night driving) may significantly reduce loss propensity.
In other words, changing one risk factor, even with other elements being equal, can mean a world of difference. By defining groups based on risk characteristics, decision trees have the potential to save us from making that one “wrong turn” that could conceivably have us traveling quite a distance in the wrong direction.
With respect to UBI, machine learning approaches can reveal perceived drawbacks as potential pluses. For example, a demonstrably risky driving behavior, such as applying intense braking pressure, may occur infrequently and perhaps even be misinterpreted when it does occur. Consider if a vehicle’s brakes are applied to avoid a pedestrian collision in an area where there’s a high volume of foot traffic. One could argue that defensive braking under such circumstances should not be penalized.
More variables in risk equation
A decision tree may identify braking as less significant in areas with high pedestrian traffic and, in turn, bring other variables into consideration under these circumstances, such as how fast or often one drives during congested times of day or when pedestrians are more likely to enter the road unexpectedly. Taking such an approach has the potential to bring more variables into the risk equation, which, from a UBI perspective, may be one potential benefit.
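One way to picture the effect of such a split is a scoring rule in which braking carries less weight where foot traffic is heavy. The function, thresholds, and weights below are invented for illustration; in practice a fitted tree would learn these groupings and weights from loss data rather than having them hand-coded.

```python
# A hedged sketch of context-aware scoring of harsh-braking events.
# Weights are hypothetical; a fitted model would estimate them from data.

def braking_penalty(events):
    """Score harsh-braking events, discounting those in heavy foot traffic."""
    penalty = 0.0
    for e in events:
        if e["foot_traffic"] == "high":
            penalty += 0.25  # likely defensive braking: largely forgiven
        else:
            penalty += 1.0   # open road: full weight
    return penalty

trips = [
    {"foot_traffic": "high"},  # braked near a crowded crosswalk
    {"foot_traffic": "low"},   # braked on an empty arterial road
]
print(braking_penalty(trips))  # 1.25
```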
There will likely always be some people who prefer to use their own feet (and intuition) as opposed to relying on vehicle navigation. Similarly, data scientists may have valid reasons for electing multivariate regression over machine learning — or might opt for some combination of both. In any case, applying machine learning to UBI data demonstrates that, in some contexts, the optimal path to ratemaking is the one less traveled, however the traveler gets there.
For more information, visit the Verisk Data Exchange™.
This article was produced by Verisk Telematics and first appeared as part nine of a ten-part series of articles on PropertyCasualty360.com, which has permitted its reuse.