Risk-Based Pricing Helps Insurers Find Dollars in the Details
By Mark Anquillare
Insurance is a unique industry: it's one of the very few in which a company doesn't know the actual cost of the goods sold at the time the product is sold. Nevertheless, the U.S. property/casualty market is the largest in the world, at about $540 billion in direct written premium for 2013. Loss and loss adjustment expenses (LLAE) amounted to about 66 percent of earned premiums last year.
Insurance 101 would teach that loss costs are the actual or expected cost to an insurer of indemnity payments and allocated loss adjustment expenses (ALAEs) — for example, fees paid to outside attorneys, experts, and investigators used to defend claims. Loss costs do not include overhead costs or profit loadings. Historical loss costs reflect only the costs and ALAEs associated with past claims. Prospective loss costs are estimates of future loss costs, derived by trending and developing historical loss costs — essentially, predictive modeling of the future. Insurers add their own expense and profit loadings to those loss costs to develop rates. Many insurers will file their own rates with the states in which they write business.
So far, so good — but so what? If accurately assessing loss costs plays a big role in insurer profitability, then it seems reasonable to assert that the more accurate the assessment, the greater the advantage to the insurer. In other words, an insurer must be skilled at evaluating and accounting for the roughly $350 billion in claims incurred in 2013.
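As a rough back-of-the-envelope check, the two industry figures quoted above are consistent with each other. The sketch below uses the article's approximate numbers and treats earned premium as roughly equal to written premium for a stable book, which is an assumption:

```python
# Back-of-the-envelope check of the article's 2013 industry figures.
# All inputs are approximations taken from the text above.
direct_written_premium = 540e9   # ~$540 billion direct written premium
llae_ratio = 0.66                # loss and LAE ~66% of earned premium

# Assumption: earned premium roughly equals written premium.
llae_dollars = direct_written_premium * llae_ratio
print(f"Approximate LLAE: ${llae_dollars / 1e9:.0f} billion")
```

The result lands in the mid-$350 billion range, matching the "roughly $350 billion in claims" figure cited in the text.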
If an insurer wants to put together a new policy, it needs to start by understanding the risk and how to price that risk. But that’s just the beginning. In addition, are there regional differences? What will the policy coverages look like? What are the underwriting guidelines? What risks should an insurer take? The answer to the last question has changed over the last decade: Today, many insurers believe there is no bad risk — it’s just got to be priced properly.
Simple? No. But doable.
Methods for improving market share, loss ratios, and combined ratios include better pricing models, improved underwriting decisions, better marketing campaigns, and enhanced claims handling. Whatever strategy an insurer takes to reach its goals, every aspect of a business can benefit from an improved understanding of the risk it writes.
As the market trends toward more analytic- and data-driven decisions, insurers are continually exploring ways to rate better and more precisely. Increased quantities of data and highly refined rating plans give insurers the opportunity to become extremely risk-specific in their pricing. Risk-based pricing — charging different rates depending on different risk characteristics — leads to stability and confidence in pricing. The ability to differentiate between perceived risk and actual risk affords insurers a better way to achieve their financial goals.
To provide perspective on the value of risk-based pricing in the industry, we've asked several of our experts to share their insights on how risk-based pricing can impact the homeowners, commercial property, and auto lines, in addition to offering some thoughts on the price movements that affect premiums — the other side of the combined ratio. The knowledge found through risk-based pricing provides a good opportunity to grow while improving profitability. Our panel of respondents includes Douglas K. Wing, assistant vice president, Analytic Products, ISO Insurance Programs and Analytic Services; Edward Beres, senior technical coordinator of products and services, Verisk Insurance Solutions – Commercial Property; Kevin Kuntz, assistant vice president, Verisk Insurance Solutions – Commercial Property; Dr. Raj Bhat, vice president, auto operations, Verisk Insurance Solutions – Underwriting; John Buchanan, principal – reinsurance, Actuarial Consulting, Reinsurance & Contractual Services, ISO Insurance Programs and Analytic Services; and Joseph M. Izzo, senior vice president, Data Assets and Analytics, ISO Insurance Programs and Analytic Services.
How can more accurate assessment of risk help in rating homeowners policies?
Doug Wing: There are a variety of tactics carriers have taken and can take to give themselves a competitive advantage. Early adoption of risk-based pricing may be the most substantial. One specific and successful methodology being used by a small percentage of insurers in the homeowners market is by-peril rating, arguably the top differentiator in homeowners pricing in the last five years. Homeowners insurance is typically sold as an all-risk policy, which covers all causes of loss except those specifically excluded in the policy. Rating by peril, or cause of loss, allows insurers to rate more accurately by having a better evaluation of their true loss exposures. Developing a peril-specific rating plan provides insurers flexibility to rate policies adequately, increase market share, reduce loss ratios, and lower combined ratios.
By rating at the individual-peril level, insurers are better able to define and price risks. Breaking down risks to the individual-peril level is intuitively appealing because some variables do well in predicting certain perils but not others. For example, population density may be an excellent predictor for theft and vandalism but provides little useful information for the hail or lightning peril.
To get a better understanding of the impact of by-peril ratemaking on company growth and profitability, ISO identified the insurers using by-peril rates in 2013 and compared them with those doing so in 2007. We examined the market share, profitability challenges, and successes those insurers have seen over that period. We found that from 2007 to 2012, the number of homeowners insurers using by-peril rating doubled, reaching 25 carriers last year. The vast majority of those insurers implemented their by-peril rating plans within the last three years. Today, additional carriers are playing catch-up and joining the pursuit of risk-based pricing.
Those 25 insurers made up a combined 28 percent of the total homeowners premium market in 2007. Today, with the adoption of peril-specific rating, those same 25 insurers have boosted their combined market share to 34 percent, an increase of $4.6 billion in direct written premiums. While many strategic aspects of those companies may contribute to the overall growth in market share, all other factors being equal, it appears that those insurers have been able to take advantage of risk-specific pricing to increase their market share in a relatively short time. That’s a great testament to the opportunities that exist for insurers planning to implement by-peril rating plans for more individualized pricing.
In addition to increasing their market share, those same insurers have also improved loss ratios and lowered combined ratios over the same time frame. In 2012, insurers using by-peril rating plans had loss ratios 7.4 points lower than insurers not using by-peril rating (69.2 percent vs. 76.6 percent). Similarly, combined ratios for by-peril companies were 8.8 points lower than for insurers not rating by peril (96.1 percent vs. 104.9 percent). Within the combined ratio, not only are loss ratios lower (7.4 points), but expenses are also lower (1.4 points), thus demonstrating multiple benefits.
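The arithmetic behind that comparison is worth making explicit: because the combined ratio is the loss ratio plus the expense ratio, the gap between the two groups' combined ratios decomposes into a loss-ratio gap and an expense-ratio gap. A quick sketch using the 2012 figures quoted above:

```python
# Reproducing the by-peril vs. non-by-peril comparison from the text.
# Figures are the 2012 ratios quoted above, in percentage points.
loss_ratio_by_peril = 69.2
loss_ratio_other = 76.6
combined_by_peril = 96.1
combined_other = 104.9

loss_gap = loss_ratio_other - loss_ratio_by_peril       # loss-ratio advantage
combined_gap = combined_other - combined_by_peril       # combined-ratio advantage

# Combined ratio = loss ratio + expense ratio, so the leftover gap
# is attributable to lower expenses for the by-peril group.
expense_gap = combined_gap - loss_gap
print(f"Loss gap {loss_gap:.1f} pts, combined gap {combined_gap:.1f} pts, "
      f"expense gap {expense_gap:.1f} pts")
```

This confirms the figures in the text: a 7.4-point loss-ratio advantage plus a 1.4-point expense advantage accounts for the full 8.8-point combined-ratio gap.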
While this is one example of success by better identifying the risk, many more possibilities exist in homeowners insurance. Carriers that more accurately validate and price for roof age and wear and tear may have a leg up on the competition. Furthermore, as more data is collected from “smart homes,” insurers that use such information to better define risks will likely be leaders in the market. Smart-home data includes monitoring water and gas usage, applying sensors to detect moisture, enhanced security features, and so on. We live in a world with an escalating quantity of data, and the insurers that integrate superior risk identification will be better positioned to achieve their financial goals.
Given all the elements and complexity that are part of underwriting commercial property, how can an insurer improve on its pricing?
Edward Beres and Kevin Kuntz: Though every insurer has its own considerations when underwriting and rating commercial properties, achieving consistent analysis is the key to a successful business model. Consistency helps insurers better understand and mitigate hazards, reduce deficiencies, and, most important, improve the bottom line. Having a reliable source of information about the nature of the risk is a key element in achieving consistency in risk evaluation.
Commercial property poses even larger risks than personal auto on several fronts. The foundation of understanding those risks is the use of reliable, high-quality data. As it has been for many years, such data is categorized under the designation of COPE — construction, occupancy, protection, exposure.
- Construction. A primary goal of underwriting a property insurance policy is determining the risk of fire. To help make that determination, underwriters review a number of construction factors to assign the appropriate construction class. Construction classes are based on materials used to build the property, the percentage of the structure that consists of each kind of material, and the estimated amount of damage that the building will sustain when exposed to fire.
- Occupancy and hazard mitigation. A building’s occupancy is another important factor underwriters must consider when evaluating a property. Occupancy is measured by the effect of flammable contents on the structure under fire conditions; the potential damage to materials or merchandise from the effects of fire, smoke, and water; the square footage of each floor level; and sprinkler and extinguisher protection credits.
When underwriting a commercial property, insurers should consider providing policyholders with recommendations for reducing hazards and improving protection deficiencies at the property.
Underwriters should also have an understanding of common and special hazards to adequately price the risk based on current conditions. Common hazards can include deficiencies in electrical components, heating systems, and housekeeping, while special hazards may include flammable and combustible liquids, spray-painting operations, commercial cooking, and welding and cutting.
- Protection. Underwriters must consider private and public fire protection services and safeguards on or near the property being assessed. Private fire protection systems on a property can include extinguishers, alarms, and automatic sprinkler systems.
- Exposure. Obtaining accurate exposure data about adjacent buildings, including exposing walls, hazards, construction, and distance, is critical for underwriters. For instance, a property close to a high-hazard operation or next to a storage tank with flammable liquids can present potential hazards. In addition, underwriters should also be aware of exposures such as local wildfire risk, the possibility for damaging winds and/or water, and the proximity of a structure to a flood or earthquake zone.
While the tradition of using COPE information as the basis of commercial property insurance decision making has not changed, what is more prevalent in recent years is the increased use of analytics. Analytics allow commercial property insurers to make informed decisions in efficient and effective ways while reducing risk.
Pricing models have been used throughout the industry for many years. Accurate COPE information that feeds pricing models and uses a methodology based on sound engineering principles with sufficient actuarial support can do a good job of predicting loss. Those models should adjust over time to account for emerging technologies, such as changing construction materials and techniques, and new and evolving hazards.
Loss estimates under normal and adverse conditions are a valuable tool and help in understanding the risk. Analytics that allow an insurer to benchmark a risk by providing detail on how a risk compares within specific classes or types can be a useful underwriting tool that provides different insight. Analyzing deficiencies and recommendations related to a risk can help quantify the benefits of improving the risk.
The use of analytics helps to identify and differentiate between risks, saving resources and time. In concert with quality COPE data, analytics can help provide better risk evaluation, pricing, and profitability.
How much revenue have auto insurers lost to rating error and premium leakage, and what can they do about it?
Raj Bhat: To understand the loss, let’s take 2012 as an example. In that year, rating error reduced premium revenue in the private passenger auto insurance industry by $16.07 billion. The reduced revenue was essentially a multibillion-dollar industry “windfall” handed directly to policyholders. Premium rating error represents 9.3 percent of a total $172.3 billion in personal auto written premium. Without action, insurers will continue to lose billions more over the coming years — all because of premium leakage that could be stemmed if appropriate steps are taken.
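The leakage rate cited above follows directly from the two dollar figures. A quick sketch checking the arithmetic with the quoted 2012 numbers:

```python
# Checking the premium leakage figures quoted for 2012.
leakage = 16.07e9            # rating-error revenue shortfall, ~$16.07 billion
total_written = 172.3e9      # total personal auto written premium

leakage_rate = leakage / total_written
print(f"Premium leakage: {leakage_rate:.1%} of written premium")
```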
The year 2012 saw a slight decrease in auto premium leakage compared with the prior year. On average, people drove less year over year, resulting in a slight decline in premium leakage attributed to annual miles and commute; however, these categories still represent a significant proportion of the total premium leakage. There is also anecdotal evidence that a significant proportion of grown children moved back to their parents’ or grandparents’ homes because of the weak job outlook. As a result of that household consolidation, there was a sizable increase in premium leakage from unreported drivers and garaging address misreporting. Addressing premium leakage across all rating factors can make a considerable difference — in many cases reducing the auto underwriting ratio by as much as three percentage points.
For individual carriers, reducing rating error is a significant opportunity for profit gains. A reduction in auto rating error directly affects the bottom line. In a small-margin business where an individual insurer might have average profits of 5 percent of premium in a good year, adding 1 or 2 percent of loss-free premium can make a substantial difference to an auto writer’s combined ratio.
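The mechanics of why "loss-free" premium moves the combined ratio so directly can be sketched with a hypothetical book of business: recovered leakage adds premium to the denominator without adding any losses or expenses to the numerator. The figures below are illustrative assumptions, not data from the article:

```python
# Hypothetical book: combined ratio of 100% before the correction.
premium = 100.0
losses_and_expenses = 100.0          # combined ratio = 100%
recovered = 0.02 * premium           # 2% of premium recovered as leakage

# Recovered premium is "loss-free": it raises the denominator only.
combined_before = losses_and_expenses / premium
combined_after = losses_and_expenses / (premium + recovered)
print(f"Combined ratio: {combined_before:.1%} -> {combined_after:.1%}")
```

On this illustrative book, recovering 2 percent of premium as loss-free revenue improves the combined ratio by nearly two full points, which is a large swing in a business with 5 percent margins.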
Conversely, unchecked rating error leads to failures in risk management. As an example, policies with unrated 16-year-old male drivers in the household experience an average loss ratio of more than 160 percent.
All stages of the underwriting cycle are susceptible to rating error: sales, risk analysis, policy servicing, and renewal. While significant error occurs at initial application, analysis shows that most rating errors — and massive premium leakage — arise from changes in rating data for a policyholder over time.
Superior analytics that screen for policy rating errors can flag between 35 and 55 percent of policies as having potential rating errors. Audits of those flagged policies show that 82 percent carry too little premium to cover the intended risk. By effectively identifying errors in rating data, insurers can take the steps necessary to correct costly errors and restore profitability to their book of auto policies.
Auto insurers can help prevent premium leakage by taking advantage of data and advances in analytic and decision support methodologies. Those tools address changes in jobs, location, life circumstances, and so forth. Through accurate rating, insurers can reduce premium leakage and increase profits.
Step 1: Get it right at the point of sale. To ensure rating integrity and limit premium leakage, insurers need to employ available technologies and conduct sophisticated data analyses. All new business should undergo this process, in real time, from an agent’s or customer service representative’s desktop. Numerous rating variables also require new-business audit checks to identify potential rating errors.
Audit checks should verify an extensive range of data — from vehicle-driver assignment, annual mileage, vehicle garaging territory, and unlisted drivers to accidents and violations, identity theft, commute distance, VIN identification, and more.
With additional analytic tools, such as pattern analysis and statistical algorithms, insurers can flag questionable policyholder information for a variety of findings, including vehicles garaged at mail drop addresses, households with unreported youthful operators, incorrect vehicle-driver assignments, underreported annual mileage and commute distances, false driver’s licenses and Social Security numbers, and commercial vehicles insured as private passenger autos.
Step 2: Get it right at renewal. Even if a policy undergoes careful analysis at initial application, there is a significant likelihood that changes in everyday life will create a different risk profile over time. Changes in marital status, job changes, new cars, new houses, and “new” 16-year-olds all create very different risks. Annual review of renewal policies can keep insurers on top of those changes.
The renewal process is similar to the new-business review but should be more in-depth. Intelligent reviews would identify only those policies that have a high likelihood of changes to rating elements. Once those policies are identified, insurers should employ an efficient and effective validation program.
The policyholder contact strategy should be comfortable yet comprehensive. To achieve optimal household contact, today's policyholders also need multiple response options, the most important of which is the Internet. With this strategy, it's possible to achieve customer contact rates of more than 80 percent.
Contact is important not only for securing customer acknowledgment of rating variable changes but also for improving the overall customer insurance experience. Far too often, a policyholder’s only contact with his or her insurance company is the annual or semiannual invoice. An annual “checkup” call or contact goes a long way toward maintaining a positive — and long-term — company-policyholder relationship.
Step 3: Develop a long-term approach. Once an insurer has completed steps one and two — and cleaned up its auto book of business — a baseline of accurate information exists to enable regular maintenance of a premium leakage program. The baseline will help enable affordable rating integrity effectiveness for the future. The key is to avoid lapses that could allow rating error to creep back in. Studies show that changes made during reunderwriting have an average life span of two to three years. That means there is substantial lifetime value in a rating integrity program that goes far beyond the first-year cost of execution.
The bottom line is that a great deal of time and effort goes into a one-year cleanup. However, significantly less time and effort are required to keep an auto book clean. Smart companies realize the advantages and have integrated rating integrity into their policy renewal process.
The most common metrics for measuring profitability are combined ratios, or operating ratios. The focus is usually on the numerator of those ratios — which in large part reflects losses and loss adjustment expenses. But what about the denominator — the premiums — and the price movements that drive them?
John Buchanan and Joe Izzo: Proper risk segmentation and exposure recognition are very important pieces to the correct price puzzle. Building on what Doug, Ed, Kevin, and Raj said, we agree that solving for the correct price begins by proceeding from the most accurate starting point, or technical price. There are many tools and analytics available to insurers and reinsurers for that task in the pricing process.
One of the most important functions of property/casualty upper management and chief actuaries is the measurement of profitability. That measurement, like a roller-coaster ride, goes up and down dramatically depending on where an insurer is in the ongoing underwriting cycle. That cycle — driven by competitive pressures, especially for commercial lines business — has a dramatic impact on the actual charged premiums.
After the devastating soft-market cycle of 1997 to 2001, companies invested heavily in building robust internal price monitoring systems. To supplement price monitoring information, whether for comparison or credibility enhancement, companies require accurate external pricing indications that look both back in time and forward toward the future. To help measure historical external pricing changes, underwriters and actuaries have access to numerous rate-level survey sources. To estimate future expected changes, there are at least as many learned opinions from various industry segments and leaders.
Not surprisingly, in a highly competitive marketplace such as insurance or reinsurance, the pricing actions and results of an individual company strongly correlate with the pricing actions and results of its competitors. For successful companies, staying on top of the pricing feedback mechanism is critical. An accurate assessment of the market involves understanding not only a company's own internal price monitors but also relating them to the broader marketplace. The ability to use that information in augmenting the technical price allows the astute user to optimize the tradeoff between market share and profitability.
Conclusion: Go with the Flow
Mark Anquillare: The insurance industry is focused on business workflow — and achieving that flow in an automated way. That’s certainly happening in personal lines and heading in that direction in commercial lines.
The flow of business means moving risk decisions up front, so as an application comes in, an insurer can use analytics and information at the point of sale to understand the particular risk. And if an insurer can price a piece of business inexpensively and quickly, it usually wins the business.
Using risk-based analytics, insurers can help customers by ultimately lowering the cost of insurance. In that way, risk-based pricing not only can create more dollars for insurers, it can also save some dollars for customers. Everyone winds up a winner.
Mark V. Anquillare is group executive of Verisk Insurance Solutions – Risk Assessment and executive vice president and chief financial officer of Verisk Analytics.