The rapid growth of the web has turned “analytics” and “big data” into buzzwords across industries, including insurance. But what do those terms actually mean, and how do they affect business today? When you strip away the hype, they refer to three key concepts:
- Size and speed: With the growth of e-commerce and mobile technology, businesses are generating much larger amounts of data in less time than ever before.
- Talent and technology: To capture and analyze data quickly, companies are looking to upgrade their information systems and hire staff with the latest technical skills.
- Agility and competitive advantage: Data is valuable only if you use it to help your business. Companies are trying to harness insights from the data to cut costs and grow revenue.
In many ways, the concepts of big data and analytics are the bedrock of modern underwriting and pricing: By collecting and using data from multiple sources, insurers can segment risks to develop profitable, competitive insurance policies. Unfortunately, many insurers still rely on tools from the smaller-data era, putting themselves at a major disadvantage in the marketplace.
Auto insurance: Small vs. big data
In personal auto insurance, big data is making a big difference. Traditionally, underwriters have developed auto insurance prices based on smaller data — such as the car’s make, model, and manufacturer’s suggested retail price (MSRP). But “bigger data” is now available, providing far more information and allowing insurers to price policies with a better understanding of the vehicle’s safety. From manufacturers and third-party vendors, insurers can learn about a car’s horsepower, weight, bumper height, crash test ratings, and safety features. That big data helps insurers create sophisticated predictive models and more accurate vehicle-based rate segmentation.
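To make the idea concrete, here is a minimal sketch of vehicle-based rate segmentation. The feature names, weights, and tier cutoffs are invented for illustration and do not represent any insurer’s actual model — real predictive models use far more characteristics and are fit to loss data.

```python
# Hypothetical sketch: scoring a vehicle's relative risk from its
# characteristics rather than MSRP alone. All weights are illustrative.

def vehicle_risk_score(horsepower, weight_lbs, crash_rating):
    """Combine vehicle characteristics into a single relative risk score.

    Higher horsepower raises the score; greater weight and better crash
    ratings (1-5 stars) lower it. Coefficients are made up for this example.
    """
    return (
        0.004 * horsepower      # faster cars tend to generate larger losses
        - 0.0002 * weight_lbs   # heavier cars tend to fare better in crashes
        - 0.15 * crash_rating   # good crash tests reduce injury severity
        + 1.5                   # baseline
    )

def rate_tier(score):
    """Bucket a continuous risk score into discrete rating tiers."""
    if score < 0.8:
        return "preferred"
    if score < 1.2:
        return "standard"
    return "surcharged"

# Two cars with identical MSRPs can land in very different tiers.
sporty = rate_tier(vehicle_risk_score(horsepower=300, weight_lbs=3200, crash_rating=3))
family = rate_tier(vehicle_risk_score(horsepower=170, weight_lbs=3600, crash_rating=5))
```

Even this toy version shows why MSRP alone is too coarse: the segmentation comes from the interaction of several vehicle characteristics, not from price.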
Big data also helps insurers keep up when manufacturers redesign their vehicles. A new car model may look and cost the same as its predecessor but pose a very different risk on the road. For example, the 2014 Subaru Forester carries the same MSRP as the 2013 model but is 4 inches longer, weighs 5 percent more, and has significantly higher crash ratings for side impacts. Each of those characteristics differentiates the 2014 model and helps bigger-data insurers price the vehicle accurately, right off the assembly line.
Insurers can also use bigger data to classify a vehicle’s risk potential, even when it’s part of a new line. When Nissan released the Versa in 2012, insurers didn’t have the luxury of years of experience to help them price the car. But with analytics from multiple data sources, insurers could rapidly derive an insurance cost estimate based on known characteristics of the vehicle. Similarly, new vehicles like the 2014 Kia Forte, Infiniti Q50, and Mitsubishi Mirage don’t have comparable prior models, and MSRP alone won’t tell the full story. Insurers that use big data and powerful analytics can quickly respond and assign more accurate levels of risk to those new vehicles.
The challenge of big data
Entering the world of bigger data, though, comes with challenges. First and foremost is the ability to access and compile the various data sources so actuaries and data scientists can analyze them. Product managers must remember that this isn’t a “one and done” statistical activity. To use big data sources in a real-world underwriting environment, insurers need to stay on top of the data as manufacturers develop new cars and provide new information. Some of the data is available through vehicle identification number (VIN) lookup tools, perhaps even before a new model rolls off the assembly line. Other data, such as crash test results, may come months later. Insurers need to be ready for the ongoing flow of data if they want to take advantage of the opportunities that bigger data offers.
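The staggered arrival of data described above can be sketched as a simple merge workflow. The record layout, VIN prefix, and field names below are hypothetical, chosen only to show the pattern of launch-time records being completed by late-arriving updates:

```python
# Hypothetical sketch: a vehicle record is created at launch from VIN
# lookup data, then completed months later when crash-test results arrive.

launch_data = {
    # VIN prefix -> attributes known at launch (crash rating not yet tested)
    "JF2SJA": {"model_year": 2014, "weight_lbs": 3300, "crash_rating": None},
}

def apply_update(records, vin_prefix, **fields):
    """Merge late-arriving fields into an existing vehicle record,
    creating the record if the VIN prefix is new."""
    record = records.setdefault(vin_prefix, {})
    record.update(fields)
    return records

def ready_to_rate(record, required=("weight_lbs", "crash_rating")):
    """A record supports full characteristic-based rating only once
    all required fields are present."""
    return all(record.get(field) is not None for field in required)

# At launch the record is incomplete; once crash tests are published,
# the same record becomes fully ratable.
apply_update(launch_data, "JF2SJA", crash_rating=5)
```

The point of the sketch is operational, not technical: the ingestion pipeline, not the statistical model, is what must keep running as new VINs and new test results flow in.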
Wise product managers will consider the following questions as they prepare to adopt big data tools:
- What data will insurers need to update in the future, and how frequently?
- How quickly will updated data be available for rating and underwriting decisions?
- How will changes in underlying data sources affect rate filings? What plans are in place to check that insurers comply with those filings?
- What people, processes, and technology keep underwriting operations moving and responsive to data updates?
Answering those questions will help insurers overcome the challenges and enjoy the enormous benefits of big data.
Helping insurers deal with bigger data
ISO, a member of the Verisk Insurance Solutions group at Verisk Analytics, has addressed many of the challenges of big data in developing the ISO Risk Analyzer® Symbols. With data from a variety of sources, we used statistical expertise to create vehicle-specific symbols. The ISO Risk Analyzer Symbols enable insurers to classify vehicles at the VIN level while taking advantage of the statistical research and models ISO integrated into a simple-to-use tool. The Symbols are highly granular — 448 combinations per coverage — and we derived them from an extensive set of loss-related vehicle characteristics.
ISO continues to monitor various data sources and routinely issues updates as new VINs are added or data sources change. We provide convenient updates to insurers through VINMASTER® or other data services and stay in compliance with filings throughout the United States. That enables product managers to answer the questions posed in this article with confidence.
The ISO Risk Analyzer Symbols are part of a larger suite of analytic products that make it easier for companies to harness the power of bigger data. The ISO Advanced Rating Toolkit™ for personal auto also includes tools to deal with the growing data sources about geographic risk (the environmental module) and tools that further refine segmentation based on individual driver characteristics (the driver history module). Each module can assist insurers in creating segmentation within their books of business, with the flexibility to develop customized uses for unique competitive advantage.