
Why data management must be a top priority for insurance mergers and acquisitions

Properly managing data is a complex task within any insurance company, and it can become even more daunting when two carriers merge.

The key concerns are the quality and security of the data being merged. At the same time, the acquiring company typically also wrestles with the logistics of combining multiple IT systems and ensuring regulatory compliance. Every piece of data, no matter how granular, can be valuable. If quality suffers, the effects can trickle down to product development and, potentially, entire business models and markets.


The need to get data management right cannot be overstated, given that data supports new and renewal business—virtually every part of the policy lifecycle. It is also essential given the increasing prominence of mergers and acquisitions in the insurance arena. A report¹ on insurance M&A found that most insurance companies are committed to pursuing M&A as a path to growth and that, in a survey of insurance companies, 72 percent agreed that at least half of the growth in the insurance industry over the next five years will be attributable to M&A.

Charging down the path to data-driven success

One challenge that often crops up in a merger is contextualizing data so its quality can be properly assessed. For that to happen, the disparate data from both companies must be:

  • Consolidated and validated—with workflows streamlined
  • Securely reported and warehoused
  • Part of an integrated system so it can be more easily analyzed, improved and enhanced
  • Part of a scalable infrastructure so the business can be viewed from an enterprise perspective

Prior to a merger, each company will have managed its own data and devised proprietary systems for losses, coverages, and risks. The raw data may be sound within a company’s ecosystem. However, without context or documentation behind the numbers, integration becomes that much more difficult.

For example, at the outset, each company will have its own approach to defining internal codes. If Company A identifies a state with a numerical code (e.g., Alabama = ‘01’), it must be determined upon merger whether Company B represents the same information in the same way. If not, the companies must decide how to address the discrepancy and reconcile the numbers.
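One common way to reconcile two coding schemes is to build a crosswalk that maps each of Company B’s codes to the equivalent Company A code and flags anything that cannot be matched. The sketch below is illustrative only; the company names, codes, and field names are assumptions, not any carrier’s actual scheme.

```python
# Hypothetical crosswalk between two merged books' state codes.
# Company A codes states numerically; Company B uses postal abbreviations.
COMPANY_A_STATES = {"01": "Alabama", "02": "Alaska"}
COMPANY_B_STATES = {"AL": "Alabama", "AK": "Alaska", "XX": "Unknown"}

def build_crosswalk(a_codes, b_codes):
    """Map each Company B code to the Company A code for the same state."""
    name_to_a = {name: code for code, name in a_codes.items()}
    crosswalk, unmapped = {}, []
    for b_code, name in b_codes.items():
        if name in name_to_a:
            crosswalk[b_code] = name_to_a[name]
        else:
            unmapped.append(b_code)  # discrepancy to reconcile manually
    return crosswalk, unmapped

crosswalk, unmapped = build_crosswalk(COMPANY_A_STATES, COMPANY_B_STATES)
print(crosswalk)  # {'AL': '01', 'AK': '02'}
print(unmapped)   # ['XX'] — no Company A equivalent; needs human review
```

The unmapped list is the important output: every code that falls into it represents exactly the kind of discrepancy the merged companies must resolve before the combined data can be trusted.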

The united state of data

Proper data management is, of course, a top priority for any insurer. But the need for proper execution is especially acute when insurers merge. A Verisk white paper cites three attributes that must be present to ensure a proper system is in place.

Availability: The data should be accessible to employees and available on demand to generate reports, apply to compliance requests, and properly underwrite policies.

Granularity: The data needs to encompass ALL exposures a risk represents, including granular details that, when left uncovered, can negatively affect profitability in the long term.

Credibility: The sources of the high-quality data need to be reputable; they need to supply data that’s current and accurate.

Wrong data, wrong claims

There have been instances in which some of a claim’s underlying attributes—such as location or class—do not correspond to information on the affected policy, or worse, the claim does not correspond to any existing policy in the carrier’s book. Carriers often use separate policy administration and claims systems to collect their data, which makes such disconnects easier to miss.
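A basic consistency audit can surface both problems: claims whose attributes disagree with the policy, and orphan claims with no policy at all. The following is a minimal sketch under assumed field names (`policy_id`, `state`, `class`); real policy and claims systems would join far richer records.

```python
# Illustrative claims-to-policy consistency check (field names are assumptions).
policies = {
    "P-100": {"state": "AL", "class": "retail"},
    "P-200": {"state": "AK", "class": "office"},
}
claims = [
    {"claim_id": "C-1", "policy_id": "P-100", "state": "AL", "class": "retail"},
    {"claim_id": "C-2", "policy_id": "P-100", "state": "AK", "class": "retail"},  # state mismatch
    {"claim_id": "C-3", "policy_id": "P-999", "state": "AL", "class": "retail"},  # no such policy
]

def audit_claims(claims, policies):
    """Return claims with no matching policy, and attribute mismatches."""
    orphans, mismatches = [], []
    for claim in claims:
        policy = policies.get(claim["policy_id"])
        if policy is None:
            orphans.append(claim["claim_id"])  # claim matches nothing in the book
            continue
        for field in ("state", "class"):
            if claim[field] != policy[field]:
                mismatches.append((claim["claim_id"], field))
    return orphans, mismatches

orphans, mismatches = audit_claims(claims, policies)
print(orphans)     # ['C-3']
print(mismatches)  # [('C-2', 'state')]
```

Running a check like this regularly, rather than only at merger time, keeps the two systems from drifting apart in the first place.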

A few years ago, a large carrier instituted a new claim reporting system. More large claims began being reported under an “All Other” peril code instead of what actually occurred, which in some cases was a fire or storm. It turned out that many claims adjusters lacked the context or coding expertise to select the correct peril, so a default was assigned. Cause of loss is obviously important when assessing a risk or determining an appropriate premium.
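A simple way to catch this pattern early is to monitor the share of new claims landing in the catch-all code and alert when it exceeds a baseline. The sketch below assumes a code literally named “All Other” and a 25 percent alert threshold; both are illustrative and would need tuning to a carrier’s historical peril mix.

```python
from collections import Counter

DEFAULT_PERIL = "All Other"
THRESHOLD = 0.25  # assumed alert level, not an industry standard

def default_peril_share(peril_codes):
    """Fraction of claims coded with the catch-all peril."""
    counts = Counter(peril_codes)
    return counts[DEFAULT_PERIL] / len(peril_codes)

recent = ["Fire", "All Other", "Wind", "All Other", "All Other", "Theft"]
share = default_peril_share(recent)
if share > THRESHOLD:
    print(f"Alert: {share:.0%} of recent claims coded '{DEFAULT_PERIL}'")
```

An elevated share does not prove miscoding, but it is a cheap signal that adjusters may be defaulting rather than recording the true cause of loss.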

If a carrier is not careful to maintain security and integrity within a cohesive data environment, disparate information can lead to incorrect assumptions and flawed analyses of its own book’s performance. There are other reasons for caution. A report² on mergers and acquisitions notes that a “target company may bring a cybersecurity weakness into the organization, or a transaction that involves layoffs or other workforce changes may create data security risks.”

Avoiding data bottlenecks

ISO is well positioned to identify mistakes and analyze how companies can get on course and maintain regulatory compliance. The ISO Preferred Data Partnership is one way to streamline those processes and help ensure reports are complete and accurate. Reporting can otherwise become a herculean task for insurers, given that every state has its own requirements, which often change.

Extracting data and cataloging it in a way a company finds useful post-merger takes significant effort and expertise. Verisk knows this well, given that its Commercial Lines manuals support thousands of risk categories across business types, coverages, and exclusions. But in the insurance business, there is no such thing as too much information. It just has to be the right information.

Learn more about Verisk's data management and reporting services.

  1. Deloitte, 2020 M&A Insurance Outlook
  2. Deloitte, The State of the Deal: M&A Trends 2020

Mike Lenczewski

Mike Lenczewski is senior director of the ISO strategic actuarial operations division at Verisk.

Richard Morales

Rich Morales is the Product Director for Data Management Solutions.

