In the insurance business, sometimes a carrier may deny an applicant coverage, investigate a claimant for suspicious activity, or deny full claim reimbursement for various reasons. These actions usually occur after a thorough evaluation from an insurance professional.
But what if an AI algorithm made these decisions based on biased information? For example, what if someone was denied coverage because of the demographics of the neighborhood where they lived, investigated because of their gender, or denied full claim reimbursement because of their ethnicity?
As insurers’ interest in artificial intelligence (AI) grows, we must not only embrace its immense capabilities but also work to mitigate its potential risks.
AI adoption is on the rise
AI simulates intelligent behavior and processes large amounts of data to help humans make informed decisions. The technology has been around for decades, but as more data has become available and computing power has increased exponentially, it’s become a greater part of our day-to-day lives.
The insurance industry is starting to embrace the technology. More than 20 percent of insurers had either deployed or piloted AI technology in 2021, according to a Novarica study, and that percentage has increased each year.
While initially used for underwriting purposes, AI is becoming more prominent in the claims business, facilitating automation and fraud detection. Across all business functions, AI has the potential to enable faster and more accurate decision-making, improve operational efficiency, and enhance customer experience.
But without the proper safeguards, AI can produce unethical results.
How AI can go wrong
AI isn’t inherently biased, but biases can easily creep into algorithms and skew decision output if proper checks aren’t in place. Here are three ways AI can go wrong.
1. Biases hidden in data
Data is the fuel that powers AI. AI’s primary function is to find patterns in data and correlate those patterns with outcomes of interest. But if the data is biased, the outcomes the algorithms produce will be biased as well. AI algorithms can mirror the biases that already exist in our marketplace.
For example, if you build a claims fraud detection model and the data represents a very specific book of business or region, the AI solution is more likely to identify similar types of fraud patterns and may even amplify the biases over time in terms of whom it tags for investigation. Such biases don’t stay embedded in just one model; they can scale exponentially once the scoring process is put into production.
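To make the feedback-loop concern concrete, here’s a minimal toy simulation. All numbers and region labels are hypothetical and not drawn from any real book of business. Two regions have the same true fraud rate, but the historical labeled data over-represents one of them; if investigations are allocated in proportion to past fraud labels, the skew never corrects itself:

```python
# Toy feedback loop: two regions with the SAME true fraud rate, but the
# historical labeled data over-represents region "A" because that's
# where past investigations were focused. (All values hypothetical.)
TRUE_FRAUD_RATE = 0.05
INVESTIGATIONS_PER_ROUND = 1000

labeled = {"A": 400, "B": 100}  # skewed historical fraud labels

for round_no in range(1, 6):
    total = sum(labeled.values())
    for region, count in labeled.items():
        # Investigations are allocated in proportion to past labels...
        investigated = INVESTIGATIONS_PER_ROUND * count / total
        # ...and fraud is found at the SAME underlying rate in each region.
        labeled[region] += TRUE_FRAUD_RATE * investigated

share_a = labeled["A"] / sum(labeled.values())
print(f"Share of fraud labels from region A after 5 rounds: {share_a:.0%}")
```

In this sketch, region A keeps receiving 80 percent of the investigations and 80 percent of the new fraud labels indefinitely, even though fraud is equally common in both regions; with a stronger feedback rule, the skew would grow rather than merely persist.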
2. Incomplete data
Another way AI can go wrong is if the system is built on incomplete data. This occurs when the data only includes a small subset of the population or certain demographics. For example, if you’re building a model that predicts injury severity of a workers’ comp claim, but the data only includes people under the age of 40, then it may wrongly predict treatment costs and recovery timelines for older workers, who may not recover as quickly from similar injuries.
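A small sketch can illustrate the risk. Here the assumed ground truth is purely hypothetical: recovery time grows nonlinearly with age, but the straight-line model is fit only on workers aged 20 to 39:

```python
# Toy illustration (all numbers hypothetical): recovery time grows
# nonlinearly with age, but the training data only covers ages 20-39.
def true_recovery_weeks(age):
    return 4 + 0.002 * age ** 2  # assumed ground truth for this sketch

train_ages = list(range(20, 40))
train_y = [true_recovery_weeks(a) for a in train_ages]

# Ordinary least-squares fit of a straight line to the young cohort.
n = len(train_ages)
mean_x = sum(train_ages) / n
mean_y = sum(train_y) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(train_ages, train_y))
         / sum((x - mean_x) ** 2 for x in train_ages))
intercept = mean_y - slope * mean_x

for age in (30, 60):
    pred = intercept + slope * age
    print(f"age {age}: predicted {pred:.1f} wk, actual "
          f"{true_recovery_weeks(age):.1f} wk")
```

The fit looks fine within the training range (age 30 is predicted within a fraction of a week) but understates recovery time by nearly two weeks at age 60, an age the model never saw.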
3. Gaps in subject matter expertise
While data scientists are critical to building AI models, you also need business and subject matter experts involved in the model-building process to help ensure the model output makes sense for end users. Otherwise, you’ll have algorithms that produce patterns with no practical application. Their involvement helps ensure outcomes are understandable for business purposes and aren’t biased.
The framework for mitigating bias in AI
While it’s easy to allow biases to creep into algorithms, you can help safeguard against this issue by ensuring your data is robust and representative and that you have the right team developing your solution.
The more relevant data you feed into a model, the better. Including industry-wide data and other third-party data assets makes for a richer data recipe and a model that is both stronger and more explainable. Protecting against biases also requires rigorous quality control and business tools and techniques to help uncover blind spots. Finally, the data should be representative of the problem you’re trying to solve.
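One simple quality-control technique is to compare the model’s flag rate across subgroups, such as regions that may act as demographic proxies. This hypothetical check is only a sketch: the data is illustrative, and the 0.8 threshold borrows the common “four-fifths rule” heuristic from U.S. employment law:

```python
# Hypothetical QC check: compare the model's flag rate across a
# protected or proxy attribute (here, region) to surface blind spots.
# Flags and regions are illustrative, not real data.
flags =   [1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0]
regions = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

def flag_rate_by_group(flags, groups):
    totals, hits = {}, {}
    for f, g in zip(flags, groups):
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + f
    return {g: hits[g] / totals[g] for g in totals}

rates = flag_rate_by_group(flags, regions)
# "Four-fifths rule" heuristic: a ratio below 0.8 warrants a closer look.
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio: {ratio:.2f}")
```

Here region A is flagged at three times the rate of region B; a disparate-impact ratio well below 0.8 doesn’t prove bias by itself, but it’s a blind spot worth investigating before the model goes into production.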
Your framework should also involve the right team of engineers, data scientists, business experts, and subject matter experts. This can include technology professionals who have the problem-solving capabilities to root out biases, claims professionals who can give insights into decision-making processes, and medical professionals who can provide insights into injuries and treatments.
An ethical approach to AI
AI adoption is only going to grow. While it’s still in its early stages in insurance, carriers should put procedures and safeguards in place now to help mitigate potential ethical issues and lessen the chance of problems and discriminatory outcomes later. Along with the proper framework for AI and rigorous quality assurance, insurers should also consider an ethical AI policy with documented guidelines, analytics governance, and enforcement procedures.
At Verisk, we develop AI solutions with strict adherence to our ethical AI policy. With our extensive data assets, domain expertise, and a team of data scientists, we’re helping insurers leverage the benefits of AI while striving to ensure the technology is fair and ethical.
For more on our approach to AI, check out our resources on ethical AI.