Myths of Artificial Intelligence: What Insurers Need to Know

By Karthik Balakrishnan

The term artificial intelligence, or AI, has become as ubiquitous as air and water, yet there’s extensive misunderstanding about what AI is (and is not) and what it really can (and cannot) do.

At a recent insurance conference, a speaker asked the audience for a show of hands if they had heard the term AI. Nearly 90 percent of the room raised their hands. The speaker then asked people to put their hands down if they first heard the term AI within the last year—and about a quarter of the hands went down. At two years, roughly half the hands were down, and at the four-year mark, none of the hands remained raised.

When audience members were probed on what AI means, their responses were generic and vague. Self-driving cars came up as the most prominent example of AI—recognizing things in images—and a few people mentioned finding patterns in data, although they couldn’t describe how.

This informal poll would suggest that AI has come about in recent years and has to do with image recognition and perhaps a few more things.

Nothing could be further from the truth. AI is much older—and far more diverse—than that.

Birth of AI

From time immemorial, humans have been fascinated with intelligence, whether it be philosophers pondering how intelligence might arise in beings or authors creating stories like The Adventures of Pinocchio, in which humanoid intelligence is artificially created.

The possibility of actually creating machine intelligence came about with the invention of the electronic digital computer at Iowa State University in 1939. Here now was a machine that could be programmed to execute different tasks, including ones that traditionally required human intelligence, such as solving systems of linear equations or learning to play games like backgammon, chess, and Go.

In 1950, a British mathematician named Alan Turing proposed a somewhat practical test of intelligence that has since been referred to as the Turing Test. In this test, a human interrogator interacts with a human expert and a computer but doesn’t know which is which. The interrogator asks a series of questions (in a specific domain), and the computer or the human provides the responses. If, after sufficient questioning, the interrogator is unable to distinguish between the human and the computer, the computer is declared to have passed the Turing Test of intelligence. This notion of a computer imitating an intelligent human, and thereby being considered intelligent itself, gave its name to the movie “The Imitation Game,” which was based on Alan Turing’s life and work.

The maturing of computers in the 1950s and the notion of the Turing Test of intelligence spurred tremendous research and development, and in 1956, a specialized conference was held at Dartmouth College, where the term artificial intelligence (AI) was coined to formalize the notion of machine-based intelligence.

AI and machine learning

Early efforts in AI produced phenomenal results. Computers were programmed to win at checkers, solve algebraic problems, prove mathematical theorems, and speak English. These solutions, however, were “programmed intelligence,” where the computer was programmatically instructed to make specific moves or produce certain actions when faced with specific input scenarios, for instance, a board position in a game of checkers.

Defining this space of input scenarios and codifying the corresponding actions (called knowledge engineering) was daunting for most nontrivial problems and soon became a major hurdle in extending AI to real-world problems.
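To make the style of “programmed intelligence” concrete, here is a toy sketch in Python, built around an entirely made-up miniature board game, of a hand-built table mapping input scenarios to actions. It is illustrative only, but it shows the kind of knowledge engineering that quickly becomes unmanageable for real-world problems:

```python
# A toy illustration of "programmed intelligence": every input scenario and
# its response is written out by hand. The game positions and moves below are
# made up purely to show the style of knowledge engineering described above.
PROGRAMMED_MOVES = {
    "X..|.O.|...": "place X at top-right",
    "X.X|.O.|...": "place X at top-middle",   # completes the top row
    "XOX|.O.|...": "block at bottom-middle",
}

def choose_move(board: str) -> str:
    # No learning happens here: unknown positions simply have no answer.
    return PROGRAMMED_MOVES.get(board, "no programmed response for this position")

print(choose_move("X.X|.O.|..."))
```

Every position the program might face has to be anticipated and encoded by hand; anything outside the table simply has no answer.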

A key advancement happened in 1963 when two researchers created a program for pattern recognition that could evaluate its own operations and adjust itself to improve its own performance. In short, here was a system that could learn from its behaviors (and mistakes) and acquire the required intelligence through experience. It didn’t need to be programmed. The field of machine learning (ML) was thus born and spawned research efforts that led to the invention of a plethora of algorithms for learning from experience, whether it be computers teaching themselves to play games or robots learning to navigate their environments. Machine learning algorithms also allowed computers to discover and learn interesting and valuable patterns buried in data, a practice that has now been popularized by the term data science.

AI is thus the quest to build computer-based systems that can operate like human experts in their domains of expertise, whether programmed, self-learned, or hybrid. Machine learning is a subset of AI and encompasses a broad class of techniques and algorithms that support the quest of AI by enabling a computer to “learn” or acquire the required expertise or intelligence. Statistical modeling techniques—for instance, logistic regression and generalized linear models (GLMs)—are in essence machine learning algorithms because they infer a model (encapsulation of patterns) from data.
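For contrast with the hand-coded table above, here is a minimal sketch, in Python with scikit-learn, of a model being inferred from data rather than programmed. The feature names and data are synthetic and purely illustrative, not an actual insurance dataset or any particular carrier’s model:

```python
# A minimal sketch of "learning a model from data," using logistic regression
# as the machine learning algorithm. Feature names and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Hypothetical claim features: [claim_amount_in_thousands, days_to_report]
X = rng.normal(loc=[10.0, 5.0], scale=[4.0, 3.0], size=(500, 2))

# Synthetic rule standing in for historical outcomes: large, late-reported
# claims are labeled 1 (e.g., "refer for review"), everything else 0.
y = ((X[:, 0] > 12.0) & (X[:, 1] > 6.0)).astype(int)

# The model is never told the rule; it infers (learns) it from the examples.
model = LogisticRegression().fit(X, y)

# Score a new, unseen claim: a $18K claim reported after 9 days.
new_claim = np.array([[18.0, 9.0]])
print(model.predict_proba(new_claim)[0, 1])  # estimated probability of referral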

While many AI/ML algorithms were developed, real-world applications were limited by two factors: (1) insufficient data to train the ML models and (2) expensive compute/storage requirements that made their application impractical.

Moore’s Law and the cloud

Advances in computation and storage technologies and the emergence of cloud computing in the last decade have paved the way for AI and machine learning to reemerge in more diverse and practical forms.

Moore’s Law—named after Gordon Moore, the cofounder of Fairchild Semiconductor and Intel—is a projection of an empirical observation that the number of transistors in a dense integrated circuit (IC) doubles about every two years. A rough translation of Moore’s Law (albeit with some debate) is that the computation capability embeddable in the same size IC will double every two years, or the cost to make the same circuit will halve every two years.

Although it has slowed in recent years, Moore’s Law has largely held for the last 50 years; compounded, a doubling every two years amounts to roughly a thousandfold gain (2^10 = 1,024) over two decades. The result has been stunning improvements in computation capabilities and storage capacities alongside steadily falling costs, which has made the storage of large volumes of data and its processing significantly more cost-effective than a few decades ago.

Graphics processing units (GPUs), originally designed as graphics controllers for computer displays, have proved uniquely suited to a number of compute-intensive AI algorithms, especially in the neural network community. This has enabled “deep learning,” which uses neural network algorithms with specialized multilayer architectures, to become practical. These algorithms excel at extracting patterns from information-rich, multimodal data, such as images, videos, speech, text, and, more recently, IoT data streams, and are finding powerful applications in diverse industries.
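As a toy illustration of what “multilayer” buys you, here is a minimal sketch, again in Python with scikit-learn, of a small two-hidden-layer network learning a nonlinear (XOR-style) pattern that no purely linear model can capture. Real deep learning systems for images or speech are vastly larger and run on GPUs; this only conveys the idea:

```python
# A minimal multilayer ("deep") neural network sketch using scikit-learn's
# MLPClassifier. The XOR-style data is illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

# XOR pattern: the classes cannot be separated by a single straight line,
# so a purely linear model (e.g., logistic regression) cannot learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# Two hidden layers of 8 units each: a small "multilayer architecture."
net = MLPClassifier(hidden_layer_sizes=(8, 8), activation="tanh",
                    solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, y)

print(net.predict(X))  # should recover the XOR labels: [0 1 1 0]
```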

Finally, the emergence of the public cloud, where one can simply “rent” servers or software for as long as they’re needed, coupled with powerful open-source tools like R and Python, has further allowed companies to begin analyzing their data without the historical barrier of large up-front capital (CapEx) investments in hardware and software.

These trends have directly influenced diverse industries to begin collecting and storing operational data, which can then be analyzed and modeled using AI and machine learning techniques to create business value.

AI in claims and underwriting

Albeit not stated in those terms, the property/casualty industry has been leveraging AI and machine learning for decades. In addition to marketing applications—using statistical and predictive models to score prospects and predict attrition—the industry has also used generalized linear models for ratemaking in personal lines (auto, homeowners) and increasingly so in commercial lines (businessowners, small commercial, and others). Most large insurers now have filed rating plans that use elements of predictive modeling and AI, and Verisk offers a series of such rating-related products under the ISO Risk Analyzer® umbrella that are available for carriers to use.

While the use of AI has increased for underwriting risk-scoring models and claims solutions (including fraud detection, triage modeling, and complexity estimation), recent trends in advanced AI, especially in image and speech processing, are heralding new opportunities for the property/casualty industry. Straight-through processing (STP), or touchless handling, is no longer only a possibility, but instead is fast becoming a reality. Faced with the daunting challenge of replacing retiring experts, many insurers are being forced to look at alternative solutions, including using computer programs to underwrite risks or adjust claims automatically.

As an example, consider a scenario with a touchless claim. Even on the simplest of claims, adjusters have to perform a number of actions. They need to collect all the details of the loss: what happened, where, when, who was involved, etc. The adjuster must then validate coverage as well as the loss event (for example, Did hail really fall on the roof?). Thereafter, the adjuster has to make a series of decisions, including triaging the claim (How complex is it likely to become? Should a more senior adjuster be assigned? Should a nurse case-manage the injury?). The process also involves evaluating liability, estimating settlement amounts, pursuing recoveries (subrogation, salvage, and so on), and negotiating closure. Building systems to automate these adjuster decisions with high reliability and accuracy is not an easy task.

However, technology innovations coupled with new AI and machine learning methodologies are making such automation possible. For instance, leveraging the ubiquity of smartphones, many insurers can now interact directly with their insureds, often at the scene of the loss. Chatbot and voicebot technologies can be used along with speech analysis and natural language processing algorithms to craft automated yet humanized contact with the insured to collect relevant loss details, including photos and videos of the damaged property. Image analysis techniques can automatically identify and quantify the damage from those photos and videos. Sophisticated fraud detection algorithms, including forensics algorithms that identify any manipulation or tampering of the photos and videos, can help ensure that all aspects of the claim are legitimate.
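To make the idea tangible, here is a hypothetical, deliberately simplified sketch in Python of how outputs from such models might be combined into a straight-through-processing decision. The field names, thresholds, and routing rules are illustrative assumptions only, not any carrier’s actual workflow or a description of a specific product:

```python
# A hypothetical, simplified sketch of combining model outputs into a
# straight-through-processing (STP) decision for a simple claim.
# All thresholds, field names, and rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ClaimSignals:
    coverage_verified: bool   # policy/coverage check passed
    damage_estimate: float    # e.g., from photo-based image analysis
    fraud_score: float        # 0-1, from a fraud detection model
    complexity_score: float   # 0-1, from a triage/complexity model

def route_claim(s: ClaimSignals) -> str:
    """Decide whether a claim can be settled touchlessly or needs an adjuster."""
    if not s.coverage_verified:
        return "refer: coverage review"
    if s.fraud_score > 0.7:
        return "refer: special investigations"
    if s.complexity_score > 0.5 or s.damage_estimate > 10_000:
        return "assign: senior adjuster"
    return "auto-settle: straight-through processing"

# Example: a small, low-risk hail claim flows straight through.
print(route_claim(ClaimSignals(True, 2_400.0, 0.05, 0.12)))
```

In practice, each of these signals would come from its own model (image analysis, fraud scoring, triage), and the routing thresholds would be set and monitored by the carrier, but the overall shape of the decision is the same.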

In sum, contrary to the informal conference poll results, AI has been around for more than half a century, but only in the last five years have advances in technology, computing, and storage made some of its more exciting capabilities practical. In what appears to be a perfect storm, many property/casualty businesses are not only facing resource challenges but also having to transform to meet customer expectations of high-tech touchpoints and instant gratification, needs that diverse AI/ML algorithms seem perfectly designed to address.

Karthik Balakrishnan

Dr. Karthik Balakrishnan is senior vice president and head of the Analytics Center of Excellence – Claims for ISO, a Verisk (Nasdaq:VRSK) business.