As digital manipulation evolves, the insurance sector is facing an unprecedented surge of image and document fraud. At the Verisk Insurance Conference London 2024, experts provided insights into how insurers are adapting, even to threats they’ve never seen before.
The rise of digitally enabled fraud is reshaping the insurance industry, putting the means of document manipulation in the hands of virtually everyone. A fraud landscape once confined to hand-forged documents and exaggerated claims now presents far more complex challenges.
Fraudsters are exploiting advances in technology, particularly image and document manipulation fuelled by the growing accessibility of artificial intelligence (AI). This shift was a central theme at the Verisk Insurance Conference London 2024, where industry leaders convened to share strategies for tackling these threats even as they evolve in real time.
From greed to need
“Fraud used to be driven by greed,” said James Burge, head of counter fraud at Allianz Insurance. “But now, we’re seeing a shift to need.” This shift is partly a consequence of the cost-of-living crisis, which has put financial pressure on individuals and businesses alike. Opportunistic fraud has become more prevalent, with perpetrators justifying their actions as a form of necessity rather than malice.
This trend isn’t limited to personal lines insurance. Burge highlighted a surge of fraud in commercial property and casualty insurance, where exaggerated claims, especially for pre-existing damage, have spiked. “In the casualty space, we’ve seen some big numbers this year,” he noted. “This is where people are genuinely injured but claim much more than the incident warranted.”
Simon Mattless, claims counter fraud lead at Aviva, echoed these concerns, pointing to the rise of opportunistic fraud. “One of the biggest trends we see is exaggeration,” he said.
“Organised fraud is still there, but it’s becoming more fragmented and more difficult to identify,” Mattless added, noting that the proliferation of technologies enabling document and image manipulation has allowed even disparate groups of fraudsters to become highly sophisticated.
The rise of deepfakes
While inflationary pressure has helped individuals justify their fraudulent claims, generative AI has made even nonprofessional fraudsters more advanced than ever before. AI technologies can assist in creating deepfakes, in which an image is entirely AI-generated, and shallowfakes, in which a real image is subtly altered. The same technology can help draft fraudulent documents such as medical records.
Even more concerning is that these technologies are evolving as quickly as the tools to combat them. “Opportunistic fraudsters don’t have the same data footprint as organised ones, which is a challenge,” said Kaye Sydenham, product manager for anti-fraud, Claims UK at Verisk.
“We’re developing a model to detect deepfakes, but even in the time we’ve been in development, AI has changed so much that you can ask it to add just a sliver of deepfake. We’re now testing our model to see if it can detect that. The technology is moving fast, and we have to be able to respond quickly.”
It has been estimated that, by as early as next year, 90% of online content may be AI-generated. “It’s going to be increasingly difficult to trust what you see and, indeed, what you hear,” said Neil Jones, head of claims investigation unit, Claims UK at Verisk.
Adaptations to modern fraud
As the panel discussion concluded, one message was clear: insurers must be proactive in fortifying their defences against fraud. “Fraudsters are constantly evolving,” Jones warned. “But many of their tactics remain the same. It’s about getting the right processes, the right technology, and the right people in place.”
Sydenham emphasised the role of innovative solutions that blend multiple detection techniques. This might include metadata analysis, such as discovering whether an image was taken after the alleged event or opened in a photo editing app, and pixel analysis, where subtle changes in an image’s structure are detected through disruptions in the pattern of its pixels. “We’re developing a product that runs multiple checks within one stream. You might miss them on pixel manipulation, but you catch them on metadata,” she said.
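To make the two techniques Sydenham describes concrete, here is a minimal sketch of what a metadata check and a crude pixel check might look like. Everything here is illustrative: the EXIF field names are standard, but the editor list, the direction of the date comparison, and the `duplicated_blocks` helper are assumptions for demonstration, not how any vendor's product actually works. Real systems parse metadata directly from image files and use far more robust forensic methods than exact block matching.

```python
from datetime import datetime

# Software tags suggesting the image passed through an editing app
# (an illustrative, non-exhaustive list).
KNOWN_EDITORS = {"adobe photoshop", "gimp", "snapseed", "facetune"}

def metadata_red_flags(exif: dict, incident_date: datetime) -> list:
    """Return human-readable flags raised by an image's metadata.

    `exif` is assumed to be a dict of already-extracted EXIF fields,
    e.g. {"DateTimeOriginal": "2024:03:02 14:10:00", "Software": "GIMP 2.10"}.
    """
    flags = []

    raw_date = exif.get("DateTimeOriginal")
    if raw_date:
        taken = datetime.strptime(raw_date, "%Y:%m:%d %H:%M:%S")
        # A capture date before the claimed incident suggests the
        # photographed damage may pre-date the event being claimed for.
        if taken < incident_date:
            flags.append("photo taken before the alleged incident")
    else:
        flags.append("capture date missing (may have been stripped)")

    software = exif.get("Software", "").lower()
    if any(editor in software for editor in KNOWN_EDITORS):
        flags.append("opened in a photo editing app: " + exif["Software"])

    return flags

def duplicated_blocks(pixels, size=2):
    """Count pixel blocks appearing more than once in a grayscale image
    (a 2-D list of intensities). Exact duplicates can hint at copy-move
    manipulation; production detectors tolerate compression and noise."""
    seen = {}
    h, w = len(pixels), len(pixels[0])
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            block = tuple(pixels[y + dy][x + dx]
                          for dy in range(size) for dx in range(size))
            seen[block] = seen.get(block, 0) + 1
    return sum(count - 1 for count in seen.values() if count > 1)
```

Running both checks "within one stream", as the panel put it, means a claim image that passes the pixel test can still be flagged by its metadata, and vice versa.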
However, the panellists agreed that the key lies in integrating detection tools with human expertise. “You can’t underestimate the power of asking general questions,” Burge said. “A lot of detection still comes down to what a person says and whether their story makes sense.” While AI and machine learning can highlight potential red flags, they cannot replace human intuition.
This is especially true as technology continues to evolve. “The machine isn’t going to find anything it’s never seen before,” Mattless said. “We need experts to help influence what the machine is looking for.”