
Can Claims Examiners Trust the Information Presented by AI?

Published On
September 5, 2024

I recently hosted a GenAI in Claims webinar with industry experts Mike Fiato, Chief Claims Officer at Allstate, Brandy Patrick, President at Lexitas, and Amit Man, Co-Founder and CTO of DigitalOwl. During our discussion, we explored the potential and pitfalls of using GenAI in claims, highlighting both its transformative capabilities and the challenges it presents. One of the standout questions from our audience captured a common concern about finding the right balance between leveraging innovative technology and maintaining trust. 

The Potential of GenAI in Claims Webinar

How can claims examiners and analysts trust the accuracy of AI-generated information without creating more work for themselves by constantly verifying the AI’s output? This question led to a valuable discussion among the panelists, and I wanted to share their insights with you here.


Amit Man began by emphasizing the importance of transparency and accountability in AI solutions. "With click-to-evidence, users can quickly validate specific details in just a few seconds by clicking a link that directs them to the original page," said Man.


But for Amit, trust is more nuanced than simply proving that an answer is correct; it has to be earned through the users' own experience. He suggested giving claims professionals hands-on time with the AI on cases whose outcomes they already know, so they can compare the AI's results against decisions they trust. When the AI's suggested decision matches or improves on the original one, users begin to build confidence in the tool.

Agreeing with Amit, Brandy Patrick shared a real-life example of a case in which a paralegal spent 500 hours reviewing documents, billing for every hour. When the paralegal ran the same task through an AI tool, it surfaced only one additional detail the paralegal had missed, and it did so in a fraction of the time. That efficiency, coupled with the paralegal's oversight, could have produced dramatic time savings on the case.

Brandy also noted that false positives are a frequent stumbling block for building trust in the AI, as users can become concerned when it introduces noise into the review process. However, she emphasized that this type of mistake is actually preferable: it is far better than the AI overlooking something important, and it can be easily checked by humans.

Mike Fiato further added to the discussion by underscoring the importance of setting realistic expectations when integrating AI into claims processes. He pointed out that no tool is flawless and highlighted the importance of understanding the strengths and limitations of AI, with a focus on aligning AI's capabilities with the practical needs of claims professionals.

As the moderator of the webinar, I was excited to have our expert panelists tackle this question, which touches on one of the core concerns many have in the claims industry. Thank you Mike and Brandy for joining the discussion!

If you missed the session, you can watch the full recording here.

Jim Sorrells
Sales Director, Claims, DigitalOwl

About the author

Jim Sorrells has over 30 years of experience in the P&C insurance industry, 26 of them spent leading claim organizations at a major carrier. He serves as Sales Director of Claims Services, where he is transforming the medical data review process in Bodily Injury, Uninsured Motorist, and Workers' Compensation claims.