Textual Analysis of Insurance Claims with Large Language Models

Abstract
This study proposes a comprehensive and general framework for examining discrepancies in textual content using large language models (LLMs), broadening application scenarios in insurtech and risk management, and conducts empirical research grounded in actual needs and real-world data. Our framework uses OpenAI's API to embed texts and project them into external categories, then applies distance metrics to quantify discrepancies. To identify significant disparities, we design prompts to analyze three types of relationships: identical information, logical relationships, and potential relationships. Our empirical analysis shows that 22.1% of samples exhibit substantial semantic discrepancies, and that 38.1% of the samples with significant differences contain at least one of the identified relationships. The average processing time per sample does not exceed 4 seconds, and every stage of the pipeline can be adjusted to actual needs. Backtesting results and comparisons with traditional NLP methods further demonstrate that the proposed method is both effective and robust.
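As a minimal sketch of the distance-based discrepancy check described above: assuming the texts have already been embedded (e.g. via an embedding API), a cosine distance between two embedding vectors can be compared against a tunable threshold to flag semantically divergent pairs. The function names and the threshold value here are illustrative assumptions, not the paper's actual parameters.

```python
import math

def cosine_distance(u, v):
    """Cosine distance (1 - cosine similarity) between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def is_discrepant(emb_a, emb_b, threshold=0.35):
    """Flag a pair of texts as discrepant when their embedding distance
    exceeds a threshold; 0.35 is an illustrative placeholder value."""
    return cosine_distance(emb_a, emb_b) > threshold

# Toy vectors standing in for real embeddings:
identical = cosine_distance([1.0, 0.0], [1.0, 0.0])   # distance ~0
orthogonal = cosine_distance([1.0, 0.0], [0.0, 1.0])  # distance 1
```

In practice the threshold would be calibrated on labeled pairs, since embedding models differ in how tightly they cluster paraphrases.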