Generative AI challenges: balancing potential and pitfalls
While there’s no doubt about the enormous potential of generative AI in insurance, the industry will need to overcome several obstacles to fully realise the benefits.
1. AI inaccuracies and the need for critical thinking
As discussed in our previous blog post, machine learning models can generate factually incorrect content with high confidence, a phenomenon known as hallucination. To date, no comprehensive solution exists for this issue. As a consequence, these models cannot operate autonomously, nor should they replace your existing workforce. Instead, the focus should be on cultivating a collaborative environment between human experts and AI, which can lead to broader acceptance and adoption of AI technologies, and an optimal outcome for your AI-powered business transformation.
Leadership teams must assure staff that AI is intended to augment their capabilities, and foster a culture of experimentation – ideally starting with internal use cases. Given the nature of these new models, it is crucial not to accept their outputs at face value. As such, leaders should champion critical thinking within their teams to ensure the effective implementation of AI solutions.
‘These models can generate factually incorrect content with high confidence, a phenomenon known as hallucination. Consequently, these models cannot operate autonomously, nor should they replace your existing workforce’.
2. The security challenge of AI insurance applications
Security is another significant concern. Since cutting-edge generative AI models are typically proprietary to organisations like OpenAI or Cohere, deploying them in a dedicated cloud or on-premises environment is currently impractical. This constraint makes it difficult to regulate the models and their associated data flows.
There are ongoing concerns about sharing sensitive information, such as client data or proprietary company knowledge, with machine learning models, as well as uncertainties surrounding copyright. Regulatory policies are still evolving to keep pace with the latest developments. Therefore, initial experiments should prioritise public data or internal data with minimal sensitivity. What’s more, personally identifiable information (PII) must be sanitised before the data can be used within the legal limits of regional data protection laws.
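As a simple illustration of that last point, the sketch below shows a pattern-based redaction step that could run before any transcript or document leaves the organisation. The `redact_pii` helper and its patterns are hypothetical and deliberately simplistic; a production system would typically rely on a dedicated PII-detection service rather than hand-written regular expressions.

```python
import re

# Hypothetical, deliberately simplistic patterns; a real deployment would use a
# dedicated PII-detection service and cover names, addresses, policy numbers, etc.
# Note: the IBAN pattern runs before the phone pattern so long digit runs inside
# account numbers are not misclassified as phone numbers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace recognised PII with placeholder tokens before the text is sent to an external model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Customer jane.doe@example.com called from +44 20 7946 0958 about claim GB29NWBK60161331926819."))
# -> "Customer [EMAIL] called from [PHONE] about claim [IBAN]."
```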
When conversations are recorded, converted to text, and summarised by an engine, it’s key to implement non-repudiation methods that guarantee the origin and integrity of the data. Generated summaries are not perfect and therefore need to be reviewed and edited by the call agent.
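One way to approach non-repudiation, sketched below under the assumption that transcripts and summaries are stored as plain text, is to hash each artefact and sign the digest with a key held by the recording service, so that origin and integrity can be verified later. The use of the `cryptography` library and an Ed25519 key here is purely illustrative; an actual deployment would manage keys through an HSM or a cloud key-management service.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical signing key held by the call-recording service; in practice this
# would live in an HSM or key-management service, never in application code.
service_key = Ed25519PrivateKey.generate()

def seal(artefact: str) -> tuple[bytes, bytes]:
    """Hash an artefact (transcript or generated summary) and sign the digest."""
    digest = hashlib.sha256(artefact.encode("utf-8")).digest()
    return digest, service_key.sign(digest)

transcript = "Caller reports water damage in the kitchen, first noticed on 12 March."
digest, signature = seal(transcript)

# Verification (e.g. during a later dispute): raises InvalidSignature if either
# the transcript or the signature has been tampered with.
service_key.public_key().verify(
    signature, hashlib.sha256(transcript.encode("utf-8")).digest()
)
```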
To avoid disputes over claims between the customer and the insurer, every alteration of the generated text needs to be logged in an audit trail to ensure traceability.
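A minimal sketch of such an audit trail, assuming edits are captured each time the agent saves changes to a summary, might look like the following. The field names, identifiers, and the in-memory list are placeholders for whatever append-only store (database table, WORM storage, event ledger) the insurer already operates.

```python
import hashlib
import json
from datetime import datetime, timezone

# Placeholder for an append-only store such as a ledger table or WORM bucket.
audit_trail: list[dict] = []

def log_edit(claim_id: str, editor: str, before: str, after: str) -> None:
    """Record who changed a generated summary, when, and how, so every alteration is traceable."""
    audit_trail.append({
        "claim_id": claim_id,
        "editor": editor,
        "edited_at": datetime.now(timezone.utc).isoformat(),
        "before_sha256": hashlib.sha256(before.encode("utf-8")).hexdigest(),
        "after_sha256": hashlib.sha256(after.encode("utf-8")).hexdigest(),
        "after_text": after,  # or a diff, depending on retention requirements
    })

# Hypothetical claim ID and agent name, for illustration only.
log_edit(
    claim_id="CLM-2024-0042",
    editor="agent.smith",
    before="Customer reports minor water damage.",
    after="Customer reports water damage to kitchen flooring; plumber visit booked.",
)
print(json.dumps(audit_trail, indent=2))
```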
It’s important to acknowledge that challenges from traditional machine learning approaches, such as bias and unfairness, persist. Adhering to responsible AI principles is crucial for the successful implementation of these new models. To ensure ethical and effective use, it’s essential to follow established frameworks for responsible AI development, such as the one outlined in our Responsible AI Framework.
By prioritising responsible AI practices, we can harness the power of generative AI while mitigating potential risks and fostering trust in these transformative technologies.