
Unusual View: Insurance AI Has to be Explainable. Period.


Foreword

To further AI understanding and adoption in the insurance industry, Lazarus is producing a series of articles titled Artificial Intelligence - Insights for Insurance, or “AI-II.” Several of these Insights will focus on prompting, as prompting is essential to implementing enterprise-scale AI solutions. In this Insight, we will discuss some fundamental concepts, while later articles will dive much deeper into prompting. The viewpoints expressed in this Insight come from a combination of Lazarus’ expertise in large language models (LLMs) and our hands-on experience.

Introduction

For insurance industry veterans, the title and premise of this Insight may not be unusual at all; they may be exactly what is expected. However, some in the industry make it sound like AI must be treated as a complete “black box” technology. This view may be one reason some insurers have not pursued AI solutions to date. In this Insight, we will establish why explainability is a reasonable expectation and how the right technology partner can assist with AI understanding and with navigating regulation.

Explainability Explained

In the past few years, AI has moved to the forefront of insurance industry conversations. Discourse has been generally positive, with some insurers already ideating, testing, and implementing AI tools with good results. This Insight will address how CEOs, board members, and other business leaders should think about the expectation of explainability in AI.

Having explainability as an end goal is more realistic than demanding full transparency because it puts the responsibility for explaining model outputs on the technologists and data scientists. “Transparency,” by contrast, seemingly puts the onus on users to explain data science concepts. Explainability allows users to play their roles and data scientists to play their roles.

Leadership Expectations for Explainability 

To explore this topic further, let’s start with the aforementioned anti-pattern of explainability: “AI is a black box.” This means that outcomes from AI models cannot be explained and, worse yet, that expecting an explanation is not reasonable.

In 2023, there were junctures at which some solution providers defaulted to this sentiment. Occasionally, the messaging felt like “you can’t explain AI outcomes to mere mortals.” This sentiment is bad for the adoption of AI within our industry, in both the short term and the long term.


To set expectations for AI explainability, let’s start with a paradigm for explaining human decisions. Humans still possess the most complex machine we know of: the brain. The brain has billions of neurons, and scientists continue to struggle with, and learn more about, human decision making. However, from the perspective of insurance regulators or executive oversight, when insurance personnel have made a decision that needs to be explained, that explanation has to be made in a way the questioner understands.

To take the position “humans are very complex and have billions of neurons, so that decision can’t be explained” is prima facie absurd. Just as absurd would be the position “unless you are a neuroscientist, you can’t understand human decisioning.” Regulators don’t care which neuron, or series of neurons, in the brain fires. The regulator is focused on the fact that the company can explain the decision, or pattern of decisions, and takes accountability. The CEO of the company has the identical expectation.


Following this paradigm, the correct way for insurers to think about AI explainability is to take the view “we need to be able to explain outcomes to mere mortals.” A key part of this expectation is holding technology partners accountable; with both insurer and AI provider roles clarified, rationality will ensue. The extremes of “everything is opaque” and “everything is transparent, and end users can explain on the fly” are replaced by a pragmatic approach in which roles are properly played.

To be fair to technology solution providers, not all have taken the view that all AI outcomes must be opaque. Legitimate AI product firms will endeavor to provide explainability and be sensitive to industry responsibilities, even to the point of looking forward to engagement with regulatory bodies. The fact that explainability is extremely difficult is no reason to set a different standard. Insurers need to ask their technology providers directly how they address explainability and how an AI solution partnership will be supported. If you find yourself trying to work with a tech firm that is not aligned on these principles, find someone else.
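To make that expectation concrete, here is a minimal, hypothetical sketch of the kind of explainable output an insurer might ask a technology partner to deliver alongside a model’s raw decision. This is not a Lazarus product or any provider’s actual API; the class and field names are invented purely for illustration of the principle that every outcome should travel with a plain-language rationale, its supporting evidence, and an accountable human reviewer.

```python
# Hypothetical illustration only (not a real provider API): one way an
# AI-assisted decision could be packaged so a non-technical reviewer or
# regulator can follow it without knowing which "neurons fired."
from dataclasses import dataclass, field
from typing import List


@dataclass
class ExplainedDecision:
    decision: str                                        # the outcome itself
    rationale: str                                       # plain-language reason a "mere mortal" can read
    evidence: List[str] = field(default_factory=list)    # inputs the outcome relied on
    accountable_reviewer: str = "unassigned"             # the human who owns the decision


# Example of the record an insurer might expect back with each model output.
example = ExplainedDecision(
    decision="refer to underwriter",
    rationale="Reported roof age conflicts with the inspection report on file.",
    evidence=["application field: roof_age = 5 years", "inspection report dated 2019"],
    accountable_reviewer="senior underwriter",
)

print(f"{example.decision}: {example.rationale}")
```

Whatever form a provider’s actual tooling takes, the test is the same: can the decision, its reasoning, and its accountable owner be read and defended by someone who is not a data scientist?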


Summary

This Insight presents a perspective on the concept of explainability in AI. The topic will be of intense interest while both state and federal authorities work through the rules of the road. In the interim, insurers need to strive for explainability and hold their partners accountable.

About Lazarus

Lazarus is an AI technology company that develops Large Language Models (LLMs) and associated solutions for industries such as insurance. The team at Lazarus is available to discuss all your AI needs regardless of use case or industry. (And yes…Lazarus strives every day to make our AI solutions explainable!)
