Insurers: Transparent AI at the service of pricing agility – Akur8

AI has been at the forefront of technology discussions for a number of years now, and it has brought the insurance world a plethora of benefits.

The current pandemic has forced insurers to expand their digital operations, and many are using AI technology to deepen their offerings. While this is a good move for the industry, insurers need to make certain their AI and pricing models remain transparent to ensure customers are treated fairly.

Akur8 COO Anne-Laure Klein and chief of sales Brune de Linares examine the rising need for artificial intelligence and how the current coronavirus pandemic has heightened the need for transparency.

Akur8 is an AI solution developer that helps insurers make better use of their data; it recently closed an €8m Series A round.


In the world of insurance, increasing pricing sophistication should not come at the expense of treating policyholders fairly. Artificial Intelligence has an undeniable role to play here; however, “explainable” Artificial Intelligence is not enough to ensure customers are treated in a fair and impartial way. Only “transparent” Artificial Intelligence can offer such guarantees.

It is crucial that the insurance industry urgently accelerate its pricing processes, which lag behind what customers expect and what the competitive landscape demands.

The current economic crisis, which the IMF already considers the worst recession since the Great Depression, is hitting all industries. Insurance companies will not be spared. Specific areas of the economy will affect them in particular, such as the slump in the car industry.

In this context, insurers urgently need to reconsider their approach to evaluating and understanding risk: fewer miles driven, fewer claims, reduced customer spending. These effects require a total rethink of risk modelling, of how models are used and, of course, an adjustment in pricing.

Reactivity, coupled with analytical accuracy, is now more than ever a major and pressing challenge. Time is of the essence. In order to preserve their economic model and ensure their survival, insurance companies must make important pricing decisions quickly but also confidently.

Artificial Intelligence brings obvious benefits for this specific use case: speed, computing power, accuracy, and the ability to handle huge volumes of data. The current context means AI is not an opportunity but a necessity for insurers, who need to improve both their precision and their reactivity.

In an industry as highly regulated as insurance, this poses a real technical challenge. The general public and institutions alike demand that the use of AI be tightly controlled in order to guarantee fair treatment of consumers who are subject to choices made by algorithms. In a speech last February about the EU’s “Digital Future”, Ursula von der Leyen, the president of the European Commission, called for a “responsible human-centric approach” to Artificial Intelligence.

This is especially imperative in the insurance industry, where the transparency of pricing models is an inescapable regulatory requirement. A controlled AI allows insurers to segment their customer base, a structural principle of insurance pricing, while also guaranteeing that this segmentation is explainable, understandable and compliant with regulatory requirements and the risk management constraints linked to solvency.

Can we then settle for an AI that is explainable?

The subtlety here lies in the difference between explainability and transparency.

An explainable artificial intelligence can indeed be explained, but only in hindsight. You could compare this to an inverted train of thought, where you start from the end and arrive at the beginning, to retrace the path that was chosen by the algorithm. Explainable Artificial Intelligence allows us to understand and justify the decision that was made by the algorithm, only once it is made and once the result is determined.

It does not provide a transparent view of the reasoning that leads to the decision, and it leaves many questions unanswered: What choice will the algorithm make in a different situation? For what reasons? Using what variables? In exactly which way?

Most Artificial Intelligence systems are highly complex: they cannot be broken down and analysed piece by piece. They function as a whole that cannot be grasped in its entirety by a human being. This inability to decompose the way Artificial Intelligence models work prevents human beings from answering important questions about how these models operate.
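
To make the distinction concrete, here is a minimal sketch of what post-hoc explainability looks like in practice, using generic scikit-learn tools on synthetic data (the feature names are invented; this is not Akur8’s code): the model is fitted as a black box, and an “explanation” is only reconstructed afterwards by probing it from the outside.

```python
# Minimal sketch of post-hoc ("explainable") AI: the model itself is a
# black box, and the explanation is reconstructed only after the fact by
# probing the fitted model. Synthetic data; feature names are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # stand-ins for e.g. driver_age, vehicle_value, annual_miles
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)

black_box = GradientBoostingRegressor().fit(X, y)  # opaque ensemble of trees

# The "explanation" starts from the finished model and works backwards:
# it tells us which inputs mattered on this data, not what the model
# will decide in a situation it has not yet been shown.
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for name, importance in zip(
    ["driver_age", "vehicle_value", "annual_miles"], result.importances_mean
):
    print(f"{name}: {importance:.3f}")
```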

The inability to analyse such decision models exhaustively prevents their use in a way that is guaranteed to be fair. Correlations between variables computed by the algorithms are almost impossible to control. This risk was well illustrated by the gender-bias controversy sparked by Apple’s credit card last November. Even though gender did not appear on the credit card application forms, the Machine Learning algorithm reconstructed this information by associating variables such as the type of spending or the retailers that users shopped at, and applied a higher risk factor to a large share of women on these grounds. The person who created the model did not build gender bias into it, and was probably not aware of the model’s behaviour before it actually impacted clients. As a result, for a couple that jointly disclosed their income, the wife could end up with a lower spending limit than her husband.
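
This proxy effect is easy to reproduce. The sketch below uses entirely synthetic data (it does not recreate the actual Apple Card models) to show how a flexible algorithm can recover a protected attribute that was deliberately excluded from its inputs, using only correlated spending variables.

```python
# Synthetic illustration of the proxy-variable risk: the protected
# attribute is removed from the inputs, yet it remains predictable from
# correlated spending features, so a black-box model can silently
# reconstruct it. This does not recreate the actual Apple Card models.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5000
gender = rng.integers(0, 2, size=n)  # hidden attribute, NOT given to the model

# Spending-pattern features correlated with the hidden attribute
spending_type = gender + rng.normal(scale=1.0, size=n)
retailer_mix = 0.8 * gender + rng.normal(scale=1.0, size=n)
X = np.column_stack([spending_type, retailer_mix])

# A flexible model recovers the excluded attribute well above chance,
# which is exactly how unintended bias can re-enter a pricing model.
accuracy = cross_val_score(GradientBoostingClassifier(), X, gender, cv=5).mean()
print(f"accuracy predicting the excluded attribute: {accuracy:.2f}")  # well above 0.5
```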

An alternative approach is to simplify a Machine Learning model into a structure that a human being can understand and interpret. Take Gradient Boosting, one of the most widespread Machine Learning technologies: when an algorithm creates a Gradient Boosting model, the resulting model is structurally a “black box”. A classic approach is to simplify this Gradient Boosting model into an association of linear or additive models (GLM – Generalized Linear Model / GAM – Generalized Additive Model), commonly used by actuaries in insurance because they are naturally transparent.

This approach allows users to understand the simplified GLM / GAM model output. However, it has two main shortcomings: the differences between the “black box” model and its simplified, understandable counterpart are unknown to the model user; and if the simplified GLM / GAM model is used on its own, it significantly under-performs the “black box” model, which negates the expected benefits of Machine Learning.
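
A rough illustration of both shortcomings, again with generic scikit-learn tools on synthetic data (the models and the size of the gap are illustrative only, not drawn from the article): the black-box model is distilled into a plain linear surrogate, and the performance gap between the two is precisely the unknown difference the model user cannot account for.

```python
# Rough sketch of the simplification approach: distil a black-box gradient
# boosting model into a transparent linear surrogate, then compare them.
# Synthetic non-linear "risk" data; the gap shown is illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(4000, 2))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.1, size=4000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = GradientBoostingRegressor().fit(X_tr, y_tr)

# Distillation: the transparent surrogate is trained to mimic the black box
surrogate = LinearRegression().fit(X_tr, black_box.predict(X_tr))

# Shortcoming 1: nothing tells the user where the surrogate and the black
# box disagree. Shortcoming 2: used on its own, the surrogate under-performs.
print(f"black box R^2: {black_box.score(X_te, y_te):.2f}")  # high
print(f"surrogate R^2: {surrogate.score(X_te, y_te):.2f}")  # much lower
```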

Nonetheless, the use of Artificial Intelligence in insurance risk modelling is a complex yet solvable equation. To solve it, the insurance modelling process must be separated from the structure created by Machine Learning algorithms. In other words, algorithms need to be capable of automatically creating these linear and additive models (GLM / GAM), which are naturally understandable.
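
As a hedged sketch of what directly producing a transparent additive model could look like with off-the-shelf tools (this uses generic scikit-learn components, not Akur8’s proprietary algorithms, and the data and feature meanings are invented): each feature is expanded into a spline basis and a Poisson GLM is fitted on top, so every coefficient of the resulting GAM-style model can be read and reviewed directly.

```python
# Minimal sketch of fitting a transparent additive model directly: each
# feature is expanded into a spline basis and a Poisson GLM is fitted on
# top, GAM-style. Generic scikit-learn tools, not Akur8's algorithms;
# data and feature meanings are invented.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(4000, 2))  # e.g. scaled driver_age, vehicle_age
lam = np.exp(0.8 * np.sin(6 * X[:, 0]) - 0.5 * X[:, 1])
y = rng.poisson(lam)  # claim counts

# One spline basis per feature, one readable coefficient per basis
# function: the fitted model is transparent by construction, before any
# prediction is made, and every term can be inspected and signed off.
gam = make_pipeline(
    SplineTransformer(n_knots=8, degree=3),
    PoissonRegressor(alpha=1e-3, max_iter=1000),
)
gam.fit(X, y)
print(gam.named_steps["poissonregressor"].coef_)  # the full, inspectable model
```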

Akur8 is an insurance pricing solution built around this core principle. Akur8 relies on transparent Artificial Intelligence: its proprietary algorithms automate the production of models that are fully transparent and understandable, which guarantees fair treatment of consumers by insurance companies. With Akur8, transparent Artificial Intelligence delivers on its promises of speed, higher performance and the ability to process a near-unlimited volume of data, while protecting consumers from unfair treatment.

Anne-Laure Klein, COO, Akur8

Brune de Linares, Chief of Sales, Akur8

