Article

Big data and advanced analytics

Big questions from a risk perspective

Risk Management Tools & Technology

October 29, 2018

The ability to harness data and apply advanced analytics to improve business processes and decision making is a core requirement in the insurance sector. However, these new techniques bring with them various secondary risks. How will risk functions need to evolve to manage these new risks?

Insurers operate in an increasingly data-rich and algorithmically driven world. Even without using external 'big data', advanced analytics present insurers with the opportunity to understand customers’ risk profiles and behaviours better than ever before. This could help, amongst other things, quantify underwriting risk much more precisely, enabling a properly bespoke cost of cover with products aimed to fit individual customers rather than groups; one pool, but many prices. But, before they are swept up in the general excitement about what analytics, artificial intelligence (AI) and machine learning can offer, companies need to consider the accompanying risks and how best to manage them. These go beyond cyber risk (on which we’ve already written extensively) and may require new skills and measurement tools to understand and manage the changing risk exposure.

Here are a few of the related topics to think about and some questions that risk teams may want to consider.

The unintended consequences of using customer data

Insights into customer behaviour arguably represent the biggest opportunity for many companies. Behavioural data can facilitate improved product pricing techniques, or the implementation of innovative underwriting rules. But, as we are beginning to see with leading insurers, behavioural data can be used to redesign the insurance product entirely, shifting from risk mitigation to risk control. With the right product and service propositions, and the trust of their customers, insurers can collect and mine 'Internet of Things' data (steps, cycle rides, health monitors), lifestyle data (gym visits, eating habits) and other behavioural information (app usage metadata). The value derived from this data mining relates partly to an improved understanding of mortality and morbidity, and partly also from the development of affinity relationships and a much more loyal customer base.

But before behavioural data is used, insurers should consider two major factors. First, there is the law. The General Data Protection Regulation (GDPR), which came into force across the European Union on 25 May 2018 (and has been enacted in the UK through a new Data Protection Act), places stringent requirements on organisations that handle the personal information of individuals. Companies have to be sure that they have specific consents in place for each intended use of that personal information. Risk teams will want to check that customer data is used in line with the approvals given, as well as establish processes that control future applications.

Beyond the letter of the law, there is also the spirit of the law – or at least how consumers interpret the spirit. Customers’ perceptions of acceptable corporate behaviour will have implications for reputation and business standing if companies are seen to be stepping over an ethical line, a line that is probably more sharply in focus, given consumers’ growing awareness of the value of their data.

What makes this all the more important is that all data use has the potential to raise ethical issues - whether the data has been consciously used in a way that consumers perceive as unethical, or whether issues arise from the pre-selection criteria used, without any discrimination intended.

For example, an increasing amount of granular data could allow a company to select the policyholders it wants. With this, there is the risk that an organisation may be seen as ignoring a societal need for insurance for a certain group of people. This would be the risk from a conscious choice, and the risk could ultimately be discussed as part of the product and pricing approval process.

An unconscious ethical risk may be harder to identify and avoid. For example, a company may choose to select policyholders based on postcode, without realising that these postcodes exclude a certain segment of society. In this case, the risk is unlikely to have been as obvious, and may not be considered in the pricing and product approval process. Nonetheless, the reputational risk is still there.
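A simple check a risk team might run is to compare exclusion rates across customer segments under a proposed selection rule. The sketch below is purely illustrative: the data, the column names (`age_band`, `selected`) and the rule are all hypothetical, not drawn from any real insurer.

```python
import pandas as pd

def exclusion_rates(df: pd.DataFrame, segment_col: str, selected_col: str) -> pd.Series:
    """Share of each segment that the selection rule excludes (1 = fully excluded)."""
    return 1.0 - df.groupby(segment_col)[selected_col].mean()

# Toy data: an older age band happens to live in postcodes the rule excludes.
df = pd.DataFrame({
    "age_band": ["18-35", "18-35", "36-64", "36-64", "65+", "65+"],
    "selected": [1, 1, 1, 0, 0, 0],   # 1 = accepted by the postcode rule
})

print(exclusion_rates(df, "age_band", "selected"))
```

A large gap in exclusion rates between segments is exactly the kind of signal that would flag an unconscious ethical risk for discussion in the product and pricing approval process.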

Additionally, the onset of advanced analytics means prices become fairer for those about whom data can be found and used to price a policy. This presents an ethical issue: if a segment of society (for example, the elderly) doesn’t generate much data, will it be disadvantaged when purchasing insurance?

The changing modelling environment

Model complexity, a function of the type of algorithm being used or the layering of models effectively on top of each other, can hamper an insurer’s ability to really understand how it is classifying or predicting its results. Risk teams will need to bring in skills to understand or appropriately challenge the models and their results. Expert judgement becomes more important than before because - despite the term 'machine learning' - machines are restricted to learning only what historical data can tell them. Experts need to be involved in several parts of the process, for instance ensuring that the factors are not spurious, and that the methodology is not trying to extrapolate beyond the bounds of reason.
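The point about extrapolating beyond the bounds of reason can be made concrete with a small illustration (ours, not drawn from any insurer's model): a flexible model can fit the historical range almost perfectly and still produce absurd values just outside it.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)        # the range covered by historical data
y = np.sin(2 * np.pi * x)            # the "true" relationship, known only here

coeffs = np.polyfit(x, y, deg=7)     # a flexible model: near-perfect in-range fit

inside = np.polyval(coeffs, 0.5)     # interpolation: close to sin(pi) = 0
outside = np.polyval(coeffs, 2.0)    # extrapolation well outside [0, 1]: can explode

print(f"prediction inside the data range:  {inside:+.4f}")
print(f"prediction outside the data range: {outside:+.2f}")
```

No amount of 'learning' from the historical range tells the model what happens at x = 2.0; that is precisely where expert judgement is needed to bound the model to the region the data supports.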

The consequence of deploying poor models can be a major loss in today’s highly competitive, mobile-enabled world. The proliferation of aggregators and digital distribution platforms means insurers are more exposed than ever to poor underwriting and pricing execution. In addition to the financial loss, errors can also cause fast-spreading reputational damage. Consequently, risk teams will not only need to limit the risk of poor model deployment before it starts, but also be sufficiently quick to react, should it occur.

Given the large amounts of data involved, some insurers may prefer an alternative approach: purchasing external models for certain risks. But they should be aware that these external models can be ‘black boxes’, reducing the scope for expert challenge. Additionally, if many insurers use the same model, inaccurate results from that model become a systemic risk.

Data – sorting the wheat from the chaff

With so much data to analyse and use, there is a risk that inputs are unsuitable for the objective. Businesses need to ensure that the right data is understood and used for the right purpose.

And when working with data, the importance of following rigorous analytics processes in managing model risk cannot be overstated. Data scientists need to conform to agreed principles and processes to minimise the risk that algorithms perform poorly in the real world, causing financial loss or brand damage. These principles are far-reaching, but include guarding against:

  • Inappropriate data: It seems obvious, but we cannot assume that historic data, even recent data, will provide a good representation of the future, particularly if we are analysing data relating to human behaviour. Technological advances are affecting the breadth of choices consumers can make and the relative benefit of each. It’s implausible to think that the digital data left behind by online and mobile activity - the oil in customer behavioural predictive models - will remain a perfect representation of future behaviours for very long. Machine learning algorithms that build models based on data sets that are irrelevant, biased, or even just plain wrong, may perform well in validation tests, as validation data will suffer the same issues, but will not function well in practice.
  • Analytics bias: Data scientists must guard against any impulse to find preconceived hypotheses at any cost. Poke hard enough at a data set and the chances are that you will find what you are looking for. HARKing (Hypothesising After the Results are Known) is just one way in which scientific rigour breaks down, analytics quality deteriorates, and bad conclusions are made.
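A minimal sketch of why poking hard enough at data 'finds' something: test enough pure-noise features against a pure-noise target and a predictable fraction will look statistically significant by chance alone. The data below is simulated, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_features = 100, 200
X = rng.normal(size=(n_samples, n_features))   # pure-noise "explanatory" features
y = rng.normal(size=n_samples)                 # pure-noise "target"

# Sample correlation of each feature with the target.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])

# Under the null hypothesis, |r| > 1.96 / sqrt(n) is "significant" at roughly p < 0.05,
# so about 5% of these meaningless features will clear the bar by luck.
threshold = 1.96 / np.sqrt(n_samples)
false_hits = int(np.sum(np.abs(r) > threshold))

print(f"{false_hits} of {n_features} pure-noise features look 'significant'")
```

Pre-registering hypotheses and correcting for multiple comparisons are the standard defences; the point here is simply that an uncorrected search across many candidate factors is guaranteed to surface spurious ones.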

To counter these issues, and many more, risk teams have a role to play - they must ensure the right questions are asked of the underlying data; of the methods used to select a sample from that data and to infer missing data; of the validation process; and of the ways in which the model, once deployed, will be validated in a real-world environment (Figure 1). This may require new skills in the Risk Function; it will almost certainly require new approaches to assess and manage these risks.
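One widely used way to validate a deployed model in a real-world environment is to track whether live data still resembles the data the model was built on. A minimal sketch using the Population Stability Index, a common industry heuristic; the thresholds quoted in the comments are conventions rather than rules, and the variable names are illustrative.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between build-time and live distributions.

    Rule of thumb often quoted in practice: < 0.1 stable, 0.1-0.25 monitor,
    > 0.25 investigate. Treat these cut-offs as conventions, not laws.
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Widen the outer edges so every observation falls in a bin.
    edges[0] = min(expected.min(), actual.min()) - 1e-9
    edges[-1] = max(expected.max(), actual.max()) + 1e-9
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 10_000)       # model scores at build time
live_ok = rng.normal(0.0, 1.0, 10_000)     # live scores, same population
live_shift = rng.normal(1.0, 1.0, 10_000)  # live scores after a population shift

print(f"stable population:  PSI = {psi(train, live_ok):.3f}")
print(f"shifted population: PSI = {psi(train, live_shift):.3f}")
```

A monitoring process that recomputes such a statistic on each batch of live business gives the risk team the early warning it needs to react quickly when a model starts operating outside the conditions it was built for.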

Who monitors the march towards AI and automation?

It seems highly likely that AI and the automation it facilitates will become increasingly prevalent in the insurance sector and beyond. All areas of the value chain stand to benefit and, to one degree or another, they already have. In many ways this is, of course, beneficial to insurers – costs fall and new insights are identified that they can exploit – and to consumers – prices fall, turnaround times reduce, and a greater breadth of relevant and valued products and services are available.

But can there be too much of a good thing? The rate at which AI takes hold across an organisation, and the consequential impact on roles and responsibilities, will need to be carefully managed to ensure that the benefits of expert judgment and model understanding are not lost. With the onset of AI and automation comes the potential dilution of risk responsibilities, and with it the loss of the extra care and understanding that modelling processes demand.

How risk teams challenge, but also use, big data and advanced analytics

Risk teams may need to adapt the way they challenge models, data and decisions. The onset of AI, big data and advanced analytics brings with it the need to understand risks that differ in size, speed of impact, complexity and type from those they have previously had to deal with. To meet this need, risk teams should review their risk management framework, asking, for example, whether they need to invest in different skills, provide new training, put in place additional governance, and develop new risk appetite factors.

To do so, they will probably need to harness big data and analytics for their own purposes to develop and advance the way they manage existing risks and improve future processes and oversight. For example, data can give more information to effectively monitor and check risks in vulnerable areas, such as underwriting and claims. More insightful internal data reports can be developed to identify and raise awareness of operational and process risks, and AI can be used to improve processes around compliance.

A time for self-analysis

All of the above and more raise a number of questions for risk functions that will need addressing as they continue their big data and analytics journeys at differing speeds. These are just a sample to get the conversation going:

  • Who is thinking about how data, and the way it is analysed, affects various customer groups (existing and new)?
  • Are the decision mechanisms for validating data and models used in the business timely and appropriate?
  • Is data and analytics usage reflected in risk appetite reviews?
  • How will you manage governance around data quality, models and implementation?
  • Who should be allocated the roles and responsibilities for decisions made by AI?
  • Who is challenging model results?
  • Do you have the right people and tools in your risk team to challenge data and analytics-related risks?
  • Have you invested enough in data and advanced analytics risks other than cyber?
  • Are you using all your data to the company’s advantage – whether this be in line one or line two?
  • Could you extract more information for risk identification, or streamline line two processes through data and analytics?

How can Willis Towers Watson help?

Willis Towers Watson is a leading exponent of the use of big data and advanced analytics in insurance businesses around the world. In addition to having carried out consultancy projects in a wide range of operational areas that involve applications of big data, AI and machine learning techniques, many of our software products increasingly include facilities for building systems and processes that take advantage of what these techniques can offer.

Our risk team supports and advises insurers on the full range of regulatory, accounting, commercial and operational risks they face, including the development of enterprise risk management programmes, risk appetite statements, and processes and tools to embed proactive risk management into their businesses.
