
Bank of England warns UK financial services of risks associated with artificial intelligence

Financial, Executive and Professional Risks (FINEX)

July 1, 2019

Financial service companies need to keep pace with the speed of change as AI technology becomes more prevalent.

It’s one thing to be held to account as a director for a failure to safeguard your company from predictable harm in the form of a cyber-attack, ransomware or data loss. After all, there have been plenty of warnings and salutary tales in this area. But it is quite another to be criticised for the harm your company has caused others as a result of the unknown and unintended consequences of deploying artificial intelligence (AI) in pursuit of profit.

James Proudman, director of UK Deposit Takers Supervision at the Bank of England, tackles this topic in a recent speech. Much of what he says is relevant to companies beyond the realms of financial services.

‘Rubbish in, rubbish out’

Mr. Proudman suggests that AI poses three specific challenges for boards and management, each of which is considered in turn below.

He starts with the proposition that “the output of any model is only as good as the quality of data fed into it.” That truth goes well beyond AI. Mr. Proudman also highlights the risk that, even where the initial data is robust, the automated process of analysing it and drawing inferences may still be suspect, since it depends on the assumptions built into the underlying algorithms.

He gives the example of a well-known Big Tech company using AI in its staff recruitment process. The company realised after a year that the system was not evaluating candidates in a gender-neutral way, because the model had taught itself to discriminate in favour of male candidates on the basis that they formed the majority of applicants. The management challenge here is to continue to monitor the output of AI systems and to remain vigilant for these biases. As he puts it: “…the ability for a human or other override – the so-called human in the loop… to provide feedback to minimise gradually the risk of adverse unintended consequences.”
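By way of illustration only (the sketch below is not drawn from the speech, and the data, feature names and figures are invented), a short Python example shows how a screening model trained on historically skewed hiring data reproduces that skew, and how a simple audit of its outputs, the kind of “human in the loop” check Mr. Proudman describes, surfaces the problem.

# Minimal, purely illustrative sketch: a screening model trained on
# historically biased hiring data learns the bias, and a basic audit of
# its outputs makes that bias visible. All values here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic historical applications: a competence score and a gender flag.
competence = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)

# Historical hiring decisions depended on competence but also, unfairly, on gender.
hired = (competence + 0.8 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([competence, is_male]), hired)

# The model has learned a positive weight on the gender flag: the bias in the
# training data is now baked into the algorithm's assumptions.
print("learned weights [competence, is_male]:", model.coef_[0])

# A simple output audit: compare predicted shortlisting rates by gender for
# new, equally competent applicants. A large gap is the signal a human
# reviewer needs in order to intervene.
new_competence = rng.normal(size=2000)
for flag, label in [(1, "male"), (0, "female")]:
    X_new = np.column_stack([new_competence, np.full(2000, flag)])
    rate = model.predict(X_new).mean()
    print(f"predicted shortlist rate ({label}): {rate:.2%}")

Nothing in this toy example is specific to recruitment; the point is simply that the bias enters through the training data, and only routine monitoring of the system’s outputs reveals it.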

The role of people

As Mr. Proudman acknowledges, the role of people seems something of a paradox in this context, since the aim of AI is often thought of as automating tasks previously done by humans. But, as with his first proposition, it stems from the same premise that machines “do what they are told by humans.” It follows that, because coders, programmers and managers are subject to the myriad of human biases, the outputs of the machines may well reflect those biases too.

He reminds us: “Machines lack morals. If I tell you to shoplift, then I am committing an unethical act — and so are you, if you follow my instruction. “I was only following orders” is not a legitimate defence. There is, if you like, a double-lock on unethical instructions within a wholly human environment — on the part of the instructor and the instructed. This is one reason why firms and regulators are so determined to promote “good” cultures, including, for example, “speak up” cultures, and robust whistle-blowing. But there is no such double-lock for AI/ML. You cannot tell a machine to “do the right thing” without somehow first telling it what “right” is — nor can a machine be a whistle-blower of its own learning algorithm.”

So, the problem here is more to do with a lack of people able to police both their own conduct and that of others. As Mr. Proudman puts it: “In a world of machines, the burden of correct corporate and ethical behaviour is shifted further in the direction of the board, but also potentially further towards more junior, technical staff. In the round this could mean less weight being placed on the judgements of front-office middle management.”

This gap in the middle is awkward for directors. The courts have repeatedly made it clear that whilst boards can properly delegate executive functions, they cannot delegate their obligation to supervise a company’s activities. How are directors to be held to account for requiring fairness, ethics, accountability and transparency from machines which teach themselves to maximise profitability for the company? And yet the alternative approach of rejecting AI entirely on grounds of risk may also put the company at an unacceptable disadvantage to its competitors.

The speed of change

The speed at which AI is being introduced makes it difficult to take informed decisions. There seems to be little evidence of a consistent approach to the introduction of AI, whether in financial services or beyond. We are in the middle of a complex transition phase, and as the rate of introduction increases, so does the extent of the execution risk that boards will need to oversee and mitigate. In Mr. Proudman’s view, “…the transition also creates demand for new skill sets on boards and in senior management” as well as “changes in control functions and risk structures,” although he does not spell out what these may be.

Conclusion

Mr. Proudman’s warnings are timely and useful. Many companies are no doubt already actively grappling with the challenges they pose. Perhaps a prior challenge, though, and one common to all his warnings (even assuming a degree of computer literacy on boards), is to find the means to explain enough of the essence of machine learning and AI to allow boards to debate their use in an informed and intelligent way. Entirely new professions focused on data governance and AI audit, which are beginning to emerge, will need to grow quickly to keep pace with demand as boards realise that ignorance in this, as in many other areas of enterprise, is simply no defence.

This article was originally authored by Francis Kean.
