Article | FINEX Observer

Will AI Without EQ Mean More EPLI is Needed?

Financial, Executive and Professional Risks (FINEX)

November 1, 2019

Given recently introduced regulations, will firms using artificial intelligence (AI) in employment-related processes have more EPLI claims made by prospective employees?

Well, that title obviously calls for a definitions key:

  • AI - Artificial intelligence. An area of computer science that emphasizes the creation of intelligent machines that work and react like humans (per Techopedia)
  • EQ - Emotional quotient (intelligence). In business, it is the capability of individuals to understand their own emotions and the emotions of others and effectively apply that understanding in facilitating more collaboration and increasing productivity
  • EPLI - Employment practices liability insurance

Before we explore this question, let's set some generally accepted principles regarding AI.

  • AI is here to stay and will only become more ingrained in our lives as technology evolves. We all (hopefully) will be able to control how pervasive it will be in our personal lives, but there is little doubt about the fast-paced progression of AI in business. McKinsey Global Institute research suggests that by 2030, AI could deliver additional global economic output of $13 trillion per year.
  • Notwithstanding the above, AI is still in its infancy, and the unknowns outweigh the knowns. The value gained from AI, even if profound, will likely still produce some unintended, negative consequences. The consequences that society (and potentially regulators and prospective plaintiffs) will almost certainly monitor most closely are those that affect people most directly: namely, risks involving physical security, privacy, access to information, reputation and discrimination (including employment discrimination).

Immediate employment practices liability (EPLI) risks

Today, the most pointed question regarding AI and employment-related issues is: Given recently introduced regulations, will firms using artificial intelligence in the hiring process have more EPLI claims made by prospective employees?

The legislation closest to the intersection of AI and employment-related risk is the Illinois Artificial Intelligence Video Interview Act (IL Act), which will require employers using AI in their hiring process to do all of the following (a hypothetical compliance sketch follows the list):

  • Let the job applicant know about the use of AI
  • Explain "how the artificial intelligence works"
  • Obtain consent to evaluation by the AI program
  • Refrain from sharing video interviews
  • Delete the videos within 30 days of a request by the potential employee
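
For employers (or their vendors) operationalizing these requirements, the obligations reduce to a per-applicant checklist plus a dated deletion deadline. The following is a minimal, hypothetical Python sketch of that idea; the class and field names are illustrative assumptions, not taken from the Act or from any real compliance product:

    from datetime import date, timedelta
    from typing import Optional

    DELETION_WINDOW_DAYS = 30  # the Act's deadline for deleting videos after a request

    class AIInterviewRecord:
        """Tracks one applicant's status under the IL Act (illustrative only)."""

        def __init__(self, applicant: str):
            self.applicant = applicant
            self.notified = False    # applicant was told AI will be used
            self.explained = False   # employer explained "how the artificial intelligence works"
            self.consented = False   # applicant consented to evaluation by the AI program
            self.deletion_requested_on: Optional[date] = None

        def may_evaluate(self) -> bool:
            # Notice, explanation and consent must all precede AI evaluation.
            return self.notified and self.explained and self.consented

        def deletion_deadline(self) -> Optional[date]:
            # Videos must be deleted within 30 days of the applicant's request.
            if self.deletion_requested_on is None:
                return None
            return self.deletion_requested_on + timedelta(days=DELETION_WINDOW_DAYS)

For example, a record whose deletion was requested on November 1 would report a deadline of December 1. The sharing prohibition is simpler still: it is an unconditional "never share" rule rather than a dated obligation, so it needs no deadline tracking at all.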

Looking to get in front of this issue, EPLI underwriters will be asking questions about training and education on the use of AI in the recruiting and hiring process.

Public-facing industries will also get questions about the use of AI in their interactions with consumers.

Assuming Illinois will not be the only state to enact this type of legislation, this could quickly become a leading EPLI issue. The overarching accusation will likely be that employers, by using AI in the recruiting and hiring process, are screening out applicants based on protected class status (e.g., gender, race or disability). Employers that violate any of the specific rules set out by such legislation would seem to be walking into the crosshairs of regulators and the plaintiffs' bar, exposing the company to penalties and increased litigation and EPLI insurance claims. Furthermore, the IL Act does not address what happens if an individual does not consent to the use of artificial intelligence in the process. Will withholding consent eliminate the candidate from consideration entirely? Could the candidate then allege discrimination and/or failure to hire?

Privacy implications

Let's add more complexity. If an employer fails to adhere to the specific notice requirements of the IL Act or any equivalent legislation, is it also opening the door to invasion-of-privacy claims? While such allegations could implicate various other types of liability insurance policies, it is important to point out that many EPLI policies are duty-to-defend policies, meaning the insurer must defend all allegations made in the claim(s), even if there is ultimately no coverage under the policy. Insurers, therefore, may face losses even if the allegations are dismissed and never result in any insurance payments for judgments or settlements.

And then there are third-party claims

Third-party EPLI extends coverage to claims made by non-employees: a customer, a vendor or an independent contractor, for example. Covered third-party allegations typically include discrimination and harassment. Consider whether the increasing use of AI to interact with customers, vendors or other third parties could result in allegations of discrimination. Also call to mind recent attempts by leading technology companies to use AI to engage with users, only to have the 'chatbots' engage in inflammatory and offensive conversations. This is reminiscent of another prominent Illinois law, the Illinois Biometric Information Privacy Act, passed in 2008 to guard against the unlawful collection and storage of biometric information, which is often done with the use of AI. That law establishes a private right of action that, for example, allows a consumer to sue a noncompliant company without needing to allege actual injury or an adverse effect.

Longer-term EPLI risks

The longer-term employment-related question is also quite clear: Will AI cause companies to eliminate jobs, putting large numbers of employees out of work? Fear of widespread job losses due to AI-driven automation can quickly escalate to panic, with workers, politicians and undoubtedly EPLI underwriters following the developments closely. An EPLI underwriter will always fear the impact of RIFs (reductions in force) on the frequency of EPLI claims, especially if RIFs are likely to be industry-wide rather than company-specific. If AI eliminates an entire set of jobs industry-wide, and those impacted do not pursue training in a new set of skills, many workers may be out of work for longer periods, which adds to the likelihood of EPLI claims. Some hopeful news: While a widely cited 2013 Oxford study states that as much as 47% of current U.S. jobs are at risk of automation, there are well-regarded arguments that this concern is overblown, including the findings of Willis Towers Watson's Global Future of Work survey, which notes that "Machine intelligence will be more about employee augmentation rather than elimination. Nevertheless, this journey toward higher productivity through human/machine collaboration also creates new challenges for HR executives…."

Enter EQ

Only time will tell how great an impact the acceleration of AI will have on jobs and how deftly companies welcome and use the evolving technology. It seems that mitigating AI-related liability will be less about controlling the development of the technology and more about managing its impact on people, particularly a company's own employees. Organizations can mitigate these emerging risks by applying fundamental principles, the most impactful being strong organizational EQ.

EQ is becoming universally recognized as just as important as IQ (if not more so) to success, both personal and organizational. High-EQ companies will have strong internal communication methods, employee training programs supported by senior executives and broad professional development efforts. From simple skill cross-training commitments to broader leadership programs, drawing on EQ as AI implementations increase should benefit all.

But how do you teach a machine? Generally speaking, describing a person as a "machine" can carry both positive and negative connotations: the pros of high productivity and extreme efficiency may come with an emotional rigidity that fails to recognize how emotions play into deeper engagement and communication with others. Actual machines clearly present an even more difficult challenge, and specific strategies will go a long way toward effectively transitioning job responsibilities from humans to AI while limiting potential EPLI risks. Harvard Business Review observes that "those that want to stay relevant in their professions will need to focus on skills and capabilities that artificial intelligence has trouble replicating — understanding, motivating, and interacting with human beings," concluding that "…a human being…is still best suited to jobs like spurring the leadership team to action, avoiding political hot buttons, and identifying savvy individuals to lead change." (Megan Beck and Barry Libert, "The Rise of AI Makes Emotional Intelligence More Important," Harvard Business Review, February 15, 2017)

Conclusion

The added power of AI will raise the bar of responsibility for the beneficiaries of AI, and as with any material change, judgments of success or failure will be made about how the evolution is handled, not necessarily about the changes themselves. Companies that are proactive and anticipate the impact of AI on their business and their workforce will prove less risky to insurers, particularly in the area of employment practices liability.

Download: FINEX Observer: Fall 2019 Edition (PDF, 6.7 MB)