Introduction
Welcome to the first article in my series, the AI Buyer’s Guide for Human Resources (HR) Professionals. The objective of this series is to arm HR professionals responsible for selecting, deploying, and managing AI-based HR Tech solutions in the enterprise with the knowledge they need to perform these tasks confidently. The information shared here is not just of value to HR professionals; it applies generally to any buyer of AI-based software. I hope you find it helpful, and I welcome your feedback and comments.
The application of Artificial Intelligence (AI) in Talent Management is obscured by marketing hype and misinformation. Much of what is called AI in the marketplace is far less than what AI researchers and practitioners consider “intelligent.” That doesn’t mean solutions bearing the moniker aren’t helpful. Still, the label misleads buyers of HR technology into believing that AI is a panacea for many of their most challenging talent management problems.
However, many promising AI applications in Talent Management are worth investigating, like finding the best match among thousands of applicants for a handful of job openings or inferring the skills and capabilities an employee may have based on their career journey. “Real” AI learns and gets better at its tasks over time. This capacity to learn and adapt is critical to our definition of intelligence and separates AI-based products from those merely masquerading as such.
What is Intelligence?
Before we discuss AI, it’s essential to understand what we mean by intelligence. Philosophers and scientists have debated the meaning of intelligence for centuries. There is no single or standard definition for intelligence. Still, for this article, we’ll adopt a simple definition inspired by a cognitive systems view of intelligence that defines it as something that involves memory, reasoning, and the ability to learn and adapt.
Movies like The Terminator, I, Robot, and the series Westworld have shaped the public’s understanding of the subject, as have experiences with common “intelligent” household devices like the Roomba and semi-autonomous vehicles made by companies like Tesla. But when you watch a Roomba navigate around obstacles like the legs of a chair while vacuuming your floor, or when you experience the limitations of a self-driving car attempting lane changes in heavy traffic, you quickly realize the true meaning of “artificial”: it’s not quite as good as the real thing.
Artificial Intelligence vs Machine Learning
We often hear AI and ML used interchangeably, but ML is a branch of AI concerned with modeling the real world from data. The data are “features” of the thing being modeled. For example, a model that predicts a person’s sex will use other characteristics of the person, such as their name, height, weight, income, date of birth, and hair color, to determine the likelihood that the person is male versus female. (Note: we say “determine the likelihood” rather than “determine with certainty” because ML models are probabilistic, not deterministic.) The model would have been trained by a data scientist using data from persons whose sex is known. Plug in the data for a person of unknown sex, and voila, a predicted sex, with an associated probability, pops out the other side! This is a simple example, but hopefully it illustrates the concept.
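To make the idea concrete, here is a minimal sketch of such a probabilistic model: a logistic regression classifier trained from scratch on a handful of made-up height and weight values. All the numbers are illustrative, not real statistics, and no real product would stop at two features.

```python
import math

def sigmoid(z):
    # Numerically safe logistic function: maps any score to (0, 1).
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def predict(w, x):
    # Probability that the label is 1, given features x = (height, weight).
    return sigmoid(w[0] + w[1] * x[0] + w[2] * x[1])

# Toy training data: (height_cm, weight_kg) -> known label (1 or 0).
data = [
    ((185, 90), 1), ((178, 82), 1), ((172, 75), 1),
    ((160, 55), 0), ((165, 60), 0), ((155, 50), 0),
]

# Fit the weights with plain gradient descent on the log-loss.
w = [0.0, 0.0, 0.0]
lr = 0.001
for _ in range(5000):
    for x, y in data:
        err = predict(w, x) - y      # prediction error
        w[0] -= lr * err
        w[1] -= lr * err * x[0]
        w[2] -= lr * err * x[1]

# The model outputs likelihoods, never certainties.
print(f"P(label=1 | 180cm, 85kg) = {predict(w, (180, 85)):.2f}")
print(f"P(label=1 | 158cm, 52kg) = {predict(w, (158, 52)):.2f}")
```

Notice that the outputs are probabilities, not verdicts: the model’s answer for a new person is only as good as how well the training data represents that person.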
The Intelligent Learning Agent
Consider a medical diagnostic system that can’t learn new diagnoses or adapt to new information about disease: it will hardly be helpful a year or two from now. Would you trust such a system? Would you consider it intelligent? Our contention is that to be considered intelligent, an agent, whether natural or artificial, human, animal, or otherwise, must possess the capacity to learn and adapt. We believe this is the basis of intelligence. Perhaps we should be more precise and call our agent an Intelligent Learning Agent, or just an Intelligent Agent for short.
A demonstration of learning can be as simple as a Roomba demonstrating that it can navigate a floor it’s never traversed by using its sensors to build a map that it can later reference to optimize a cleaning route on its next run. The path it takes on subsequent runs is informed by the map and its memory of areas that tend to be dirtier than others. Each run is shorter than the previous one until it maximizes efficiency. Move the furniture around, and you’ll observe the Roomba regressing to an earlier state. Is it learning? Certainly. Is it intelligent? Maybe. Is the ability to learn and adapt sufficient to demonstrate intelligence? Perhaps, but it is necessary. Without it, the agent’s intelligence is fleeting at best.
Bias in AI
While we believe there are many valuable and practical applications of AI in Talent Management, it’s essential to keep in mind that the technical buyer (the domain expert who selects the solution based on fit for purpose) may have concerns about the potential for an AI-based solution to be at odds with their goals. Consider the use case of engagement or retention: how might the buyer perceive AI as an aid, or as something at odds with such goals? A 2018 survey found that 73% of adults believe AI will replace more jobs than it creates, and 23% are concerned about losing their jobs to an AI.
Bias in AI is well known and of particular concern in Talent Management. Consider a case where data about demographically similar current employees is used to build a model that predicts the performance potential of job applicants from demographic groups distinctly different from the model’s training cohort. Will such a model make predictions that lead to hiring decisions that exclude otherwise qualified candidates? It has happened before: “In 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination because the computer program it was using to determine which applicants would be invited for interviews was determined to be biased against women and applicants with non-European names.”
Bias is the Achilles’ heel of Talent Management solutions that employ AI in any form or fashion, and it must be addressed explicitly: removed systematically through product design, engineering, testing, and ongoing maintenance. The product team (design, engineering, and test) and the data science team must define a formal system for identifying and eliminating bias from designs, code, training data, and so on. Marketing literature should educate buyers about the risk of bias in AI-based solutions and how best to evaluate a vendor’s anti-bias methodology. Likewise, the sales team should be equipped to educate prospects and challenge their assessment of competitive solutions that don’t employ a well-defined anti-bias method.
AI under GDPR
The General Data Protection Regulation or GDPR is a regulation designed to protect the privacy of citizens of European Union (EU) member states:
“REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).”
Article 22 of the GDPR is of particular interest to software vendors employing AI/ML in their solutions. Colloquially referred to as the “profiling” provision, it specifies the following:
– “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
– “…the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”
To comply with this regulation, we must be able to defend the actions of the AI/ML algorithms we employ in our products that “significantly affect” the employee. For example, if we implement an algorithm that recommends one employee over another for promotion, we need to be able to justify the recommendation should the employee who was passed over contest the decision. It’s unlikely that an explanation like, “We ran millions of rows of data through an n-layer perceptron network, and it concluded that you are not ready for promotion,” will suffice.
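What can a defensible explanation look like in practice? With an interpretable model, each factor’s contribution to a decision can be reported in plain terms. The sketch below assumes a hypothetical linear promotion-readiness score; the feature names and weights are invented for illustration, and a real deployment would require validated, lawful criteria.

```python
# Hypothetical weights for an interpretable linear scoring model.
weights = {"tenure_years": 0.30, "goals_met_pct": 0.50, "peer_rating": 0.20}

# One employee's (hypothetical) feature values.
employee = {"tenure_years": 3.0, "goals_met_pct": 0.70, "peer_rating": 0.80}

# Each feature's contribution to the final score is directly inspectable.
contributions = {f: weights[f] * employee[f] for f in weights}
score = sum(contributions.values())

print(f"promotion-readiness score: {score:.2f}")
for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: contributed {value:.2f}")
```

An itemized breakdown like this gives the controller something concrete to show an employee who contests a decision, which is much closer to the “suitable measures” Article 22 contemplates than a black-box score.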
Buying AI
Talent Management Systems implement complex workflows and automate various human capital management tasks that would be difficult, if not impossible, to perform without a software system’s aid. For most of these tasks, however, classic workflow management and automation programming approaches are sufficient.
Best-in-class software vendors look for opportunities to apply AI in those areas of their products where it delivers value that would be problematic to achieve with conventional approaches: problems that are “computationally intractable,” or that require highly specialized skills and expertise that are difficult and often very expensive to acquire and apply.
Seek out vendors who understand that AI, for AI’s sake, is not the best approach. Beware of those vendors who lead with an AI-only story. Finally, embrace vendors who never take the “human” out of human capital decision-making and development.
Endnotes
1 Kelly, J. E., & Hamm, S. (2014). Smart machines: IBM Watson and the era of cognitive computing. New York: Columbia Business School Publishing.
2 Gallup. (2018). Optimism and Anxiety, Views on the Impact of Artificial Intelligence and Higher Education’s Response. Retrieved from https://bit.ly/3z7pwsH.
3 Lowry, S., & Macpherson, G. (1988). A blot on the profession. British Medical Journal, 296, 657–658.
4 General Data Protection Regulation. (2019, September 2). Retrieved from https://gdpr-info.eu/.