Most companies now feature video interviews as part of their recruitment process to evaluate potential candidates. These interviews are increasingly being assessed by machine learning (ML). In this context, ML applies algorithms to quantitatively identify successful features in human behaviour as observed in the video interview. For example, features such as facial expressions, posture and eye contact can be used to gauge communication skills. ML quantifies such features by learning from past examples of candidates labelled ‘good’ or ‘bad’, a corpus referred to as the training data. While in its current form ML is not capable of making a hiring decision independently, it is often used as guidance, indicating how the candidate has scored on characteristics the algorithm deems important.
Should an algorithm have an impact on whether you get employed?
Machines are learning, humans are not
While human recruiters evaluate potential candidates on features such as vocalisation, they hold qualitative definitions of these features rather than the quantitative ones ML algorithms use. For example, one feature linked to job success in a technical support role is speed of speech. A human recruiter evaluates this by “gut instinct”; an algorithm measures it in words spoken per minute. Humans are also more easily swayed by factors unrelated to job success, leading to unconscious bias. In contrast, companies building ML algorithms employ teams of psychologists and data scientists who work together to identify features that genuinely link to job success.
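The quantitative measurement described above can be made concrete. A minimal sketch, assuming a plain-text transcript and a known answer duration; the function and example answer are illustrative, not any vendor's actual pipeline:

```python
# Quantifying "speed of speech" as words per minute from an interview
# transcript. The transcript, duration and example answer below are
# invented for illustration, not a real system's data.

def words_per_minute(transcript: str, duration_seconds: float) -> float:
    """Count whitespace-separated words and normalise to one minute."""
    word_count = len(transcript.split())
    return word_count * 60.0 / duration_seconds

answer = "I would first reproduce the issue then check the logs for errors"
wpm = words_per_minute(answer, duration_seconds=5.0)
print(round(wpm, 1))  # 12 words over 5 seconds -> 144.0
```

Unlike gut instinct, this number is reproducible: the same answer always yields the same score, which is exactly the consistency argument made above.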
To tackle algorithmic bias, which is bias that exists in the training data, the contributing factors are identified and removed. The model is then re-trained on the modified data, eliminating the bias and with it any adverse impact on the subset of people it previously affected. For example, HireVue uses different datasets depending on cultural background to ensure there is no bias in the interviewing process. As a result, the recruitment process is far more quantitative, consistent and unbiased. Immanuel Kant defined the universality principle, which states that actions that can become universal law are inherently ethical. Employing people purely on merit, irrespective of race, gender, age, sexual orientation and so on, is part of labour law in many countries. Using ML algorithms that eliminate unconscious bias reinforces such anti-discrimination laws, which can be interpreted as a universal law in the sense of Kant's categorical imperative.
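The "identify and remove" step has a simple core, though finding the contributing factors is harder than this suggests. A minimal sketch with invented feature names and records, where a hypothetical "postcode" field stands in for any identified bias proxy:

```python
# Removing a feature identified as a bias proxy from every training
# example before the model is re-fitted. Feature names and records
# are invented for illustration.

def drop_features(examples, blocked):
    """Return copies of the examples without the blocked feature keys."""
    return [{k: v for k, v in ex.items() if k not in blocked} for ex in examples]

training_data = [
    {"words_per_minute": 150, "eye_contact": 0.8, "postcode": "E1", "hired": 1},
    {"words_per_minute": 90,  "eye_contact": 0.4, "postcode": "W2", "hired": 0},
]

debiased = drop_features(training_data, blocked={"postcode"})
print(sorted(debiased[0]))  # ['eye_contact', 'hired', 'words_per_minute']
```

The catch is that the remaining features can still correlate with the removed one, so deletion alone does not guarantee the bias is gone.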
Figure 1 compares the recruitment process with and without ML. The main differences are the replacement of the ATS (Applicant Tracking System) screen with a CRM (Customer Relationship Management) system targeted specifically at recruitment, and the introduction of an AI-driven assessment and video interview in place of the psychometric tests and phone interview. The advantages of the ML recruitment system are increased speed of hire, better candidate matching and a better experience for potential candidates; industry results can be found in reports published by Unilever and Virgin Media. From a utilitarian point of view, an action whose consequences are positive for the majority is deemed ethical, which is clearly the case with the ML-based recruitment process.
A crucial aspect of the traditional recruitment process that is hard to quantify is the success rate of the recruiter. On the other hand, bias and accuracy are two easily measured factors in ML algorithms that are constantly tracked to ensure that the algorithm produces accurate results. This gives ML-based recruitment processes another advantage since their accuracy can be measured and improved over time.
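Both quantities mentioned here can be monitored with straightforward arithmetic. A sketch, with made-up predictions and group labels, of accuracy tracking plus a selection-rate comparison between applicant groups (the "four-fifths rule" used in US hiring guidance):

```python
# Monitoring an ML screening model: accuracy against human-reviewed
# outcomes, and the ratio of pass rates between two applicant groups.
# A ratio below 0.8 is the conventional flag for adverse impact.
# All data below is invented for illustration.

def accuracy(predictions, labels):
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

def selection_rate_ratio(predictions, groups, group_a, group_b):
    """Ratio of pass rates for group_a versus group_b."""
    def rate(g):
        selected = sum(p for p, grp in zip(predictions, groups) if grp == g)
        total = sum(1 for grp in groups if grp == g)
        return selected / total
    return rate(group_a) / rate(group_b)

preds  = [1, 0, 1, 1, 0, 1]   # 1 = advance the candidate
labels = [1, 0, 1, 0, 0, 1]   # human-reviewed ground truth
groups = ["a", "a", "a", "b", "b", "b"]

print(accuracy(preds, labels))                        # 5 of 6 correct
print(selection_rate_ratio(preds, groups, "a", "b"))  # 1.0, no flag
```

No equivalent numbers exist for a human recruiter, which is precisely the asymmetry this paragraph points out.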
Risk of the unknown
The concept of care ethics, however, can be employed to argue for limiting the use of ML in recruitment. Care ethics holds that it is the engineer's responsibility to respect the sustainability of the environment, human life and contributions to engineering practice. This ethical approach works on the basis that people learn norms and values by encountering concrete people with emotions. As ML is designed by humans, its algorithms remain subject to the "algorithmic bias" mentioned previously; if such bias is not caught during testing and re-training, human life could be adversely affected. An example occurred when Amazon set up a team of specialists to develop computer models to aid the search for top candidates. The model was trained on resumes received over the previous ten years, most of which came from men, so the system taught itself that male candidates were preferable. Some biases, such as gender and race, are easy to identify; others, like postcodes that serve as proxies for historically underprivileged neighbourhoods, are far more demanding to find. This is an ongoing risk for developers of recruitment ML such as HireVue, threatening both their reputation and that of the corporations using their services.
As the engineering industry is driven by efficiency and action, it leans on Kantian and utilitarian ethical reasoning. Thus, with the introduction of ML in recruitment, even less emphasis is placed on virtue ethics and the nature of the acting person. Two virtues of a morally responsible professional are informative communication and justice; this perspective places emphasis on the person and on what it means to be human. The training process for ML weighs tens of thousands of factors and underlying criteria, most of which cannot be disclosed, so an algorithm can pass or fail someone on secret criteria. Nor can it offer feedback or per-criterion scores, whatever the outcome for the applicant. This black-box phenomenon means candidates no longer receive feedback and are left unaware of the metrics for success.
As an example, professions such as law, medicine and engineering require four years of study plus an additional period for further qualifications or relevant experience. It could be argued that such individuals prefer a "quality over quantity" approach to job searching and require more personal feedback and interaction. Furthermore, there are worries that companies using ML have developed "cyber-snooping capabilities" and reject people based on personal online activity such as Spotify playlists or Netflix viewing history. To enable participants to aim for success, software companies must become more transparent and reveal the underlying logic.
Evolution not revolution
ML in recruitment is still in its infancy, with many problems yet to be understood and resolved. On the whole, we agree that issues remain, but with continued monitoring and the introduction of appropriate legislation, ML could play a larger role in the recruitment process. If human bias can be eliminated, the end result will benefit the many, as the utilitarian viewpoint dictates.