Engineering – You’re Hired? The Machine Learning Revolution

Group 59

Most companies now feature video interviews as part of their recruitment process to evaluate potential candidates. These interviews are increasingly assessed by machine learning (ML). In this context, ML applies algorithms that quantitatively identify successful features of human behaviour as observed in the video interview. For example, features such as facial expressions, posture and eye contact can be used to gauge communication skills. ML quantifies such features by learning from past examples of what counts as a ‘good’ or ‘bad’ candidate, a collection often referred to as the training data. In its current form, ML is not capable of making a hiring decision independently; instead, it is used as guidance, indicating how the candidate scored on the characteristics the algorithm deems important.
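To make the idea concrete, here is a toy sketch of the mechanism just described. Everything in it is our own illustration, not any vendor’s actual method: the three features, the sample data and the nearest-centroid scorer are invented. Past candidates labelled ‘good’ or ‘bad’ form the training data, and a new candidate is scored by which group they most resemble.

```python
def train_centroids(examples):
    """Average the feature vectors of past 'good' and 'bad' candidates."""
    centroids = {}
    for label in ("good", "bad"):
        rows = [feats for feats, lab in examples if lab == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def score(candidate, centroids):
    """Label a new candidate by the nearest centroid (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(candidate, centroids[lab]))

# Hypothetical training data: (eye contact ratio, smiles/min, words/min)
past = [
    ([0.8, 1.2, 150], "good"),
    ([0.7, 0.9, 140], "good"),
    ([0.2, 0.1, 80],  "bad"),
    ([0.3, 0.2, 90],  "bad"),
]
model = train_centroids(past)
print(score([0.75, 1.0, 145], model))  # a candidate resembling past 'good' hires
```

Real systems use far richer models and thousands of features, but the shape is the same: past labelled examples define what ‘good’ looks like, and new candidates are scored against that.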

Should an algorithm have an impact on whether you get employed?

Machines are learning, humans are not

While human recruiters evaluate candidates on features such as vocalisation, they hold qualitative definitions of these features rather than the quantitative ones ML algorithms use. For example, one feature linked to success in a technical support role is speed of speech. A human recruiter evaluates this by “gut instinct”; an algorithm measures it in words spoken per minute. Humans are also more easily distracted by factors unrelated to job success, which leads to unconscious bias. In contrast, companies building ML algorithms employ teams of psychologists and data scientists working together to identify the features that genuinely predict job success.
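The quantitative definition is what makes the feature measurable at all. A minimal sketch of the words-per-minute example (assuming a transcript and a known interview duration, both hypothetical inputs):

```python
def words_per_minute(transcript: str, duration_seconds: float) -> float:
    """Speech speed as words per minute: the quantitative proxy for
    the recruiter's qualitative sense of 'how fast they talk'."""
    word_count = len(transcript.split())
    return word_count * 60.0 / duration_seconds

# 30 words spoken over 12 seconds -> 150 words per minute
sample = " ".join(["word"] * 30)
print(words_per_minute(sample, 12))  # 150.0
```

The point is not the arithmetic but the contrast: the same feature a human judges by feel becomes a number an algorithm can compare across every candidate identically.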

To tackle algorithmic bias, which is bias inherited from the training data, the contributing factors are identified and removed. The model is then re-trained on the modified data, eliminating the bias and with it any adverse impact on the subset of people it previously affected. For example, HireVue uses different datasets depending on cultural background to ensure there is no bias in the interviewing process. As a result, the recruitment process is far more quantitative, consistent and unbiased. Immanuel Kant, the founder of Kantian ethics, defined the universality principle, which holds that actions that can become universal law are inherently ethical. Employing people irrespective of their race, gender, age, sexual orientation and so on, purely on merit, is part of the labour law of many countries. ML algorithms that eliminate unconscious bias therefore reinforce anti-discrimination laws, which can be interpreted as a universal law in the sense of Kant’s categorical imperative.
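The first step of that de-biasing loop, removing the contributing factors before re-training, can be sketched as follows. The column names and data are hypothetical, and real pipelines must also hunt down proxy variables (a postcode column can leak the very bias a removed attribute carried):

```python
def drop_features(rows, feature_names, to_remove):
    """Return (cleaned_rows, cleaned_names) with the biased columns removed,
    ready to be handed back for re-training."""
    keep = [i for i, name in enumerate(feature_names) if name not in to_remove]
    cleaned = [[row[i] for i in keep] for row in rows]
    return cleaned, [feature_names[i] for i in keep]

names = ["words_per_minute", "eye_contact", "gender", "postcode"]
data = [[150, 0.8, 1, 4001],
        [90,  0.3, 0, 7204]]
cleaned, cleaned_names = drop_features(data, names, {"gender", "postcode"})
print(cleaned_names)  # ['words_per_minute', 'eye_contact']
print(cleaned)        # [[150, 0.8], [90, 0.3]]
```

Dropping columns is the easy part; identifying which columns (and combinations of columns) actually contribute the bias is the hard, ongoing work the next section returns to.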

Figure 1 compares the recruitment process with and without ML. The main differences are the replacement of the ATS (Applicant Tracking System) screen with a CRM (Customer Relationship Management) system targeted specifically at recruitment, and the introduction of an AI-driven assessment and video interview in place of psychometric tests and a phone interview. The advantages of the ML recruitment system are faster hiring, better candidate selection and a better experience for potential candidates; industry results supporting these can be found in the reports published by Unilever and Virgin Media. From a utilitarian point of view, if the consequences of an action are positive for the majority, the action is deemed ethical, which is clearly the case with an ML-based recruitment process.

Figure 1: Comparing the recruitment process with and without ML.

A crucial aspect of the traditional recruitment process that is hard to quantify is the recruiter’s success rate. In contrast, bias and accuracy are easily measured in ML algorithms and are tracked continuously to ensure the algorithm produces accurate results. This gives ML-based recruitment another advantage: its accuracy can be measured and improved over time.
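What such tracking might look like, in a deliberately simplified sketch: accuracy against known outcomes, plus one common bias check, the adverse-impact (four-fifths) ratio comparing selection rates between groups. The data and the 0.8 threshold are illustrative assumptions, not any vendor’s actual metrics.

```python
def accuracy(predictions, outcomes):
    """Fraction of predictions matching the known outcomes."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

def adverse_impact_ratio(decisions, groups, scrutinised, reference):
    """Selection rate of the scrutinised group divided by the reference
    group's rate; values below 0.8 conventionally warrant review."""
    def rate(g):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(picks) / len(picks)
    return rate(scrutinised) / rate(reference)

preds  = [1, 1, 0, 0, 1, 0]   # 1 = advanced to next round
actual = [1, 0, 0, 0, 1, 0]   # known good/bad outcomes for past candidates
groups = ["a", "a", "a", "b", "b", "b"]

print(accuracy(preds, actual))                        # ~0.833
ratio = adverse_impact_ratio(preds, groups, "b", "a")
print(ratio, "flag for review" if ratio < 0.8 else "ok")
```

Because both numbers come out of every batch of decisions, they can be monitored release over release, which is exactly the advantage over an unmeasured human recruiter.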

Risk of the unknown

The concept of care ethics, however, can be employed in the argument for limiting the use of ML in recruitment. Care ethics holds that it is the engineer’s responsibility to respect the sustainability of the environment, human life and contribution to engineering practice, on the basis that people learn norms and values by encountering concrete people with emotions. As ML is designed by humans, its algorithms are subject to the “algorithmic bias” mentioned previously, and if a bias is not found during testing and re-training, human lives can be adversely affected. An example of this occurred when Amazon set up a team of specialists to develop computer models to aid the search for top candidates. The model was trained on resumes received over the previous ten years, most of which came from men, so the system taught itself that men were the more desirable candidates. Some biases, such as gender and race, are easy to identify; others, like postcodes that discriminate against historically underprivileged neighbourhoods, are much harder to detect. This is an ongoing risk for developers of ML recruitment algorithms such as HireVue, threatening their reputation and that of the corporations using their services.

As the engineering industry is driven by efficiency and action, it leans on Kantian and utilitarian ethical reasoning. With the introduction of ML in recruitment, even less emphasis is placed on virtue ethics and the nature of the acting person. Two of the virtues of a morally responsible professional are informative communication and justice; this ethical perspective emphasises the person and what it means to be human. The training process for ML weighs tens of thousands of factors and underlying criteria, most of which cannot be disclosed. An algorithm can therefore pass or fail someone on secret criteria, and it offers no feedback or per-criterion scores, whether or not the applicant succeeds. This black-box phenomenon means candidates no longer receive feedback and are unaware of the metrics on which they were judged.

As an example, professions such as law, medicine and engineering require four years of study plus an additional period for further qualifications or relevant experience. It could be argued that such individuals prefer a “quality over quantity” approach to job searching and require more personal feedback and interaction. Furthermore, there are worries that companies using ML have developed “cyber-snooping capabilities” and reject people based on personal online activity such as Spotify playlists or Netflix histories. To enable participants to aim for success, software companies must become more transparent and reveal the underlying logic.

Evolution not revolution

ML in recruitment is still in its infancy, with many problems yet to be understood and resolved. On the whole, we agree that issues remain, but with continued monitoring and the introduction of appropriate legislation, ML would be capable of playing a larger role in the recruitment process. If human bias can be eliminated, the end goal will be beneficial for the many, as the utilitarian viewpoint dictates.

22 thoughts on “Engineering – You’re Hired? The Machine Learning Revolution”

  1. Some very interesting points! I agree that using machine learning could be the most efficient and quickest way of finding potential candidates. However, I disagree with leaving out human opinion and emotion altogether, as each individual’s circumstances are different: an excellent candidate may accidentally trip over their words or may have a stammer, and confidence can be shown in ways other than posture and word speed. I also believe it could get quite controversial if machine learning begins to discriminate by gender or area of residence. A company should embrace diversity, give everyone an equal chance based on their individual circumstances, and hire people based on how hard they are willing to work for the company rather than simply a confident facade. 🙂

  2. A highly informative article that sheds light on a topic that is not discussed enough. I agree with the argument about algorithmic bias, which is perfectly represented by the Amazon example. I believe that machine learning is only as good as the person who designed it. Hence, it is very important not only that the machine learns from the data it receives, but that humans learn from the machine as well. After all, what distinguishes humans from machines is the element of creativity, and we should use that to further improve machine learning.

    One topic that intrigues me is human emotion. For instance, a candidate can be stressed because they are applying for their dream job and want to do well, while another may be stressed because of a lack of self-confidence. How can machine learning distinguish between the two? Would a human do a better job at distinguishing them?

    Finally, how can we address an issue like disability? If we do not factor it in, that puts some candidates at a serious disadvantage. This is the exact antithesis of Labour and Anti-Discrimination laws.

  3. Your article is quite thought-provoking. It is very interesting to see how our world is becoming dependent on computers. However, as you said, computers are programmed by humans, so they cannot be totally neutral; I cannot imagine completely neutral computers anytime soon.
    When someone is being recruited, emotions play a big role as well. Reducing humans to a number of words per minute doesn’t completely make sense in my opinion. If the machine counts words per minute, it may not take into consideration the emotion or intonation of the speaker (which are fundamental in speech).
    Your article reminds me that humans must continue to interact with each other and use their senses more, rather than rely on computers to tell them what to do.

  4. I think it’s fair to say that while the interviewer looks to get a feel of a potential hire’s capabilities and behaviour, the candidate also tries to gauge what working with the person on the other side of the table or screen would be like. In this way I feel ML-conducted interviews are limited and limiting.

    That said, to the extent that ML-run interviews reduce bias in hiring there are considerable positives.

  5. Very interesting read. It’s a frightening concept because it asks whether machines should dictate the future of a person. Machine learning seems very effective for determining the technical qualities of an employee, but could algorithms truly judge a person’s behaviour in the workplace? These algorithms would reflect their programmer, and is there such a thing as the perfect employee? This may also hinder those who thrive on human interaction in an interview and would not necessarily display their qualities when talking to a reflection of themselves on a screen at home. Humans can subconsciously decipher a lot of information from micro-expressions and other cues; could machines do the same? The video interview is efficient and fair, but when the opportunity to go off topic arises in a human interview, I believe many qualities can be tested or discovered, as opposed to a one-track video interview.

  6. Great article, which in my opinion would give some former interview candidates the answer to “Why didn’t they choose me?” or “Why didn’t they bother to give me feedback?”.

    As a computer scientist, I agree that ML is not yet able to provide an unbiased, state-of-the-art system that could facilitate the optimal recruitment by itself, or even the majority of the process.

    Nevertheless, can the ML applied to video interviews be compared to the other tools that companies are already using (e.g. automatic CV-reading, logic and reasoning tests, mathematical exams etc.)? Absolutely.

    Virtue ethics has been neglected for a while, as these kinds of tools and tests have been used for years. In their search for standardisation, their creators took the personal factor out of the interviewing process; ML therefore cannot take the blame as the trend-starter in this direction.

    In a way, the employers need to be understood as well, as they are dealing with thousands of applications. The biggest concern in my opinion should be the hacking of the ML system and its consequences.

    When people realised that their resumes were being thrown into an algorithm searching for specific keywords, they started putting those keywords in white font on a white background. Therefore, when the “successful” CV got printed before a one-to-one interview, the “sneaky” keywords were nowhere to be found, yet the candidate had reached that step.

    Can this happen to ML? Would repeatedly saying keywords towards the camera and smiling frantically “just do it”? How much human input is required? Is it enough to make such a system simply not worth it?

    Most importantly… does anyone care? As long as you tick the boxes, does the employer care how you passed the test, or is it “all about the numbers”? I think this is where the effort should go. If employers, candidates and institutions don’t regulate, or at least tread lightly around the subject, we might find our desired meritocratic system endangered.

  7. Using ML in interview process is a terrible idea. It was shown that Amazon’s Rekognition has a strong bias. It accurately identifies white men. Anyone other than that? Good luck. This kind of system would immediately run afoul of the ADA because it will discriminate automatically against people with disabilities. Some disabilities such as Autism Spectrum Disorder make it difficult to make eye contact. In some cultures making persistent eye contact is considered inappropriate, i.e. you’re being defiant. With these things considered (and others I haven’t thought of right this minute), I think that AI has no place in job interviews whatsoever. Leave AI to develop satellite data processing algorithms, play with storytelling, help medical diagnoses, but keep it out of human-human interactions.

  8. Interesting article, definitely worth the read, especially given the current technological breakthroughs. I feel this sort of subject needs more attention so that the general population can understand both the opportunity and the threat when dealing with AI.
    To answer the question: yes, an algorithm should have an impact on the recruitment process. As mentioned in the article, an ML assessment could be introduced into a video interview. The algorithm could look at subtle cues that an interviewer cannot pick up, such as nonverbal communication. Indeed, a machine might be better at recognising when a candidate is lying, drawing on a dataset indicating that liars manipulate their jaw and tend to look to the left, while the recruiter may have read the same information in a book but failed to notice the cue.
    However, the threats are significant. First, the system may carry biases, as in the Amazon example, and fail to identify the right candidates. Second, each candidate acts differently under emotion, and certain performances improve under pressure; how can you train an algorithm to identify that? Third, an algorithm that hears someone with a form of speech impairment may provide an inaccurate assessment of the candidate.
    Finally, it is worth noting that neither humans nor algorithms can likely predict and measure, during the recruitment process, how a candidate will perform on the job.

  9. A really interesting and concerning read at the same time. In my opinion, machine learning should not replace the human factor, and there are multiple reasons to back my statement.

    I work as a team leader in a digital marketing company, so I have conducted several interviews in my career so far. When having to choose between two candidates with similar resumes and capabilities, I always listen to my gut instinct. So far it has never failed me, as I take into consideration multiple factors which I believe ML cannot analyse: is this person compatible with our company culture? Will this person fit into our team? Is there any chemistry between myself and the person being interviewed for a position in my team? In my opinion, if an employee feels out of place or left out in a company, his or her performance will not be the desired one, and frustration may appear on both sides. I don’t see how ML can determine these factors, which is why I consider that the face-to-face interview should be retained.

    From my experience conducting interviews, I’ve noticed that you can tell a lot about people from their micro facial expressions, gestures and body posture. I feel that machines, as developed as they are today, still would not be able to decipher a human being better than a fellow human being can.

    In addition, I believe that feedback is essential to developing as a successful employee. Every interview one may attend is another lesson learnt, but how can someone learn if the points that could be improved are never presented? I strongly believe that feedback should be given to each candidate, as it offers them the chance to improve.

    However, I have to agree that ML can be used as a preliminary stage in an interview process. If, say, a considerable number of candidates apply for a position, video interviews can be used as a means to select only a few of them for the final stages. Still, the face-to-face interview should, in my opinion, be the key factor in the decision-making process.

  10. Whether we like it or not, the world is changing; you either adapt and overcome or, as harsh as it may sound, get left behind.
    My take is that machine learning in recruitment is going to become increasingly popular with big corporations, at first for two simple reasons: profit and consistency. Less overhead in the HR department means those financial resources can be allocated towards other goals, which in turn creates additional reward.
    Organisations with high numbers of employees have found a way to position themselves towards both internal and external stakeholders through a simple solution: company culture. For it to work, you must have consistency on certain “values” across all lines of business and all levels of employees. Here is where ML comes into play; I see it like a simple multiple-choice test: you pick the wrong answer, you fail, easy process.
    Let us use a real-world example with easy tells: the hospitality business. Sally from HR is interviewing somebody for a customer-facing role. The candidate smiles once at the beginning and once at the end, knowing those are the times when Sally pays the most attention. Sally might read that in different ways, but as a business, why take a chance on what Sally thinks and risk the consistency of your teams, consequently affecting your company culture? Set a minimum smile count of 5 during a 30-minute conversation, and whoever did not reach it picked the wrong “choice” on this test.
    Humanity has developed greatly in recent times and we have mostly embraced it; however, this “machine learning” seems to scare us because it may “replace us”. That is a statement I have read and heard a lot about, but historically it is wrong: not only have we managed to turn what we created to the greater good, we have also made great use of our surrounding environment.

  11. Very interesting article! Just a couple of thoughts.

    I can see this becoming a potential tool for larger companies where the candidates will interact with a large sample of colleagues if hired. However, for smaller firms where candidates will work closely with their recruiter after being hired, a positive “gut-feeling” (even if without logical grounds) would possibly be an advantage in ensuring the team runs smoothly.

    Some of the previous comments also point out the importance of feedback to candidates in interviews and the fact that interviews also consist of “selling” the company to the candidate. It seems ML might have a difficult time getting around these points.

    Finally, on the point that ML might base its judgement on race, gender, or social standing: is it not sufficient to eliminate those factors by restricting the program’s inputs, and thereby make a genuinely less biased decision (or at least one unbiased with respect to those points)?

  12. Thank you for an article which combines multiple credible sources to summarise one key point: whether we want it or not, for various economic, political and personal reasons, ML will be the way forward.

    Coming from a technical background, I hear a lot about ML in test and validation departments, where post-processing is necessary for generating different analyses. Seeing these concepts being picked up in other industries such as HR excites me.

    On the other hand, I feel that with all these buzzwords everyone talks about, some industries feel pressure to adopt them, and in the race to become the most efficient HR department they start missing the human element. A key element in any HR or recruitment work is interaction: people in HR develop an emotional intelligence that helps them judge the abilities of the people they interview. However, like any human activity, this can become subjective, and mistakes can be made.

    To conclude, I strongly believe that when it comes to recruitment the human element should be the number one priority, but machine learning should not be neglected.

  13. There are clearly potential benefits from using ML in the recruitment process; however, it seems to be limited by the training data available, and if the system is a black box there is very little way to fairly assess whether the ML is well trained. This could lead to good candidates being rejected, which would be unethical both from a Kantian perspective, for the candidate, and from a utilitarian perspective, as it would harm the greater good of the company.
    Care ethics also states that there is fundamental moral value in the relationships of human life; the more a recruitment process uses ML, the less room there is for an interpersonal relationship between candidate and recruiter. This argument suggests that it would be morally wrong to use ML in recruitment, and more so the more it is used. There is also the argument that, no matter how well the ML is trained, while the recruiter may get to see what the candidate is like, the candidate has no such opportunity, which can be an important factor should they be successful and have to decide whether they want to work for the company. Another concern is that details about a candidate may be lost, meaning the company loses out on a good candidate: a very strong applicant might not have the best grades for reasons such as a family problem, and would be excluded prematurely. This would be detrimental to both recruiter and candidate, and highlights why care ethics may in fact have a role in the engineering sector, especially as society develops and people become more ethically aware, which through derived demand can affect a company.
    So it appears to me that, with its implementation carefully limited to the initial stages of the process when there is a very high volume of applicants, it could be ethically right to use ML in recruitment, assuming all bias is eliminated and that this can be checked by an external party. To expand it beyond this would be ethically wrong and detrimental to all parties involved. Given the lack of truly unbiased ML systems, and in some cases a lack of awareness among developers of the ethical and social issues (there are entire degrees dedicated to understanding these issues; one should not expect developers to have complete awareness, so experts who do need to be involved in the design), much development is needed before a comprehensive ML recruitment system can be ethically implemented.

  14. I agree that ML is the way forward in this field because of all the advantages mentioned, speed in particular. However, I believe that only a well-trained, almost error-proof model should be used. The critical algorithmic issues will be solved in the future, and ML should not be fully implemented while they still exist.

    There are other, older recruitment methods that have already taken out the human factor, for example psychometric tests designed to assign the candidate a score based on the strengths the company is looking for. If the score is below a baseline, the candidate is rejected, receiving feedback. A similar approach should be found for the whole application.

    Advancing to ML analysing the whole application is a natural step in this development, but one that needs to be taken carefully. Regarding video interview analysis, I consider that ML should be used now to provide insight and organised data (e.g. speech speed) that would be hard for a human to detect, but it should not decide whether a candidate is rejected without being checked by a person.

    The moment ML can be safely used as a recruitment method will come once we are sure it uses exclusively relevant factors in its decisions, leaving out race, gender or personal online activities such as Spotify playlists. At that point, the process would be transparent, and feedback could be given to candidates. To reach it, engineers must make sure bias is removed by examining the input and output data from the model’s training. The risk of the unknown is, in my opinion, too big to take: implemented without transparency, ML would probably make quick and good decisions, but the possibility that they are unethical cannot be afforded.

    The way I see it, an efficient ML recruitment process would look for specific candidate strengths decided by the employer, then make its decision on the whole application by assigning scores to each competence. It would also be able to justify the decision, so that the means of making it are legal and reasonable, and the candidate can receive feedback.

  15. This is a very interesting concept, taking out, or at least minimising, the potential issues around one human assessing another.

    However, from personal experience, the role of the recruiter is generally to explain the role, salary and so on, assess the candidate’s suitability for the role (“have you done x or y before?”) and answer questions. It’s then a face-to-face interview with a hiring manager and potentially some technical assessment of the candidate’s skills. These are all soft skills which humans are perfect for.

    When we use ML to assess candidates, we need to be very careful not to introduce a new set of biases. It is now what the AI has “learned” from other candidates rather than what a human “thinks” they know. What we should be doing is taking each candidate as an individual rather than judging them through an algorithm’s or a human’s learned biases.

    Where the technology could be used is as described in the post: speech-pattern assessment for clarity, CV reviewing and so on. But recruitment is about the suitability of one human to work with another. If managers are biased, maybe that’s a better problem to fix?

    A study in how not to do it is the current US court system: some states use AI to assess candidates for bail, which was initially designed without bias but soon picked up patterns from empirical evidence that led to bias against individuals, due to circumstances beyond their control.

  16. I think machine learning has the potential to be used in conjunction with traditional recruitment methods; using it as the only tool to recruit people, however, could be counterproductive.

    Whilst factors such as speed of speech and eye contact (which ML can measure quantitatively) do suggest that a candidate is likely to be more successful, they can draw attention away from more important qualities, especially in technical roles. For example, a skilled engineer may lack the traits ML deems desirable, yet their ideas and creativity could be of real benefit to the company. Instead of using ML to eliminate them, it could be used to identify the softer skills they need to work on.

    Algorithmic bias is also a problem, not just ethically but in terms of benefit to the company. The purpose of ML in recruitment should be to identify the best candidates (not, in my opinion, to speed up the process as outlined in Figure 1; I see that as an additional benefit, not the main goal), yet it may let vast numbers of qualified candidates slip through the net because of their gender, race or location.

    In conclusion, I feel ML can be used to give employers an alternative view of a candidate to help them make a decision. I think leaving the entirety of the recruitment process to ML is far riskier.

  17. A very informative article, giving a comprehensive view of the evolution of recruitment processes. Although I understand the search for efficiency, I fear that the described approach contributes more to reducing individuals to mere tools than it helps select candidates with humanistic values. Considering the environmental impact of our society’s drive to constantly increase speed and produce inordinate amounts of goods, it sets, in my view, a dangerous precedent to reduce the importance of ethics when choosing candidates. Moreover, certain skills (creativity, teamwork) are tremendously harder to quantify and interpret using numerical tools, and might also suffer under such recruitment metrics.
    Nevertheless, I am confident that, if used transparently and purposed only to advise, ML could contribute to more efficient decision-making.

  18. Word speed is likely to be faster in an interview because the interviewee is nervous. In recruitment, it is much easier to recruit someone than to dismiss them, so I’d argue that taking more time to ensure the best decision is more appropriate.
    Those are just personal opinions.

    This is a good article; I hadn’t heard of ML before so thanks for educating me.

  19. Although I appreciate the benefits of ML and efforts that are going into perfecting this hiring system, I personally do not think an algorithm on its own should impact whether or not you are qualified/well-suited for a job. In my field of work, personality and building a good working relationship with your colleagues/boss and clients are the most important considerations during the hiring process. Regardless of the intricacy of the training data, I do not think a machine can judge personality and ‘desired traits’ better than a human recruiter can.

    Furthermore, being rejected and learning from the feedback given after an interview is such a normal and important part of job searching. Therefore, the idea that ML-based recruitment would not offer any sort of feedback to unsuccessful applicants is to me one of its biggest stumbling blocks. I am all for using ML as guidance to assist with the hiring process, as I do recognise its benefits in speeding up the hiring process and eliminating unconscious bias, but I do not think recruitment should be done solely with this system.

  20. Brilliant blog. So interesting. Had no idea how much influence algorithms and coding had on a person’s future. Can something so mechanical, so logistical, so numerical read emotions and expressions which are so natural, unique and pure? Then again, who is really themselves truly in an interview? Could a computer read into a person’s face better than a human could? All very exciting, but also worrying.

  21. I am strongly against this method. Using technology for a HUMAN resource practice is not the way forward. It makes the recruitment process lose its human touch and could potentially filter out candidates due to system malfunction. Even though it could reduce negative aspects of HRM, e.g. discrimination, the use of technology is very unreliable. Factors such as the human intuition of the hiring personnel are potentially lost with this process.

    However, with an increasingly difficult business environment, the use of this method could reduce hiring costs.

  22. Having read the replies so far posted, I find that most of what I would have to contribute has already been stated. However, just some thoughts on the human condition:

    It is a natural state of all advanced sentient life, not just human, to have a bias towards its own kind, from immediate family to wider family, local community and wider community.

    We humans have a natural affinity to those with whom we live, work, and play.

    Given that our lives are governed increasingly by large governments and large corporations, is it desirable to give those entities yet more power to reduce people from persons to numbers, or to take natural human personality and emotion out of consideration when assessing an individual’s place in the workplace or any aspect of wider society?

    Whilst it is of course desirable to eliminate unfair biases and to allow every person, as far as possible, to develop their potential, I do not think that a computer algorithm designed by a corporate programmer is the way to achieve that objective. If human relationships in the workplace are governed not by human-to-human interaction but by AI, then we are reduced to ciphers rather than people.

    Given that we are at the beginning of the age of AI/ML, we need to think very seriously about where this is leading. If ML governs recruitment, does it then go on to monitor performance and determine outcomes? If ML is developed to the point where it really does “learn to learn”, then it could become much more than a number-crunching aid. Could it actually become a superior intelligence to whom the human race is subservient? This sounds very fanciful, but a great deal of the world we live in now would have seemed impossible only a short while ago.
