Engineering – You’re Hired? The Machine Learning Revolution

Group 59

Most companies now feature video interviews as part of their recruitment process to evaluate potential candidates. These interviews are increasingly being assessed by machine learning (ML). In this context, ML applies algorithms to quantitatively identify successful features in human behaviour as observed in the video interview. For example, features such as facial expressions, posture and eye contact can be used to gauge communication skills. ML quantifies such features by learning from past examples of candidates labelled ‘good’ or ‘bad’; this labelled history is referred to as the training data. While in its current form ML is not capable of making a hiring decision independently, it is often used as guidance, indicating how the candidate has scored on characteristics deemed important by the algorithm.

Should an algorithm have an impact on whether you get employed?

Machines are learning, humans are not

While human recruiters evaluate potential candidates based on features such as vocalisations, humans have a qualitative definition of these features rather than the quantitative one that ML algorithms use. For example, one of the features linked to job success in a technical support role is speed of speech. While a human recruiter evaluates this by “gut instinct”, an algorithm measures it in words spoken per minute. Additionally, humans have a higher tendency to be distracted by factors unrelated to job success, leading to unconscious bias. In contrast, companies building ML algorithms have teams of psychologists and data scientists working together to identify features that genuinely link to job success.
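To make the contrast concrete, a “speed of speech” feature could be extracted from an interview transcript along the following lines. This is a minimal sketch under our own assumptions; the function and variable names are illustrative, not any vendor’s actual API:

```python
# Minimal sketch of quantifying "speech speed" as words per minute.
# Names and values are illustrative only, not a real vendor pipeline.

def words_per_minute(transcript: str, duration_seconds: float) -> float:
    """Quantify speech speed as words spoken per minute."""
    word_count = len(transcript.split())
    return word_count / (duration_seconds / 60.0)

# Example: 330 words spoken over a 2-minute answer.
answer = " ".join(["word"] * 330)
print(words_per_minute(answer, 120.0))  # 165.0
```

Where a recruiter reports a gut feeling, the algorithm reports a single reproducible number that can be compared across every candidate.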

To tackle algorithmic bias, i.e. bias that exists in the training data, the contributing factors are identified and removed. The model is then re-trained on the modified data, eliminating the bias and hence any adverse impact on the specific subset of people it previously affected. For example, HireVue uses different datasets depending on cultural background to ensure there is no bias in the interviewing process. As a result, the recruitment process becomes far more quantitative, consistent and unbiased. Immanuel Kant, the founder of Kantian ethics, defined the universality principle, which states that only actions that could become universal law are ethical. Employing people irrespective of their race, gender, age, sexual orientation and so on, purely on merit, is part of labour law in many countries. Using ML algorithms that eliminate unconscious bias reinforces these anti-discrimination laws, which can themselves be interpreted as a universal law in the sense of Kant’s categorical imperative.
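The “identify and remove” step described above can be sketched in a few lines, assuming a simple tabular dataset of candidate features. The dataset, feature names and helper are invented for illustration; a real system would retrain its model on the cleaned data afterwards:

```python
# Sketch of mitigating algorithmic bias: a feature found to act as a
# proxy for a protected attribute is removed before the model is
# re-trained. All data and names here are invented for illustration.

training_data = [
    {"speech_wpm": 150, "eye_contact": 0.8, "postcode": "S1", "hired": 1},
    {"speech_wpm": 120, "eye_contact": 0.6, "postcode": "S9", "hired": 0},
    {"speech_wpm": 160, "eye_contact": 0.9, "postcode": "S1", "hired": 1},
]

def drop_feature(rows, feature):
    """Remove a feature identified as a contributing factor to bias."""
    return [{k: v for k, v in row.items() if k != feature} for row in rows]

# A postcode can proxy for a historically underprivileged neighbourhood,
# so it is taken out and the model is re-trained on the modified data.
debiased = drop_feature(training_data, "postcode")
print(sorted(debiased[0]))  # ['eye_contact', 'hired', 'speech_wpm']
```

The hard part, as discussed later in the article, is noticing that a feature like a postcode is a proxy in the first place; removing it is mechanically trivial.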

Figure 1 compares the recruitment process with and without ML. The main differences are the replacement of the ATS (Applicant Tracking System) screen with a CRM (Customer Relationship Management) system targeted specifically at recruitment, and the introduction of an AI-driven assessment and video interview instead of psychometric tests and a phone interview. The advantages of the ML recruitment system are increased speed of hire, better-matched candidates and a better experience for applicants. Industry results can be found in reports published by Unilever and Virgin Media. From a utilitarian point of view, if the consequences of an action are positive for the majority, then the action is deemed ethical, which is clearly the case with the ML-based recruitment process.

Figure 1: Comparing the recruitment process with and without ML.

A crucial aspect of the traditional recruitment process that is hard to quantify is the success rate of the recruiter. In contrast, bias and accuracy are two easily measured factors in ML algorithms, and they are constantly tracked to ensure that the algorithm produces accurate results. This gives ML-based recruitment processes another advantage, since their accuracy can be measured and improved over time.
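Such tracking can be as simple as recomputing accuracy on a held-out evaluation set after each retraining cycle. The sketch below uses invented prediction and label values purely to show the bookkeeping:

```python
# Sketch of monitoring a model's accuracy over retraining cycles.
# Predictions and labels would come from a held-out evaluation set;
# the values below are invented for illustration.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true hiring outcomes."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

history = []
for predictions, labels in [
    ([1, 0, 1, 1], [1, 0, 0, 1]),   # first evaluation cycle
    ([1, 0, 1, 1], [1, 0, 1, 1]),   # after re-training
]:
    history.append(accuracy(predictions, labels))

print(history)  # [0.75, 1.0]
```

A recruiter’s “success rate” has no equivalent of this `history` list, which is precisely the article’s point: the machine’s performance leaves an audit trail.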

Risk of the unknown

The concept of care ethics, however, can be employed in the argument for limiting the use of ML in recruitment. Care ethics holds that it is the engineer’s responsibility to respect the sustainability of the environment, human life and their contribution to engineering practice. This ethical approach works on the basis that people learn norms and values by encountering concrete people with emotions. As ML is designed by humans, its algorithms remain subject to the “algorithmic bias” mentioned previously; if such an issue is not found during the testing and re-training process, it could be argued that human life would be adversely affected. An example of this was when Amazon set up a team of specialists to develop computer models to aid the search for top candidates. The model was trained on resumes received over the previous ten years, most of which came from men; the system therefore taught itself that men were more desirable candidates. Some biases can be easily identified, for example gender and race, but others, such as postcodes that discriminate against historically underprivileged neighbourhoods, are much more demanding to find. This is an ongoing risk faced by developers of ML recruitment algorithms such as HireVue, one that threatens their reputation and can damage the corporations using their services.

As the engineering industry is driven by efficiency and action, it leans on Kantian and utilitarian ethical reasoning. Thus, with the introduction of ML in recruitment, even less emphasis is placed on virtue ethics and the nature of the acting person. Two of the virtues of a morally responsible professional are informative communication and justice, and this ethical perspective places emphasis on the person and what it means to be human. The training process for ML looks at tens of thousands of factors and underlying criteria, most of which cannot be disclosed. An algorithm can therefore pass or fail someone based on secret criteria, and it cannot offer feedback or per-criterion scores, regardless of whether the applicant succeeds. This black-box phenomenon means candidates no longer receive feedback and are unaware of the metrics on which they are judged.

As an example, professions such as law, medicine and engineering require four years of study plus an additional period for gaining further qualifications or relevant experience. It could be argued that such individuals prefer a “quality over quantity” approach in the job search and require more personal feedback and interaction. Furthermore, there are worries that companies using ML have developed “cyber-snooping capabilities” and are rejecting people based on personal online activity such as Spotify playlists or Netflix viewing history. To enable participants to aim for success, software companies must become more transparent and reveal the underlying logic.

Evolution not revolution

ML in recruitment is still in its infancy, with many problems to understand and resolve. As a whole, we agree that there are issues associated with it, but with continued monitoring and the introduction of appropriate legislation, ML would be capable of playing a larger role in the recruitment process. If human bias can be eliminated, the end goal will be beneficial for the many, as dictated by the utilitarian viewpoint.

58 thoughts on “Engineering – You’re Hired? The Machine Learning Revolution”

  1. Some very interesting points! I agree that using machine learning could be the most efficient and quickest way of finding potential candidates. However, I disagree with leaving out human opinion and emotion altogether, as each individual’s circumstances are different; for example, an excellent candidate may accidentally trip over their words or may have a stammer. Moreover, confidence can be shown in ways other than posture and word speed. I also believe it could get quite controversial if machine learning begins to discriminate based on gender or area of residence. The company should embrace diversity, give everyone an equal chance based on their individual circumstances, and hire people based on how hard they are willing to work for the company rather than simply a confident facade. 🙂

    1. Thank you very much for your comment @Jess. You’ve raised some excellent points. While it is difficult to evaluate confidence levels, a significant amount of research is being carried out to quantify them as best as possible.
      There is another danger in the risk of algorithmic bias, where, as you mention, gender bias could be “lost” within the algorithm. However, are interviewers today held to these same standards? Don’t we all have an unconscious bias the first time we meet someone? The challenge is getting it right in the setting-up stages.

  2. A highly informative article that sheds light on a topic that is not discussed enough. I agree with the argument of algorithmic bias, which is perfectly represented by the Amazon example. I believe that machine learning is only as good as the person who designed it. Hence, it is very important not only that the machine learns from the data it receives, but that humans learn from the machine as well. After all, what distinguishes humans from machines is the element of creativity, and we should use that to further improve machine learning.

    One topic that intrigues me is human emotion. For instance, a candidate can be stressed because they are applying for their dream job and want to do well, while other candidates may be stressed because of a lack of self-confidence. How can machine learning distinguish between the two? Would a human be more capable of distinguishing them?

    Finally, how can we address an issue like disability? If we do not factor it in, that puts some candidates at a serious disadvantage. This is the exact antithesis of Labour and Anti-Discrimination laws.

    1. Thank you very much for your comment @Sahl Abdelsayed. You’ve raised some very interesting points. As you correctly mentioned, machine learning is essentially a learning process, and the more accurate the input data, the better the result.

      Another point you raised was distinguishing between different types of stress. We agree with you that a human is more capable of doing so; however, if a human can, surely the algorithm could be taught? The important thing is knowing what criteria a candidate is evaluated on, to understand whether “stress” would impact the performance.

      We agree with you on your final point regarding disabilities. With current technologies, those with disabilities would be accommodated separately.

  3. Your article is quite thought-provoking. It is very interesting to see how our world is becoming dependent on computers. However, as you said, computers have been programmed by humans, so they cannot be totally neutral. I cannot imagine having completely neutral computers anytime soon.
    When someone is being recruited, emotions play a big role as well. Reducing humans to a number of words per minute doesn’t completely make sense in my opinion. If the machine counts the number of words per minute, it may not take into consideration the emotion or the intonation a person has (which are fundamental in a speech).
    Your article reminds me that humans must continue to interact with each other and use their senses more, rather than rely on computers to tell them what to do.

  4. I think it’s fair to say that while the interviewer looks to get a feel of a potential hire’s capabilities and behaviour, the candidate also tries to gauge what working with the person on the other side of the table or screen would be like. In this way I feel ML-conducted interviews are limited and limiting.

    That said, to the extent that ML-run interviews reduce bias in hiring there are considerable positives.

    1. Thanks for a really good input, foodforthought! You identified very well another limitation of ML – one-way interactions only. It is indeed limiting that a candidate can only find out more about a role or a company after several steps in the recruitment process. Thanks for sharing this!

  5. Very interesting read. It’s a frightening concept because it asks whether machines should dictate the future of a person. Using machine learning seems very effective for determining the technical qualities of an employee, but could algorithms truly judge a person’s behaviour in the workplace? Responsibility for these algorithms would lie with the programmer, and is there such a thing as the perfect employee? This may also hinder those who thrive off human interaction in an interview and would not necessarily display their qualities when talking to a reflection of themselves on a screen at home. Humans can subconsciously decipher a lot of information based on micro-expressions and various other methods. Could machines do the same? The video interview is efficient and fair, but when the opportunity to go off topic arises in a human interview, I believe many qualities can be tested or discovered, as opposed to a one-track video interview.

  6. Great article which in my opinion would give some former interview candidates the answer to “Why didn’t they choose me?” or “Why didn’t they bother to give me feedback?”.

    As a computer scientist, I agree that ML is not yet able to provide an unbiased, state-of-the-art system that would facilitate optimal recruitment by itself, or at least for the majority of the process.

    Nevertheless, can the ML applied to video interviews be compared to the other tools that companies are already using (e.g. auto CV-reading, logical & reasoning tests, mathematical exams etc.)? Absolutely.

    Virtue ethics has been neglected for a while, as these kinds of tools and tests have been used for years. In their search for standardisation, their creators have taken the personal factor out of the interviewing process; therefore ML cannot take the blame for being the trend starter in this direction.

    In a way, the employers need to be understood as well, as they are dealing with thousands of applications. The biggest concern in my opinion should be the hacking of the ML system and its consequences.

    When people realised that their resumes get thrown into an algorithm that is searching for specific keywords, they started putting those in white font on a white background. Therefore, when the “successful” CV got printed before a one-to-one interview, the “sneaky” keywords were nowhere to be found, yet the candidate got to this step.

    Can this happen to ML? Would repeatedly saying keywords towards the camera and smiling frantically “just do it”? How much human input is required? Is that enough to make such a system just not worth it?

    Most importantly… does anyone care? As long as you tick the boxes, does the employer care how you passed this test, or is it “all about the numbers”? I think this is where the efforts should be. If the employers, candidates and institutions don’t regulate or at least tread lightly around the subject, we might find our desired meritocratic system endangered.

    1. Thanks for your great input on the matter, aleksrusu!

      It is indeed amazing how people find innovative ways of tricking the system and I really liked your example on hidden keywords within the application documents. I guess this reinforces the limitations of machine learning algorithms in the recruitment process. Although it clearly makes the process much more efficient, a human recruiter could be better at identifying such anomalies in the process.

  7. Using ML in the interview process is a terrible idea. It has been shown that Amazon’s Rekognition has a strong bias: it accurately identifies white men. Anyone other than that? Good luck. This kind of system would immediately run afoul of the ADA because it will automatically discriminate against people with disabilities. Some disabilities, such as Autism Spectrum Disorder, make it difficult to make eye contact. In some cultures, making persistent eye contact is considered inappropriate, i.e. you’re being defiant. With these things considered (and others I haven’t thought of right this minute), I think that AI has no place in job interviews whatsoever. Leave AI to develop satellite data processing algorithms, play with storytelling, help medical diagnoses, but keep it out of human-human interactions.

  8. Interesting article, definitely worth the read, especially given the current technological breakthroughs. I feel this sort of subject needs more attention so that the general population can understand both the opportunity and the threat when dealing with AI.
    To answer the question: yes, an algorithm should have an impact on the recruitment process. As mentioned in this article, an ML assessment could be introduced in a video interview. The algorithm could look at the subtle clues that an interviewer cannot pick up on, such as non-verbal communication. Indeed, a machine would be better at recognising when a candidate is lying by pulling information from a dataset stating that liars manipulate their jaw and tend to look to the left; a recruiter may have read this in a book but not paid attention to the subtle clue.
    However, the threats are significant. First of all, the algorithm may be based on biases, as in the Amazon example, and fail to identify the right candidates. Second, each candidate acts differently under emotion and certain performances are improved; how can you train an algorithm to identify that? Third, an algorithm that hears someone with a form of speech impairment may provide an inaccurate assessment of the candidate.
    Finally, it is worth noting that neither humans nor algorithms would probably be able to predict and measure, during the recruitment process, how a candidate will perform on the job.

    1. Very interesting points here, frasinaa! I enjoyed reading your comment and all the good examples. I think an important conclusion from your ideas is the fact that both ML and people have their advantages and limitations when it comes to recruitment. In theory, the proposed combination of ML at the initial stages and people at further stages should tackle or mitigate as many limitations as possible. However, as you said, it is more effective to incorporate ML into people’s decision-making process rather than separating the two. It would be very interesting to see, for example, a virtual interview with a human recruiter who is offered live metrics on the candidate.

  9. A really interesting and concerning read at the same time. In my opinion machine learning should not replace the human factor and there are multiple reasons that could potentially back my statement.

    I work as a team leader in a digital marketing company, so I have conducted several interviews in my career so far. When having to choose between two candidates with similar resumes and capabilities, I always listen to my gut instinct. So far it has never failed me, as I’ve taken into consideration multiple factors which I believe ML cannot analyse: is this person compatible with our company culture? Will this person fit into our team? Is there any chemistry between myself and the person being interviewed for a position in my team? In my opinion, if an employee feels out of place or left out in a company, his or her performance will not be the desired one, and frustration may appear on both sides. I don’t see how ML can determine these factors, which is why I consider that the face-to-face interview should be held.

    From my experience in conducting interviews, I’ve noticed that you can tell a lot about a person judging by their micro facial expressions or by their gestures and body posture. I feel that machines, as developed as they are today, would still not be able to decipher a human being better than a fellow human being.

    In addition, I believe that feedback is essential when it comes to developing into a successful employee. Every interview one attends is another lesson learnt, but how can someone learn if the points that can be improved are not presented? I strongly believe that feedback should be given to each candidate, as it will offer them the chance to improve.

    However, I have to agree that ML can be used as a preliminary stage in an interview process. If, say, a considerable number of candidates are applying for a position, then video interviews can be used as a means to select only a few of them for the final stages. Still, the face-to-face interview should, in my opinion, be the key factor in the decision-making process.

    1. Thanks a lot for sharing your thoughts, Radu Nita! It was definitely a very interesting read, filled with great ideas to take forward. It is indeed impressive how gut feeling plays an important role in recruitment choices. A manager’s success will always be decided by the quality of the team around him and the chemistry between his employees. As most jobs nowadays rely on the efficient collaboration of people, it is essential to build a stable and reliable team. As you said, this is very difficult for algorithms to achieve. To address your final point, it is indeed very true that ML could greatly improve the efficiency of recruitment. Unfortunately, some candidates ideal for the team’s chemistry might get rejected at this stage.

  10. Whether we like it or not, the world is changing: you either adapt and overcome or, as harsh as it may sound, are left behind.
    My take is that machine learning in recruitment is going to become increasingly popular with big corporations, at first, for two simple reasons: profit & consistency. Less overhead in the HR departments means you can allocate those financial resources towards other goals, which in turn will create additional reward.
    Such organizations, with high numbers of employees, have thought of a way to position themselves towards both internal & external stakeholders through a simple solution: company culture. For it to work you must have consistency on certain “values” across all lines of business and all levels of employees. Here is where ML comes into play. I see it like a simple multiple-choice test: you pick the wrong answer, you fail, easy process.
    Let us use a real-world example with easy tells: the hospitality business. Sally from HR is interviewing somebody for a customer-facing role; the candidate smiles once at the beginning and once at the end, because the candidate knows those are the times when Sally is going to pay the most attention. Sally might read that in different ways, but as a business, why take the chance on what Sally thinks and risk the consistency of your teams, consequently affecting your company culture? Set a minimum smile count of 5 during a 30-minute conversation, and whoever did not do that, well, they picked the wrong “choice” on this test.
    Humanity has developed greatly in recent times and we have mostly embraced it; however, it seems this “machine learning” is something that scares us, as it may “replace us”. This is a statement I have read and heard a lot about, but historically it is wrong, as not only have we managed to turn what we created to the greater good, but we have also made great use of our surrounding environment.

    1. Thanks for a very valuable input, RobertBickford! As you said, nowadays the industry has developed in such a way that company metrics and financial results define performance. Similarly, the recruitment process needs to adapt, become more efficient and select the candidates most suitable for the company’s values and objectives. On the other hand, engineering and technology have often faced ethical debates, and in this case the relationship with candidates raises extensive ethical concerns. But, as you mentioned, as we continue to evolve and improve the processes we use, it is possible that ML recruitment will grow into a perfectly ethical apparatus used in every company on the globe.

  11. Very interesting article! Just a couple of thoughts.

    I can see this becoming a potential tool for larger companies where the candidates will interact with a large sample of colleagues if hired. However, for smaller firms where candidates will work closely with their recruiter after being hired, a positive “gut-feeling” (even if without logical grounds) would possibly be an advantage in ensuring the team runs smoothly.

    Some of the previous comments also point out the importance of feedback to candidates in interviews and the fact that interviews also consist of “selling” the company to the candidate. It seems ML might have a difficult time getting around these points.

    Finally, on the point that ML might base its judgement on race, gender, or social standing: is it not sufficient to simply eliminate those factors by restricting the inputs to the program, and therefore make a genuinely more unbiased decision (or at least unbiased concerning those points)?

  12. Thank you for such an article, which combines multiple credible sources into one key point: whether we want it or not, for various economic, political and personal reasons, ML will be the way moving forward.

    Coming from a technical background, I hear a lot about ML in test & validation departments, where post-processing is necessary for generating different analyses. Seeing that these concepts are being picked up in other industries such as HR excites me.

    On the other hand, I feel that, with all these buzzwords everyone talks about, some industries feel the pressure to adopt them, and in the race to become the no. 1 most efficient HR department they start to miss the human element. An element which is key in any HR/recruitment role is interaction. People in HR develop an emotional intelligence which helps them judge the abilities of the people they interview. However, as with any human action, this may become subjective and mistakes may be made.

    To conclude, I strongly believe that when it comes to recruitment the human element should be the number one priority, but machine learning should not be neglected.

    1. Thanks for your input MihneaTrifan! I really liked the points you mentioned about the developed emotional intelligence in recruiters. As you said, it seems like the human element is still the decisive factor because of these traits recruiters have. It would be interesting to see how the connection between algorithms and recruiters develops in the near future.

  13. There are clearly potential benefits from using ML in the recruitment process; however, it seems to be limited by the training data available, and if the system is a black box there is very little way to fairly assess whether the ML is well trained. This could lead to potentially good candidates being rejected, which would be unethical both from a Kantian perspective for the candidate and from a utilitarian perspective, as it would harm the greater good of the company.
    Care ethics also states there is fundamental moral value in the relationships in human life; the more a recruitment process uses ML, the less room there will be for interpersonal relationships between the candidate and the recruiter. This argument would suggest that it is morally wrong to use ML in the recruitment process, and more so the more it is used. There is also the argument that, no matter how well the ML is trained, while the recruiter may get to see what the candidate is like, the candidate has no such opportunity, which can be an important factor in the candidate’s decision, should they be successful, on whether they want to work for the company. Another factor is that details about a candidate may be lost, meaning the company loses out on a good candidate; for example, a very good candidate might not have the best grades for reasons (such as a family problem) that would see the candidate excluded prematurely. This would be detrimental to both the recruiter and the candidate, and highlights why care ethics may in fact have a role in the engineering sector, especially as society develops and people become more ethically aware, which through derived demand can affect a company.
    So it appears to me that, with its implementation carefully limited such that it is only used in the initial stages of the process when there is a very high volume of applicants, it could be ethically right to use ML recruitment, assuming all bias is eliminated and that this can be checked by an external party. However, to expand it beyond this would be ethically wrong and detrimental to all parties involved. Given the lack of truly unbiased ML systems, and possibly in some cases a lack of awareness among developers of such systems of ethical and social issues (there are entire degrees dedicated to understanding these issues; one should not expect developers to have a complete awareness, so experts who are aware need to be involved in the design), much development is needed before a comprehensive ML recruitment system can be ethically implemented.

  14. I agree that ML is the way forward in this field because of all the advantages mentioned, quickness in particular. However, I believe that only a well-trained model that is almost error-proof should be used. The critical algorithmic issues will be solved in the future, and therefore ML should not be fully implemented while they still exist.

    There are other, older recruitment methods that have already taken out the human factor. For example, designed psychometric tests assign the candidate a score depending on the strengths the company is looking for; if the score is below a baseline, the candidate is rejected and receives feedback. A similar approach should be found for the whole application.

    Advancing to ML analysing the whole application is a natural step in this development that needs to be taken carefully. Regarding video interview analysis, I consider that ML should be used now to give some insight and organised data (e.g. speech speed) that would be hard for a human to detect, but it should not decide whether a candidate is rejected without being checked by a person.

    The moment ML can be safely used as a recruitment method will be when we are sure it is using exclusively relevant factors in its decision, leaving out race, gender or personal online activities such as Spotify playlists. At that point, the process would be transparent, and feedback could be given to candidates. To reach that point, engineers must be sure bias is removed by looking at the input and output data from the model’s training. The risk of the unknown is, in my opinion, too big to be taken. If implemented without transparency, ML would probably make quick and good decisions, but the fact that they might be unethical cannot be afforded.

    The way I see it, an efficient ML recruitment process would look for specific candidate strengths, decided by the employer, then make a decision based on the whole application by assigning scores for each competence. It would also be able to justify the decision, so that the means of making it are legal and reasonable, and the candidate can receive feedback.

    1. Thanks for your valuable thoughts, dragos2727! I found your example on psychometric tests very interesting. As you said, ML might just become another step in assessing a candidate’s suitability for a role. Additionally, you reinforced the importance of feedback, avoiding the black-box phenomenon and removing unfair bias from the algorithm! Thanks for the inputs!

  15. This is a very interesting concept, taking out, or at least minimising, the potential issues around the impact of one human’s assessment of another.

    However, from personal experience, the role of the recruiter is generally to explain the role, salary, etc., assess the candidate’s suitability for the role (“have you done x or y before?”) and answer questions. It’s then a face-to-face interview with a hiring manager and potentially some technical assessment of the candidate’s skills. These are all soft skills which humans are perfect for.

    When we use ML for the assessment of candidates, we need to be very careful not to introduce a new set of biases. It’s now what the AI has “learned” from other candidates rather than what a human “thinks” they know. What we should be doing is taking each candidate as an individual, rather than judging them based on an algorithm’s or a human’s learned biases.

    Where the technology could be used is as described in the post: speech pattern assessment for clarity, CV reviewing, etc. But recruitment is about the suitability of one human to work with another. If managers are biased, maybe that’s a better problem to fix?

    A study of how not to do it is the current US court system, with some states using AI to assess defendants for bail. The system was initially designed without bias, but soon picked up patterns based on empirical evidence which led to bias against individuals, due to circumstances beyond their control.

    1. Thanks for the insights, Pete_UK_Systems, and for the very good examples you offered! I really liked your point on assessing candidates as individuals, rather than comparing them to historical interactions. I also found the example of the US court system very interesting. It proves that ML picks up not only the positive parameters from training material, but also negatively influencing factors such as bias.

  16. I think Machine Learning has the potential to be used in conjunction with traditional recruitment methods; however, being used as the only tool to recruit people could be counterproductive.

    Whilst factors such as speed of speech, eye contact, etc. (which can be measured quantitatively by ML) do give the impression that a candidate is likely to be more successful, they can take away from more important qualities, especially in technical roles. For example, a skilled engineer may not have the traits that ML would deem desirable, yet their ideas and creativity could be of real benefit to the company. Instead of using ML to eliminate such candidates, it could be used to identify the softer skills they need to work on.

    Algorithmic bias is also an issue, not just ethically but also in terms of benefit to the company. The purpose of ML in recruitment should be to identify the best candidates (not, in my opinion, to speed up the recruitment process as outlined in figure 1; I feel this is an additional benefit, not its main goal), yet it may result in vast numbers of qualified candidates slipping through the net due to their gender, race or location.

    In conclusion, I feel ML can be used to give employers an alternative view of a candidate to help them make a decision. I think leaving the entirety of the recruitment process to ML is far riskier.

  17. A very informative article, giving a comprehensive view of the evolution of recruitment processes. Although I understand the search for efficiency, I fear that the described approach contributes more to the reduction of individuals to mere tools than it helps to select candidates with humanistic values. Considering the environmental impact of our society’s drive to constantly increase speed and produce inconsiderate amounts of goods, it sets, in my view, a dangerous precedent to reduce the importance of ethics when choosing candidates. Moreover, certain skills (creativity, teamwork) are tremendously harder to quantify and interpret using numerical tools, and might also suffer under such recruitment metrics.
    Nevertheless, I am confident that if used with transparency and purposed only to advise, ML could contribute to more efficient decision-making.

  18. Word speed is likely to be faster in an interview because the interviewee is nervous. In the case of recruitment, it is much easier to recruit someone than to dismiss them, therefore I’d argue that taking more time to ensure the best decision is made is more appropriate.
    Those are just personal opinions.

    This is a good article; I hadn’t heard of ML before so thanks for educating me.

  19. Although I appreciate the benefits of ML and efforts that are going into perfecting this hiring system, I personally do not think an algorithm on its own should impact whether or not you are qualified/well-suited for a job. In my field of work, personality and building a good working relationship with your colleagues/boss and clients are the most important considerations during the hiring process. Regardless of the intricacy of the training data, I do not think a machine can judge personality and ‘desired traits’ better than a human recruiter can.

    Furthermore, being rejected and learning from the feedback given after an interview is such a normal and important part of job searching. Therefore, the idea that ML-based recruitment would not offer any sort of feedback to unsuccessful applicants is to me one of its biggest stumbling blocks. I am all for using ML as guidance to assist with the hiring process, as I do recognise its benefits in speeding up the hiring process and eliminating unconscious bias, but I do not think recruitment should be done solely with this system.

    1. Thank you very much for your comment @atl96. You’ve raised a very interesting point: should ML only be considered for professions where logical thinking is a must, such as finance, engineering and accounting, rather than client-facing roles? There are limits to the “algorithm” when judging a candidate, which is why coupling the face-to-face interview with an ML video interview has benefits. The video interview could identify potential candidates, who are then assessed face to face, where personality and ‘desired traits’ can be evaluated more thoroughly.

  20. Brilliant blog. So interesting. Had no idea how much influence algorithms and coding had on a person’s future. Can something so mechanical, so logistical, so numerical read emotions and expressions which are so natural, unique and pure? Then again, who is really themselves truly in an interview? Could a computer read into a person’s face better than a human could? All very exciting, but also worrying.

    1. Thank you very much for your comment @jf96. You’ve raised a very interesting point: is anyone truly themselves in an interview? As we prepare for interviews, we learn and adapt our responses to match those of the organisation. It’s common practice to research its traits of success and align our experience against them.

      The current technology can analyse facial movements and evaluate them against benchmarks to categorise human emotion and traits. As you mention, there is a potential risk and reward to this technology, and the debate should keep going, either to continually improve the development of ML for recruitment or, more drastically, to halt it.

  21. I am strongly against this method. Using technology for a HUMAN resource practice is not the way forward. It makes the recruitment process lose its human touch, and could potentially filter out candidates due to a system malfunction. Even though it could reduce negative aspects of HRM, e.g. discrimination, the use of technology is very unreliable. Factors such as the human intuition of the hiring personnel are potentially lost with this process.

    However, with an increasingly difficult business environment, the use of this method could reduce hiring costs.

    1. Thank you very much for your comment @TK. It is common to experience a strong sense of disagreement with such a technology, and it’s completely reasonable. Could an algorithm truly evaluate me to my fullest potential? As you mention, could a malfunction deny my chances at success?

      However, this is the world we currently live in, where face-to-face interviews are only accessible after submitting a CV and completing multiple online cognitive ability tests that give each candidate a score. Incorporating ML into the video interview can be an additional chance for a candidate to reveal their personality and potential, and to get shortlisted for interviews.

      Therefore, could this be a motive to work on the technology, to adapt it to our needs and include a human element?

  22. Having read the replies so far posted, I find that most of what I would have to contribute has already been stated. However, just some thoughts on the human condition:

    It is a natural state of all advanced sentient life, not just human, to have a natural bias towards its own kind, from immediate family to wider family, local community and wider community.

    We humans have a natural affinity to those with whom we live, work, and play.

    Given that our lives are governed increasingly by large governments and large corporations, is it desirable to give those entities yet more power to reduce people from persons to numbers? Or to take natural human personality and emotion out of consideration when assessing the place of an individual in the workplace, or in any aspect of wider society?

    Whilst it is of course desirable to eliminate unfair biases and to allow every person, as far as possible, to develop their possibilities, I do not think that a computer algorithm designed by a corporate programmer is the way to achieve that objective. If human relationships in the workplace are governed not by human-to-human interaction but by AI, then we are reduced to ciphers rather than people.

    Given that we are at the beginning of the age of AI/ML we need to think very seriously where this is leading. If ML governs recruitment does it then go on to monitor performance and determine outcomes? If ML is developed to the point where it really does “learn to learn” then it could be much more than a number-crunching aid. Could it actually become a superior intelligence to whom the human race is subservient? This sounds very fanciful, but a great deal of the world we live in now would have seemed impossible only a short while ago.

  23. …its risks. However, as we mention, the combination of ML and face-to-face interviews has merit in the short term.

    Large organisations reducing people down to numbers is a modern-day risk. However, this is the world we currently live in, where face-to-face interviews are only accessible after submitting a CV and completing multiple online cognitive ability tests that give each candidate a score. Incorporating ML into the video interview can be an additional chance for a candidate to reveal their personality and potentially get shortlisted for interviews.

    There is a need to include the human element within the recruitment process because, as you correctly mention, we could be reduced to ciphers rather than people. Although the question of human bias is a long-standing debate, current ML technologies aren’t free of biases either, and whether eliminating them completely is achievable is debatable.

  24. I truly believe that ML is the way forward. Obviously, at this stage it may be hindered, as the technology still has to develop. However, I think in a few years’ time it will reach a point where there is no need for a human to look at the actual interview at all. The current limit for ML is how much data is available to train the algorithms, and as the years go by they are acquiring more and more of it. Eventually, I imagine, the algorithms will be trained to the point where they are 99.99…% accurate. At that point you can say ML is a lot better than a person at reviewing a video for consistency.

    1. Thanks @gsulivan94 for your comment. I think with further development of this technology there is a great future for ML in video recruitment. For example, ML is used in structural health monitoring, which is an engineering-specific context, and in that application the algorithms are required to achieve over 90% accuracy in their predictions. I think if it can get close to that accuracy for categorising good candidates, then most people wouldn’t have a problem with its application.

  25. I think the automation of tasks is where society is going. The use of this technology in recruitment is just a side effect of the importance ML has for wider society. If you think about how many different industries ML has had, and will have, an impact on, it’s quite staggering. Automation might be detrimental for people as it takes jobs away, but we must think about it in a positive light. For example, I’m sure in 40 or 50 years ML and AI will mean there is less requirement for people to work, due to there being so much automation. So overall, I feel the adoption of ML in recruitment is a good thing.

    1. Hi @ethicsisfun, thanks for your comment. I agree with your statement that ML is important for wider society. For example, it already has uses in the medical field, where it is being used to save people’s lives. As you say, we must consider who this automation is affecting and how we’ll tackle the problem of fewer jobs being available as many things start being automated. I believe the government should keep a close eye on AI and ML and how they are used in society.

  26. I think consistency in the recruitment process is a good thing. I’ve personally been accepted for one video interview and rejected for another when I responded in pretty much the same way in both. I think candidates would feel a lot better knowing that the outcome of their interview was an objective decision, and not one made because the recruiter was having a bad day or something. Although it’s said here that ML doesn’t make the entire decision, I feel it would be good if the process went in that direction.

    1. Thank you very much for your comment @ajohson98. You bring up a good point about consistency. I think candidates sometimes get very frustrated with the interview process, perhaps because each company has a widely different interpretation of what makes a candidate a good fit. Obviously this is natural, as companies have different ethoses; however, I imagine some of it is also due to different recruiters. It must be very frustrating imagining that you might have been accepted for the role had someone else reviewed your application.

  27. Great article and very thought provoking.
    I think this technology has some great implications for small businesses. As the figure you’ve presented shows, the process of hiring the correct candidate can be quite expensive. For a small business, I imagine the cost must be even higher, and the importance of hiring the right person is much greater. So if ML is able to make objective judgements on the skill and talent of a candidate, I think it can streamline the hiring process and make it a lot more hassle-free for the business.
    I also feel that, in terms of diversity, ML will be very important. Although you highlight that there may be bias in the data or from the person who builds the model, if this is consciously addressed, I think it shouldn’t be a problem in the future. The fact that we are aware this happens is a good step towards preventing it.

  28. I believe ML is the future. I think it is a bit hypocritical for society to expect perfect performance from an algorithm when people aren’t held to the same standard. I think the only reason people are so critical of machine learning is that its performance is much more easily tracked and showcased. As the algorithms often deal with numerical data, you can very clearly tell when they have made good or bad predictions. As someone who is a little more knowledgeable about machine learning, I know a lot of time is spent testing these algorithms to ensure they achieve the best performance possible. For humans, it is much more difficult to quantify whether a correct decision was made, because we can’t always track this type of data.

    1. Hi @m_redfurn, thanks for showing interest in our article. I think you raise an important point about how we compare how good the algorithms are. I guess we’ll never be able to fully quantify how successful a recruiter is at picking the right candidates through these video interviews, whereas, as you mention, this is a lot easier with ML. Whilst I agree, I’m not entirely sure that making comparisons in this way is the best approach. I think we should strive to get the best-quality candidates no matter what technique is being used, and hence we should always be striving for better algorithm performance.

  29. Very interesting article, and it makes me think where else in the recruitment process machine learning would be useful. I think once these algorithms have ironed out their problems, the technology will be revolutionary. For example, I imagine the task of having to review CVs is quite a tiresome and repetitive one. I’ve heard recruiters spend only 7 seconds on average looking at a CV. So instead of making a human look at it and make a decision, why don’t we just use machine learning? The only difference would be that machine learning would probably be more consistent, since it has been trained specifically for this task.

  30. Transparency in this process is what I feel is needed for people to accept machine learning. The HireVue article you reference is a good example of what other companies should do: they clearly outlined a problem of bias and how they addressed it by using different datasets. Companies should say what problems they found when they applied machine learning to videos and how they addressed them. They should also clearly outline which aspects machine learning is affecting and which decisions it is used to make. Perhaps, in the ideal situation, they should give us a yearly summary of what percentage of people were rejected due to the role of machine learning, or what kind of diversity was achieved through hiring with machine learning. More information and clarity is always good, and I think for this technology to be mass-adopted it is a key aspect that has to be considered.

    1. Thank you very much for your comment @cameron_19. As you correctly mention, transparency is the key to convincing more people of ML. As we discussed in this article, ML is all about learning and training with more accurate data, to one day reach the desired solution.
      There will undoubtedly be opposition to the wide use of ML for fear of being reduced to a number in a hiring chain; however, aren’t we already?
      Information is key, but the question becomes: should this be enforced by law before people start accepting such a technology?

  31. Interesting article. I think regulation of this technology should be mandatory. The technology is in the best interest of companies, as it saves them a lot of money with very little drawback, so companies should be held accountable for any errors or problems that occur with it. I’m not entirely sure in what capacity this would be done; perhaps if a candidate found out that they had been unfairly discriminated against, when they felt they gave a very similar interview to someone else?

    1. You’ve brought up a good point about regulation. I know that, in the general conversation around AI, governments across the globe are already starting to consider the ethical implications of technology such as this. If there is oversight to ensure these systems can’t be abused and aren’t causing more problems than they fix for candidates, I think it is a step in the right direction. Thanks for your comment.

  32. I think one benefit of this technology might be that it could offer better feedback. Although it would still be generic, I can imagine a situation where you scored particularly low on, say, the clarity of your answers, and the algorithm detected this. It could then offer feedback such as ‘consider the clarity of your answers…’. This would be better than some current video interviews, as a lot of companies don’t give you any feedback, so it is difficult to improve at this aspect of the interview process. In general, then, this technology has the ability to make the interview process a lot less frustrating.
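    The kind of automated feedback described here could be as simple as mapping low-scoring interview dimensions to canned tips. A minimal sketch, assuming hypothetical dimension names, scores and messages (none taken from any real product):

```python
# Hypothetical sketch: turning a candidate's lowest-scoring interview
# dimensions into generic but actionable feedback. All dimension names,
# thresholds and messages are invented for illustration.

TIPS = {
    "clarity": "Consider the clarity of your answers.",
    "structure": "Try structuring answers as situation, action, result.",
    "eye_contact": "Try looking into the camera more often.",
}


def generate_feedback(scores, threshold=0.5):
    """Return a tip for every dimension scored below the threshold."""
    return [TIPS[d] for d, s in scores.items() if s < threshold and d in TIPS]


print(generate_feedback({"clarity": 0.3, "structure": 0.8, "eye_contact": 0.4}))
```

    Even generic feedback of this kind would be an improvement over the silence many rejected candidates currently receive.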

    1. Hi @kat_g, thanks for your comment. I agree that incorporating feedback into this process would be a great addition. As it currently stands, I think companies don’t give enough feedback to candidates, so it can often be frustrating and difficult to better yourself in this regard. I’m not entirely sure how this would be implemented, but it’s an interesting idea nonetheless!

  33. Great article. From reading about other applications of machine learning, I’ve seen that these algorithms can sometimes be fooled. For example, in video detection, people have managed to trick ML algorithms by holding random pictures up to themselves, and in these cases the algorithm wasn’t able to distinguish them as a person. I wonder if something similar could happen in this setting. Could the system be abused by candidates so that they scored close to full marks? I also saw someone demonstrate that if you held your hand up to the camera and made no noise at all, the algorithm of one of the companies running this technology would give you an automatic 70% score. This suggests that ML has to be better fine-tuned for this application, and any bugs ironed out, before people can abuse the system. I guess at the current stage it is not a problem, because people still review the video; however, in the future, if this becomes an entirely ML-driven process, it might be an issue.

    1. Good point @WilliamW_34. I’m not too knowledgeable about the intricacies of the technology, but I do believe the things you have mentioned might be possible with machine learning in terms of video analysis and speech recognition, although I imagine these companies spend a lot of time trying to ensure such things don’t happen. If someone did find a way to do it, as you mention, at this stage it wouldn’t be a problem because people are still reviewing the videos. However, if ML were used to fully automate the video recruitment stage, then this might be of concern.

  34. I think the comment from @DrPatrickJS brought up a good point: interviewees are likely to speak faster in a video interview. I’ve also heard from a lot of friends that speaking to a camera is much more difficult than speaking to a person; it is slightly unnatural to speak to a laptop and have to look into a camera. In these cases, I wonder if ML takes such factors into account? If so, then the technology has great application for video recruitment.

    Another, more general, point concerns AI and machine learning. By the looks of it, society is trying to automate everything possible with these new technologies. I’m not sure this is entirely beneficial for the people who currently do these jobs; perhaps they will be able to get other positions within the company? Right now it doesn’t seem to be a problem, as you’ve mentioned the technology is only there to aid in making a decision rather than making the entire decision itself. In the future, I’m sure it will get to a point where it can do the entire recruitment process by itself. Will recruiters even be needed at that point? I guess we’ll find out.

    1. Hi Andy, thanks for your comment. I think whether the issues you mention are taken into account varies from company to company. For example, HireVue has written an article which specifically addresses the issue of being nervous during the interview; in their case, the algorithm takes these factors into account when scoring your response. On top of this, since the video is still reviewed by a real person, they can assess whether the score the algorithm gave you is justified if they can see that you were nervous during the interview.

  35. I think the way to know whether this technology has real application is to track the candidates it considers worthy. If you can follow the career paths of the people it decides to hire, then you can truly validate whether the algorithm is making the right decisions. Or perhaps do a study of two different groups: one where candidates were selected entirely by machine learning, and one where only recruiters made the decision. If you compare the quality of the candidates and how they progressed in their careers, you will get a definitive answer on which technique is better.
    My main point here is that ML should be properly validated in some way before being applied. Perhaps not in the way I have suggested, but there must be a way to compare recruiters against the algorithm. I don’t think merely checking whether candidates’ scores were in line with their answers is good enough. If we’re going to push the limits of technology for different applications, we should see whether the quality of candidates improves as a result of using it.

    1. Hi Jake, thanks for your comment. You raise an important point about validating whether this technology is truly better than what we currently have in terms of hiring higher-quality candidates. However, one aspect that also has to be considered is the business benefit of the technology: obviously it is beneficial for a company to hire better candidates, but just by using the technology they are saving valuable time and money. Perhaps companies will have to find the right balance, to ensure they find good-quality candidates without spending too much time and money.
