Artificial Intelligence (AI) is revolutionising the way we work. Teams adopting AI in their hiring processes have seen significant gains in efficiency. Hiring managers spend less time screening CVs, freeing up time for high-value tasks such as engaging with candidates. However, as we construct these new pillars of hiring, they begin to cast a long shadow on the ground. Questions creep onto our feeds and into the news – Is AI causing more problems than it solves? Is AI in hiring ethical? Or does it risk introducing something nasty into our hiring processes?
In this article, I explore both sides of the debate and suggest some actions for those who want to see what AI has to offer.
The case for AI in hiring
1. Reducing bias
A key argument for using AI within the hiring process is to reduce bias. As we discussed in our article on unconscious bias, hiring staff are vulnerable to unconscious decisions about applicants based on their race, gender and other protected characteristics. This can often lead to applicants from under-represented backgrounds not being treated the same way as other applicants.
When AI is trained on diverse data-sets, it can anonymise applications and focus objectively on the applicant’s qualifications. Tools can hide identifying information, such as names, universities and addresses, addressing these biases, and giving everyone a fair chance.
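At its simplest, this kind of anonymisation is just stripping identifying fields from an application before a reviewer or model sees it. The sketch below illustrates the idea with hypothetical field names (real screening tools use far more sophisticated redaction, including free-text scrubbing):

```python
# Minimal sketch of rule-based application anonymisation.
# Field names here are hypothetical, chosen for illustration only.
IDENTIFYING_FIELDS = {"name", "email", "address", "university", "photo_url"}

def anonymise(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed."""
    return {k: v for k, v in application.items() if k not in IDENTIFYING_FIELDS}

candidate = {
    "name": "Jane Doe",
    "university": "Example University",
    "years_experience": 6,
    "skills": ["Python", "SQL"],
}
print(anonymise(candidate))  # only qualification-related fields remain
```

The design choice worth noting is that redaction happens before screening, so neither a human reviewer nor a downstream model ever sees the identifying fields.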
2. Increasing efficiency
Recruitment often requires repetitive work, such as screening hundreds of applications per role, or scheduling interviews. AI can automate these processes, saving hundreds of hours per year, which allows hiring managers to focus more on valuable tasks, such as engaging with candidates.
When we do an action repeatedly, we can sometimes make mistakes – for example, filling in a field with the wrong data. When we automate these processes, we're less likely to make mistakes, meaning a far more consistent and reliable hiring process.
3. Spotting patterns
AI can be far better at spotting patterns than people – especially in large data sets. With AI, you can find patterns that a human would never notice.
For example, you could give AI access to both your hiring system and your employee performance data. It could then be asked to predict which features of a job application lead to success in a role. When a new candidate applies with a similar set of skills, it could highlight this automatically for you.
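A toy version of this idea can be sketched with nothing more than frequency counts: find skills that are over-represented among high performers, then flag new applicants who share them. All names, data and the 0.3 margin below are hypothetical, and a real system would use a proper statistical model on far more data:

```python
from collections import Counter

# Hypothetical employee data: sets of skills for two groups.
high_performers = [
    {"skills": {"python", "sql", "communication"}},
    {"skills": {"python", "communication"}},
]
other_employees = [
    {"skills": {"excel"}},
    {"skills": {"sql"}},
]

def skill_frequencies(employees):
    """Fraction of employees in the group who list each skill."""
    counts = Counter()
    for e in employees:
        counts.update(e["skills"])
    return {skill: counts[skill] / len(employees) for skill in counts}

def predictive_skills(high, other, margin=0.3):
    """Skills whose frequency among high performers exceeds the rest by `margin`."""
    hi, lo = skill_frequencies(high), skill_frequencies(other)
    return {s for s, f in hi.items() if f - lo.get(s, 0.0) >= margin}

def flag_applicant(applicant_skills, signals):
    """Which of the applicant's skills match the predictive set."""
    return sorted(applicant_skills & signals)

signals = predictive_skills(high_performers, other_employees)
print(flag_applicant({"python", "sql", "design"}, signals))  # → ['python']
```

Note that "sql" is not flagged: it appears equally often in both groups, so it carries no signal. This is exactly the kind of distinction that is tedious for a human to check across hundreds of employees but trivial to automate.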
4. Better candidate experience
Long hiring processes lose candidates, especially if that candidate is in a competitor’s hiring process. By increasing automation early in the hiring process, you can get the candidate to an interview more quickly, increasing your odds of being able to hire them.
Candidates also appreciate a transparent process that keeps them informed. They also appreciate being able to ask questions and get quick responses. AI can help with this through chatbots, giving candidates the information they need about the role. It's important, however, not to cut candidates off from a real member of staff, since chatbots do have their limits.
The case against AI in hiring
1. Bias in the data
AI is only as good as the data it has available. It makes decisions based on that data – and if that data is flawed, the AI will make bad decisions. Amazon faced public backlash after it introduced a "sexist" hiring tool. It trained an AI system on 10 years of job applications, most of which came from men. Because more of the successful applicants were men, the AI taught itself that male candidates were better – and began rejecting applications from women.
Biased AI can seriously affect Diversity, Equity and Inclusion efforts in hiring. If the training data hasn’t been specifically prepared to be inclusive, the AI can make biased decisions – which risks causing systematic discrimination.
2. Lack of transparency
Very often, you can't see exactly how an AI works. AI isn't like other software, where someone has designed in advance how it behaves; it learns its own decision rules. When AI rejects a candidate, it can be very difficult to figure out why. Since you can't see the decision making, you won't know whether the decision was made due to a bias or by mistake, making it difficult to correct.
This lack of transparency raises some serious ethical concerns. If you cannot explain to your candidate how you’ve arrived at a decision, is it really fair?
Fortunately, some AI tools are being built with "explainability" features, meaning you can ask the AI why it made a given decision. Unfortunately, not every tool has this feature.
3. Privacy risks
AI systems often process sensitive information, such as demographics, employment history, or even psychometric test results. In any organisation, each additional system with access to this data creates extra risk. If this data is not properly secured, it can be breached or misused, exposing candidates to identity theft. Data breaches in the EU carry significant fines – up to 4% of global turnover or €20 million, whichever is higher.
A lack of transparency about where this data goes can also be a concern for candidates. Candidates are becoming much more aware of data privacy, especially among the younger workforce.
To address these risks, EU and UK companies need to remain compliant with GDPR and be transparent with their candidates about how their data is collected, stored and used.
4. Lack of human connection
When you choose to let AI make decisions, it comes at a human cost. If you've ever been unlucky enough to have a job application rejected within minutes of applying, you know a machine was behind it. A lack of human contact leads to a depersonalised experience and can hurt your employer branding.
Additionally, if the early stages of your hiring process lack human interaction, this can make candidates feel undervalued.
5. Gaming the system
AI in hiring creates the possibility of candidates "gaming the system". Candidates have been known to tailor their applications to exploit known AI preferences – for example, using keyword optimisation to rank higher in AI CV screenings, even if their skills don't match the role well.
This raises ethical problems, since less qualified candidates could overshadow more suitable ones. At the same time, if a process is seen as easily manipulated, it can undermine trust in the whole process.
Ethical considerations when using AI
To use AI ethically, we need a thoughtful approach. Being aware of the risks is a good start, but we also need safeguards to ensure fair outcomes. Below are some suggestions on how you can work mindfully with AI.
Transparency
- Inform candidates that you are using AI before they apply.
- Tell candidates how you are using AI.
- State if there is any automated decision making.
- Give your candidates a way to ask for feedback, and make appeals.
- Describe what steps you’ve taken to ensure the hiring process remains fair.
Bias audits
- Check that the vendor regularly audits their AI for bias before you commit to them.
- Ensure the vendor provides transparency about their data set, and how they update it.
Human oversight
- Make sure that AI augments, not replaces, human judgement.
- Make sure that the AI tool can explain its decision making.
- Avoid allowing AI to make final hiring decisions without human review.
- Train hiring managers to interpret AI outputs critically.
Data privacy
- Be transparent with what candidate information is collected, and why.
- Ensure compliance with privacy regulations such as GDPR.
- Apply strong security measures to any candidate information you store.
Conclusion – Is AI in hiring ethical?
The ethics of AI in hiring is a nuanced issue. Used correctly, AI can significantly reduce unconscious bias within the hiring process, giving candidates from under-represented backgrounds a fair chance. However, using it to make decisions on our behalf risks dehumanising the hiring process and introducing data bias.
So, is AI in hiring ethical? In the end, how ethical the use of AI is depends on how much care you take in using it. If you choose AI suppliers who are transparent and accountable for their decisions, and use AI to improve the candidate experience rather than replace it, AI can be a force for good.