The benefits of AI and machine learning are well established. The technology can help businesses automate processes, gain insight through data analysis, and interact with customers and employees. It can also help them meet ever-changing market demands, streamline operating costs, and stay competitive in an increasingly fast-paced digital world.
The role of HR in understanding and mitigating AI biases
Today, many large cloud providers even offer AI functionality in their service packages, democratizing the technology for businesses that might otherwise struggle to afford expensive in-house engineers and data scientists.
For HR teams, the value of AI is undeniably clear. When a single job posting results in hundreds or even thousands of applicants, manually reviewing each resume is a monumental and often unrealistic task. By leveraging artificial intelligence and machine learning technologies, HR teams gain the ability to assess candidates at scale and make much more effective hiring recommendations.
The ramifications of AI-induced bias in HR are significant
While AI offers clear benefits for HR groups, it also presents serious challenges and potential pitfalls. With any AI system, one of the most difficult (but critical) issues to tackle head on is making sure it's free from bias.
This is especially crucial for AI systems for HR, as any AI-induced bias can lead companies to discriminate against qualified candidates, often unknowingly.
Remember when Amazon had to scrap its AI resume-screening system several years ago because it penalized female applicants? It's a perfect, if unfortunate, example of the power of training data. At the time, the majority of Amazon's employees were male, so the algorithm behind the system, trained on the company's own historical data, associated successful applications with male-oriented language.
As a result, highly qualified candidates were simply passed over by the model. The lesson: if the data used to train your AI models is biased, the deployed AI system will also be biased. And it will continue to reinforce this bias indefinitely.
Outsourced AI Systems and Corporate Cultures Need Further Examination
In Amazon’s case, the AI system for CV selection was built in-house and trained with data from the company’s own job applicants. But most companies don’t have the resources to build internal AI systems for their HR departments. Thus, HR teams are increasingly outsourcing their work to service providers such as Workday or Google Cloud. Unfortunately, too often, they also outsource their due diligence.
It's more important than ever for HR teams to recognize the enormous responsibility that comes with outsourcing any AI implementation. Don't just blindly accept and implement your AI vendor's models. You and your teams should review these systems regularly to make sure they are not biased, constantly asking:
- What data sources (or combination of data sources) are used to train the models?
- What specific factors does the model use to make its decisions?
- Are the results satisfactory or is something wrong? Does the system need to be temporarily shut down and reassessed?
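The first of those questions lends itself to a simple automated check: how are different groups actually represented in the training data? The sketch below is a minimal, hypothetical illustration (the field names, records, and 30% threshold are invented for this example, not taken from any vendor's system) of flagging under-represented groups in a resume dataset:

```python
from collections import Counter

def representation_report(records, field, min_share=0.3):
    """Count how often each group appears in the training data and
    flag any group whose share falls below min_share.
    Returns {group: (count, share, flagged)}."""
    counts = Counter(record[field] for record in records)
    total = sum(counts.values())
    return {
        group: (n, n / total, n / total < min_share)
        for group, n in counts.items()
    }

# Hypothetical training records, for illustration only.
resumes = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20
report = representation_report(resumes, "gender")
# An 80/20 skew like this is exactly the kind of imbalance that led
# Amazon's model to associate success with male-oriented language.
```

A check this simple won't catch subtler proxies for protected attributes, but it makes the most obvious data imbalances visible before a model is trained on them.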
It is therefore essential to carefully review training data, especially in outsourced AI systems. But that’s not the only requirement to mitigate bias: biased data comes from biased work environments.
Your HR teams therefore also have a duty to assess any problems of bias or unfairness within your own organization. For example, do men hold more power than women in the company? What questionable conduct has long been considered acceptable? Are employees from under-represented groups set up to succeed?
The diversity, fairness, and inclusiveness of your company's culture are directly relevant when integrating AI, because that culture shapes how AI systems are deployed and how their outcomes are interpreted. Remember, AI doesn't know it's biased. It's up to us to find out.
Three best practices for leveraging AI fairly and without bias
Ultimately, HR teams need to understand what their AI systems can and cannot do. Your HR teams don't need to be technology experts or understand the algorithms that power AI models.
But they need to know what types of biases are reflected in training data, how biases are embedded in corporate cultures, and how AI systems perpetuate those biases.
Below are three tactical best practices that can help your HR teams take advantage of AI technology fairly and impartially.
- Regularly audit the AI system. Whether your systems are built in-house or outsourced to a vendor, regularly review the data used to train the models and the results they produce. Is the dataset large and varied enough? Does it include information on protected characteristics, such as race and gender? Don't hesitate to pause the system and change course if its results are unsatisfactory.
- Understand the data supply chain. When relying on out-of-the-box, outsourced AI systems, recognize that training data may reflect the vendor’s own biases or the biases of third-party data sets. Keep an eye out.
- Use AI to augment, not to replace. The capabilities of AI are advancing rapidly, but AI still requires human oversight. Because of the risks involved, HR teams should leverage AI to augment their role, not replace it. Humans must always make the final hiring and HR decisions.
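For the audit step above, one widely used statistical check in US employment practice is the "four-fifths rule": the selection rate for any group should be at least 80% of the rate for the most-selected group. A minimal sketch, with hypothetical group names and counts chosen purely for illustration:

```python
def four_fifths_check(selected, applicants, threshold=0.8):
    """Compare each group's selection rate to the highest-rate group.
    selected/applicants: dicts mapping group -> counts.
    Returns {group: (rate, ratio_to_best, passes)}."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {
        g: (rate, rate / best, rate / best >= threshold)
        for g, rate in rates.items()
    }

# Hypothetical screening outcomes, for illustration only.
applicants = {"men": 200, "women": 200}
selected = {"men": 60, "women": 30}
result = four_fifths_check(selected, applicants)
# Women's selection rate (0.15) is half the men's rate (0.30),
# well below the 80% bar -- a signal to pause and investigate.
```

Passing this check is not proof of fairness, but failing it is a concrete, explainable trigger for the kind of reassessment the audit practice calls for.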
With the help of AI, HR teams can uncover corporate injustices
Your HR teams are in a unique position to leverage AI technology in a fair and impartial manner because they are already familiar with systemic issues of bias and inequity.
Recognize the responsibility that comes with AI systems, and work continually to understand how they are trained and how they produce their results.
When done correctly, AI will help your HR teams uncover biases rather than perpetuate them, improve the efficiency and effectiveness of HR tasks, and advance the careers of deserving candidates and valued employees.
Image Credit: Rodnae Productions; Pexels