As we all know, the word "bias" means holding prejudice in favor of or against an idea, an entity, or a belief. And, rightfully, recruiting and talent professionals are VERY worried about how machine learning and AI might increase hiring bias. Before you combat hiring bias from machine learning and AI, you need to understand it.
It is our thesis that machine learning (ML) and AI bias occurs when an algorithm produces systematically prejudiced results due to erroneous assumptions in the machine learning process - but there is more than one way that can happen. Let's take a look.
What is Machine Learning?
There's a lot of hype about how AI will take jobs, but in reality it works on tasks, not jobs, and there are positives to be had. Andy Campbell, Human Capital Management (HCM) Strategy Director at Oracle, points out several ways ML is a positive addition to the workplace:
ML supports employees through customized training and learning recommendations, democratizing learning and development initiatives and delivering them to employees at the right times.
It can provide targeted advice for remote problems based on past experiences.
It allows supervisors to remain impartial since ML can evaluate performance information without bias to the employee.
ML supports more timely decision-making by basing it on comprehensive content analysis.
Machine learning is far from being a threat to employees; it has the potential to change employee engagement in myriad ways. Despite the delicate balance between technology-enabled solutions and the human factor, HR is heading into a new dimension.
Types of Machine Learning and AI
Supervised machine learning means a data analyst provides both the input data and the desired output data.
Unsupervised machine learning means using information that is neither classified nor labeled and allowing the algorithm to act on that information without guidance. Information that can be harvested by ML processes includes:
Websites, and more
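To make the distinction concrete, here is a minimal sketch in Python. All of the data, labels, and thresholds are invented for illustration: a "supervised" fit gets inputs paired with desired outputs, while an "unsupervised" fit sees only the raw values.

```python
# Minimal sketch contrasting supervised and unsupervised learning.
# All data, labels, and thresholds are hypothetical.

def supervised_fit(examples):
    """Supervised: each input comes with a desired output (label).
    Here we 'learn' a threshold separating two labeled groups."""
    lo = max(x for x, label in examples if label == "junior")
    hi = min(x for x, label in examples if label == "senior")
    return (lo + hi) / 2  # decision boundary between the groups

def unsupervised_fit(values, rounds=10):
    """Unsupervised: no labels at all. A tiny 1-D k-means (k=2)
    finds structure in the raw values on its own."""
    a, b = min(values), max(values)  # initial cluster centers
    for _ in range(rounds):
        ca = [v for v in values if abs(v - a) <= abs(v - b)]
        cb = [v for v in values if abs(v - a) > abs(v - b)]
        a, b = sum(ca) / len(ca), sum(cb) / len(cb)
    return a, b

# Labeled years-of-experience data (input AND desired output):
labeled = [(1, "junior"), (2, "junior"), (8, "senior"), (10, "senior")]
boundary = supervised_fit(labeled)  # 5.0

# The same numbers with no labels (unsupervised):
centers = unsupervised_fit([1, 2, 8, 10])  # (1.5, 9.0)
```

The supervised version can only learn what the analyst's labels teach it; the unsupervised version discovers groupings the analyst never named - and either route can absorb bias from the data it is fed.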
However, both kinds have challenges with bias. Cheryl Martin, Chief Data Scientist at Alegion, explains:
"The way we address bias," she says, "is by looking at the data and understanding how an algorithm might be deployed and what the target environment is, and doing a match between looking at the characteristics of that environment and the data that we might be labeling."
Machine Learning and AI Bias
Sample Data Bias
This bias occurs when the data used to train the algorithm does not accurately represent the problem space in which the model will operate. In talent, that might happen if you use a subset of data about people that was pre-filtered to a specific set (e.g., only Stanford graduates!). The solution is a larger, broader dataset - e.g., you might add public data to your ATS data to balance out your own sample.
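One way to spot this is to compare how categories are distributed in your training sample versus a broader reference pool. The sketch below is a simplified, assumption-laden check - the field name "school" and all the records are hypothetical:

```python
# Quick check for sample data bias: compare each category's share of
# the training sample against its share of a broader reference pool.
# Field name and records are invented for illustration.

def category_shares(records, field):
    """Return each category's fraction of the records."""
    counts = {}
    for r in records:
        counts[r[field]] = counts.get(r[field], 0) + 1
    total = len(records)
    return {k: v / total for k, v in counts.items()}

def sample_skew(sample, population, field):
    """Largest absolute gap between sample and population shares.
    A big gap suggests the sample was pre-filtered."""
    s = category_shares(sample, field)
    p = category_shares(population, field)
    return max(abs(s.get(k, 0) - p.get(k, 0)) for k in set(s) | set(p))

# Training sample pre-filtered toward one school:
sample = [{"school": "Stanford"}] * 9 + [{"school": "Other"}] * 1
# Broader pool (e.g., public data added to your ATS data):
population = [{"school": "Stanford"}] * 2 + [{"school": "Other"}] * 8

skew = sample_skew(sample, population, "school")  # 0.7 here
```

A skew near 0 means the sample roughly mirrors the broader pool; a large value like the 0.7 here is a red flag that the training data was filtered before it reached you.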
Prejudicial or Stereotypical Bias
This is where the dataset carries cultural biases - e.g., it only includes men among the computer programmers and women among the administrators. That can really happen in talent data. One way to offset it is to remove the fields that indicate gender, race, age, and so on - but be very careful with that approach, since correlations with those fields can already be baked into other fields (salary and gender, anyone?). Checking the remaining fields for those correlations works better!
Measurement Bias
This is when the device making a measurement or observation is itself in a biased situation, which can skew the ML results in a specific direction. Think of a temperature sensor that sits in the shade all the time - it will be biased toward lower temperatures.
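The temperature example can be shown in a few lines. The offset value is made up, but the point is general: a systematic error shifts every reading the same way, so collecting more data never averages it out.

```python
# Toy illustration of measurement bias: a sensor with a constant
# offset shifts every reading in the same direction.
import random

random.seed(0)
true_temps = [20 + random.random() for _ in range(100)]  # ground truth
shade_offset = -3.0  # hypothetical bias from sitting in the shade
readings = [t + shade_offset for t in true_temps]

bias = sum(r - t for r, t in zip(readings, true_temps)) / len(readings)
# bias == -3.0: the error is systematic, not random noise, so
# averaging more readings will never make it go away.
```

The same logic applies to talent data: if one recruiting channel systematically under-records a candidate attribute, every model trained on that channel inherits the offset.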
Algorithm Bias
This is when the bias is in the algorithm itself, such as when a coder writes a matching algorithm that looks for particular things like wording or education rather than staying neutral. There will always be a degree of human error in our lives, but this one is easy to watch for if we build the right teams.
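Here is a hypothetical sketch of what that looks like in code - the keyword lists, resumes, and scoring rules are all invented. The biased scorer hard-wires pet signals; the neutral one scores only on the skills the job actually requires:

```python
# Hypothetical example of algorithm bias: a matcher that rewards
# hard-coded pet keywords instead of job-relevant skills.
# All names, keywords, and resumes are invented.

PREFERRED = {"stanford", "rockstar", "ninja"}  # biased, hard-coded signals

def biased_score(resume_text):
    """Rewards the coder's pet keywords, not relevant skills."""
    words = set(resume_text.lower().split())
    return len(words & PREFERRED)

def neutral_score(resume_text, required_skills):
    """Scores only on the skills the job actually requires."""
    words = set(resume_text.lower().split())
    return len(words & {s.lower() for s in required_skills})

skills = ["python", "sql"]
a = "python sql ten years experience"   # strong on required skills
b = "stanford rockstar ninja python"    # strong on pet keywords

# The biased scorer ranks b above a; the neutral one ranks a above b.
```

A code review that asks "why is this signal in the scoring function?" catches this kind of bias early - which is exactly why building the right teams matters.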
AI and Machine Learning Risks
Outside of AI taking over the world à la Skynet, there are a number of risks associated with running machine learning programs that carry these biases.
The biggest risk comes from your AI and machine learning algorithms being trained to optimize for, and select, talent based on the above biases.
For example, if your AI / machine learning is trained on a poor data sample to discover top candidates for your talent pipeline, your talent acquisition team may only see candidates that fit a particular profile. Imagine if your AI optimized your talent acquisition process to pass your team only candidates who were 22-year-old male Stanford graduates. Your team would become less diverse and limit its potential to bring in the best talent available. That's why it's important to learn how to mitigate the bias in your machine learning and AI programs.
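The diversity collapse described above is easy to demonstrate with toy data. The candidate records and the "learned" filter below are entirely hypothetical:

```python
# Toy illustration: a screening rule optimized around one narrow
# profile collapses pipeline diversity. All records are invented.

def diversity(pool, field):
    """Number of distinct values of `field` in the pool."""
    return len({c[field] for c in pool})

candidates = [
    {"name": "A", "school": "Stanford", "age": 22},
    {"name": "B", "school": "Stanford", "age": 22},
    {"name": "C", "school": "State U", "age": 35},
    {"name": "D", "school": "Bootcamp", "age": 41},
]

# A filter that "learned" one narrow profile from biased training data:
biased_pick = [c for c in candidates
               if c["school"] == "Stanford" and c["age"] == 22]

before = diversity(candidates, "school")   # 3 schools in the raw pool
after = diversity(biased_pick, "school")   # only 1 after the screen
```

Tracking a before/after diversity number like this for each automated screening step is a simple way to notice when an algorithm has quietly narrowed your pipeline.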
How to Mitigate Machine Learning and AI Bias
Start by understanding the types of ML bias, and think about how they play out in your work and in the systems you might start to use. Then ask a lot of questions! Your vendors and partners should be well versed in where bias may lie in their approaches. If they aren't - be wary!
We are in the business of supporting and assisting companies with their talent data and system challenges. Swoop Talent invites you to learn from our blog, take a free talent data assessment, and schedule a demo to see for yourself that we can serve as your HR tech team's Swiss Army knife for data integration, migration, and analytics. Contact us today for more information.