Check Out Our Simple Overview Presentation of "Machine Learning Bias"
As we all know, the word "bias" means holding prejudice in favor of or against an idea, an entity, or a belief. And, rightfully, recruiting and talent professionals are VERY worried about how machine learning might worsen or introduce bias in their work. If you want to combat it, you have to understand it, so we wanted to give you a simple explanation!
It is our thesis that machine learning (ML) bias is a phenomenon that occurs when an algorithm produces systematically prejudiced results due to erroneous assumptions in the machine learning process - but there is more than one way that can happen. Let's take a look.
What is Machine Learning?
There's a lot of hype about how AI will take jobs, but in reality it works on tasks, not jobs, and there are positives to be had. Andy Campbell, Human Capital Management (HCM) Strategy Director at Oracle, points out several ways ML is a positive addition to the workplace:
ML supports employees through customized training and learning recommendations, democratizing learning and development by reaching employees at the right time.
It can provide targeted advice for remote problems based on past experiences.
It allows supervisors to remain impartial, since ML can evaluate performance information without bias toward the employee.
ML supports more timely decision-making by basing it on comprehensive content analysis.
ML is far from being a threat to employees; it has the potential to change employee engagement in myriad ways. Despite the delicate balance between technology-enabled solutions and the human factor, HR is heading into a new dimension.
Types of Machine Learning
Supervised ML means a data analyst provides the input and desired output data.
Unsupervised ML means using information that is neither classified nor labeled and allowing the algorithm to act on that information without guidance. ML processes can harvest information from many sources, including websites and more.
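To make the distinction concrete, here is a minimal pure-Python sketch of both styles. The numbers, labels, and the two tiny "learners" (a threshold rule and a two-group clustering loop) are made up for illustration - real systems use far more sophisticated algorithms.

```python
# Supervised: we supply inputs AND the desired outputs (labels),
# and the algorithm learns a rule that maps one to the other.
# Hypothetical data: (years of experience, seniority label).
labeled = [(1.0, "junior"), (2.0, "junior"), (8.0, "senior"), (9.0, "senior")]

def train_threshold(examples):
    """Learn the midpoint between the two labeled groups."""
    juniors = [x for x, y in examples if y == "junior"]
    seniors = [x for x, y in examples if y == "senior"]
    return (max(juniors) + min(seniors)) / 2

threshold = train_threshold(labeled)

def predict(years):
    return "senior" if years > threshold else "junior"

# Unsupervised: no labels at all - the algorithm groups raw inputs on its own.
unlabeled = [1.0, 2.0, 8.0, 9.0]

def two_means(points, iterations=10):
    """A tiny k-means (k=2): split points around two moving centers."""
    a, b = min(points), max(points)
    for _ in range(iterations):
        group_a = [p for p in points if abs(p - a) <= abs(p - b)]
        group_b = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(group_a) / len(group_a)
        b = sum(group_b) / len(group_b)
    return sorted(group_a), sorted(group_b)
```

Notice that the supervised version needed a human (the "data analyst" above) to say which examples were junior and which were senior, while the unsupervised version found the two groups with no guidance at all.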
However, both kinds have challenges with bias. Cheryl Martin, Chief Data Scientist at Alegion, explains:
"The way we address bias," she says, "is by looking at the data and understanding how an algorithm might be deployed and what the target environment is, and doing a match between looking at the characteristics of that environment and the data that we might be labeling."
Machine Learning Biases
Sample Bias
This bias occurs when the data used to train the algorithm does not accurately represent the problem space in which the model will operate. In talent, that might happen if you use a subset of data about people that was pre-filtered to a specific group (e.g. Stanford graduates!). The solution is to use a larger, broader dataset - e.g. you might want to add public data to your ATS data to balance out your own sample.
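Here is a toy sketch of how a pre-filtered sample skews things. The school names and counts are entirely hypothetical - the point is only that adding a broader sample changes what the data "says":

```python
# Hypothetical ATS rows: (school, hired_candidate). Our ATS happens to have
# been filled from a pipeline that mostly sourced Stanford graduates.
ats_sample = [("Stanford", 1), ("Stanford", 1), ("Stanford", 1), ("StateU", 1)]

# Broader public data reflects the real candidate pool more faithfully.
public_sample = [("StateU", 1), ("StateU", 1), ("CityCollege", 1), ("CityCollege", 1)]

def share_from_school(rows, school):
    """Fraction of the sample that came from one school."""
    return sum(1 for s, _ in rows if s == school) / len(rows)

biased_share = share_from_school(ats_sample, "Stanford")
balanced_share = share_from_school(ats_sample + public_sample, "Stanford")
```

A model trained on `ats_sample` alone would "learn" that three-quarters of good candidates come from one school; blending in the public data cuts that apparent share in half.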
Prejudicial or Stereotypical Bias
This is where the dataset has cultural biases baked in - e.g. it only includes men in the computer programmer data and women in the administrator data. And that can really happen in talent. One way to offset it is to remove the data that indicates gender, race, age, etc. - but be very careful with that approach, since correlations with those fields can already be baked into other fields (salary and gender, anyone?). Running bias checks with those main fields still present works better!
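The "salary and gender" trap can be made concrete with a quick correlation check. This is a sketch with invented numbers, using a plain Pearson correlation rather than any particular vendor's bias audit:

```python
# Made-up rows: gender encoded 0/1, plus a salary field. Even if we delete
# the gender column before training, salary can still "leak" gender.
rows = [
    {"gender": 0, "salary": 52000},
    {"gender": 0, "salary": 55000},
    {"gender": 0, "salary": 51000},
    {"gender": 1, "salary": 68000},
    {"gender": 1, "salary": 71000},
    {"gender": 1, "salary": 66000},
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two numeric columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

leak = pearson([r["gender"] for r in rows], [r["salary"] for r in rows])
# A coefficient near 1.0 means salary is a near-perfect proxy for gender
# in this dataset - dropping the gender column alone won't remove the bias.
```

That is why keeping the sensitive fields around for auditing (while excluding them from the model's inputs) is the safer pattern: you can't check for a proxy correlation against a column you've already thrown away.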
Measurement Bias
This is when the device making a measurement or observation is itself in a biased situation, which can skew the ML results in a specific direction. Think of a temperature sensor that sits in the shade all the time - its readings will be biased toward lower temperatures.
Algorithmic Bias
This is when the bias is in the algorithm itself, such as when a coder writes a matching algorithm that looks for particular things - certain verbiage or a certain education - rather than staying neutral. There will always be a degree of human error in our lives, but this one is easy to watch for if we build the right teams.
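A contrived before-and-after sketch shows what "bias in the algorithm itself" looks like. The scoring rules, school name, and buzzword below are hypothetical:

```python
# A matcher hard-coded to reward one school and one buzzword ranks candidates
# on the coder's preferences, not on the job's requirements.
def biased_score(resume_text):
    score = 0
    if "Stanford" in resume_text:   # the coder's preference, baked in
        score += 10
    if "synergy" in resume_text:    # rewards verbiage, not skill
        score += 5
    return score

# A neutral version scores only against the skills the role actually needs,
# supplied as data rather than hard-coded into the algorithm.
def neutral_score(resume_text, required_skills):
    return sum(1 for skill in required_skills if skill in resume_text)
```

The fix isn't exotic: the neutral version takes the job's requirements as an input, so reviewers can see and challenge them, instead of hiding preferences inside the code.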
What can I do?
Start by understanding the types of ML bias, and think about how they play out in your work and in the systems you might start to use. Then ask a lot of questions! Your vendors and partners should be very well versed in where bias may lie in their approaches. If they aren't - be wary!
We are in the business of supporting and assisting companies with their talent data and system challenges. Swoop Talent invites you to learn from our blog, take a free talent data assessment, and schedule a demo to see for yourself that we can serve as your HR Tech team's Swiss Army knife for data integration, migration, and analytics. Contact us today for more information.