
Artificial Intelligence Can Reduce Human Biases, but Only with Human Training

by SwoopTalent, on June 4, 2018

AI holds tremendous influence over our daily lives, managing our Facebook feeds, Google search results, shopping recommendations, and many of our e-commerce interactions. We increasingly rely on these systems to determine mortgage rates, job interviews, and TSA screenings. But while we had hoped they would use facts and data to make these processes more fair and equal, it's increasingly clear that machine learning inherits social biases, and the more we rely on these systems, the more we risk perpetuating those biases.

New Scientist recently uncovered several instances of troubling biases in AI systems, including:

  • A two-year study of COMPAS recidivism risk assessment predictions shows racial bias. The system predicted higher re-offense rates for black defendants than for white ones, even though the actual recidivism rates in the test group did not bear that out. Even when the data was controlled for prior criminal history, age, and gender, black defendants were 77% more likely to be assigned a higher risk category for violent recidivism.
  • A study of Google ads showed that when a user changed their profile settings to "female," they received fewer ads for high-paying jobs than when the same profile was set to "male." Whether this is due to biases the Google ads system learned from user behavior, or because advertisers can choose to target their ads by gender, it can perpetuate workplace disparity by never even showing these ads to women, regardless of their qualifications.
  • Facial-recognition systems are increasingly being used in a variety of settings, from advertising to law enforcement, and yet the top three facial-recognition systems were trained on datasets that are 75% male and 80% white. As a result, these systems are 99% accurate at identifying the gender of white men, but only 65% accurate when identifying women of color.

Closer analysis reveals that our AI systems are trained, educated, and informed by datasets and behaviors that are inherently biased. They learn existing biases and then, invisibly, perpetuate them, with consequences that exacerbate biased behavior instead of reducing it.

It is precisely this risk of creating a feedback loop, in which biased predictions lead to biased behaviors that create biased outcomes, that should make companies think carefully about how their AI systems are trained and how they are deployed. Here are some ways to examine AIs for bias:

Be attentive to the dataset. AIs can inherit biases whether they learn by being "turned loose" online to learn from user behavior or by training on specific curated datasets. Care needs to be taken with what data the AI is learning from, and with how that data is being interpreted.
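For instance, a quick audit of a training set's demographic makeup can surface skews like the 75%-male, 80%-white facial-recognition corpora above before any model is trained. Here is a minimal sketch in Python using pandas; the column names ("gender", "ethnicity", "label") and the toy data are hypothetical, not taken from any real system:

```python
import pandas as pd

def audit_representation(df: pd.DataFrame, group_cols: list) -> None:
    """Print each group's share of the data and of positive labels,
    so representation skews are visible before training."""
    for col in group_cols:
        overall = df[col].value_counts(normalize=True)
        positives = df.loc[df["label"] == 1, col].value_counts(normalize=True)
        report = pd.DataFrame({
            "share_of_data": overall,
            "share_of_positive_labels": positives,
        }).fillna(0.0)
        print(f"\n--- {col} ---")
        print(report.round(3))

# Toy data mirroring the skew described above: 75% male, 80% white.
df = pd.DataFrame({
    "gender": ["male"] * 75 + ["female"] * 25,
    "ethnicity": ["white"] * 80 + ["other"] * 20,
    "label": ([1] * 45 + [0] * 30) + ([1] * 5 + [0] * 20),
})
audit_representation(df, ["gender", "ethnicity"])
```

If one group is badly underrepresented, or positive labels cluster in one group, the model will likely learn that skew, so it's worth catching before training, not after deployment.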

Screen for negative associations. The CEO of PredPol acknowledges that there is bias in the system and therefore advises against using it for drug crimes, but acknowledgment alone isn't enough. At ConceptNet, the AI has been "de-biased" through human intervention: people manually remove unwanted associations while preserving legitimate ones.
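As a rough illustration of what that human screening might look for, the sketch below flags "neutral" words (such as job titles) whose embedding sits much closer to one side of a contrast pair (e.g., "man"/"woman") than the other, so a reviewer can decide which associations to remove. The word lists and random vectors are toy placeholders, not ConceptNet's actual pipeline:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_associations(embeddings, neutral_words, contrast_pairs, threshold=0.1):
    """Return neutral words whose similarity to one side of a contrast
    pair exceeds the other side by more than the threshold."""
    flagged = []
    for word in neutral_words:
        for a, b in contrast_pairs:
            gap = (cosine(embeddings[word], embeddings[a])
                   - cosine(embeddings[word], embeddings[b]))
            if abs(gap) > threshold:
                flagged.append((word, a, b, round(gap, 3)))
    return flagged

# Toy example with random vectors; a real run would load trained embeddings.
rng = np.random.default_rng(42)
embeddings = {w: rng.normal(size=50)
              for w in ["engineer", "nurse", "man", "woman"]}
print(flag_associations(embeddings, ["engineer", "nurse"], [("man", "woman")]))
```

Anything flagged then goes to a human reviewer, who removes the unwanted association while keeping legitimate ones, which is the spirit of ConceptNet's manual intervention.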

Conduct bias testing. Researchers at Boston University run scientific tests on AI systems to uncover the biases within them. This kind of testing is imperative to ensure that an AI is behaving as intended.
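One concrete test in that vein, loosely echoing the COMPAS finding above, compares false-positive rates across demographic groups: how often each group is wrongly labeled high-risk. The sketch below is a generic illustration, not the BU researchers' actual methodology, and the arrays are hypothetical model outputs:

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of true negatives that the model wrongly labels positive."""
    negatives = y_true == 0
    if not negatives.any():
        return 0.0
    return float(np.mean(y_pred[negatives] == 1))

def fpr_gap(y_true, y_pred, groups, group_a, group_b):
    """Difference in false-positive rate between two groups; a large
    gap means one group is wrongly flagged far more often."""
    mask_a, mask_b = groups == group_a, groups == group_b
    return (false_positive_rate(y_true[mask_a], y_pred[mask_a])
            - false_positive_rate(y_true[mask_b], y_pred[mask_b]))

# Hypothetical labels and predictions from a risk model:
y_true = np.array([0, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(f"FPR gap (a - b): {fpr_gap(y_true, y_pred, groups, 'a', 'b'):.2f}")
```

A gap near zero is what you'd hope for; a large one, like the disparity ProPublica found in COMPAS, is a signal that the system needs correction before it's trusted.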

As we deploy AI ever more widely, it's critically important that these systems be examined and, if necessary, manually corrected, so that we don't use them in ways that perpetuate (or even exacerbate) existing inequalities. We can use these tools to make the world not only more fair, but better: analysis shows that diverse organizations have lower turnover and higher employee engagement, and are more profitable. Let's make sure our AI systems include that kind of data, and help us build that kind of world. Contact us to find out how to make it happen.


Topics: artificial intelligence, AI
