
Bias, AI, and 12 Angry Men

Arun Radhakrishnan
December 4, 2018

If you've seen the film or read the play 12 Angry Men, you know that human beings can never be truly free of their biases. Those biases often live beneath the surface, shaping our thoughts and actions without our ever knowing they are lurking there.

In the film, twelve jurors must deliberate the guilt or innocence of a teenage boy accused of murdering his father. One man is biased against the boy because he himself had been punched by his own teenage son. Another is biased against the boy's low-income neighborhood and believes that fights and violence are simply a normal part of living there. Throughout the deliberations, each "angry man" reveals his own assumptions, experiences, and perceptions, which together create bias.

Bias is a Human Quality

Bias is inevitable. We cannot have every experience or see the world through every pair of eyes, so we will always approach our decisions with a limited (and therefore biased) perception. Recognizing those limitations and doing what we can to address them, however, is an essential part of making sure our biases don't extend beyond us.

This is of particular importance for people who are responsible for programming AI. Why? Because we can (unknowingly) map our biases onto these artificial intelligence systems, magnifying them exponentially. 

AI Absorbs our Bias

We like to think of technology as a great equalizer, a force untainted by the human scourge of racism, sexism, or classism. The truth, though, is that technology is a reflection of its creators, and there is no such thing as pure data once it has been put to use in an algorithm. Someone decided which data points would be included and which would be excluded. That guidance comes from the human architects of an AI system, and those humans, like all humans, made those decisions with their biases intact.

There have been many instances of human bias surfacing in AI systems. For example, an AI tool used to assess criminal defendants mispredicted recidivism rates, judging white defendants to be at lower risk than they actually were and black defendants to be at higher risk. In another case, facial analysis software classified light-skinned men correctly 99% of the time, but for dark-skinned women its error rate climbed to nearly 35%.
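To see how skewed inputs alone can produce results like these, here is a minimal, purely illustrative sketch in Python. Everything in it is made up for the example: synthetic data, an invented two-group split, and an off-the-shelf scikit-learn classifier; it does not model any of the real systems mentioned above. It simply shows that a model trained on data where one group is barely represented can look accurate overall while quietly failing that group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic people whose features (and outcome baseline) differ by group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    # The true outcome depends on features relative to the group's own baseline.
    y = (X.sum(axis=1) - 5 * shift + rng.normal(scale=1.0, size=n) > 0).astype(int)
    return X, y

# A human collection decision: group A dominates the training data.
X_a, y_a = make_group(5000, shift=0.0)   # well-represented group
X_b, y_b = make_group(100,  shift=2.0)   # under-represented group

model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on fresh samples from each group.
Xa_t, ya_t = make_group(1000, shift=0.0)
Xb_t, yb_t = make_group(1000, shift=2.0)

print("accuracy, group A:", accuracy_score(ya_t, model.predict(Xa_t)))  # high
print("accuracy, group B:", accuracy_score(yb_t, model.predict(Xb_t)))  # near chance
```

Nothing in that code is malicious, and no single line is "biased." The disparity comes entirely from an upstream human decision about whose data to collect, which is exactly how bias slips into real systems.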

Bias Matters

The bottom line is that bias is real, whether we want to admit it or not. These algorithms did not create these inaccuracies; they merely reflected the imperfect and biased decisions of their architects. If we want to create systems that are free from these kinds of errors and limitations, we have to start by recognizing our own biases and addressing them honestly. 

Stories like 12 Angry Men give us the opportunity to examine how we view the world and how those views translate into our work. SwoopTalent strives to create, build, and manage talent data systems that are responsive, ethical, and thorough, as one way of addressing unexamined bias in AI design.

