Computers Aren't Biased... We Are

 
I’m frustrated by the current discussions surrounding bias in machine learning (ML) or ‘AI’. Our CEO, Sam Witherspoon, was on a panel the other day, and the conversation quickly shifted to bias in ML and ‘what to do about it?’ Most people seemed content to suggest that regulating algorithmic decision making is the correct path. Sam suggested that thinking this way is silly, since it ignores the root cause: the fact that humans are deeply flawed. Since Sam’s response time was limited, I thought it was important to dive a little deeper into his answer and explore this hot topic.

Bias in Machine Learning/AI

First off, bias in ML/AI (henceforth ML) is real. There are countless examples of biased models.

Bias in ML is a reflection of the data the model is trained on. Most often, that data reflects actual environmental conditions, or a sample of those conditions. If the data trains a model to be biased, what we should really be talking about are the underlying societal conditions that produced the data. People don’t get upset when the weather network predicts another day without rain in the Namib Desert; that model has a heavy inclination towards ‘no rain’ based on historical data. But when it comes to human behaviour, the datasets we use to train our models are empirical proof that people are discriminated against. Weather patterns are based on undisturbed, objective observations; behaviour can be subjected to and altered by prejudicial attitudes.
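To make that concrete, here is a minimal sketch using entirely synthetic data (the feature names ‘qualification’ and ‘group’ are hypothetical) showing how a model trained on historically biased decisions reproduces that bias, even though the model itself is just doing honest statistics:

```python
# A toy sketch, not a real system: synthetic data where historical
# approvals penalized one group. All names and numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

qualification = rng.normal(size=n)   # the signal we'd like the model to learn
group = rng.integers(0, 2, size=n)   # protected attribute; should be irrelevant

# Historical human decisions applied a higher bar to group 1.
approved = (qualification - 0.8 * group + rng.normal(scale=0.1, size=n)) > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, approved)

# The model faithfully learns the penalty it was shown in the labels.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted approval rate, group {g}: {rate:.2f}")
```

The model isn’t ‘malicious’; it simply recovers the penalty that was baked into the historical labels and applies it going forward.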

As a result, sexism, racism, and bigotry are all reflected in our data… yet we don’t confront that uncomfortable truth. We talk about how the ‘algorithm’ (or, more correctly, the model) is flawed and biased, when in reality it’s only a reflection of the real-life behavioural data that created it. In doing so, we diminish the experience of the people who actually suffer at the hands of real humans today.

People are less likely to confront the fact that humans are biased. There’s probably some psychological explanation for it. Maybe it’s our need to avoid conflict. Who knows. What I do know is that humans aren’t just biased: we’re big fans of obscuring our bias. The quotation below is from a mother whose son was shot by police. She describes the security footage, which was censored to hide the shooter but not the victim. In other words, we signal our desire to cover up the source of the injustice, not the crime itself.

“So you can see Nook running toward the other car. But then, when he wheels, turns around and runs back, as soon as he’s within the range of the car where the police officer is, a black box pops up, covering the shooting. So you don’t actually get to see it.” [Source]

As a society, we’re so afraid of acknowledging our prejudice that we continually work to scapegoat it or push it into the shadows. Blaming bias in models and algorithms seems to be the new target of choice. So, to show how foolish this is, I’m going to highlight the intense bias that currently exists in two of the industries predicted to be most affected by ML in the future.

Criminal Justice

An area where this problem has manifested is in security and police monitoring systems. While these programs promise greater efficiency for large-scale surveillance, the idea that they operate impartially is false.

Our criminal justice system continually operates with racial bias and profiling metrics that target certain communities. For example, black male offenders face 20% longer sentences than their white counterparts (Source). This is even more disturbing when you look for the root cause of the disparity: black arrestees are 75 percent more likely to face a charge with a mandatory minimum than a comparable white arrestee (Source). Facing a mandatory minimum virtually guarantees a harsher sentence.

Moreover, according to research, innocent black people are seven times more likely to be convicted of murder than innocent white people. Because we as humans have adopted this type of behaviour, the models we create will be tainted by the same bias. If you disagree with this statement, you probably also disagree that black lives matter and say stuff like “no, ALL lives matter”.

But seriously. There are actual AI-backed criminal justice programs that have provided statistical evidence to support this point. One study found that when a model was trained using historical drug-crime data in Oakland, California, it was twice as likely to target predominantly black areas as it was white ones (Source).

The bias doesn’t just exist for violent crimes, either. In the United States, white males aged 18–25 smoke marijuana at a higher rate than black males in the same demographic, yet black males are three times as likely to be arrested on possession charges (Source).

Workplace Management

Many people like to rave about how ML is going to revolutionize the way we hire, fire, and evaluate employees. While I agree that this technology stands to automate many of these functions, the notion that it can operate free of bias is not realistic given the current landscape of the workforce.

As it stands, women are a staggering 15% less likely to receive a promotion than their male co-workers (Source). Moreover, on an annual basis, female employees who work full time in Canada earn 74.2 cents for every dollar made by a full-time male employee. Even adjusting for the fact that men typically work longer hours than women, the hourly wage rate indicates that women still earn only 87.9 cents on the dollar (Source).

There are obviously many factors that contribute to these disparities; in particular, our inability to shake gender roles should be at the forefront. Even our robots are subject to them. Ever consider the fact that nearly every virtual assistant defaults to a female persona? From Siri to Alexa to the navigator in your Honda Civic, the societal idea that this function needs to be female is undeniable. These ideologies are widespread across many different occupations, and it’s these types of unconscious biases that leave women at an unfair disadvantage before they even walk in the door.

In the United States, the average wealth of white families is a whopping $500,000 higher than that of African American families. White people also earn an average hourly wage that is 36% higher than that of black people (Source). This is a direct outcome of the perpetual, systematic discrimination that minorities have suffered for centuries.

So what happens when these models are trained with data littered with historic bias against women, visible minorities, and people with disabilities? Unlike humans, computers can’t consciously hold bias and choose to act on it; instead, the patterns they learn are reinforced and compounded over time.
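To illustrate with entirely made-up numbers, here is a toy simulation of that compounding feedback loop in a predictive-policing setting: patrols are allocated by past arrest counts, patrols generate new arrests, and an initial disparity locks itself in even though the two areas have identical true crime rates:

```python
# A toy feedback-loop simulation with hypothetical numbers, not real data.
import numpy as np

rng = np.random.default_rng(1)
true_crime_rate = np.array([0.05, 0.05])  # both areas are identical
arrests = np.array([60.0, 40.0])          # historical bias: area 0 over-policed

for year in range(10):
    # Patrols are allocated in proportion to recorded arrests so far.
    patrol_share = arrests / arrests.sum()
    # New arrests scale with where patrols are sent, not with underlying crime.
    new_arrests = rng.poisson(1_000 * patrol_share * true_crime_rate)
    arrests += new_arrests
    print(f"year {year}: patrol share = {patrol_share.round(3)}")
```

The 60/40 split never corrects itself, because the data the system sees each year is a product of its own prior decisions.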

In cases such as these, many argue that we should simply make more of a conscious effort to put our biases aside when training machines. But that’s precisely the issue: bias and prejudice aren’t something that can just be switched on and off. Yes, in recent years there has been a successful push to discourage and flag the discrimination that continually seems to poison our workforce, but research indicates that we are still over 100 years away from achieving gender equality at the executive level (Source). Widespread cognitive change is a painfully slow process, and one that can only be accelerated through education and awareness.

The Point

The crux of my argument is that ‘bias in algorithms’ is the new safe place for a really scary conversation about equality. The models we create are a distillation of our behaviour. Even if you say you aren’t biased, your behaviour will ultimately reflect your true mindset. When we train models using data created by human behaviour, we are able to quantify our bias. The fact that we continue to scapegoat computer models built from actual human behaviour is depressingly ironic.

Simon Hicks
