IDG Contributor Network: Is it even possible to overcome built-in AI/ML bias?

Posted 25th September 2019 - Technology

Machines are fed mounds of data to extrapolate, interpret and learn. Unlike humans, algorithms are ill-equipped to consciously counteract learned biases, because despite what we would like to believe, AI/ML does not really correlate to human thinking. AI/ML has driven what many call the newest industrial revolution by giving computers the ability to interpret human language, and without any intention, it has learned human biases along the way.

So, where does the data used by AI/ML systems come from? Most of this historical data comes from the same type of people who created the algorithms and the programs that use them, a group that until recently has been predominantly male and socio-economically above average. So, without thought or intent, gender and racial biases have dominated the AI/ML learning process. An AI/ML system is not capable of “thinking on its feet” or reversing this bias once it makes a decision. The point is that AI/ML systems are biased because humans are innately biased, and AI/ML systems are not capable of moral decisions the way humans are; at least, not yet.

Research has shown recruiting (HR) software is biased

Much research shows that as machines acquire human-like language capabilities, they also absorb the deeply ingrained human biases concealed within language patterns. Within recruiting (HR) selection software, this means a resume may fail to make the “first cut” based on its language and patterns rather than on the candidate's skills, as the sketch below illustrates. Writing a resume has become both an art and a science; doing it well now calls for the skills of a data scientist coupled with those of a professional writer, someone highly educated in language with an analytical mind. How many professional writers are capable of being data scientists? Our educational system needs to address this, because I believe everyone will need to be a highly skilled data scientist, or have quick and easy access to one.
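To make the “first cut” problem concrete, here is a minimal, purely hypothetical sketch of a naive keyword-based screen: two candidates with equivalent skills are ranked differently solely because of phrasing. The phrases and resume lines are invented for illustration, not taken from any real product.

```python
# Toy illustration of a naive keyword-based resume screen: two candidates
# with the same underlying skill are scored differently purely because of
# phrasing. All phrases and resume text here are hypothetical.
REQUIRED_PHRASES = {"stakeholder management", "data-driven", "cross-functional"}

def first_cut_score(resume_text: str) -> int:
    """Count how many of the screen's favored phrases appear verbatim."""
    text = resume_text.lower()
    return sum(phrase in text for phrase in REQUIRED_PHRASES)

candidate_a = "Led cross-functional teams using data-driven stakeholder management."
candidate_b = "Coordinated engineers and analysts; decisions grounded in metrics."

print(first_cut_score(candidate_a))  # 3 -> passes the first cut
print(first_cut_score(candidate_b))  # 0 -> rejected despite equivalent skills
```

Real screening systems are far more sophisticated than a phrase count, but the failure mode is the same: the score rewards how something is written, not what the candidate can do.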

Recent research has shown that human psychological biases in AI/ML systems can be exposed through implicit word association tests, which mathematically score how strongly words associate with pleasant versus unpleasant terms. Words relating to “flowers,” for example, score as psychologically more pleasant than words relating to “insects.” The same tests reveal gender bias: “female” and “woman” are associated with humanities professions and with the home, while “male” and “man” are associated with math, science and engineering professions. European American names perceived as more Anglo-Saxon were heavily associated with words like “gift” or “happy,” while African American names were associated with unpleasant words.
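The mechanics of such an association test can be sketched in a few lines. The version below uses tiny made-up embedding vectors purely for illustration; the published studies run the same kind of comparison over real pretrained word embeddings (such as GloVe or word2vec) with hundreds of dimensions and larger word sets.

```python
# Minimal sketch of an implicit word-association test over word embeddings.
# The 3-d vectors below are hypothetical stand-ins for illustration only;
# real studies use pretrained embeddings trained on large text corpora.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to
    unpleasant words; a positive score means a 'more pleasant' association."""
    return (np.mean([cosine(word_vec, p) for p in pleasant]) -
            np.mean([cosine(word_vec, u) for u in unpleasant]))

# Hypothetical embeddings: 'flower' sits near 'pleasant', 'insect' near 'unpleasant'.
emb = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([1.0, 0.0, 0.1]),
    "unpleasant": np.array([0.0, 1.0, 0.1]),
}

pleasant, unpleasant = [emb["pleasant"]], [emb["unpleasant"]]
for w in ("flower", "insect"):
    print(w, round(association(emb[w], pleasant, unpleasant), 3))
```

In embeddings trained on real web text, computing the same score for female versus male terms against career versus family words, or for different groups of first names against pleasant versus unpleasant words, surfaces exactly the associations described above.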

Statistically, research shows that even with identical resumes (CVs), a European American applicant is still roughly 50% more likely to be called for an interview than an African American applicant.

Read more at https://www.cio.com, by Robin Austin
