
10 May

Artificial Intelligence and Bias

Artificial Intelligence, or simply AI, needs no introduction. Biases are shortcuts that our minds create based on categories. Artificial Intelligence and bias… what is the connection?

Whether it is Alexa responding to your command, driverless cars on the road, your bank calculating your loan eligibility, a machine identifying the presence or absence of killer cancer cells in the body, or simply your browser throwing up ads based on your earlier purchases, it is AI that is working in the background and making itself smarter (in other words, “more intelligent”) on the go.

A modern-day AI system typically uses Machine Learning to achieve a predefined purpose. Training data relevant to that purpose acts as input for an algorithm. The algorithm in turn detects patterns in the training data and develops a model to make predictions or recommendations about new data.
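That train-and-predict loop can be sketched in a few lines. This is a minimal illustration using a toy nearest-centroid “algorithm” and an invented loan-eligibility dataset, not any particular production system:

```python
# Minimal sketch of the train-then-predict loop: an algorithm fits a
# model to labeled training data, and the model then makes predictions
# about new, unseen data. The "loan" features are invented for illustration.

def train_nearest_centroid(examples):
    """Learn one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Assign the label of the closest centroid (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Training data: (income_score, debt_ratio) -> past loan decision
training_data = [
    ([0.9, 0.1], "approve"),
    ([0.8, 0.2], "approve"),
    ([0.2, 0.9], "reject"),
    ([0.3, 0.8], "reject"),
]
model = train_nearest_centroid(training_data)
print(predict(model, [0.85, 0.15]))  # close to the "approve" centroid
```

Notice that the model encodes nothing but the patterns in its training examples; that is exactly why biased training data matters, as the rest of this post discusses.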

Our dependence on everyday AI-based decisions is slated to increase multifold from its current levels. Within an organization, some talent decisions are already being supported by AI systems, e.g. a recruitment engine shortlisting profiles for a job or informing who gets to lead a new assignment. Clearly, such a system supports human decisions rather than taking decisions on behalf of humans.

AI making biased decisions


“The danger inside the machine-learning algorithm making millions of decisions every day is more a cause of concern than the super-intelligent killer robots.”
John Giannandrea, AI Chief at Google

The concern now is that AI decisions are biased… yes, that’s right: there is bias in decisions made by Artificial Intelligence. Logic says this should not be possible, since bias is a human phenomenon and AI, being artificial, ought to be unbiased. And indeed, the AI algorithm that builds the decision engine is not biased by itself. But the data used to build the AI engine is a product of human decisions. The AI system is only as good or as bad as the data it is trained on. The customary software acronym GIGO (Garbage In, Garbage Out) can now be updated to “Garbage in… an entire junkyard out.” That is, AI not only makes decisions based on the biases present in the data, it also accentuates those biases.

Examples of biases that researchers have identified in AI systems
  • Vicente Ordóñez, Professor of Computer Science at the University of Virginia, observed that image-recognition software labeled a person in a kitchen as a “woman” even when the person was a man. While the world is still debating gender bias among humans, the software was exaggerating it.
  • Researchers from Boston University and Microsoft used text from Google News to show that the gender biases of humans carry over into software. When the analogy “Man is to computer programmer as woman is to X” was put to the software, the answer was “homemaker.” Does this result throw up any surprises for you?
  • Amazon’s recruitment software was scrapped when it was realized that it was systematically not shortlisting women’s CVs. Needless to say, the AI engine had been built on the previous 10 years of hiring data. Amazon did try to edit the programs to make them neutral to the word “women,” but the developers could not guarantee that the machines would not become discriminatory in other ways when sorting candidates.
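Mechanically, the “man is to computer programmer as woman is to X” query is answered with vector arithmetic over word embeddings. Here is a minimal sketch using tiny hand-crafted vectors into which a gendered association has been deliberately baked, purely to show the mechanism; real systems learn such vectors from large corpora like Google News:

```python
import math

# Toy word embeddings. The gendered association is baked in on purpose,
# to illustrate how the analogy query is answered -- these numbers are
# invented, not learned from any corpus.
vectors = {
    "man":        [1.0, 0.0, 0.2],
    "woman":      [0.0, 1.0, 0.2],
    "programmer": [1.0, 0.1, 0.9],
    "homemaker":  [0.1, 1.0, 0.9],
    "computer":   [0.5, 0.5, 0.8],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def analogy(a, b, c, exclude):
    """Answer 'a is to b as c is to ?': the word closest to b - a + c."""
    target = [vb - va + vc for va, vb, vc in
              zip(vectors[a], vectors[b], vectors[c])]
    candidates = {w: v for w, v in vectors.items() if w not in exclude}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "programmer", "woman",
              exclude={"man", "programmer", "woman"}))  # "homemaker"
```

The point is that the embedding faithfully reproduces whatever associations exist in its training text; the bias lives in the data, not in the arithmetic.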
So what are our options? How do we mitigate these biases in our AI systems?

The AI system is only as good or as bad as the data it is trained on, so the quality of the data that goes into the system is paramount. While developing an AI system, ask yourself whether the data is appropriate, relevant, sufficiently accurate, accurately labeled, and representative of the diversity of the population the system is meant to serve.
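One concrete way to ask the “is it representative?” question is to audit group shares in the training data against a reference population. A sketch, assuming a hypothetical "gender" field and a 50/50 reference share chosen only for illustration:

```python
from collections import Counter

# Simple representation audit: before training, compare each group's
# share of the training data against a reference share for the
# population the system will serve. Field name and reference shares
# are assumptions for this sketch.
def representation_gaps(records, field, reference_shares):
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - ref
            for group, ref in reference_shares.items()}

# Invented historical hiring data: 8 men, 2 women
past_hires = [{"gender": "M"}] * 8 + [{"gender": "F"}] * 2
gaps = representation_gaps(past_hires, "gender", {"M": 0.5, "F": 0.5})
# gaps["F"] is about -0.3: women are underrepresented by ~30 points
```

A large gap like this is a signal to collect more data, rebalance, or at least document the limitation before training.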

Once we have assessed the quality of the data, knowing and understanding the biases present in the datasets will help in developing a system with minimized bias. Going back to the recruitment example: more men than women were hired in the past. If that history becomes our base data, the system will continue to shortlist men over women. Understanding this helps us develop an algorithm that mitigates the bias.
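One simple mitigation, sketched here on toy data, is to reweight training examples so that an underrepresented group carries as much total weight as the majority. This is only one illustrative technique among several that practitioners combine:

```python
from collections import Counter

# Give each training example a weight inversely proportional to its
# group's frequency, so every group contributes equal total weight.
def balancing_weights(labels):
    counts = Counter(labels)
    n_groups = len(counts)
    total = len(labels)
    # Each group's examples together sum to total / n_groups.
    return [total / (n_groups * counts[g]) for g in labels]

# Invented historical hiring data: 8 men, 2 women
genders = ["M"] * 8 + ["F"] * 2
weights = balancing_weights(genders)
# Each of 8 "M" examples weighs 10/(2*8) = 0.625,
# each of 2 "F" examples weighs 10/(2*2) = 2.5,
# so both groups carry equal total weight (5.0 each).
```

Most training libraries accept such per-example weights, so the historical imbalance no longer dominates what the model learns.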

Looking for patterns, or inconsistencies, in new data helps in identifying biases and hence in finding ways to mitigate them. IBM’s open-source AI Fairness 360 toolkit, for example, examines datasets and models for such patterns.
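One pattern such tools look for can be sketched directly: the gap in positive-decision rates between groups (often called demographic parity difference). The decision log below is invented for illustration:

```python
# Monitor an AI system's decisions for group-level patterns:
# compare the positive-decision ("selected") rate across groups.
def selection_rate(decisions, group):
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["selected"] for d in rows) / len(rows)

# Invented decision log: 6/10 men selected vs. 2/10 women selected
decision_log = (
    [{"group": "M", "selected": True}] * 6 +
    [{"group": "M", "selected": False}] * 4 +
    [{"group": "F", "selected": True}] * 2 +
    [{"group": "F", "selected": False}] * 8
)
gap = selection_rate(decision_log, "M") - selection_rate(decision_log, "F")
print(round(gap, 2))  # 0.4 -- a large gap worth investigating
```

A persistent gap like this does not prove the system is unfair on its own, but it is exactly the kind of pattern that should trigger a human review.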

As an endnote: AI-based systems are here to stay and grow, and it is up to humans to ensure they are as bias-free as possible. Understanding the nature of the training data, testing the AI model with diverse datasets, and monitoring decisions for suspicious patterns can go a long way toward surfacing and mitigating the biases in artificially intelligent systems. Let’s not allow bias and Artificial Intelligence to together create a world that is undesirable.

Watch Genpact CEO Tiger Tyagarajan talk to CNBC’s Jim Cramer about how artificial intelligence can have a bias because it is based on human intelligence.

References

  • https://www.forbes.com/sites/bernardmarr/2019/01/29/3-steps-to-tackle-the-problem-of-bias-in-artificial-intelligence/#63bda077a128
  • https://www.wired.com/story/machines-taught-by-photos-learn-a-sexist-view-of-women/


