We know AI and machine learning are biased, now what?

The revolutionary technologies of Artificial Intelligence (AI) and machine learning have already impacted almost every industry. AI and machine learning have also infiltrated our personal lives. Not just through smartphones, laptops, and smart speakers, but through the rapidly growing number of intelligent, connected products — the Internet of Things (IoT) devices — we find in our increasingly smart homes.

[Image: AI and machine learning. Source: Forbes]

The rapid growth of AI and machine learning continues to accelerate. According to Statista, there are now over 25 billion connected devices in the world, a number expected to exceed 75 billion by 2025. Many of us now use multiple virtual assistants (e.g., Siri, Alexa, Google Assistant) on a daily basis.

People are biased, so AI’s algorithms suffer from the same bias, unintentionally transferred to them during the creation process. So how can we overcome it?

Why do we care about AI and machine learning bias?

The data we use to make decisions big and small increasingly comes from machines, and no one wants to make decisions based on biased data and analytics. A facial recognition algorithm that decides you’re an animal and not a human is not funny when you’re that human. And in a recent study, researchers found that a machine learning algorithm used to calculate the probability of someone recommitting a crime was twice as likely to tag people of color as future criminals.

Fortunately, there are several best practices already being used to prevent machine learning bias. Here’s what developers are currently doing:

More diversity = less bias

[Image: AI and machine learning. Source: rawpixel on Unsplash]

Bias is transferred from creators to algorithms, but acknowledging that human beings are biased is just the start. Companies are checking this bias by increasing the diversity of their technical workforce and through auditing and testing.

Companies are also realizing the importance of protecting their data scientists from explicit and implicit organizational pressures so they can work in an unbiased manner. And companies are increasingly educating users themselves to better understand how machine learning works, so that potential bias can be quickly recognized.

Select the right learning model

Selecting the right learning model is a complicated task for data scientists and executives. The challenge is that every learning model is different, and there isn’t a single, “bias-proof” model. Depending on your project, you can, for example, choose a supervised or an unsupervised learning model. Unsupervised techniques such as clustering and dimensionality reduction can make a machine learn bias directly from a given data set: if there is a high correlation between a specific group and a particular behavior, the model can conflate the two. On the flip side, a supervised model can add human bias to its algorithm through the labels people provide.
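To make the unsupervised case concrete, here’s a minimal sketch using scikit-learn. The data is entirely synthetic and the feature names are made up for illustration; the point is only to show how a clustering model can rediscover a sensitive attribute that correlates with another feature.

```python
# A minimal sketch of how an unsupervised model can absorb bias.
# All data and feature names are synthetic, for illustration only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical sensitive attribute (0/1) that happens to correlate
# strongly with a behavioral feature in this data set.
group = rng.integers(0, 2, size=n)
behavior = group * 2.0 + rng.normal(0.0, 0.5, size=n)
other_feature = rng.normal(0.0, 1.0, size=n)  # unrelated noise

X = StandardScaler().fit_transform(np.column_stack([behavior, other_feature]))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# If each cluster is dominated by one group, the model has effectively
# rediscovered the sensitive attribute and can propagate the bias.
for cluster in (0, 1):
    share = group[labels == cluster].mean()
    print(f"Cluster {cluster}: {share:.0%} of members belong to group 1")
```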

Here’s a supervised-learning example. A college’s admissions process automatically includes ACT scores. Including applicants’ postal codes may lead to discrimination, yet attending a given school may legitimately affect students’ ACT scores, so the postal code acts as a proxy that is hard to untangle. Data scientists and executives must actively discuss which learning model will minimize bias for their particular use case.
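One mitigation this example suggests is to exclude proxy features like postal codes before training. A minimal sketch, assuming a pandas DataFrame with hypothetical column names:

```python
# A minimal sketch: drop features that can act as proxies for protected
# attributes before fitting a supervised model. All column names and data
# are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applicants = pd.DataFrame({
    "act_score":   [28, 31, 22, 25, 34, 19],
    "gpa":         [3.6, 3.9, 2.8, 3.1, 4.0, 2.5],
    "postal_code": ["10001", "94103", "60601", "10001", "94103", "60601"],
    "admitted":    [1, 1, 0, 0, 1, 0],
})

PROXY_FEATURES = ["postal_code"]  # features likely to encode group membership

X = applicants.drop(columns=PROXY_FEATURES + ["admitted"])
y = applicants["admitted"]

model = LogisticRegression().fit(X, y)
print(model.predict(X))
```

Dropping a proxy is not a complete fix, of course; other features may still correlate with the removed attribute, which is why the learning-model discussion above still matters.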

Reach beyond the tech team

To safeguard against bias, the tech team can bring in additional parties. While the training data must be carefully selected by data scientists, non-technical people should also monitor the selection process to make sure the training data set is diverse, as in the sketch below.
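One simple way to support that monitoring is to summarize the training set’s group composition so that reviewers, technical or not, can spot under-represented groups. A minimal sketch, with a hypothetical file and column names:

```python
# A minimal sketch: summarize group representation in a training set so
# reviewers can spot under-represented groups. The CSV file and column
# names are hypothetical.
import pandas as pd

training_data = pd.read_csv("training_data.csv")

# Share of each demographic group in the training set.
print(training_data["demographic_group"].value_counts(normalize=True))

# Outcome rate per group; large gaps deserve a closer look before training.
print(training_data.groupby("demographic_group")["label"].mean())
```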

Use real data to monitor performance

[Image: AI and machine learning. Source: turalt on Medium]

Using real data can help eliminate bias. Testing the algorithm with real-world applications helps us understand the algorithm’s prejudices. To examine the data, we can check two types of equality: opportunity equality and outcome equality.

Suppose we are designing an algorithm for approving car loans. To monitor outcome equality, we can check whether the algorithm approves loans at equal rates regardless of race and location. For opportunity equality, we check whether all loan applicants return their applications at the same rate, again regardless of race and location.
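Here’s a minimal sketch of both checks for the car-loan example, assuming a hypothetical CSV with columns for race, location, whether an application was returned, and whether it was approved:

```python
# A minimal sketch of the two equality checks described above.
# The CSV file and column names are hypothetical.
import pandas as pd

loans = pd.read_csv("loan_applications.csv")

# Outcome equality: are loans approved at equal rates across groups?
outcome = loans.groupby(["race", "location"])["approved"].mean()
print("Approval rate by group:\n", outcome)

# Opportunity equality: are applications returned at equal rates?
opportunity = loans.groupby(["race", "location"])["application_returned"].mean()
print("Application return rate by group:\n", opportunity)

# Flag groups whose approval rate deviates notably from the overall rate.
overall = loans["approved"].mean()
gaps = (outcome - overall).abs()
print("Groups deviating by more than 5 points:\n", gaps[gaps > 0.05])
```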

Final Considerations

Although widely used, AI and machine learning are still in their infancy, with advancements being announced daily. As investment in AI increases, so will the investment in reducing AI’s bias. I hope you’ve found useful insights in this article. If you have something to add, feel free to leave a comment below. And if you’re looking for a course to learn more about AI and machine learning, I recommend Simplilearn’s Machine Learning Certification.


About the Author

Danish Wadhwa creates digital content to build strong relationships for enterprises and individuals. He specializes in digital marketing, cloud computing, web design, and other IT services, delivering solutions to business problems. Connect with Danish on LinkedIn at linkedin.com/in/danishwadhwa/.
