
Why Bias in Artificial Intelligence Is Bad News for Society


Artificial Intelligence

The practice of including Artificial Intelligence in industry applications has been skyrocketing for a decade now. This is evident since AI and its constituent applications, machine learning, computer vision, facial analysis, autonomous vehicles, and deep learning, form the pillars of modern digital empowerment. The ability to learn from the data it is trained on, to model the world, and to make decisions derived from its insights sets AI apart from earlier technologies. Leaders believe that possessing AI-based technologies equates to future industry success. From healthcare, research, finance, and logistics to the military and law enforcement, AI holds the key to a massive competitive edge, along with monetary benefits. This is where AI emerges as a double-edged sword. In the hands of malicious entities, AI can have negative implications for humans. Moreover, AI is set to bring a paradigm shift of power in favor of those who possess it, or those who escape its bias.

Why is it bad?

The recent global outrage over George Floyd’s death highlighted the bias that may exist in today’s technologies, especially since AI has a history of racial and ethnic bias. While incidents like Microsoft’s row over mislabeling a famous singer may be random and unfortunate mistakes, there is evidence that the data fed to AI systems is already biased. This data contains implicit racial, gender, or ideological biases, which result in discrimination when they find their way into the AI systems that governments and businesses design and use to make decisions. IBM predicts that the number of biased AI systems and algorithms will grow within the next five years. This is alarming, as AI is used in sectors like healthcare, criminal justice, and customer services, among other sensitive areas. A biased AI system may end up wrongly denying loans or conducting faulty surveillance of neighborhood streets. Over time, this corrodes trust in these systems and threatens the socio-economic and political balance.

How does bias occur in AI?

Though bias has been identified in facial recognition systems, hiring programs, and the algorithms behind web searches, the question remains: how do biases enter these systems? They emerge during any of three processes: building a model, collecting data, or preparing a dataset governed by certain attributes. Of the three, bias during data collection is the most common. It can happen in two ways: either the data gathered is unrepresentative of reality, or it reflects existing prejudices. The first case might occur, for example, if a deep-learning algorithm is fed more photos of light-skinned faces than dark-skinned faces; this ‘trains’ the system to fail at detecting dark-skinned faces. Joy Buolamwini at MIT, working with Timnit Gebru, found that facial analysis technologies had higher error rates for minorities, and particularly minority women, potentially due to unrepresentative training data.
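
As a rough illustration of the first failure mode, the minimal sketch below (all data, group labels, and numbers are invented for illustration; none of it is drawn from the studies mentioned above) trains a single classifier on a pool in which one group supplies 90% of the examples, then audits the error rate for each group separately:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 2-feature samples; `shift` moves the group's true boundary."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# 90% of the training data comes from group A, only 10% from group B.
Xa, ya = make_group(900, shift=0.0)   # over-represented group
Xb, yb = make_group(100, shift=1.5)   # under-represented group
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Audit step: evaluate on balanced held-out sets, one per group.
for name, (Xt, yt) in {"A": make_group(1000, 0.0),
                       "B": make_group(1000, 1.5)}.items():
    print(f"group {name}: error rate = {1 - model.score(Xt, yt):.2%}")
```

Because the pooled model settles on a decision boundary that fits the majority group, the under-represented group typically shows a markedly higher error rate, mirroring the pattern reported for commercial facial analysis systems.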

The latter scenario can occur when, for instance, an internal recruiting tool filters out female candidates during the hiring process. This is exactly what happened at Amazon, whose AI recruiting tool penalized female candidates because it was trained on historical hiring decisions that favored men over women. It is essential to note that developers rarely introduce such bias deliberately; however, a lack of diverse social representation on development teams can increase the likelihood of designing a skewed system.
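
The same mechanism can be shown in a stylized way. In the hypothetical sketch below (the data is invented; this is not Amazon’s system), historical “hired” labels carry a penalty against one group, and a model fitted to those labels learns that penalty as if it were genuine signal:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
skill = rng.normal(size=n)              # genuine qualification signal
group = rng.integers(0, 2, size=n)      # 1 = historically disfavored group

# Historical decisions rewarded skill but applied a penalty to group 1.
hired = (skill - 1.0 * group + rng.normal(0, 0.3, n) > 0).astype(int)

# A model fitted to those labels inherits the penalty as a negative weight.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("learned coefficients [skill, group]:", model.coef_[0])
```

Note that simply dropping the group column would not necessarily fix this, since other features can act as proxies for group membership; that is why the kind of auditing discussed in the next section matters more than deleting columns.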

 

Possible Solutions

To mitigate this situation, we must realize that the sensitivity of the issue depends on how we define bias. Simply removing bias from the dataset is never the whole solution: a system trained on a ‘debiased’ dataset can still behave unfairly at test time, once the solution is deployed. So the best way to address bias in AI is to cross-check the algorithm for patterns of unintended bias and retrain the system when they appear. There are also discussions on a global scale about augmenting AI with social intelligence to eradicate biases. Besides, it is a symbiotic relationship: AI can help us by revealing ways in which we are partial, parochial, and cognitively biased, leading us to adopt more impartial or egalitarian views, and in the process of recognizing our bias and teaching machines about our shared values, we can improve AI in turn.
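
To make “cross-checking the algorithm” concrete, here is a minimal audit sketch, assuming one has a model’s binary decisions and a protected attribute for a held-out set; the function names and example numbers are hypothetical, not from any particular toolkit:

```python
import numpy as np

def selection_rates(pred, group):
    """Fraction of positive decisions per group (a demographic-parity check)."""
    return {g: float(pred[group == g].mean()) for g in np.unique(group)}

def parity_gap(pred, group):
    """Largest spread in selection rates across groups; a flag, not a verdict."""
    rates = selection_rates(pred, group)
    return max(rates.values()) - min(rates.values())

# Usage with made-up decisions for eight applicants:
pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(selection_rates(pred, group))  # {'A': 0.75, 'B': 0.25}
print(parity_gap(pred, group))       # 0.5 -> investigate and retrain
```

A large gap does not by itself prove discrimination, but it flags where to investigate, rebalance the data, or retrain.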

Another method to counter bias in AI is to try to understand how machine learning and deep learning algorithms arrive at a specific decision or observation. Through explainability techniques, we can learn whether an outcome was based on a predefined bias or not. On the data side, researchers have made progress on text classification tasks by adding more data points to improve performance for protected groups. Innovative training techniques, such as transfer learning or decoupled classifiers for different groups, have proven useful for reducing discrepancies in facial analysis technologies. Another solution is encouraging ethics education within companies and organizations. By educating employees about cultural and lifestyle differences, one can create awareness of groups within society that might otherwise be overlooked or not considered at all.
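
Of the techniques named above, decoupled classifiers are the easiest to sketch: fit one model per group instead of forcing a single model to serve groups with different underlying patterns. The synthetic data below is an illustrative assumption, chosen so the contrast is visible:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, w):
    """Synthetic samples for a group whose true decision rule is w."""
    X = rng.normal(size=(n, 2))
    y = (X @ np.asarray(w) + rng.normal(0, 0.2, n) > 0).astype(int)
    return X, y

rules = {"A": [1.0, -1.0], "B": [-1.0, 1.0]}   # hypothetical per-group rules
train = {g: make_group(500, w) for g, w in rules.items()}
test = {g: make_group(500, w) for g, w in rules.items()}

# Pooled baseline: a single model cannot satisfy both conflicting rules.
X_all = np.vstack([X for X, _ in train.values()])
y_all = np.concatenate([y for _, y in train.values()])
pooled = LogisticRegression().fit(X_all, y_all)

# Decoupled classifiers: one model per group, applied only to that group.
decoupled = {g: LogisticRegression().fit(X, y) for g, (X, y) in train.items()}

for g, (Xt, yt) in test.items():
    print(f"group {g}: pooled accuracy = {pooled.score(Xt, yt):.2%}, "
          f"decoupled accuracy = {decoupled[g].score(Xt, yt):.2%}")
```

On this toy data, the pooled model hovers near chance for both groups because their rules conflict, while each decoupled model recovers its own group’s rule.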

In the end, we must remember that,

“AI is a tool. The choice about how it gets deployed is ours.” – Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence.
