
Artificial Intelligence and Bias: The Buck Stops (W)here


While biases will always be part of artificial intelligence, is it time for an AI renaissance?

It is not surprising that many industries are turning to artificial intelligence (AI) technologies such as machine learning to review vast amounts of data. Whether it is analyzing financial records to decide whether someone qualifies for a loan, spotting errors in legal contracts, or determining whether a patient suffers from schizophrenia, artificial intelligence has you covered. But is it foolproof or impartial? Can this modern technology be prone to bias, just like humans? Let us find out.

Bias risks differ for each business, industry, and organization, and bias can find its way into artificial intelligence systems in numerous ways. It can be introduced intentionally, for instance via a stealth attack, or unintentionally, which makes it hard to ever detect. It can also come from humans who feed in data that reflects their own biased thinking, or from sampling bias in how the data was collected. There are also long-tail biases, which occur when certain categories are missing from the training data altogether.

It is obvious that bias in the data can make an artificial intelligence model biased, but what is more dangerous is that the model can actually amplify that bias. For example, a team of researchers found that while 67% of the people pictured cooking in a training set were women, the trained algorithm labeled 84% of the cooks as women. Deep learning algorithms are increasingly being used to make life-impacting decisions, such as hiring employees, criminal justice, and health diagnosis. If these algorithms make incorrect decisions because of AI bias, the results can be devastating in the long run.
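To make the amplification effect concrete, here is a minimal sketch (not the researchers' actual code, and with made-up labels matching the percentages above) that compares the gender split in training annotations with the gender split in a model's predictions for the same activity:

```python
# Illustrative sketch: measuring bias amplification as the gap between the
# share of women among training labels and among model predictions.
from collections import Counter

# Hypothetical labels: ("cooking", "woman") means an image of cooking
# annotated with a woman as the agent.
train_labels = [("cooking", "woman")] * 67 + [("cooking", "man")] * 33
predictions  = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

def share_of_women(pairs):
    counts = Counter(gender for activity, gender in pairs if activity == "cooking")
    return counts["woman"] / sum(counts.values())

train_share = share_of_women(train_labels)  # 0.67 in the training data
pred_share = share_of_women(predictions)    # 0.84 in the model's output
print(f"bias amplification: {pred_share - train_share:+.2f}")  # +0.17
```

A positive gap means the model does not merely reproduce the skew in its data; it exaggerates it.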

For instance, in 2016, ProPublica, a nonprofit news organization, critically analyzed an AI-powered risk assessment tool known as COMPAS, which has been used to predict the likelihood that a prisoner or accused person will commit further crimes if released. It found that the false-positive rate (defendants labeled "high-risk" who did not re-offend) was nearly twice as high for black defendants (45%) as for white defendants (24%). Beyond this, there are multiple instances where artificial intelligence tools have misclassified, mislabeled, or misidentified people because of their race, gender, or ethnicity. In the same year, when the Beauty.AI website used AI robots as judges for a beauty contest, people with light skin were judged far more attractive than people with dark skin.
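The metric behind the COMPAS finding is straightforward to compute. The sketch below, using hypothetical column names and toy data rather than the actual COMPAS dataset, shows how a per-group false-positive rate is derived:

```python
# Hedged sketch: per-group false-positive rate, i.e. the share of people
# labeled "high-risk" among those who did NOT re-offend, split by group.
import pandas as pd

df = pd.DataFrame({
    "race":       ["black", "black", "white", "white", "black", "white"],
    "high_risk":  [1,        1,       0,       1,       0,       0],   # model's label
    "reoffended": [0,        1,       0,       0,       0,       0],   # observed outcome
})

did_not_reoffend = df[df["reoffended"] == 0]
fpr_by_group = did_not_reoffend.groupby("race")["high_risk"].mean()
print(fpr_by_group)  # a large gap between groups signals disparate false positives
```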

It is important to uncover unintentional artificial intelligence bias and to align technology tools with diversity, equity, and inclusion policies and values in the business domain. According to PwC's 2020 AI Predictions, 68% of organizations still need to address fairness in the AI systems they develop and deploy.

Machine learning and deep learning models are usually built in three phases: training, validation, and testing. Although bias can creep in long before the data is collected, and at many other stages of the deep-learning pipeline, it influences the model from the training phase itself. Parametric algorithms such as linear regression, linear discriminant analysis, and logistic regression are prone to high bias in the statistical sense, because they make strong simplifying assumptions about the data. As artificial intelligence systems become more dependent on deep learning and machine learning owing to their usefulness, tackling AI biases only gets trickier.
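The three phases look roughly like this in practice. This is a minimal sketch using scikit-learn; the synthetic dataset and split ratios are purely illustrative:

```python
# Minimal sketch of the training / validation / testing phases.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Carve out a held-out test set first, then split the rest into train/validation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)  # a parametric model of the kind mentioned above
model.fit(X_train, y_train)                                 # training phase
print("validation accuracy:", model.score(X_val, y_val))    # validation phase
print("test accuracy:", model.score(X_test, y_test))        # testing phase
```

Whatever skew exists in `X_train` and `y_train` is baked into the model at the first of these steps, which is why the training data deserves the closest scrutiny.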

While biases are being addressed at an accelerated pace, the key challenge lies in defining bias itself: what looks like bias to one developer or data scientist may not to another. Another concern is what guidelines 'fairness' should adhere to – is there a technical way to define fairness in artificial intelligence models? Competing definitions create confusion and cannot all be satisfied at the same time. It is also crucial to decide what the error rates and accuracy should be for different subgroups in a dataset. Finally, data scientists need to factor in the social context: a machine learning model that works well in a criminal justice scenario will not necessarily be suitable for screening candidates for a job. Social context matters!
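The tension between fairness definitions is easy to demonstrate. In the hypothetical example below, the same set of predictions satisfies one common criterion (demographic parity, equal rates of positive predictions across groups) while violating another (equal opportunity, equal true-positive rates across groups):

```python
# Sketch under assumptions: two technical fairness definitions applied to the
# same toy predictions can disagree with each other.
import numpy as np

group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
y_true = np.array([1,    1,   0,   0,   1,   1,   1,   0])
y_pred = np.array([1,    0,   1,   0,   1,   1,   0,   0])

def positive_rate(g):            # demographic parity compares these
    return y_pred[group == g].mean()

def true_positive_rate(g):       # equal opportunity compares these
    mask = (group == g) & (y_true == 1)
    return y_pred[mask].mean()

for g in ("a", "b"):
    print(g, "positive rate:", positive_rate(g), "TPR:", round(true_positive_rate(g), 2))
# Both groups get positives at the same rate (0.5), yet their TPRs differ (0.5 vs 0.67).
```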

There is no doubt that opting for more diverse data can alleviate AI bias by adding data points and indicators that capture different priorities and insights, but it is not enough. Meanwhile, the presence of proxies for specific groups makes it hard to build a deep learning model, or any other AI model, that is aware of all potential sources of bias.
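A proxy is simply a seemingly neutral feature that happens to predict a protected attribute. A quick, hypothetical check like the one below (feature names and data are made up) can reveal that dropping the protected column does not remove the information:

```python
# Hypothetical sketch: if zip code almost perfectly predicts group membership,
# it acts as a proxy even when the "group" column is excluded from training.
import pandas as pd

df = pd.DataFrame({
    "zip_code": ["10001", "10001", "20002", "20002", "10001", "20002"],
    "group":    ["a",     "a",     "b",     "b",     "a",     "b"],
})

# Per-zip share of the dominant group; values near 1.0 signal a strong proxy.
leakage = df.groupby("zip_code")["group"].agg(lambda s: s.value_counts(normalize=True).max())
print(leakage)
```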

Lastly, not all AI biases have a negative footprint or influence. Explainable AI (XAI) can help discern whether a model is relying on a good bias or a bad one when it makes a decision, and it tells us which factors weigh most heavily in that decision. XAI will not eliminate bias, but it does enable human users to understand, appropriately trust, and effectively manage AI systems.
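One widely available technique in this spirit is permutation importance, sketched below with scikit-learn on a synthetic dataset. It is only one of many XAI approaches, but it illustrates how to surface which features a trained model leans on most, including a suspiciously influential proxy:

```python
# Minimal sketch: ranking feature influence with permutation importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")  # larger value = model relies on it more
```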
