
A Pathway to Trustworthy and Unbiased AI


While the country continues to struggle to convince people that it is safe and smart to get vaccinated against Covid-19, despite emergency authorization and advocacy from the U.S. Food and Drug Administration (FDA), a growing number of people are willing to jump in with both feet when it comes to trusting their lives to Artificial Intelligence (AI), a field with no binding standards or organizational oversight. At this time, only "guidelines" for AI exist from the U.S. Federal Trade Commission (FTC).

I will take you through an example of how even impactful applications of AI still need a human assistant to ensure trustworthy, explainable, and unbiased decision making. 

Using AI to automate manufacturing and improve our work streams, such as by providing robust computerized maintenance management systems (CMMS), is an excellent application of the technology. A CMMS is a proactive tool for keeping systems running and optimizing maintenance operations. If businesses and institutions do not invest in smart maintenance approaches and instead act only after a system fails, operations and customer service ultimately suffer, especially when the components needed for repair are not readily available.

A CMMS uses sensors and other Internet of Things (IoT) devices and leverages cloud services. It can track equipment assets, sense whether a device's performance is degrading, maintain a history of repair times, and help determine whether it is time to replace or upgrade a system rather than keep paying for repairs. AI can sift through this plethora of data and make inferences that save money, anticipate risk factors, and inform risk mitigation plans.
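
To make the repair-versus-replace inference concrete, here is a minimal Python sketch. The device name, costs, failure rate, and planning horizon are all invented for illustration; a real CMMS would draw these from its asset and sensor records.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceHistory:
    """Repair history for one asset tracked by the CMMS (illustrative only)."""
    device_id: str
    repair_costs: list = field(default_factory=list)  # cost of each past repair
    failures_per_year: float = 0.0                    # observed failure rate

def recommend_action(history: DeviceHistory, replacement_cost: float,
                     horizon_years: float = 3.0) -> str:
    """Naive heuristic: compare projected repair spend over a planning
    horizon against the cost of a new unit."""
    if not history.repair_costs:
        return "monitor"  # no data yet; keep collecting history
    avg_repair = sum(history.repair_costs) / len(history.repair_costs)
    projected_repairs = avg_repair * history.failures_per_year * horizon_years
    return "replace" if projected_repairs > replacement_cost else "repair"

# Hypothetical asset: four repairs averaging $500, failing twice a year.
pump = DeviceHistory("pump-07", repair_costs=[450, 520, 480, 550],
                     failures_per_year=2.0)
print(recommend_action(pump, replacement_cost=2500))  # -> "replace"
```

A production system would weigh far more factors (downtime cost, parts availability, remaining useful life), but the shape of the inference is the same.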

So far in this example, it appears that no individual human can be adversely affected by the AI in this CMMS application. But consider a scenario in which a relationship emerges between a repeatedly failing device and the person who performed its repairs.

Does that mean the repair person needs more training, or simply that the device is too fault-prone and should be replaced? This is when the AI needs to ask a human for help, and the human needs to be smart about interpreting what the data means.
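
One way such a hand-off might look in code is sketched below, using a hypothetical needs_human_review check with made-up thresholds: when repeat failures on a device concentrate on one technician, the signal is ambiguous, so the system escalates instead of drawing its own conclusion.

```python
from collections import Counter

def needs_human_review(repairs: list, min_repairs: int = 5,
                       concentration: float = 0.8) -> bool:
    """Return True when one technician accounts for most repeat failures
    on a device -- an ambiguous signal the AI should not interpret alone.
    (Thresholds here are illustrative, not recommendations.)"""
    if len(repairs) < min_repairs:
        return False
    by_tech = Counter(r["technician"] for r in repairs)
    top_share = by_tech.most_common(1)[0][1] / len(repairs)
    return top_share >= concentration

# Hypothetical repair log for one repeatedly failing device.
log = [{"technician": "T-14"}] * 5 + [{"technician": "T-02"}]
if needs_human_review(log):
    print("Escalate: this may reflect training needs OR a faulty device.")
```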

Whenever AI makes a decision that affects a person's livelihood, or is used to evaluate an individual's competency, we need to hit the big red STOP button and understand how the AI is reaching its decisions and what they mean. There have been many examples of companies using AI to evaluate employee job performance, resulting in poor evaluations or dismissals. Lending institutions using AI have deployed algorithms that unfairly disqualified underrepresented groups of individuals from loans.
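
For the lending case, one simple sanity check, sketched here with made-up group labels and numbers, is to compare approval rates across groups before trusting the model's output. The ratio below follows the widely cited "four-fifths rule" heuristic; it is a screening tool, not proof of fairness.

```python
def disparate_impact(approvals: dict) -> float:
    """approvals maps group -> (approved, total). Returns the ratio of the
    lowest group approval rate to the highest; values below ~0.8 are a
    conventional red flag (the 'four-fifths rule')."""
    rates = {g: a / t for g, (a, t) in approvals.items()}
    return min(rates.values()) / max(rates.values())

# Made-up numbers for illustration only.
ratio = disparate_impact({"group_a": (80, 100), "group_b": (50, 100)})
print(f"impact ratio = {ratio:.2f}")  # 0.62 -> investigate before deploying
```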

This kind of blind faith in computer-generated decisions gives me a flashback to high school, when they gave us career assessment tests to match us to a future vocation.

I was at the top of my class, and my best friend, a boy, had scored slightly lower than I did in math and science. We both expected the outputs of our career assessments to be similar. How wrong I was!

His assessment said he could become an engineer, scientist, or politician. Mine said I could be a cook or sell cosmetics.

True fact: I stink at cooking. I never could cook, and I still can't. Furthermore, it would be cruel and unjust to subject anyone to my cooking.

When I complained to the guidance counselor, he reminded me that the results had to be correct because they were "computer generated" and therefore accurate. Thankfully, I have always been a rebel; I ignored the assessment and went on to become an engineer.

AI has enormous potential, but it has a long way to go before it can be considered a trustworthy, unbiased, standalone replacement for human decision making.

In a recent IEEE/IEEE-HKN webinar, Dr. Manuela M. Veloso, Head of J.P. Morgan AI Research, discussed human-assisted AI as the bridge in the quest to create more robust, trustworthy AI. Keeping the human in the loop while the AI explores the data, and letting the AI ask the human for help, is a paradigm that many have leaped over in the rush to deploy AI. It is time to accept that human-assisted AI is necessary if we are to move forward with robust applications. We need to demystify AI as a building block and give the AI a way to declare that it needs help from its human partner.
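
One simple reading of "the AI declares it needs help," sketched below with an invented model interface and confidence threshold, is to act on a prediction only when the model is confident enough, and otherwise route the case to its human partner.

```python
def decide_or_defer(model_predict, features, confidence_floor: float = 0.9):
    """Return the model's decision only when it is confident enough;
    otherwise hand the case to a human reviewer. The interface and
    threshold are assumptions for illustration."""
    label, confidence = model_predict(features)
    if confidence >= confidence_floor:
        return {"decision": label, "source": "model"}
    return {"decision": None, "source": "human", "note": "AI requested help"}

# Toy stand-in for a trained model that returns (label, confidence).
toy_model = lambda x: ("ok", 0.72)
print(decide_or_defer(toy_model, {"vibration": 0.4}))
# -> {'decision': None, 'source': 'human', 'note': 'AI requested help'}
```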

Finally, we need to stay vigilant in questioning the decisions AI makes, and help lawmakers develop standards that prevent blind trust in computer-generated outputs, which could have the disastrous effect of turning aspiring engineers into horrible cooks.


