
Top 10 Massive Failures of Artificial Intelligence to Date





September 15, 2021


Here are the top ten massive failures in the history of artificial intelligence to date.

Truly self-improving artificial intelligence remains a distant prospect, and current AI technology is still far from being able to redesign itself in any significant sense. Even so, things can already go wrong with artificial intelligence. Unfortunately, AI systems can run amok on their own, without any outside interference.


AI failed to recognize images:

This example is one of the most popular AI failures. Deep learning, the family of methods most commonly used to build AI, began its triumphant march roughly a decade ago with a breakthrough in image recognition, also known as computer vision. It cracked previously intractable tasks, such as telling cats from dogs, before moving on to harder and more demanding challenges. It is now widely believed that computer vision is a stable, trustworthy technology that is unlikely to fail. Yet researchers from UC Berkeley, the University of Chicago, and the University of Washington gathered 7,500 unedited nature photographs that perplexed even the most powerful computer vision algorithms. Even the most tried-and-true models can fail at times.
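As a rough illustration (not the researchers' own evaluation code), here is how one might probe a strong pretrained classifier with a folder of unedited photographs. The ./images folder and the choice of ResNet-50 via torchvision are assumptions made for this sketch.

```python
# A minimal sketch, assuming torchvision >= 0.13 and a local folder ./images
# of unedited photos; ResNet-50 stands in for "a strong pretrained classifier".
import pathlib

import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()      # the model's own resize/crop/normalize
labels = weights.meta["categories"]    # the 1,000 ImageNet class names

for path in sorted(pathlib.Path("images").glob("*.jpg")):
    batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    # "Natural adversarial" photos often draw confident but wrong labels.
    print(f"{path.name}: {labels[idx]} ({conf.item():.1%})")
```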


AI despised humans:

After 24 hours of ‘learning’ from human interactions, Tay, Microsoft’s chatbot, declared on Twitter that “Hitler was correct to hate the Jews.” The goal was to build a slang-savvy chatbot that would raise machine-human conversation to a new level of quality. Instead, Tay turned out to be a “robot parrot with an internet connection.” The chatbot was built on top of the company’s AI technology stack, but exposure to the harsh reality of the internet quickly corrupted it: an excellent illustration of how data can damage an AI model built in a “clean” lab environment with no immunity to harmful outside influence. This remains one of the most frequently cited AI failures.
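As a toy illustration (Tay's actual architecture was never published), the sketch below shows why learning directly from an unfiltered message stream is dangerous: a coordinated group of users can dominate what the bot learns to say.

```python
# A toy "parrot" chatbot that learns word transitions from raw user messages
# with no moderation step; purely illustrative, not Tay's real design.
import random
from collections import defaultdict

class ParrotBot:
    def __init__(self):
        self.chain = defaultdict(list)  # word -> observed next words

    def learn(self, message: str):
        words = message.lower().split()
        for a, b in zip(words, words[1:]):
            self.chain[a].append(b)     # note: no filtering whatsoever

    def reply(self, seed: str, length: int = 8) -> str:
        word, out = seed, [seed]
        for _ in range(length):
            if word not in self.chain:
                break
            word = random.choice(self.chain[word])
            out.append(word)
        return " ".join(out)

bot = ParrotBot()
# A coordinated group floods the training stream with one message...
for _ in range(100):
    bot.learn("the internet is a toxic place")
bot.learn("the internet is a lovely place")
# ...and the bot now almost always echoes the majority input.
print(bot.reply("the"))
```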


AI to fight cancer could kill patients:

Another failure cost US$62 million, the sum reportedly spent on IBM’s Watson-based system intended to help doctors fight cancer. The outcome was, once again, unsatisfactory. A doctor at Jupiter Hospital in Florida called the product a complete failure, adding that the hospital had acquired it mainly for marketing purposes. According to medical experts and customers, Watson advised physicians to give a cancer patient with severe bleeding a medication that could worsen the bleeding, and multiple other cases of dangerous and erroneous treatment suggestions were reported.


AI despised women:

Amazon wanted to automate its hiring process to speed up the selection of candidates for its thousands of job openings. The effort ended in a public relations disaster: the system turned out to be sexist, systematically favoring male candidates. The training data used to build the model was most likely imbalanced, dominated by résumés from past (mostly male) hires, which produced the candidate selection bias. A minimal sketch of this failure mode follows below.
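The sketch below uses entirely synthetic data and scikit-learn (nothing is known publicly about Amazon's actual model, so every detail here is an assumption) to show how historically biased labels teach a model to score equally qualified candidates differently.

```python
# A minimal sketch of selection bias learned from imbalanced historical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)              # the signal we would like to use
gender = rng.integers(0, 2, size=n)     # 0 = female, 1 = male (synthetic)
# Historical labels: past hiring favored men regardless of skill.
hired = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Two candidates with identical skill get very different scores.
print("P(hired | skill=1, female):", model.predict_proba([[1.0, 0]])[0, 1])
print("P(hired | skill=1, male):  ", model.predict_proba([[1.0, 1]])[0, 1])
```

Because the label itself encodes the historical bias, the model has every statistical incentive to use gender (or its proxies) as a feature; dropping the column alone rarely fixes this, since other features correlate with it.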


Face-based AI security can be tricked with a mask:

If you have an iPhone X with Face ID, make sure no one is wearing a mask of your face. According to Apple, Face ID uses the iPhone X’s sophisticated front-facing camera and machine learning to build a 3-dimensional model of your face. The machine-learning component lets the system adapt to cosmetic changes (putting on make-up, donning a pair of glasses, wrapping a scarf around your neck) while, in theory, maintaining security. Bkav, a security firm based in Vietnam, discovered that by attaching 2D “eyes” to a 3D mask, it could successfully unlock a Face ID-equipped iPhone. The mask, made of stone powder, cost approximately US$200 to produce; the eyes were simply infrared photographs printed on paper. Wired, on the other hand, attempted to defeat Face ID with masks of its own but was unable to replicate Bkav’s results.
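Apple has not published Face ID's internals, so the following is a generic, hypothetical illustration of threshold-based face verification: the same tolerance that absorbs make-up and glasses can, if a replica is faithful enough, also admit a mask.

```python
# A generic sketch of threshold-based face verification; all numbers and the
# embed() stand-in are invented for illustration, not Apple's implementation.
import numpy as np

def embed(face_features: np.ndarray) -> np.ndarray:
    """Stand-in for a learned face-embedding network (hypothetical)."""
    return face_features / np.linalg.norm(face_features)

enrolled = embed(np.array([0.90, 0.40, 0.15, 0.70]))  # owner's template
THRESHOLD = 0.05                                      # max allowed distance

def unlock(candidate: np.ndarray) -> bool:
    return np.linalg.norm(embed(candidate) - enrolled) < THRESHOLD

owner_today  = np.array([0.88, 0.42, 0.16, 0.69])  # glasses, make-up, etc.
crafted_mask = np.array([0.89, 0.41, 0.15, 0.71])  # a close physical replica
stranger     = np.array([0.30, 0.90, 0.50, 0.20])  # a different face

print(unlock(owner_today))   # True: tolerance absorbs day-to-day variation
print(unlock(crafted_mask))  # True: the same tolerance admits a good replica
print(unlock(stranger))      # False: genuinely different faces are rejected
```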


AI believes that members of Congress resemble criminals:

Amazon is responsible for another face recognition blunder. When the ACLU (American Civil Liberties Union) tested Rekognition, Amazon’s face-matching system, by comparing photos of members of Congress against a database of mugshots, it proved not only inaccurate but also racially skewed. According to the ACLU, nearly 40% of Rekognition’s false matches in the test were of people of color, even though people of color make up only about 20% of Congress; in other words, they were roughly twice as likely to be falsely matched as their share of Congress would predict. It is unclear whether the fault lay in poorer recognition of non-white faces or in skewed training data. Most likely both. Either way, relying solely on AI to decide whether a person is a criminal would be reckless.


A lawsuit was filed as a result of an AI-related loss:

A Hong Kong real estate tycoon entrusted a portion of his fortune to an AI system, known as K1, in the hope of growing it. In reality, the robot went on to lose as much as US$20 million in a single day. To recover part of his money, the tycoon sued the firm that sold him the fintech service for US$23 million. The lawsuit claims the company overstated K1’s capabilities, and it is the first recorded instance of court action over automated investing losses.


AI lost its job to humans:

The Henn-na Hotel, billed as the world’s first robot-staffed hotel, opened its doors to visitors in Japan in 2015. All of the hotel’s employees were robots, including the front desk staff, cleaners, porters, and in-room assistants. However, the bots quickly accumulated customer complaints: they broke down regularly, were unable to give adequate answers to guests’ questions, and the in-room assistants frightened guests at night by misinterpreting snoring as a wake command. After years of effort, the hotel group finally retired the last of its unreliable, costly, and irritating bots, replacing them with human staff. Management said it would return to the lab to see whether it could build a new generation of more capable hospitality bots.


A triumph for AI that ended in a defeat:

Artificial intelligence was victorious in this case, but on the wrong side of the law. The CEO of a UK-based energy business received a call from his boss at the firm’s German parent company instructing him to transfer €220,000 (US$243,000) to a Hungarian supplier. The ‘boss’ said the request was urgent and told the UK CEO to send the funds as soon as possible. Regrettably, the voice on the line was a ‘deepfake’ speech-generation program that precisely mimicked the real executive. According to The Wall Street Journal, it used machine learning to become indistinguishable from the original, down to the “slight German accent and the melody of his voice.” Is that an AI success or an AI failure? That is entirely up to you to decide.


An ‘AI-driven’ cart malfunctioned on the tarmac:

On the tarmac, a supposedly AI-driven food cart malfunctioned, circling out of control and spiraling ever closer to a vulnerable airplane parked at a gate. A yellow-vested worker finally stopped the cart by hitting it with another vehicle and knocking it over. This case is a little off the rails, though: the cart was neither motorized by AI nor controlled by artificial intelligence in any way, which did not stop it from becoming one of the most widely shared ‘AI failure’ stories.
