Google’s artificial intelligence tool fails real-world test


An artificial intelligence (AI) tool developed by Google failed during real-world testing. It was supposed to detect signs of an eye disease that can lead to blindness.

Many in the medical field have touted the help brought by AI tools. These tools are commonly used to screen for many ailments, and when properly trained, they can deliver highly accurate diagnoses.

Google Health developed a deep learning AI that scans images of the eye for evidence of diabetic retinopathy, one of the leading causes of blindness around the world.

Google claims that the AI tool was properly trained. The tech giant added that when it was tested in controlled environments, it returned accurate results. However, that accuracy did not carry over to real-world tests.

What went wrong

Google researchers and medical experts tested the tool in clinics in Thailand. The research and tests were conducted over the span of eight months, with the team gathering data from patients at a total of 11 clinics.

Despite exhibiting high accuracy in the lab, the tool failed when tested in the real world. The negative result frustrated both patients and researchers, and it raised questions about the effectiveness of AI tools in real-world applications.

According to Google, one of the reasons the tool failed was environmental factors. The tech giant said that factors such as room lighting can have a direct impact on the quality of images.


Experienced, trained clinical technicians can adjust to these environmental factors on the spot. AI tools, on the other hand, must be explicitly trained to handle such situations.

Google added that lighting had a significant impact on the images the tool gathered. In some instances, captured images contained dark areas and blur. The tool interpreted these images as “ungradable,” which affected its accuracy.
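To illustrate the idea, a screening pipeline can gate out blurry or underexposed images before diagnosis using a simple sharpness score. The sketch below is not Google’s actual pipeline; the function names and the threshold are illustrative assumptions, using the variance of a basic Laplacian filter as a blur measure.

```python
# Minimal sketch (not Google's method): flag a grayscale image as
# "ungradable" when its Laplacian variance suggests it is too blurry.

def laplacian_variance(image):
    """Variance of a 4-neighbour Laplacian over a 2D list of pixels.

    Sharp images produce strong edge responses (high variance);
    flat or blurry images produce weak ones (low variance).
    """
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Centre pixel times 4, minus its four neighbours.
            lap = (4 * image[y][x]
                   - image[y - 1][x] - image[y + 1][x]
                   - image[y][x - 1] - image[y][x + 1])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def is_ungradable(image, threshold=50.0):
    """Reject images whose sharpness score falls below the threshold.

    The threshold value is a hypothetical example; a real system would
    calibrate it against graded clinical images.
    """
    return laplacian_variance(image) < threshold

# A featureless patch (as from a dark, blurred capture) is rejected;
# a patch with a hard edge passes.
flat = [[128] * 8 for _ in range(8)]
edged = [[0] * 4 + [255] * 4 for _ in range(8)]
print(is_ungradable(flat))   # → True
print(is_ungradable(edged))  # → False
```

In practice, such a gate protects diagnostic accuracy but also explains the frustration reported in Thailand: every rejected image means a patient must be re-photographed or referred elsewhere.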

What does this mean for AI

Google’s findings could help experts learn how to train better AI tools. Despite the failure in real-world tests, Google maintains that the lessons learned throughout the trial are invaluable to the future of AI.

The tech giant added that the problem was not with the artificial intelligence tool itself. One of the reasons it failed was that its developers did not train it for the variety of real-world conditions. Though the AI tool did produce some valuable outputs, Google said it has to be perfectly accurate before it can be adopted further.

Image courtesy of Hitesh Choudhary/Unsplash






