Now, the following may seem like a tangent, but it highlights a considerable flaw that artificial intelligence has been known to carry, and one that may underpin the fantastic statistics that AI-based studies have published in recent years. Nearly three years ago, Amazon’s machine learning division shut down a long-running project that reviewed job applicants through their submitted data, scoring candidates to streamline the selection process used by most modern recruitment departments around the globe, supposedly removing bias and effort from the mix. Sounds great! However, like most early projects, especially those intended to imitate the human mind and its decision-making, the near-finalised system had a significant issue: it had not learned to distinguish the best candidates, but had instead effectively become a misogynist. This sounds rather odd, but the decade of applications the system analysed was dominated by male applicants, and it concluded that men were far more apt for the jobs it was used to select for. It is hence no surprise that Amazon closed up shop on this disappointing project, but as we see throughout history, the acknowledgement of mistakes can be among humanity’s most valuable assets.

The lesson carries over to medicine. The data sets used to train AI models often cover very limited demographics, with an unusually high prevalence of the disease in question and little regard for the genetic variation that arises from the sheer scope of human settlement throughout the world. Before we deploy these systems, prolific testing awaits artificial intelligence to see how it copes with the genetic oddities of a truly diverse population, compared with its training data set. As Philips Research vice president Hans Hofstraat puts it: “Data is worthless if you don’t know how to use it. The data that you are gathering in Germany will not necessarily have any value in developing an application in Kenya. You need to have Kenyan data, because the data from elsewhere are not useful for the healthcare system at hand.”
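To see why a skewed study cohort can inflate reported accuracy, here is a minimal, entirely hypothetical sketch in plain Python. It assumes a made-up cohort generator and a deliberately naive "model" that only learns the majority class; the prevalence figures are invented for illustration, not drawn from any real study.

```python
import random

random.seed(0)

def make_cohort(n, prevalence):
    """Generate hypothetical patient labels: 1 = disease, 0 = healthy."""
    return [1 if random.random() < prevalence else 0 for _ in range(n)]

# Hypothetical study cohort in which the disease is over-represented (80%).
train = make_cohort(1000, prevalence=0.80)

# A naive "model" that simply learns the majority class of its training data.
majority = 1 if sum(train) > len(train) / 2 else 0

def accuracy(labels, prediction):
    """Fraction of patients the constant prediction gets right."""
    return sum(1 for y in labels if y == prediction) / len(labels)

# Looks impressive on data drawn from the same biased distribution...
high = accuracy(make_cohort(1000, 0.80), majority)  # roughly 0.8

# ...but collapses on a general population where prevalence is 5%.
low = accuracy(make_cohort(1000, 0.05), majority)  # roughly 0.05
```

The point is not that real diagnostic models are this crude, but that a headline accuracy figure is only meaningful relative to the population it was measured on, which is exactly Hofstraat's warning about German data and Kenyan healthcare.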
Only time will tell, but I wish all researchers the best of luck – and implore you to pursue a genuine interest in humanity in your studies rather than competing solely for the highest percentages when testing on highly biased samples. This may well not be the case, but the competitive nature of research can often command desperate measures. Artificial intelligence has a truly massive part to play in the future of healthcare, but honest and co-operative research is what will bring it to the clinical standard that our wonderful doctors work so hard to meet.
Article thumbnail credit: Organisation image, Medical Ethics Society at the University of Exeter Students Guild [click here for page]. Note that this page has not contributed to any of the written content of this article, and that you click the above link at your own discretion, as the page has not been checked by our team. Thank you to our readers for your continued support, and to everyone who contributes to our fantastic team to keep our services running in an orderly fashion.