Tesla boss Elon Musk has sounded dire warnings about the dangers of artificial intelligence (AI) on many occasions. He differentiates, of course, between narrow (task-specific) applications of machine intelligence, such as automated cars, and general machine intelligence. The latter, he says, is more dangerous than a nuclear warhead and needs stringent regulatory curbs because it poses a species-level risk.
In 2018, Harvard Professor Steven Pinker criticised Musk’s grave prognosis about AI. ‘If Elon Musk was really serious about the AI threat, he’d stop building those self-driving cars, which are the first kind of AI we’re going to see,’ he said.
Pinker was prophetic in implying that self-driving cars could cause harm, even as he warned that unrealistic apocalyptic scenarios would hurt society. Musk, who also founded Neuralink, a brain-computer interface company, has deep exposure to cutting-edge AI: Neuralink seeks to connect the brain wirelessly, via an implanted chip, to the digital world so that humans can keep pace with AI. From that vantage, Musk was warning that general AI could be a risk to human civilisation while downplaying the dangers of narrow AI, exemplified by automated cars. Both men were right, and both were wrong.
It was a seminal moment when Google DeepMind's AlphaGo mastered Go, the ancient Chinese game long thought to rest on intuition, and beat world champion Lee Sedol in 2016. Go has far too many possible positions to be solved by brute force; the victory demonstrated that a computer could learn the game, through deep neural networks trained partly by playing against itself, perhaps better than humans could teach it.
In 2018, Google DeepMind researchers trained a neural network to navigate a virtual maze; the network spontaneously developed the digital equivalent of the grid cells that mammals use to navigate. Nobel laureate Edvard Moser remarked that it was striking that a computer model could mimic a grid pattern from biology. What was remarkable was that the grid cell-related code was generated by the system itself. The barrier of dependence on human knowledge had been broken.
That AI is not limited by human ability is hard to stomach, but that hasn't stopped us from harnessing it for our benefit. Vaccine development traditionally takes many years; for Covid-19, that time was cut to months. AI was used to analyse mountains of data and identify what would produce the best immune response, revolutionising vaccine production.
A doctor at Tata Memorial Hospital (TMH), Mumbai, collaborated on a surgery in London. Though in Mumbai, the doctor was virtually present in the London operating theatre: he could see all the patient's test reports and scans and converse with the other doctors as if he were there in person. They used Microsoft HoloLens, which blends digital content with the real world using sensors, advanced optics and holographic processing, to connect virtually and to process and display information. Reportedly, TMH has performed the largest number of rectal, pancreatic and gastrointestinal robotic surgeries.
Sales returns for online fashion can run as high as 80%. Bangalore-based Bigthinx used virtual reality and AI to address this. Smartphone photographs feed a neural network that builds a digital persona of the customer from 40-plus body measurements, which is then used for virtual trials to find the right fit. Returns, the company says, are down 40-70%.
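The core idea of measurement-based fitting can be illustrated with a minimal sketch: match a customer's estimated measurements to the nearest entry in a size chart. The size chart, measurements and matching rule below are illustrative assumptions, not Bigthinx's actual data or algorithm.

```python
from math import sqrt

# Hypothetical size chart: size -> (chest, waist, hip) in cm.
SIZE_CHART = {
    "S": (88, 72, 94),
    "M": (96, 80, 102),
    "L": (104, 88, 110),
}

def best_fit(measurements):
    """Return the size whose chart entry is closest, by Euclidean
    distance, to the customer's estimated body measurements."""
    def distance(chart_entry):
        return sqrt(sum((a - b) ** 2 for a, b in zip(measurements, chart_entry)))
    return min(SIZE_CHART, key=lambda size: distance(SIZE_CHART[size]))

print(best_fit((95, 79, 100)))  # closest to the "M" row above
```

In practice, a production system would weigh dozens of measurements, per-garment cut and fabric stretch, but the principle of reducing a body scan to numbers and matching them against known sizes is the same.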
The US-based AI-powered investing platform Betterment has created robo-advisers that build a profile of the investor and use algorithms for trading and portfolio management, harvesting tax losses and optimising returns. Betterment has $32 billion (₹2.41 lakh crore) of assets under management and serves 650,000 customers.
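One routine robo-adviser task is rebalancing a portfolio back to target weights. The sketch below shows that idea in its simplest form; the holdings, prices and 60/40 split are invented for illustration, and real platforms layer tax-loss harvesting and trade execution on top of this.

```python
def rebalance_orders(holdings, prices, target_weights):
    """Compute the buy (+) or sell (-) quantity, in units, that moves
    each asset to its target share of total portfolio value."""
    total = sum(holdings[asset] * prices[asset] for asset in holdings)
    orders = {}
    for asset, weight in target_weights.items():
        target_units = (total * weight) / prices[asset]
        orders[asset] = round(target_units - holdings[asset], 4)
    return orders

holdings = {"stocks": 10, "bonds": 50}     # units currently held
prices = {"stocks": 100.0, "bonds": 20.0}  # price per unit
# Total value is 2000; a 60/40 split means 1200 in stocks, 800 in bonds,
# so the adviser buys 2 units of stocks and sells 10 units of bonds.
print(rebalance_orders(holdings, prices, {"stocks": 0.6, "bonds": 0.4}))
```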
AI combines natural language processing (NLP), speech-to-text, computer vision, audio and sensor processing, machine learning (ML) and expert systems. We must ask ourselves: what happens when these systems use deep learning and reinforcement learning to write code unaided? What happens when such powerful AI systems start taking decisions autonomously? Who will be liable when, for instance, an autonomous car crashes into a human being?
According to Accenture, AI has the potential to add $957 billion (₹72 lakh crore) to India's economy in 2035. The jury, however, is still out on whether the prospect of AI-powered systems taking over the world is mere exaggeration.
Some believe that regulating a technology too early could kill it. The economic and social benefits of AI can't be ignored; neither can the existential risks. The Global Partnership on Artificial Intelligence (GPAI), co-founded by India and 14 other countries, has put the human-centric development and use of AI at its core. A small but astute beginning.