The creators of viral chatbot ChatGPT fear that artificial intelligence (AI) systems could pose a threat to humanity as they get smarter.
In a blog post published this week, OpenAI announced that it is forming a new team to focus on the risks associated with ‘superintelligent’ AI.
The team will be led by the company’s co-founder and chief scientist Ilya Sutskever and by Jan Leike, the head of OpenAI’s alignment research, which focuses on the long-term risks of AI.
While AI could help solve many of the world’s most pressing problems, superintelligent AI ‘could lead to the disempowerment of humanity or even human extinction’, the authors wrote.
OpenAI believes that although ‘superintelligent’ AI seems far off, it could arrive within the decade.
A superintelligent AI would be smarter than any human, and it could potentially pose a serious threat to our existence. Think Skynet, the system in the sci-fi thriller The Terminator that becomes self-aware and decides to wipe out humanity.
‘Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,’ Sutskever and Leike wrote in the blog post.
The new team — called Superalignment — plans to develop AI with human-level intelligence that can supervise superintelligent AI within the next four years.
So basically, a Terminator-style team of ‘good’ AI to prevent evil AI from killing us all.
If you fancy yourself a future Sarah Connor, OpenAI is currently hiring for the team. According to the blog post, the company plans to dedicate 20% of its computing power to this research.
Hopefully, that’s enough to stop the rise of the machines.
Sam Altman, the CEO of OpenAI, was among the AI experts who recently signed a statement calling for mitigating ‘the risk of extinction from AI’ to be made a global priority.
Dr Geoffrey Hinton, widely regarded as the ‘godfather of artificial intelligence’, who recently resigned from Google to warn of the dangers of the technology, is also among the prominent signatories.
In April, an open letter signed by Elon Musk and other AI experts called for a six-month pause in the development of more powerful systems, citing potential risks to society and humanity.