
Everything you wanted to know about AI – but were afraid to ask


Barely a day goes by without some new story about AI, or artificial intelligence. The excitement about it is palpable – the possibilities, some say, are endless. Fears about it are spreading fast, too.

Much of the coverage assumes a level of knowledge and understanding about AI that can be bewildering for people who have not followed every twist and turn of the debate.

So, the Guardian’s technology editors, Dan Milmo and Alex Hern, are going back to basics – answering the questions that millions of readers may have been too afraid to ask.

What is artificial intelligence?

The term is almost as old as electronic computers themselves, coined back in 1955 by a team of researchers including John McCarthy and the legendary computer scientist Marvin Minsky.

In some respects, it is already in our lives in ways you may not realise. The special effects in some films and voice assistants such as Amazon’s Alexa use simple forms of artificial intelligence. But in the current debate, AI has come to mean something else.

It boils down to this: most old-school computers do what they are told. They follow instructions given to them in the form of code. But if we want computers to solve more complex tasks, they need to do more than that. To be smarter, we are trying to train them to learn in a way that imitates human behaviour.

Computers cannot be taught to think for themselves, but they can be taught how to analyse information and draw inferences from patterns within datasets. And the more you give them – computer systems can now cope with truly vast amounts of information – the better they should get at it.

The most successful versions of machine learning in recent years have used a system known as a neural network, which is modelled at a very simple level on how we think a brain works.
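
For readers who want to see the nuts and bolts, here is a toy sketch of that idea in Python: a tiny network of simulated “neurons” learns a simple pattern (XOR: output 1 when exactly one of the two inputs is 1) purely from examples. The network size, training data and learning rate are illustrative choices, not any real product’s design.

```python
# A toy neural network in plain Python/NumPy that learns the XOR pattern.
# Real systems chain millions or billions of these simple units together,
# but the principle is the same: nudge the weights to reduce the error.
import numpy as np

rng = np.random.default_rng(0)

# Four training examples and the answers we want the network to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised connection strengths ("weights") and biases.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate: how big each corrective nudge is
for step in range(10_000):
    # Forward pass: push the inputs through the network.
    h = sigmoid(X @ W1 + b1)       # hidden-layer activations
    out = sigmoid(h @ W2 + b2)     # the network's current guesses

    # Backward pass: adjust every weight slightly to shrink the error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(out.round(2))  # typically close to [[0], [1], [1], [0]] after training
```

Each pass through the loop nudges the connection strengths so the network’s guesses drift towards the right answers – the same learning-from-examples principle, at a vastly smaller scale, that underpins the systems in the news.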

What are the different types of artificial intelligence?

With no strict definition of the phrase, and the lure of billions of dollars of funding for anyone who sprinkles AI into pitch documents, almost anything more complex than a calculator has been called artificial intelligence by someone.

There is no easy categorisation of artificial intelligence and the field is growing so quickly that even at the cutting edge, new approaches are being uncovered every month. Here are some of the main ones you may hear about:

  • Reinforcement learning
    Perhaps the most basic form of training there is, reinforcement learning involves giving feedback each time the system performs a task, so that it learns from doing things correctly. It can be a slow and expensive process, but for systems that interact with the real world, there is sometimes no better way (a toy example in Python follows this list).

  • Large language models
    These are a type of neural network. Large language models (LLMs) are trained by pouring billions of words of everyday text into them, gathered from sources ranging from books to tweets and everything in between. The LLMs draw on all that material to predict which words and sentences are most likely to follow a given sequence.

  • Generative adversarial networks (GANs)
    This is a way of pairing two neural networks together to make something new. The networks are used in creative work in music, visual art or film-making. One network is given the role of creator while a second is given the role of marker, and the first learns to create things that the second will approve of (a sketch of this training loop also follows the list).

  • Symbolic AI
    There are even AI techniques that look to the past for inspiration. Symbolic AI is an approach that rejects the idea that a simple neural network is the best option, and tries to mix machine learning with more diligently structured facts about the world.
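
To make the reinforcement learning idea concrete, here is a toy example in Python: an agent in a five-square corridor learns, purely from reward feedback, that stepping right gets it to the goal. The corridor, rewards and learning constants are illustrative choices, not any real system’s design.

```python
# Toy reinforcement learning (tabular Q-learning): an agent in a
# five-square corridor learns to reach the right-hand end, guided
# only by a reward it receives when it gets there.
import random

N_STATES = 5              # squares 0..4; square 4 is the goal
ACTIONS = [-1, +1]        # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # learned value of each action

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Mostly pick the action that has worked best so far,
        # but sometimes explore at random.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max(range(2), key=lambda a: Q[state][a])

        next_state = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # The feedback step: move our estimate towards "reward received
        # plus the best we expect from the next square".
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

# After training, the preferred move in every square should be "right".
print([("left", "right")[max(range(2), key=lambda a: Q[s][a])]
       for s in range(N_STATES - 1)])
```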
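
And here is a sketch of the adversarial pairing described above, written with the PyTorch library: the “creator” learns to produce numbers that look as if they were drawn from a bell curve, while the “marker” learns to tell real samples from created ones. The network sizes and training settings are illustrative assumptions, and toy GANs can be temperamental, so treat this as a sketch of the idea rather than a recipe.

```python
# A toy generative adversarial network (GAN): two networks trained
# against each other until the creator's output fools the marker.
import torch
import torch.nn as nn

torch.manual_seed(0)

creator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
marker = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_c = torch.optim.Adam(creator.parameters(), lr=1e-3)
opt_m = torch.optim.Adam(marker.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = torch.randn(64, 1) * 2 + 5      # "real" data: bell curve around 5
    fake = creator(torch.randn(64, 1))     # the creator's current attempts

    # Train the marker: approve real samples, reject the creator's fakes.
    opt_m.zero_grad()
    loss_m = (bce(marker(real), torch.ones(64, 1)) +
              bce(marker(fake.detach()), torch.zeros(64, 1)))
    loss_m.backward()
    opt_m.step()

    # Train the creator: produce samples the marker will approve of.
    opt_c.zero_grad()
    loss_c = bce(marker(fake), torch.ones(64, 1))
    loss_c.backward()
    opt_c.step()

with torch.no_grad():
    samples = creator(torch.randn(1000, 1))
print(samples.mean().item(), samples.std().item())  # should drift towards ~5 and ~2
```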

What is a chatbot?

A chatbot draws on the large language models we have just been looking at. It is trained on a vast amount of information culled from the internet, and responds to text prompts with conversational-style answers.

The most famous example is ChatGPT, developed by OpenAI, a San Francisco-based company backed by Microsoft. Launched as a simple website in November 2022, it rapidly became a sensation, reaching more than 100 million users within two months.

The chatbot gives plausible-sounding – if sometimes inaccurate – answers to questions. It can also write poems, summarise lengthy documents and, to the alarm of teachers, draft essays.

Tell me more about how these chatbots work

The latest generation of chatbots, like ChatGPT, draw on astronomical amounts of material – pretty much the entire written output of humanity, or as much of it as their owners can acquire.

Those systems then try to answer a deceptively simple question: given a piece of text, what comes next?

If the input is: “To be or not to be”, the output is very likely to be: “that is the question”; if it is: “The highest mountain in the world is”, the next words will probably be: “Mount Everest”.
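
You can watch this next-word machinery at work yourself. Here is a minimal sketch in Python using GPT-2, a freely downloadable 2019-era predecessor of the models behind ChatGPT, via the Hugging Face transformers library. The model downloads on first run, and its continuations are cruder than ChatGPT’s, but it is answering exactly the same “what comes next?” question.

```python
# Next-word prediction with a small, openly available language model.
# Requires: pip install transformers torch
from transformers import pipeline

# Load GPT-2, a small predecessor of the models behind ChatGPT.
generator = pipeline("text-generation", model="gpt2")

for prompt in ["To be or not to be,", "The highest mountain in the world is"]:
    # do_sample=False makes the model always pick its single most
    # likely continuation -- the core trick described above.
    result = generator(prompt, max_new_tokens=8, do_sample=False)
    print(result[0]["generated_text"])
```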

But the AI can also be more creative: if the input is a paragraph of vaguely Dickensian prose, then the chatbot will continue in the same way, with the model writing its own ersatz short story in the style of the prompt.

Or, if the input is a series of questions about the nature of intelligence, the output is likely to draw from science fiction novels.

Why do chatbots make errors?

LLMs do not understand things in a conventional sense – and they are only as good, or as accurate, as the information with which they are provided.

They are essentially machines for matching patterns. Whether the output is “true” is not the point, so long as it matches the pattern.

If you ask a chatbot to write a biography of a moderately famous person, it may get some facts right, but then invent other details that sound like they should fit in biographies of that sort of person.

And it can be wrongfooted: ask ChatGPT whether one pound of feathers weighs more than two pounds of steel and it will tend to focus on the fact that the question looks like the classic trick question, failing to notice that the numbers have been changed.

Google’s rival to ChatGPT, called Bard, had an embarrassing debut in February 2023, when a video demo of the chatbot showed it giving the wrong answer to a question about the James Webb space telescope.

Which brings us to growing concern about the amount of misinformation online – and how AI is being used to generate it.

What is a deepfake?

Deepfake is the term for a sophisticated hoax that uses AI to create phoney images and videos, particularly of people. There are some noticeably amateurish examples, such as a fake Volodymyr Zelenskiy calling on his soldiers to lay down their weapons last year, but there are eerily plausible ones, too. In 2021 a TikTok account called DeepTomCruise posted AI-created clips of a faux Tom Cruise playing golf and pratfalling around his house. ITV has released a sketch show made up of celebrity deepfakes, including Stormzy and Harry Kane, called Deep Fake Neighbour Wars.

In the audio world, a startup called ElevenLabs admitted its voice-creation platform had been used for “voice cloning misuse cases”. This followed a report that it had been used to create deepfake audio versions of Emma Watson and Joe Rogan spouting abuse and other unacceptable material.

Experts fear a wave of disinformation and scams as the technology becomes more widely available. Potential frauds include personalised phishing emails – which attempt to trick users into handing over data such as login details – produced at mass scale, and impersonations of friends or relatives.

“I strongly suspect there will soon be a deluge of deepfake videos, images, and audio, and unfortunately many of them will be in the context of scams,” says Noah Giansiracusa, an assistant professor of mathematical sciences at Bentley University in the US.

Can AI pose a threat to human life and social stability?

The dystopian fears about AI are usually represented by a clip from The Terminator, the Arnold Schwarzenegger film starring a near-indestructible AI-robot villain. Clips on social media of the latest machinations from Boston Dynamics, a US-based robotics company, are often accompanied by jokey comments about a looming machine takeover.

Elon Musk, a co-founder of OpenAI, has described the danger from AI as “much greater than the danger of nuclear warheads”, while Bill Gates has raised concerns about AI’s role in weapons systems. The Future of Life Institute, an organisation researching existential threats to humanity, has warned of the potential for AI-powered swarms of killer drones, for instance.

More prosaically, there are also concerns that unseen glitches in AI systems will lead to unforeseen crises in, for instance, financial trading.

As a result of these fears, there are calls for a regulatory framework for AI, supported even by arch-libertarians such as Musk, whose main concern is not “short-term stuff” like improved weaponry but “digital super-intelligence”. Kai-Fu Lee, a former president of Google China and an AI expert, told the Guardian that governments should take note of concerns among AI professionals about the military implications.

He said: “Just as chemists spoke up about chemical weapons and biologists about biological weapons, I hope governments will start listening to AI scientists. It’s probably impossible to stop it altogether. But there should be some ways to at least reduce or minimise the most egregious uses.”

Will AI take our jobs?

In the short term, some experts believe AI will enhance jobs rather than take them, although even now there are obvious impacts: an app called Otter has made transcription a difficult profession to sustain; Google Translate makes basic translation available to all. According to a study published this week, AI could slash the amount of time people spend on household chores and caring, with robots able to perform about 39% of domestic tasks within a decade.

For now the impact will be incremental, although it is clear that white-collar jobs will be affected in the future. Allen & Overy, a leading UK law firm, is looking at integrating tools built on GPT into its operations, while publishers including BuzzFeed and the Daily Mirror owner Reach are looking to use the technology, too.

“AI is certainly going to take some jobs, in just the same way that automation took jobs in factories in the late 1970s,” says Michael Wooldridge, a professor of computer science at the University of Oxford. “But for most people, I think AI is just going to be another tool that they use in their working lives, in the same way they use web browsers, word processors and email. In many cases they won’t even realise they are using AI – it will be there in the background, working behind the scenes.”

If I want to try examples of AI for myself, where should I look?

Microsoft’s Bing Chat and OpenAI’s ChatGPT are the two most advanced free chatbots on the market, but both are being overwhelmed by the weight of interest: Bing Chat has a long waitlist, which users can sign up for through the company’s app on iOS and Android, while ChatGPT is occasionally offline for non-paying users.

To experiment with image generation, OpenAI’s DALL-E 2 is free for a small number of images a month, while more advanced users can join the Midjourney beta through the chat app Discord.

Or you can use the wide array of apps already on your phone that invisibly use AI, from the translation apps built into iOS and Android, through the search features in Google and Apple’s Photos apps, to the “computational photography” tools that use neural network-based image processing to touch up photos as they are taken.


