
GPT-3: an AI game-changer or an environmental disaster?

John Naughton
Unless you’ve been holidaying on Mars, or perhaps in Spain (alongside the transport secretary), you will have noticed some fuss on social media about something called GPT-3. The GPT bit stands for the “generative pre-training” of a language model that acquires knowledge of the world by “reading” enormous quantities of written text. The “3” indicates that this is the third generation of the system.

GPT-3 is a product of OpenAI, an artificial intelligence research lab based in San Francisco. In essence, it’s a machine-learning system that has been fed (trained on) 45 terabytes of text data. Given that a terabyte (TB) is a trillion bytes, that’s quite a lot. Having digested all that stuff, the system can then generate all sorts of written content – stories, code, legal jargon, poems – if you prime it with a few words or sentences.
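For the technically curious, here is a minimal sketch of what “priming” looks like in practice, written against OpenAI’s Python client as it worked at the time of writing. The engine name, prompt and parameters are illustrative assumptions, not a recipe blessed by OpenAI.

```python
import openai  # OpenAI's Python client library

openai.api_key = "YOUR_API_KEY"  # assumes you have been granted API access

# Prime the model with a few words; it generates a continuation.
response = openai.Completion.create(
    engine="davinci",    # illustrative engine name
    prompt="Once upon a time, in a data centre far away,",
    max_tokens=60,       # cap the length of the generated text
    temperature=0.7,     # higher values allow more creative variation
)
print(response.choices[0].text)
```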

One explanation for the current excitement is that the machine strikes people as being rather good at “writing” original-sounding stuff. Recently, for example, someone primed it with a quote from an aphoristic blogger that went something like this: “I think I find information suicide (is there a better term for this that’s not so depressing? identity switching?) interesting for the same reasons that other people find longevity interesting.”

In response, GPT-3 produced a blog post that began: “I get it. But I’ve always been more interested in the other lever; resets. There’s little reason identity should persist across 80 (or 200) years. French Foreign Legionnaires and cheating husbands have always presumed new identities. Identity persistence has only recently happened as a result of strong government record-keeping and centralisation. If we are going to pursue biological longevity – we should allow a diversity of lives to be lived… So maybe it’s just that I don’t see why our first lives should be the only ones that we can explore. Maybe we can have lives for different spaces in our limited time. Or maybe this is just wishful thinking on my part?”

Opinions can legitimately vary on whether this is profound or even interesting. One reason for the excitement surrounding the system is speculation about its ability to do work that humans currently do – and therefore to replace them. But what is certain is that, given that the data from which GPT-3 absorbed its “knowledge” included an awful lot of stuff from the internet, including Google Books, Wikipedia and programming manuals, its outputs will reflect the biases implicit in that data: GIGO (garbage in, garbage out) and all that.

Joseph Weizenbaum, who created the Eliza program. Photograph: Getty

Another reason for the excitement is that humans have always been fascinated by machines that appeared to be able to respond intelligently to what we say to them. In the mid-1960s, for example, the computer scientist Joseph Weizenbaum wanted to demonstrate the superficiality of human-machine interactions. So he wrote a program called Eliza that used pre-written scripts to respond to inputs. The most famous script, Doctor, simulated a Rogerian psychotherapist – ie, one who simply parroted back at patients what they’d just said. Poor Weizenbaum, a gentle and innocent soul, was then astonished to find people apparently having serious consultations with Eliza. And her popularity has endured, as a web search for “elizabot” will confirm.
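To see how little machinery that required, here is a toy, Eliza-flavoured responder in Python: a handful of hand-written patterns that parrot the user’s words back as questions. It is a sketch of the general idea, not Weizenbaum’s original program.

```python
import re

# A few Rogerian-style scripts: match a pattern, reflect it back.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(text):
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no script matches

print(respond("I am worried about my job"))
# -> Why do you say you are worried about my job?
```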

The apparent plausibility of GPT-3’s performance has led – again – to fevered speculation about whether this means we have taken a significant step towards the goal of artificial general intelligence (AGI) – ie, a machine that has the capacity to understand or learn any intellectual task that a human being can. Personally, I’m sceptical. The basic concept of the GPT approach goes back to 2017, and although it’s a really impressive achievement to train a system this big and capable, it looks more like an incremental improvement on its predecessors than a dramatic conceptual breakthrough. In other words: start with a good idea, then apply more and more computing power and watch how performance improves with each iteration.
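The published parameter counts of the series make the point about scale concretely; the snippet below simply prints the figures reported by OpenAI for each generation.

```python
# Publicly reported parameter counts for the GPT series, showing
# scale-up rather than conceptual change between generations.
params = {
    "GPT (2018)": 117e6,     # ~117 million parameters
    "GPT-2 (2019)": 1.5e9,   # ~1.5 billion
    "GPT-3 (2020)": 175e9,   # 175 billion
}
for name, n in params.items():
    print(f"{name}: {n / 1e9:.3g} billion parameters")
```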

Which raises another question: given that this kind of incremental improvement is made possible only by applying more and more computing power to the problem, what are the environmental costs of machine-learning technology? At the moment the only consensus seems to be that it’s a very energy-intensive activity, but exactly how large its environmental footprint is remains a mystery. This may be partly because it’s genuinely difficult to measure, but it may also be partly because the tech industry has no incentive to inquire too deeply into it, given that it has bet the ranch on the technology.
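To see why, it helps to look at what even a back-of-envelope estimate requires. Every figure below is an illustrative assumption, not a measurement of GPT-3; the point is that the inputs that matter (hardware count, power draw, running time, data-centre overheads) are exactly the ones that are rarely published.

```python
# Back-of-envelope training-energy estimate. All inputs are
# illustrative assumptions, not measured figures for GPT-3.
gpus = 1_000          # assumed number of accelerators
watts_per_gpu = 300   # assumed average power draw, in watts
days = 30             # assumed training duration
pue = 1.5             # assumed data-centre overhead (power usage effectiveness)

kwh = gpus * watts_per_gpu * 24 * days * pue / 1_000
print(f"Estimated training energy: {kwh:,.0f} kWh")
# ~324,000 kWh under these assumptions. Vary any input by a factor
# of a few and the answer swings accordingly, which is why published
# footprint figures are so scarce and so contested.
```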

But those of us with slightly longer memories will recall the bravado of the Bitcoin and blockchain crowd a few years ago – until someone discovered that Bitcoin mining was consuming as much electricity as a small country. GPT-3 and machine learning may be very impressive (not to mention profitable for tech giants), but sooner or later shouldn’t we be asking whether the planet can afford it?

What I’ve been reading

Regulating technology
Based on the premise that “tech has eaten the world”, Benedict Evans’s very thoughtful blog post looks at different regulatory cultures around the globe.

Last responders
Great reporting in the Texas Tribune on the workers who have to pick up the thousands of bodies of those who have died from Covid-19.

You and your research
A wonderful 1986 lecture by Richard W Hamming, the American mathematician who pioneered coding theory.


