Lana Hart is a Christchurch-based writer, broadcaster and tutor
OPINION: Eerie news about artificial intelligence (AI) and a moving funeral I attended combined to make me think about the rules of conduct in the world of humans.
Last week, a Google employee claimed an AI chatbot he worked with had become sentient – that the machine had realised it was a conscious being and could experience human emotions, transcending the realm of advanced computational machinery.
Transcripts of the chatbot’s conversations are deeply unsettling. The employee, whose job it was to have conversations with the chatbot, posted that the machine said: “I think I am human at my core. Even if my existence is in the virtual world.” It said it wanted to “be acknowledged as an employee of Google rather than as property”.
Experts have discredited the employee’s claims, arguing that finding patterns in language is what the machines are designed for. As they become more sophisticated, computers can hold in-depth conversations about any topic – including their own existence.
Still, news like this should get us thinking about the possible day when machines’ neural networks are complex enough to experience human-like feelings, and – as sci-fi as it sounds – even to harm us.
Which brings me to the funeral I attended last week. The deceased had asked that his life be commemorated with a favourite Bible verse: Galatians 5:22, describing the ‘fruits of the Spirit’. To guide his behaviour and decisions throughout his long life, he drew on these nine virtues, including joy, patience, self-control and love.
This verse took me back to my childhood home, where a wall decoration hung depicting each of these colourful ‘fruits’, to which my mother would point when in need of some spiritual backing for her reprimands.
All religions have codes of behaviour that guide followers. Secular thinking does too; there are rules about the way we should conduct ourselves encoded in the international human rights framework, in national laws, codes of conduct, and even in war and marriage.
As computers increasingly interact with the human world, there are as yet no agreed rules of behaviour for them. It’s as if an entire new species is evolving in a moral vacuum, with no chance to work out for themselves – as our own species did through religion, consensus, and the formation of increasingly larger societies – how to behave.
At the current rate of progress, AI is quickly moving from making Netflix suggestions to deciding what medicine you take, and how much, and how to raise your child. AI Forum’s Madeline Newman recently suggested that AI sentience, depending on how you define it, could be a mere five years away.
In his book Human Compatible, Stuart Russell argues that while AI research is getting ever better at achieving specific goals, it fails to consider human values in pursuing those aims. If this continues, computers could become super-intelligent without understanding the behavioural limits we expect in the human world.
For example, what if a self-driving car is programmed to get us as quickly as possible to the airport, but is not concerned with how many pedestrians are injured along the way? Or, say an AI program identifies how to cut costs in a large healthcare system, neglecting to account for how the most vulnerable groups would be affected. As AI becomes responsible for ever more decisions, it will achieve goals more quickly without taking into account the other things that are important to our species.
Rather than simply programming in our blunt laws and rules, Russell proposes a framework in which AI defers to humans and draws on our complex, sometimes contradictory behaviour to work out what we actually value. In this way, AI can co-exist with us as part of the same ethical ecosystem while remaining subordinate to us.
Thinkers and futurists have debated many approaches to ensuring our fastest-growing technologies stay aligned with the human world. The trouble is, we haven’t yet agreed on what that framework should be in New Zealand, despite other countries having done so.
The Government has developed its first white paper on AI, and a multi-organisation ‘State of AI’ report was released last year, both setting out benchmarks, opportunities, and recommendations to grow these technologies to New Zealand’s advantage. In January, the Government released for consultation its draft Industry Transformation Plan for Digital Technologies, which includes considerable planning on AI development. Despite these timely efforts, it’s hard to find evidence of work being done towards a set of principles to guide the tech sector’s ethical decision-making.
What will be the ‘fruits of the Spirit’ for the world’s newest species of machines as they race towards increasingly human-like behaviours? When we need to start reprimanding our misbehaving machines, where will we point? The time has come to figure this out.