Interview: John Lennox Answers Questions about Artificial Intelligence in 2084


This week, Oxford mathematician John Lennox’s new book, 2084, on the problems raised by AI, hit the stands. 

Lennox asks, “What will the year 2084 hold for you — for your friends, for your family, and for our society? Are we doomed to the grim dystopia imagined in George Orwell’s 1984?”

Well, maybe not. There are good reasons for doubt. First, is it really true that computers can out-think humans? Mind Matters asked Lennox some searching questions about that. His answers are not what we have heard in the sci-fi fright films:

Mind Matters News: Dr. Lennox, you quote astronomer Martin Rees as saying, “Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity — spanning tens of millennia at most — will be a brief precursor to the more powerful intellects of the inorganic post-human era.”

Okay. But what reason have we to believe that our artefacts will really be smarter than ourselves? Isn’t that something of a skyhook?

John Lennox: Very little. It is always dangerous to extrapolate exponentially, and our undoubted progress in technology in terms of speed and competence can easily mask the huge barrier that stands in the way of superior intelligent machines: consciousness. Smart humans are conscious, but since we do not even know what consciousness is, we are no further forward in that direction.

The Limitations of AI

Mind Matters News: Now that you mention it, Nick Bostrom and Eliezer Yudkowsky have commented on the limitations of AI: “Current AI algorithms with human equivalent or superior performance are characterized by a deliberately programmed competence only in a single, restricted domain. Deep Blue became the world champion at chess, but it cannot even play checkers, let alone drive a car or make a scientific discovery.”

That seems like a problem for the whole concept. The intelligence is never generalized; it exists only in the specific programming, which might mean it is merely the programmer’s intelligence. Is that a wrong way to see it?

John Lennox: That is my own feeling. As Dr Mellichamp, an early (Christian) researcher said in 1985 in a lecture at Yale: “The artificial in artificial intelligence is real.” An AI system is not intelligent in the human sense since the ‘intelligent’ behavior it manifests is due, as you suggest, to the intelligence of the constructor and programmer.

Mind Matters News: But, of course, that doesn’t mean that there are no problems. You write that the power of such systems “lies in their ability to handle vast amounts of data, to build up a profile of individuals, and to detect patterns, both within an individual’s behaviour and across a population.”

Surveying the scene in China, isn’t that the biggest problem? Not that the machines will outsmart us but that they will be used by powerful forces to control us in more detail than was ever possible before?

John Lennox: Yes, this is the much greater danger, since it comes from (narrow) AI that has already been developed and is now in use, particularly in China. However, the point has been made that all the necessary equipment to produce a totalitarian surveillance state is available in the West. The only difference is that it is not (yet) under centralised state control.

Humans and Superhumans

Mind Matters News: Further to claims that computers will out-think people, you also write about “the general-purpose ability to visualise things and to reason about scenarios of objects and processes that exist only in our minds. This general-purpose capability, which humans all have, is phenomenal; it is a key requirement for real intelligence, but it is fundamentally lacking in AI systems. There are reasons to doubt if we will ever get there.”

Some of us are not clear that there is any way to get there! As a general principle, a cat only gives birth to kittens, not babies. How will a human give birth to superhumans?

John Lennox: I think that this is essentially a sci-fi hope. One attempt to make superhumans is by starting with humans — which we can produce by the usual genetic process — and then enhancing them by bioengineering, merging them with micro-miniature technology, rearing them on drugs, etc. The other way is the attempt to eliminate dependence on carbon-based life and construct more durable “life” on silicon. That would be giving “birth” in an engineering and intellectual sense.

I do not see any evidence that either of these will happen and, even if it did, the products would not be human but, as C. S. Lewis said, artefacts: “Man’s final conquest has proved to be the abolition of Man.”

The Hype Goes On

Mind Matters News: Overall, do you see a more reasonable approach to these issues shaping up before 2084?

John Lennox: I see the hype continuing, since it is being driven not only by sci-fi thriller writers but by leading scientists, on the dubious assumption that, since we have evolved to our present state by mindless, unguided processes, we are now intelligent enough to take the process into our own hands, and that there is therefore no limit to the extent to which we can accelerate the future shaping of humanity.

Ethical considerations might put a brake on this kind of hubris, but I see little hope of that, since ethical thinking about these issues is far outpaced by the rate of technological development.

It is important for us to realise that worldview assumptions lie behind the transhumanist drive, and there is a vast difference between those who hold the view just sketched and Christians who believe that God has authenticated humanity 1.0 by creating us in his own image.

Photo: John Lennox, by Pro Medienmagazin, via Flickr.

Cross-posted at Mind Matters.