How scared should we be about machines taking over?

Life 3.0 by Max Tegmark argues that questions about artificial intelligence need to be confronted sooner rather than later

By Steven Poole | China Daily USA | Updated: 2017-12-04 14:07

“Prediction is very difficult,” the great physicist Niels Bohr is supposed to have said, “especially when it’s about the future.” That hasn’t stopped a wave of popular-science books from giving it a go, and attempting, in particular, to sketch the coming takeover of the world by superintelligent machines.

This artificial-intelligence explosion — whereby machines design ever-more-intelligent successors of themselves — might not happen soon, but Max Tegmark, an American physicist and founder of the Future of Life Institute, thinks that questions about AI need to be addressed urgently, before it’s too late. If we can build a “general artificial intelligence” — one that’s good not just at playing chess but at everything — what safeguards do we need to have in place to ensure that we survive?

We are not talking here about movie scenarios featuring killer robots with red eyes. Tegmark finds it annoying when discussions of AI in the media are illustrated like this: the Terminator films, for example, are not very interesting for him because the machines are only a little bit cleverer than the humans. He outlines some subtler doomsday scenarios. Even an AI that is programmed to want nothing but to manufacture as many paper clips as possible could eradicate humanity if not carefully designed. After all, paper clips are made of atoms, and human beings are a handy source of atoms that could more fruitfully be rearranged as paper clips.

What if we programmed our godlike AI to maximise the happiness of all humanity? That sounds like a better idea than making paper clips, but the devil’s in the detail. The AI might decide that the best way to maximise everyone’s happiness is to cut out our brains and connect them to a heavenly virtual reality in perpetuity. Or it could keep the majority entertained and awed by the regular bloody sacrifice of a small minority. This is what Tegmark calls the problem of “value alignment”, a slightly depressing application of business jargon: we need to ensure that the machine’s values are our own.

What, exactly, are our own values? It turns out to be very difficult to define what we would want from a superintelligence in ways that are completely rigorous and admit of no misunderstanding. And besides, millennia of war and moral philosophy show that humans do not share a single set of values in the first place. So, though it is pleasing that Tegmark calls for vigorously renewed work in philosophy and ethics, one may doubt that it will lead to successful consensus.

Even if progress is made on such problems, a deeper difficulty boils down to that of confidently predicting what will be done by a being that, intellectually, will be to us as we are to ants. Even if we can communicate with it, its actions might very well seem to us incomprehensible. As Wittgenstein said: “If a lion could talk, we could not understand it.” The same might well go for a superintelligence. Imagine a mouse creating a human-level AI, Tegmark suggests, “and figuring it will want to build entire cities out of cheese”.

A sceptic might wonder whether any of this talk, though fascinating in itself, is really important right now, what with global warming and numerous other seemingly more urgent problems. Tegmark makes a good fist of arguing that it is, even though he is agnostic about just how soon superintelligence might appear: estimates among modern AI researchers vary from a decade or two to centuries to never, but if there is even a very small chance of something happening soon that could be an extinction-level catastrophe for humanity, it’s definitely worth thinking about.

In this way, superintelligence arguably falls into the same category as a massive asteroid strike such as the one that wiped out the dinosaurs. The “precautionary principle” says that it’s worth expending resources on trying to avert such unlikely but potentially apocalyptic events.

In the meantime, Tegmark’s book, along with Nick Bostrom’s Superintelligence (2014), stands out among the current books about our possible AI futures. It is more scientifically and philosophically reliable than Yuval Noah Harari’s peculiar Homo Deus, and less monotonously eccentric than Robin Hanson’s The Age of Em.

Tegmark explains brilliantly many concepts in fields from computing to cosmology, writes with intellectual modesty and subtlety, does the reader the important service of defining his terms clearly, and rightly pays homage to the creative minds of science-fiction writers who were, of course, addressing these kinds of questions more than half a century ago. It’s often very funny, too: I particularly liked the line about how, if conscious life had not emerged on our planet, then the entire universe would just be “a gigantic waste of space”.

Tegmark emphasises, too, that the future is not all doom and gloom. “It’s a mistake to passively ask ‘what will happen’, as if it were somehow predestined,” he points out. We have a choice about what will happen with technologies, and it is worth doing the groundwork now that will inform our choices when they need to be made.

Do we want to live in a world where we are essentially the tolerated zoo animals of a powerful computer version of Ayn Rand; or will we inadvertently allow the entire universe to be colonised by “unconscious zombie AI”; or would we rather usher in a utopia in which happy machines do all the work and we have infinite leisure?

The last sounds nicest, although even then we’d probably still spend all day looking at our phones.

Steven Poole’s Rethink: the Surprising History of New Ideas is published by Random House

374pp, Allen Lane, £20, ebook £9.99
