Saturday, January 24, 2015

Martin Rees: Robots can enrich humanity - as long as we can keep them under control

The technology of artificial intelligence is advancing rapidly – but how will we cope with human-like machines more intelligent than us?

Science fiction or the future? The distinction between humans and robots could become blurred
Updated: 15:00, 23 January 2015
Original link: http://www.standard.co.uk/comment/martin-rees-robots-must-abide-by-laws--or-humans-could-become-extinct-9998203.html

Way back in 1942, the great science fiction writer Isaac Asimov formulated three laws that robots should obey. First, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second, a robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. Third, a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
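
What gives the laws their force is their strict precedence: each law yields to the ones above it. As a purely illustrative sketch (the flags below are hypothetical stand-ins for judgments no real robot can yet make), the hierarchy amounts to a lexicographic preference over candidate actions:

```python
from dataclasses import dataclass

# Toy rendering of Asimov's hierarchy as a lexicographic preference:
# Law 1 dominates Law 2, which dominates Law 3. Every field is a
# hypothetical stand-in; no real system can evaluate these judgments.

@dataclass
class Action:
    name: str
    harms_human: bool     # would violate the First Law
    disobeys_order: bool  # would violate the Second Law
    harms_self: bool      # would violate the Third Law

def choose(actions: list[Action]) -> Action:
    """Pick the action that best satisfies the laws, in strict priority order."""
    # Tuples compare element by element, and False sorts before True, so
    # a single First Law violation outweighs any number of lesser ones.
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.harms_self))

options = [
    Action("obey the order, injure a bystander", True, False, False),
    Action("refuse the order, stay intact", False, True, False),
    Action("refuse the order, sacrifice itself", False, True, True),
]
print(choose(options).name)  # -> "refuse the order, stay intact"
```

The ordering does all the work: disobedience is preferred to injuring a human, and the robot avoids needless self-sacrifice once the higher laws are satisfied.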

Seven decades later, intelligent machines pervade popular culture — most recently in the movie Ex Machina. But, more than that, the technology of artificial intelligence (AI) is advancing so fast that there’s already intense debate on how Asimov’s laws can be implemented in the real world.

Experts differ in assessing how close we are to human-level robots: will it take 20 years, 50 years, or longer? And philosophers debate whether “consciousness” is special to the wet, organic brains of humans, apes and dogs — so that robots, even if their intellects seem superhuman, will still lack self-awareness or inner life. But there’s agreement that we’re witnessing a momentous speed-up in the power of machines to learn, communicate and interact with us — which offers huge benefits but has downsides we must strive to avoid.

There is nothing new about machines that can surpass human mental abilities in special areas. Even the pocket calculators of the 1970s could do arithmetic better than us.

Computers don’t learn the way we do: they use “brute force” methods. Their internal networks are far simpler than a human brain, but they make up for this disadvantage because their “nerves” transmit messages electronically at close to the speed of light, millions of times faster than the chemical signalling between neurons in human brains. Computers learn to translate from foreign languages by reading multilingual versions of (for example) millions of pages of EU documents (they never get bored!). They learn to recognise dogs, cats and human faces by crunching through millions of images, not the way a baby learns.
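
The flavour of that “brute force” learning can be caught in a few lines. Here is a minimal sketch of the simplest statistical learner, a nearest-neighbour classifier; the two-number “features” and the tiny dataset are invented for illustration, where a real system would digest millions of high-dimensional examples:

```python
import math

# Minimal "learning by crunching examples": label a new case by copying
# the label of the most similar case seen before. The feature vectors
# and labels below are invented purely for illustration.

training = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"),
    ((0.2, 0.8), "dog"),
]

def nearest_label(example, data):
    """1-nearest-neighbour classification under Euclidean distance."""
    closest_features, label = min(data, key=lambda item: math.dist(item[0], example))
    return label

print(nearest_label((0.85, 0.15), training))  # -> cat
print(nearest_label((0.15, 0.95), training))  # -> dog
```

More data means finer distinctions, which is exactly why such systems improve as the corpus grows, and why they need so many more examples than a child does.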

Because computers can process huge amounts of data, they can identify trends that unaided humans would overlook. This is how “quant” hedge funds make their money. Perhaps we should already worry that future “hyper-computers”, analysing all the information on the internet, could achieve oracular powers that offer their controllers ever-growing dominance of international finance and strategy.
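
As a rough illustration of the kind of regularity such systems hunt for (the price series below is invented, and real funds mine far larger datasets for far subtler patterns), consider a crude trend signal: a short-run average climbing above a long-run one:

```python
# Crude trend detection: a short-run moving average crossing above a
# long-run one. The price series is invented for illustration only.

def moving_average(series, window):
    """Mean of the trailing `window` values at each point."""
    return [sum(series[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(series))]

prices = [100, 101, 99, 102, 104, 107, 111, 116, 118, 121]
fast = moving_average(prices, 2)   # reacts quickly to recent moves
slow = moving_average(prices, 5)   # smooths over a longer horizon

# Align the two series on their common tail and look for a crossover.
offset = len(fast) - len(slow)
for day, (f, s) in enumerate(zip(fast[offset:], slow), start=len(prices) - len(slow) + 1):
    if f > s:
        print(f"upward trend signalled on day {day}")
        break
```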

Advances in software and sensors have been slower than in number-crunching capacity. Robots are still clumsier than a child at moving the pieces on a real chessboard. But sensor technology, speech recognition, information searching and so forth are advancing apace.

Google’s driverless car has already covered hundreds of thousands of miles. It will be years before robots can cope with emergencies as well as a good driver, but a driverless car should soon be safer than the average driver: machine errors will occur, though less often than human ones. The roads will be safer. Yet when accidents do occur they will create a legal minefield. Who should be held responsible: the “driver”, the owner, or the designer?

And what about the military use of “dumb” autonomous robots? Can they be trusted to seek out a targeted individual via facial recognition and decide whether to fire their weapon? Who has the moral responsibility then?

Robots are replacing people in manufacturing plants. And they will take over more of our jobs — not just manual work (indeed, jobs such as plumbing and gardening will be among the hardest to automate), but clerical jobs, routine legal work, medical diagnostics and operations. But the big question is this: will the advent of robotics be like earlier disruptive technologies — the car, for instance, which created as many jobs as it destroyed? Or is it really different this time, as has been argued, for instance, in Erik Brynjolfsson and Andrew McAfee’s fine book The Second Machine Age?

These innovations would generate huge wealth, but there would need to be massive redistribution via taxation to ensure that everyone had at least a living wage. Moreover, a life of leisure — as available, for instance, to the citizens of Qatar today — doesn’t necessarily lead to a healthy society.

By 2050, if not sooner, our society will surely have been transformed by robots. But will they be idiot savants or will they display full human capabilities? If robots could observe and interpret their environment as adeptly as we do, they would be perceived as intelligent beings that we could relate to. Would we then have a responsibility to them?

And this is where Asimov’s laws come in. How can we ensure that robots remain docile rather than “going rogue”? What if a hyper-computer developed a mind of its own? It could infiltrate the internet and manipulate the rest of the world. It might even treat humans as an encumbrance.

In the 1960s the British mathematician I J Good — who worked at Bletchley Park with Alan Turing — pointed out that a super-intelligent robot (were it sufficiently versatile) could be the last invention that humans need ever make. Once machines have surpassed human capabilities, they could themselves design and assemble a new generation of even more powerful ones. Or could humans transcend biology by merging with computers, maybe losing their individuality and evolving into a common consciousness?

We don’t know where the boundary lies between what may happen and what will remain science fiction. But some of those with the strongest credentials think the AI field already needs guidelines for “responsible innovation”. Many of them were among the signatories — along with anxious non-experts like me — of a recent open letter urging exactly that.

In later writings, Asimov added a fourth law, his “Zeroth Law”, so called because it takes precedence over the other three: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. Perhaps AI developers will need to be mindful of that law as well as the other three.
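
In the toy precedence sketch above, the addition is mechanical: the new law simply becomes the first, dominant element of the priority ordering (again, `harms_humanity` is a hypothetical flag, and deciding what actually harms humanity is the genuinely hard part):

```python
# Extending the earlier sketch: the later law outranks the original
# three, so it leads the priority tuple. As before, `harms_humanity`
# is a hypothetical stand-in; judging it is the hard problem.
def choose(actions):
    return min(actions, key=lambda a: (a.harms_humanity, a.harms_human,
                                       a.disobeys_order, a.harms_self))
```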

Martin Rees is Astronomer Royal and co-founder of the Centre for the Study of Existential Risk at the University of Cambridge.
