I remember once sitting in a pub with some friends and having a discussion about whether there’s a chance that one day we might all be enslaved by robots. We imagined a scenario where humans work menial jobs for the robots, in return for drinks and the odd packet of crisps each evening. We would enter the bar and drink in silence, our will broken by our robotic rulers. Meanwhile, the robots – invulnerable thanks to their logical design and lack of desire for pleasure – would sit behind the bar laughing at us and planning how to invade Mars.

Let’s not dwell on any questions surrounding possible functioning alcoholism within the group I was with, but rather focus on the idea of artificial intelligence taking over earth. We’re probably all familiar with the many science fiction books and movies that have explored the idea of robots fighting humanity. Films like I, Robot and The Terminator series have asked: what happens when advanced machines gain the ability to think for themselves? It might seem far-fetched now but, considering the way robotics is progressing, this could become a serious issue in the not-too-distant future. Amazingly, it turns out that it is a possibility which goes well beyond ‘pub talk’.


The theoretical moment when artificial intelligence surpasses humankind is known as ‘the singularity’ and, according to some people – and these people aren’t all odd eccentrics standing on street corners, without any shoes on, ranting – the singularity is not far away from taking place. In 2011, Ray Kurzweil, now Director of Engineering at Google, argued that it appeared increasingly likely that the singularity would occur around 2045. He told Time magazine: “We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence.”

During our series on the New Age of Disability, we’ve been singing the praises of the advancement towards autonomous humanoid robots of the sci-fi ilk, which could really benefit the less able. Robots with incredible levels of dexterity and function already exist and can carry out simple tasks, aid and assist the less able and even provide a level of companionship. These robots aren’t available to buy yet but the technology is basically all there. Research into cognitive, autonomous ‘brains’ is progressing all the time too, but we have to pause to consider the potential problems that it could bring.

In many senses machines have already surpassed human intelligence. Simple examples of this are the calculator, Deep Blue beating the best chess player alive and a computer called ‘Watson’ that beat human challengers in a verbal quiz game. But Daniel Wolpert, Royal Society Research Professor in the Department of Engineering at Cambridge, points out that true intelligence is based around our ability to interpret human behaviour and to respond to it with creative intelligence. He says that “expecting a machine close to the creative intelligence of a human within the next 50 years would be highly ambitious… I think there will be robotic entities with superhuman intellect within a few centuries.”


Although the date is disputable, it is apparent that the singularity is a very real prospect. This leads us to some serious issues about how to approach robotics. We know that there is a potential to do great things with robots, particularly in the realm of helping the less able. Already we’re seeing updates and advancements towards the possibility of robot carers who can attend to the needs of disabled people both physically and emotionally. But if this path is taken too far could it be a genuine risk to humanity?

Again, we can return to the world of academia to highlight that very smart, intellectual people are discussing these issues – it can’t be emphasised enough that this isn’t just a topic of 2am conversations between paranoid hippies. For instance, Lord Martin Rees, Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge, believes that there is a need to make sure we don’t allow progression to go too far. “I think we should ensure that robots remain as no more than ‘idiot savants’,” he said, “lacking the capacity to outwit us, even though they may greatly surpass us in the ability to calculate and process information.”

Oddly, there are a good many echoes of the creepy world of sci-fi within the real world of robotics – a suggestion that perhaps the scientists themselves are toying with the notion of a humans vs. robots conundrum. For instance, one company that has developed an exo-skeleton to supplement or expand physical capability has unwittingly taken the name of the system which revolted against humanity in The Terminator movies (Cyberdyne Systems). On top of that strange scenario, they’ve named this exo-skeleton HAL – the name of the computer that tries to kill the crew in 2001: A Space Odyssey. They just seem like they’re tempting fate really.

HAL exo-skeleton

But there are others who think that we’re getting carried away with ourselves when it comes to robot uprisings. Kathleen Richardson, an anthropologist of robots, sees the concern about robots becoming too powerful as part of a natural human process of personifying everything. We’ve designed robots in our image, both in reality and in fiction. By giving them names and faces it’s as if we are trying to humanise them. This can lead to a feeling of revulsion towards robots, as the My Robot Companion project explores, but it can also lead to fear. Richardson says that “the human fear of robots and machines arguably has much more to say about human fear of each other rather than anything inherently technical in the machines.”

It is a peculiarity that we tend to focus on the worst aspects of humanity when we talk about robots becoming closer to human consciousness. In sci-fi, superior intelligence nearly always seems to lead to robots taking over or destroying earth. But maybe we should think about it differently – if robots are capable of surpassing humans, maybe that would include surpassing our levels of selflessness and compassion. It is more than possible that our idea that greater intelligence leads to strength, power and corruption is based on a flawed logic that robotic intelligence could rise above.

Super brains could also lead robots in an entirely different direction, like Marvin the Paranoid Android in The Hitchhiker’s Guide To The Galaxy – perhaps a brain the size of a planet can lead to depression rather than a desire for world domination. This might sound crazy but you never know, maybe power isn’t the most important thing after all.

Marvin the Paranoid Android

Whatever the case may be, whether robots would actually decide to take over the world, as we’ve seen in sci-fi, or not, there are questions regarding how far we should take their advancement. A world where robots fulfil every possible function we need would make our lives pretty obsolete anyway, but a world where robots can aid or even supplement the strength of less able people would be an undeniably good thing. So, is it a case of finding where the line is?

There are scientists and academics who support the case that, after the singularity, there is a danger of cybernetic revolution. No matter how small that danger is, we may need to keep asking ourselves how much further we should go before it all becomes too dangerous. If we don’t, we may have a serious problem on our hands one day.

And if that day comes, not even Will Smith will be able to save us.
