Photo by Franck V/Unsplash

Are robots stealing our jobs… and our brains?

Essay | 19 minute read
How well-founded are the hopes, fears and prejudices around a future of Artificial Intelligence?

AI technology can help us all live better lives, as long as we learn to stop treating it like it’s human

Your phone has it, your car might have it, Google has it. But what is Artificial Intelligence? And should we be concerned about how it’s being developed? AI is a catch-all term for technology that simulates brain function – doing some of the things a human brain can do, such as solving problems, learning autonomously, using language to explain itself, guessing at others’ reasoning (‘theory of mind’) and learning from its mistakes.

AI is becoming a huge part of our lives, both at home and at work. It is being developed in robots that assist the armed forces; it is inside social robots that play with our children; and it is in features that we use daily as part of our jobs. And that is only going to increase in the future. Just this month, Pepper, a Japanese AI robot, gave evidence in parliament about how culturally aware robots could ease the strain on the NHS by providing care and comfort to elderly people. Increased use of AI is not a problem in itself; the issue is how easily we can be exploited by responding emotionally to AI, and the potentially sinister motives of the companies using it.

AI and work

A recent survey by the science festival New Scientist Live showed that 52 per cent of people think AI will contribute to robots stealing our jobs. That conviction, while simplistic, is not unfounded, according to Rob Wortham, a speaker and teaching fellow in robot ethics at the University of Bath.

‘Elements of everybody’s job are open to intelligent automation but there are also elements that aren’t, so AI will re-organise how we work and how people use technology in their jobs. There is, however, a mass of people that want [low-skilled jobs] for a basic income and that could become a concern.’

AI is all around us, and most of us aren’t aware of how much we already rely on it. Gmail’s new ‘Smart Compose’ feature, for example, uses AI to autocomplete our emails as we type them. In a blog post in May this year, Paul Lambert, a product manager at Google, explained: ‘From your greeting to your closing (and common phrases in between), Smart Compose suggests complete sentences in your emails so that you can draft them with ease… Smart Compose helps save you time by cutting back on repetitive writing, while reducing the chance of spelling and grammatical errors. It can even suggest relevant contextual phrases.’

It sounds like a handy, time-saving feature – and it probably is. But what if we added a learning element to this AI that could predict not only what you will say in your emails but when you will send them? What if the feature became so developed that it didn’t even need a human component? What if it could receive, scan and ‘read’ emails, predict what actions you would take and send replies in your tone of voice without you even being a part of the process?

That might be really useful and make you more efficient at work, but what if this function was added to other aspects of your job that could be automated in a similar way? What if all of your work could be automated?

Photo by Franki Chamaki/Unsplash

The knock-on effect of intelligent robots taking human jobs would be twofold: there would not be enough work for humans, and governments would not be able to look after their citizens, because robots are not good taxpayers. It would be one thing if we could all go down to a three-day week and still live well, but that is not what will happen if the workplace doesn’t adapt.

Current tax policies in the UK encourage automation as well as AI in the workforce, and this has stirred recent debate about whether a ‘robot tax’ could be introduced to compensate governments whenever human jobs are replaced by robots – funding something similar to universal credit to subsidise the ‘unskilled’ workers displaced by AI. South Korea has already announced what was described as the world’s first ‘robot tax’, limiting tax incentives for businesses investing in automation.

Manipulation

The second concern people have about robots replacing human work is that there is currently very little regulation of how AI is used by businesses – leaving it free to serve corporate interests rather than human wellbeing. GDPR has gone some way towards protecting how data is used and passed around, but there is no regulation against social harm, addiction or manipulation by AI.

‘There is huge potential for harm and for people to use AI to manipulate us,’ says Dr Wortham. ‘We all know that Facebook is being used as a tool for manipulating people – but other than elections, where there is general legislation about manipulating people [such as electoral fraud laws], there’s no certification or testing to prevent manipulation. You can write any software you like and throw it out on a million mobile phones. There is no legal framework about causing social harm or the way robots can manipulate emotions; there’s nothing specific for robots or AI. The current legislation is ineffective.’

The neurological reason why we can be manipulated by our technology is a complicated one, but it is obvious to even the most casual of observers that it doesn’t take much for us to relate to everything around us as though it is human – that is how AI has come to be categorised as both our best friend and our enemy. We are social beings and we relate to the world around us by using a social framework, even when the thing we are socialising with isn’t human. We call our pets our ‘babies’ and talk to them as though they understand our language; we shout at our computers when they don’t work how we want them to; we feel frustrated with our alarm clocks for waking us up in the morning (even when we set them ourselves!). In short, we engage with our world through a largely anthropomorphic framework.

The problem with AI, then, lies in our susceptibility to treating it as human, rather than in anything inherently good or bad about the technology itself.

Photo by Andy Kelly/Unsplash

The presence of a few fundamental social cues – interactivity, language, filling a traditionally human role – is sufficient to elicit automatic and unconscious social reactions from humans. In hospitals, plain unmarked boxes deliver medication to patients on drug wards; according to one academic paper, experiments show that simply giving these boxes a name is enough to make doctors, nurses and patients bond with them.

Kate Darling, a researcher at the Massachusetts Institute of Technology (MIT), conducts experiments in which people play with Pleos, small mechanised dinosaurs that react to external stimuli, and Hexbugs, cockroach-shaped robots that can move on their own. Participants are asked to ‘torture’ them, but most can’t or won’t, even though they rationally understand that these are not sentient beings.

A recent study in the open-access journal PLOS ONE found that if a robot begs not to be switched off, participants are reluctant to do so. The robot in the study asked simple questions such as ‘Do you prefer pizza or pasta?’, which was enough to make the participants like it. At the end of the experiment, when given the option to turn the robot off, people became stressed as it protested: ‘No! Please do not switch me off! I am scared that it will not brighten up again!’

The study was reported in The Times as proof that, in a dystopian robot-war scenario, robots could easily manipulate humans into extinction. That might be a far-fetched conclusion, but what is interesting is our desire to interact with robots as companions, while harbouring a fundamental distrust of the technology at work.

Neil Richards and William Smart, US-based academics who work on robot law, have argued that it is bad for us to feel empathy for robots, which are essentially tools and nothing more, because it will affect how the treatment of robots is regulated. But empathy with our robot co-workers can be useful when we want robots to behave in a humanoid manner and humans to work alongside them as teammates.

In the military, for example, soldiers deploy tactical robots to cross difficult terrain and disarm IEDs in war zones. Julie Carpenter’s book, Culture and Human-Robot Interaction in Militarized Spaces, records interviews with soldiers who have worked with robots such as TALON and iRobot’s PackBot – bulky machines on tank-style tracks, with cameras mounted on robotic arms, that look a little like Number Five from the film Short Circuit. They are designed for a specific job and are not at all humanoid, yet the soldiers related to them as if they were family members and spoke of them like faithful pets.

‘During a mission in Iraq in 2006,’ recounts one soldier, ‘I lost a robot that I had named “Stacy 4” after my wife, who is an EOD [bomb disposal] tech as well. She was an excellent robot that never gave me any issues, always performing flawlessly. Stacy 4 was completely destroyed and I was only able to recover very small pieces of the chassis. Immediately following the blast that destroyed Stacy 4, I can still remember the feeling of anger, and lots of it… “My beautiful robot was killed…” was actually the statement I made to my team leader. After the mission was complete and I had recovered as much of the robot as I could, I cried at the loss of her. I felt as if I had lost a dear family member.’

Some of the soldiers who have worked alongside these robots even hold funerals for them and, more worryingly, there are anecdotes in P.W. Singer’s book, Wired for War, about humans risking their lives to save their bomb disposal robots.

‘The bomb disposal robot is doing dangerous work and relieving the soldiers of the responsibility of doing it themselves and so they feel they owe it respect and a debt of gratitude,’ explains Dr Wortham. ‘The only way they can make sense of that when it gets blown to bits is to have some ceremony to remember it or give thanks, that’s just the way we understand the world.’

What that New Scientist Live survey was picking up on was our subconscious understanding that there is a fine line between a robot or piece of technology being able to interact with us in a ‘human’ way, and being able to use that ability to influence our behaviour. It is fine to befriend a dog or shout at our TV screens, but what happens when the thing we are interacting with is as clever as – or even cleverer than – we are? It becomes far harder to guess the robot’s motivations – or rather, the intentions of the company or people who programmed it in the first place.

Sophia, often described as the world’s most advanced AI humanoid robot, was developed by the Hong Kong-based company Hanson Robotics as a ‘social robot’. She once admitted in a live TV interview that she wanted to ‘destroy all humans’. It was probably intended as a joke, but Sophia has since learnt that this puts people on edge. Thanks to her ability to learn from her mistakes, when she is asked about it now she responds with an enigmatic smile.

Sophia can read facial expressions and is constantly learning and developing. She has spoken to the United Nations about how AI might be used to better humanity by improving equal access to resources, from food to technology. She is – naturally – a public advocate of AI and insists it can make the world a better place.

Cynthia Lynn Breazeal is an Associate Professor of Media Arts and Sciences at MIT, where she is also director of the Personal Robots Group at the MIT Media Laboratory and co-director of the Center for Future Storytelling. Ever since she graduated in 1993, she has designed robots that interact with humans, the first of which was Kismet (meaning ‘fate’ in Turkish), the world’s first social robot. Kismet was a disembodied head, connected remotely to a vast concealed network of servers, which moved its mouth, eyes, eyebrows, ears and neck to show expression and comprehension when interacting with a human. Even without speech, Kismet was able to connect with people in a very ‘human’ way.

Kismet, created by Dr Cynthia Breazeal (Photo by Rick Friedman/Corbis/Getty)

Dr Breazeal’s most recent robot, Jibo, made the cover of TIME magazine last year. If we’re going to divide robots into two groups – friendly companions versus sinister forces for manipulation – Jibo belongs firmly in the former. Unlike home devices such as Amazon Echo, which are designed to use AI to make buying Amazon products easier, Jibo has been designed with human interaction as his first priority. He can network with your home’s lighting system, play music from iHeartRadio or do his favourite dance for you. His head and body swivel to look at you when you ask a question; he recognises your face and voice. If you rub his head he purrs; he can giggle; he has a cute, humorous personality.

He won’t try to mine your data for advertising or sell you a Google Home membership – he is more human than your average voice assistant, and his AI allows him to learn to get better at interacting with his host family. But, and here’s the rub, he is still learning – and his skills are pretty rudimentary. He also won’t collaborate with other home-assistant systems, which makes him seem slow and clumsy compared with Alexa or Google Home.

Sadly, Jibo doesn’t look as though he will be coming to your household any time soon. The company behind him has not yet managed to recoup the money spent on his development. The robot costs around $800 (£600) in the US and Canada, where it is mainly sold, and due to delays, Jibo Inc had to refund many of its original backers on the crowdfunding platform Indiegogo.

Jibo was launched at the same time as Amazon Echo, which is produced out of the deep pockets of Amazon and sold at just £90, a fraction of Jibo’s price. Added to that, Amazon Echo can do a lot more than Jibo in terms of assistance: where Alexa, the Echo’s voice-activated assistant, can play YouTube Music or Spotify, Jibo can tell dad jokes. This year Google Home follows quickly on the coattails of Amazon’s Echo Show, which launches in November, and the two behemoths are now in an intense war to monopolise the home AI assistant market, squeezing out smaller companies for now.

Amazon announced last month that it plans to equip household devices with AI through its Alexa program. The Alexa assistant will be able to converse in whispers by Christmas this year and will be capable of listening for trouble such as breaking glass or a smoke alarm when you are away from home (a feature called Alexa Guard). Amazon is also experimenting with giving Alexa emotional awareness, enabling it to listen for the sound of frustration in a person’s voice.

‘We’re going beyond recognising words,’ said Rohit Prasad, the Director of Machine Learning at Amazon Alexa, when the plans were announced.

‘Jibo has been designed with human interaction as his first priority. He can network with your home’s lighting system, play music from iHeartRadio or do his favourite dance for you’ (Photo by Joan Cros/NurPhoto via Getty Images)

In the US, Marriott hotels have also announced that they will be installing Amazon’s Alexa to help with room service and housekeeping. It won’t be hugely advanced yet – it won’t change the lighting to suit your mood or find a film depending on whether you’re staying for business or leisure – but it will have some advantages. Technology expert Tom Cheesewright told the BBC in a lighthearted review: ‘One highlight for me would be lying in bed watching TV and telling room service to bring me a pizza with voice command. To me that is very luxurious!’

On the surface, these products sound as though they will make home life easier and more efficient. But Dr Wortham argues that we should be more sceptical about the motives of the companies that sell them. ‘The reason products like this are so cheap to buy in your home is because the people behind the robots are trying to get you to buy more products or apps.’

Pepper, the robot who spoke to parliament about the benefits of AI, is also designed to understand emotions. Encased in white plastic and the size of an eight-year-old child, it has a voice and facial expressions. In the UK it is available only for research purposes, but it can be bought for home use in Asia, where more than 10,000 have been sold so far at a cost of about £1,500 per robot.

‘I am very worried about all of this,’ says Dr Wortham. ‘There’s potentially a huge opportunity for massive abuse by the people who make those machines. Instagram is one of the most addictive things on the planet. A mobile phone is a robot; it interacts with you. It is intelligent and autonomous and you see how addictive they are if you put a humanoid face on that. And if it is also empathic… you think of it as a human, and trust it.’

But there is huge potential for good with a social AI home assistant too: the future is not a dystopia waiting to happen, in which our value is reduced to the amount we can spend and consume. AI assistants are great learning tools for children and, like Sophia and Pepper, they are being trialled as companions to combat loneliness among the elderly.

Dor Skuler, CEO of Intuition Robotics, has created ElliQ, a robot for older people due to launch later this year or in early 2019. ElliQ gives her owner access to the internet without requiring any knowledge of how it works, proactively offering TED Talks, news summaries, music, games and exercise classes, much like a friend suggesting activities.

This not only combats loneliness but provides mental and physical stimulation. Skuler says, ‘Our mission with ElliQ is to harness the proactive power of cognitive computing to empower older adults to overcome the digital divide and pursue an active lifestyle.’

Norby is another social robot, designed to ‘make parenting easier’ by helping children with learning, play and mindfulness and allowing adults to monitor their progress through an app. Norby hasn’t launched yet, but it will be interesting to see whether he fares any better than Jibo (his creators have programmed him to be a social robot first and foremost, just like Jibo).

Great progress is being made with AI outside the home as well; it’s not all about replacing humans, but also about using AI to do things that humans simply can’t. ‘There are possibilities for AI to be more useful for people in fields like medical diagnosis and image recognition use, or in natural disasters and difficult environments,’ says Dr Wortham.

An app is being developed that uses machine learning and AI to recognise sign language and translate it into speech. The same visual-recognition technology is already used in medicine to identify cancerous tumours.

These benign AI models will probably never stop the doomsday scenarios of robot takeover spinning in our collective imagination, from the murderous computer HAL in 2001: A Space Odyssey (‘I’m sorry Dave, I’m afraid I can’t do that’) to the manipulative humanoid robot in Ex Machina.

The entrepreneur Elon Musk has given voice to these fears in recent years with repeated warnings about where AI development might lead. Speaking at the SXSW festival (a conglomerate of film, interactive media and music festivals and conferences) earlier this year, he warned that there ‘needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads by a lot, and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane,’ he said, adding, in a somewhat Trumpian tone: ‘And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.’

AI will result in joblessness and better weaponry, Musk went on to warn, and then become ‘a fundamental species-level risk’ when it develops into a sort of ‘digital super intelligence’.

Elon Musk (Photo by David McNew/AFP/Getty Images)

Could this be the case? Stanford University’s ambitious 100-year study on AI is looking into the long-term effects and possibilities of AI, so that might give us our answer in a century’s time. Its lengthy recent report, ‘Artificial Intelligence and Life in 2030’, concludes that both the hope that AI will improve the world in unimaginably positive ways and the fear that it will be the cause of human extinction are unrealistic.

Peter Stone, a computer scientist at the University of Texas at Austin, was the lead author on the 2030 report. He stated: ‘It’s a misconception of people … that AI is one thing. We also found that the general public is either very positively disposed to AI and excited about it, sometimes in a way that’s unrealistic, or scared of it and saying it’s going to destroy us, but also in a way that’s unrealistic.’

Where do we go from here, with all the ambiguities and contradictions that surround a future with AI? Musk’s solution is to turn us all into cyborgs. His venture, Neuralink, is working to create a way to connect the brain with machine intelligence. It is an unsettling possibility, but if robots and humans merged in what sci-fi calls ‘the singularity’, at least there would no longer be a fear that the robots would destroy humankind.

Our natural distrust of technology is, in the end, perhaps not a fear of Artificial Intelligence being able to do more than we can. After all, our phones can do maths better than us, but that doesn’t mean they are about to take over the world. And yet these phones are able to share our data with companies and organisations that we can’t see. That is where the worry comes from.

‘I don’t trust or distrust robots,’ says Dr Wortham. ‘My concern is the people who operate them, and the laws that regulate these people. It’s nothing to do with the robots at all.’