Robots: the elephant(s) in the room?
Almost every sci-fi film you will ever see features some kind of robot. In some, robots are a force for good (WALL-E), in some a force for bad (I, Robot), and in some just a fact of life in the future (Star Wars). The trouble is that the environments these cinematic robots inhabit seem distant from our present reality. The question I want to pose in this post is: what happens to society when robots become part of the fabric?
One of the films I’ve already mentioned, I, Robot, is a dystopian vision of how things could go spectacularly wrong. Surrogates is another, potentially even more problematic, vision. In line with my previous post on growing inequalities in global society, I want to consider what would happen if robots became good enough to carry out more of the human jobs that currently attract the lowest levels of remuneration. In other words, what happens when the financial elite can obtain ‘efficiency savings’ by employing robots instead of paying minimum wage to some of the poorest in our society?
We have a historical precedent for people violently opposing technological innovation. In the 19th century a loosely-organised group of people collectively known as Luddites smashed machines that made it easier, quicker and cheaper to produce textiles. Although I don’t condone their violence (they attempted to assassinate factory owners), I’m in full agreement that ‘efficiency’ is less important than human welfare. So who thinks it’s a safe bet that the first wave of robots to take (visible) jobs from humans will be set upon and destroyed? I do. In countries like the USA where guns are a normal part of society, this could lead to robot owners arguing that they should be able to arm them to protect their investment. If that happens, it’s Armageddon time.
And what about education? If you consider learning to be akin to knowledge transfer, then before Matrix-style human brain ‘upgrades’ become commonplace, some states/countries will seriously consider using robots to teach children. Japan will be first, no doubt. Unless we undergo a transformation in our collective thinking, we will end up sending our children to institutions with high fences to drill-and-practice skills that are not needed now, never mind in 2020 and beyond. Sometimes it’s good to investigate the thick end of the wedge to test our intuitions.
Part of the problem is that our view of human flourishing is based on a scientific rationality that, at its logical extreme, culminates in us ‘upgrading’ ourselves to be functionally indistinguishable from robots. When I mentioned this to Louise Thomas from the RSA recently she said that something similar forms the basis of one of Iain M. Banks’ novels. I shall have to investigate. All in all, I think we need not only a conversation about the purpose(s) of education, but also a conversation about what it means to be human. People will do what they can get away with, what it is socially acceptable to do, what gives them a competitive advantage. Once robots become involved, things get serious on a whole new level. And I haven’t even mentioned robots for security, warfare and policing… 😮
Image CC BY-NC-SA STCroiss
7 thoughts on “Robots: the elephant(s) in the room?”
What do you mean by a robot? We already have them in car manufacturing plants, but the only guns they tote produce paint spray.
Is a voice recognition machine a robot? These are taking the repetitive tasks away from poorly paid call centre staff, and there’s no revolution happening that I can see.
At what point will robots look human? Apart from oddities like Asimo, I’ve seen futurologists suggesting that robots will be unlike humans for a long time yet. Why do they have to be human? Unless they are for a social purpose (companionship, education…?), there’s a huge waste of resources (and many engineering problems to be overcome) in making robots like C3PO.
Robots are much more likely to look like utility devices (at best R2D2, or at worst Danny Nicholson’s Roomba) than people.
However, the Luddites may still rise – knitting machines certainly didn’t have to look human to stir up ire!
I don’t think they need to *look* human to threaten human society. What I’m trying to get at here is that, as Cathy Davidson points out in ‘Now You See It’, if our attention is focused on the wrong thing it doesn’t matter if we’re getting faster and more efficient.
I think you miss my point. The robots are already here – they have just emerged from less intelligent machines.
In education they may be called learning management systems – and not be very good at their job, even with human assistance – because drill and practice is easily “mechanised”.
Will the “robots” be used for education, though – which is cheaper, educating a human with a robot, or just replacing the human with a robot? Why not cut out the expensive middleman? Machines have been doing this for years, leaving the under-educated sectors of society little to do :(
Indeed, but I think you miss the point of my post, which is that unless we think about what we’re doing as a global society (jobs, education, etc.) then we’re sleepwalking into a future that many of us would find fairly abhorrent. :-o
Our relationship with the power of machines and technology continues to be a relevant topic.
We are indeed sleepwalking into a bleak future, our minds numbed and dumbed by a deluge of irrelevant information and entertainment. We can only learn how to think – not what to think – in contact with other thinking, feeling humans.

About your last comment: social acceptance and legislation lag far behind the advances of technology, as they always have. Just one example: http://news.bbc.co.uk/1/hi/england/merseyside/8517726.stm

An older example is the birth of the word “sabotage”, from “sabots”, wooden shoes: in the history of Lyon at the time of the canuts (silk workers), as the machines evolved and the first Jacquard-style weaving machines were introduced, the workers, fearing they would have no more work, revolted and smashed the new machines with their sabots. That is where the idea of sabotage was born.
Thanks for the link! :-)
What does it mean to be human? An important question that deserves consideration. Sherry Turkle describes “therapeutic” robots designed to replace the very human function of providing companionship for the elderly. Half of her new book “Alone Together” explores this topic.
Have you read Ray Kurzweil’s book “The Singularity is Near”? That one really raises questions about what it means to be human as he explores “human augmentation” through AI technology. According to his predictions, humans will one day have direct, biologically-linked access to the world’s storehouse of knowledge (Matrix-style upgrades) through computer-chip-assisted brains, making them vastly intellectually superior to their “unaugmented” brothers and sisters.
“What does it mean to be human?” is the question that is the elephant in the room.