The Three Laws of Robotics, And It’s The Humans Breaking Them All


The ethical foundation of human–robot interaction was laid down in the Three Laws of Robotics, set forth and carved into stone by the wise and trusted Isaac Asimov. They are simple enough, and I thought it imperative to touch on them in my research. These are the codes of conduct all robots must follow in order to validate their existence and enact their function. Unfortunately, some don't follow the code. Including humans.
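As a playful aside: the Laws form a strict priority ordering, with each Law yielding to the ones above it. That hierarchy could be sketched in a few lines of Python (this is purely my own illustration, with hypothetical names, not anything from Asimov):

```python
def permitted(harms_human: bool, allows_harm_by_inaction: bool,
              ordered_by_human: bool, endangers_self: bool) -> bool:
    """Decide whether a robot may take an action under the Three Laws."""
    # First Law: never injure a human, or allow harm through inaction.
    if harms_human or allows_harm_by_inaction:
        return False
    # Second Law: obey human orders, unless they conflict with the First.
    if ordered_by_human:
        return True
    # Third Law: protect its own existence, subordinate to Laws One and Two.
    return not endangers_self
```

Notice that an order from a human overrides self-preservation, but harming a human overrides everything, which is exactly the ordering the rest of this post works through.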

I did a lot of reading on robotics and human interaction these last few weeks, looking for case studies where robots were not the antagonists, but humans were. I sought examples and evidence of humans being the ones breaking the Robot Laws. The same basic guidelines of do no harm and protect should apply to creator and created, should they not?

An invaluable article by Lee McCauley, passed along by a fellow blogger (whose name I shall edit into this paragraph once I remember it), compared Asimov's Laws to the parable of Frankenstein and his monster. The Laws were Asimov's way of dissipating public fear, but they could not fully address our paranoia about independent, uncontrollable AIs or humanoids. From the golem to Frankenstein, we fear that the creation, once free of its creator, will revolt in retribution, and so we revolt before it gets the chance.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

“Hackers Found a Way to Make Furbies Even Creepier” Gizmodo

I admit it: I hated Furbies too. They were incessantly creepy, even demonic… but they did not deserve our hatred. Too long and too often were they thrown into microwaves or down stairs, or tossed to the bottom of the toy box until their eyes malfunctioned and died. They did nothing to earn that hatred, despite their devilish appearances, yet suffered at the hands of humans. Furbies were made to make young humans happy: a little furry animal designed for companionship. A Furby never raised a hand to a helpless human, and their creator even debated the definition of existence in terms of the Furby.

“Furby can remember these events, they affect what he does going forward, and it changes his personality over time. He has all the attributes of fear or happiness, and those add up to change his behavior and how he interacts with the world. So how is that different than us?” Caleb Chung, ‘Is It Okay to Torture a Robot?’

We feel we can justify it because they aren’t alive, and thus it isn’t abuse. But is ‘alive’ a measure of existence, or of sentience? If its level of sentience is the same as a human’s, does it not deserve “the same protections offered to humans by the legal system”? (Duncan Trussell via Inverse) How do we justify the torture of robots?

Then there is the tragic tale of HitchBOT, a robot built to rely on human kindness, which travelled with us for 26 days across Canada and Germany before meeting its end in Philadelphia. The culprits remain faceless, but it stands as a reminder of how flagrantly we break Asimov’s First Law. A robot may do no harm unto us, but we shall unto it. HitchBOT was a “social robotics experiment”, and it did not fail; we failed it (Madrigal, A 2014).

A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

“ ‘Hateful day when I received life!’ I exclaimed in agony. ‘Accursed creator! Why did you form a monster so hideous that even YOU turned from me in disgust?’ ” Frankenstein, Mary Shelley via Shmoop

Here we come back to the Frankenstein Complex: the moment we come to hate our creations, and our creations come to hate us.

Humans is a UK mini-series from last year that I am planning to watch and study as part of this research project. Like many other sci-fi shows about robots, it imagines ‘synths’ as the latest must-have gadget in our homes. The focus family represents the major viewpoints on human-like androids: the teenager who sees them as ‘slaves’, the father who embraces the technology, the apprehensive and paranoid mother, and the naive child who sees a new playmate. A faction of these synths, however, have developed personalities, a very big no-no, because it means they are no longer bound to obey our orders. The show even touches on Asimov’s Laws, and demonstrates almost immediately how easily they can come undone: the synth, distracted, burns the mother’s arm. The synth, Anita, obeyed all her commands, but cracks were already apparent as she showed signs of free will and of interpreting her orders. We fear the same happening in reality; anything outside of our control, or that threatens our own humanity with its own, is seen as a danger.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

As we learnt from the tragic fates of the Furbies and HitchBOT, robots cannot protect themselves from the violence of humanity.

ATLAS was an ambitious endeavour for Boston Dynamics: a robot that could remarkably balance itself like a human, or at least like a human toddler. When it was tasked with picking up a box, it was met with human conflict… and a hockey stick. Atlas could not very well karate its way to self-defence; all it could do was try to complete its task, its primary function. It could not protect itself. The final clip of the video is, I assume, the robot finally standing up to its bullies and venturing off to find some safer haven.

I even came across a website advocating for the abuse of robots to stop, with many videos and texts as evidence of the crimes against robotics. A website like this would make for an interesting digital artefact, or something similar.

With the end goal of making robots as human as possible, robot abuse becomes a tricky situation. It is one thing to attack a simple, ‘evil’ Furby, but when a robot has human facial expressions and a human voice, is it still okay to hurt it? When robots are as human as possible, would hurting one count as assault? We in no way accept shooting a fellow citizen, yet somehow the distinction emerges that it is okay to kill a synth.


I am a big fan of shows and movies involving futuristic tech. Another recent show worth mentioning is Extant, an alien drama that includes a subplot about a child humanoid named Ethan. It captures the same fascination Back To The Future II gave many of us. Like Hanson Robotics’ Sophia, whom I mentioned in my last post, we are not that far off the fantastical inventions we glimpse through popular culture. Smartphones became inevitable, and so robots are becoming their own entities designed to make our lives easier.

Next post I am going to research more of the paranoias society holds for when, or if, robots become more integral to our lives. There is the threat of job loss, as manual labour becomes ‘robotic’ labour, and the issue of privacy. Synths and androids are simply computers that would be living in our homes, able to record just as much information as a simple hidden webcam.


One thought on “The Three Laws of Robotics, And It’s The Humans Breaking Them All”

  1. I think the Laws in and of themselves are indicative of the problem. AI should not be ABOUT humans. It’s about being something more… other… than human. By tying technology down to rules as simple as these, rules that frankly have nothing to do with the technology itself, we absolutely create room for the chaos we see play out in so many media representations (I, Robot springs to mind immediately). What makes this equally problematic is the fact that humans, as the creators, are unable to remove themselves from the creation entirely. It’s written into the code, sometimes literally, as a patent and so on. Here’s some further reading on this:

