In the popular 1993 thriller ‘Jurassic Park’, Jeff Goldblum’s character says to Richard Attenborough’s character, “your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” The reason I quote this is that, in this post, I intend to focus on the ethical aspect of AI. However, before focusing on the ethical issues relating to Artificial Intelligence, I will first attempt to differentiate ethics and morals, as they are often intertwined and confused with each other.
Separating the ethical and moral aspects of any particular topic is incredibly difficult, as ethics and morals often cross over and are almost one and the same. Now, for those of you who don’t know, the word ‘ethics’ originates from the Greek words ethos and ethikos, and the word ‘morals’ is derived from the Latin words mores and moralis. In an article for The Conversation, Walker & Lovat state that “‘ethics’ leans towards decisions based upon individual character” whilst “‘morals’ emphasises the widely shared communal or societal norms about right and wrong” (Walker & Lovat, 2014). So, if we follow these distinctions, where does that leave us with the various issues surrounding Artificial Intelligence?
In regards to Artificial Intelligence, it is incredibly difficult to separate the ethical and moral issues, as they are often intertwined. The moral (societal) issues are well-known to us: What happens if robots turn on us? What happens when we lose our jobs to robots? Can we feel truly safe in the presence of robots? However, what are the ethical (individual) issues that are associated with Artificial Intelligence?
One ethics-driven issue that seems to be prevalent amongst the scientific community is that of technological singularity. Technological singularity refers to a hypothetical moment in the future when artificial intelligence surpasses the limitations of mankind, meaning machines, rather than scientists, would be the ones developing new technologies. Why is this an ethical issue? Well, if you think about it, the scientists who are developing the technology for artificial intelligence are essentially helping create a possible future where humans are no longer useful and no longer in control. There are many ongoing arguments as to whether technological singularity is something we should fear or embrace, which is why it can be considered an ethical issue of artificial intelligence, and arguably the most important one.
Arguably the more recognised and acknowledged ethical issue, “The Frankenstein Complex” remains significant even today and is one that can be discussed in enormous depth (on this note, this issue will be further explored in my podcast series). “The Frankenstein Complex” refers to the “almost religious notion that there are some things only God should know” (McCauley 2007, p. 10). Although this idea may be more prominent in science fiction than in everyday life, “The Frankenstein Complex” is still a prevalent issue amongst the scientific community and one that continues to cause debate.
Image from: https://rhulgeopolitics.wordpress.com/2015/10/17/ships-brooms-and-the-end-of-humanity/
To conclude, there are many ethical issues associated with artificial intelligence, yet many of them are intertwined with the moral aspects (which I will discuss in next week’s blog post). Having said this, technological singularity and “The Frankenstein Complex” are both issues that stand out from an ethical perspective, and both continue to divide opinion.
McCauley, L 2007, ‘The Frankenstein Complex and Asimov’s three laws’, AAAI Workshop – Technical Report, pp. 9-14.
Walker, P & Lovat, T 2014, ‘You say morals, I say ethics – what’s the difference?’, The Conversation, 18 September, viewed 19 April 2016, <http://theconversation.com/you-say-morals-i-say-ethics-whats-the-difference-30913>.