What’s the future of Artificial Intelligence? (Raconteur):
This being the second post about my research project, I have decided to change tack and head in a more philosophical direction: a departure from the topic I originally chose, VR technology, towards AI and the moral panic surrounding the subject. This came about because my computer had issues and I could not begin the digital artefact I initially chose.
However, the new topic, in research report/essay form, appears to be a more interesting route for me, and thus one to which it is easier to apply critical thinking and time to achieve a better finished product. I have created this Prezi to outline in further depth exactly what this structure will be and the main arguments of the piece.
Since its earliest modern iterations, artificial intelligence (AI) has been – unfortunately and possibly mistakenly – linked to gender. Even though AI has been theorised about since the Ancient Greeks (you can find a timeline of AI here), it was Alan Turing’s conceptualisation of a test to ascertain a machine’s intelligence (now known as the Turing test) that may have caused this (Halberstam 1991). To conduct the Turing test, a judge communicates with a man and a machine in writing, without ever coming into contact with either subject; the machine passes if it is indiscernible from the man. The issue with this test is that Turing uses a man and a woman as the control, erroneously assuming that gender is an intrinsic value in a human (based on anatomy alone).
In our postfeminist context, we know that gender is a complex spectrum arising from a combination of brain structure, genetics, sex hormones, genitals and, most importantly, societal conditioning. “Turing does not stress the obvious connection between gender and computer intelligence: both are in fact imitative systems” (Halberstam 1991). We now know that gender is constructed and reconstructed over time. If gender should apply to AI at all, it would present itself as a product of the programmers’ individual gender practice rather than something innate to the machine.
Instances of AI already surround us in everyday life, the most easily recognisable being the personal-assistant software in smartphones, tablets and computers (Siri, Cortana, and now Google Assistant). Each of these has a female voice as its default setting. In a discussion of the many feminine-named assistants, Dennis Mortensen, founder of x.ai, has said that we take orders better from a female voice than a male one. This trend continues in Microsoft’s endeavours to create AI bots on Twitter, most notably the “teen girl” conversation bot, Tay.
Bots and smartphone apps are both examples of weak AI – AI that simulates human intelligence by executing the simplest version of a task. In this podcast about Tay’s rapid corruption into racist Tweets, Alex Hern refers to Microsoft’s previous app Fetch!, which identifies dog breeds from pictures – any picture, as it need not include an actual dog. Based on this understanding of weak AI, I can only assume female voices are programmed in to make the apps and bots more palatable and appealing. However, this can only be described as “machines in drag“, with very little positive effect on intersectional feminism in society today (Robbins 2016).
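The narrowness of weak AI is easy to illustrate in code. A classifier in the spirit of Fetch! always returns its least-bad label, even for inputs nothing like a dog – it has no concept of “none of the above”. A minimal sketch (the breed “features” and scoring are invented for illustration, not Microsoft’s actual method):

```python
# A toy "weak AI" classifier: it always returns the best-matching label,
# even for inputs that match nothing well. All features are invented.

BREED_FEATURES = {
    "labrador": {"floppy_ears", "short_coat", "broad_head"},
    "poodle": {"curly_coat", "long_muzzle"},
    "husky": {"pointed_ears", "thick_coat", "blue_eyes"},
}

def classify(features: set) -> str:
    """Return the breed whose feature set overlaps most with the input.

    Like Fetch!, this never answers "not a dog" -- it simply picks the
    least-bad match, which is why any picture gets assigned a breed.
    """
    return max(BREED_FEATURES, key=lambda b: len(BREED_FEATURES[b] & features))

print(classify({"pointed_ears", "thick_coat"}))  # plausibly a husky
print(classify({"four_wheels", "windscreen"}))   # a car still gets a breed
```

The second call is the point: with zero overlap everywhere, the function still confidently emits a breed, mirroring Fetch! labelling pictures that contain no dog at all.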
Due to the close association of the machine with military intelligence (one of the first iterations of the computer was developed by Turing in WWII in response to the Nazis’ Enigma machine, after all), “computer technology is in many ways the progeny of war in the modern age” (Halberstam 1991). The probability of weaponised autonomous AI becoming a threat led to a gendering of the technology as female. Feminist theory sees the female as Other in comparison to the male, and in the same way – even in the Turing test – technology is also othered. Andreas Huyssen identifies the writers at the heart of this imagining of technology as a female harbinger of destruction (cited in Halberstam 1991).
Halberstam, J 1991, ‘Automating Gender: Postmodern Feminism in the Age of the Intelligent Machine’, Feminist Studies, vol. 17, no. 3, pp. 439–460.
Haraway, D 1991, ‘A Cyborg Manifesto: Science, Technology and Socialist-Feminism in the Late Twentieth Century’, in Simians, Cyborgs and Women: The Reinvention of Nature, Free Association, London.
When it comes to artificial intelligence, many people believe that the introduction of AI and robots could lead to a dystopian world similar to the one portrayed in “Terminator Salvation” (p.s. Terminator Salvation is a terrible film), where robots have enslaved humanity. Whilst not entirely implausible, a much greater moral concern is the threat of unemployment, with the World Economic Forum suggesting that as many as 5 million jobs across 15 developed and emerging economies could be lost by 2020 (Brinded, 2016). In fact, many people are already starting to lose their jobs to machines, with self-serve checkouts being a major example of machines doing a job previously undertaken by human employees, but with greater efficiency and lower cost. However, I am more focused on investigating the threat posed by human-like robots, rather than machines in general. Why? Because that’s what society imagines when you mention artificial intelligence: machines that replicate our human bodies.
In his book ‘Digital Soul: Intelligent Machines and Human Values’, Thomas M. Georges hypothesises how the introduction of sentient beings into society might be received by humans. Georges states that “learning to live with superintelligent machines will require us to rethink our concept of ourselves and our place in the scheme of things” (Georges 2003, pg. 181). This statement raises many philosophical questions, which I will explore in my next blog post alongside an in-depth look at the 2015 film ‘Ex-Machina’. Georges’ statement does, however, imply that, unsurprisingly, living with robots would cause some conflict and would not be a smooth transition for humans. Having said that, many will say that we already live amongst various forms of AI, such as Siri and Cortana, smart home devices and somewhat annoying purchase predictions. However, these are forms of “weak” AI, and we are still a long way from a society where humans co-exist with sentient beings. All we can do, for now, is worry and imagine.
Do you have a digital plan for when you die? An idea of what you want to do with your online presence after death? “Nine out of 10 Australians have a social media account of some description, yet the vast majority have not even had a conversation – let alone written anything down – about what should happen to these accounts when they die” (Brad Hazzard, 2014).
What if you could live on after death? What if, when you died, your social networks took the information you had provided, integrated it with software that analysed the way you interacted with the medium, and then continued your interaction for you?
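In its crudest form, software like this might learn word-transition statistics from a person’s past messages and generate new text in their voice from those statistics. A toy sketch of that idea (the sample messages and every name in it are invented for illustration – real systems would be far more sophisticated):

```python
import random
from collections import defaultdict

# Toy sketch of a posthumous "voice": learn which word tends to follow
# which in a person's past messages, then generate new text from those
# statistics (a simple Markov chain). Sample messages are invented.

def build_chain(messages):
    chain = defaultdict(list)
    for msg in messages:
        words = msg.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)  # record every observed next-word
    return chain

def generate(chain, start, length=8, seed=0):
    random.seed(seed)  # fixed seed so the sketch is reproducible
    word, out = start, [start]
    for _ in range(length):
        options = chain.get(word)
        if not options:
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

past_messages = [
    "miss you so much",
    "love you so much",
    "so glad we talked today",
]
print(generate(build_chain(past_messages), "so"))
```

Everything the generator says is stitched together from phrases the person actually used, which is exactly why such output can feel eerily like them without being them.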
Although the above video is intended as a parody of sorts, this kind of thinking isn’t too far off: research is underway, and programs that play this role already exist.
Currently, Facebook opts to memorialise accounts when people pass away, unless family members request that they be deleted. But what if we didn’t have to stop at posting tributes and tagging our loved ones in statuses? What if we could just message them, tell them how much we loved and/or missed them, and get a response?
Two years ago, a friend my age passed away from cancer, and I had sent her messages in the days leading up to this. I had dyed my hair purple, as it was her favourite colour, and wanted to show my love and support for her through this difficult time. While I’m sure she did not see the post, it makes me wonder what would have happened if this technology had been available. What would she have said? Would it have reflected the girl I knew, and if it did, would she really be dead? And if the AI which responded evolved over time based on conversations, would she still be the same person as when she physically died?
Twitter is all aflutter about Tay, the racist lady-AI from Microsoft, who was taken offline less than a day after her launch. According to her makers, “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” Unfortunately, this makes her extremely easy to manipulate, and she was quickly transformed into a genocide-loving racist.
Tay is an example of a phenomenon in AI theory: the emergence of a gendered AI.
AI has been described as the mimicking of human intelligence to different degrees: ‘strong AI’ attempts to recreate every aspect, at far greater cost in money, resources and time, while ‘weak AI’ focuses on a specific aspect. Tay, a female AI targeted at 18–24-year-olds in the US, is very much about communicating with Millennials. In my previous posts, I’ve mentioned a number of AI representations in the media, all of which are gendered, usually as female. Dalke and Blankenship point out that “Some AI works to imitate aspects of human intelligence not related to gender, although the very method of their knowing may still be gendered.”
They go on to suggest that the Turing Test “arose from a gendered consideration, not a technological one”: in Turing’s original paper proposing the test, the examiner tries to determine the difference between a man and a woman, and the same differentiation process could then be applied to humans and AI.
If AI is gendered, then researchers are proposing that there is an algorithm for gender, which in our postfeminist context seems to oversimplify the issue. Gender is entirely constructed, and it would be constructed on the part of the AI during its development in the same way that humans construct and reconstruct their own gender in tandem with their identity.
Tay is a glorified bot that responds to specific stimuli. Perhaps the description runs the other way too: AI in general is a glorified bot designed to respond to stimuli and learn from them.
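That “respond to stimuli and learn from them” loop can be remarkably small, which is also why it is so easy to manipulate. A hypothetical sketch (not Microsoft’s actual design) of a Tay-style bot that simply parrots back whatever phrasing users have fed it most often:

```python
from collections import Counter

# Hypothetical sketch of a Tay-style bot: it "learns" by counting the
# messages users feed it and parroting the most common one back.
# Whoever repeats a phrase most often controls what it says next --
# the same dynamic that corrupted Tay within a day.

class ParrotBot:
    def __init__(self):
        self.replies = Counter()

    def learn(self, user_message: str):
        self.replies[user_message] += 1

    def respond(self) -> str:
        if not self.replies:
            return "hello!"
        return self.replies.most_common(1)[0][0]

bot = ParrotBot()
bot.learn("have a nice day")
bot.learn("robots are great")
bot.learn("robots are great")
print(bot.respond())  # prints "robots are great" -- repetition wins
```

The bot has no notion of what any message means; its “personality” is just the statistics of its inputs, which is exactly the vulnerability coordinated users exploited.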
Artificial Intelligence has always been a fascinating aspect of cyber culture, particularly for someone like me, who grew up on thought-provoking science-fiction films such as ‘Her’ and ‘Ex-Machina’. In fact, ‘Ex-Machina’ was one of the reasons I chose to explore the idea of Artificial Intelligence for my digital artefact. My intention is to create a four-part podcast exploring the historical, moral, philosophical and ethical aspects of Artificial Intelligence.
The origins of Artificial Intelligence are commonly traced back to the revolutionary mathematician Alan Turing, who first questioned whether machines could think. His adaptation of the imitation game eventually became famous as ‘the Turing test’, which, according to Prof. Noel Sharkey, is “a useful way to chart the progress of AI” (Sharkey, 2012). Turing’s revolutionary research into Artificial Intelligence paved the way for authors such as Arthur C. Clarke and Isaac Asimov, whose three laws of robotics have evolved from a mere literary device into informal rules upheld by robotics scientists and researchers.
Another interesting aspect of artificial intelligence that I intend to research is the way it has been represented in science-fiction films, from ‘Blade Runner’ to ‘Ex-Machina’. In presenting this idea, I will review the 2015 film ‘Ex-Machina’ and examine how it deals with the aforementioned aspects of Artificial Intelligence. I have already begun extensive research into each individual aspect and look forward to presenting my progress through each blog post.