Tag Archives: artificial intelligence

Naive and Dangerous: A New Podcast on Emergent Media

This is Episode One of Naive and Dangerous, a podcast about emergent media brought to you by two media researchers, Ted Mitew (on Twitter as @tedmitew) and Chris Moore (on Twitter as @cl_moore).

The episode is titled: “I’m sorry Dave. I’m afraid I can’t do that.” Why Humans Fear A.I.

The intro/outro music is ‘No BPM’ by Loveshadow: dig.ccmixter.org/files/Loveshadow/58707

What will Artificial Superintelligence mean for Human life?

The following is an essay I produced for my research in the subject of emerging media issues (BCM310) as a student of the University of Wollongong. I considered it relevant to the topic of cybercultures as well, so I am sharing it here. I am by no means an authority on the matter of superintelligence, but it is a topic which intrigued me. For any comments or feedback, you can reach me at @samhazeldine.

Transcript:

“What will Artificial Superintelligence mean for Human life: A conceptualisation of the coming technological singularity & its impact on human existence”

Introduction

Throughout the last century, popular culture has represented superintelligent or human-level A.I. with varying senses of morality, dating back to the cinema of the late 1920s. These representations have forged popular discourses around advanced A.I. and its role as a catalyst, creating a dichotomy of thought towards a dystopian or utopian future beyond the singularity. Academic understanding suggests we utilise cautionary dystopian ideals to reinforce the prevention of uncontrollable A.I. growth. This assumes our technological development reaches a degree at which deep learning aided by quantum computing is efficient and reliable, following which the singularity can unfold.

Through careful analysis of the works produced by philosophers and theorists such as I.J. Good, Ray Kurzweil and Nick Bostrom, this piece will discuss the potential for artificially superintelligent beings to lead us towards a bright utopian future, or towards an uncertain dystopian future where we survive as relics of a bygone era.

Developing the notion of ‘The Singularity’

The original concept of a technological singularity was articulated by the mathematician Alan Turing in the 1950s:

It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control. (Turing, 1952)

This somewhat gloomy prediction plays on the developing notion of an eventual dystopia, which in the years since Turing's expression has been reinforced by popular culture, with films such as Blade Runner (1982), Terminator (1984) and I, Robot (2004) restating these ideas of machines in control.

A contemporary and colleague of Turing's, I.J. Good, provides another important theory, known as the 'intelligence explosion' (I.J. Good, 1965). This hypothesis details how, once superintelligence is achieved, A.I. will be able to build ever more sophisticated computers, a feedback loop that eventually reaches a speed at which innovation is incomprehensible by current standards, creating intellectual potential beyond our reach of control.

This idea of gradual, exponential increase in computational potential echoes Moore's Law, the observation that the number of transistors on an integrated circuit, and with it computing power, doubles roughly every two years.
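
As a rough worked illustration of what that doubling implies, here is a small Python sketch (a deliberate simplification that assumes a clean two-year doubling period and ignores the physical limits now slowing the trend):

```python
# Toy illustration of exponential growth under Moore's Law.
# Assumes a strict two-year doubling period, which is a simplification.
base_year, base_transistors = 1971, 2_300  # Intel 4004, a common reference point

for year in range(1971, 2032, 10):
    doublings = (year - base_year) / 2     # one doubling every two years
    count = base_transistors * 2 ** doublings
    print(f"{year}: ~{count:,.0f} transistors per chip")
```

Even this crude projection shows why exponential trends confound intuition: each decade multiplies the total roughly thirty-fold.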

It is this trend that author and computer scientist Ray Kurzweil extrapolates to artificial superintelligence, predicting that it will drive innovation to reach the singularity by 2045:

$1000 buys a computer a billion times more intelligent than every human combined. This means that average and even low-end computers are vastly smarter than even highly intelligent, unenhanced humans. (Kurzweil, 2006)

With regards to impact, Kurzweil reimagined the phenomenon of the singularity as being 'neither utopian nor dystopian but having an irreversible impact on our lives as humans, transforming concepts we rely on to give meanings to our lives'.

While the future beyond the singularity is heavily debated, the theorists discussed here share little doubt that the singularity will occur; the open question is when computing will reach the necessary level of sophistication.

Superintelligent A.I. by 2045

Kurzweil's timeframe is shared by several modern theorists, including the Swedish philosopher Nick Bostrom, who states:

There is more than 50% chance that superintelligence will be created within 40 years, possibly much sooner. By “superintelligence” I mean a cognitive system that drastically outperforms the best present-day humans in every way… (Bostrom, 1997)

This opinion, like Kurzweil’s should be considered as just that, an opinion, however as is the case with all visionaries the degree of credence which can be placed on their ideas require further, deep examination. By deconstructing the ability to fulfill such a prediction and what it might require happens in the next 20-30 years to reach this point,  it can be better understood what the likelihood and consequences may be of this intelligence explosion occurring. The concept of deep learning is a key factor in the progression towards human level artificial intelligence.

Deep learning is essentially a computer's ability to capture information from various sources, including user inputs and the analysis of big data, and to encode that information in artificial neural networks. These loosely resemble the neural and memory networks of the human brain; unlike the brain, however, a machine's networks are not limited to the physical space of a cranium, thanks to 'the cloud'.
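
To give a flavour of the underlying mechanics, here is a toy sketch of a single artificial neuron learning from examples, written in plain Python with NumPy (an illustration of the principle only, not of any production system):

```python
import numpy as np

# A single artificial neuron learning the logical AND function
# by gradient descent; deep learning stacks many layers of such units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # example inputs
y = np.array([0, 0, 0, 1])                      # target outputs (AND)

w, b, lr = np.zeros(2), 0.0, 0.5                # weights, bias, learning rate

for _ in range(2000):
    pred = 1 / (1 + np.exp(-(X @ w + b)))       # sigmoid activation
    err = pred - y                              # how wrong is each prediction?
    w -= lr * X.T @ err / len(y)                # nudge weights to reduce error
    b -= lr * err.mean()

print(np.round(1 / (1 + np.exp(-(X @ w + b)))))  # -> [0. 0. 0. 1.]
```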

For example, through programs such as Google DeepMind, experts were able to utilise deep learning techniques to teach their AlphaGo A.I. to defeat the reigning European champion of the board game Go, a 2,500-year-old game that is exponentially more complex than chess (Metz, 2016). Such an achievement is a clear-cut example of the early potential of deep learning technology. Moreover, this method of machine learning is also utilised at consumer scale, in the form of Netflix viewing suggestions and Amazon purchase recommendations, to the benefit of both audience and business.

Running in parallel with this development of deep learning technology is the race to develop a stable, usable and reliable quantum computer. Quantum computing involves processing superpositioned qubits of data with applied algorithms, which can potentially solve complex problems much faster than traditional binary computers. Current iterations are in their infancy: the cutting-edge D-Wave 2000Q, a 2048-qubit computer, is the size of a small bathroom and costs US$15 million (Temperton, 2017). Despite this, experts at the Google A.I. innovation laboratory have led the surge in turning this potential into results, with Google's director of engineering claiming in 2015, after a collaborative research project with NASA and USRA: "What a D-Wave does in a second would take a conventional computer 10,000 years to do" (Manners, 2015). However, academics, scientists and philosophers alike concur that this technology still requires significant development in usability and general optimisation to reach anything resembling practical application.
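
To give a flavour of what 'superpositioned qubits' means, here is a minimal single-qubit simulation in plain Python with NumPy (a sketch of the underlying maths, not of real quantum hardware):

```python
import numpy as np

# One qubit's state is a vector of two complex amplitudes.
zero = np.array([1, 0], dtype=complex)  # the |0> state

# The Hadamard gate places the qubit in an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ zero

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(np.abs(state) ** 2)  # -> [0.5 0.5]: equal chance of reading 0 or 1
```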

In an attempt to speed up the optimisation and usability of its computers, D-Wave Systems Inc. has introduced Qbsolv, open-source software designed to let anyone with an internet connection experiment with the quadratic unconstrained binary optimisation (QUBO) problems native to the quantum computer, either through a simulation on traditional computers or on one of D-Wave's own systems (a toy example follows the quote below). The open-source community has been a tremendous driver for technologies such as Android, WordPress and Linux, helping those projects remove bugs and optimise; it is this model that inspired the creation of Qbsolv for users to tangle with. It is an action which would please the authors of a 2007 paper in The Journal of Machine Learning Research, who concluded:

Researchers in machine learning should not be content with writing small pieces of software for personal use. If machine learning is to solve real scientific and technological problems, the community needs to build on each others’ open source software tools. (Sonnenburg et al., 2007)
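
To make the QUBO idea concrete, here is what a tiny instance looks like, solved by brute force in plain Python (the coefficients are arbitrary placeholders of mine; Qbsolv and quantum annealers target instances far too large for brute force):

```python
from itertools import product

# A QUBO problem: choose binary variables x to minimise sum of Q[i,j]*x[i]*x[j].
# Toy 3-variable instance with arbitrary coefficients.
Q = {(0, 0): -1, (1, 1): -1, (2, 2): -2,  # linear terms (diagonal entries)
     (0, 1): 2, (1, 2): 1}                # couplings between variables

def energy(x):
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

best = min(product([0, 1], repeat=3), key=energy)
print(best, energy(best))  # fine for 3 variables, hopeless for 2000
```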

By utilising this inherently collaborative approach to development, quantum processing capability will continue on its exponential upward trajectory. Applying this sophisticated method of computing to the equally exciting deep learning potential of machines, the idea that superintelligent artificial life is more than 30 years away becomes scarcely believable. Thus Kurzweil's prediction of 2045 doesn't appear to be outside the realm of possibility. So what does this mean for humans beyond 2045?

Planning for Singularity

Regardless of timeline, well-versed researchers broadly agree that superintelligent A.I. will be achieved at some moment in the coming decades. At this point of singularity, if events unfold as I.J. Good hypothesised in 1965, we have cause for concern:

Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind.  Thus the first ultraintelligent machine is the last invention that man need ever make. (I.J Good, 1965)

These sentiments have an inherent relevance given the nature of deep learning in A.I., and although an intelligence 'explosion' is a fairly dramatic term, the end result could quite possibly be the same, if more gradual. But does our intellectual inferiority necessarily determine our place 'under machines' control', as Turing foreshadowed?

Perhaps a better angle of enquiry is to consider why a number of researchers and industry leaders hold the perception that we have 'no need to be nervous' about the future after superintelligent A.I., as though we will somehow be able to control these machines or simply 'unplug' them, as notable software engineer Grady Booch expresses:

We are not building A.I. that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end — don’t tell Siri this — we can always unplug them (Booch, 2017).

These ideas are problematic in more than one way. For example, they place too much credence on the integration of human values and laws into the psyche of a superintelligent A.I., a view shared by the likes of Facebook founder and CEO Mark Zuckerberg (Dopfner and Welt, 2016). It is a naive, anthropomorphist assumption that once superintelligent A.I. begin to create other, more sophisticated machines, our value system won't gradually filter out through each iteration, much like the initial message in a game of Chinese whispers. Booch ends by reflecting that this point of our technological development is far away and that we are being distracted from more 'pressing issues' in society.

This lack of mindfulness surrounding the potential consequences of superintelligence concerns those who advocate oversight of rapid A.I. development, in particular the philosopher and neuroscientist Sam Harris, who makes one point which resonates powerfully:

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely (Harris, 2016).

The final sentence, although dark and speculative, is an accurate assessment of the state of affairs. For example, the rate at which both Google's DeepMind and the various quantum computing and A.I. programs are advancing has generated some anxiety, as development remains largely unregulated. Granted, this allows unencumbered freedom of innovation, thus speeding up development; however, the companies driving forward are taking an incredible risk if superintelligent A.I. comes to exist without proper safeguards in place. Bostrom likens this to children playing with a bomb (Bostrom, 2013).

Such safeguards could be as simple as determining which jobs will remain and which will become redundant once humans are replaced by A.I., a process which has already been undertaken in the field of manufacturing. A significantly more complex consideration would be to reorganise the social structure in areas such as government, education and business management. This will become necessary as the efficiency and overall output of superintelligent A.I. is naturally higher, so having these machines in roles such as educator, or in organisational positions, will become commonplace.

There has been progress towards safeguarding the development of superintelligence. In 2015, business magnate and futurist Elon Musk, along with several other technology moguls, launched OpenAI, a research company aimed at ensuring 'friendly A.I.'. The way OpenAI plans to achieve its utopian mission is by heeding the cautionary predictions of the likes of Stephen Hawking, Stuart Russell and Nick Bostrom, who believe that entering the singularity unprepared is existential suicide, and by putting A.I. source code into the open-source community for widespread, ubiquitous access. This method seems counter-intuitive; however, by placing the same technology in the hands of everyone, it takes the potential power out of the hands of any particular individual, company or agency. Co-chairman of OpenAI Sam Altman explains:

Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs, will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else (Levy, 2015).

Despite these noble intentions, it is easy to see how this utopian mission could fall by the wayside through some form of collaboration between like-minded 'bad actors'; crucially though, this is a step in the right direction. With the progress which will unfold over the coming decades, it is to the benefit of all humankind that visionaries such as Musk, Kurzweil and Bostrom continue to review and address the risks surrounding superintelligent A.I. development, to remove the possibility of an existential crisis.

Conclusion

Through thorough examination of the works of philosophers, scientists, experts and businesspeople, there can be little doubt that the singularity will occur; speculation as to when and how rests on subjective predictions drawn from computational trends or isolated empirical research. There is even less certainty about what lies beyond the singularity, so it is essential to apply a cautious, scientific approach to instilling values, ethics and integrity in our first iterations of superintelligent A.I., responding directly to Harris' and Bostrom's anxieties.

An idealistic future perspective avoids the extremes: neither the dystopian wasteland of Blade Runner nor a race of subservient robot slaves, but rather an environment where humans coexist with A.I. in a collaborative effort towards common objectives. It is an ideal which will take some serious planning.

The Future & A.I: How do we ensure the beginning is not the end for humans? – Research Report Plan

What's the future of Artificial Intelligence? (Raconteur):

This being the second post about my research project, I have decided to change tack and head towards a more philosophical angle: a departure from the topic I originally chose, VR technology, towards A.I. and the moral panic surrounding the subject. This came about because my computer had issues and I could not begin the digital artefact I had initially chosen.

However, the new topic, in research report/essay form, appears to be a more interesting route for me, and one to which it is easier to apply critical thinking and time to achieve a better finished product. I have created this Prezi to outline in further depth exactly what the structure and main arguments of the piece will be.

>>Click for Prezi<<

Feminist, Artificial and Intelligent

Since its earliest modern iterations, artificial intelligence (AI) has been, unfortunately and possibly mistakenly, linked to gender. Even though AI has been theorised about since the Ancient Greeks (you can find a timeline of AI here), it was Alan Turing's conceptualisation of a test to ascertain a machine's intelligence (now known as the Turing test) that may have caused this (Halberstam 1991). To conduct the Turing test, a judge communicates with a man and a machine via written means, without ever coming into contact with either subject; the machine passes if it is indiscernible from the man. The issue with this test is that Turing uses a male and a female as its control, erroneously treating gender as an intrinsic value in a human (based on anatomy alone).

In our postfeminist context, we know that gender is a complex spectrum arising from a combination of brain structure, genetics, sex hormones, genitals and, most importantly, societal conditioning. "Turing does not stress the obvious connection between gender and computer intelligence: both are in fact imitative systems" (Halberstam 1991). We know now that gender is constructed and reconstructed over time. If gender applied to AI at all, it would present itself as a product of the programmers' individual gender practice rather than something innate to the machine.

[Image: visualisation of the contributing factors to human gender]

Instances of AI in everyday life already surround us, the most easily recognisable of which are the personal assistant softwares in smartphones, tablets and computers (Siri, Cortana, and now Google Assistant). Each of these has a female voice as its default setting. In a discussion of the many feminine-named assistants, Dennis Mortensen, founder of x.ai, has said that we take orders better from a female voice than from a male one. This trend continues in Microsoft's endeavours to create AI bots on Twitter, most notably the "teen girl" conversation bot, Tay.

Bots and smartphone apps are both examples of weak AI: AI that simulates human intelligence by executing the simplest version of a task. In this podcast about Tay's rapid corruption into racist Tweets, Alex Hern refers to Microsoft's previous app Fetch!, which identifies dog breeds from pictures (any picture; it need not include an actual dog). Based on this understanding of weak AI, I can only assume female voices are programmed in order to make the apps and bots more palatable and appealing. However, this can only be described as "machines in drag", with very little positive effect on intersectional feminism in society today (Robbins 2016).

Due to the close association of the machine with military intelligence (one of the first iterations of the computer was developed by Turing during WWII in response to the Nazis' Enigma machine, after all), "computer technology is in many ways the progeny of war in the modern age" (Halberstam 1991). The probability of weaponised autonomous AI becoming a threat led to a gendering of the technology as female. Feminist theory sees the female as Other in comparison to the male, in the same way that, even in the Turing test, technology is othered. Andreas Huyssen identifies writers at the heart of this imagining of technology as a female harbinger of destruction (cited in Halberstam 1991).

 

References

Halberstam, J 1991, 'Automating Gender: Postmodern Feminism in the Age of the Intelligent Machine', Feminist Studies, vol. 17, no. 3, pp. 439-460.

Haraway, D 1991, 'A Cyborg Manifesto: Science, technology and socialist-feminism in the late twentieth century', in Simians, Cyborgs and Women: The Reinvention of Nature, Free Association, London.

Robbins, M 2016, ‘Is BB-8 a Woman: why are we so determined to assign gender to AI?’, The Guardian, 12 February, viewed 6 April, <https://www.theguardian.com/science/the-lay-scientist/2016/feb/12/is-bb-8-a-woman-artificial-intelligence-gender-identity>.

Employment and living with AI

When it comes to artificial intelligence, many people believe that the introduction of AI and robots could lead to a dystopian world similar to that portrayed in "Terminator Salvation" (p.s. Terminator Salvation is a terrible film), where robots have enslaved humanity. Whilst not entirely implausible, a much greater moral concern is the threat of unemployment, with the World Economic Forum suggesting that as many as 5 million jobs across 15 developed and emerging economies could be lost by 2020 (Brinded, 2016). In fact, many people are already starting to lose their jobs to machines, with self-serve checkouts being a major example of machines doing a job previously undertaken by human employees, but with greater efficiency and lower cost. However, I am more focused on investigating the threat posed by human-like robots, rather than machines in general. Why? Because that's what society imagines when you mention artificial intelligence: machines that replicate our human bodies.

[Image: Terminator (source)]

In countries such as Japan, many more jobs are now being done by robots. In fact, a theme park in Nagasaki, Japan is about to open a "robot kingdom" section where over 200 robots will work as bartenders, chefs, luggage carriers and more (Niinuma, 2016). At the 2016 Milken Institute Global Conference in Beverly Hills, California, many of the guests confirmed that robots are slowly being employed by various companies, at the expense of us humans (Japan Today, 2016). The idea of robots or sentient beings in the workforce leads to a greater moral question: could humans and robots co-exist peacefully?

In his book 'Digital Soul: Intelligent Machines and Human Values', Thomas M. Georges hypothesises how the introduction of sentient beings into society might be received by humans. Georges states that "learning to live with superintelligent machines will require us to rethink our concept of ourselves and our place in the scheme of things" (Georges 2003, p. 181). This statement raises many philosophical questions, which I will explore in my next blog post alongside an in-depth look at the 2015 film 'Ex Machina'. Georges' statement does, however, imply that, unsurprisingly, living with robots would cause some conflict and would not be a smooth transition for humans. Having said that, many will say that we are already living amongst various forms of "weak" AI, such as Siri or Cortana, smart home devices and the somewhat annoying purchase prediction. However, these are forms of "weak" AI, and we are still a long way from a society where humans co-exist with sentient beings. All we can do, for now, is worry and imagine.

References

Brinded, L 2016, "WEF: Robots, automation and AI will replace 5 million human jobs by 2020", Business Insider Australia, viewed 4 May 2016, http://www.businessinsider.com.au/wef-davos-report-on-robots-replacing-human-jobs-2016-1?r=UK&IR=T

Georges, T. M. 2003, Digital soul: intelligent machines and human values. Boulder, CO: Westview Press

N/A 2016, "Rich and powerful warn robots are coming for your jobs", Japan Today, viewed 5 May 2016, http://www.japantoday.com/category/technology/view/rich-and-powerful-warn-robots-are-coming-for-your-jobs

Niinuma, O 2016, "Theme park's 'robot kingdom' seeks to upend Japan's service industry", Nikkei Asian Review, viewed 5 May 2016, http://asia.nikkei.com/Tech-Science/Tech/Theme-park-s-robot-kingdom-seeks-to-upend-Japan-s-service-industry?utm_source=fark&utm_medium=website&utm_content=link

Ethical Issues of AI

In the popular 1993 thriller 'Jurassic Park', Jeff Goldblum's character says to Richard Attenborough's character: "your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should." The reason I quote this is that in this post, I intend to focus on the ethical aspect of AI. However, before focusing on the ethical issues relating to Artificial Intelligence, I will first attempt to differentiate ethics and morals, as they are often intertwined and confused with each other.

Separating the ethical and moral aspects of any particular topic is incredibly difficult, as ethics and morals often cross over and are almost one and the same. For those of you who don't know, the word 'ethics' originates from the Greek words ethos and ethikos, and the word 'morals' is derived from the Latin words mores and moralis. In an article for The Conversation, Walker and Lovat state that "'ethics' leans towards decisions based upon individual character" whilst "'morals' emphasises the widely shared communal or societal norms about right and wrong" (Walker & Lovat, 2014). So, if we follow these differences, where does that leave us in regards to the various issues surrounding Artificial Intelligence?

In regards to Artificial Intelligence, it is incredibly difficult to separate the ethical and moral issues, as they are often intertwined. The moral (societal) issues are well known to us: What happens if robots turn on us? What happens when we lose our jobs to robots? Can we feel truly safe in the presence of robots? However, what are the ethical (individual) issues associated with Artificial Intelligence?

One ethics-driven issue that seems to be prevalent amongst the scientific community is that of technological singularity. Technological singularity refers to a hypothetical moment in the future when artificial intelligence surpasses the limitations of mankind, at which point machines, rather than scientists, would be the ones developing new technologies. Why is this an ethical issue? Well, the scientists developing the technology for artificial intelligence are essentially helping to create a possible future where humans are no longer useful and no longer in control. There are many ongoing arguments as to whether technological singularity is something we should fear or embrace, which is why it can be considered an ethical issue of artificial intelligence, and arguably the most important one.

Arguably the more recognised and acknowledged ethical issue, "The Frankenstein Complex" remains significant even today and can be discussed with enormous depth (on this note, it will be further explored in my podcast series). "The Frankenstein Complex" refers to the "almost religious notion that there are some things only God should know" (McCauley 2007, p. 10). Although this idea may be more prominent in science fiction than in everyday life, "The Frankenstein Complex" is still a prevalent issue amongst the scientific community and one that continues to cause debate.

[Image: Frankenstein and Blade Runner, from https://rhulgeopolitics.wordpress.com/2015/10/17/ships-brooms-and-the-end-of-humanity/]

 

To conclude, there are many ethical issues associated with artificial intelligence, yet many of them are intertwined with moral aspects (which I will discuss in next week's blog post). Having said this, technological singularity and "The Frankenstein Complex" both stand out from an ethical perspective and continue to divide opinion.

References

McCauley, L 2007, "The Frankenstein Complex and Asimov's three laws", AAAI Workshop – Technical Report, pp. 9-14

Walker, P & Lovat, T 2014, 'You say morals, I say ethics – what's the difference?', The Conversation, 18 September, viewed 19 April 2016, <http://theconversation.com/you-say-morals-i-say-ethics-whats-the-difference-30913>

It’s a…n AI?

Twitter is all a-flutter about Tay, the racist lady-AI from Microsoft who was taken offline less than a day after her launch. According to her makers, “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” Unfortunately this makes her extremely easy to manipulate and she was quickly transformed into a genocide-loving racist.

Tay is an example of a phenomenon in AI theory: the emergence of a gendered AI.

AI has been described as the mimicking of human intelligence to different degrees: ‘strong AI’ attempts to recreate every aspect, costing much more money, resources and time; while ‘weak AI’ focuses on a specific aspect. Tay, as a female AI targeted towards 18-24 year olds in the US, is very much about communicating with Millennials. In my previous posts, I’ve mentioned a number of AI representations in the media, all of which are gendered, usually as female. Dalke and Blankenship point out “Some AI works to imitate aspects of human intelligence not related to gender, although the very method of their knowing may still be gendered.”

They go on to suggest that the Turing Test "arose from a gendered consideration, not a technological one": in Turing's original paper proposing the test, the examiner is trying to determine the difference between a man and a woman, and the same differentiation process could then be applied to humans and AI.

If AI is gendered, then the researchers are proposing there is an algorithm for gender, which in our post-feminist context seems to be oversimplifying the issue. Gender is entirely constructed and would be constructed on the part of the AI in its development in the same way that humans construct and reconstruct their own gender in tandem with their identity.

Tay is a glorified bot that responds to specific stimuli. Perhaps it's the other way around: AI is a glorified bot designed to respond to stimuli and learn from them.

More sources to consider:

Click to access man1996030047.pdf


http://www.jstor.org/stable/3178281?seq=1#page_scan_tab_contents
https://www.theguardian.com/science/the-lay-scientist/2016/feb/12/is-bb-8-a-woman-artificial-intelligence-gender-identity

Online Branding and Power: Who has control over the presence?

So in my last post, I set up my ideas of branding and started to explore how the concept relates to cyberculture.  That post was a great foundation for the next leg of research: a series of questions about how these brands interact with users, and what the future might hold.

Cyberculture allows for a great deal of interaction between brand and consumer, but I question who really has the most control.  The internet is a great space for open communication, but does that tip the balance of control in the opposite direction to where it has typically been?  The lecture on cyberpunks led me to consider this idea in relation to users who have that power and choose to abuse it: trolls who interact with brands for the sole purpose of derailing the brand image.  The 'trollpunk' audience hijacks the presence of the brand with the intention of disrupting the hierarchy of power (Chen 2012), and this is becoming a social norm.  Chris's comment in the week 4 lecture, "[I]n the absence of the body, means people can have powerful emotional responses" (Moore 2016), also feeds into this idea of heightened emotional responses. The lack of physical, real-time presence means there is time to plan, curate, and execute never-ending arguments, either to troll, or to respond.

This idea of trolling leads me to consider online presences and automatic responses, whether from brand or consumer.  Twitter bots are quick and easy to set up and could be used for a great number of things, but does this mean we are heading towards a social media network of artificial intelligences?  If Twitter bots are becoming more accessible to create and utilise, and their responses are becoming more realistic, then does the future of online branding lie in a self-evolving AI structure with base ideologies that mirror those of the brand and evolve depending on the audience that interacts with them?  Microsoft's recent attempt resulted in something they were not proud of; however, it mirrored the idea of the "destabilisation of established order by the development of artificial intelligence" (Moore 2016), as users interacted with the AI account in order to change it from an 'innocent' bot modelled after a teenage girl into a Nazi sex bot (Horton 2016).  The Barbie brand is also planning to peer into the cyberculture world, incorporating AI into their dolls so that children can have real conversations with the toys, adding a new layer to the identity of both the doll and the brand, and creating a new brand presence through each doll as it is interacted with.
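
To illustrate just how low the barrier to entry is, here is a minimal reply-bot sketch in Python using the Tweepy library (method names as in Tweepy 4; the credentials, the brand handle and the canned reply are all placeholders of mine, and a real brand bot would add moderation and far more logic):

```python
import tweepy

# Placeholder credentials from a Twitter developer account.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Find recent mentions of a hypothetical brand and post a canned reply.
for tweet in api.search_tweets(q="@ExampleBrand", count=5):
    api.update_status(
        status=f"@{tweet.user.screen_name} Thanks for reaching out!",
        in_reply_to_status_id=tweet.id,
    )
```

Even a script this small shows how a Tay-style presence can be stood up in an afternoon; everything interesting, and risky, lies in what the bot is allowed to learn from the replies it receives.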

*FURTHER OPTIONAL READING ABOUT PERSONAL BRANDING AND IDEAS CAN BE FOUND HERE*

 


Chen, A 2012, Trollpunk is the New Cyberpunk, The World of Today, viewed 30 March 2016, <http://worldoftoday.tumblr.com/post/24514056899/trollpunk-is-the-new-cyberpunk>

Gershgorn, D 2015, Barbie Learns to Chat Using Artificial Intelligence, Australian Popular Science, viewed 30 March 2016, <http://www.popsci.com.au/robots/artificial-intelligence/barbie-learns-to-chat-using-artificial-intelligence,409334>

Horton, H 2016, Microsoft deletes 'teen girl' AI after it becomes a Hitler-loving sex robot within 24 hours, The Telegraph, viewed 25 March 2016, <http://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/>

Moore, C 2016, Week Four – Experiencing Cyberculture, Cybercultures Blog, viewed 30 March 2016, <https://cyberculturesblog.wordpress.com/week-four-experiencing-cyberculture/>

Shani, O 2015, From Science Fiction to Reality: The Evolution of Artificial Intelligence, Wired, viewed 30 March 2016, <http://www.wired.com/insights/2015/01/the-evolution-of-artificial-intelligence/>

 

Automatonophobia, The Future of Robots and Artificial Intelligences. I, for one, accept our new robot overlords.

http://player.cnbc.com/p/gZWlPC/cnbc_global?playertype=synd&byGuid=3000502094&size=530_298

Automatonophobia is the fear of anything that falsely represents having sentience: the autonomy to act outside of human control. Typical humans, afraid of what they can't control or manipulate.

A common theme of cyberculture, and a running trope in media and film, is the fear and demonisation of robots. It is not so much the robots themselves we fear as what they are capable of, and will be capable of, the further technology advances. We see it time and time again, from Ultron to Ava: we create fictional stories in which doomed robots, with their flawed understanding of humanity (a reflection of our own humanity, they tell us), will ultimately doom us.


Robotics has come a long way in a very short amount of time, and companies like Hanson Robotics have their eyes firmly set on creating lifelike animatronic androids designed solely for human interaction: to be more human than human. Sophia is the real-life Ava of Ex Machina. Creator Dr David Hanson's goal is to make robots "as conscious, creative and capable as any human" and eventually, one day, "indistinguishable from humans". He envisions a world of robots not dehumanising us, but reminding us of our humanity.

[Image via Facebook]

But more on that later. Essentially, I wish to say: robots are not evil. They are not Ultron, who was programmed with Tony Stark's (our) flaws and faults. They do not become Ava, whose superior intelligence used our own humanity against us. They are what we make them to be. Cyberculture, society, or whoever, needs something to fear, something we think threatens what makes us human.

[Image: comments on 'Sophia']

What I will be talking about instead is the path of robotics, or 'humanbotics', and where it's heading. Starting with the history of robots and how we came to fear them, I wish to track through media the villain label we have come to attach to robots, and offer a friendlier take on robots and us. How many innocent robots have succumbed to human hands in film and television? How do the news and the internet react to human-like animatronics? Do we really even need to fear the power of robots? Will they actually take over the world?

Nobody puts Robot in the corner.

Artificial and Sentient

Sentient artificial intelligence has been an ongoing preoccupation in science fiction movies, TV shows, books and other media. Nor is it confined to niche science fiction: it is slowly making its way into mainstream popular culture, as seen with Marvel's Avengers: Age of Ultron, a film focused on a sentient artificial intelligence (Ultron, voiced by James Spader) becoming cognisant of the needlessness of the human race. The same can be seen with I, Robot and Ex Machina, and is explored in Humans and Almost Human.

George Dvorsky has listed a number of myths about artificial intelligence in the wake of AlphaGo winning two out of three games in its Go tournament against grandmaster Lee Sedol. So it seems AI is becoming a closer and closer reality, but it continues to be depicted as something that will replace humanity rather than work with us.

For my project, I’d like to play with this idea and test how people feel about AI walking among us by developing a simple game of deception. The concept of the Turing Test, a test designed to determine if AI is indistinguishable from humanity, underlies this idea.
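
As a starting point, here is one minimal way such a game could be prototyped in Python (the canned responses and the guessing flow are my own placeholder assumptions, not a finished design):

```python
import random

# A toy "human or bot?" deception game: the player asks questions,
# receives answers drawn from a script, and must guess the author.
CANNED = [
    "That's an interesting question. What makes you ask?",
    "I hadn't thought about it that way before.",
    "Honestly, I'm not sure. What do you think?",
]

def play():
    is_bot = random.choice([True, False])
    for _ in range(3):
        input("Ask your interlocutor a question: ")
        if is_bot:
            print(random.choice(CANNED))               # scripted, Tay-style reply
        else:
            print(input("(Hidden human, type a reply): "))
    guess = input("Human or bot? ").strip().lower()
    print("Correct!" if (guess == "bot") == is_bot else "Fooled you!")

play()
```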

Things to research from here:

  • Origins of artificial intelligence (definition would be good)
    • R.U.R (Rossum)
  • Golem, automaton, ?
  • Gendered Robots
    • Metropolis
  • Why are people pursuing this?
  • At what point does a human become a robot from cybernetics?
  • With sentience would artificial intelligence be a replacement/challenger or an aid?
  • Ethics
  • Charles Babbage – calculating engine – hand-cranked computational
  • Genie?

Other related texts:

  • Skinners trilogy – brain scans are taken from a terminal car crash victim and imprinted on a cyborg
  • Mr Robot
  • Chappie
  • Her
  • Bicentennial Man
  • Ghost in the Shell
  • Terminator franchise