Digital Artefact; Simple // Complex Machines


A Rube Goldberg machine is a contraption, invention, device, or apparatus that is deliberately over-engineered to perform a simple task in a complicated fashion, generally by way of a chain reaction. This work explores the idea of simple/complex machines, in the hope of educating, building a community, entertaining, and showcasing my knowledge of these wacky mechanisms. There are so many elements that can be explored, discovered and documented in the process of creating a Rube Goldberg machine, so this space intends to approach the concept in diverse ways.

Follow this link to my current exploration of the Rube Goldberg machine through emerging technology and the underlying notion of cybernetics:

https://dailysonny.wordpress.com/category/the-rube-goldberg-machine-%F0%9F%94%A7/ 

Digital Artefact: Learning to Code with Arduino

A Blog in the Life of Melissa

Over the last few weeks I have explored STEM learning and code, and have created a digital artefact reflecting on my experiences. I wanted my artefact to be empowering for myself and others. I wanted to tackle coding, a specialised craft that requires some programming skill and HTML experience, and prove that with the right support, anyone can do it.

I think that I have achieved my goal, and hope that my experience with coding and the Arduino hardware will influence someone else to try and challenge themselves to get creative. Check out my documented experiences below.


NANOMACHINES, SON! (DIGC335 DA)

Sanjihan

The digital artefact is a video analysis of the parallels the Metal Gear series has with real life. The whole theme stems from the question: does science fiction inspire reality, or is it the other way around? After doing some research myself, the answer is both. Science fiction can inspire new ideas that are then replicated in real life, while ideas drawn from real life can be put into practice within the universe of science fiction.

The original idea was to explore the limits of our technology, using Metal Gear Rising as the basis for the theme. Unfortunately, the idea didn’t go very far: I couldn’t make a solid enough video on the topic. There were some ideas to work with, but ultimately it wouldn’t have been a meaty video.

Originally the video would focus on Metal…


What will Artificial Superintelligence mean for Human life?

The following is an essay I produced for my research in the subject of emerging media issues (BCM310) as a student of the University of Wollongong. I considered it relevant to the topic of cybercultures as well, so I am sharing it here. I am by no means an authority on the matter of superintelligence, but it is a topic which intrigued me. For any comments or feedback, you can reach me at @samhazeldine.

Transcript:

“What will Artificial Superintelligence mean for Human life: A conceptualisation of the coming technological singularity & its impact on human existence”

Introduction

Throughout the last century, popular culture has represented superintelligent or human-level A.I. with varying senses of morality, dating back to the cinema of the late 1920s. These representations have forged popular discourses around advanced A.I. and its role as a catalyst, creating a dichotomy of thought towards a dystopian or utopian future beyond the singularity. Academic understanding suggests we utilise cautionary dystopian ideals to reinforce the prevention of uncontrollable A.I. growth. This assumes our technological development reaches a degree at which deep learning, aided by quantum computing, is efficient and reliable, following which the singularity can unfold.

Through careful analysis of the works produced by philosophers and theorists such as I.J. Good, Ray Kurzweil and Nick Bostrom, this piece will discuss the potential for artificially superintelligent beings to lead us towards a bright utopian future, or an uncertain dystopian future where we survive as relics of a bygone era.

Developing the notion of ‘The Singularity’

The original concept of a technological singularity was set out by the mathematician Alan Turing in the 1950s:

It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control. (Turing, 1952)

This somewhat gloomy prediction plays on the developing notion of an eventual dystopia which, in the years since Turing’s expression, has been reinforced by popular culture: films such as Blade Runner (1982), Terminator (1984) and I, Robot (2004) all depict machines in control.

A contemporary and colleague of Turing’s, I.J. Good, provides another important theory, known as the ‘intelligence explosion’ (I.J. Good, 1965). This hypothesis details how, at the point of achieving superintelligence, A.I. will be able to build ever more sophisticated computers, and this feedback loop will reach a speed at which innovation is incomprehensible by current standards, creating unlimited intellectual potential beyond our reach of control.

This idea of gradual, exponential increase in computational potential is the basis for Moore’s Law, the observation that the power of computing and integrated circuits doubles approximately every two years.
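As a rough sketch, Moore’s Law is just exponential growth with a fixed doubling period. The two-year period below is an assumption for illustration; Moore’s original 1965 observation and its later revisions differ on the exact figure.

```python
# Moore's Law sketched as exponential growth. The doubling period
# is an assumed parameter, not a physical law.

def projected_capacity(initial: float, years: float, doubling_period: float = 2.0) -> float:
    """Capacity after `years`, doubling every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# 1,000 transistors, 20 years on (ten doublings):
print(projected_capacity(1_000, 20))  # 1024000.0
```

Ten doublings multiply capacity by 1,024, which is why small changes to the assumed doubling period swing long-range forecasts so dramatically.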

Author and computer scientist Ray Kurzweil applies this trend to the potential of artificial superintelligence, predicting that it will drive innovation to reach the singularity by 2045:

$1000 buys a computer a billion times more intelligent than every human combined. This means that average and even low-end computers are vastly smarter than even highly intelligent, unenhanced humans. (Kurzweil, 2006)

With regards to impact, Kurzweil reimagined the phenomenon of the singularity as being ‘neither utopian nor dystopian but having an irreversible impact on our lives as humans, transforming concepts we rely on to give meanings to our lives’.

While the future beyond the singularity is heavily debated, there is little doubt among those who subscribe to the theory that the singularity will occur; it is only a matter of achieving the necessary level of sophisticated computing.

Superintelligent A.I. by 2045

Several modern theorists subscribe to this timeframe, including Kurzweil and the Swedish philosopher Nick Bostrom, who states:

There is more than 50% chance that superintelligence will be created within 40 years, possibly much sooner. By “superintelligence” I mean a cognitive system that drastically outperforms the best present-day humans in every way… (Bostrom, 1997)

This opinion, like Kurzweil’s, should be considered just that: an opinion. However, as with all visionaries, the degree of credence which can be placed on their ideas requires further, deep examination. By deconstructing what such a prediction requires to happen over the next 20-30 years, we can better understand the likelihood and consequences of this intelligence explosion occurring. The concept of deep learning is a key factor in the progression towards human-level artificial intelligence.

Deep learning is essentially a computer’s ability to capture information from various sources, including user inputs and the analysis of big data, and to store this information in neural networks. This is similar to the functioning of the neural/memory networks created in our brains; in machines, however, this method is not limited to physical space like that of the human cranium, thanks to ‘the cloud’.
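The “learning from data” idea can be sketched with a toy example: a single-parameter model nudged by gradient descent towards the pattern in its training pairs. Real deep networks stack millions of such units into layers, but the principle of iteratively adjusting weights to reduce error is the same; the data and learning rate here are invented purely for illustration.

```python
# Toy gradient descent: fit y = 2x with a single weight by repeatedly
# nudging the weight against the gradient of the squared error.

def train(pairs, lr=0.05, epochs=200):
    w = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            error = w * x - y
            w -= lr * 2 * error * x  # gradient of (w*x - y)**2 w.r.t. w
    return w

w = train([(1, 2), (2, 4), (3, 6)])
print(round(w, 3))  # 2.0
```

The model is never told the rule “multiply by two”; it recovers it from examples, which is the essence of what scales up, with vastly more parameters, in deep learning.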

For example, through programs such as Google DeepMind, experts were able to utilise deep learning techniques to teach their AlphaGo A.I. to defeat the reigning European champion of the board game Go – a 2,500-year-old game that’s exponentially more complex than chess (Metz, 2016). Such an achievement is a clear-cut example of the early potential of deep learning technology. Moreover, this method of machine learning is also utilised on a consumer scale, in the form of Netflix entertainment and Amazon purchase suggestions, to the benefit of both audience and business.

Running in parallel with the development of deep learning technology is the race to develop a stable, usable and reliable quantum computer. Quantum computing involves processing superpositioned qubits of data with applied algorithms, which can be used to solve complex problems potentially much faster than traditional binary computers. Current iterations are in their infancy: the cutting-edge D-Wave 2000Q 2048-qubit computer is the size of a small bathroom and costs $15 million USD (Temperton, 2017). Despite this, experts at the Google A.I. innovation laboratory have led the surge in turning this potential into results, with Google’s director of engineering claiming in 2015, after a collaborative research project with NASA and USRA: “What a D-Wave does in a second would take a conventional computer 10,000 years to do…” (Manners, 2015). However, academics, scientists and philosophers alike concur that this technology still requires significant development in usability and general optimisation to reach anything resembling practical application.

In an attempt to speed up the optimisation and usability of its computers, D-Wave Systems Inc. has introduced Qbsolv, open-source software designed to let anyone with an internet connection experiment with the optimisation (QUBO) problems unique to the quantum computer, either in simulation on traditional computers or on one of D-Wave’s own systems. The open-source community has been a tremendous driver for technologies such as the Android OS, WordPress and Linux, helping these programs remove bugs and optimise – the inspiration for releasing Qbsolv for users to tangle with. It is an action which would please the authors of The Journal of Machine Learning Research 8, whose 2007 paper concluded:

Researchers in machine learning should not be content with writing small pieces of software for personal use. If machine learning is to solve real scientific and technological problems, the community needs to build on each others’ open source software tools. (Sonnenburg et al., 2007)
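Concretely, the QUBO problems Qbsolv targets ask for a binary vector x minimising x^T Q x. A brute-force sketch of a tiny instance is below; this is illustrative only, not Qbsolv’s actual interface, and real instances are far too large to enumerate this way.

```python
from itertools import product

def qubo_energy(x, Q):
    """Energy x^T Q x for a binary vector x and coefficient dict Q[(i, j)]."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

def brute_force_qubo(Q, n):
    """Enumerate all 2^n binary vectors and return the lowest-energy one."""
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))

# Minimise -x0 - x1 + 2*x0*x1: rewards setting either bit, penalises both.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
best = brute_force_qubo(Q, 2)
print(best, qubo_energy(best, Q))  # (0, 1) -1
```

Because the search space doubles with every added variable, exact enumeration fails quickly, which is exactly the gap quantum annealers and heuristic solvers like Qbsolv aim to fill.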

By utilising this inherently collaborative approach to development, quantum processing capability will continue on its exponential upward trajectory. Applying this sophisticated method of computing to the equally exciting deep learning potential of machines, the idea that superintelligent artificial life is more than 30 years away becomes scarcely believable. Thus Kurzweil’s prediction of 2045 doesn’t appear to be outside the realm of possibility – so what does this mean for humans beyond 2045?

Planning for Singularity

Regardless of the timeline, there is broad agreement among well-versed researchers that experts will achieve superintelligent A.I. at some moment in the coming decades. At this point of singularity, if events unfold as I.J. Good hypothesised in 1965, we should share more concern:

Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind.  Thus the first ultraintelligent machine is the last invention that man need ever make. (I.J Good, 1965)

There is an inherent relevance to these sentiments given the nature of deep learning among A.I., and although an intelligence ‘explosion’ is a fairly dramatic term, the end result could quite possibly be the same, only more gradual. But does our intellectual inferiority necessarily determine our place under machines’ control, as Turing foreshadowed?

Perhaps a better angle of enquiry is to consider why a number of researchers and industry leaders share the perception that we have ‘no need to be nervous’ about the future after superintelligent A.I., as though we will somehow be able to control these machines or simply ‘unplug’ them, as notable software engineer Grady Booch expresses:

We are not building A.I. that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end — don’t tell Siri this — we can always unplug them (Booch, 2017).

These ideas are problematic in more than one way. For example, the level of credence placed on the integration of human values and laws into the psyche of a superintelligent A.I. is too high, a view shared by the likes of Facebook founder and CEO Mark Zuckerberg (Dopfner and Welt, 2016). It is a naive anthropomorphic assumption that, once superintelligent A.I. begin to create other, more sophisticated machines, our value system won’t gradually filter out through each iteration, much like the initial message in a game of Chinese whispers. Booch ends with the reflection that this point of our technological development is far away and that we are being distracted from more ‘pressing issues’ in society.

This lack of mindfulness surrounding the potential consequences of superintelligence concerns those who advocate for oversight of rapid A.I. development, in particular the philosopher and neuroscientist Sam Harris, who makes one point which resonates powerfully:

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely (Harris, 2016).

The final sentence, although dark and speculative, is an accurate assessment of the state of affairs. For example, the rate at which both Google DeepMind and quantum computing/A.I. programs are advancing has caused some anxiety, as development remains largely unregulated. Granted, this allows unencumbered freedom of innovation, thus speeding up development; however, the companies driving forward are taking an incredible risk if superintelligent A.I. comes to exist without proper safeguards in place – Bostrom likens this to children playing with a bomb (Bostrom, 2013).

Such safeguards could be as simple as determining what jobs will exist and what will be redundant once humans are replaced by A.I. – a process which has already been undertaken in the field of manufacturing. A significantly more complex consideration would be to reorganise the social structure in areas such as: government, education and business management. This will become necessary as the efficiency and overall output of superintelligent A.I. is naturally higher, thus having these machines in roles such as an educator, or in an organisational position will become commonplace.

There has been progress towards safeguarding the development of superintelligence. In 2015, business magnate and futurist Elon Musk, along with several other technology moguls, founded OpenAI, a research company aimed at ensuring ‘friendly A.I.’ The way OpenAI plans to achieve this utopian mission is by heeding the cautionary predictions of the likes of Stephen Hawking, Stuart Russell and Nick Bostrom – who believe that entering the singularity unprepared is existential suicide – and putting A.I. source code into the open-source community for widespread, ubiquitous access. This method seems counter-intuitive; however, by placing the same technology in the hands of everyone, it takes the potential power out of the hands of any particular company or agency. Co-chairman of OpenAI Sam Altman explains:

Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs, will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else (Levy, 2015).

Despite these noble intentions, it is easy to see how this utopian mission could fall by the wayside through some form of collaboration between like-minded ‘bad actors’. Crucially, though, this is a step in the right direction. With the progress which will unfold over the coming decades, it is to the benefit of all humankind that visionaries such as Musk, Kurzweil and Bostrom continue to review and address the risks surrounding superintelligent A.I. development – to remove the possibility of an existential catastrophe.

Conclusion

This examination of the works of philosophers, scientists, experts and businesspeople leaves little doubt that the singularity will occur; speculation as to when and how is generally a matter of subjective prediction based on computational trends or isolated empirical research. There is even less certainty about what lies beyond the singularity, so it is essential to apply a cautious, scientific approach to instilling values, ethics and integrity in our original iterations of superintelligent A.I. – responding directly to Harris’ and Bostrom’s anxieties.

An idealistic future perspective is one in which, instead of existing in extremes – the dystopian wasteland of Blade Runner, or a race of subservient robot slaves – humans coexist with A.I. in a collaborative effort towards common objectives: an ideal which will take some serious planning.


The impending societal ramifications of automation

My last blog post focused on Amazon’s, now confirmed, entry into the Australian market and the potential impact that such a move might have on domestic consumers, retailers and workers. Many of the sources I came across while digging deeper concerned Amazon’s increasing use of automated systems. As such, I’ve decided to shift the focus of my project towards the broader implications of automation on the global workforce. This change means I don’t have to limit myself topically to either Amazon or, necessarily, Australia.


As early as 1967, figures like Marshall McLuhan were criticised (p.237) for believing that ‘total automation is upon us’. So too did William Gibson poignantly state, time and again, that ‘the future is already here — it’s just not very evenly distributed’. To that end, let us assess the current status of automation: what systems have been made obsolete by automation? What specific technologies are emerging today, and whom are they displacing? Finally, what is on the horizon, and what professions, if any, will be safe from the process of automation creep? These are the questions my research report will engage with, and what I’ll touch briefly upon in this post.

Novelist William Gibson

To talk about automation is to talk about what John Maynard Keynes coined (p.3) in 1930 as ‘technological unemployment’. He described this emerging phenomenon as the unfortunate ‘[availability] of labour outrunning the pace at which we can find new uses for labour’. Keynes added that this is only ‘temporary’, and that standards of living will be multitudes better in one hundred years, when there’s little work for anyone to do. But it was Keynes’s belief that ‘everybody will need to do some work if he is to be contented’ (p.6), as work provides meaning to one’s life – a topic for another time.

Although industrial mechanisation saw a decline in the production-line jobs that manufacturing industries provided, we haven’t yet seen mass unemployment from the introduction of new technologies. Aside from the advent of electronic computing decreasing the need for human computers, and automatic exchanges largely making switchboard operators redundant, the workforce has survived. We’re only now seeing the beginnings of the technological unemployment Keynes imagined.

With the introduction of technologies such as self-checkout machines at supermarkets, many commentators, including Barack Obama himself, see automation as ‘relentless’ and ‘killing traditional retail’ jobs. With robots capable of sorting more than 200,000 packages a day in warehouses, and of working on cents’ worth of electricity instead of a minimum wage, it’s hard not to be concerned. But importantly, it’s not just blue-collar industry workers who are under threat. White-collar professions relying on skills like decision-making, paperwork and writing are newly susceptible to automation via learning AI.

Platforms like Quill from Narrative Science can analyse large amounts of data, identify meaningful trends, then output a report reflecting these findings in ‘everyday language’, be it finance or sports results. While Quill has been criticised for an inability to ‘discern the relative newsworthiness’ of stories, the unmatched speed and lack of bias with which an AI system writes are undeniable.
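The core of such data-to-text systems can be sketched as trend detection plus templated language. The helper below is invented for illustration and bears no resemblance to Quill’s real pipeline, but it shows the basic idea of turning numbers into everyday prose.

```python
# Minimal data-to-text: detect a trend in a series of numbers and
# render it in everyday language via simple templates.

def describe_trend(label, values):
    change = values[-1] - values[0]
    if change > 0:
        verb = "rose"
    elif change < 0:
        verb = "fell"
    else:
        return f"{label} was flat at {values[0]}."
    return f"{label} {verb} from {values[0]} to {values[-1]}."

print(describe_trend("Quarterly revenue", [100, 120, 135]))
# Quarterly revenue rose from 100 to 135.
```

Production systems replace the hand-written templates with learned language models and far richer trend analysis, but the pipeline shape – data in, salient facts extracted, prose out – is the same.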


In addition to AI software, ‘general purpose’ robots are being developed with the ability to ‘learn’ new tasks. ‘Baxter’, from Rethink Robotics and Roomba creator Rodney Brooks, is being developed to fulfill ‘quality assurance or small assembly’ roles in factories, but still requires a human to initially ‘teach’ it these functions. This universal robot represents a leap in usefulness comparable to the first personal computers. Baxter is capable of fulfilling whatever task is ‘within his reach’, but perhaps this is an agreeable compromise: there will still be work available on the assembly line, but it will be less laborious and more about oversight and refinement of process.

Other systems are being designed to take over more skilled professions. IBM’s ‘Watson’, for example, is being touted as an AI doctor, networked to be constantly up to date with the newest research and able to instantly access and share your medical records as required. Similarly, Enlitic has a program which can analyse medical imaging results and boasts a ‘false-negative rate of zero’.

The impact that automation makes on employment isn’t always clear until years later, however. The Economist reminds us that although automated teller machines briefly reduced the number of human tellers in 1988, bank branches became cheaper to operate and so grew by ‘43% over the same period’. So, will a technology like self-driving cars destroy the transport and hauling industry, or will new, unprecedented roles appear for the millions employed in those sectors?

While time will tell, I’ve plenty of sources to investigate for my final report in the meantime.

“Real Enthusiasts Drive Their Own Cars” – Autonomous Cars and the Enthusiast Perspective

Jesse Max Muir

In first approaching my research on the topic of autonomous cars, I began looking at the various perspectives centred on the technology. In the wake of modern developments such as Tesla’s self-proclaimed “auto-pilot” function, there was no denying that the technology was here, or fast approaching. As such, rather than researching the potential future developments of autonomous cars, I decided to provide an in-depth analysis of the dominant perspectives and apply this to a large gap in the research. This gap came in the form of the ‘enthusiast perspective’: through the course of my research I found very little information on the treatment of self-driving cars by automotive enthusiasts. Thus, my goal for this project was to determine what this enthusiast perspective was, after firmly establishing the currently dominant perspectives – those of early adopters and the concerned public. My two blog posts and final podcast have…
