Does assessing consciousness in a machine matter?

There are many competing arguments about what consciousness is and whether/how it could arise in a robot (Kuhn, 2015). The traditional divide is between physicalism, which holds that consciousness is a purely physical construct and could therefore, in principle, be perfectly replicated in a machine, and dualism, which holds that consciousness exists as a non-physical entity, entirely separate from the physical brain, so that after death our consciousness can continue as a non-physical experience.

It is not only the essence of consciousness that causes contention but also its application to a robot. Philosopher John Searle has argued that even if consciousness is purely physical, and thus a replication of the brain would replicate consciousness, a perfect simulation ‘would be no more conscious than a perfect simulation of a rainstorm would make us all wet’. Alternatively, Rodney Brooks argues that consciousness is not a special phenomenon reserved for humans and animals, and that we have fooled ourselves into thinking a conscious machine cannot be created.

We have not yet addressed the ‘other minds problem’: one can never know whether another person (or machine) is conscious. You can only truly know that you yourself are conscious; you have no means of determining whether another being is or isn’t. Neuroscientist Michael Graziano suggests that the assumption of consciousness is a social attribution and argues that ‘when a robot acts like it’s conscious and can talk about its own awareness, and when we interact with it, we will inevitably have that social perception, that gut feeling, that the robot is conscious.’

In light of this, I will assume that it is irrelevant to discuss whether or not advanced artificial intelligence actually possesses true consciousness. Instead I will focus on the perception of consciousness in a machine and how this will prompt deep emotional relationships between humans and advanced A.I.


One thought on “Does assessing consciousness in a machine matter?”

  1. It feels like it will always come back to that ambiguous realisation that we won’t ever really know how our consciousness works, so how can we understand it working in an AI or robot? We can each only know our own consciousness and assume it for everyone else. Our consciousness may exist in our brain, but can a consciousness exist in an operating system, which is essentially an AI’s brain? It reminds me of how, when things cannot yet be explained by science, we categorise them as faith, religion, magic or the unexplained. Yet consciousness sits in a strange middle ground: it isn’t 100% locked down in scientific fact, but it isn’t ‘magic’ either. Sorry… I get a bit lost in discussions about consciousness. I agree with you that even if an AI or robot had consciousness, how would we be able to prove it? If we program it to be conscious, is it only conscious that it is conscious, and thus acting like it is when it is not? Then I guess we’d start to head down an ‘Ex Machina’ road of an AI fooling humans with its ‘humanity’.
