By Dante Clark

Sophia the Robot

What is intelligence? Google's dictionary defines it as “the ability to acquire and apply knowledge and skills.” Other definitions found in the Oxford Dictionaries, however, mention the abilities of self-awareness and emotional intelligence. These two qualities point to something more specific underlying intelligence: sentience. Sentience is defined as the ability to “perceive and feel things.” If sentience is needed for intelligence, how can someone tell whether something possesses intelligence or not? Is artificial intelligence actually intelligent, or does it fall short? Should AI be considered for citizenship, as the robot Sophia was in 2017? Should AI be seen as equivalent to people and thus protected by universal human rights, or as nothing more than algorithms imitating intelligence?

Some, such as Hugh McLachlan writing in The Independent, believe that sentient machines are possible, but that, regardless of the components that make up their sentience, they should be regarded morally in the same way as people with no “viable body,” such as the dead and those yet to be born. Others, such as Wesley J. Smith of the National Review, believe that robots and artificial intelligence will never be anything more than machines designed to perform a task. By that line of thought, any consciousness they could display would only be an imitation of the real thing produced through computation, and thus AI should not have “[any] more moral importance in and of themselves than a toaster.”

I side with Smith on this subject. I do not believe that robots should be given the same moral consideration as people, because I do not believe they can ever be truly sentient, no matter how advanced AI technology becomes. In the earlier definition, sentience was described as the ability to “perceive and feel things.” By definition, to perceive means to be aware of or to understand something; the ability to understand is therefore necessary for sentience. And as Smith argues in his article, machines and algorithms cannot truly understand things. Even if we reach the point where they seem to, their facade of conscious thought is nothing but computation underneath: they do only what they were programmed to do. AI does not understand; it merely reacts to things as it was programmed to react. Since machines cannot understand, they cannot be sentient; since they cannot be sentient, they cannot be truly intelligent; and so they should not be placed on the same level as intelligent beings or, by extension, considered under the same moral law.

The stance Wesley J. Smith and I share is not a rare one at all; it is supported by many others, including Mohana Das of Towards Data Science, who states,

“A lot of chatbots today can easily pass the Turing test. They mimic human emotion to a frightening extent. This doesn’t make the bot human though. All it means is that bots can be exceptional at creating the illusion of emotion, even though they have none.”

Throughout the piece, Das holds constant the notion that the “consciousness” of artificial intelligence is just a mimicry of real consciousness, as this quote shows.
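Das's point about the illusion of emotion can be made concrete with a toy sketch. The code below is a hypothetical illustration, not taken from any cited source: a bot that produces emotionally toned replies purely by keyword lookup. Nothing in it corresponds to feeling or understanding; the names (`RULES`, `reply`) are my own invention.

```python
# Toy pattern-matching "chatbot" (hypothetical illustration).
# It returns canned, emotionally toned strings based on keyword
# matching alone -- there is no internal state that feels anything.

RULES = [
    ("sad", "I'm so sorry to hear that. That must be really hard."),
    ("happy", "That's wonderful! I'm thrilled for you."),
    ("angry", "I understand your frustration. Take a deep breath."),
]

def reply(message: str) -> str:
    text = message.lower()
    for keyword, canned_response in RULES:
        if keyword in text:
            # "Empathy" here is just a fixed string keyed to a substring.
            return canned_response
    return "Tell me more about that."

print(reply("I'm feeling sad today"))
```

The bot sounds sympathetic, yet all it did was match the substring "sad". Real chatbots are vastly more sophisticated, but on Das's account the difference is one of degree, not of kind.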

There are arguments against the dismissal of robot intelligence, such as the aforementioned product-over-material argument, which claims that sentience is sentience regardless of whether it is made of biological material or wiring. This argument does not hold up, because those two things have completely different natures. Wiring and machinery are built by people to serve a designated purpose assigned by their creator, while organic life, and the sentience that comes of it, forms naturally through the combination, reproduction, and evolution of carbon-based cells. Machinery cannot go beyond its specified purpose.

However, biological life, and humans more specifically, has no agreed-upon purpose and can do and experience many things that AI simply cannot. Additionally, the unreplicable complexity of humans allows for variance in human intelligence. For instance, an autistic person may experience emotional intelligence differently from others because of natural differences in genetic structure. This shapes their behavior in ways that AI could not replicate, for two reasons. First, AI would be programmed to be “perfect” and could not program itself to limit its own functions while still maintaining its working order. Second, AI could never simulate that genetic variability in code: anything of the sort would register as an error, leaving the robot useless and unable to compensate the way a human brain can.

In his article, Smith raises the counterargument that since humans were created in the image of God, and sophisticated AI robots are made in the image of humans, the same divine purpose bestowed upon man would also be bestowed upon AI, making it intelligent and worthy of the ensuing moral consideration. This argument is flawed because it assumes that humanity was made in God's image, which, in reality, could never be known for sure. Furthermore, even if that were indubitably the case, it still would not grant AI any special purpose or consideration, as that same argument could be used to claim that statues (which are often modeled after people) are things bestowed with a purpose and made in the image of God. That idea would not hold up, especially considering that parts of the Bible call statues and statuettes “idols,” declaring them a bad thing that is against the wishes of God. It is unlikely that God would assign special purposes to something he does not approve of.

Overall, artificial intelligence and the machines that make use of it are, and can be, nothing more than that: machines. No matter how far technology progresses, we will never be able to create true sentience out of what are, at the end of the day, inanimate objects. We may be able to create convincing simulations and facades of consciousness, but there will never be a machine that truly possesses it beyond a surface-level disguise.
