Image by Paul Wicks, Wikimedia Commons
The future is an amazing place, filled with wonder, speculation, doubt, and fear. There are endless possibilities in infinite combinations, and no one knows what is just around the corner. Every decision made today, from the seemingly simple to the extremely complex, will affect us and future generations for millennia to come. With such a burden to bear, how does one handle the responsibility? If even small decisions can have such an impact, how do we decide what to do about the larger and far more complex questions of the universe? How do we, as a species, decide whether we should attempt to create a thinking computer, or whether that crosses a line somehow? Would creating a thinking computer benefit society, or would we just be playing God?
People have been fascinated with the idea of robots for years. Every child wants one and every adult needs one. What is this fascination with robots? For a lonely child it is a friend, for an adult it is free housekeeping, and for an employer it is a free employee. But I am not talking about robots here; I am talking about computers that do more than take commands and carry out orders: actual thinking computers. Computers that will take in information, process it, and then come to some type of logical conclusion. I am talking about computers that can form opinions and are capable of making decisions on their own. I am talking about A.I. (Artificial Intelligence).
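To make that distinction concrete, here is a toy sketch (purely illustrative, nothing from any real system; all names and the decision rule are hypothetical) contrasting a machine that merely carries out orders with one that takes in information and reaches its own conclusion:

```python
# Illustrative sketch only: hypothetical names, deliberately trivial logic.

def command_follower(order: str) -> str:
    """A conventional program: does exactly what it is told, no judgment."""
    return f"Executing order: {order}"

def conclusion_reacher(observations: list[float], threshold: float = 0.5) -> str:
    """Closer to what I mean by 'thinking': take in information,
    process it, and come to a logical conclusion of its own."""
    evidence = sum(observations) / len(observations)  # weigh the input
    if evidence > threshold:
        return "Conclusion: the evidence supports acting."
    return "Conclusion: the evidence does not support acting."

if __name__ == "__main__":
    print(command_follower("open the pod bay doors"))  # obeys blindly
    print(conclusion_reacher([0.9, 0.8, 0.2, 0.7]))    # decides for itself
```

Of course, a real thinking computer would be unimaginably more complex than a threshold over a few numbers; the point is only the difference in kind between obeying and concluding.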
When people think of artificial intelligence, a few things come to mind: The Terminator, Star Trek, Battlestar Galactica, HAL, and Eureka's S.A.R.A.H. From a friendly smart house, to a vengeful machine, to a humanoid hell-bent on taking over the world, people have many ideas and fears when it comes to creating artificial intelligence. If science fiction can teach us anything about A.I., it is that there are some very serious questions we need to ask before we embark on the quest for new life. Creating a computer that can think raises many social, ethical, and moral questions. The first question we should be asking is not “Can it be done?” but “Should it be done?” Is there even a need for a thinking computer, or are we just creating a whole new set of problems without much reward?
Humans have feared machines since their inception. This seems rather silly at first glance, but it is true, and there are several reasons for it. The first is the obvious complexity of machines. People do not usually like what they do not understand; as machines have grown more complex, so has our disdain for them. The next reason is safety: many machines have been instruments of death and dismemberment. But the biggest fear people have regarding machines is automation. People fear being replaced by machines; they fear becoming obsolete.
Another question that needs serious consideration is what the criteria for sentience are. Could we ever create a computer that becomes sentient? If so, the moral implications are endless. If a computer became sentient, would it have a soul? Even if one is an atheist, the question stands: would it be morally irresponsible to keep religion from a sentient robot, or could giving a thinking computer access to God be even more destructive than giving humans access to God?
What about the rights of a computer? If a computer became sentient, would we even realize it? Not without a specific definition and very specific criteria. If we created a sentient being, at some point the machine would realize that it was being treated differently. Would these computers be entitled to the same rights as humans? Would we be willing to grant them those rights?
This leads to the biggest fear humans have regarding A.I.: the rise of the machines. If this were to happen, it surely would not be in our lifetime, but it is still a fear. In all of history, when one group has demanded equal rights and been denied them, I have yet to hear of a single case that ended well for the oppressors. If we create sentient beings and refuse to recognize them as such, is there a possibility that they may rise up and enslave or kill us all?
All of these fears are unwarranted if we are unable to accomplish the goal in the first place. Is this even something we could do? It is probably far off, but I don't think anything is outside the realm of possibility. We are making new strides and advances every day. IBM's Watson was a huge success and was, for all intents and purposes, thinking. However, in order to truly create the thinking computer, we will have to better understand the human brain, and then we will have to find a way either to recreate it or to improve on it.
Equal rights for computers and the rise of the machines may seem like silly things to ponder, but if we are truly able to create the thinking computer, these crazy ideas may not be so far-fetched. We may need to reconsider the moral, social, and ethical implications of our creation. We may need to decide whether we are ready to go from playing God to becoming God. And are we ready to deal with those repercussions?
Look for my next installment of this series: What's the Point?
Discussing the reasons why we should even bother with Artificial Intelligence.
Don't forget to tell me what you think.