Will Computers Ever Be Conscious?

At first glance, imagining that one can answer this question seems highly hubristic. Who knows? I’m
certainly interested in the notion of humanoid robots. Perhaps one can even ask, who isn’t? I enjoy the
character of Data in Star Trek: The Next Generation, but mostly for the same reasons as the writers of the
show – Data's near-humanity provides an opportunity to think about the ways in which he is NOT human.
In other words – what does it mean to be human? Surely, the most fundamental of all philosophical
questions.

Ken Wilber has suggested in a fictional context, Boomeritis, that if computers ever did gain
consciousness, they might reach enlightenment in a nanosecond. Without our human limitations, they
might not need several thousand lifetimes. I find it amusing to imagine the horror of an evil scientist
hoping to take over the world with his super-smart computer, only to discover that the computer is
intensely moral; perhaps with a tendency to reply in a manner that seems somewhat cryptic to the
unenlightened, like the parables of Jesus or Zen sayings.

The possibility of creating conscious humanoids is a topic I have been interested in for years;
however, I had a sudden surge of inspiration after watching the documentary 'Plug and Pray.' The
two main interviewees are Joseph Weizenbaum and Raymond Kurzweil, who offer contrasting views.
Weizenbaum helped to build one of the first computers and had a prestigious career at some top
schools as a computer scientist. Kurzweil is a philosophically materialistic futurist. As a clue to my
preferences, I could say that if Kurzweil suddenly grew horns on camera, I wouldn't necessarily suspect
special effects. Kurzweil is looking forward to immortality, to be achieved by having his consciousness
downloaded to a computer.

Two of Weizenbaum’s statements caught my attention in particular. The first I regard as a knockdown
argument. To paraphrase, Weizenbaum says that to imagine that we can build a conscious humanoid
robot is in effect to say, 'I understand human beings so well, I can build one myself.' My response was
to think, how true! And to make a point of remembering it. When I started to talk about this with my
wife Ana the following morning, she made exactly the same comment before I’d had a chance to repeat
Weizenbaum’s wisdom. I was most impressed, although Ana thought I shouldn’t be.

I suspect that one implication of this imagined ability to understand humanity well enough to build
a human is that Plato and Dostoevsky would suddenly seem irrelevant. All their questions would be
pseudo-questions, as analytic philosophers like to say. We could all have a good chuckle about how we
used to find the vagaries of human life such a mystery! Of course, analytic philosophers all too often
laugh scornfully in just such a way, and prematurely.

The other of Weizenbaum's comments was to quote Daniel Dennett (speaking of people with horns,
in my view) as saying, “We must lose our reverence for life before any real progress can be made in A.I.”
(artificial intelligence). I have not been able to verify this quotation officially. I've found it on the
web, but with no page or even book reference. If it's true (and why would Weizenbaum lie?), then
philosophically, this may make Dennett even more immoral than Hitler. I'm pretty sure that Hitler didn't
advocate losing reverence for Aryan, German life, even if he hated the rest of us. At least Hitler was only
ethnocentric, officially at any rate. I'm at a loss as to how to characterize Dennett in these terms because
presumably he is advocating not having any reverence for his own life too, in which case murdering
Dennett would be harmless in Dennett’s view, so we can’t even call Dennett egocentric. In other words,
he’s below the moral level even of a baby or a psychopath. Comparing babies and psychopaths may
seem strange, but it’s not if you know anything about moral development.

Now, I know that comparing people to Hitler is the last refuge of many a scoundrel, but I hope you take
my point. Hitler was indeed more evil than Dennett because, among other things, he had the power
to be more evil. A neo-Nazi might aspire to the deeds of Hitler, but fortunately lacks a Wehrmacht,
Luftwaffe, etc., to put his evil thoughts into action. Merely thinking something evil is not as bad as
doing evil things, and most of us just don't have the means to be that evil, thank God.

Another computer scientist was interviewed who was trying to build a humanoid robot. He said that
recent research had suggested that having a body is an important part of human consciousness and
he hoped that the robot body might actually contribute to creating robot consciousness. He hoped to
create a robot comparable to a human baby. He sounded very optimistic about the whole thing. Later
in the documentary, however, he admitted his total failure to create a robot that could learn from
experience. He commented, to paraphrase, "I just haven't found the right equation yet." That belief
strikes me as likely to have the most jejune consequences.

First of all, what is a machine? According to Turing, I believe, it is a rule-following device. Secondly,
what’s the definition of intelligence? Well, an important aspect of intelligence and consciousness is the
ability to respond appropriately to novelty and the unforeseen; in other words, not to have to follow the
rules. Sometimes, one may be able in retrospect to come up with a rule for that kind of situation, but if
that situation has not occurred before, one must improvise.
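To make the contrast concrete, here is a minimal toy sketch in Python (my own illustration, not anything from Turing or the documentary) of a machine as a rule-following device: a finite table of "when in this state, reading this symbol, do exactly this" instructions, blindly obeyed. Present it with a symbol its table never anticipated and it does not improvise; it simply fails.

```python
# Rules: (state, symbol read) -> (symbol to write, head move, next state).
# This toy machine flips every bit on its tape and then halts.
RULES = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # blank cell marks the end of the input
}

def run(tape):
    """Blindly follow the rule table until the machine halts."""
    cells = list(tape) + ["_"]
    state, head = "scan", 0
    while state != "halt":
        # Look up the rule for the current situation and follow it exactly.
        # A symbol the table never anticipated raises an error: no improvisation.
        write, move, state = RULES[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).rstrip("_")

print(run("10110"))  # flips every bit -> "01001"
```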

To borrow the stage language of developmental psychology: pre-conventional means you don't know how to do something. Conventional means you know the
rules and can follow them. Post-conventional indicates mastery and means that you can jettison
rules and respond fluidly and appropriately to the particulars of a situation. You can improvise and
use your common sense. That is precisely what you can never say to a computer: use your common
sense. Everything is rule-based. When this occurs, do this. Watson, the Jeopardy-winning computer,
is instructed to choose the set of zeros and ones that occurs with the greatest statistical frequency in
connection with the other zeros and ones related to the Jeopardy ‘answer.’ That’s rule-based behavior.
The rules of Jeopardy tightly circumscribe types of responses. In real life we keep finding ourselves in
situations we have not encountered before and having to try to respond intelligently.
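For the sake of illustration, here is a toy Python sketch of the kind of "when this occurs, do this" selection I have in mind: score each candidate answer by how often it co-occurs with the clue's words in some fixed reference data, and return the top scorer. This is purely my own caricature of rule-based answer picking, with made-up numbers; it is not IBM's actual Watson architecture.

```python
from collections import Counter

# Hypothetical, hand-made co-occurrence counts standing in for a reference corpus.
CO_OCCURRENCE = {
    "everest": Counter({"mountain": 90, "highest": 80, "nepal": 60}),
    "k2":      Counter({"mountain": 70, "highest": 20, "pakistan": 50}),
}

def respond(clue_words, candidates):
    """Apply the rule: return the candidate with the highest summed co-occurrence."""
    def score(candidate):
        # No understanding is involved, just counting.
        return sum(CO_OCCURRENCE[candidate][word] for word in clue_words)
    return max(candidates, key=score)

print(respond(["highest", "mountain"], ["everest", "k2"]))  # -> "everest"
```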

To state the obvious, human beings tend to be somewhat unpredictable. I remember at one point
in my teaching career thinking that I could now handle whatever unforeseeable behaviors students
threw at me. That turned out to be badly mistaken. Only in films is it the case that the Kung Fu master
is never sucker-punched. We can all have a bad day or get blind-sided. Intelligence does not mean
perfect intelligence. Post-conventional doesn't mean you always win. Think about two post-
conventional Kung Fu masters fighting, or two post-conventional thinkers arguing.

Intelligence means you can respond to novelty. Alfred North Whitehead described the universe as
involving ‘creative emergence into novelty.’ By that he meant things like ‘life.’ Creative emergence into
novelty implies telos: a direction or goal. What's creatively emerging? New phenomena at higher levels
of order and complexity. It implies intelligence: intelligent design. Intelligence is both responsible for
creating greater complexity and hence novelty, and intelligence is also what enables us to respond to
novelty in creative ways.

The universe is creative and imaginative and novelty-producing and human intelligence partakes in all
of that. Our own intelligence is both an example of complexity and the creative emergence into novelty,
and the means of dealing with novelty and the unexpected.

Now, computers are examples of complexity and novelty. But they themselves cannot respond to
novelty. They can’t learn from experience without simply following a pre-programmed rule which means
you have already anticipated the situation, which means the computer is not really responding at a post-
conventional level.

If God exists and God is responsible for the universe, then God is creative and intelligent. We, at a
lower level, also partake of creativity and intelligence. We are of one being with the Father. We are
made in His image in this regard. Out of our creativity and intelligence we have created computers,
but unlike giving birth to human children, we have been unable to pass this creativity and intelligence
on to computers. Any appearance of true intelligence on the part of the computer is a result of clever
programming, but the whole machine remains stupid. So long as we can define situations and program
an appropriate response, everything is fine. We can probably have computers driving cars safely
because the parameters of the driving environment can probably be clearly defined and an appropriate
course of action decided upon in advance for different eventualities; presumably a lot of them involving
braking or evasive action. Once the rules have been learnt, driving is normally a pretty mindless affair.
That’s why we can drive on automatic and converse, responding to unpredictable thoughts of other
people with our own unpredictable responses.
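A toy sketch of what I mean by deciding courses of action in advance: every anticipated eventuality is mapped to a canned response, and anything unanticipated falls through to a blanket default. This is purely my illustration of the idea, not how any real self-driving system is built.

```python
# Every eventuality the programmers anticipated is mapped to a canned response.
RESPONSES = {
    "pedestrian_ahead": "brake",
    "obstacle_in_lane": "evasive_steer",
    "red_light":        "brake",
    "clear_road":       "maintain_speed",
}

def decide(situation):
    # An unanticipated situation falls through to a blanket default,
    # not to anything resembling insight or improvisation.
    return RESPONSES.get(situation, "emergency_stop")

print(decide("red_light"))         # -> "brake"
print(decide("deer_on_the_hood"))  # unanticipated -> "emergency_stop"
```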

We don’t know where creativity comes from. But you can see that if, with reference to the computer
scientists earlier, we had an equation for creativity, it would not be creativity anymore. It would be
literally formulaic. We would have explained creativity away. But creativity must exist because it is not
possible to live in the world successfully without the ability to respond to the unforeseen and non-rule-
governed.

We may be of one substance with the Father. Thus we partake of some of His characteristics. But
computers are not of one substance with US, their creators. Computers have only a derived, borrowed
intelligence. We are born with life and, through living processes that we don’t understand and don’t
control, we give birth to new human life. Our intelligence isn’t derived. Consciousness is one with the
Source. However, computer pseudo-intelligence is only apparent and even the appearance is parasitic
on our own intelligence. In that sense, with respect to our machines, we are infertile mules. We share in
the Logos, but since we are not the Godhead, we can't create life and pass it on. We are sterile in that regard.

How can we create creativity in robots when we don’t know how we do it? We end right where we
started. There will never be an equation for generating creativity because creativity would then turn out
to be an illusion anyway. But creativity is not an illusion: it is no illusion that we cannot predict
every situation we find ourselves in, and thus it is a regular experience that we manage to respond creatively,
and successfully so, at least some of the time.

Richard Cocks teaches philosophy with key interests in ethics, metaphysics, and consciousness from Platonic, Christian, and Buddhist perspectives, with a special interest in canonical works of Western Civ.