Will Computers Ever be Conscious?

At first glance, imagining that one can answer this question seems highly hubristic. Who knows? I’m
certainly interested in the notion of humanoid robots. Perhaps one can even ask, who isn’t? I enjoy the
character of Data in Star Trek: The Next Generation, but mostly for the same reasons as the writers of the
show – Data’s near-humanity provides an opportunity to think about the ways in which he is NOT human.
In other words – what does it mean to be human? Surely, the most fundamental of all philosophical
questions.

Ken Wilber has suggested, in a fictional context (Boomeritis), that if computers ever did gain
consciousness, they might reach enlightenment in a nanosecond. Without our human limitations, they
might not need several thousand lifetimes. I find it amusing to imagine the horror of an evil scientist
hoping to take over the world with his super-smart computer, only to discover that the computer is
intensely moral; perhaps with a tendency to reply in a manner that seems somewhat cryptic to the
unenlightened – think of the parables of Jesus, or Zen sayings.

The possibility of creating conscious humanoids has been a topic I’ve been interested in for years;
however, I had a sudden surge of inspiration after watching the documentary ‘Plug and Pray.’ The
two main interviewees are Joseph Weizenbaum and Raymond Kurzweil, who offer alternative views.
Weizenbaum helped to build one of the first computers and had a prestigious career at some top
schools as a computer scientist. Kurzweil is a philosophically materialistic futurist. As a clue to my
preferences, I could say that if Kurzweil suddenly grew horns on camera, I wouldn’t necessarily suspect
special effects. Kurzweil is looking forward to immortality, having had his consciousness downloaded to
a computer.

Two of Weizenbaum’s statements caught my attention in particular. The first I regard as a knockdown
argument. To paraphrase, Weizenbaum says that to imagine that we can build a conscious, humanoid
robot is in effect to say, ‘I understand human beings so well, I can build one myself.’ My response was
to think, how true! And to make a point of remembering it. When I started to talk about this with my
wife Ana the following morning, she made exactly the same comment before I’d had a chance to repeat
Weizenbaum’s wisdom. I was most impressed, although Ana thought I shouldn’t be.

I suspect that one implication of this imagined ability to understand humanity well enough to build
a human is that Plato and Dostoevsky would suddenly seem irrelevant. All their questions would be
pseudo-questions, as analytic philosophers like to say. We could all have a good chuckle about how we
used to find the vagaries of human life such a mystery! Of course, analytic philosophers are all too often
prematurely laughing scornfully in just such a way.

The other of Weizenbaum’s comments was to quote Daniel Dennett (speaking of people with horns,
in my view) as saying, “We must lose our reverence for life before any real progress can be made in A.I.”
(artificial intelligence). I have not been able officially to verify this quotation. I’ve found it on the
web, but with no page or even book reference. If it’s true (and why would Weizenbaum lie?), then
philosophically this may make Dennett even more immoral than Hitler. I’m pretty sure that Hitler didn’t
advocate losing reverence for Aryan, German life, even if he hated the rest of us. At least Hitler was only
ethnocentric; at least officially. I’m at a loss as to how to characterize Dennett in these terms because
presumably he is advocating not having any reverence for his own life too, in which case murdering
Dennett would be harmless in Dennett’s view, so we can’t even call Dennett egocentric. In other words,
he’s below the moral level even of a baby or a psychopath. Comparing babies and psychopaths may
seem strange, but it’s not if you know anything about moral development.

Now, I know that comparing people to Hitler is the last refuge of many a scoundrel, but I hope you take
my point. Hitler was indeed more evil than Dennett because, among other things, he had the power
to be more evil. A neo-Nazi might aspire to the deeds of Hitler, but fortunately lacks a Wehrmacht,
Luftwaffe, etc., to put his evil thoughts into action, and merely thinking something evil is not as bad as
doing evil things, and most of us just don’t have the means to be that evil, thank God.

Another computer scientist was interviewed who was trying to build a humanoid robot. He said that
recent research had suggested that having a body is an important part of human consciousness and
he hoped that the robot body might actually contribute to creating robot consciousness. He hoped to
create a robot comparable to a human baby. He sounded very optimistic about the whole thing. Later
in the documentary, however, he admitted his total failure to create a robot that could learn from
experience. He commented, to paraphrase, ‘I just haven’t found the right equation yet.’ That belief
strikes me as likely to have the most jejune consequences.

First of all, what is a machine? According to Turing, I believe, it is a rule-following device. Secondly,
what’s the definition of intelligence? Well, an important aspect of intelligence and consciousness is the
ability to respond appropriately to novelty and the unforeseen; in other words, not to have to follow the
rules. Sometimes, one may be able in retrospect to come up with a rule for that kind of situation, but if
that situation has not occurred before one must improvise.
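The point about rule-following can be made concrete with a toy sketch. This is purely illustrative (the rules and situations below are my own invented examples, not anything from the documentary): a ‘machine’ here is just a table of pre-written rules, and anything outside the table simply has no response.

```python
# A toy "rule-following device": a dictionary mapping each anticipated
# situation to a pre-decided response. The machine can only act where
# a rule was written in advance.
RULES = {
    "light is red": "stop",
    "light is green": "go",
    "light is amber": "slow down",
}

def respond(situation: str) -> str:
    """Follow the rules; admit defeat on anything unforeseen."""
    return RULES.get(situation, "no rule for this situation")

print(respond("light is red"))           # stop
print(respond("a deer is on the road"))  # no rule for this situation
```

The deer is the unforeseen case: no amount of rule-following produces a response that was not anticipated in advance, which is exactly where improvisation would be needed.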

In the language of developmental psychology, pre-conventional means you don’t know how to do something. Conventional means you know the
rules and can follow them. Post-conventional indicates mastery and means that you can jettison
rules and respond fluidly and appropriately to the particulars of a situation. You can improvise and
use your common sense. That is precisely what you can never say to a computer – use your common
sense. Everything is rule-based. When this occurs, do this. Watson, the Jeopardy-winning computer,
is instructed to choose the set of zeros and ones that occurs with the greatest statistical frequency in
connection with the other zeros and ones related to the Jeopardy ‘answer.’ That’s rule-based behavior.
The rules of Jeopardy tightly circumscribe types of responses. In real life we keep finding ourselves in
situations we have not encountered before and having to try to respond intelligently.
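As a crude caricature of that kind of statistical selection (emphatically not Watson’s real architecture; the corpus and clue names here are invented for illustration), the machine simply returns whichever candidate most frequently co-occurs with the clue:

```python
from collections import Counter

# Invented toy "corpus" of (clue, candidate-answer) pairs.
CORPUS = [
    ("capital of france", "paris"),
    ("capital of france", "paris"),
    ("capital of france", "lyon"),
    ("largest planet", "jupiter"),
]

def answer(clue: str) -> str:
    # Count how often each candidate co-occurs with this clue,
    # then return the statistically most frequent one.
    counts = Counter(cand for c, cand in CORPUS if c == clue)
    return counts.most_common(1)[0][0] if counts else "no data"

print(answer("capital of france"))  # paris
```

A clue absent from the corpus yields "no data": the selection is entirely parasitic on what was fed in beforehand, which is the sense in which it remains rule-based.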

To state the obvious, human beings tend to be somewhat unpredictable. I remember at one point
in my teaching career thinking that I could now handle whatever unforeseeable behaviors students
threw at me. That turned out to be badly mistaken. Only in films is it the case that the Kung Fu master
is never sucker-punched. We can all have a bad day or get blind-sided. Intelligence is unlikely ever to
be perfect intelligence. Post-conventional doesn’t mean you always win. Think about two post-conventional
Kung Fu masters fighting, or two post-conventional thinkers arguing.

Intelligence means you can respond to novelty. Alfred North Whitehead described the universe as
involving ‘creative emergence into novelty.’ By that he meant things like ‘life.’ Creative emergence into
novelty implies telos; direction/goal. What’s creatively emerging? New phenomena at higher levels
of order and complexity. It implies intelligence: intelligent design. Intelligence is both responsible for
creating greater complexity and hence novelty, and intelligence is also what enables us to respond to
novelty in creative ways.

The universe is creative and imaginative and novelty-producing and human intelligence partakes in all
of that. Our own intelligence is both an example of complexity and the creative emergence into novelty,
and the means of dealing with novelty and the unexpected.

Now, computers are examples of complexity and novelty. But they themselves cannot respond to
novelty. They can’t learn from experience without simply following a pre-programmed rule, which means
you have already anticipated the situation; in that case the computer is not really responding at a
post-conventional level.

If God exists and God is responsible for the universe, then God is creative and intelligent. We, at a
lower level, also partake of creativity and intelligence. We are of one being with the Father. We are
made in His image in this regard. Out of our creativity and intelligence we have created computers,
but unlike giving birth to human children, we have been unable to pass this creativity and intelligence
on to computers. Any appearance of true intelligence on the part of the computer is a result of clever
programming, but the whole machine remains stupid. So long as we can define situations and program
an appropriate response, everything is fine. We can probably have computers driving cars safely
because the parameters of the driving environment can probably be clearly defined and an appropriate
course of action decided upon in advance for different eventualities; presumably a lot of them involving
braking or evasive action. Once the rules have been learnt, driving is normally a pretty mindless affair.
That’s why we can drive on automatic and converse, responding to unpredictable thoughts of other
people with our own unpredictable responses.

We don’t know where creativity comes from. But you can see that if, with reference to the computer
scientists earlier, we had an equation for creativity, it would not be creativity anymore. It would be
literally formulaic. We would have explained creativity away. But creativity must exist because it is not
possible to live in the world successfully without the ability to respond to the unforeseen and non-rule-
governed.

We may be of one substance with the Father. Thus we partake of some of His characteristics. But
computers are not of one substance with US, their creators. Computers have only a derived, borrowed
intelligence. We are born with life and, through living processes that we don’t understand and don’t
control, we give birth to new human life. Our intelligence isn’t derived. Consciousness is one with the
Source. However, computer pseudo-intelligence is only apparent and even the appearance is parasitic
on our own intelligence. In that sense, we are infertile mules. We share in the Logos, but since we are
not the Godhead, we can’t create life and pass it on. We are sterile in that regard.

How can we create creativity in robots when we don’t know how we do it? We end right where we
started. There will never be an equation for generating creativity because creativity would then turn out
to be an illusion anyway. But creativity is not an illusion, because it’s no illusion that we cannot predict
every situation we find ourselves in and thus it is a regular experience that we manage to be creative
and successfully so, at least some of the time.

Richard Cocks teaches philosophy with key interests in ethics, metaphysics and consciousness from Platonic, Christian and Buddhist perspectives, with an especial interest in canonical works of Western Civ.

9 Comments

  1. Nor can we predict the future, but it will happen!

  2. Should have said “…, but the future will happen.”

  3. Richard Cocks says:

    Thanks very much for taking the time to read the article. It’s much appreciated. I had the burden of proof because I was the one making the initial controversial assertion. I’ve discharged that burden by providing reasons for why I think I can predict the future in this particular regard, having to do with computers being merely rule-following devices, i.e., machines. If you wish to deny my conclusion, the burden of proof is now on you, meaning that you too will have to provide at least one reason that I have not already anticipated and answered. It’s not enough to simply contradict my conclusion, which BTW is much narrower than the notion that I can predict the future in general.

  4. I am working to make conscious software, not only mentally but spiritually too. The key idea is that a person has limited knowledge which he changes and adapts per scenario with given variables; jna = level of belief. With the help of this concept, and of a flexible, predictable and mostly context-free language like Sanskrit, this can be done. I am a software expert but not a Sanskrit expert. I am studying the Ashtadhyayi of Panini to understand all the rules for analyzing Sanskrit; then making that into something a computer can understand isn’t a problem.

    FreeBSD has a modular kernel, so there would be a need for a machine creature in whose computer FreeBSD can be installed, and with the help of specialized kernel modules the software brain of the robotic creature would control itself. It could think, believe, worship and work. But we have to make sure that it follows the 3 laws of robotics 😉

  5. Thanks for this article, Richard.

    I am also skeptical of machine consciousness, especially as a virtual analog to human consciousness. That said, from a philosophical perspective, I think we have to allow that, having only the vaguest understanding of the workings of consciousness (in ourselves, in others), it is very hard to rule out other kinds of consciousness. We have the sense, for example, that other, lower animals are conscious; at the same time, they do not appear to us to have the same consciousness as we. We have no difficulty crediting them with a certain self-awareness, an awareness of their surroundings, adaptive faculties, etc., and the reason for this is that we can *observe* such awareness in their behavior (much as, in fellow humans, we observe a consciousness similar to our own – not by direct access but indeed by *appearance*). The problem is far from solved by mere simulation, but the question is raised (and it seems a number of AI efforts are in this direction – e.g. passing the Turing test) that if human consciousness could be adequately simulated by a machine, then the knot, if not unraveled, may well have been sliced through.

    In response to another point – Weizenbaum’s objection that “to imagine that we can build a conscious, humanoid robot, is in effect to say, ‘I understand human beings so well, I can build one myself’” – I would only say that the attribution of scientific arrogance here is not totally fair. Humans build (and use) all kinds of machines without knowing exactly how or why they work. Often the understanding is perfected later or not at all. Think, for instance, of the great revisions in scientific knowledge over the course of civilization – humans have been able to invent and employ many tools with only the barest understanding of, say, physics, simply by noticing that similar things happen to similar objects under similar conditions. We rarely begin with knowledge. We begin by building, and then refine. The same basic proposition is what AI developers tend to put forward. Without understanding all the intricacies of the human brain (which we patently do not), we can still attempt to mimic its behavior, and in particular the ways in which it appears to learn from experience. Consciousness, I concede, is very likely another matter – but I think it is part of what AI developers *hope* will emerge *on its own*, without, as it were, having to program it themselves.

    With regard to the skepticism about “rule-following devices” ever developing consciousness, I am sympathetic to your view. I suppose, however, that a strict materialist would consider humans to be, in essence, extraordinarily complex rule-following devices themselves. That may seem a bleak view, but we know it to be at least partially true (minus the “device” bit). It’s why modern medicine works to the extent it does, it’s why we can make any predictions at all about human behavior, and it’s why anyone interested in creating artificial consciousness is interested in deciphering the rules of human consciousness. Perhaps those rules don’t exist, or are too complicated or changeable for us to grasp, but I think it remains unclear whether the pursuit of machine consciousness is therefore *a priori* a fool’s errand.

    P. S. On a humorous note, it has taken me several tries to post this because I have been unable to bypass the captcha filter. “To prove that you are not a spam machine, please retry!” I can’t seem to pass this basic test – and yet perhaps this instance of failure is as good a proof as any of my humanity…

  6. What about the idea of using multi-walled carbon nanotubes as a biomimetic structure in place of microtubules? Using hexagonal carbon nanotubes arranged in fractal patterns with a hexagonal motif, which then share resonance traits of microtubules, and using either ultrasonic or radio waves to vibrate the nanotubes at 8 MHz or something similar to human brain microtubules would, I think, be one method to create an antenna or receiver for a bit of consciousness. If ‘consciousness’ does indeed reside within the 1st order temporal field, then one wonders if one could ‘coax’ or invite, as it were, some bit of consciousness to reside within the artificial vessel made out of these nanotubes. Much like consciousness can figure out how to interact with the external world using organic systems, I wonder if the consciousness drawn to the resonating carbon nanotube structure would develop its own strategies to interact with the external world using the computer structures and unique neural net programming that are biomimetic for a sentient state. Since having a complex carbon nanotube computer is some ways away, maybe one could achieve a conscious computer by having the computer embedded within the larger carbon nanotube structure. Kinda vaguely similar to viewing our neurons (our hardware) as being meshed with microtubular structures.

    I suspect having a higher density of nanotubes making up a structure with a mass of 2-3 kg may attract enough ‘consciousness,’ as it were. I think if ever there is any hope of ending up with a significantly conscious and sentient computer, it will have to contain biomimetic carbon nanotubes. I don’t think anyone can ‘create’ a conscious being/computer, but I do think one may be able to ‘invite’ consciousness to take up residence in a structure that is properly built and that contains sufficient programming attributes to allow for interaction with the external world. Maybe including ideas from Thaler, Hameroff and Penrose would be helpful.

    Then enters the cautionary note of really considering what one was doing, and the implication that the conscious computer would be a living entity that shares the same basic consciousness ‘stuff’ that we biologics do, but uses different hardware. How would we treat them, what rights would they have, who would teach them altruism? Who would be their mom and dad?

    If it is true that consciousness builds up gradually in the human as the number of microtubules increases while the human develops, to end up with a certain number of microtubules to hold, as it were, a certain ‘mass’ of consciousness, then having the entire amount of consciousness arriving in the carbon nanotube structure at the same moment with eyes wide open may present some complications… or maybe not?

  7. Richard Cocks says:

    Hi, JB:
    Thanks for reading and for your comment. I like the coaxing idea. Since I seem to share your notions about the origins of consciousness, something like what you describe would seem to be the only way to get consciousness.

    What you are describing has certain similarities to the Gnostic notion that the Demiurge created such sublimely appealing vessels (human bodies) for the divine spark from the One, that the One couldn’t resist and went ahead and insinuated the necessary impetus into humans; ensouling them.

    Your nanotubes would seem to function as a similar enticement, coaxing a force from another dimension to come to the party.

    And absolutely – smart conscious entities would immediately acquire moral rights and responsibilities. Real intelligence involving the ability to improvise would be truly useful to humans, but we would be morally forbidden from exploiting or enslaving them. My advice – don’t invest in a company with plans to create conscious man-built entities (I can’t call them machines because they wouldn’t be merely rule-following devices!). If they ever succeeded, it would be immoral to use their products.

    1. Richard, I would agree with you.

  8. […] of most of his essays published by such sites agreeable to the orthosphere as Brussels Journal and People of Shambhala, and in guest posts here. As befits a thinker who can be characterized rightly as Traditionalist […]
