Are Friends Electric?: Machines, Emotions, And The Importance of Rule Breaking

A robot is a machine. One definition of a machine, used by Alan Turing, who invented the concept of the modern computer, is a rule-following device. A computer, and by extension a robot, satisfies this definition. Computers are rule-following devices: they do what we program them to do, no more and no less.

Clearly, some of what humans do is to follow rules. There are rules of social etiquette and of classroom behavior. There are road rules and the rules of logic. People at Piaget's concrete operational stage of cognition are rule followers. In the area of morality, the concrete operational thinker corresponds to what Lawrence Kohlberg described as 'conventional.' The conventional thinker knows that, generally speaking, stealing is wrong, slavery is wrong and murder is wrong. But they don't know why and don't have the cognitive wherewithal to figure out why.


Ken Wilber points out that pre-conventional, conventional and post-conventional are categories that can apply to nearly any human endeavor requiring skill. Pre-conventional means you are a novice and you don’t know the rules. Conventional means you know the rules. Post-conventional implies mastery. Mastery means you know the rules but can break them intelligently when the situation requires it. In other words, the post-conventional person can improvise. They are NOT similar to machines.

In fact, consciousness seems to have evolved precisely because machine-like behavior is not adequate for a satisfactory adjustment to one's environment. Rules imply that one can anticipate a phenomenon and foresee an appropriate response. Kelly and Kelly point out in Irreducible Mind that in very early experiments using computers and robotic arms, in, say, the 1970s, computer scientists had a great deal of success, and it would commonly be announced that an amazing breakthrough in so-called artificial intelligence was about to be made. However, these apparent 'breakthroughs' were achieved in artificially simplified environments. For instance, an environment in which there was one blue ball and one red ball. No such environments exist in real life. Oh, there might be one blue ball and one red ball, but there will also be a floor, a ceiling, an air vent, tables, chairs, an atmosphere and any number of other things. In the artificial context, the computer would be ordered to pick up the red ball, or pick up the blue ball. Lo and behold, the robotic arm would do just what it was told. Apparent understanding! Intelligence!

The trouble was that as the scientists attempted to up the ante, their success turned out to be short-lived. They were unable to replicate their success in even slightly more complicated environments. I remember reading about a bomb-finding robot that would enter a realistically complicated room and start looking. It would proceed by elimination. 'Not a bomb, not a bomb, not a bomb, not a bomb, not a bomb' followed inevitably by 'BOOM!' Of course, even then cheating would be involved because the computer would have to know in advance what the bomb looked like.

What the bomb detector needs is 'common sense.' The rule is that when meeting new people in certain contexts, you shake their hand. However, common sense says that if they are bleeding profusely from the neck, you forgo shaking their hand and try to administer first aid or find someone who can. Common sense is a feature of intelligence, not rule following. Common sense requires one to improvise a solution to an unanticipated eventuality.

If all eventualities could be anticipated, then we could function perfectly well as machines; as rule-following devices. But the future is unpredictable, so we have to be conscious. We have to have ‘common sense.’ All animals are conscious and are thus not merely machines.

For a little while, I brought to class a doll-like mechanism called the 'Yes Man.' The Yes Man is a parody of a compliant company employee. When you tap him on the head, he enunciates variations on phrases like "Absolutely. I couldn't agree with you more." I said to a colleague that I used the doll to demonstrate the difference between actual human beings and the machine account of human nature. The colleague's response was that we ARE like 'The Yes Man' – just 'more complicated.' In other words, while appearing to really think and reflect, we are actually just machines with a large set of preprogrammed responses. My argument is that if this were true, we could not improvise. We could not adequately adjust our behavior to the world.

Once we have encountered a particular circumstance, we can then write a rule to cover it. But we can’t have a rule for every eventuality in advance because not every circumstance can be anticipated.

Now, the person who thought that we humans function like the Yes Man had, on another occasion, indirectly told me that she was of low emotional intelligence. She had trained as a social worker, and part of her training involved Rogerian counseling, the approach developed by Carl Rogers. This involves 'mirroring': reflecting back to a person the feeling they seem to be conveying. You say things like 'You appear to be angry that your contributions haven't been acknowledged.' Someone says they are sick of not being trusted to do a good job and the 'mirror' says 'So, you are tired of not being trusted.' The purpose is to help the person identify what they are feeling and to validate that feeling, so the person feels 'heard' and understood. A contrary reaction would be to argue with the person and tell them that they shouldn't be feeling that way. I'm not sure exactly what Carl Rogers would say, but I can imagine a counselor eventually suggesting that the person come to see the situation in a different light; still, it seems likely that feeling heard, understood and empathized with would be a good first step.

The colleague was amazed at how comforting and rewarding people found this fairly simple 'mirroring' technique, which is itself an example of rule-following. She confessed that she was by far the worst of her training group at accurately describing what emotion people were in fact displaying.

Emotional intelligence is the ability to identify and describe what you are feeling, and to do the same for the feelings of others. The colleague was particularly bad at this.

My contention is that people of low emotional intelligence, in my experience, tend to be the ones who are most attracted to the ‘people are machines’ thesis. My hypothesis is that their ontological, or metaphysical contention, has, in part, a phenomenological basis. Their experience of their own internal mental states, at least when it comes to feelings, is relatively opaque to them. While people of normal emotional intelligence have a rich inner experience filled with emotional nuance, these other people look within, and without, and are faced with a vacuum. A nothing. An absence. Phenomenologically, they experience themselves to be something akin to a robot. And to a degree; to the extent that robots are imagined to lack an emotional life, these people really are robotic. The difference being that actual people do have emotions; but these actual people lack epistemic (knowledge) access to them. This lack of self-awareness is going to cause all sorts of problems. Their lack of knowledge of other people’s emotional reactions is going to be equally problematic.

In about 2012, I had a long talk with a young man who was just finishing up a degree in cognitive science. His face failed to convey any emotional variation during the entire discussion. No smiles, frowns, pensive contemplation, raised eyebrows, pursing of lips. Nothing. His brother, who was listening as we talked, had the same stone-faced demeanor. After an hour or two of this rather bizarre lack of emoting, I said that I hoped he didn't mind me mentioning it, but that he appeared to be rather lacking in emotional intelligence.

Now, that may appear to be a rather bizarre conversational gambit and indicate a possible lack of emotional intelligence on my own part. However, the young man wasn't in the least offended and instantly confirmed with no hesitation that I was right. I like to think that this indicates that I had correctly assessed whether my comment was likely to cause offense or not. The young man went on to tell me that in fact, up until one year earlier, he did not believe that emotions existed. Let me repeat that. He did not believe that emotions existed, but he had recently changed his mind on the topic. I don't remember if he explained what had changed his mind.

Believing that emotions don't exist is pretty bizarre and seems to imply a phenomenology (subjective experience of one's own conscious states) quite different from a normal person's. So, the argument is that these people experience themselves to be akin to robots. That's understandable. They are. I just wish that such people didn't extrapolate from their own strange epistemic position, not knowing what they are feeling, to me. I have no wish to be a robot and I don't experience life robotically. But, as mentioned above, it is a feature of low emotional intelligence that one cannot identify what other people are feeling either.

The funny thing is that dogs can do what these humans cannot. Dogs have coevolved with human beings. They have a limbic system and thus they have emotions. Dogs and humans relate to each other on an emotional level. Dogs study human faces to assess our emotional state. What's interesting about this is that dogs express their own emotions with their whole bodies, not facially. So dogs apply this skill of interpreting facial expressions uniquely to humans, not to other dogs.

This makes Descartes’ assertion that dogs and other animals are merely machines with no souls particularly annoying and also autistic seeming. Dogs clearly feel things and only the most extreme dogmatism (!), immune to the evidence of one’s experience, could lead to such a conclusion.

One may object that I am engaging in an ad hominem attack against the arguers for the machine-like nature of human beings. I’m accusing them of lacking intelligence instead of showing what’s wrong with their arguments.

I admit that I take a certain malicious glee in expressing my incredulity about their beliefs. I enjoy seeing the look of surprise and humor on students’ faces when I describe the young man who said he didn’t believe emotions existed.

Philosophical questions are by their nature controversial and generally not susceptible to definitive proof. Usually, somewhere in there is an intuition about the nature of reality, about the nature of human existence that functions as a first principle, as a metaphysical assumption that can’t be proven except by relying on other controversial and unprovable notions. Examining one’s own experience of life and reality is the inevitable starting point for much philosophical thinking.

The machine-interpretation of human consciousness is also reinforced by beliefs in materialism and the associated notion of determinism. Materialism is the notion that what is really real is just atoms and molecules and everything can in some way be reduced to interactions between these things. Determinism is the notion that these atoms and molecules are governed by certain laws of physics. If minds are in some way brains and brains are physical mechanisms, then events in our minds are the results of the laws of physics. Free will is an illusion.

Materialism is a notion that requires its own separate treatment. A belief in determinism involves logical problems which seem to me to be insurmountable and that also need a lengthy response.

But for now, I would like to end with one final ad hominem, again directed at an unprovable intuition, not as a response to an argument.

People of low emotional intelligence are understandably frightened of emotions. They don't understand them and thus can't deal with them. It is similar to someone who is math phobic going out of their way to avoid equations. To a low emotional intelligence person, the notion that human beings are in fact machines, that we resemble robots, is comforting. To someone like me, it is distressing. You're attacking my humanity. You're devaluing things like love and friendship, on which my life centers. You're making great philosophy (Plato) and great literature pointless, redundant and delusional.

To some people, being an emotionless machine is enormously attractive, and any view that seems to offer support for the mechanistic notion will seem appealing. It is actually going to be an enormously counter-productive attitude to adopt in living one's actual life, but it remains an ideal for many. It would indeed make things simpler. In the suggestion of eighteenth-century British philosophers that pleasure is the one and only source of human motivation, one encounters such emotional stupidity, with so little insight into the multi-faceted nature of human existence, that it beggars belief. That this notion became taken for granted in Anglo-American philosophy indicates to me that this kind of philosophy has become a self-selecting discipline filled with emotional imbeciles, just as psychopaths were once described as moral imbeciles. Sociopaths tend to be skeptical about morality because they lack empathy and a conscience of their own. They extrapolate from their own experience, as we all do. Interestingly, many Anglo-American philosophers, such as the logical positivists, have expressed skepticism about moral matters too. To be frank, being a normally mentally functioning person in such a context can be a very bizarre experience.

Anglo-American philosophers know that what they are saying is counter-intuitive, but they seem to imagine that their superior logico-mathematical abilities, coupled with their immunity to mere emotions, or so they think, have actually led them to some profound insight.

Richard Cocks teaches philosophy with key interests in ethics, metaphysics and consciousness from Platonic, Christian and Buddhist perspectives, with an especial interest in canonical works of Western Civ.

5 Replies to “Are Friends Electric?: Machines, Emotions, And The Importance of Rule Breaking”

  1. Sarah Ngakaari Hammond says:

    Really interesting article. I've been a fan of Blade Runner (adapted of course from Philip K. Dick's "Do Androids Dream of Electric Sheep?"). Also, I think the young man who lacked the ability to show emotion may be on the autistic spectrum (which of course includes Aspergers). People with Aspergers/autism (depending on how severe they have it) find it hard to show emotion in their face and in their body movements/mannerisms. How do I know this? I have Aspergers myself. I'm also one of the more high-functioning Aspergians.

  2. Richard Cocks says:

    @Sarah. Thank you for reading and for taking the trouble to comment. I too am a fan of Blade Runner and of Philip K. Dick. I sometimes teach 'Do Androids Dream of Electric Sheep,' partly as an adjunct to discussing psychopathy. I would definitely think the young man and his brother are on the autistic spectrum. I've read with interest of some on the spectrum learning to read other people's emotions by referring to a kind of rule book describing what anger, etc., typically look like. It can be helpful to know someone has Aspergers because one is less likely to imagine that the person is intentionally being rude or insensitive or is just inexplicably 'strange.' Talk of emotional intelligence is quite new, and I've found it quite enlightening. I'm glad that you don't seem to have taken offense at what I wrote. We all have strengths and weaknesses, and one of mine was performing very poorly in mathematics at school.

  3. […] friend and colleague Richard Cocks has an article, “Are Friends Electric,” at The People of Shambhala that will, I believe, be of interest to Orthosphereans.  (None […]

  4. […] Are Friends Electric?: Machines, Emotions, And The Importance of Rule Breaking […]

  5. […] Are Friends Electric?: Machines, Emotions, And The Importance of Rule Breaking […]
