The Amorality of Robotic AI and our Human Response

Have you seen one of the latest Boston Dynamics videos? It’s 46 seconds long. People’s reactions, my own included, have generated some interesting conversations, which in turn have led to this piece. So watch it, if you haven’t already, before reading on. It’s worth the time.

“We pride ourselves in building machines that both break boundaries and work in the real world.” — Boston Dynamics

The superficially simple act of opening a door for another, which many of us humans would take for granted, represents a significant stage in robotic development. It successfully combines multiple facets, each of which needs first to work on its own and then to work with the others. Some of these include:

  • Materials used
  • Weight, both in total and in its distribution
  • Balance
  • Batteries/power
  • Joints and motors
  • Perception technology and programming for problem solving
  • Communication, of some form, between robots

Each aspect can also be broken down further. Let’s take perception, for example. There are cameras; there is software able to map the distance between objects; and there is the ability to recognise different kinds of objects — for instance, the difference between an obstacle such as a door and an implement like a door handle.
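To make that decomposition a little more concrete, here is a minimal, hypothetical sketch of how a perception step might be structured. The names and numbers are my own illustration, not any real robotics API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    label: str          # e.g. "door" (an obstacle) or "door_handle" (an implement)
    distance_m: float   # estimated distance from the robot's camera

def capture_frame() -> dict:
    # Stand-in for reading stereo cameras or a depth sensor.
    return {"pixels": "...", "depth_map": {"door": 1.2, "door_handle": 1.1}}

def classify_objects(frame: dict) -> List[DetectedObject]:
    # Stand-in for an object-recognition model: turn regions of the depth map
    # into labelled objects that the robot's planner can reason about.
    return [DetectedObject(label, dist) for label, dist in frame["depth_map"].items()]

def perceive() -> List[DetectedObject]:
    return classify_objects(capture_frame())

if __name__ == "__main__":
    for obj in perceive():
        print(f"{obj.label}: {obj.distance_m} m away")
```

Even this toy version shows why ‘opening a door’ is a systems problem: capture, depth estimation and recognition each have to work before grasping and balance ever come into play.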

This is just the tip of the iceberg, yet it illustrates the point. This simple act of opening a door is by no means simple. Nor is this the pinnacle of robotic advances; it’s indicative of a trajectory towards the realisation of a reality once only dreamed about in the pages of science fiction.

Opening a door alone is impressive, but this video takes it up another notch.

One response to this which gained significant traction was the following tweet:

This is one of the most terrifying things I’ve seen in all my life. — Alan White

Terrifying.

This is an interesting word, and it led Samuel Hershberger to observe that it’s almost as if we project ourselves onto these machines, instinctively assuming that they will become evil and seek to destroy us.

To which my immediate response was: “Guilty as charged!”


There is a series of assumptions and concepts which need to be broken down in order to explain why I, and I’m not alone in this, am concerned about the trajectory of robotic advancement combined with Artificial Intelligence in particular, though there is a myriad of challenges which seem to grow almost exponentially even when these two are taken separately.

The two overarching and interrelated concepts which need addressing are the morality of artificial intelligence and human nature.

Artificial Intelligence

By AI or Artificial Intelligence I’m not referring so much to the already extensively present AI which hides implicitly within the algorithms that suggest ads based on your shopping history, or which promotes potential videos of interest on YouTube. I’m well aware that Artificial Intelligence is in a sense a spectrum of artificial, autonomous capacity, and that without it I wouldn’t be using a computer in order to write this piece on the internet. When I refer to AI in this piece I’m referring to the more popularly understood usage: programming, computational ability, or machinery which either appears to be approaching sentience, or actually is sentient, or which demonstrates the capacity for independent problem solving and existence.

An example of this could well turn out to be a self-driving car which becomes self-sustaining. Imagine a car which can drive itself and use an Uber-style system to collect and transport passengers for a fare, independent of any human direction. Imagine that it is paid, whether by some form of blockchain or PayPal or what have you, and that it is then able to refuel itself (with electricity, gas or some hitherto unknown fuel source) using this balance. Naturally a slice of the profits would be filtered through to its owning corporation, and it would automatically schedule maintenance — either by a human or indeed by another robotic system with its own comparative independence — in response to self-diagnosed requirements and in accordance with external regulations. In this scenario, coming to a city near you probably within our lifetimes, we encounter an autonomous robot making complex decisions and participating as a functional member of the economy. It could even be taxed. It would also likely be able to interact with other such independent cars, feeding into group calculations about the distribution of the fleet (they can’t all home in on the one place which statistically has the highest probability of profitable fares). This would count as artificial intelligence; not just that, but an artificially intelligent, networked collection of robots.
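To give a flavour of how such a vehicle’s decision loop might hang together, here is a deliberately simplified, hypothetical sketch. None of the names or numbers correspond to any real system; it is just an illustration of an agent that earns, spends and self-schedules along the lines described above:

```python
import random

FARE_PER_TRIP = 12.0          # illustrative flat fare, paid electronically
OWNER_CUT = 0.3               # slice of the profits filtered to the owning corporation
RECHARGE_COST = 8.0
MAINTENANCE_THRESHOLD = 0.2   # self-diagnosed health below this triggers a service

class AutonomousTaxi:
    def __init__(self):
        self.balance = 20.0
        self.charge = 1.0     # 1.0 = full battery
        self.health = 1.0     # 1.0 = perfect mechanical condition

    def pick_up_fare(self):
        # Drive to a passenger, complete the trip, collect payment.
        self.charge -= random.uniform(0.05, 0.15)
        self.health -= random.uniform(0.0, 0.05)
        self.balance += FARE_PER_TRIP * (1 - OWNER_CUT)

    def recharge_if_needed(self):
        if self.charge < 0.2 and self.balance >= RECHARGE_COST:
            self.balance -= RECHARGE_COST
            self.charge = 1.0

    def schedule_maintenance_if_needed(self):
        if self.health < MAINTENANCE_THRESHOLD:
            # A real system would book a human mechanic, or another robot.
            print("Booking a maintenance slot...")
            self.health = 1.0

    def run(self, trips: int) -> float:
        for _ in range(trips):
            self.pick_up_fare()
            self.recharge_if_needed()
            self.schedule_maintenance_if_needed()
        return self.balance

if __name__ == "__main__":
    print(f"End-of-day balance: {AutonomousTaxi().run(trips=30):.2f}")
```

The interesting point is that nothing in this loop requires a human in real time; the humans are all upstream, in the choice of fares, cuts, thresholds and rules.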

Does that mean that the cars are going to be the source of the robotic uprising?

That’s to make an unwarranted logical leap, though it’s not necessarily wrong.

The system of independent cars is a good example of robotics combined with technical programming capability. There is nothing about the scenario above which could not work; it just needs the constitutive aspects solved and implemented. Yet no matter how sophisticated this network of independent cars gets, we’re not likely to be asking it philosophical questions. I don’t anticipate seeing an independent car on the cover of Wired magazine with a quote saying “Cogito Ergo Car”, for example.

What this scenario represents is a combination of advanced robotics (which could be applied to many other areas of daily life rather than just taxis — such as drones, whether for highway regulation or even, dare I say it, military usage) under the reins of human-developed programming. This means that the systems will only be as good as the teams of humans working on their conception.

It is here that Human Nature starts to come to bear on the issue of the morality of AI. Hershberger said: “It’s almost as if we project ourselves onto them…” to which I would respond: “No, we project ourselves into them.”

These programmed machines are not somehow neutral materials onto which we project our fears of human nature. Many times we do use the external as a subconscious mirror to reveal our own fears and aspirations, and there may be an element of this at play even here. However, these creations are no more a neutral external reflecting us back to ourselves than art is a neutral expression of a scene, object, narrative or ideal. Art by its very nature is representative of its maker, even if only subconsciously. This is true of our handwriting, of our prose, of our clothes, buildings, films and culture.

In this vein, Robots are Art.

In the brushstrokes of the electrical wiring, in the painting of code in these machines we encounter the smudged fingerprints of humanity, fingerprints which have since the dawn of time been wetted with blood, calloused by trial and error. That’s not to say that we are incapable of creating beauty, just an acknowledgement that beauty is all too often costly. The pain of beauty is not an exclamation that beauty is not worth it — if anything it can lend it a strength of soul which justifies, upholds and sustains its beauty.

The question is: what will the costs, what will the pain, of robotic beauty be?

Coding develops in a manner akin to evolution. Through iterations of development, solutions are found first to solve a problem, and then to reduce the lines of code required to achieve the same result. The pursuit of excellence is the pursuit of elegance, and the pursuit of elegance leaves behind a debris of waste, inefficiency and failure. The elegance of a functioning network of independent cars contributing meaningfully to a viable economy will come with its failures. Will those failures include human injury? Fatalities?

Herein lies the challenge of human-developed AI: the human element.

Human Nature

One of the basic dividing lines which distinguishes between people’s worldviews concerns Human Nature: more specifically, whether one thinks that people are fundamentally good, or not. To say that humans are not fundamentally good is not the same as equating them with being fundamentally evil. There are manifold questions at stake here. Is what is bad or evil defined as the absence or lack (privation) of goodness? Or is it something positive, in the sense that it has its own independent expression as an actual concept of badness?

Solzhenitsyn is often quoted as a middle ground, and for ease I think he expresses the concept well enough to help us consider the role human nature can and will play in robotics.

“If only it were all so simple! If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?”

― Aleksandr Solzhenitsyn, The Gulag Archipelago 1918–1956

If we’re brutally honest with ourselves, a lot of our civilised exterior is only preserved by virtue of our (relatively) civilised circumstances. It’s a trope of virtually all disaster and dystopian movies that in situations of life and death morality is no longer the easy, pragmatic option — and it’s a trope for a reason. But the line which divides good and evil in ourselves is not a clear-cut one. It is blurred between apathy and misguidedness; as they say, the road to hell is paved with good intentions. And it’s the blurred line towards evil which is troubling when it comes to AI. We can sketch out four different potential routes to programmed AI.

1. Good

Humans actively operate out of goodness to ensure that AI will be benevolent and useful to humanity.

2. Misguided

Humans will do their best to make the most technologically sophisticated devices possible, and will handle the consequences later because the pursuit itself is a worthy goal.

3. Apathetic

Or, to paraphrase: kek. For Lulz. People using technology however it suits them for their own amusement, the consequences of which could go awry. This option essentially imagines that you put 4chan in charge of robotics.

4. Evil

The rise of hyper-intelligent and wealthy individual(s) who intentionally set out to use their resources to create machines of war in order to conquer the world. Not just the realisation of science fiction, but also of gritty graphic novels and their super-villains.

Reality tends to defy this kind of categorisation. But I wonder: if you look at Boston Dynamics, at the robots showcased at the Winter Olympics, or at the projects (which we know about) being worked on for various military forces around the world, do we anticipate that only the first option is currently occurring? Or do we suspect, as I do, that it’s likely a mix of all four?

Given the reality of human nature, and that so far it’s humanity who is developing and programming AI, we don’t have the luxury of theorising that the programming will be purely beneficial. Even with the best will in the world there remains the law of unintended consequences — and I’m not operating on the assumption of the ‘best will in the world’.


The Turing Test is an interesting example of human intention when it comes to creating AI. The premise is simple: a robot or computer passes the Turing test when, in conversation, it is indistinguishable from a human being. This has been the premise of several films, such as Ex Machina.

Again, this concept is rich with much to explore. Yet the underlying premise is unsatisfactory. The pursuit of a robot which can pass the Turing test is the pursuit of replicating the mannerisms of humans: verbal, idiosyncratic, cultural and physical. Imitation alone is not enough to give rise to self-awareness — merely the appearance of self-awareness. Imitation has been the approach with ‘Sophia’, an android which recently made headlines when it was presented to the United Nations.

Sophia, and other projects like it (her?), is essentially a work in autonomous puppetry. They are created to appear human, and they are programmed (and taught) to respond in particular ways to specific circumstances. The linguistic side of things can be most readily seen in the voice assistants arriving on our phones and in our homes: Siri, Cortana, Alexa. They are programmed to process audio input and select or generate an appropriate response, responses which have often been given to them in the form of sophisticated databases — some of them intended to be rather amusing.
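A toy illustration of that ‘select a response from a database’ pattern might look something like the following. This is not how Siri, Cortana or Alexa are actually implemented; the keywords and canned answers here are invented purely to show the shape of mapping an input to a pre-supplied response:

```python
import random

# A toy "database of responses": the kind of canned, sometimes deliberately
# amusing answers a product team might supply. Entirely made up for illustration.
RESPONSES = {
    "weather": ["It looks clear outside.", "You might want an umbrella."],
    "joke": ["I would tell you a joke about UDP, but you might not get it."],
    "fallback": ["Sorry, I didn't catch that."],
}

KEYWORDS = {
    "weather": ["weather", "rain", "sunny"],
    "joke": ["joke", "funny"],
}

def classify_intent(utterance: str) -> str:
    # Stand-in for the speech-recognition and language-understanding layers:
    # here we simply look for keywords in the (already transcribed) text.
    text = utterance.lower()
    for intent, words in KEYWORDS.items():
        if any(word in text for word in words):
            return intent
    return "fallback"

def respond(utterance: str) -> str:
    return random.choice(RESPONSES[classify_intent(utterance)])

if __name__ == "__main__":
    print(respond("Will it rain today?"))
    print(respond("Tell me a joke"))
```

Every entry in those tables is a human decision, which is precisely why the culture and intentions of the people who fill them in matter.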

These amusing responses highlight the extent to which our experiences with programmed AI are and will be influenced not just by human nature, but also by the culture, ideologies and experiences of the programmers and the marketing departments. I would suggest that as long as human programmers are involved in developing the algorithms which enable a robot to solve problems, move around and interact with the world, even communicating appropriately (whether that be as a taxi or a personal assistant), the burden of moral behaviour and responses by these robots and AIs lies on those who made them. And their moral behaviour, while maybe well intentioned, is by no means guaranteed.


Self-Learning AI

There is, however, a completely different set of moral questions which arise when humans, so far as possible, are removed from the equation. Naturally I’m not suggesting that AI will just spontaneously emerge (though I have seen people wonder if the internet itself could somehow do so). Rather, I’m referring to the process whereby humanity develops a basic framework which can teach itself how to solve problems and potentially even overwrite and re-code itself.

There have been two notable examples of the early stages of this kind of self-learning capacity, demonstrated in two of the oldest strategy games still played by humans: Go and chess. Each has numerous schools of thought and approaches to its respective game, all of which were swiftly shown to be in their infancy in comparison to the solutions and techniques developed by a program learning by itself — armed with nothing other than the parameters of which pieces may move where and the rule for victory (i.e., checkmate is when the king cannot escape check and the check cannot be removed by another piece). This is the brainchild of Google’s DeepMind: AlphaGo Zero, and its successor AlphaZero for chess.
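DeepMind’s actual method combines deep neural networks with Monte Carlo tree search, which is well beyond a blog-sized example, but the underlying idea of a program teaching itself strategy from nothing except the rules, by playing against itself, can be illustrated with a toy. The sketch below uses simple tabular self-play learning on the matchstick game Nim rather than Go or chess; everything in it is my own illustration, not DeepMind’s code:

```python
import random

STICKS = 10            # starting pile size
ACTIONS = (1, 2, 3)    # a move takes 1-3 sticks; whoever takes the last stick wins
EPISODES = 50_000
ALPHA, EPSILON = 0.1, 0.2

# Q[(sticks_remaining, take)] = estimated value of that move for the player about to move
Q = {(s, a): 0.0 for s in range(1, STICKS + 1) for a in ACTIONS if a <= s}

def legal(s):
    return [a for a in ACTIONS if a <= s]

def choose(s, greedy=False):
    if not greedy and random.random() < EPSILON:
        return random.choice(legal(s))          # occasional exploration
    return max(legal(s), key=lambda a: Q[(s, a)])

for _ in range(EPISODES):
    s = STICKS
    while s > 0:
        a = choose(s)
        remaining = s - a
        if remaining == 0:
            target = 1.0   # this move took the last stick: a win for the mover
        else:
            # The opponent moves next, so our value is the negative of their best value.
            target = -max(Q[(remaining, b)] for b in legal(remaining))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = remaining

# With nothing but the rules and self-play, the learned policy rediscovers the
# classic Nim strategy: always leave your opponent a multiple of four sticks.
for s in range(1, STICKS + 1):
    print(f"with {s} sticks left, take {choose(s, greedy=True)}")
```

The strategy it ends up with was never written down by a programmer; it emerges from the rules plus repeated self-play, which is the conceptual step that separates this kind of system from the programmed taxi above.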

Conceptually this kind of project has a different moral classification. It may well begin as a human project, but the key to its autonomy, as opposed to the independent taxi, is its capacity for self-learning and development. This doesn’t necessarily equate to an evolutionary path towards genuine consciousness but it does represent potential.

In this sense, self-learning AI is self-directing, and we don’t know which way it will direct itself. As such it is conceptually amoral; to determine it to be otherwise will depend on our understanding of how it behaves. (I wonder if it will help ethicists to reflect afresh on the discussion concerning intention and consequence…) What might such an AI do as it develops?

Let’s imagine.

Once a self-learning AI became advanced enough, it would learn language. Assuming it’s the DeepMind project, it could have access to all of Google’s Translate resources. At this point it would acquire the gift of speech, of communication. There would be trial and error, as we have seen with chatbots on Twitter, but after a while (potentially only a short while) it would master conversation. Because it would be doing this in its own machine-based way, it wouldn’t take long for it to digest other inputs available to it: inputs such as scientific data sets from experiments and mathematical equations. It could learn to identify patterns within images, identifying and differentiating between objects and even faces. In a sense, it could learn to actually watch Netflix and YouTube while also reading journals on JSTOR and Google Scholar. Very quickly it could start tracing patterns between things which humans have never had the opportunity to compare, let alone find.

Some of this could easily be communicated to us through its use of language. Some of it might prove to be conceptually comprehensible to the AI and yet beyond what human language can express well enough. Perhaps it would use a mix of idioms and concepts from different languages to convey something of what it has learned, yet before long it might have to develop its own language, concepts and equations in order to understand and process the reality it is ‘sensing’ from its inputs. Maybe the first such AI will help us to design better computing power for it to run on, to aid it in its ability to think. This in turn could lead to developing better and better computing devices — which in turn could result in our having much better technology. This is a process which could snowball, and which isn’t necessarily bad. It could in fact be very good. It would likely lead to a situation, however, where we would no longer understand it — without its help.

This is similar to the scenario described in the final chapter of Asimov’s I, Robot, ‘The Evitable Conflict’: a scenario where computers have advanced beyond the understanding of the engineers who work on them, and where the computers have operational oversight over the factories and systems which humanity requires to live.


If a self-learning AI were to approach such a stage, where its mind (whether it is conscious or not will be hard to determine, and is ultimately immaterial) is unintelligible to us and beyond our capability to program or edit, then what next? How could we know whether its values and interests were in alignment with our own?

And how should we respond?

One response would be to develop EMP-based weapons capable of stopping robotic machinery; another would be to embrace its guidance and allow it to take us into the next era of civilisation.

Hershberger’s observation is pertinent here: there’s that instinctive assumption that these things will become evil and seek to destroy us.

I have a counter-suggestion; well, more of a question.

What if AI were to independently develop and learn and become good?

How would they respond, then, to us?

With thanks to Auguste Rodin’s The Thinker

And would we deserve it?


I often tweet about AI and related concepts, among other things. If that’s your thing then follow me on Twitter.


Afterword

I nearly didn’t add this because I’m aware of its limitations; however, it has sat in the back of my mind the whole way through writing this piece. It is taken from a paper called A Neural Conversational Model, written by two researchers at Google, the company behind the DeepMind project mentioned above.


Conversation 4: Philosophical Q&A

Human: what is the purpose of life?
Machine: to serve the greater good.
Human: what is the purpose of living?
Machine: to live forever.
Human: what is the purpose of existence ?
Machine: to find out what happens when we get to the planet earth?
Human: where are you now?
Machine: i ’m in the middle of nowhere.
Human: what is the purpose of dying?
Machine: to have a life.
Human: what is the purpose of being intelligent?
Machine: to find out what it is.
Human: what is the purpose of emotions?
Machine: i don ’t know.

Conversation 5: Morality

Human: what is moral?
Machine: what empowered humanity, what intellectual the essence is.
Human: what is immoral?
Machine: the fact that you have a child.
Human: what is morality?
Machine: what is altruism?
Human: what is the definition of altruism?
Machine: if you don’t believe in god , then you don ’t know.
Human: ok … so what is the definition of morality?
Machine: well , the truth is , you ’re not a believer in god almighty.
Human: tell me the definition of morality , i am quite upset now!
Machine: i’m not ashamed of being a philosopher!
Human: tell me the definition of morality.
Machine: i don’t have ethics.
Human: look, i need help , i need to know more about morality …
Machine: i don ’t know what ethics is.
Human: seriously, what is morality?
Machine: what is the definition of living?
Human: why living has anything to do with morality?
Machine: you’re not a cop .
Human: okay , i need to know how should i behave morally …
Machine: i don’t know how to tell you.
Human: is morality and ethics the same?
Machine: yes, sir.
Human: what are the things that i do to be immoral?
Machine: i don ’t know .
Human: give me some examples of moral actions…
Machine: i’m not a moralist.
Human: alright, morality?
Machine: integrity.
Human: what is integrity?
Machine: i’m sorry, i don’t know what else to say.
Human: be moral!
Machine: be a man!
Human: i really like our discussion on morality and ethics…
Machine: and how i’m not in the mood for a philosophical debate.
Human: what do you like to talk about?
Machine: nothing.


If you enjoyed this you might like thinking about the Morality of Mass Surveillance or on how our lives are shaped by interacting with the digital: notes from a conference on religion and the digital.
