
Previous blog post in this series: Restating the problem

+++

As discussed in the first instalment of this blog series (see above), I believe that our current patterns of portraying interactions between us and “the others,” namely artificial agents, are not adequate for grasping what our future might look like: a future saturated with distributed, interconnected agents that are fairly, but not exceedingly, intelligent. In such a future, our interactions with “artificial idiots” will be much tighter and more intimate than with existing IT. However, the language we currently use for describing our interactions with machines only works if the machines are simple and deterministic in their behaviour, or when their level of intelligence approaches that of humans, or even surpasses it. Currently, we do not possess a narrative style for describing anything between these extremes, i.e. a symbiosis with “artificial idiots.”

But maybe our way of talking about the world we live in, about how we interact with machines, can be modified. Addressing this question is the task of this blog post.

Man and machine

For the intended analysis we first need to look at stories about the interaction between humans and machines, and at how (and why) we portray artificial intelligence. For this purpose, let us have a look at two film franchises and one standalone film: The Matrix, Terminator, and Colossus [Wikipedia 2014a, 2014b, 2014c].

In The Matrix, the hero Neo is introduced to the awful truth that the reality he lives in is one big simulation, and that super-smart machines actually rule the real world. Humanity lost the war against the machines, and humans are now kept for energy harvesting, lulled into submission by being imprisoned in a simulation that has the extent of the entire planet. The simulation is guarded by specialised programmes, called agents, and in the simulation they appear human. Well, they all wear suits and shades and act a bit machine-like, but what is significant is that they exhibit human emotions. Neo’s main adversary is, for instance, driven by disgust at the simulated world he is forced to live in [YouTube 2013]. So, not only do the agents in The Matrix look human, which is a necessity for the simulation to work, they also pursue their goals in a way that is very human-like and not so much machine-like (grandeur, domination, spite …). But maybe this is just an artefact of the necessity of emulating an entire civilisation and of operating within that simulation? Let us have a look at a franchise in which both the intelligent agents and humanity operate solely in the real world, and where this kind of straitjacket thus does not exist.

In the Terminator franchise [Wikipedia 2014b], Skynet, a defence intelligence developed for the US military, is out to destroy humanity [Wikipedia 2014d]. At the beginning, Skynet has a very non-human appearance and it does not emanate human emotion [Wikipedia 2014e]. However, as the timeline of the Terminator universe progresses (first in Terminator Salvation [Wikipedia 2014e], and then in the future war depicted in the original Terminator film [Wikipedia 2014f]), Skynet’s battle robots assume an increasingly human appearance. The Terminator series provides a simple explanation for this trend: human-looking terminators make it harder for humans to distinguish machine from man, and thus easier for machines to go undercover and to compromise the human resistance forces. However, while the robots’ appearance becomes increasingly human, they exhibit hardly any emotion. So, in contrast to The Matrix, in this franchise it is sufficient for intelligent agents to look like humans, but not to behave like them.

So what does a story look like in which artificial intelligence neither behaves nor looks human?

Colossus is a 1970 sci-fi film [Wikipedia 2014c] based on the 1966 novel of the same name [Wikipedia 2014g]. Colossus is also the name of a fictitious, mountain-sized central-defence computer that, together with its Soviet counterpart, usurps world rule, putting all humans under its peace-enforcing nuclear fist (peace as in the absence of war). Colossus does not have a human appearance at all; it rather looks like a skyscraper-sized machine. Colossus is also devoid of human emotion. However, humans express emotions toward Colossus, for instance anger.

 

Summary of the three examples

The Matrix: artificial intelligence looks human and exhibits human emotions;

Terminator: artificial intelligence looks progressively more human but does not express emotions;

Colossus: artificial intelligence does not look human and does not express emotions; in contrast, humans express emotions toward the artificial intelligence.

It seems these stories are quite different when it comes to “the other,” but actually they can all be aligned along a single, simple equation. For this, let us have a look at common denominators in these films.

 

Common denominator one: anthropomorphism

One of the common denominators in portraying artificial intelligence is to anthropomorphise it. This approach applies to some of the artificial intelligence in The Matrix and the Terminator franchises. One reason for the anthropomorphic portrayal of artificial intelligence is, of course, the lower production cost in films (human actors are “cheap”). Another reason is that the viewer can more easily identify with the artificial intelligence. However, the examples of Colossus and of HAL in 2001: A Space Odyssey [Wikipedia 2014h] show that our storytelling can function without anthropomorphisms. What these latter examples have in common is that the story does not really portray the artificial minds; they remain something of an enigma. To my knowledge there is no deep dive into a non-anthropomorphic artificial mind in film or literature; instead, non-anthropomorphic artificial minds are always portrayed as an “external force.” If an actor is at the centre of the storytelling, it is always human or human-like.

 

Common denominator two: character + predicament + attempted extrication

The above observation brings us closer to the simple equation I alluded to. In order to get there, we need to answer the question of why there has been virtually no attempt in film or literature to imagine and describe the inner lives of non-anthropomorphic artificial intelligences. Besides practical answers (see above), the answer of interest for us is: we do not delve into the inner lives of non-anthropomorphic intelligences because we do not delve into the inner life of rocks either. This is not meant to sound flippant; rather, it points toward an inherent limitation of human storytelling. Let me address this important point in some more detail.

In the book The Storytelling Animal, Jonathan Gottschall tries to summarise our current understanding of what the basic function of stories is, and why we are so enthralled by them [Gottschall 2012]. One of his theses is that while we often claim to read stories for relaxation, the stories themselves pursue anything but relaxing topics. People get murdered, Middle Earth is threatened, Moby Dick takes revenge, John McClane needs to rise again, and so on. So, instead of telling each other of happy events (stories that are cornucopias of happy endings, happy families, happy couples, happy kittens, happy puppies, etc.), we seem to rather engross ourselves in gore, misery, and struggle. According to Gottschall, there is a very simple structure that permeates all stories [Gottschall 2012, p. 52]: first, stories are about trouble; second, they are about animated entities (characters); and third, these entities possess a human character.

According to Gottschall:

Story = Character + Predicament + Attempted Extrication,

i.e. a character (the hero) faces trouble, and the trouble is resolved, either by the character itself, or by others, or by a combination thereof.

Let us assume that Gottschall is right, and that storytelling is indeed that simple. What are the repercussions of the above equation for understanding and telling stories about us and artificial life?

First, there has to be trouble. If there are no problems, we cannot tell stories. This might seem to be a very restrictive condition, but if we keep in mind that a world of change can readily be construed as a predicament, this characteristic does not have to be too much of a straitjacket. After all, we humans prefer stability, and a world that, within only a few decades, changes from being populated by rather dumb machinery to one populated by artificial, distributed agents can upset societal structures, change the definition of work, etc. So there is enough material for great stories.

However, this story structure is anything but ideal when it comes to dealing with artificial agents. They are not characters in the traditional sense, i.e. they are very much different from the characters we are used to (humans, animals), and they can thus not take the character slot in the above equation. In any tale about a future world filled with non-anthropomorphic artificial agents, we always have to construct our stories from a human perspective, and the artificial agents can only be seen as outside forces, i.e. as trouble. An example of this is Colossus. Although Colossus is conscious and possesses agency, it is not approached as a character; the viewer is never invited to share the inner thoughts of Colossus. Rather, Colossus is portrayed as an opaque historical force.

If Gottschall’s equation holds, humans are not able to tell stories about artificial agents in their own right. Either we make them human (The Matrix, Terminator), or we turn them into a deus ex machina (Colossus, 2001: A Space Odyssey). In other words, artificial agents either have to be human (character), or they are part of the predicament that the human character faces.

But why is the way we tell stories so limited, and why does there always need to be a character at their centre?

 

Tell me why!

To tell the truth, we do not know why the above equation holds and why it is so central to our storytelling. However, first explanations are emerging.

 

Stories and the flight simulator

First, and this is quite obvious, stories permeate societies, and in many respects they represent a glue that holds us together. For instance, many religions are story-based, and religion literally stands for binding [Wikipedia 2014i]. We exchange stories when we meet, and we also tell each other stories about our lives. But besides this role as social glue, stories also act as simulators of human social life [Gottschall 2012, p. 58]; “story is where people go to practice the key skills of human life” [ibid, p. 57]. Instead of learning everything “the hard way,” we use stories and the low-cost vicarious experiences they offer for learning the lessons without endangering ourselves [ibid, p. 57]. In a sense, fiction “is an ancient virtual reality technology that specializes in simulating human problems” [ibid, p. 59].

But how exactly does this work? Is storytelling simply something we learn, or are we made for stories? In other words, was “the human mind shaped for story, so that it could be shaped by story” [ibid, p. 56]? The answer is most likely yes, and I will review the pertinent mechanisms in the following sections.

 

Mirror, mirror in my mind …

First, not only do we generate a model of other people’s minds, we also incorporate these models [Baron-Cohen 1995]. When we see someone cutting their finger, we feel the pain we imagine the other is feeling, and it darn hurts. This is not limited to pain; we simulate the joy others feel, their arousal, etc. [Gottschall 2012, p. 60]. Intriguingly, we do not even need to see the other person for these neurological mechanisms to work; it suffices that we hear about them. Stories affect us both mentally and physically (just think about the palpitations you felt the last time you read a really scary book) [ibid, p. 61]; and the stories and the storytelling do not need to be super realistic either. People “respond to the stuff of fiction and computer games much as they respond to real events” [ibid, p. 61].

 

The Matrix inside

But how does storytelling affect us, and what cerebral mechanisms are at work? It cannot be that the lessons of stories reside in what we consciously remember. After all, who remembers all the details of the Harry Potter books or of the Lord of the Rings trilogy? As you might have guessed, the learning process is implicit; it does not leave major traces in our conscious memory. A flight-simulator model of learning through stories helps us understand this: pilots do not explicitly remember all the movements and steps they need to go through in order to safely land an aircraft, they remember them “by heart,” the routines become ingrained. In this context, “we” are not fully aware of what our brain knows [ibid, p. 65].

The simple metaphor of implicit learning in a flight simulator actually holds quite some water when it comes to learning from stories. For instance, Oatley and Mar found through repeated studies that heavy fiction readers have better social skills than their non-reading peers, even if they do not remember much of what they have read [ibid, p. 66]. And Oatley and Mar are not the only ones pointing this out. “In his book The Moral Laboratory, the Dutch scholar Jèmeljan Hakemulder reviewed dozens of scientific studies indicating that fiction has positive effects on readers’ moral development and sense of empathy” [ibid, p. 134-135]. “Story, in other words, continues to fulfil its ancient function of binding society by reinforcing a set of common values and strengthening the ties of common culture” [ibid, p. 137].

What dreams may come

Most interestingly, we do not stop simulating when the sun goes down. In other words, we even simulate “reality” in our dreams. One of the indications for this is atonia, i.e. the paralysis of our muscles while we dream: since acting out our dreams could hurt us, our bodies are immobilised. If dreams were just noise in our brains, evolution would most likely have weeded out such a costly ability long ago. So what do we do in dreams, and what is it good for? In order to answer this, let us have a look at cats.

In the 1950s, the French researcher Jouvet conducted dream-related research on cats [ibid, p. 76-79]. He suppressed atonia through a surgical procedure, and once the cats fell asleep, all of them started “night acting”: they pursued invisible prey, they fought off other cats, they fled from danger, and all that without waking up. “Jouvet’s experiment showed not only that cats dream but also that they dream about very specific things” [ibid, p. 77]. Cats do not dream of warm sunshine, catnip, and sexy fellow cats; they dream of problems, and apparently only that. Situations that are complex and essential for their survival are simulated and repeated night after night.

Is there any evidence for this also being true for humans? Well, obviously, Jouvet’s experiments have not been repeated with humans, but patients afflicted with REM sleep behaviour disorder, which accompanies neuro-degenerative disorders such as Parkinson’s disease, exhibit very much the same behaviour as Jouvet’s cats: they rehearse problematic situations while they dream [ibid, p. 81]. Another sign is that dreamland is “more threatening than the average person’s waking world” [ibid, p. 82]. About three quarters of all dreams are reported to encompass threatening situations, which is systematically more frequent than what we experience in daily life [ibid, p. 82]. Since our brains learn through repetition, the theory is that how we react to these simulated situations eventually gets translated into implicit knowledge. When we encounter such situations in real life, we react much more swiftly to them, and maybe even more cunningly.

 

Imposing meaning

Another mechanism that contributes here is our brains’ constant striving to imbue the world with meaning. “The storytelling mind is allergic to uncertainty, randomness, and coincidence” [ibid, p. 103]. “The human mind is tuned to detect patterns, and it is biased toward false positives, rather than false negatives” [ibid, p. 103]. In other words, we rather project a menacing attacker into a dark forest than assume that what we see is just a tree jostled by the wind. Not only do we imbue the world with meaning (the tree/person attacks us), we also imbue it with agency (it has a goal, it wishes us ill). Objects do not simply move; they have goals. This was brilliantly demonstrated by Heider and Simmel, who showed a simple short film of geometric shapes moving around on a white background [ibid, p. 105; YouTube 2010]. Interestingly, only 3 out of 114 test subjects described the film accurately, i.e. without anthropomorphising the geometric shapes [ibid, p. 106]. All the others “made up stories,” for instance about a predatory big triangle that wants to assault innocent small circles, and about a brave small triangle that fights off the evil big one.

 

Narrative lost?

So, in essence, we learn about the world by simulating it through our stories. But it is not the inanimate world that plays a prominent role in these simulations; it is “the others.” These simulations even go so far that we turn inanimate objects into person-like characters.

Interestingly, we even have a strong tendency to interpret predicament as agency: for instance, somebody must be behind the troubles of the good wizards in Harry Potter’s world (and we all know it is Voldemort and his minions).

There is nothing intrinsically wrong with this tendency of ours, i.e. constant “simple” storytelling and seeing human- or animal-like agency everywhere, but it is rather obvious that this kind of storytelling is not suited for talking about “artificial idiots.” Since they are not human, they cannot act as characters, and they usually do not exhibit human- or animal-like agency. In some sense they resemble cars, trains, etc. more than animate beings. As we all know, human storytelling does not result in epic narration about the behaviour of machines. We talk about humans that are affected by machines, for instance in the runaway-train trope, but I have yet to see a story (that is actually read) which focuses solely on the machine itself. In light of the facts at hand, the shortcoming I bemoaned in my earlier blog post has a natural explanation, and, unfortunately, there seems to be nothing we can do about it. What stories are and how we construct them seems to be hard-wired into the human body.

Whether the situation is indeed this bleak will be discussed soon, but before that I will argue why even the traditional scientific approach is ill-suited for understanding our relationship with “artificial idiots.”

 

References

[Baron-Cohen 1995] Simon Baron-Cohen. Mindblindness. The MIT Press. 1995.

[Gottschall 2012] Jonathan Gottschall, The Storytelling Animal, Mariner, 2012.

[YouTube 2010] HeiderSimmel_refurbished. https://www.youtube.com/watch?v=sF0SVBBfwNg. 2010.

[YouTube 2013] The Matrix (1999) Agent Smith talking to Morpheus. http://youtu.be/32jHiAkQzfk?t=3m6s. 2013.

[Wikipedia 2014a] Wikipedia contributors. The Matrix. Wikipedia, The Free Encyclopedia. July 26, 2014, 10:43 UTC. Available at: http://en.wikipedia.org/w/index.php?title=The_Matrix&oldid=618528553. Accessed July 27, 2014.

[Wikipedia 2014b] Wikipedia contributors. Terminator (franchise). Wikipedia, The Free Encyclopedia. July 25, 2014, 20:41 UTC. Available at: http://en.wikipedia.org/w/index.php?title=Terminator_(franchise)&oldid=618462641. Accessed July 27, 2014.

[Wikipedia 2014c] Wikipedia contributors. Colossus: The Forbin Project. Wikipedia, The Free Encyclopedia. July 8, 2014, 08:58 UTC. Available at: http://en.wikipedia.org/w/index.php?title=Colossus:_The_Forbin_Project&oldid=616060398. Accessed July 27, 2014.

[Wikipedia 2014d] Wikipedia contributors. Terminator 3: Rise of the Machines. Wikipedia, The Free Encyclopedia. July 27, 2014, 13:38 UTC. Available at: http://en.wikipedia.org/w/index.php?title=Terminator_3:_Rise_of_the_Machines&oldid=618674567. Accessed July 27, 2014.

[Wikipedia 2014e] Wikipedia contributors. Terminator Salvation. Wikipedia, The Free Encyclopedia. July 24, 2014, 19:16 UTC. Available at: http://en.wikipedia.org/w/index.php?title=Terminator_Salvation&oldid=618312526. Accessed July 27, 2014.

[Wikipedia 2014f] Wikipedia contributors. The Terminator. Wikipedia, The Free Encyclopedia. July 26, 2014, 11:41 UTC. Available at: http://en.wikipedia.org/w/index.php?title=The_Terminator&oldid=618532643. Accessed July 27, 2014.

[Wikipedia 2014g] Wikipedia contributors. Colossus (novel). Wikipedia, The Free Encyclopedia. January 26, 2014, 01:04 UTC. Available at: http://en.wikipedia.org/w/index.php?title=Colossus_(novel)&oldid=592408387. Accessed July 27, 2014.

[Wikipedia 2014h] Wikipedia contributors. 2001: A Space Odyssey (film). Wikipedia, The Free Encyclopedia. July 26, 2014, 01:52 UTC. Available at: http://en.wikipedia.org/w/index.php?title=2001:_A_Space_Odyssey_(film)&oldid=618491168. Accessed July 27, 2014.

[Wikipedia 2014i] Wikipedia contributors. Religio. Wikipedia, The Free Encyclopedia. January 23, 2014, 05:30 UTC. Available at: http://en.wikipedia.org/w/index.php?title=Religio&oldid=591981812. Accessed July 27, 2014.

+++

If you like this blog post, please click the “like” button below. If you want to stay up to date with my island letters, please use WordPress’s “follow” function or the options provided in the panel to the right.
