Previous blog posts in this series:

Part 1, Restating the problem

Part 2, The I in you

Part 3, Meta-modelling

Part 4, Let us think about it

++++

In my previous blog post I laid out approaches toward understanding the issue of “artificial idiots,” approaches that rely on rational thinking as their modus operandi. In that blog post, I also laid out what I expect from a good understanding of the issue at hand.

In this instalment I am looking at alternative approaches, i.e. approaches that are not based exclusively on rational thinking.

Coming to our senses

Usually, explanations, methods, and models are tacitly assumed to be based on and formulated through rational thought. But who is to say that rational thinking is the only game in town? Maybe enquiries based on our senses can be another avenue, or at least a powerful supplement to rational thinking?

Emotions are commonly thought of as “ante-rational,” but, as pointed out by Martha Nussbaum and Ronald de Sousa, this is an impoverished view of how we humans navigate the world and of the important contribution emotions make to the decisions we make from day to day [de Sousa 1990; Nussbaum 2003]. Patricia Churchland, if I remember correctly, approaches this topic from an evolutionary angle but arrives at a very similar result. She argues that the limbic system, which is involved in many an emotion, was the first higher-order cognitive system to emerge in vertebrates. Since cerebral systems in particular come with a high fitness cost (energy consumption, protection through thick bones, etc.), the limbic system was not just an “appendix”; it had to serve a critical role in the survival of vertebrates. If we consider what emotions do for animals, we readily understand that emotions are coarse attempts at valuing and processing sensory input. “Is this situation dangerous?” “Is this other animal threatening me?” “Should I help my fellow?” “Is this the right partner to mate with?” Most of these situations do not come with simple solutions, and emotions were evolution’s “first go” at higher-order problem solving.

In light of this, why should we rely only on rational thinking, which came later, while abandoning well-proven, venerable helpers? Think about it: why should emotions and our senses only play a supporting role? The argument might be that rational thought is a more recent product of evolution and therefore better suited for the situations we face in our everyday lives. I think this reasoning is wrong and contradicted by obvious facts. Emotions are not mere vestiges; they are still our first, and sometimes only, means for solving everyday problems. Who on earth approaches everyday situations solely through rational thinking? Are you always employing rational thought when deciding what to do when given too much change in a store, or are you, as I am, simply relying on your “gut feeling”? Do you rationally think about what to do and what risks to take while driving a car? Most of what we do is based on subconscious, non-rational thinking and decision making. So why would we expect all of our interactions and decisions in the context of “artificial idiots” to be amenable to rational, conscious thought?

An example of how emotions can gainfully be employed when talking about “big data” in the context of sensor networks was provided by Alison Powell, who pointed out that the question is not what we think about our personal data being collected in huge quantities, but how this collection feels [Powell 2014]. In her essay, Powell does not use emotions as the arbiter, but as something akin to a “rescue dog” that directs us to where the problems are situated, before we take the emotional assessment of the situation as input for a structured, rational analysis. As I laid out earlier, we do not yet have the semantic language to adequately talk about “artificial idiots,” and engaging emotions in the way Powell proposes makes all the sense in the world to me.

Painting with a fine brush

Some people go even further: why do we think that rational thought is the only way of thinking? What about, for instance, aesthetics? Is art not a way of thinking about the world, albeit in a completely different way than rational discourse [Merleau-Ponty 1964]?

Contemporary artists, of course, tend to agree: “What makes art valuable is its ability to apprehend the conditions of our lives and articulate them in such a manner that they become tangible as propositions and questions.” [McKeown 2014]. This stance is even reflected in publications by the European Commission (see, for instance, [Sundmaeker 2010, pp. 25-26]).

To illustrate that this is not just art theory or some political statement, let me highlight some practical examples.

First, we have Sensity, a U.K. art installation that sonifies and visualises the state of the environment around the artist’s home [Stanza 2004-2010]. Granted, this installation does not address “artificial idiots,” but it focuses on the inanimate part of our environment and could thus be used as a stepping stone toward developing a semantics for the beings and doings of artificial agents.

An example that is closer to our topic is Aristotle’s Office, in which nine office devices are coupled with each other and react to changes based on the connections chosen on a patch bay [Keene 2007]. With the aid of this installation, visitors can experience the complex behaviours of networked devices.

An art installation even closer to the questions mulled over in this series is the Moody Mushroom Floor [Haque 1993-2014]. Here, autonomous mobile agents can freely choose their goals from a given register (for instance, to be “sullen”) and then automatically develop and test strategies that best fit their goal. The reactive measures available to them are the emission of light, sound, and odours. While this might come across as a single-sided exercise, viz. that of the moody mushrooms, it actually is a double-sided exercise: the strategies depend on how many people are in the mushrooms’ vicinity, how they move, and how they react to the stimuli the mushrooms put out, while all of these observables in turn change in response to the mushrooms’ emerging strategies.
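To make that feedback loop a little more concrete, here is a minimal sketch in Python of how such a mood-driven agent could operate. The moods, output channels, and scoring rule are purely my own illustrative assumptions; this is not Haque’s actual implementation.

```python
import random

# Purely hypothetical sketch of the feedback loop described above;
# moods, channels, and scoring are illustrative assumptions only.
MOODS = ["sullen", "cheerful", "agitated"]   # goals an agent may adopt
OUTPUTS = ["light", "sound", "odour"]        # reactive measures available to it


class MoodyAgent:
    def __init__(self):
        self.goal = random.choice(MOODS)
        # A "strategy" is simply an intensity (0..1) per output channel.
        self.strategy = {channel: random.random() for channel in OUTPUTS}

    def act(self):
        """Emit the current strategy into the environment."""
        return dict(self.strategy)

    def observe(self, visitors_nearby, movement, reaction):
        """Judge how well visitor behaviour matches the goal and, if the
        match is poor, mutate one output channel (crude trial and error)."""
        # Hypothetical scoring: a "sullen" agent prefers few, slow visitors.
        if self.goal == "sullen":
            score = reaction - 0.1 * visitors_nearby - 0.2 * movement
        else:
            score = reaction
        if score < 0.5:
            channel = random.choice(OUTPUTS)
            self.strategy[channel] = random.random()
        return score


# Toy run with made-up sensor readings standing in for real visitors.
agent = MoodyAgent()
for _ in range(10):
    stimuli = agent.act()
    agent.observe(visitors_nearby=3, movement=random.random(),
                  reaction=random.random())
```

The point of the sketch is only to show the double-sidedness: the agent’s outputs shape visitor behaviour, and the observed visitor behaviour in turn reshapes the agent’s strategy.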

Maybe stories are not dead, after all?

My analysis of narration as a means for understanding our symbiosis with “artificial idiots” was quite dismissive (see part 2), but this does not mean that narration is completely out of the picture. First, maybe even the simple, character-based narrative model discussed in part 2 can be exploited for at least getting a glimpse into the “social” reality of “artificial idiots.” An example of this approach is Simone Rebaudengo‘s story about Brad the Toaster [Rebaudengo 2012]. While this story humanises the toaster to a certain extent, it tries, at the same time, to expose the viewer to the complex, non-human dynamics that will arise in a world populated with interconnected, autonomous devices, one type of which will be toasters.

Another example is Timo Arnall’s video “Robot readable world,” which consists of a collage of short video clips that illustrate how artificial agents perceive the world around us [Arnall 2012]. This video does not rely on human narrative and instead immerses us in a strange world of perception.

These are not isolated examples; there is actually a vibrant interest not only in enhancing human storytelling through interactions with distributed artificial agents, but also in developing storytelling strategies and methods that let humans appreciate the inner world of “artificial idiots” and how they perceive the world around us. An indicator of this interest is the conference “The Future of Storytelling,” which was launched in 2013 and saw its second instalment in 2014. One of the driving forces behind bringing “the story of things” to “The Future of Storytelling” is Alexis Lloyd, who also published an enlightening taxonomy of agent-centric storytelling [Lloyd 2013]. Not up for even more reading? Try her short “Object Narratives” on YouTube instead.

Last but not least, we should not forget that there is a small but strong tradition of science-based non-fiction writing. Yes, less than 10% of Time’s list of the all-time 100 best non-fiction books addresses non-human topics, but it is not 0%. Also, Time’s short list of non-fiction “best ofs” encompasses classics such as Dawkins’s The Selfish Gene, which has sold more than one million copies so far. What I want to say with this is that once science has caught up with this topic and produced its first results, we will not have to wait all that long before non-fiction narrations of these results become available to anyone inclined.

Really?

So yes, none of the examples I have presented here delivers all three “properties” I expect from a good understanding of “artificial idiots,” i.e.

  • lending us the power to predict,
  • lending us the power to decide, and
  • being methods which are “developable.”

All of them are definitely open-ended, but they do not seem to help a lot with predicting and deciding. At least not yet. However, they could help us learn to live with, react to, and assess “artificial idiots,” how they change the world around us, and how we change their world, on a non-rational level. I often liken this to learning to swim or learning to play football. Yes, some rational overlay does exist, for instance research into swimming techniques or into what makes a football team good, but this overlay is only an add-on (some might say supervenient). What we usually do is internalise the advice gleaned from such study. Also, one can become a perfectly good runner without thinking too much about it. Maybe the same can happen with “artificial idiots,” and the avenues pointed out above can guide us in the right direction.

Coda

This is where this series of blog posts ends for now. Thanks are due to you, the reader, for following me through all the twists and turns of this investigation into how we can possibly talk about an unfolding world that is saturated with distributed artificial agents of modest intelligence. As I pointed out at the beginning of the blog series, neither my findings nor my speculative ideas are fully fleshed out. Far from delivering final and exhaustive truths about this topic, I wanted to share the notes and thoughts I have collected so far, and my hope is that they will inspire others to embark on imaginative and inquisitive journeys of their own. Send me a postcard if you do so.

Acknowledgements

Thanks are due to Justin McKeown at York St. John University, for sharing his thoughts about the phenomenology of art and about ontologies based on sensations rather than reasoning.

Thanks are also due to Rob van Kranenburg at IoT Council, for providing me with ample pointers to IoT-related arts projects.

References

[Arnall 2012] Timo Arnall. “Robot readable world.” Available: http://vimeo.com/36239715 (accessed: 2014-08-28), 2012.

[Haque 1993-2014] Usman Haque. Moody Mushroom Floor. Available: http://haque.co.uk/moodymushroomfloor.php (accessed: 2014-12-04), 1993-2014.

[Keene 2007] Tom Keene and Kypros Kyprianou. Aristotle’s Office. Available: http://www.electronicsunset.org/node/310 (accessed: 2014-12-04), 2007.

[Lloyd 2013] Alexis Lloyd. “If This Toaster Could Talk.” The Atlantic. Available: http://www.theatlantic.com/technology/archive/2013/09/if-this-toaster-could-talk/279276/?single_page=true (accessed: 2014-08-28), 2013-09-03.

[McKeown 2014] Justin McKeown. “Art and the Internet of Things: a turning point in creative education.” The Guardian. Available: http://www.theguardian.com/culture-professionals-network/culture-professionals-blog/2014/may/05/art-internet-of-things-education-society (accessed: 2014-08-28), 2014-05-05.

[Merleau-Ponty 1964] Maurice Merleau-Ponty. “Eye and mind” (C. Dallery, Trans.). In The primacy of perception, pp. 159-190, 1964.

[Nussbaum 2003] Martha C. Nussbaum. Upheavals of thought: The intelligence of emotions. Cambridge University Press, 2003.

[Powell 2014] Alison Powell. “How does it feel? Philosophy in the Data City.” Philosophy of the Internet of Things, York, U.K. Available: http://internetofthingsphilosophy.com/wp-content/uploads/2014/07/Alison-Powell-How-does-it-feel-Philosophy-in-the-Data-Ciy-Philosophy-of-the-Internet-of-Things-full-paper.pdf (accessed: 2014-08-28), 2014.

[Rebaudengo 2012] Simone Rebaudengo. “Addicted products: The story of Brad the Toaster.” Available: http://vimeo.com/41363473 (accessed: 2014-08-28), 2012.

[de Sousa 1990] Ronald de Sousa. The rationality of emotion. MIT Press, 1990.

[Stanza 2004-2010] Stanza. Sensity. Available: http://www.stanza.co.uk/sensity/index.html (accessed: 2014-12-04), 2004-2010.

[Sundmaeker 2010] Harald Sundmaeker, Patrick Guillemin, Peter Friess, and Sylvie Woelfflé. Vision and challenges for realising the Internet of things. European Commission. Available: http://www.theinternetofthings.eu/sites/default/files/Rob%20van%20Kranenburg/Clusterbook%202009_0.pdf (accessed: 2014-12-04), 2010.

++++

If you remember where Churchland said the thing I think she said about the limbic system, please let me know.

++++

If you like this blog post, please click the “like” button below. If you want to stay up to date with my island letters, please use WordPress’s “follow” function or the option provided in the panel to the right.

Also, please share your thoughts in the comment section; letters can and should be sent both ways!
