
Let me return to one of my previous blog posts, namely The age of artificial idiots. In that post I shared my observation that when we talk about artificial life in popular media and art (I, Robot, Terminator, The Matrix, Metropolis, Colossus …), the artificial life forms in question all too often possess human traits and happen to be either fairly smart or even hyper-smart. More often than not, these artificial life forms also look and behave recognisably human (Terminator, The Matrix …). I pointed out that this way of storytelling seriously limits our chances of understanding a world in which we will, quite soon, be surrounded by ubiquitous, interconnected machines of rather modest intelligence, machines that exhibit anything but human traits and looks.

This topic has not ceased to occupy me, and my interest in it was rekindled during the run-up to the Conference on the Philosophy of the Internet of Things, which I organised together with Justin McKeown and Rob van Kranenburg.

Although I have not come to a final conclusion about this topic, I have amassed a sizeable body of thoughts and first insights. Since I do not think I will arrive at final conclusions in the foreseeable future, I have decided instead to compile a series of blog posts resembling something of an interim report. The lead title of this series is “Our stories, our world.” Right now I have enough material for three more blog posts, but more might follow in the future.

Before I get started …

Before I get started, let me address one particularity of this and the following blog posts. Since I will also quote from passages in books (and not books in their entirety), I will not use hyperlinks for references in this blog-post series. Instead, I will use traditional square-bracket references, and the references themselves are listed at the end of each blog post.

More on “artificial idiots”

What I mean by “artificial idiots” is a subset of intelligent agents [Wikipedia 2014a], namely learning agents. A learning agent is a machine that can sense aspects of the world and decide on actions based on that sensory input. What differentiates a learning agent from other intelligent agents (for instance, simple reflex agents) is its ability to learn from current and past sensory input and to revise either its goals or its strategies for achieving these goals. An example of such an agent is a climate controller in a residential flat. The agent is equipped with initial goals and initial strategies for achieving a suitable climate in the flat; an indoor climate suitable for the human inhabitants constitutes its prime goal. Imagine now that the number of occupants increases (acquaintances are visiting). In this case the goal remains the same, while the strategy for achieving it changes, since more occupants consume more air, produce more heat, and so on.
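To make this a bit more concrete, here is a minimal sketch, in Python, of the climate-controller example. All names and numbers are hypothetical; the point is only that the goal (the target temperature) stays fixed while the strategy (how aggressively to heat) is revised from sensory input about occupancy.

```python
# Minimal sketch of the climate-controller example (hypothetical names and numbers):
# the goal stays fixed, while the strategy is revised from what the agent senses.

class ClimateAgent:
    def __init__(self, target_temp=21.0):
        self.target_temp = target_temp   # prime goal: a comfortable indoor climate
        self.heating_gain = 1.0          # strategy parameter, revised from experience

    def sense(self, temperature, occupants):
        """Record the current sensory input."""
        self.temperature = temperature
        self.occupants = occupants

    def revise_strategy(self):
        """More occupants produce more heat, so heat less aggressively."""
        self.heating_gain = 1.0 / max(1, self.occupants)

    def act(self):
        """Simple proportional control towards the (unchanged) goal."""
        error = self.target_temp - self.temperature
        return self.heating_gain * error   # positive: heat more, negative: heat less


agent = ClimateAgent()
agent.sense(temperature=19.0, occupants=4)   # acquaintances are visiting
agent.revise_strategy()
print(agent.act())                           # smaller heating output than with one occupant
```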

An example of a change in the agent’s goal is illustrated by the following scenario. New tenants move into the above flat, and they like it a bit cooler than the previous tenants. They find the current air temperature frequently too high and use a manual override to lower the output of the flat’s heating elements. The learning agent picks up on these repeated overrides and incrementally lowers its ideal temperature (a sub-goal) until the manual overrides cease.
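The goal revision itself can be sketched just as simply. In the toy code below (again with hypothetical names and numbers), every manual override nudges the agent’s target temperature towards the overridden value, so the sub-goal drifts downwards until the overrides cease.

```python
# Toy sketch of goal revision driven by manual overrides (hypothetical values).

class AdaptiveSetpoint:
    def __init__(self, target_temp=21.0, learning_rate=0.25):
        self.target_temp = target_temp       # the sub-goal being revised
        self.learning_rate = learning_rate   # how strongly a single override counts

    def observe_override(self, overridden_temp):
        """Move the sub-goal a fraction of the way towards what the tenants chose."""
        self.target_temp += self.learning_rate * (overridden_temp - self.target_temp)


setpoint = AdaptiveSetpoint()
for _ in range(5):                      # the new tenants repeatedly override down to 19 °C
    setpoint.observe_override(19.0)
print(round(setpoint.target_temp, 2))   # the sub-goal has drifted towards 19.0
```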

The above example becomes much more complicated, and interesting, when learning agents are connected with each other, because large-scale dynamics between humans and agents can then arise. Just imagine that you like your flat cool while your neighbour likes it warm. The agents of both flats could coöperate, and the excess heat generated from cooling your flat could be used for heating your neighbour’s flat.
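A toy illustration of such coöperation (entirely hypothetical, and ignoring the physics and the plumbing) could be as small as the two agents agreeing on how much of the excess heat from one flat covers the demand of the other:

```python
# Toy illustration of two connected agents coöperating (hypothetical numbers).

def negotiate_heat_transfer(excess_heat_kwh, heat_demand_kwh):
    """Agree on how much excess heat from the cooled flat is reused next door."""
    return min(excess_heat_kwh, heat_demand_kwh)

cool_flat_excess = 1.2    # heat removed while cooling your flat (kWh)
warm_flat_demand = 0.8    # heat your neighbour's agent asks for (kWh)
print(negotiate_heat_transfer(cool_flat_excess, warm_flat_demand))  # 0.8 kWh reused
```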

Notice that learning does not need to take place in the agent itself. An alternative approach to learning is, for instance, survival of the fittest. Imagine that distributed intelligent agents have a short lifetime and come with slightly different strategies and goals. Now imagine that they are deployed to solve the same task (e.g., transporting orders from shelves to the shipping station), and that at the end of their lifetime their strategies and goals are passed on to the next generation of agents only if they exhibited a satisfactory performance. In this case the next generation of agents would perform better, and it would seem that the individual agents have learned, but the learning actually took place not at the level of individual agents but at a collective level.
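A minimal sketch of this collective, generational kind of learning might look as follows; the “strategy” is reduced to a single number and the performance measure is a stand-in, purely for illustration.

```python
# Minimal sketch of learning at the collective level: only satisfactory strategies
# are passed on, and the next generation performs better although no single agent learns.

import random

def performance(strategy):
    """Stand-in for how well an agent moves orders from shelf to shipping station."""
    optimum = 0.7                      # hypothetical best possible strategy
    return -abs(strategy - optimum)    # closer to the optimum is better

def next_generation(strategies, keep=0.5, noise=0.05):
    """Keep the fittest strategies and refill the population with slightly varied copies."""
    ranked = sorted(strategies, key=performance, reverse=True)
    survivors = ranked[: max(1, int(len(ranked) * keep))]
    offspring = [s + random.gauss(0, noise) for s in survivors]
    return survivors + offspring

population = [random.random() for _ in range(10)]   # first generation, random strategies
for _ in range(20):                                  # twenty short agent lifetimes
    population = next_generation(population)
print(round(sum(performance(s) for s in population) / len(population), 3))
# The average performance has improved, yet the learning happened at the collective level.
```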

Notice that none of the above examples requires high levels of intelligence. The tasks in question are rather mundane and limited, and the learning agents in question will thus not exhibit anything close to the level of human intelligence.

Why “artificial idiots” will stay with us for the foreseeable future

One could of course argue that the rather unintelligent agents I have been talking about in the previous section will soon be replaced by super-smart agents, but I think that perspective is unrealistic. Yes, I am fully aware of Moore’s law [Wikipedia 2014b] and of the fact that it becomes increasingly economical to pack plenty of computational power even into small devices, but, paradoxically, our interest in economical solutions will nonetheless result in agents that are up to their specific task, and not much more. One of the reasons for this limited scope is that the management of agents becomes increasingly laborious the more complex they get, and a management overhead reflects negatively on operational cost and user-friendliness. But even if we could hide the management overhead of complex agents from users, the problem of predictability remains. If we expect clear-cut behaviour from intelligent agents (keep the room temperature at a comfortable level), making them hyper-complex does not necessarily increase their performance, and their performance might readily become unpredictable. Having a somewhat intelligent toaster that remembers your preferences is nice, but few people would require the toaster to predict how your preferences will change in the future, or to be able to discuss current political events with you. I know, the philosophical toaster makes for a nice story, but we rarely organise our everyday lives around nice stories.

So what is the problem then?

As pointed out in my previous blog post, I observe a general shortcoming in our narration about intelligent agents: we focus on super-intelligent, close-to-human agents, while more realistic, rather mundane agents are rarely the subject of artistic narration, and even scientific enquiry seems occupied with questions such as when intelligent agents become indistinguishable from humans [Turing 1950, Frischmann 2014] rather than with how our world, and how we ourselves, change before the majority of agents become that intelligent. But why is this shortcoming important? Am I just floating a snobbish notion about how limited popular media narratives and scientific enquiries are, and how much better the world would be if our thinking about intelligent agents were more varied? The short answer is: I might be a snob, but this problem is important and systemic. Our “narrative shortcoming” indicates a blind spot in our perception of the world. This blind spot is of course not the only one [Wikipedia 2014c], and in the historical past it has not been all that important, since artificial agents are children of the computer age. But if the predictions about the proliferation of connected, intelligent, embedded devices are even remotely true [Postscapes 2014], and if the computational power of these devices continues to increase geometrically [Wikipedia 2014b], this blind spot might become rather prominent, and even debilitating, in the next few years.

After all, if we want to live in the world, our model of the world, i.e. how we describe it, should be sufficiently realistic, instead of revelling in rather twisted stories about quasi-human intelligence that, for a long while, will constitute the minority of our interactions with intelligent agents. While the comedic and dramatic potential of super-intelligent agents is of course much greater than that of “artificial idiots,” overusing the trope of human-like intelligence in both artistic and scientific enquiry does not increase our understanding and control of the world we will soon live in; rather, it estranges us from it.

For the above reasons I think it is important to understand our blind spot.

More about this in my next blog post.

References

[Frischmann 2014] Brett Frischmann, “Can Humans Not-Think?”, Conference on the Philosophy of the Internet of Things, York, 2014.

[Postscapes 2014] Postscapes, Internet of Things Market Forecast, available: http://postscapes.com/internet-of-things-market-size, 2014.

[Turing 1950] A. M. Turing, “Computing Machinery and Intelligence,” Mind, Vol. 59, pp. 433-460, available: http://loebner.net/Prizef/TuringArticle.html, 1950.

[Wikipedia 2014a] Wikipedia contributors, “Intelligent agent,” Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Intelligent_agent&oldid=595123850 (accessed July 20, 2014).

[Wikipedia 2014b] Wikipedia contributors, “Moore’s law,” Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Moore%27s_law&oldid=616974521 (accessed July 20, 2014).

[Wikipedia 2014c] Wikipedia contributors, “Cognitive bias,” Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Cognitive_bias&oldid=613941523 (accessed July 20, 2014).

++++

If you like this blog post, please click the “like” button below. If you want to stay up to date with my island letters, please use WordPress’s “follow” function or the option provided in the panel to the right.
