Some days ago I watched a cheeky talk about the rapture, and the presenter, Richard Carrier, made me aware of an odd link between the skull-crushing fantasies in the apocalypses of the Old Testament and the Terminator series. Well, maybe not that odd in hindsight. Anyway, that got me thinking about Skynet, HAL 9000, Colossus, and many more. Why is it that these artificial systems always have to be super smart? Why aren’t there any artificial idiots in science fiction? By this I do not mean artificial intelligence that has become dumb, for instance due to an accident or malfunction; rather, I mean distributed IT systems that possess agency, but whose computational faculties do not compare at all to those of humans.
So why is that? Why are there so few artificial idiots in literature, film, etc.? Many hypotheses come to mind. One of them is the limitations of human psychology. After all, super-smart computers are just poorly disguised versions of super-smart humans. They are more exciting; they can be evil in an exciting way. But, most importantly, we understand them. How could we even tell a story when there is no there there?
What happens if we do not understand artificial life is the topic of Stanislaw Lem’s seminal novel The Invincible, in which a rescue expedition encounters artificial life on an extra-solar planet. This life form killed the crew of an advance party some years earlier, and the crew of the starship The Invincible is confronted with a life form whose individuals are extremely simplistic, but which, under duress, can group together, connect their tiny minds, and become invincible themselves. While this might not read very differently from other evil-artificial-life stories, the catch in Lem’s novel is that the situation rapidly deteriorates not so much because of the threat of the artificial life form (which actually lets the crew of The Invincible be), but because the human crew does not understand how to engage this life form. After several disastrous encounters, the crew of The Invincible finally decides to withdraw and to let be what they do not, and maybe cannot, understand.
This topic fascinates me for many reasons, one of which is that I have been doing R&D in the field of the Internet of Things (IoT) for the last three years. The main idea of IoT is to provide our communication networks with sensing and actuating capabilities, viz. the ability to perceive the physical world, to infer knowledge from the gathered data, and even to act upon this knowledge. The cyber-physical systems that are at the core of IoT are currently anything but intelligent, and at least for the foreseeable future, they will not become very intelligent either. The reason for this is pretty obvious: your eyes do not need to be very intelligent, and neither do your hands, and a lot of life forms get away with very simple brains while pursuing their rather limited goals.
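To make the perceive–infer–act loop above concrete, here is a deliberately dumb toy node, reduced to a thermostat. Everything in it (the names, the target temperature, the 0.5-degree threshold, the heating and cooling rates) is an illustrative assumption of mine, not a real IoT protocol or API; the point is only how little "intelligence" such a node needs.

```python
import random

TARGET = 21.0  # desired room temperature in degrees Celsius (assumed)

def sense(current):
    """Pretend sensor: the physical state plus a little measurement noise."""
    return current + random.uniform(-0.1, 0.1)

def infer(reading):
    """The node's entire 'intelligence': a single threshold comparison."""
    if reading < TARGET - 0.5:
        return "heat_on"
    if reading > TARGET + 0.5:
        return "heat_off"
    return "idle"

def act(world_temp, decision):
    """Pretend actuator: heating nudges the temperature up; otherwise
    the room slowly cools toward the outside."""
    if decision == "heat_on":
        return world_temp + 0.3
    return world_temp - 0.1

temp = 18.0
for _ in range(50):
    temp = act(temp, infer(sense(temp)))

# Even this dumb loop settles into the neighbourhood of the target.
assert abs(temp - TARGET) < 1.5
```

A node like this perceives, infers, and acts, yet there is obviously no mind in it; the interesting questions only appear once many such nodes share a network.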
Estimates of the number of such cyber-physical systems range from tens of billions to a trillion devices in the near future. It is not the sheer number that is interesting, but rather that all of these devices will be connected to something else. So while each of them will, as a general rule, be rather dumb, that does not imply that the dynamics emerging from the IoT will be simple. After all, our neurons are, each of them, rather dumb, but their conglomerates, i.e. our brains, exhibit extremely complex behaviour patterns.
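The claim that dumb, connected units can yield non-simple dynamics has a classic minimal illustration (not an IoT model): Conway's Game of Life, where each cell follows two trivial local rules, yet the grid produces moving structures no single cell "knows" about. A short sketch:

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) cells.
    Rule: a cell is alive next step if it has exactly 3 live
    neighbours, or 2 live neighbours and is currently alive."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five cells that, purely by local rule-following,
# travel diagonally across the grid, shifting by (1, 1) every 4 steps.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

No cell contains the concept "glider"; the travelling pattern exists only at the level of the conglomerate, which is exactly the neurons-versus-brain point above.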
What is so amazing about IoT is that it is already here. However, we do not have any idea of how the rather dumb jinns we released from the bottles of our silicon fabs will behave as groups, or how they will actually change the world we live in (beyond what we originally intended). Most importantly, maybe, we do not know how these devices will change our perception of the world. But that is another story.
Art does not have too much to say about this topic, about the dynamics many dumb devices will develop. Art mostly focuses on the calamities artificial life might result in (Skynet, The Matrix, …), and I actually do agree that there are more ways to get IoT wrong than right (as with everything). However, such thinking does not really help us find out how to do it right. There are a few exceptions though. First, Lem’s novel The Invincible, while being rather pessimistic about our encounters with a totally different form of artificial life, has a silver lining. It tells us we might want to take a step back if we do not understand what we have met, and we can use the time thus gained to think about alternatives. This sounds like an unbelievably obvious statement to make, for this is what you should do anyway, right? Well, we as a species are not really good at doing it this way. We always need to have an explanation, and we prefer acting on an explanation that is wrong to having no explanation and not acting.

Also, there is Brian M. Stableford’s novel The Omega Expedition, in which intelligent machines finally gain consciousness. It turns out they do not want to exterminate humanity, for they are the outcome of many generations of video-game development, and their very essence, their “life juice”, is stories. And, admit it, humans are good at telling stories. I am not claiming that this is a realistic development, but it at least points out that what we now think is central about being an artificial intelligence may be totally irrelevant, while seemingly irrelevant factors turn out to be the really significant ones. In other words, even if everything goes right with IoT, it most likely will not have gone right in the expected way.
So, there are some examples out there, but what I really crave is a framework that helps me think about what happens when billions and billions of fairly dumb devices are connected on a global scale. I know, I could simply wait, but that does not seem to be my cup of tea.
P.S.: The phrase “artificial idiot” is taken from Brian M. Stableford’s novel The Dragon Man.