In the previous parts of this blog-post series I discussed the shortcomings of an established ontological model of technical artefacts and presented an alternative model that is grounded in phenomenology. The new model rectifies these shortcomings, but, at the end of a protracted discourse that stretched over four blog posts and yielded seemingly minor adjustments to the original model, one is left to wonder whether the question “what is a technical artefact?” is actually worth entertaining. The established model, with its basis in an essentialist interpretation of the world, has some shortcomings, and I do bother about these shortcomings because I am interested in utile models of “things” in the Internet of Things. However, does this imply that you should bother?
In this final instalment of this blog-post series I provide my arguments for why I think you actually should bother.
The global success of universal computing machines
Computers are universal computing machines [Delvenne 2009]: they can systematically “manipulate” anything that can be formulated with symbols. It does not matter whether the symbols just reference themselves (for instance the numbers in a Sudoku riddle) or whether they reference the physical world (for instance the pixel information in a raw digital image). The second example, i.e. the transformation of physical states into symbols that can then be processed by a universal computer, is of special significance, since it unbridles computing machines from a pure information realm and makes them versatile tools for adapting “the world in which we live to meet our needs and desires” [Vermaas 2011; p. 1]. A good example of this adaptation is drive-by-wire. In the past, movements of the car’s steering wheel were mechanically translated into side movements of the front wheels by levers, bars, etcetera. More recently, steering was made easier by electro-mechanical systems, for instance electric motors, which amplified the torque exerted on the steering wheel by the driver. Next in this evolutionary lineage was a complete digitalisation of this action: the torque on the steering wheel is translated by a sensor into digital information, and this information is transmitted to actuators near the wheels, which turn it back into physical action, i.e. a turn of the front wheels. So instead of a purely mechanical interaction, digital technology becomes a de-materialised part of this interaction. And all of this to make driving more comfortable and safer.
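The steer-by-wire signal path just described can be sketched in a few lines of code. This is a toy illustration only; all function names, voltage ranges, and gain values are my own assumptions, not a real automotive implementation (which would involve filtering, redundancy, and safety checks).

```python
def read_torque_sensor(raw_voltage: float) -> float:
    """Digitise the driver's input: map an assumed 0-5 V sensor voltage
    to a steering torque in newton-metres (assumed range -10 to +10 Nm)."""
    return (raw_voltage / 5.0) * 20.0 - 10.0

def steering_command(torque_nm: float, gain: float = 2.5) -> float:
    """Translate the digitised torque into a front-wheel angle in degrees."""
    return gain * torque_nm

# A 3.0 V sensor reading becomes digital information ...
torque = read_torque_sensor(3.0)   # 2.0 Nm
# ... which an actuator near the wheels turns back into physical action.
angle = steering_command(torque)   # 5.0 degrees
print(f"torque={torque:.1f} Nm -> wheel angle={angle:.1f} deg")
```

The point of the sketch is the middle step: between driver and wheels sits nothing but symbol manipulation, which is exactly what de-materialises the interaction.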
What is so intriguing about universal computing machines is that the generalisation of information transformation through computing allows us to decouple the technical artefact, i.e. the computer, from goals and use plans: the same processor can be used for generating artificial pictures for video games and for processing the steering-sensor signals of a drive-by-wire system. True, computational solutions are, in the beginning, typically tied to a concrete problem (for instance ray tracing and the design of optical instruments), but the computational action can later realise originally unforeseen goals. An example of this is the application of ray tracing, which initially was developed for constructing better optical systems, to the generation of “artificial” pictures, for instance in video games. A prime example of completely unforeseen use plans is the use of mobile-phone airtime as a money equivalent for banking in Africa and elsewhere. A form of computer that is maximally uncoupled from concrete goals and use plans is the platform (see part IV). This is why I chose platforms for illustrating why the traditional definition of technical artefacts is deficient.
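This decoupling of computation from goals can be made concrete with a small, self-invented example: one and the same routine, here a simple moving-average filter, serves two unrelated use plans, and the processor never “knows” which goal it is serving.

```python
def moving_average(samples, window=3):
    """Smooth a sequence of numbers with a simple moving average."""
    out = []
    for i in range(len(samples) - window + 1):
        out.append(sum(samples[i:i + window]) / window)
    return out

# Goal 1: nicer pictures -- smooth pixel brightness in a rendered image row.
pixel_row = [10, 200, 12, 11, 198, 13]
print(moving_average(pixel_row))

# Goal 2: safer steering -- smooth a noisy steering-torque signal (Nm).
torque_signal = [1.9, 2.1, 2.0, 5.0, 2.0, 1.9]
print(moving_average(torque_signal))
```

The artefact (the computation) is identical in both calls; only the human goals attached to the symbols differ, which is precisely why the artefact cannot be defined by those goals.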
Note that we live in a world with an ever-increasing number of computational devices and an ever-growing amount of electronically stored information [Hilbert 2011]. Therefore, a model of technical artefacts that takes the universal nature of these devices into consideration is also of increasing importance.
Do not forget humans!
Another reason why I think a better model of technical artefacts is of broad significance is the following. The established model obscures the fact that technical artefacts are primarily social in nature. True, the established model is not oblivious to the social actions that give rise to artefacts, but their social nature enters the definition only indirectly (see part III). Instead of locking technical artefacts in with preconceived goals and use plans, as the established model does, I liberate the goals from the artefact and put them in the forefront. Why is this important? It informs our choice of methods when constructing artefacts. Instead of framing the creation of technical artefacts mainly as a question of how to best exploit insights provided by the natural sciences, one recognises that artefacts are contingent on a web of social aspects, and that one therefore also needs to apply insights from the social sciences in order to properly create technical artefacts. This also makes one aware that not only the capabilities and behaviours of humans have to be factored into the design of technical artefacts (example: red is a stronger stimulus than other colours), but, most importantly, human needs and wants. Needs and wants come first. This is especially important in light of the fact that we humans fail at predicting what applications will result from novel technical artefacts: “The principal applications of any sufficiently new and innovative technology always have been—and will continue to be—applications created by that technology.” [Kroemer 2001]. However, we have a fair understanding of human wants and needs [Deci 2000, Nussbaum 2011]. So, instead of starting with what we do not know (the applications of novel technical artefacts), let us start with what justifies the existence of technical artefacts in the first place: human wants and needs!
I believe that such an approach leads to better engineering of technical artefacts and to a better understanding of what roles technical artefacts play in our lives.
Unshackle your mind!
Assuming that technical artefacts come with pre-defined use plans and goals can lead to a mental lock-in. What I mean by this is a pattern of assumed unbreakable linkage between goals and the technical artefact used for realising them. An example that comes to mind is xeroxing, i.e. using a machine for creating photocopies (mostly of paper documents). While xeroxing refers to the process used for creating the photocopies (xerography), it also refers to the company that built these copying machines, i.e. Xerox. Using the term xeroxing for photocopying creates a false equivalence between making photocopies, xeroxing, and the company Xerox. This might sound as if I were concerned about market stratification due to subconscious marketing, but that is actually not my real concern. Rather, it is the equivocation of technical artefacts with prescribed values. Let me illustrate my fear with two examples.
(1) Firearms and protection
Firearms are, for instance in the US, promoted as protection from physical assault and the like. While it is right that one of the use plans for firearms is personal protection, putting an equal sign between firearms and personal protection raises a fundamental question, one that is easily overlooked when starting with the technical artefact (the firearm) and not the human goal (personal protection). The fundamental question is: are firearms actually efficacious in preventing physical assault (see, for instance, [Ludwig 1998])? Furthermore: what are the downsides? If one starts with the need (physical safety) instead of a perceived solution (firearms), firearms become one of many choices. In that case, the existing plethora of individual and societal means of protection from assault can be analysed in parallel instead of resorting to foregone conclusions.
(2) Transport and freedom
A similar phenomenon can be observed with regard to means of transportation. In light of the detrimental impacts of transportation [Litman 2009], the real question to be asked is whether we need transportation at all. Remember, at the beginning are human needs and wants, and the question then is if and how these needs are best met by technical means. The question is also whether many of our transport needs are caused by circumstances other than the needs themselves. Take, for instance, the need for physical proximity. Whether I need means of transportation to see friends and loved ones depends on many factors unrelated to my need, among them city planning. As we saw in part III, whether to solve a problem with technical or social means is often a choice, and the primacy of needs and wants over technical solutions in my model brings this choice to the fore, instead of short-circuiting the question by focusing on technical means from the outset.
Re-purposing technical artefacts and the idea of open source
A game console is a game console; or maybe it is not. As I have pointed out above, computers are universal computing machines, so why should a game console be just a game console? The reason is, of course, that the producer of the console makes that decision. But some activists disagree. What if one could change a Nintendo game console into a VoIP phone? This might not come across as a problem to most of my readers. After all, why not just buy a phone and a game console? However, remember that most people on earth cannot afford both. And even if they can afford one, why bar them from re-purposing a device once it is no longer in use?
Another argument against locking in technical artefacts with their supposed use plans is that of bottom-up versus top-down. What if, instead of using pre-configured commercial tools, denizens and organisations were encouraged to build their own, open tools? This is not a revolutionary idea, but one that has squarely taken hold in the realm of software engineering. (I am of course talking about open-source software.) What has emerged over the last decade or so is a parallel movement focusing on hardware, fittingly called the open-source hardware movement. Among the many arguments made for open-source hardware, one is that open-source technology can enhance democracy.
In both cases the goal takes primacy over the composition, function, and use plan of the technical artefact, exactly as stipulated by my model.
The established model of technical artefacts is —at heart— essentialist. Above I explained that the essentialist way of thinking about technical artefacts is cumbersome: for instance, we start conflating means and goals when it comes to personal protection and transportation. However, essentialism is not limited to thinking about technical artefacts; rather it permeates pretty much all of our thinking. (For a summary of essentialism see part I.) So yes, we suffer from this little cognitive bias that makes us imbue everything in and around us with essences, but why is that such a bad thing? One of the main issues I have with essentialism is that it exacerbates our native tendency towards in-group thinking. The perils of in-group thinking are discussed elsewhere in the literature [Castano 2008]. According to essentialism, something either is as I think it is—its essence—or it is something completely different; there is no middle ground. For instance, either race exists (with its many implications), or it does not. But what if the concept of race is neither here nor there? For those interested in the many, many facets of the concept of race I recommend watching the namesake series at Wireless Philosophy. Another example is the question of when a foetus becomes a person. Gestation is a gradual process, and any point in time that we define as the threshold after which the foetus is a person and enjoys certain inalienable rights is arbitrary, in the sense that there is no basis in biology to choose this particular point over another. This is not to say that we cannot choose such a point, but we cannot rely on any essentialist argument when doing so. There are no transcendental properties that make a human a human. Instead we have to rely on other, seemingly more subjective arguments, and we have to be mindful that our decision is contingent on these additional assumptions (danger for the mother; ability to feel pain; etcetera).
But do not despair: we, as a society, do this all the time. For instance, a minor is not allowed to vote, but—boom! magic!—she is apparently suddenly in possession of all the mental faculties necessary to—wisely—cast her vote the day she turns 18. I think everyone is conscious of the fact that this overnight transition from childhood to adulthood is a legal fiction, but also that this fact is not important. Rather, the question is whether—tacit—goals are achieved when using this assumption (protection of vulnerable denizens; stability of our society; practicability of assessing one’s legal standing; etcetera).
So, if abandoning the essentialist definition of technical artefacts helps us with becoming more aware of the artificiality of our essentialist bias and its many repercussions, then that is a good thing.
The title of this blog post is a variation on Kallinikos et al.’s paper title “The ambivalent ontology of digital artifacts” [Kallinikos 2013].
[Castano 2008] Castano, Emanuele. “On the perils of glorifying the in-group: Intergroup violence, in-group glorification, and moral disengagement.” Social and Personality Psychology Compass 2.1 (2008): 154-170.
[Deci 2000] Deci, Edward L., and Richard M. Ryan. “The ‘what’ and ‘why’ of goal pursuits: Human needs and the self-determination of behavior.” Psychological Inquiry 11.4 (2000): 227-268.
[Delvenne 2009] Delvenne, Jean-Charles. “What is a universal computing machine?.” Applied Mathematics and Computation 215.4 (2009): 1368-1374.
[Hilbert 2011] Hilbert, Martin, and Priscila López. “The world’s technological capacity to store, communicate, and compute information.” Science 332.6025 (2011): 60-65.
[Kallinikos 2013] Kallinikos, Jannis, Aleksi Aaltonen, and Attila Marton. “The Ambivalent Ontology of Digital Artifacts.” MIS Quarterly 37.2 (2013): 357-370.
[Kroemer 2001] Kroemer, Herbert. “Nobel Lecture: Quasielectric fields and band offsets: teaching electrons new tricks.” Reviews of Modern Physics 73.3 (2001): 783.
[Litman 2009] Litman, Todd. “Transportation cost and benefit analysis.” Victoria Transport Policy Institute 31 (2009).
[Ludwig 1998] Ludwig, Jens. “Concealed-gun-carrying laws and violent crime: evidence from state panel data.” International Review of Law and Economics 18.3 (1998): 239-254.
[Nussbaum 2011] Nussbaum, Martha C. Creating capabilities. Harvard University Press, 2011.
[Vermaas 2011] Vermaas, Pieter, et al. “A philosophy of technology: From technical artefacts to sociotechnical systems.” Synthesis Lectures on Engineers, Technology, and Society (2011).