In our everyday life we imagine situations, events and projects and project them into the future: we simulate possible worlds and test them in a sort of permanent "what if" which is continuously reworked and modified. This process has been methodologically formalized in the sciences, where we build models and simulations that try to describe facts, events and phenomena. Models and simulations play a very important cognitive role in knowing and understanding the world we live in.
Simulation also lies at the basis of human communication, and it has often been discussed theoretically in the media field1. In 1991 Gianfranco Bettetini, a semiologist working mainly in the mass media field, wrote that "… every language, whatever the materiality of the signs which structure it, gives birth to operations for which no term is more appropriate than 'simulation'. Whatever their style or genre, the writer, the painter, the photographer, the cinema and television author, the computer graphic artist… all simulate."2 But, in a wider view, every device humans design, build and use has at least to simulate how it can be grasped by our perceptual system (by the senses, by the mind) or how it can work with the body: which means that it has to simulate some of the body's operations, structures, functions and behaviours.
So oral language, written works, television, the press and the Internet simulate because they tell us stories which try to represent or describe something factual or invented related to the world we are in. In this "diegetic simulation" the signified communicates something more or less familiar and shareable, although not necessarily true or existent.
Simulation and representation
But simulation also concerns the nature of the signifier. Computer technologies rely on simulation, although they add specific possibilities (e.g. the copy&paste action). The visual interfaces of the most popular computer operating systems simulate some visual properties of the desktop in order to be more intuitive.
The computer programs for writing and desktop publishing simulate the paper page, the typewriter's layout and the arrangement of the characters. The 2D applications for painting or drawing simulate the canvas, the tools and the effects of painting and drawing techniques.
The software applications for creating 3D images simulate a 3D space on the 2D space of the computer screen using the basic rules of Renaissance perspective, and simulate the physics and behaviour of light, the appearance of matter and of the so-called "real world".
3D computer image and animation, 3D cinema, holography, virtual reality, flight simulators, 3D computer games, the metaverse… are based on the simulation of some aspects of reality or create new plausible worlds.
Maybe the most celebrated simulation technique in art history is Renaissance perspective, which represents three-dimensional physical space on a two-dimensional surface. Starting at least from the Renaissance, the drive to reproduce the way we perceive space has played a key role in Western culture. And although perspective was invented in the third decade of the 15th century (Filippo Brunelleschi's experiments and Leon Battista Alberti's treatise De pictura3), we are still immersed in (and influenced by) this way of representing and seeing the world4. Perspective is in photography, cinema, video, photorealistic computer images, virtual reality, 3D videogames, the metaverses…
Except for holography, which simulates 3D space using relative differences in the phase of light (which partly explains this technique's difficulty in emerging and integrating into the mediascape), all the other visual media simulate through perspective, using the same rules as Renaissance perspective, although simplified.
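The basic rule these media share can be sketched in a few lines. What follows is a minimal illustration, not drawn from the text: a hypothetical pinhole model in which a point's screen coordinates shrink in proportion to its depth, which is the core of both Alberti's construction and the projection used in 3D graphics.

```python
def project(point, f=1.0):
    """Project a 3D point (x, y, z) onto a 2D picture plane at
    distance f from the eye: screen coordinates are scaled by f/z,
    so farther objects appear proportionally smaller."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the eye (z > 0)")
    return (f * x / z, f * y / z)

# A point twice as far away appears half as large:
near = project((1.0, 1.0, 2.0))  # (0.5, 0.5)
far = project((1.0, 1.0, 4.0))   # (0.25, 0.25)
```

The division by depth is all it takes to produce the convergence of parallel lines toward a vanishing point that defines perspectival images.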
It may be of some interest to establish a categorization. In this graph the realm of images has been classified into two families, based on how the images are made and not on what they represent: "referential images" and "non-referential images".
In the first category the images can only be obtained in the presence of the referent (from the Latin res ferens, "that which carries the thing"), that is, of what is represented5. In this category the presence of the subject, object or phenomenon during the image-making process is mandatory: without this "being there", in front of the camera lens or the photosensitive plate, there is no image. Recalling Roland Barthes: in front of a photograph I can never deny that the represented subject, object or phenomenon was there, on some occasion, at some moment of its existence, in front of the photosensitive plate6. The image is generated by that presence (being there) during the image-making process; it is a sort of emanation produced by the action of light on chemicals and/or by physics. In "non-referential" images, on the other hand, that co-presence is simply neither mandatory nor relevant to the image-making process.
This graph only shows the roots of some images. As we know, cinema belongs to both categories and can also combine them in the same image. Moreover, in some of Richard Linklater's movies7 referential images (normal cinema frames) are painted over with a non-referential technique (computer-interpolated rotoscoping). And in motion-capture-based representations, although the final image is non-referential, the movement is referentially based.
Holographic images can also be obtained by computer, digitally coding and creating the interference pattern which contains the holographic information, without the presence of the referent. Incidentally, it may be worth noting that media images are becoming more and more non-referential, as is happening, for instance, in the evolution of cinema8 and video.
The audiovisual cinematic media also simulate through movement, with a stream of sequential static images (frames per second), and through sound (stereophony, quadraphony, holophony). In the acoustic field, besides architecture, which historically deals (or should deal) with the acoustic issues of the environment9, simulation is relevant in music.
In a musical piece the sound front allows us to locate the sound sources spatially, and effects like echo and reverb define the spatial location and proximity of the sounds. This is commonly done in recording studios and on stage as well. In the music industry, sound synthesis, acoustic effects and the possibilities of the multitrack recording studio literally build the sound of an artist, creating a sound space which is totally independent from the real, synchronous and direct dimension of the live concert.
However, simulation can involve more senses than sight and hearing. Smell and taste can be chemically simulated by creating the molecules responsible for olfactory information, copying existing perfumes and aromas or inventing new ones, with applications ranging from the perfume industry to the food market, from communications to marketing, from security and anti-terrorism to robotics…
As for the sense of touch, virtual reality, telepresence and some videogame platforms can use touch or haptic inputs, and a growing number of smart communication devices are designed with touch screens, enabling a sort of haptic response and touch feedback, although in a limited way.
So simulation plays a pivotal role in communication and creation. The simulation of aspects of the so-called "real", or, better, the modelling of those aspects closest to the model of that "real" we hold or build in our minds, seems crucial in representing and creating. Since simulation is a mix of perception and experience, of innate and acquired elements, it is historically and culturally inflected. And this simulation-based symbolic dimension is a constantly expanding and restructuring universe that often entirely substitutes for what we call "the real physical world".
Simulation and behaviour
But there are also other kinds of simulation, which could be called "behaviour simulation". In McLuhan's interpretation of the myth of Narcissus, Narcissus stares at his reflected image but believes it is another person, a real person he falls in love with, and he enters a state of narcosis, a continuous loop, numb and fascinated. So the nymph Echo, who is in love with Narcissus, tries to divert his attention by repeating fragments of his own speech to him: she simulates his behaviour. This beautiful story is indeed very common in human relationships. Isn't this just what we do when we fall in love? Mostly without being aware of it, we simulate some non-verbal (postures, gestures, prosody…) as well as verbal (phrases, words…) behaviours of the loved one, since this helps us to bridge the distance and to capture her or his attention and favour. And, more broadly, isn't this just what we do when we try to attract the interest of (or wish to be accepted by) another person? In social psychology this is called the "chameleon effect".
A key role in human social and interpersonal interaction is played by mirror neurons. This system activates a mirroring mechanism of resonance and inner motor simulation – an "embodied simulation", as Vittorio Gallese, a member of Giacomo Rizzolatti's team, which discovered mirror neurons, calls it10 – which enhances intersubjectivity and social relations and helps us literally to read the minds of others.
Simulation is crucial in the technoscientific field. According to Louis Bec, "after the advent of the cognitive sciences, informatics, Artificial Intelligence, robotics and interactivity, it is possible to simulate and model more and more complex behaviours and perform them"11. The artefacts and machines we invent stem from the use of symbolic intelligence and often, as in the case of Artificial Intelligence, try to simulate it. Although today the aims of AI are far less ambitious and more delimited, the original aim, based on the claim that human intelligence could be precisely described, was to build intelligent machines.
The "Dartmouth proposal", perhaps the birth document of AI, states that "[…] every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."12
Artificial Life13 is a field of study and an art form which seeks to decipher the basic principles of life and implement them in a simulation (or modelization14). A-Life examines systems related to life, its processes and its evolution through simulations using computer models, robotics and biochemistry. According to Christopher Langton, A-Life "[…] studies 'natural' life by attempting to recreate biological phenomena from scratch within computers and other 'artificial' media. A-life complements the analytic approach of traditional biology with a synthetic approach: rather than studying biological phenomena by taking living organisms apart to see how they work, we attempt to put together systems that behave like living organisms."15
Robotics, which today mixes with biology (biorobotics), has a history which dates back to the Greeks and today involves many applications. Unlike the original AI, robotics has a bottom-up approach: from simpler structures and behaviours it builds increasingly complex “machines”.
It simulates not only by creating entities with a human-like structure, shape and interaction (anthropomorphic robotics), but also in a more general behavioural field. The "living" is the best model for making tools, machines, artefacts, devices, organisms and entities which must survive damage, errors, defects and viruses, work autonomously in, and adapt to, many environments, and interact with unexpected situations and hitches, as the living normally does.
The living is the best model because it has demonstrated its efficiency over the last four billion years of evolution. It already knows these problems because it has embodied them since the dawn of its evolution: the best strategy is inscribed in the organization, behaviours and strategies of the living because it already has experience of the world. Sometimes certain complex behaviours of living organisms may spontaneously emerge in robotics, A-Life and synthetic biology, showing a sort of "third life" in evolution.
Some final questions
So simulation, in its many varieties, is a constant presence in human culture: humans have always simulated nature, the living and the world they live in, in the arts, the sciences, the technologies… But maybe simulation is at the core of evolution itself, beyond the human realm. Apart from the mirror neuron system, which is also present in other species, what about those animals whose fur changes colour with the seasons, or species' mimicry within their environments?
And, in the end, couldn't the process of natural selection, the "survival of the fittest", that is, the survival of the best adaptation to the environment, be interpreted as a kind of simulation?
The recurrence of simulation in the anthroposphere could be taken as confirmation that we, and all that we build, are nature. And, to a more general extent, the presence of simulation in nature could lead us to a higher level: nature simulating nature. More questions emerge. Could simulation, in all its forms, also be considered a universal presence, like a sort of cosmic background radiation? Could it emerge as a compatibility tryout, as a mutual adaptation, as an exchange of information, as a flux of energy among systems? Could it be considered, at any level, as a consequence of interaction?
[This paper was originally presented at the International Conference “Consciousness Reframed 10 – experiencing [design] – behaving [media]”, Munich, MHMK, University of Applied Sciences, November 19 – 21, 2009]
- Among others: Jean Baudrillard, Simulacres et Simulation (Paris: Galilée, 1981); Philippe Quéau, Éloge de la simulation (Seyssel: Champ Vallon-INA, 1986); Gianfranco Bettetini, La simulazione visiva. Inganno, finzione, poesia, computer graphics (Milano: Bompiani, 1991); Domenico Parisi, Simulazioni. La realtà rifatta nel computer (Bologna: Il Mulino, 2001). [↩]
- Bettetini, La simulazione visiva, VII-VIII. [↩]
- Leon Battista Alberti, De pictura (1435-36). The text in the vernacular and in Latin is available here: http://www.liberliber.it/biblioteca/a/alberti/de_pictura/html/index.htm [↩]
- Erwin Panofsky, Die Perspektive als “symbolische Form”, in “Vorträge der Bibliothek Warburg” (Leipzig-Berlin: Teubner, 1927); Decio Gioseffi, Perspectiva artificialis. Per la storia della prospettiva, spigolature e appunti (Trieste: Università degli Studi di Trieste, 1957); Omar Calabrese, La macchina della pittura (Bari: Laterza, 1985). [↩]
- Referent in semiotics is “the thing that a symbol (as a word or sign) stands for” – from the Merriam-Webster online dictionary, http://mw1.m-w.com/dictionary/referent [↩]
- On referentiality in photography: Roland Barthes, La chambre claire: note sur la photographie (Paris: Cahiers du cinéma/Gallimard/Seuil, 1980); Jean-Marie Schaeffer, L’image précaire (Paris: Seuil, 1987). [↩]
- Waking Life (2001) and A Scanner Darkly (2006). [↩]
- Lev Manovich, The Language of New Media (Cambridge: MIT Press, 2002), 307-308. [↩]
- Barry Blesser and Linda-Ruth Salter, Spaces Speak, Are You Listening? Experiencing Aural Architecture (Cambridge: MIT Press, 2007). [↩]
- Vittorio Gallese, “Neuroni specchio e intersoggettività” (paper presented at Neuroni Specchio. La relazione empatica tra Scienza, Filosofia, Arte e Cura, Ferrara, 2008), 15, 18. On mirror neurons: Giacomo Rizzolatti and Corrado Sinigaglia, Mirrors in the Brain: How Our Minds Share Actions, Emotions, and Experience (Oxford: Oxford University Press, 2008); Marco Iacoboni, Mirroring People: The New Science of How We Connect With Others (New York: Farrar, Straus and Giroux, 2008). [↩]
- Louis Bec, “Les Gestes Prolongés. Postface,” Flusser Studies, http://www.flusserstudies.net/pag/04/louis_bec_post_geste.pdf, p. 4 (accessed 17/09/09). [↩]
- John McCarthy, et al., “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” Dartmouth Summer Research Conference on Artificial Intelligence (1955). http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html, (accessed 17/09/09). [↩]
- Christopher G. Langton, ed. Artificial Life (Reading: Addison-Wesley, 1989). [↩]
- “Modélisation” is the term used by Louis Bec in the field of AI, for instance in “Les Chromatologues,” http://www.noemalab.org/sections/ideas/ideas_articles/bec_chromatologues.html (accessed 17/09/2009). [↩]
- Christopher G. Langton, “Artificial Life,” http://www.probelog.com/texts/Langton_al.pdf, 1, (accessed 17/09/09). [↩]