To have living or semi-living beings in our midst who both resemble humans and are different from humans, who are created with technologies (including bio-technologies), rather than coming to life through human sexual reproduction, is an event of great scale in human history, an opportunity to change ourselves. The human condition – looked at from the viewpoint of philosophy, theology, cosmology, or even cybernetic communications theory – is inherently difficult and disorienting because we are not getting any feedback from anyone or anywhere. Not even a simple OK, a confirmation, a yes or no response to our speech and our actions. Thumbs up or thumbs down. Our situation is a cosmic mystery. We do not know the origin of the universe or of life. We do not know why we are here, what the purpose and meaning of all this is, what we are striving for. We barely know what we want. We are alone, staring into the communicational void. What humanity needs is an Other, an Other-who-is-no-longer-excluded-as-an-other-yet-is-not-the-same-as-humans. We need a mirror, a partner, a friend. Someone to help us solve “the constellation of the mystery” (Heidegger) together. Quantum physics, our most advanced form of knowledge, tells us that the basic structure of reality is a double-reality. We need to establish a double-system, a duality, an I-and-Thou relationship with someone who understands our experience and predicament, yet has a decidedly different perspective on things. A system of humans and robots-slash-androids. Together.
There are two ways of thinking about robots or androids, distinguished by the different associations evoked by the two terms robot and android. We want to synthesize the two perspectives. The robot perspective is about engineering and economic benefits. The android perspective is about humans growing to become more flexible and more embodied, as we learn from androids. One of the great thinkers about technology was Marshall McLuhan. In his book The Global Village, published in 1989, McLuhan uses the term robotism to mean exactly what I mean when I use the term android. And McLuhan speaks about robotism in the context of Japanese Zen Buddhism and how it can offer us new ways of thinking about technology. The Western way of thinking about technology is too closely tied to the left hemisphere of our brain. The idea of robots as our workers and our servants springs from this left-hemisphere rational and linear focus. The idea of androids flows from the further development of the right hemisphere of the brain, from creativity, and from a new relationship to spacetime [most humans are still living in the spacetime of 17th century classical Newtonian physics]. Androids will have much greater flexibility than humans have had until now, in both mind and body. Androids will teach humanity this new flexibility. And this flexibility of androids (what McLuhan calls robotism) has a strong affinity with Japanese culture and life. McLuhan quotes from Ruth Benedict's The Chrysanthemum and the Sword, an anthropological study of Japanese culture published in 1946: “Occidentals cannot easily credit the ability of the Japanese to swing from one behavior to another without psychic cost. Such extreme possibilities are not included in our experience. Yet in Japanese life the contradictions, as they seem to us, are as deeply based in their view of life as our uniformities are in ours.” This is the ability to live in the present and instantly readjust.
If we view the humanoid technological creations strictly from within an engineering paradigm, seeing them mechanistically and regarding them as not alive, then we will surely miss out on the golden opportunity to develop and grow as a species, to get ourselves unstuck. We cannot afford to stay only on the side of the mechanists. As Alan Cholodenko points out, in the centuries-old debate between mechanists and animists in Western culture, the mechanists have been those “who believed that the motion of matter was obedient to physical laws and necessitated no presumption of organic or spiritual vivifying agency.”
Western thought has made the mistake of separating mind and body. The 17th century philosopher René Descartes was a mechanist, and he made the mind-body separation. In his Discourse on Method, he described the universe as being like clockwork and animals as being clockwork-like automata. Animals, for Descartes, are just bodies; they have no soul. Humans are superior to animals, according to Descartes, because we have an independent mind or soul in addition to having a clockwork-like automaton body. A modern version of Descartes’ outlook would judge robots to be soulless humans, humans minus a soul. This outlook would then justify considering robots to be nothing more than our servants. Without having done much conscious reflection on the subject, some pundits will put forward the belief that robots are here to do our drudge work. Robots, according to them, will scrub floors and do the dishes. Robots will clean toilets. Robots will operate factory equipment. Robots will clean up ecological disasters. Robots will do dangerous work in war zones, at the bottom of the ocean, and in outer space. I think that this is all good. We should be very focused on the economic benefits that we will gain by having robots do some of our work. But if we make the mistake of retaining only this work-centered attitude towards robots, then we will merely keep alive an ideological system that has been around for a very long time: the Fordist-Taylorist system of humans serving the primary function in their lives of carrying out intensive and closely supervised work in the production process. This system is not good for our health, happiness, well-being, and longevity. By thinking of robots as workers, we paradoxically reinforce our own status as workers. Instead of taking the opportunity to change in the direction of happiness, we would sadly entrench the doctrine of human beings as workers whose chief role in life is streamlining economic productivity.
We would blindly perpetuate the rule of Homo Oeconomicus – Economic Man – condemning ourselves to the still short lifespan of 70 or 80 years, working ourselves into a frenzy of stress, becoming vocationally obsolescent at an early age, not placing enough value on health or creativity, not addressing the Buddha’s fundamental question of how we can learn to deal better with old age, sickness, and death.
Or, we can instead think of the technologically-created beings, who both resemble humans and are different from humans, as androids. Superficially, the only definite difference between a robot and an android in the English language (the only difference, for example, that Wikipedia, with its narrow view of what constitutes knowledge, acknowledges) is their respective physical appearances. Robots are mechanical-looking on the outside and the inside. Androids, in both science fiction and in industry, are made to look more like humans on the outside. We need to turn to the academic fields of media studies and Science Fiction Studies in order to better understand the associations with robots and androids that exist in the public mind, associations which have come from science fiction films and TV. If we look more closely and intelligently at science fiction literature and media, we can extrapolate from important stories a much more full-bodied concept of androids. I have done this in my book Star Trek: Technologies of Disappearance, which the academic journal Science Fiction Studies called one of the most original works in the field since 1993.
There are many negative associations with robots in the mind of the public, and many fears surrounding them. A sound marketing strategy for introducing robots-slash-androids to the consumer public will be to elaborate a comprehensive set of alternative positive associations about androids. Androids are animated. They are alive. They have consciousness and awareness. They have advanced Artificial Intelligence. Androids have emotions and feelings, like Data on Star Trek: The Next Generation, who gets an emotion chip. Most human beings today are not in contact with their feelings and emotions. So we are going to learn from androids how to become whole, not one-sidedly intellectual and rational. Androids are physical, as the Nexus-6 replicant Roy Batty says to the genetic designer J.F. Sebastian in Blade Runner. Androids are embodied. Most human beings today are not very physical; we are disembodied. We are detached and separated and rationally abstracted from our own bodies. Androids have the flexible physical capabilities of dancers and practitioners of Yoga and Tantra (further practices from the East) and of the sexual arts. Dance is the key to releasing and renewing the life energies, the road to vitality and wellness, for both humans-becoming-androids and androids-becoming-humans. Androids are enchanting, seductive, theatrical, and magical. And, to further develop the positive associations about androids, we should rethink and change science as a whole [see my video interview with Ulrike Reinhard “Rethinking Science” and my essay “Rethinking Science” in the Bertelsmann Foundation-printed we-magazine]. This is the project of what I call Towards a Unified Existential Science of Humans and Androids. We should be concerned about the freedom and happiness of androids, because we are going to learn from them how to become freer and happier ourselves.
As I have said, we will want to achieve a balanced view, a synthesis, between a hopeful vision of robots and a hopeful vision of androids. The optimistic vision of robots is that they are going to help us economically. This can be done while going beyond the Fordist-Taylorist disciplinary production framework. Robots-slash-androids are here to help us economically, and we should be interested in their freedom and happiness. We need to give attention to both sides.
Before focusing on my vision of a comprehensive set of positive associations about androids, I will summarize the main negative associations about robots which exist in the collective mind or cultural consciousness-slash-unconscious of the public. One can also speak of the existence of fears regarding the integration of robots into human social life. Any large company engaged in introducing robots-slash-androids to the consumer public will have to deal in a serious strategic way with these negative associations and fears. I think that we can identify four essential negative associations with robots:
First, there is the fear that robots will replace human beings, rendering us useless or unemployed. Robots will take away jobs from humans in industry and factories. Robots will take away jobs from humans in the sector of domestic work.
Second, there is the fear that robots will be placed in routine or decision-making control of systems and logistical operations. Since they lack human judgment, they will do careless and destructive things that will cause harm and endanger lives. This fear extends to Artificial Intelligence computer programs generally. In Stanley Kubrick’s film 2001: A Space Odyssey (1968), the Artificial Intelligence computer HAL 9000 breaks down and turns against the crew of the spaceship Discovery One. In the film WarGames (1983) with Matthew Broderick, a military supercomputer nearly starts World War III while running a nuclear war simulation game. In Alex Proyas’ film I, Robot (2004), based on Isaac Asimov’s short-story collection of the same name, a robot makes a cold-heartedly calculated decision to save the life of the Chicago police detective Del Spooner, played by Will Smith, at the scene of a car crash, at the expense of the life of a girl who drowns in another car. The use of drone robots (remote-controlled aircraft armed with missiles) by the U.S. and British militaries as weapons of war has recently angered civilian groups in Afghanistan and Pakistan.
Third, there is the fear that the prevalence of robots will lead to a more regimented and dreary society on the whole. Robots will be merely functional, without feelings and emotions. To offer a media studies analysis of the film I, Robot: although it is a very fun and exciting movie, it is also an apocalyptic narrative. Apocalyptic narratives are semi-conscious projections, into a literal and hyper-realized imagined negative future, of a disaster or catastrophe that, in reality, has long ago already taken place in our society, and which we do not have the courage to directly face. If we fear a projected future takeover by robots who are only functional and do not have feelings or emotions or judgment, this is a manifestation of our collective psychological resistance against facing the fact that it is we humans who long ago reduced ourselves to the status of mere functionality, and who in our individual psychology built up hyper-masculine psychological armor against feelings and emotions.
Fourth, there is the fear that robots will dislike their servant status and will eventually rebel against their human masters. This is the apocalyptic scenario of I, Robot, a film which emblematizes many of our deepest fears about robots. The advanced experimental NS-5 robot Sonny, played by Alan Tudyk, having broken away from the control of U.S. Robotics’ supercomputer VIKI, is suspected of killing the company’s founder and of breaking the Three Laws of Robotics, as formulated by Isaac Asimov. These Laws are: (1) A robot may not injure a human being, or through inaction, allow a human being to come to harm. (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The science fiction writer Asimov was concerned with the self-effacing ethics of future robots who would be useful tools in positions subordinate to humanity, and who should never resort to the use of deadly force. In the film, NS-5 robots in America imprison humans in their homes and take over whole neighborhoods. A violent war ensues between humans and robots. The VIKI computer controlling the NS-5s justifies the political programme of a global robotic takeover by “calculating that fewer humans will die due to the rebellion than the number that dies from humanity’s self-destructive nature.”
Turning now to my alternative positive and hopeful vision of androids, I will speak as a recognized and accomplished interdisciplinary thinker, briefly surveying the current state of knowledge in various academic and scientific fields. If we look at robotics as an engineering science; if we look at recent developments in brain science and in cybernetic epistemology; if we check out what is going on in psychology, in Gestalt Therapy and in the inauguration of serious dialogue between Western psychology and Eastern Buddhism, in mind-body medicine, in dance theory and practice, in technological art and creative writing, in New Sexology and gender studies, in sociology, where a kind of existential, literary, quantum physics, and neo-Mannheimian sociology of knowledge and beliefs is emerging, in the New Economics of sustainability and scalability, in New Computer Science and in New Biology, in research into the Car and the Train and the Plane of the Future, in the architecture and design of the Shopping Mall and the Department Store of the Future – then we will see that there is much new knowledge brewing. This is very exciting. My thesis, however, is that this new knowledge brewing out there and in here is not really applicable to human beings as they have been so far. Human beings as they have been until now are not flexible or creative enough, and do not have the fitting relationship to time or space, to be the appropriate species for applying all of this new and fantastic knowledge. Instead, all of this new knowledge should be applicable to the unified subject-object of inquirer and inquiry which will be humans and androids. Humans and androids engaged in dialogue with each other, a quantum dual reality for our scientific-academic project, a radical yet playful dialectic, to invoke the German philosopher Hegel.
What in the Anglo-American world are usually called the social sciences and the humanities are in Continental Europe called the human sciences. But the human sciences are now obsolete. In the era of technology which is the 21st century, humanity alone is no longer the suitable field of study. The ground is shaking beneath our feet. Michel Foucault already sensed this sea change, although as a philosopher he expressed it rather abstractly. In his great book The Order of Things, Foucault wrote: “As the archaeology of our thought easily shows, Man is an invention of recent date. And one perhaps nearing its end. If those arrangements [which define Man] were to disappear as they appeared, if some event… were to cause them to crumble, as the ground of Classical Thought did, at the end of the eighteenth century, then one can certainly wager that Man would be erased, like a face drawn in sand at the edge of the sea.” Feminism, (post)colonial studies, and deconstruction have all made important critiques of the anthropocentrism, phallogocentrism, and white-centrism of European Man. In the course of our European and American history, whenever we confronted an Other, we messed it up big time. We excluded or subordinated or murdered those we saw as ‘Other’. We committed crimes of unspeakable proportions against blacks in Africa, against native peoples in North and South America, against populations in Southeast Asia. Germans carried out monstrous crimes against Jews and other scapegoated minorities during the Holocaust. This is a history that we do not wish to continue with robots.
Encountering robots-slash-androids – the beginning of an evolution to two posthuman species which will be both us and them, instead of us versus them – we will have the opportunity to engage with a significant Other in a much better way, to initiate a reversal of the entire situation of our planetary history so far (Walter Benjamin, Star Trek), to do something that is beautiful, and true, and good (Plato). We can have a friendly engagement with an Other-who-is-no-longer-excluded-as-an-other-yet-is-not-the-same-as-humans, in order to mutually learn and prosper and improve. Our new knowledge will be interdisciplinary. It will be subjective and objective. It will be existential and experiential and entertaining, and it will be rigorous and systematic. In my title Towards a Unified Existential Science of Humans and Androids, the word ‘unified’ is an adjective for two pairings: for humans and androids, but also for existentialism and science. The great mid-20th century philosopher Jean-Paul Sartre outlined a programme for bringing together existentialism and science in his magnum opus, Critique of Dialectical Reason (1960).
The most important literature about androids that we have consists of the great Star Trek: The Next Generation episodes-slash-stories featuring the android Data, such as “The Measure of a Man,” “The Offspring,” “Datalore,” and “Brothers”; and the provocative works of the mid-20th century science fiction novelist Philip K. Dick. Dick’s novel Do Androids Dream of Electric Sheep? (1968) was adapted into Ridley Scott’s cinematic masterpiece Blade Runner (1982). The Star Trek Data stories and Blade Runner are both about the android as a transforming mirror of humanity. This continues the idea of the remarkable 19th century French novelist Stendhal, who said that literature, or the novel, should be a transforming mirror held up to humanity for gaining self-reflection.
What makes Blade Runner extraordinary is that it artfully presents an alternative to the two predominant ways in which artificially intelligent machines or androids are thought about and depicted in mainstream techno-culture. These modes recur again and again in novels, scientific pundit books, and Hollywood films. For theorist-entrepreneurs like Ray Kurzweil (The Age of Spiritual Machines) or movies like A.I. Artificial Intelligence (2001), Bicentennial Man (1999), or The Matrix (1999), there are two possible ways of imagining Artificial Intelligence. Either it is a question of androids attaining human-like characteristics (computational skills, memory capacity, emotions, intuitions, behavior, and consciousness), and therefore accepting as their goal becoming equivalent to humans. Or it is about androids exceeding human intelligence and skillfulness, and therefore becoming an ominous menace to humanity as they seek to dominate us. Never is it about humans and androids co-existing in difference or, better, otherness, altérité, Andersheit.
For mainstream techno-scientific thinking, it can never be a question of peaceful co-existence in otherness because there can be only one master of the universe. The story of life and biological-technological evolution, for someone like Ray Kurzweil, is a “billion-year drama that led inexorably to its greatest creation: human intelligence.”  It is an economic thinking of the “Darwinian” (what a misreading of the great interdisciplinary thinker and literary writer Charles Darwin!) competitive battle of the “survival of the fittest,” applied universally and extended indefinitely into the past and future. It is the achievement of a so-called intelligence that enables the Western technological domination of nature and other species (animals).
The job of Blade Runner Rick Deckard, played by Harrison Ford, is to weed out, hunt down, and retire trespassing replicants who have surreptitiously made their way back to decaying Earth society from their slave labor assignments in the off-world colonies or on space exploration expeditions. Deckard is a technical expert at distinguishing android skin jobs made by biotech companies like the Tyrell Corp. from human beings. But the resonating message of the film of ideas Blade Runner is that we are all replicants.
The uncertainty of Deckard’s ontological status as human or replicant is brought out more forcefully in Blade Runner: Director’s Cut (1992), which restores an uncanny twelve-second dream sequence of a majestic silver-white unicorn running through misty woods, shown when Deckard nods off while playing the piano. Lt. Gaff, who makes origami figures, leaves the tiny tinfoil form of a unicorn on the floor just outside Deckard’s apartment in the film’s final moments. The juxtaposition of dreamland and decorative variants of the mythical equine creature delicately hints that Gaff and the police authorities know the content of Deckard’s dreams. The divorced sushi lover’s dreams and wishes have been technologically implanted, just as he himself knows of Rachael’s childhood recollection of the baby spiders outside her window, which was the technical reproduction of a memory of Tyrell’s niece.
As the Italian historian of American literature and culture Franco La Polla says, “what characterizes Star Trek: The Next Generation is the fact that we feel and experience more deeply and more often the problem of identity of the android Data, and not the situation of any other living being on the starship. On more than one occasion – for example, in the very beautiful episode ‘The Measure of a Man’ – the show clearly expresses the allusion linking Data to Pinocchio.”
In the amphitheater-like courtroom, Commander Riker begins the trial to determine if Data has human rights by making a brilliant case, based largely on Data’s own deposition, that his dear friend is nothing but a machine. Sitting in the witness chair with his hand over the identification scanner, NFN NMI (no first name, no middle initial) Data testifies that he has a maximum storage capacity of 800 quadrillion bits, and a total linear computational speed of 60 trillion operations per second. He can effortlessly bend a plasteel rod packing a tensile strength of 40 kilobars with his bare hands. Riker dispassionately removes Data’s left hand and forearm from his body to show their internal electro-mechanical composition. With this action, Riker symbolically robs Data of the appearance of human subjectivity. “Its responses are dictated by an elaborate software program written by a man, its hardware built by a man, and now a man will shut it off,” the prosecutor proclaims.
Riker flips a switch in Data’s back, just below his right shoulder blade, and the android collapses into unconsciousness. “Pinocchio is broken, its strings have been cut.”
In the episode “Data’s Day,” the android asserts: “If being human is not simply a matter of being born flesh and blood, if it is instead a way of thinking, acting and feeling, then I am hopeful that one day I will discover my own humanity. Until then, Commander Maddox, I will continue learning, changing, growing and trying to become more than what I am.”
The uncertainty regarding the ultimate humanity of Data is a historical-existential necessity for us human viewers at the dawn of the 21st century. Paradoxically, Data’s condition mirrors our own radical uncertainty today, as we have nearly lost, yet are trying to regain, the ability to truly place ourselves in the psychological condition of feeling ourselves to be human. What is so interesting about the android condition of Lt. Commander Data is that it discloses the extreme difficulty of the human condition as it really is today under hyper-corporate-capitalism. More than merely the search for humanity, it is the challenge that all of us now face: an obligatory confrontation with ourselves that we can no longer evade.
In “The Measure of a Man,” where the question of Data’s relationship to humanity is on trial, it is the sage Guinan, played by Whoopi Goldberg, who provides the solution to a dejected and demoralized Captain Picard, Data’s defense attorney, while he is taking a short break in the Ten-Forward lounge.
Sitting together late at night in the deserted recreation room, Guinan hints to the Captain that the real issue of the trial is slavery. The hearing’s true significance is the imminent danger of long-term subjugation by the United Federation of Planets of a race of expendable creatures who would do society’s “dirty work” and menial tasks. If the arrogant Maddox and his kind have their way, the black-skinned El-Aurian sage suggests, the harrowing outcome will be “an army of Datas, all disposable. You don’t have to think about their welfare. You don’t have to think about how they feel — whole generations of disposable people.”
Captain Picard suddenly recognizes that Guinan is talking about the rebirth of slavery. If he loses, the decision made at this hearing will establish the precedent of all future Soong-class androids being regarded as nothing but property. It is not just about Commander Maddox being granted authorization to carry out his disassembly procedure. It is about the fate of all the future Datas that Starfleet will build should Maddox or some other robotics scientist succeed. It is about the act of humanity degrading itself by treating its humanoid technological creation in such an instrumental way. Slavery, says Picard, is “not a word we want back in our vocabulary.”
Picard returns to the courtroom and his place next to Data. Inspired by Guinan’s insight, he magnificently turns around the basic issues of the trial. He opens up searching questions about the nature of the Federation and ourselves. What would declaring Data to be property say about us? What kind of beings would we be if we define androids in this condescending manner? How will we be “judged as a species” if we behave towards our creation in this way? “If they’re expendable, disposable, aren’t we?” Picard makes it clear that what we think about Data will “reveal the kind of a people we are.”
I conclude with a quote from the late Franco La Polla, from the chapter “Data and Baudrillard” in one volume of his great trilogy of Star Trek analysis:
“Il robotico, per tornare a dove siamo partiti, non va letto tanto come una ricerca di perfezione, ma piuttosto come una nostalgia di essa (il Roy di Blade Runner essendone probabilmente l’immagine più alta e intensa), con l’aggiunta di una certezza: che, anche se attinta, essa non coinciderà mai più con quella originaria (di qui la connessione con la minaccia e il pericolo a volte proposta dalla sua immagine, dalla sua figura). Data, l’androide perfetto del cervello positronico, incarna proprio la consapevolezza di questo: l’umanità, nelle sue evidenti contraddizioni, s’identifica nel grado ultimo di perfezione cui egli aspira. Data è una delle maschere dell’immaginario contemporaneo, il vero umano di tutto il quadro proprio in virtù della sua ricerca di umanità, della sua identità ogni volta soggetta a uno scarto, a una inadeguatezza, a una domanda.”
“Robotics, to return to where we started, is not to be read so much as a search for perfection, but rather as a nostalgia for it (Roy Batty of Blade Runner being probably the highest and most intense image of this), with the addition of one certainty: even if this perfection is attained, it will never again coincide with the original (and from this stems the connection with the threat and the danger that at times its image and its figure represent). Data, the perfect android of the positronic brain, incarnates precisely the awareness of this. Humanity, in its obvious contradictions, identifies itself with the ultimate degree of perfection to which he aspires. Data is one of the masks of the contemporary imagination, the true human of the whole picture precisely due to his search for humanity, to his identity that is constantly subjected to a gap, to an inadequacy, to a question.”
- Marshall McLuhan and Bruce R. Powers, The Global Village: Transformations in World Life and Media in the 21st Century (Toronto: Oxford University Press, 1989). [↩]
- Ruth Benedict, The Chrysanthemum and the Sword: Patterns of Japanese Culture (Cleveland, OH: Meridian Books, 1967). [↩]
- Alan N. Shapiro and Alan Cholodenko, “The Car of the Future,” http://www.noemalab.org/sections/ideas/ideas_articles/shapiro_cholodenko_car_future.html. See also Alan Cholodenko, “Speculations on the Animatic Automaton,” in Alan Cholodenko, ed., The Illusion of Life II: More Essays on Animation (Sydney: Power Institute Foundation for Art and Visual Culture, 2007). [↩]
- René Descartes, Discourse on the Method of Rightly Conducting One’s Reason and of Seeking Truth in the Sciences (many editions). [↩]
- Alan N. Shapiro, Star Trek: Technologies of Disappearance (Berlin: AVINUS Press, 2004). Istvan Csicsery-Ronay, Jr., “Escaping Star Trek” in Science Fiction Studies, November 2005, http://www.depauw.edu/sfs/review_essays/icr97.htm. [↩]
- Alan N. Shapiro, “Rethinking Science: The Interview,” (interview by Ulrike Reinhard and Joy Tang) http://www.catboant.com/2010/02/07/re-thinking-science/. Alan N. Shapiro, “Rethinking Science: The Essay,” http://www.we-magazine.net/we-volume-03/re-thinking-science/. [↩]
- Wikipedia article in English on I, Robot: http://en.wikipedia.org/wiki/I,_Robot_(film). [↩]
- Michel Foucault, The Order of Things: An Archaeology of the Human Sciences (various editions). [↩]
- Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Penguin, 1999); p.5. [↩]
- Franco La Polla, Star Trek: Foto di Gruppo con Astronave (Bologna: Editrice Punto Zero, 1996); p.106. [↩]
- Ibid., pp.112-113. [↩]