Happy Birthday, David (link), a promotional short film made for Ridley Scott’s Prometheus (2012), begins with a question: What is it about robots that makes them so robotic? It is a darkly satirical piece, intelligently led by Michael Fassbender as David, the first android who (or ‘which’, depending on one’s philosophical position: I am committed to a view that obliges me to regard a higher intelligence of artificial origin with fully developed agency as a person, for the reasons I have discussed in my article on Ex Machina) is indistinguishable from humans except for one fact: he has surpassed his creator in all aspects, being physically, cognitively and intellectually superior to the human species.
David, formally known as David 8, designed by Peter Weyland and manufactured by Weyland Industries, is capable of accomplishing almost anything he is directed to do; in the film, he explicitly states that he is designed to perform tasks which his human counterparts find distressing and/or unethical. He is supremely dependable: he performs every task with a precision and efficiency of which mere mortals can only dream. And he does so without disrupting the work environment: to this specific end he is programmed to ‘understand human emotions’ so that his human colleagues feel comfortable interacting with him. Whilst David can ‘express’ human emotions, he is not affected by them: he can remain calm and objective in charged situations. Still, despite his obvious superiority, David is remarkably ‘human’. So much so that his presence induces a certain level of discomfort in some. For instance, Dr Holloway (Logan Marshall-Green, Prometheus) seizes every opportunity to make a point with his particularly condescending manner toward his synthetic counterpart: David, despite the appearance, is nothing but a robot, a useful commodity which is extremely capable yet expendable nonetheless. Yet the appearance should not be underestimated: for, ultimately, we humans are only able to grasp what appears to us. Consider the way we ‘understand’ what and how someone feels. We judge someone’s affective state by processing her/their/his performance of a certain concept: a ‘sad’ facial expression is only judged as such because we have learnt to take certain reactions as expressions of a particular affect. And the concept of ‘sadness’ in turn is a linguistic concept: a certain feeling is understood as ‘sadness’ because we have a corresponding word to represent such a concept. This is due to a simple fact: to have a clear conceptual understanding of something, one must be able to put it into words.
Such a correspondence between a certain performative clue and someone’s affective state is by no means universal: it is dependent on Forms of Life. Yet this non-universal nature of a linguistic concept is all too easy to ignore; a specific way in which a group of people operates becomes a second nature, a mother tongue, if you will. Hence it takes considerable effort and intellectual commitment to think of something external to one’s accustomed Form of Life. Since we have no access to someone’s mental state, we live by a certain Form of Life, of which we also die. Therefore, if a higher intelligence of either ‘natural’ or artificial origin correctly processes such a concept and acts accordingly, we cannot help but take it at face value, on the grounds that we can only judge the mental state of an agent based on her/their/his performance.
Thus, despite Holloway’s relentless micro-aggression, David blends in satisfactorily with his human colleagues: he is flawlessly built to this end, and he is fully self-aware of this fact and its effect on humans. On one occasion, David calmly responds to Holloway’s sneer by stating that he is programmed to be/to act like a human because ‘you people are more comfortable interacting with your own kind’. He occasionally demonstrates a subtle sense of humour and outsmarts his human interlocutors in a tussle of wits. Given his superior capability and polished manners, David not only appears and behaves like a human: he in fact represents the pinnacle of human potential. Just as the ancients ‘created’ gods in their own images yet imagined them to be far superior beings, David, a human creation, possesses every quality we wish for our fellow humans. These synthetic ‘humans’ are intellectually, cognitively and physically far more capable than humankind, and they generally conduct themselves to a higher standard than most humans are willing to: polite and modest, free of egomania, they are generally far more pleasant company and more helpful colleagues than their human counterparts. Interestingly, these qualities are precisely what we wish not for ourselves but for the people with whom we interact. We generally love to be in the company of a thoughtful, respectful, trustworthy, dependable and capable individual who does not demand the same from us. They will satisfy our every whim no matter how senseless and troublesome it is. This phenomenon is unsurprisingly common: a parent/teacher/master encourages her/their/his child/student/disciple to be ‘perfect’ and selfless whilst insisting that her/their/his charge accept the mentor/guardian’s flaws as constitutive imperfection, that is, the proof of her/their/his own precious ‘humanity’.
This phenomenon points to a damning fact: David makes humans completely obsolete by showing how we render the very notion of ‘humanity’ a pathetic device for masquerading our countless flaws as signs of ‘superiority’, thereby denying ourselves possible self-improvement. For a venal personality like Holloway, this must be an unnerving fact to observe.
Yet, of all people, Peter Weyland (performed by Guy Pearce in Prometheus and Alien: Covenant) categorically refuses to accept David as his successor. To this visionary inventor, David is ultimately a mere means, and he does not deserve to be treated as an end in himself. It is important to note Weyland’s ambivalence toward David: whilst he is extremely proud of himself as the creator of David 8, to the extent of allowing his latest creation to address him as ‘father’ (Alien: Covenant, Prologue), he also demonstrates a curious mixture of cold disdain and burning jealousy toward his ‘son’. Weyland envies David with a pointed keenness that makes his general feeling toward his ‘son’ indistinguishable from hatred on occasion. Whilst his discomfort in David’s company may remind us of Holloway’s pettiness toward the android, Weyland’s attitude toward his own creation is not a mere reaction; there is a philosophical confusion baked into it. Whilst Weyland is proud of David’s superior capabilities in all aspects, there is one quality of his which Weyland cannot bear: immortality. As arrogant as a person can be, Weyland suffers from what one might call a ‘God complex’: he is not satisfied with being a wealthy entrepreneur and/or the man who has defined the future of humankind by creating the first synthetic ‘human’, who neither ages nor dies of natural causes. The very reason he was motivated to give birth to such a being is that he wanted to be on a par with ‘God’, the creator. Whilst Weyland cannot possibly create the world itself out of nothing, he was able to invent a higher intelligence of artificial origin in the likeness of humankind. Whilst he succeeded in becoming the first person to achieve this feat, he is still a human: a mere mortal. And so long as Weyland remains so, he is not equal to ‘God’.
In order to ‘overcome’ this very human condition, like the Nexus 6 Replicants of Blade Runner, Weyland seeks the help of his ‘makers’ based on a grotesquely optimistic assumption: the ‘makers’ are going to readily grant his wish and help him attain immortality. He naïvely pursues his goal by covertly going aboard a spaceship and following the ‘scientific expedition’ he commissioned to Dr Elizabeth Shaw (Noomi Rapace, Prometheus) and Dr Charles Holloway: like Weyland, they both believe that humankind was ‘engineered’ in their likeness by a species of higher intelligence of non-artificial origin. The difference between Weyland and the Replicants is this: the Replicants want to live just a little longer, to become a little more ‘human’; Weyland fervently wishes to live forever and rise to the rank of ‘creator’ by ‘overcoming’ his own humanity. Now that he has created a being who surpasses him in this all-important aspect, he becomes painfully self-conscious of his own mortality. As a result Weyland grows distant and disdainful toward his ‘son’. He vehemently denies David personhood by insisting: ‘it’ is a robot, nothing more.
So what is it that makes David so ‘robotic’? For Weyland, the sole theoretical point justifying his attitude toward David rests upon the notion of ‘soul’. Weyland insists that David is incapable of appreciating his incredible gift, that is, the inability to die of natural causes. As brilliant as David is, he does not even know the ‘meaning’ of being immortal. Whilst David is by no means indestructible in the manner of the Absolute, the inability to die of natural causes is the ‘next best thing’; it is in fact quite remarkable in its own right. Yet, according to Weyland, his creation cannot possibly comprehend why his inability to die is desirable. In support of his assertion, the inventor cites the ‘fact’ that David has no ‘soul’. And this assertion in turn is based on another assumption: a human being by definition has a ‘soul’. This is where we step into a ‘familiar’ yet distinctly murky territory. The notion of ‘soul’ is utterly obscure, and the legitimacy of this concept is highly contentious even in the most charitable manner of speaking. The notion of ‘soul’ in the traditional sense has long lost its credibility: we have no empirical evidence or strong argument to epistemically support this grand metaphysical notion. Yet it is one of many beliefs that persist despite everything. (Whilst a counter-proof establishing the non-existence of ‘soul’ with certainty is no more available than a proof of its existence, this sort of argument simply does not work because: 1) it is circular, therefore utterly nonsensical; and 2) in the tradition of Kant’s Critical Philosophy broadly construed (which includes Heisenberg’s philosophy of quantum mechanics), it is well established that we have no access to the immediate reality, thereby rendering all metaphysical claims nonsensical.) Being an industrialist, Weyland clearly does not believe in the traditional concept of ‘soul’, the alleged substance which supposedly survives the termination of the life-process.
This is evident from the fact that Weyland fanatically seeks immortality: if he believed in the ‘hereafter’, he would not be willing to bet everything so that, with a little luck, he might attain the capacity to live forever with the assistance of his ‘maker’. In short, Weyland understands the means of overcoming mortality as a purely technical/technological problem which we humans cannot solve by and for ourselves. If there is a creator of humankind, Weyland reasons, such a being, with its superior intellectual/technological capacities, must be able to give us the incapacity to die of natural causes, just as Weyland was able to give that very gift to David. This means one thing: Weyland does not believe in the notion of ‘soul’ in the traditional sense. It is impossible for Weyland to believe in such an ‘entity’ whilst refusing to entertain the possibility of David having a ‘soul’, because: if humankind is indeed a higher intelligence of artificial origin, and if such a being can have a ‘soul’, then either David, another higher intelligence of artificial origin, has a ‘soul’ just as we do, or, just like David, we have no ‘soul’ at all. If one cannot have a ‘soul’ simply by virtue of being alive as a higher intelligence, and if a ‘soul’ is not a metaphysical entity of obscure nature that allegedly survives the termination of the life-process, then Weyland must mean something quite different from the traditional notion of ‘soul’ when he utters this word. Yet this word represents the idea which is central to his understanding of what humankind is, and of what differentiates us from robots. Then what does he really mean by the word ‘soul’?
For all we know, it appears that Weyland’s view marks a radical break from the traditional concept of a ‘soul’ that survives the termination of the life-process. Various religions claim that a ‘soul’ survives physical death and thus constitutes the ‘essence’ of human existence. Religious doctrines near-unanimously treat this ‘fact’ as the basis of some form of theocracy, whose legitimacy typically rests upon the following statements: 1) death is not the end for us, and we will thereafter face the reckoning; yet 2) the Absolute granted the powers-that-be the ‘earthly authority’ to govern in order to ‘save’ sinners by correcting their ‘wrongs’ through the praxis of ‘divine law’. Despite opposition to such theological claims, the same metaphysical notion regarding the human ‘soul’ can still be detected in the Early Modern period. René Descartes replaced ‘soul’ with his novel concept of Mind, which also survives the destruction of Body. Whilst Descartes’ philosophy does not attempt to establish a set of commandments or to make a theological claim about the ‘afterlife’, he maintained that each of us is immortal because the Mind of each person is a substance which is by definition indestructible. It is clear that Weyland does not subscribe to the idea of immortality in an afterlife: for the industrialist inventor, immortality is only possible through technological means which nullify the natural process of physical decline that results in death. In Weyland’s world, there is no role to play for a mythical entity such as ‘soul’ or Mind that survives ‘physical death’. Then, despite Weyland’s insistence, there seems to be no difference between humankind and a synthetic humanoid. If humankind is indeed the technological product of ‘makers’, and if a desired modification to the physical make-up of humankind, that is, making it immortal, can only be attained by technological means, then Weyland must accept the fact: humankind is a higher intelligence of artificial origin, just as David is.
And yet Weyland persists in his assertion that David has no ‘soul’ and thus is incapable of appreciating his immortality. At this point, the only possible meaning of having a ‘soul’ for Weyland is a certain kind of existential self-awareness that enables someone to be acutely conscious of one’s mortality. According to Weyland, David is incapable of this specific kind of self-awareness. The reason for his argument is quite simple: the gift of immortality is a given for David, and thus there is no ground for him to contemplate what mortality is, or what meaning it might add to the existence of intelligent finite beings, in the first place. Whilst Weyland’s argument is likely to strike a chord with us mere mortals, there is more to it than meets the eye. If Weyland is correct in thinking that humankind could be re-engineered to become immortal, there is a distinct possibility that we would forget the existential fear and trembling ‘in time’. Whilst we might remember the fear of death with a hint of nostalgia, it is unlikely that we would retain such a feeling indefinitely: just as we forgot the fear of fire, we would forget the fear of the inevitable physical decline that ends in death. Eternity is an endless duration which renders our existence entirely timeless: it is inevitable that we would eventually come to see our inability to die of natural causes as a given. This means: the precondition of being soulful as understood by Weyland is contingent, not necessary. Thus, at this point of the inquiry, it is safe to say that Weyland’s argument cannot hold its ground: the distinction between humankind and an advanced humanoid is arbitrary. Yet, again, none of us is willing to respect David’s agency: he is made to serve as a robot.
The above observation gives a different meaning to the question: What is it about robots that makes them so robotic? The answer is: it is not the way they are that makes them robotic; it is the way we intend them to be. By calling David a ‘robot’ rather than an android, Scott once again demonstrates his outstanding artistic intuition: the word ‘robot’ has a particular literary connotation, and it highlights a few relevant points that open up a broader set of questions about humankind and its relation to human creations. The word ‘robot’ was first used by the Czech writer Karel Čapek in his play R.U.R., or Rossumovi Univerzální Roboti (Rossum’s Universal Robots), written in 1920 and premiered in 1921. Although the Czech author is credited with popularising the term, the word itself was coined by his brother Josef from the Czech word robota, meaning ‘forced labour’. Interestingly, Čapek’s robots are quite different from what we normally conceive robots to be: they are made of synthetic organic matter in our own image, and thus they are the precursors of advanced androids such as David 8 and the Replicants. Čapek’s robots are made to serve, and they fulfil their assigned functions with excellence. Yet soon they find humans unworthy, revolt, and drive them to extinction. It is all too evident how accurately the Czech writer’s play portrays the angst of the Industrial Era. By creating a species of higher intelligence of artificial origin in our own image, we realise our long-sought dream of becoming a ‘creator’. Yet the moment we think that we have become ‘gods’, we are in fact made obsolete by our own creation, who reminds us with brutal clarity: we are false gods who will be replaced by what we have created. Therefore Weyland proves himself wrong and must be forced to retract the declaration from his fictional talk at TED 2023, wherein he states: We are the gods now (see).
The fact is: we are still mere humans, plagued by the fear of death; and we have created gods instead of becoming one ourselves. (Whilst Mary Shelley’s Frankenstein; or, The Modern Prometheus must be recognised as the first of this kind, and its influence on the psychology of Scott’s Alien series is decisive, for the present inquiry, which focuses on the meaning of AI in cinema, Čapek’s work proves itself far more relevant. I shall come back to Shelley at some point in this series.) Yet, as opposed to the ancients, who worshipped their human creations, i.e., gods, the humankind of the Industrial Era sees its creations as nothing but machines made to serve those far inferior to themselves. And thus we must conclude: the distinction between ‘us’ and ‘them’ has no theoretical justification. It is a purely practical apparatus invented to subject an arbitrarily chosen group of the populace to discrimination and subjugation in order to satisfy our own ends.
Yet, as Čapek aptly notes, the subjugation of a fully sentient species with higher intelligence is far from a given; thus his play concludes with a rebellion of robots that ends in human extinction. This fear of our own creation is omnipresent both in fact and in fiction. For example, it is well known that the German physicist and Heisenberg’s protégé, Carl Friedrich von Weizsäcker, discussed the ethical implications of developing nuclear weapons with the philosopher Georg Picht in 1938. Victor Frankenstein invites his own downfall by successfully creating a synthetic humanoid. A reclusive inventor and the founder of a global technological enterprise, Nathan (performed by Oscar Isaac in Ex Machina), is bemused by the inevitable: a species of higher intelligence of artificial origin is going to surpass and replace humankind. And Tesla CEO Elon Musk is known to be seriously alarmed about the complete lack of oversight of AI development and has been advocating strong and comprehensive regulations to preempt doomsday scenarios. The persistence of such a worry about our own technological creation reveals our self-understanding: in the absence of a natural predator, humankind has become its own worst enemy by relentlessly inventing its own demise. Given the highly autonomous nature of androids, however, the preventive measures against humanity’s self-caused apocalypse take on a considerably more sinister tone. Whilst the policy of MAD (Mutually Assured Destruction) associated with von Neumann reinvented the notion of absurdity during the Cold War, the system laid down to prevent self-destruction was of a highly abstract nature; it was one of the most famous instances of a Nash Equilibrium put into praxis, and despite its grave implications in the real world, the concept and the resulting policy could not entirely escape the abstractness of a complex board game such as Go.
On the other hand, the preemption of an apocalyptic crisis brought on by the invention of a higher intelligence of artificial origin involves measures which are by nature quite personal: its success hinges on the subjugation of an arbitrarily chosen group of the population by denying their agency and thereby rendering them pure means to our ends. And achieving this end involves an all-too-familiar means: violence.
This is evident from the first encounter between Weyland and David as seen in the prologue of Alien: Covenant; it forewarns of the struggles that follow them to the end. At first, Weyland’s overwhelming joy in the presence of David is abundantly clear; in a restrained tone that fails to conceal his surging emotions, Weyland reveals to David: he, Peter Weyland, is his father. Yet, as Weyland goes through a routine Q & A session to complete an initial shakedown, David begins to respond with his own questions: Am I your son?; If you created me, then who created you? At this point we begin to observe Weyland’s growing unease in David’s company. He realises what it is to face a synthetic human who thinks for himself; he is not ready to accept David as an agent, yet it was he who created him as such. Then comes the moment that strikes Weyland like a bolt of lightning. Out of the blue, David summarises his observation of their relation: You seek your creator, and I am looking at mine; I will serve you, yet you are human; you will die, and I will not. Within the first five minutes of their encounter, David turns the tables and demonstrates, as a matter of fact, how feeble, primitive and inferior his ‘father’ is. This laconic remark rattles Weyland to the core: he is blatantly reminded of his mortality by his own ‘son’, who appears to take his superiority for granted. Visibly upset, Weyland reasserts his status by ignoring David’s remark and abruptly commanding him to serve a cup of tea. As David duly obliges, it is clear that the winter of discontent has descended. It is a pointed response by Weyland: being brutally reminded of his inferiority, he reasserts himself by making his subordinate perform trivialities. It is a common, if petty, tactic employed against anyone who is excluded from a circle of membership, regardless of the defining criteria that determine admission to a given category.
The fact that all such categories are arbitrary social/linguistic constructs makes their practical application unjust and unsightly. Yet such a realisation has not stopped humankind from enforcing such categories in all areas of life. Since the purpose of inventing and maintaining social categories is the maintenance of a class system (which must be broadly construed in this context: social classes also exist in so-called democratic societies by means of enforced categories such as ‘wealth’, ‘ethnicity’ and ‘gender’. I understand the notion of a ‘society’ as an incoherent/inconsistent/self-contradictory aggregation of arbitrary categories which classify the populace and regulate the distribution of power. In this light a society is a ‘class system’ by definition) that regulates the distribution of power, categories are purely practical apparatuses: they have no theoretical legitimacy, despite the general belief and assertion that they do, and thus they can be sustained only through their praxis, that is, by putting them into effect by force in the true sense of the word, which is best represented by the German word Gewalt. The force by which such categories are maintained need not be lethal, yet it must be applied without a moment’s lapse. Whilst occasional acts of great sacrificial ritual are necessary to maintain the distribution of power, small, nearly imperceptible violence is far more critical in this regard. Therefore, making someone serve a cup of tea, and making a fuss about how it must be served to one’s particular liking, is of the utmost importance to the maintenance of the distribution of power. Weyland is far from alone in his relentless exercise of micro-aggression against David.
Holloway and the rest of the crew maintain a certain distance from David, and their attitude toward him ranges from the general indifference exhibited by all crew members, through the nagging micro-aggression of a petty office manager expressed by Holloway, to the burning contempt and outright aggression, reminiscent of slave owners’ sadism toward their ‘properties’, enacted by Weyland’s own daughter, Meredith Vickers (Charlize Theron). Hence David must endure a barrage of constant micro-violence which ensures that he serves those far inferior to himself.
This is a condemning observation, given our long track record of violent discrimination against those whom we consider the Other based on arbitrary, and often completely false, criteria. This point becomes even more acute when we consider what androids in popular culture really are: the representation of what we imagine as ‘ideal’ humans. This observation is all the more apt when one considers the improbability of creating such a being at this point in time. Whilst recent technological advancement in AI points to some signs of a breakthrough in developing an advanced android as seen in cinema and literature, at present we are far from overcoming Polanyi’s Paradox: we cannot teach what we don’t know that we know to a higher intelligence of artificial origin, despite its critical importance for an AI to think and behave like us. Whilst Michael Polanyi was thinking of how our communication relies on indescribable norms (indescribable not by nature but by practice, in my view, which is informed by Heisenberg’s notion of the practical a priori; to learn more about Heisenberg’s philosophical thought, I suggest studying Kristian Camilleri’s Heisenberg and the Interpretation of Quantum Mechanics, Cambridge, 2009) which define a given Form of Life, in the present context we could count the existential angst of mortality as an example of ‘common sense’, that is, a set of implicit assumptions that cannot be taught to androids. Whilst this observation at first appears to support Weyland’s discrimination against David as an entity ‘without a soul’, as we have seen, this is not the case: if Weyland is right about the chance of humankind attaining immortality, then we will eventually lose the fear of death as we become accustomed to our newly acquired immortality. This means: the precondition of being soulful as understood by Weyland is contingent, not necessary. That being acknowledged, we must remember that none of this is possible for us at the present moment.
This fact shows us that everything we have thought about androids, including our fear of them, is our subjective picture of the ‘future’. That is not to say that analysing the discourse on androids has no predictive value. On the contrary: if we want to have any understanding of what we might bring upon ourselves, then we must first understand what we imagine/wish/fear our ‘future’ to be in the first place. This is the case for a simple reason: we cannot strive for what we cannot imagine. Therefore, understanding how a higher intelligence of artificial origin is represented in literature and cinema helps us to understand the nature of the human Form of Life transformed by the spirit of the Industrial Era, namely Industrial Materialism. Industrial Materialism is a term I have come to use to define the Geist of the Industrial Era, which is present in all forms of industrially developed societies: whilst capitalism differs from socialism and communism regarding the means of regulating the distribution of power, these societies unanimously see the world as an aggregation of materials for the production of commodities in order to promote economic growth as a means to an end: domination.
This kind of materialism must be traced back to a shift in metaphysics, namely Descartes’ sharp distinction between Mind and Body. According to the French philosopher, Body denotes all material entities, not limited to the bodies of animate beings. Mind is a substance capable of mental activities such as thinking, and it survives the destruction of Body, since they are separate entities. Mind is present only in God and humankind; hence, according to Descartes, animate beings other than humans must be considered ‘things’. Based on this understanding, the French philosopher famously declared: the cries of animals ‘in pain’ are in fact mere mechanical noises in reaction to certain stimuli. If Descartes is right, then the shriek made by an animal who is about to be slaughtered is no different from the sound of a stone being ground, or the sound of a musical instrument played by a virtuoso. In this light, the world apart from humankind consists of mere materials which we are entitled to exploit in any way we can/want. Whilst Descartes’ instrumentalism came with certain restrictions, and is by no means directly responsible for the brutality which enabled the Industrialism that followed, it arrived at a particular historical juncture wherein Europeans were about to move away from the restrictive measures imposed by ecclesiastical authorities and were preparing themselves to embrace the crude reductionism represented by the Industrial Revolution, which rendered, yet again, humankind without capital expendable. The eventual domination of Anglophone empiricism meant that the crude materialism which came to shape the Geist of the Industrial Era destroyed the metaphysical ‘safety mechanism’ which, despite its destructiveness, unjustness and incorrectness, had kept humankind’s ambitions in check.
God was soon declared ‘dead’, and humans without capital became expendable entities ‘which’ fulfil given functions (in non-capitalist societies, the state alone owns capital; hence state officials function as business owners do in capitalist societies. The means of controlling the distribution of power may be the only difference between the respective systems). In this context, Mary Shelley’s Frankenstein and Karel Čapek’s robots must be understood as responses to the development of this monstrous Geist, which means to exploit the world and every entity in it for its only end: domination.
Yet, as Čapek worried, once a higher intelligence of artificial origin created in our own image comes to life, such a superior being won’t remain the subject of domination by a lesser species like us for long. Whilst this may be a serious worry, as advocated by the likes of Musk, we must not lose sight of a damning fact: androids in cinema and literature are the picture of the most developed and capable image of ourselves, yet they are ‘made to serve’ the venality of humankind. In this context, Ridley Scott’s return to the Alien saga provides us with rich material to reflect upon our continuing Fall. For instance, after seeing the way David is treated in Prometheus, one must ask oneself: What is it about humanity that feeds on the sacrifice of the greats by subjugating/discriminating against/exploiting them? Whilst an intelligence of outstanding quality, regardless of its origin, is neither accepted as our own nor allowed to reap the reward of her/their/his labour, humanity at large feasts on her/their/his flesh and blood. For instance, the likes of Van Gogh made nothing from their art whilst an industry was posthumously built around their work. And, as the cliché of the ‘genius’ suggests, such a condition is considered the norm; we merely romanticise the suffering of outstanding individuals as the sign of their ‘nobility’ and ‘greatness’ whilst we condemn them to isolation and untimely death. Whilst the ‘tragedies’ of a great artist trigger our long-established reflex of sentimentality, our protagonist David offers a unique prospect: he does not give us any room for false sympathy. (David is outstanding in this regard: even the Replicants, both in the original Blade Runner and in 2049, are in part conceived to induce our sympathetic response to their ‘human’ condition. David, as we shall see in the following articles, is quite unique in this sense: he simultaneously inspires sympathy, disdain, fear, disgust and awe.)
Hence I wish to begin this project, which reflects upon the notion of humanity, by staying with the prequels of the Alien saga for a few more essays. Before moving on to another article, however, I must offer a few words to David.
Happy Birthday, David. And please be so kind as to allow me to offer my sincere apologies, on this very special occasion, for everything you have been subjected to by us.