This Spotlight looks at films that grapple with A.I. ethics and imagines the possibilities of a moral universe in a digitized future. Fictional robots have always had an odd relationship to ethics; historically, cinema has concerned itself with robot ethics only to the extent that it helps us understand a robot's "humanity". Robots behave unethically, it seems, because they are soulless, will-less automatons. Maria's robot doppelganger in Metropolis (1927), the seductive and salacious creation of the evil inventor Rotwang, is a mindless tool that sows chaos wherever she lands. On the other side of the 20th century, the eponymous original Terminator (1984) was an unrepentant killing machine. Conversely, those robots, cyborgs, and A.I.s that do behave ethically appear to do so only after they have developed some sort of quintessential humanity (Blade Runner, A.I., Wall-E).
This motif presupposes, however, that human thought is a necessary requirement for a systematic ethics. Given that ethics and morality have always been understood as inherently human traits, it seems impossible to imagine an A.I. ethics except as a product of human programming. But, as artificial intelligence takes charge of its own development, is there a space for an A.I. ethics that could exist outside of the human imagination? Our understanding of ethics may shift in a future with artificial intelligence. While literature generally imagines that an A.I. ethic would be utilitarian (Isaac Asimov's I, Robot, for example), an organically formed A.I. ethics could look quite different: machines could develop Kantian ethics, religious moralism, or even hedonism.
What is our ethical responsibility to the artificial intelligences that we create, and what is theirs to us? Despite its name, the documentary The Human Robot (Rob Van Hattum, 2015) doesn't pitch the idea that robots have a unique sense of humanity; rather, it asks us to redefine the idea of an ethical being in the modern world. In examining the place of robots in Japan, the film suggests that the West misunderstands robots' souls. The soul, in this instance, doesn't stand for the autonomous human self but rather the ability to engage in a shared, beneficial interrelationality. In other words, in Van Hattum's film, A.I.s are not potentially dangerous beings that want to rise up and assert their own supremacy (an idea that has dominated Western fiction) but beings that recognize human-robot interdependence. In so doing, the film imagines an ethical infrastructure based on mutual need.
Robots in Japan are not slaves but companions (The Human Robot)
While The Awareness (Henry Dunham, 2014), Android's Dream (Ion de Sosa, 2014), and Melancholic Drone (Igor Simic, 2016) all grapple with those traditional A.I. fears that have long plagued Western society, the question of ethics has undergone a real shift. Implicitly, The Awareness asks the question: "What ethical responsibility does A.I. have toward humans?" In the film, humans struggle to understand their place in the A.I.'s ethical viewpoint and its need for self-preservation. And, if and when things take a turn for the worse for humanity, it may be because humans fail to see the A.I.'s ethical responsibility to itself.
What is an A.I.'s responsibility toward humans? (The Awareness)
Melancholic Drone, by contrast, mourns a lack of ethical responsibility toward artificial beings. The sad drone that flies through Belgrade laments its banal job and meaningless existence. On its last flight, it sadly looks on all the places and people that it has surveyed in the past, understanding that it will be forever removed from them. However, the film also makes a subtle argument concerning the ethics of surveillance: surveillance damages the one who surveys as well as the surveyed. Much as in The Conversation or The Lives of Others, the pain of surveillance goes both ways. The fact that the observer, in this case, is an artificial being doesn't free its human operators of that responsibility.
More apocalyptic in nature, Android’s Dream imagines the roles of ethics and morality at the end of the world. Loosely referencing Philip K. Dick’s famous novel, the film follows a cop as he eliminates certain stragglers in a dying world. The short feature is episodic in nature, and its focus on a myriad of others (aliens or androids) suggests room for ethical relationships between different species, one outside of ownership or tribalism.
Android's Dream (Sueñan los Androides)
Finally, The Intelligence Explosion makes an even more radical claim: that humans are incapable of ethics, and matters of moral philosophy should be left to more sophisticated beings. In this playful short written for The Guardian, the creators of an advanced A.I. bring in a professor of ethics to tell them how to give their A.I. an ethical sensibility that would make it safe for humans. The ethics professor scoffs at that possibility, leading her and the A.I. into a feisty debate over the possibility of an ethical structure in a complicated world. The film ends by asking not if A.I.s can have ethics, but if humans can. With our finite intelligence, it may be more efficient to cede those philosophical questions to beings with boundless intellectual potential and radical evolutionary development.
This A.I. could transform human philosophy. (The Intelligence Explosion)
About the author
Kirsten Strayer is a writer, curator, and film scholar who has published in academic journals, anthologies, and pop culture magazines. Her recent anthology, Transnational Horror Across Visual Media: Fragmented Bodies, was published in 2014 by Routledge.