Reading with Scientists: Isaac Asimov and Driverless Cars

by Abigail Droge
Published November 13, 2018

Which is scarier: a technology that follows human orders or one that acts for itself? After bringing to a close our encounters with Frankenstein’s rebellious Creature, “Reading with Scientists” turned to Isaac Asimov’s classic short story “Runaround” to consider the opposite extreme: a creation that does what it’s told. “Runaround,” originally published in 1942 and collected in I, Robot in 1950, is famous for its depiction of the Three Laws of Robotics, a set of pre-programmed rules which each robot must obey:

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (Asimov 44-45)

As opposed to Frankenstein, where each page brings a new act of destruction upon the scientist who has made a Creature beyond his control, the plot of “Runaround” is fueled by a robot who listens too well. On the planet Mercury, protagonists Donovan and Powell have sent their trusty robot Speedy to collect selenium, a resource necessary to protect their station from the heat of the sun. An unanticipated danger near the selenium pool causes Speedy to get caught in a feedback loop between Laws Two and Three: obeying his commands and protecting himself. Speedy thus gives Donovan and Powell the “runaround,” circling the selenium pool, unable to complete his task until the equilibrium of the laws is broken. Powell finally endangers his own life to trigger the First Law and prompt Speedy to rescue him, which leads to a neat and happy ending. Along the way, Donovan and Powell are assisted by an older generation of robots programmed with “good, healthy slave complexes” (Asimov 35); the “tiny spark of atomic energy that was a robot’s life” (34) is at the beck and call of human masters.
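For readers who like to think in code, Speedy’s deadlock can be pictured as a toy simulation in which two drives balance exactly. The sketch below is purely illustrative: Asimov specifies no such mechanics, and every function name and number here is invented for the sake of discussion.

```python
# Toy model of Speedy's Law Two / Law Three deadlock. Purely illustrative:
# Asimov gives no such mechanics; all names and numbers are invented.

def law2_pull(order_strength: float) -> float:
    """Drive toward the selenium pool: obey the (casually given) order."""
    return order_strength  # constant: the order never grows more urgent

def law3_push(distance_to_danger: float) -> float:
    """Drive away from the danger, growing as the robot gets closer."""
    return 10.0 / max(distance_to_danger, 0.1)

def speedy_step(distance: float, order_strength: float, human_in_danger: bool) -> str:
    """Decide Speedy's next move under a strict law priority."""
    if human_in_danger:                # the First Law overrides everything
        return "rescue the human"
    toward = law2_pull(order_strength)
    away = law3_push(distance)
    if abs(toward - away) < 0.5:       # the two drives balance:
        return "circle the pool"       # Speedy's "runaround"
    return "advance" if toward > away else "retreat"

# A weakly worded order (strength 2.0) balances the danger response at
# distance 5.0, so Speedy orbits instead of finishing the job.
for d in (20.0, 5.0, 1.0):
    print(d, speedy_step(d, order_strength=2.0, human_in_danger=False))

# Powell's gambit: putting a human in danger breaks the equilibrium.
print(speedy_step(5.0, order_strength=2.0, human_in_danger=True))
```

The point of the toy is Powell’s solution: no tweak to the two competing drives frees Speedy; only invoking the higher-priority First Law does.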

We considered the relationships established between humans and robots throughout the story and mapped a hierarchy of the different robot characters on the board. I asked the class to consider how Frankenstein might have been different if Asimov’s laws had been operable in Shelley’s world. Students commented that the story would have been over before it began and that it would have been comparatively boring. In which story would you rather be a character, then? In “Runaround,” the problem, though dire, is solved within a few pages and everyone can move on optimistically to the next adventure. So, which is better? A robot whose sole wish is not to anger his commanders, or an independent creature who might commit violent crimes but whose story of freedom and passion seems much more compelling, even so? The comparison led us to a number of interesting questions. Does Speedy have consciousness? Maybe: he expresses emotions like fear and embarrassment, and he recites lines from Gilbert and Sullivan. Should a robot be a tool, built to do specific jobs safely and efficiently, or a possibly sentient, conscious being worthy of our respect? Should we be prepared for the latter outcome as artificial intelligence technology progresses, and, if so, what does such preparation look like? When creating technologies, which do we value more: independence or obedience?

In our next class, we brought home the stakes of these questions by pairing Asimov’s “Runaround” with the hot-button issue of driverless cars. We listened to a Radiolab podcast, “Driverless Dilemma,” and read a chapter by Jason Millar from Robot Ethics 2.0, called “Ethics Settings for Autonomous Vehicles.”[1] The basic issue on the table was this: let’s say that a driverless car is in a situation of imminent collision and it must “decide” which life to privilege, the passenger’s or a pedestrian’s. How should a car be programmed to respond to that situation? Such directives would be an “ethics setting”: a rubric by which the car should act, or a mechanism for arriving at a “solution.” Would it matter if the pedestrian were a child? What if the passenger were pregnant? Elderly? An ex-convict?[2] And who would bear responsibility for the decision: the manufacturer who built the car, the engineer who programmed it, the owner who might be prompted to indicate ethical criteria when buying the car, or even the car itself? Such questions mirror our efforts to map responsibility in Frankenstein. Who is responsible for the creature’s murders: Frankenstein the creator or the Creature himself? And would the cars be considered conscious, like the Creature or like Speedy?
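To make the idea of an “ethics setting” concrete, here is a minimal sketch of what such a configurable rubric might look like. Everything in it is hypothetical: neither the podcast nor Millar’s chapter offers an implementation, and the mode names and decision logic are invented for discussion.

```python
# Hypothetical "ethics setting" for an autonomous vehicle, for discussion
# only. The modes and the decision rule are invented, not drawn from
# Millar or the Radiolab episode.
from dataclasses import dataclass

@dataclass
class EthicsSetting:
    mode: str  # "protect_passengers" | "protect_pedestrians" | "minimize_harm"

def decide(setting: EthicsSetting, passengers: int, pedestrians: int) -> str:
    """Return which group the car swerves to protect in an unavoidable crash."""
    if setting.mode == "protect_passengers":
        return "passengers"
    if setting.mode == "protect_pedestrians":
        return "pedestrians"
    # "minimize_harm": a crude utilitarian head count, exactly the kind of
    # rubric whose fairness the class debated (a child? a pregnant passenger?)
    return "passengers" if passengers >= pedestrians else "pedestrians"

print(decide(EthicsSetting("minimize_harm"), passengers=1, pedestrians=3))
```

Even this toy version puts the responsibility question in sharp relief: the manufacturer shipped decide, the owner chose the mode, and the car merely executes both.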

As a class, we reflected on Asimov’s Laws of Robotics as “ethics settings” and asked how they might be edited to pertain to driverless cars. I wrote the Laws on the board and had students discuss potential revisions in small groups. Students then suggested changes, questions, and annotations, which I wrote on the board in differently colored chalk to make our collective edits easier to see. Students first focused on the unspecified “human being(s)” (who may not be injured and who may give orders) in the First and Second Laws. In light of the issue of driverless cars, we might well ask: which human beings? In “Runaround,” Donovan and Powell are lucky enough to be in accord through most of the story: a robot therefore never has to decide which one of them to obey. But what if a driverless car had to choose which of two human beings to injure? Or what if two passengers disagreed about an ethics setting and gave different commands? The Third Law, that a robot must value its own safety, but only after that of humans, also becomes problematic if we consider robots as potential sites of consciousness.
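The students’ worry about conflicting commands can also be put in code. The sketch below is hypothetical, simply making literal the gap they found: the Second Law names no tiebreaker when “human beings” disagree.

```python
# Making literal the ambiguity students found in the Second Law: "obey the
# orders given it by human beings" names no tiebreaker. Hypothetical
# sketch; nothing here comes from Asimov or Millar.

def obey(orders: dict) -> str:
    """orders maps each human to a command, e.g. {"Donovan": "advance"}."""
    commands = set(orders.values())
    if len(commands) == 1:
        return commands.pop()  # Donovan and Powell agree: the easy case
    # Two humans disagree: the Law as written gives no answer.
    raise ValueError(f"Second Law underdetermined among: {commands}")

print(obey({"Donovan": "advance", "Powell": "advance"}))
try:
    obey({"passenger_1": "protect me", "passenger_2": "protect the child"})
except ValueError as err:
    print(err)
```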

Interestingly, the class split roughly 50/50 when asked whether they would themselves want a driverless car. Students commented that, no matter how we answer the myriad questions associated with this new technology, what is most needed is what our class attempted to do: begin an open discussion.

Notes

  1. For UC Santa Barbara students, the issue of driverless cars is quite close to home. The industry is heavily California-based, with a stronghold in Silicon Valley. And the editors of Robot Ethics 2.0 all hail from the Ethics + Emerging Sciences Group at California Polytechnic State University, just up the road.
  2. Ethics settings need not all be so dire; they can also include questions like data privacy for car passengers or whether benefits like the car’s speed should be “designed to reward the wealthy” (Millar 27).

Sources:

Aronczyk, Amanda, and Bethel Habte. “Driverless Dilemma.” Radiolab. WNYC Studios, 26 Sept. 2017.

Asimov, Isaac. “Runaround.” I, Robot. New York: Bantam, 1991 [Doubleday 1950].

Millar, Jason. “Ethics Settings for Autonomous Vehicles.” Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Eds. Patrick Lin, Ryan Jenkins, and Keith Abney. New York: Oxford UP, 2017.


This post is part of a series about the ongoing UC Santa Barbara English course “Reading with Scientists: How to Export Literature.” For context, read more about the motivations and design process behind the course. 

The goal of the Curriculum Lab is to ensure a steady dialogue between research and teaching for the WhatEvery1Says project. For more information, see our webpage and this introductory blog post, and stay tuned for more Curriculum Lab posts throughout the year!