If your Roomba chews up the fringe on your favorite Oriental rug, is it OK to punch it? If an algorithm recommends a movie to you, and the movie turns out to be crapola, to whom do you direct your online flames? If a drone independently computes a tactical course correction and flies into the wrong airspace, igniting international tensions, would war be averted if our rep stood in the UN assembly and assured everyone, “The drone gravely regrets its error”?
Machine ethics, robot rights — these topics keep popping up in my world. In the last year, I’ve attended three talks addressing various shades of the subject.
A year ago in Chicago, David Gunkel summarized the issues from his book, The Machine Question: Critical Perspectives on AI, Robots, and Ethics, opening with the root question: “When will we hold a robot, even an algorithm, responsible for its own actions?” He located the tradition in Heidegger’s instrumental theory of technology — a tool is just a tool, and its human user alone is responsible for the actions completed with it — extending through more recent work by Andrew Feenberg, Deborah G. Johnson, and J. Storrs Hall that keeps morality firmly yoked to human shoulders: tech is merely the medium through which human intention travels. “This theory has served us well for millennia,” Gunkel declared, “but now this is over.” He traced his argument for rethinking morality via machines through Marx’s suggestion that the machine replaces the human, Langdon Winner’s question (“Do Artifacts Have Politics?”), and Rodney Brooks’ more down-to-earth version of Kurzweilian speculations about the looming future of real, transformative AI. Look to algorithms today, Gunkel said: the “flash crash” caused by Wall Street trading code (semi-autonomous, learning systems), those eerily spot-on Netflix suggestions (who needs critics? — heck, we’ve even got machine-written journalism and machine-played music), the systems actually processing credit applications despite the human intermediary you may speak to on the phone. Plenty of systems already operate beyond direct human oversight and control. Gunkel’s tipping point for when a technology transforms from a dumb tool into a machine with rights: “Cognition, when the machine thinks, and suffering, when the machine can experience pain.” It’s social-relational: we’ll grant rights to the techno-creature when we decide it’s worthy, as we have with every other “part that has no part” that’s fought its way through politics to become a rights-bearing subject.
Some, of course, have already proposed slavery 2.0, arguing forcefully that “robots should be built, marketed and considered legally as slaves, not companion peers.” Others, thankfully, have begun to think a bit less stridently about these issues. Johnson, from the University of Virginia, spoke in my Science Studies program last month, delivering a presentation titled “Anticipatory Ethics, Artificial Agents, and Responsibility” (see this paper). She defined artificial agents as “computational devices that perform tasks on behalf of humans and do so without immediate, direct human control or intervention” — so bots, code, and combinations of both (e.g., drones, self-driving cars). Again the question arose: “Who will be responsible for these entities if they do something harmful or illegal?” Johnson attempted to peer into this “responsibility gap” and found that it separates the people thinking about the ethics of the machines from the people designing them; ethics is not yet instrumental to the design process. She called for “a better understanding of how ethical concepts and principles shape and are shaped by technology” and for designers to “figure out how to best bring ethics more intentionally into engagement with future technologies.” I asked where in the development process these ethics should be concentrated in order to be most effective; Johnson suggested machines could be designed from the outset to be easy to control or shut down, or to reveal information to human handlers or not — and that those decisions determine, later, how much humans can be held responsible for the machine’s actions.
Jennifer Robertson from the University of Michigan spoke earlier this year at UCSD about her upcoming book Robosapiens Japanicus and these same questions of the ethical treatment of machines. Focusing primarily on Japan — which possesses more than half of the world’s robots and uses them throughout society, not primarily for military applications as is the case in the United States — Robertson examined the discourse there about utilizing robots in the home as both houseworkers and caregivers. Intelligence, she claimed, is embodied. It can’t exist within code alone; it emerges only in “dynamic coupling with its environment.” Robertson’s argument is tied to Japanese culture, with its longer history of grappling with the mingling of mind and material. The legacy of the Shinto religion, for instance, which holds that a living spirit inhabits all material things, eases discussion of machine ethics in Japan — making it easier to get past human exceptionalism.
Robertson pointed out, too, that these questions are not new, in Japan or elsewhere. I recently read Martin Heidegger’s attempt to ferret out humanity’s burgeoning relationship with tech. “The Question Concerning Technology,” which he posed in 1953, was one of orientation — not hand-wringing about tech dominance or determinism, but serious inquiry into our social relation to our technologies. How do we view technology? As more efficient means to productive ends, or as engaging reflections and expansions of ourselves? How we answer — and the need to do so becomes “all the more urgent the more technology threatens to slip from human control,” especially now that a computer has reportedly just passed the Turing test — is a matter of mindset. Heidegger saw art as the way to transcend the control-tech mindset, and we’ve certainly been working out our excitement and paranoia about AI within pop culture’s robot-laden sci-fi for several decades now. We must be making progress toward the embrace-tech way of thinking (what Heidegger called “stewardship”), because increasingly I’ve been running into questions not only about the ethics of how humans utilize technology but about the ethics of how to treat the technology itself. I’ve worked with several researchers who study ethics via the Internet, but what about the ethics of the Internet? Can an algorithm cause harm? Can we abuse that Roomba? When will the first protest occur for robot rights?
I'm Thomas Conner, Ph.D. in Communication (Science Studies) and culture journalist.