Who Makes the Rules?
February 1, 2019
Most people have heard of the “trolley problem,” a which-choice-is-preferable ethical exercise testing different outcomes for an out-of-control trolley car and the damage it might do. But let’s say the trolley is, instead, a driverless car, incentivized by its software to complete routes as quickly as possible — that is, reasonably possible. Should it be allowed to exceed the speed limit (as you, a human, often will)? When facing a rushed left-hand turn against oncoming traffic, how much risk is too much? If facing an imminent collision, should it plow into the crosswise vehicle ahead or hazard a diversion onto the sidewalk? Who makes these decisions and who is held accountable? The original software engineer, a remote human overseer, or the government agency that allowed the car on the road in the first place?
“We shouldn’t just think about how we make decisions,” says philosopher Johannes Himmelreich. “We should teach machines to make better decisions.” Because, he adds, “We are not actually very good drivers.”
Himmelreich, a technology ethicist, will join the Maxwell faculty this fall, becoming part of a body of Syracuse University researchers studying not only driverless cars, but also robots, unmanned aircraft, and other “autonomous systems” — devices that operate without direct human guidance, often with the ability to respond to unanticipated scenarios.
In early May, Chancellor Kent Syverud announced the formation of the Autonomous Systems Policy Institute at the University. ASPI will draw on faculty from across campus to consider the engineering and design of such systems, how we control and monitor them, and the impacts they are apt to have on society.
According to Maxwell Dean David Van Slyke, there are many universities studying how to build autonomous systems (AS), but none so comprehensively addressing the policy and impact questions as ASPI will. “At Syracuse,” he says, “we have engineers and computer scientists who also acknowledge that any form of technological development has to be informed by — and inform — how we think about policy, law, and governing these systems.”
The category is vast and varied. AS technology has the potential to change the nature of work and travel, how cities are organized, how healthcare is delivered. There are privacy and public-safety and environmental implications. There are considerations of economic equality and access. “Autonomous systems touch on almost every policy area we can imagine,” Van Slyke concludes.
“What these technologies have in common,” says Maxwell geographer Jamie Winders, “is a wide-open policy landscape. The rules and regulations and norms that will govern their use are still coming into being.”
Winders will direct the new institute, employing her proven knack for convening researchers around interdisciplinary ideas. Syracuse University’s great advantage, she says, is its interdisciplinary reach. “SU is taking a very broad approach to autonomous systems.” The University draws on strengths in design, public communications, business, public health, engineering and science, the humanities, and public policy. Plus, social scientists will look at “how this emerging, disruptive technology will change daily life,” she says.
Over six months, Winders held group meetings with 90-plus faculty members across SU, and similar meetings with nearly 100 government, nonprofit, and industry representatives outside SU. She gauged interest, identified research opportunities, and began to tease out curricular ideas.
A sampling of interested professors illustrates the topic’s breadth. At the University, industrial design professor Louise Manfredi studies how human-centered product design might serve AS. The iSchool’s Ingrid Erickson researches the “rhetoric of safety” as it relates to AS, and teaches a course on how A.I. and algorithmic prediction will affect information-science careers. Architect Bess Krietemeyer studies automated systems and building design. At Maxwell, PA professor Tina Nabatchi has helped conduct community workshops on concerns about drones. Geographer Jane Read co-teaches a course, with Earth scientist Christa Kelleher, on drone use in research, its social impact, and other legal and ethical issues.
For University students, ASPI will offer almost limitless possibilities. Winders imagines new courses and interdisciplinary majors, graduate and undergraduate research, internships, and possible certificates.
Even ASPI’s pre-launch phase had impact for two students. Lily Datz, a freshman from Skaneateles, N.Y., prepared an audit of autonomous-related work being done at other universities, and she assembled an AS bibliography. “There’s nothing as cross-disciplinary at other universities,” Datz says. “Professor Winders really wants to involve faculty from every single school on campus.”
MPA/cybersecurity grad student Matthew Mittelsteadt, from Minneapolis, prepared white papers on topics in AS — current-state and emerging-idea briefings for stakeholders now joining the ASPI fold. One, for example, covered armed drones: human-guided today, but perhaps someday able to “pull the trigger themselves.” Though the United Nations is attempting to forge policy, Mittelsteadt found no accepted international standards for the use of armed drones. “It’s the Wild West as far as I can tell,” he says.

At a day-long conference where ASPI was formally introduced, Chancellor Syverud, too, described an array of wide-open questions related to privacy, social equity, legality, and cultural norms. “The answers are going to come not just from one narrow part of a university or a company,” he said. “They’re going to come from scholars, advocates, policymakers, industry experts of all disciplines working together.”
“Our ecosystems, our economies, our cities, our social systems are not going to adjust to these technologies one at a time,” he later said. “. . . The institute really needs to be broadminded in what it’s looking at and ready to be nimble and adaptable.”
The great advantage of ASPI, says Van Slyke, is foresight. Typically, policy lags behind new technology, arrives only in the wake of a problem, and is delivered top-down and heavy-handed. But with ASPI, “we can begin to anticipate how autonomous systems might play out,” he says, “before they actually do.”
By Dana Cooke
This article appeared in the spring 2019 print edition of Maxwell Perspective © Maxwell School of Syracuse University.