Significantly, the transformation is being shepherded by a group of elite technologists.
Several years ago, Jerry Kaplan, a Silicon Valley veteran who began his career as a Stanford artificial intelligence researcher and then became one of those who walked away from the field during the 1980s, warned a group of Stanford computer scientists and graduate student researchers: “Your actions today, right here in the Artificial Intelligence Lab, as embodied in the systems you create, may determine how society deals with this issue.” The imminent arrival of the next generation of AI poses a crucial ethical challenge, he contended: “We’re in danger of incubating robotic life at the expense of our own life.” 1 The dichotomy he sketched out for the researchers was between intelligent machines that displace humans and human-centered computing systems that extend human capabilities.
Like many technologists in Silicon Valley, Kaplan believes we are on the brink of creating an entire economy that runs largely without human intervention. That may sound apocalyptic, but the future Kaplan described will almost certainly arrive. His deeper point was that today’s technology acceleration isn’t unfolding blindly: the engineers who are designing our future are each, individually, making choices.
On an abandoned military base in the California desert during the fall of 2007, a short, heavyset man holding a checkered flag stepped out onto a dusty makeshift racing track and waved it energetically as a Chevrolet Tahoe SUV glided past at a leisurely pace. The flag waver was Tony Tether, the director of DARPA.
There was no driver behind the wheel of the vehicle, which sported a large GM decal. Closer examination revealed no passengers in the car, and none of the other cars in the “race” had drivers or passengers either. As the cars glided seemingly endlessly through a makeshift town previously used to train military troops in urban combat, the event didn’t seem like a race at all. It felt more like an afternoon of stop-and-go Sunday traffic in a science-fiction movie like Blade Runner.
Indeed, by almost any standard it was an odd event. The DARPA Urban Challenge pitted teams of roboticists, artificial intelligence researchers, students, automotive engineers, and software hackers against each other in an effort to design and build robot vehicles capable of driving autonomously in urban traffic. The event was the third in a series of contests that Tether organized. At the time, military technology largely amplified a soldier’s killing power rather than replacing the soldier. Robotic military planes were still flown remotely by humans, and in some cases by extraordinarily large groups of soldiers. A report by the Defense Science Board in 2012 noted that for many military operations it might take a team of several hundred personnel to fly a single drone mission. 2
Unmanned ground vehicles were a more complicated challenge. The problem with ground vehicles, as one DARPA manager would put it, was that “the ground was hard,” meaning “hard to drive on” rather than “hard” as in “rock.” Following a road is challenging enough, but robot car designers are confronted with an endless array of special cases: driving at night, driving into the sun, driving in rain, driving on ice; the list goes on indefinitely.
Consider the problem of designing a machine that knows how to react to something as simple as a plastic bag lying in a highway lane. Is the bag hard, or is it soft? Will it damage the vehicle? In a war zone, it might be an improvised explosive device. Humans can see and react to such challenges seemingly without effort, at least when driving at low speed with good visibility. For AI researchers, however, solving that problem remains a holy grail of computer vision. It became one of myriad similar challenges that DARPA set out to solve in creating the autonomous vehicle Grand Challenge.