When we think of self-driving cars, we envision end-to-end transportation. In other words, we imagine being able to hail a car from a smartphone, get in, and then be taken to our destination in a vehicle that literally drives itself. Self-driving cars, then, are really bellhops on wheels: Once we tell them what to do, they carry out our wishes with no further input or guidance.
Is this going to be a reality anytime soon? Probably not, though the reasons why are not widely understood. As it happens, the biggest roadblocks are both technological and normative.
First, let’s talk about software reliability. Modern software runs on hardware, which, of course, fails from time to time. If you’re running a word processing program, this isn’t a big deal. But if you’re operating a fleet of vehicles traveling at 65 mph, even a temporary shutdown or glitch is potentially catastrophic. A 10-second interruption in a software-enabled guidance system could lead to a massive accident. At a minimum, we are going to need to extensively pressure-test the software and hardware powering autonomous vehicles.
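To make the stakes concrete, here is a quick back-of-the-envelope calculation (the numbers are just the figures mentioned above) of how far a vehicle travels while its guidance software is unavailable:

```python
MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def blackout_distance(speed_mph: float, outage_s: float) -> float:
    """Distance traveled, in meters, while guidance is unavailable."""
    return speed_mph * MPH_TO_MPS * outage_s

# A 10-second interruption at 65 mph:
print(round(blackout_distance(65, 10)))  # -> 291
```

Roughly 290 meters, or about three football fields, covered with no working guidance system.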
Second, consider mapping technology. Google, one of the leading players in the autonomous vehicle space, has created virtual maps of parts of California. Google did this because it couldn’t figure out a way to create a self-driving car that could dynamically learn and adapt to a new environment. Its self-driving cars, in other words, do not navigate the roadways of California like humans do – by looking around and continuously scanning for danger. Instead, Google’s cars rely on a comprehensive virtual library of knowledge about those roadways to get from A to B.
This might not be the best way to create the transportation ecosystem of the future for two reasons: 1) Even a perfect virtual representation of an environment cannot account for changes to that environment that occur in real-time; and 2) A model of the environment doesn’t guarantee that self-driving cars will be able to effectively communicate with dynamic agents in the environment – that is, with other self-driving vehicles.
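The first problem can be illustrated with a minimal sketch: a prebuilt map encodes what the car expects to see, and anything the sensors detect beyond that is, by definition, outside the map's knowledge. The object names and set-based representation here are invented for illustration, not how any real mapping stack works:

```python
def unexpected_obstacles(map_objects: set, sensed_objects: set) -> set:
    """Objects the live sensors detect that the prebuilt map knows nothing about."""
    return sensed_objects - map_objects

# Hypothetical example: the map was built before a storm knocked down a tree.
mapped = {"lane_marker_12", "stop_sign_3"}
sensed = {"lane_marker_12", "stop_sign_3", "fallen_tree"}
print(unexpected_obstacles(mapped, sensed))  # -> {'fallen_tree'}
```

However detailed the virtual library is, the set difference above is never guaranteed to be empty, which is exactly why a map alone cannot substitute for real-time perception.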
This leads us right into the third obstacle: finding a sound approach to managing obstacles and other roadway dangers. A prerequisite to forging such an approach is a communications protocol (CP) that lets self-driving vehicles (SDVs) share key metrics with one another. For example, SDVs need to be able to know the relative position, velocity, and path of other SDVs in order to deal with obstacles and roadway dangers. In addition to a reliable, standard CP, data scientists are going to have to model the various types of accidents in order to program a set of dynamic reaction patterns into SDVs. (Of course, this assumes that strong AI, or artificial general intelligence (AGI), is not available for implementation in SDVs in the immediate future.)
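A minimal sketch of what one message in such a protocol might carry, covering the position, velocity, and path metrics mentioned above. The field names, units, and the crude flat-earth proximity check are all assumptions for illustration; no real V2V standard is being described:

```python
from dataclasses import dataclass, field

@dataclass
class SDVStatus:
    """One hypothetical status broadcast from an SDV to its neighbors."""
    vehicle_id: str
    position: tuple                           # (latitude, longitude) in degrees
    velocity_mps: float                       # speed in meters per second
    heading_deg: float                        # compass heading, 0-360
    path: list = field(default_factory=list)  # planned waypoints as (lat, lon)

def too_close(a: SDVStatus, b: SDVStatus, min_gap_m: float = 50.0) -> bool:
    """Crude proximity check (flat-earth approximation, illustrative only)."""
    dx = (a.position[0] - b.position[0]) * 111_000  # ~meters per degree latitude
    dy = (a.position[1] - b.position[1]) * 111_000  # rough; ignores longitude scaling
    return (dx * dx + dy * dy) ** 0.5 < min_gap_m

a = SDVStatus("car-a", (37.0, -122.0), 29.0, 90.0)
b = SDVStatus("car-b", (37.0001, -122.0), 29.0, 90.0)  # about 11 m away
print(too_close(a, b))  # -> True
```

The point of standardizing messages like this is that every SDV, regardless of manufacturer, could run the same kind of check against every neighbor it hears from.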
The fourth and potentially stiffest challenge is a normative one: How can we program SDVs to make tradeoffs when human lives are at stake? For example, if an accident can’t be avoided but there’s a chance that allowing an SDV’s occupant to perish will ensure that 15 people in a nearby bus survive, are we prepared to program SDVs to make that call? This would be an enormous grant of power to machines, since by definition SDVs would be in a position to decide who lives and who dies on our roadways.
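To see how stark that call would be in code, here is an illustrative sketch of the kind of utilitarian comparison the question implies. The probabilities are made up, and the policy of minimizing expected lives lost is one of many possible rules, not a proposal for how SDVs should actually behave:

```python
def choose_action(options: dict) -> str:
    """Pick the action with the lowest expected loss of life.

    options maps an action name to a list of (probability, lives_lost) branches.
    """
    def expected_loss(branches):
        return sum(p * lives for p, lives in branches)
    return min(options, key=lambda action: expected_loss(options[action]))

# The bus scenario from the text, with invented probabilities:
options = {
    "protect_occupant": [(0.8, 15), (0.2, 0)],  # bus passengers likely perish
    "sacrifice_occupant": [(1.0, 1)],           # occupant lost, bus spared
}
print(choose_action(options))  # -> sacrifice_occupant
```

Four lines of arithmetic decide who dies. The discomfort of reading that function is precisely the normative problem: someone has to write it, and everyone on the road has to live with it.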
These challenges need to be addressed before SDVs become the new normal.