In the hit 1980s TV series <i>Knight Rider</i>, David Hasselhoff’s character, Michael Knight, was a crime fighter aided by his loyal companion: not a white horse, but a sleek, black car. The Knight Industries Two Thousand, or KITT, was based on a Pontiac Firebird Trans Am. The vehicle featured a molecular-bonded shell that couldn’t be damaged, a turbo-boost mode that made it jump and, best of all, a red light in the front going back and forth and making whoosh-whoop noises. Apparently, that was some kind of sensor that allowed the car to drive itself. That, and the sophisticated AI that was capable of not only piloting said vehicle, but also dishing out witty repartee and pin-sharp sarcasm. This was a car that could drive for itself, speak for itself and, most crucially, think for itself.

Fast forward to 2015, and Tesla and SpaceX founder Elon Musk claimed self-driving cars that could go anywhere would be with us within a couple of years. The following year, Anthony Foxx, then US Secretary of Transportation, claimed that by 2021 the use of autonomous vehicles would be widespread and normalised. Also in 2016, ride-sharing company Lyft’s chief executive, John Zimmer, predicted the end of car ownership by 2025; presumably driverless taxis would be at our beck and call instead.

In actuality, private car sales are booming, but not one of those cars can quite drive itself. Lyft sold its autonomous vehicle unit to Toyota after four years of researching and developing driverless cars. Even earlier, the company’s rival Uber sold off its equivalent subsidiary, Advanced Technologies Group, to Aurora in December 2020. After some 30 autonomous car crashes, the final straw was a tragic fatality. Even the new owner, Aurora’s chief executive Chris Urmson, admits that it will take “30 years and possibly longer” to realise the dream of cars that drive themselves. How is this possible, considering we already seem to be so close?
After all, the SAE reckons there are six levels of driving automation, from zero to five, and the hype suggests we’re already at level four. Here’s how it goes: at level one, the car assists the driver with things such as rear-view cameras and brake assist; at level two, it can steer, brake and accelerate at the same time, as with the “autopilot”-style systems on sale today, though your eyes must stay on the road; at level three, the car handles itself in limited conditions, parking itself while you freak out about how close it’s getting to the other vehicles, but you must be ready to take back control; and at level four, no human is needed at all, albeit only within a defined area or set of conditions. Level five is KITT, without the chat, invulnerability, sweeping red LED and the leaps. Indeed, it is the leap from level four to level five that has flummoxed the brightest brains in the business.

About five years ago, I attended a technical presentation by a major German car manufacturer and was told its car would recognise and identify an impressive x-number of shapes as human and therefore not kill them. Excellent. I asked if it would recognise a person lying on the ground, say if they’d tripped and fallen. Awkward silence ensued, followed by the admission that that particular form hadn’t been programmed in, and that the car would assume the shape to be a speed bump. Bad news: it’ll still run you over. Good news: it’ll do it slowly.

And there is the crux of the issue: the so-called “autonomous” drive systems of the present are nothing but a set of preprogrammed scenarios. Despite all the radars, lidars, sensors, GPS data et al, which allow the car to “see”, it still can’t quite comprehend what it’s seeing and extrapolate the appropriate action to take. Is that a dog in the road, or did a kid drop a stuffed toy? Is that van turning left across traffic, or has the driver forgotten to cancel the indicator? Does Michael want to deliberately ram that car off the road, or should the brakes be applied? Yesteryear’s fictional KITT would know; today’s factual AI is clueless.
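That gap between seeing and understanding can be caricatured in a few lines of Python. This is a deliberately simplified, hypothetical sketch, not any manufacturer’s actual code: the shape names, the actions and the lookup table are all invented for illustration. The structural point stands, though: a finite set of programmed scenarios must fall back on a default guess for anything it was never taught, which is exactly how a person lying in the road becomes a “speed bump”.

```python
# Toy sketch (hypothetical, invented for illustration) of "preprogrammed
# scenarios": perception reduces to a lookup table of known shapes, and any
# shape that was never programmed in falls through to a default assumption.

KNOWN_SHAPES = {
    "pedestrian_upright": "emergency_stop",
    "cyclist": "slow_and_give_way",
    "vehicle": "maintain_gap",
    "speed_bump": "slow_to_10_kmh",
}

def decide(shape: str) -> str:
    """Return the preprogrammed action for a detected shape.

    A low, wide obstacle matching no known silhouette is assumed to be a
    speed bump -- the failure mode from the anecdote: the car still runs
    you over, just slowly.
    """
    return KNOWN_SHAPES.get(shape, "slow_to_10_kmh")  # default: treat as bump

print(decide("pedestrian_upright"))  # emergency_stop
print(decide("person_lying_down"))   # slow_to_10_kmh -- bad news, slowly
```

Modern systems replace the literal dictionary with statistical classifiers, but the out-of-distribution problem is the same: a shape far from anything in the training data still gets shoehorned into the nearest known category.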
While AI employs deep-learning algorithms and unimaginable gigabytes of data, there are always going to be unexpected and bizarre situations (you’ll concur if you watch YouTube crash videos) that will befuddle its bytes, potentially panicking its programme and causing damage, injury or death in what was otherwise an avoidable incident. The auto industry has gone as far as it can. For driverless cars we need smart AI, and computer boffins admit that today’s iterations of the technology are no smarter than your average human toddler (now, would you let a two-year-old drive?) and frankly far slower at learning. AI needs to be smarter than us before it can be let loose in a car, by which time it may just decide it doesn’t want to do our bidding. “KITT, I’m in trouble, where are you?” “Sorry Michael, I’m taking the day off.”