Self-driving cars, it turns out, are not immune to remote attacks. The Achilles’ heel of these vehicles is the LIDAR sensor system, which can be fooled into thinking there are cars and pedestrians on the road when there are none.
Even more astonishing is that you don’t need sophisticated high-tech equipment to exploit it. A low-powered laser and a pulse generator are all it takes to feed the car’s LIDAR echoes of nonexistent objects. The sensor concludes that something, another vehicle for instance, is blocking the path and stops the car, and stopping abruptly in the middle of a busy road could prove disastrous.
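The trick rests on simple timing arithmetic: a LIDAR ranges objects by the round-trip time of its light pulses, so replaying a pulse after a chosen delay makes a phantom appear at the corresponding distance. Here is a minimal sketch of that calculation; the distances are illustrative, not taken from Petit’s setup.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def echo_delay(fake_distance_m: float) -> float:
    """Delay (seconds) after the real pulse at which to replay it,
    so the sensor sees an object at fake_distance_m (round trip = 2d/c)."""
    return 2.0 * fake_distance_m / SPEED_OF_LIGHT

# A pulse generator firing ~133 ns after each detected pulse fakes a car 20 m ahead.
for d in (5.0, 20.0, 100.0):
    print(f"object at {d:5.1f} m -> replay delay {echo_delay(d) * 1e9:7.1f} ns")
```

The delays involved are on the order of tens to hundreds of nanoseconds, well within reach of an off-the-shelf pulse generator, which is why the attack is so cheap to mount.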
The vulnerability was discovered by Jonathan Petit, Chief Scientist at Security Innovation, a software security company. Petit found that recorded LIDAR pulses, if not encoded, can simply be replayed back at the sensor. More worrying still, the attack doesn’t have to be precise and can be carried out from a distance of 100 metres.
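Petit’s qualifier, “if not encoded”, hints at one hardening step: stamp each outgoing pulse with a random code and reject echoes that don’t carry the code currently in flight, so a replayed recording of an old pulse no longer matches. The sketch below is speculative, not Petit’s design, and the emit/verify interfaces are hypothetical stand-ins for real LIDAR hardware.

```python
import secrets

def new_pulse_code(bits: int = 16) -> int:
    """Random per-pulse signature; real hardware might modulate pulse timing or shape."""
    return secrets.randbits(bits)

def echo_is_authentic(sent_code: int, received_code: int) -> bool:
    """Accept an echo only if it carries the code of the pulse just fired."""
    return sent_code == received_code

previous = new_pulse_code()  # an earlier pulse the attacker recorded
current = new_pulse_code()   # the pulse the sensor just fired
print(echo_is_authentic(current, current))   # True: genuine echo
print(echo_is_authentic(current, previous))  # almost surely False: stale replay rejected
```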
Petit also proposes a remedy he calls “misbehaviour detection”: a system that distinguishes plausible objects from implausible ones (a toy version of such a check is sketched below). Since self-driving cars haven’t yet gone into commercial production, the discovery of this shortcoming comes at a fortunate time. Manufacturers such as Google, Apple and Uber still have plenty of time to shield their vehicles against such attacks.
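Petit hasn’t published an implementation, but one plausibility test such a system might apply is whether an object’s apparent motion between scans is physically possible. This is a speculative sketch with an invented closing-speed threshold, not Petit’s actual method.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float       # range reported by the LIDAR this scan
    prev_distance_m: float  # range for the same track one scan earlier
    scan_interval_s: float  # time between the two scans

# Invented threshold: reject objects closing faster than ~250 km/h.
MAX_CLOSING_SPEED = 70.0  # m/s

def is_plausible(d: Detection) -> bool:
    """Flag detections whose scan-to-scan motion is physically implausible."""
    closing_speed = abs(d.prev_distance_m - d.distance_m) / d.scan_interval_s
    return closing_speed <= MAX_CLOSING_SPEED

# A phantom that "jumps" 30 m within a 50 ms scan implies 600 m/s and is rejected.
print(is_plausible(Detection(20.0, 21.0, 0.05)))  # True: plausible vehicle
print(is_plausible(Detection(20.0, 50.0, 0.05)))  # False: implausible jump
```

A real system would track objects across many scans and fuse LIDAR with radar and cameras, but even a simple consistency check like this would strip out phantoms that blink into existence a few metres ahead of the car.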