Autonomous vehicles fooled by drones that project too-quick-for-humans road-signs


In MobilBye: Attacking ADAS with Camera Spoofing, a group of Ben Gurion security researchers describe how they were able to defeat a Renault Captur’s “Level 0” autopilot (Level 0 systems advise human drivers but do not directly operate cars) by following the car with a drone that projected images of fake road signs for a 100ms instant – too short for human perception, but long enough for the autopilot’s sensors.
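The arithmetic behind the trick is simple: a camera that samples the scene tens of times per second will record several full frames of a 100ms flash, even though a driver glancing at the road may never register it. A minimal back-of-the-envelope sketch (the 30/60 fps frame rates are illustrative assumptions, not figures from the MobilBye paper):

```python
# Sketch: how many camera frames overlap a brief projected "sign".
# Frame rates below are assumed typical values, not from the paper.

def frames_captured(flash_ms: float, camera_fps: float) -> int:
    """Lower bound on full frames recorded during a flash of flash_ms."""
    return int(flash_ms * camera_fps / 1000)

FLASH_MS = 100  # projection duration cited in the article

for fps in (30, 60):
    print(f"{fps} fps camera: {frames_captured(FLASH_MS, fps)} frames see the fake sign")
```

At 30 fps the projected sign lands on about three consecutive frames, which is plenty for a sign-recognition model to treat it as real.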

Such an attack would leave no physical evidence behind and could be used to trick cars into making maneuvers that compromised the safety or integrity of their passengers and other users of the road – from unexpected swerves to sudden speed-changes to detours into unsafe territory.

As Geoff Manaugh writes on BLDGBLOG, “They are like flickering ghosts only cars can perceive, navigational dazzle imperceptible to humans.”

The “imperceptible to humans” part is the most interesting thing about this: we tend to think of electronic sensors’ ability to exceed human sensory capacity as a feature. But when you’re relying on a “human in the loop” to sanity-check an algorithm’s interpretations of the human-legible world, attackers’ ability to show the computer things that the human can’t see is a really gnarly problem.