Invisible, targeted infrared light can fool facial recognition software into thinking anyone is anyone else
A group of Chinese computer scientists from academia and industry has published a paper documenting a tool for fooling facial recognition software: hat-brim-mounted infrared LEDs that shine onto the wearer’s face, projecting shapes that are visible to CCTV cameras but invisible to the human eye and crafted to confuse the recognition software.
The tactic lets the attacker specify which face the classifier should “see” – the researchers were able to trick the software into recognizing arbitrary faces as belonging to the musician Moby, the South Korean politician Lee Hoi-chang, and others.
Their experiment draws on the body of work on adversarial examples: blind spots in machine-learning models that can be systematically discovered and exploited to fool those classifiers.
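To see the general idea of an adversarial example, here is a minimal sketch using the fast gradient sign method – a standard textbook technique, not the paper’s actual method – with a stand-in classifier and a made-up image in place of a real face recognizer:

```python
# Targeted FGSM sketch: nudge an input image so a classifier assigns it a
# chosen identity. The model, image, and target label below are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "face classifier": a tiny linear model over 64x64 grayscale images.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 64, 64, requires_grad=True)  # hypothetical face image
target = torch.tensor([3])                        # identity to impersonate

# Targeted attack: step *down* the loss gradient toward the target class.
loss = loss_fn(model(x), target)
loss.backward()
epsilon = 0.05                                    # perturbation budget
x_adv = (x - epsilon * x.grad.sign()).clamp(0, 1).detach()

print("target score before:", model(x)[0, 3].item())
print("target score after: ", model(x_adv)[0, 3].item())
```

The researchers’ contribution is realizing a perturbation like this physically, as infrared light on a face, rather than as pixel edits to a digital image.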
The gadget used in their attack is not readily distinguishable from a regular baseball cap, and the attack needs only a single photo of the person being impersonated in order to compute the correct light patterns. It achieved a 70% success rate in a “white-box” attack (where the classifier’s workings are well understood), and the researchers believe they could extend it to a “black-box” attack (where the classifier’s workings are a secret) using a technique called “Particle Swarm Optimization.”
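Particle Swarm Optimization suits the black-box setting because it needs only a score, not gradients. The sketch below shows the bare mechanics under invented assumptions: each particle encodes hypothetical LED parameters (spot position and intensity), and `black_box_match_score` is a made-up placeholder for whatever match score a real recognizer would return – not any actual API, and not the paper’s implementation.

```python
# PSO sketch for a black-box attack: search LED parameters to maximize the
# recognizer's match score for the target identity, using score queries only.
import random

def black_box_match_score(params):
    # Placeholder objective: pretend the recognizer rewards a spot near
    # (0.3, 0.7) with moderate intensity. A real attack would query the
    # face recognition system here.
    x, y, intensity = params
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2) + 0.1 * intensity

def pso(score, dim=3, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Positions and velocities for each particle, parameters kept in [0, 1].
    pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_score = [score(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_score[i])
    gbest_pos, gbest_score = pbest[g][:], pbest_score[g]  # swarm's best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest_pos[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            s = score(pos[i])
            if s > pbest_score[i]:
                pbest[i], pbest_score[i] = pos[i][:], s
                if s > gbest_score:
                    gbest_pos, gbest_score = pos[i][:], s
    return gbest_pos, gbest_score

best_params, best_score = pso(black_box_match_score)
print("best LED parameters:", best_params, "score:", best_score)
```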
https://boingboing.net/2018/03/26/the-threaten-from-infrared.html