Fooling Deep Neural Networks by A Nguyen, J Yosinski, J Clune
Researchers have evolved images that DNN image-recognition algorithms still classify with extremely high confidence, yet that look nothing like the objects they are labeled as.
Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion as a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion).
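To see why such fooling images are easy to produce, consider that a classifier's confidence is a differentiable function of the input pixels, so one can simply ascend its gradient. The paper itself evolves images with evolutionary algorithms; the sketch below instead uses gradient ascent on a toy linear softmax classifier with random weights (all names, sizes, and the model are illustrative assumptions, not the authors' setup). Starting from near-noise, a few hundred steps push the "image" to extreme confidence for an arbitrary target class, even though the input remains noise to a human eye.

```python
import numpy as np

# Toy stand-in for a trained DNN: a linear softmax classifier with
# random weights (an assumption for illustration, not the paper's model).
rng = np.random.default_rng(0)
n_classes, dim = 10, 64
W = rng.normal(size=(n_classes, dim))

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

target = 3                    # arbitrary class we want the model to "see"
x = rng.normal(size=dim) * 0.01   # start from (near-)white noise

for _ in range(2000):
    p = softmax(W @ x)
    # Gradient of log p[target] w.r.t. the input x for a linear model:
    # d log p_t / dx = W[t] - sum_k p_k W[k]
    grad = W[target] - p @ W
    x += 0.5 * grad           # gradient ascent on target confidence

confidence = softmax(W @ x)[target]
print(f"target-class confidence: {confidence:.6f}")
```

The resulting `x` is still statistically noise-like, yet the classifier assigns it near-certain probability for the target class — a miniature version of the mismatch between machine and human perception described above.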
The images are curious-looking things that only a machine can fully "read." Perhaps this is some kind of aesthetic appreciation that only advanced DNN algorithms possess.