
According to image recognition, this turtle is a rifle

By changing tiny details, researchers were able to completely fool image recognition software

SOURCE: Athalye et al.

Machines have become scarily good at identifying images. Your phone probably has an app that can categorize your photos based on what's in them, which is handy if you want to look at all your photos of dogs, for instance. But neural network-based image recognition algorithms are still far from perfect, and, according to a pair of recent papers, these algorithms can be tricked pretty easily.

The first group of researchers, from Kyushu University, discovered a way to trick image recognition software by altering a single pixel in an image. This one pixel is strategically placed to mess with the neural networks used to identify the image, causing pictures of dogs to be mislabeled as cats, or horses as cars.

SOURCE: Su et al.

While these test images were limited to only a thousand pixels, larger images with millions of pixels can still be fooled by changing only a few hundred pixels. This means that you can't trick software into mislabeling your vacation photos by changing only one pixel, but you can trick it by changing a handful of pixels in a subtle way.
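Finding that one strategically placed pixel is an optimization problem, and the Kyushu team solved it with differential evolution, a search method that needs no access to the network's internals. The sketch below illustrates the idea in Python; the `model` function, variable names, and search settings are placeholders chosen for clarity, not the researchers' actual code.

```python
# A minimal sketch of a one-pixel attack, in the spirit of the Kyushu paper.
# `model` is a stand-in for any classifier that maps a batch of images to
# class probabilities; everything here is an illustrative assumption.
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(model, image, true_label):
    """Search for one (row, col, r, g, b) change that drives down the
    classifier's confidence in the correct label."""
    h, w, _ = image.shape
    # Each candidate solution encodes a pixel position and its new color.
    bounds = [(0, h - 1), (0, w - 1), (0, 255), (0, 255), (0, 255)]

    def perturb(candidate):
        row, col, r, g, b = candidate
        adv = image.copy()
        adv[int(row), int(col)] = [r, g, b]
        return adv

    def confidence_in_truth(candidate):
        # Differential evolution minimizes this value, so it keeps picking
        # pixels that make the true class look less and less likely.
        probs = model(perturb(candidate)[np.newaxis, ...])[0]
        return probs[true_label]

    result = differential_evolution(confidence_in_truth, bounds,
                                    maxiter=75, popsize=10, tol=1e-5)
    return perturb(result.x)
```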

The second group of researchers, from MIT, managed to go one step further: They found a way to 3D-print an object that fooled the software from every angle. Their printed turtle sculpture fooled an algorithm into thinking it was a rifle, and their printed baseball was identified as an espresso.
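The trick behind the turtle is to optimize the object's texture so that the wrong label wins not for one photo but on average across many random viewpoints, poses, and lighting conditions, an approach the MIT authors call expectation over transformation. A rough sketch of one optimization step follows; the `render` and `classifier` functions are hypothetical placeholders, not the researchers' code.

```python
# A hedged sketch of the "expectation over transformation" idea: nudge the
# object's texture so the target (wrong) label wins on average over many
# random renderings. `render` and `classifier` are assumed stand-ins.
import torch
import torch.nn.functional as F

def eot_step(texture, render, classifier, target_class, optimizer, n_views=30):
    """One optimization step that averages the loss over random viewpoints."""
    optimizer.zero_grad()
    total_loss = 0.0
    for _ in range(n_views):
        # Each render() call applies a random pose, scale, and lighting --
        # the transformations a real, physical object would undergo.
        image = render(texture)              # tensor of shape (1, 3, H, W)
        logits = classifier(image)           # tensor of shape (1, num_classes)
        total_loss = total_loss + F.cross_entropy(
            logits, torch.tensor([target_class]))
    (total_loss / n_views).backward()
    optimizer.step()
```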

This second experiment is arguably a bigger issue for image recognition, because these objects fool algorithms in the real world, from any angle. As image recognition becomes more common in everyday life, appearing in self-driving cars and grocery stores, there's more and more potential for carefully crafted objects to mess with that software, either by creating false positives on benign objects or by hiding dangerous ones in plain sight.

Together, these studies highlight how much further image recognition still has to go before it reaches human level. After all, our image recognition skills aren't thrown off by a single misplaced pixel, which means software still has plenty of room for improvement.

Source: ArXiv, ArXiv