Researchers develop set of natural images to break image classification systems

July 23, 2019
Remedies for the errors should increase the overall effectiveness of image classification algorithms.

Understanding why image classification algorithms fail to correctly identify specific images is just as important as knowing how to make these systems function successfully. A group of researchers at UC Berkeley, the University of Chicago, and the University of Washington has developed a new tool to help make sure your algorithm scores a failing grade.

Images called “adversarial examples” are designed to intentionally cause failure and to measure the worst-case performance of an image classification algorithm. Often, according to the researchers behind the paper titled “Natural Adversarial Examples,” adversarial examples are created via artificial modification.
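To make the idea of artificial modification concrete, here is a minimal sketch of one widely used technique, the fast gradient sign method (FGSM). FGSM is not named in the paper; the model, image, and label below are assumed placeholders, and the sketch is illustrative rather than the researchers' method.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Artificially modify an image so a classifier is likely to misread it.

    image: float tensor of shape (1, 3, H, W) with values in [0, 1]
    label: tensor of shape (1,) holding the true class index
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a small step in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The perturbation is usually imperceptible to people but can flip the model's prediction, which is exactly the kind of engineered failure the researchers contrast with naturally occurring ones.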

The researchers believe that artificially created images do not test the robustness of an image classification algorithm as well as natural images do. They consider the example of a photographer taking images and then submitting them to an image classification algorithm: if the algorithm has only been tested against artificial errors, it may be less able to detect errors in naturally occurring images for lack of training against them.

ImageNet is a database of images organized on the same principles as WordNet, which groups English nouns, verbs, adjectives, and adverbs into sets that each express a concept, for instance linking general and specific types of an object like “chair” and “armchair,” or “bed” and “bunkbed.”
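That hierarchy can be browsed directly with NLTK's WordNet interface, as in the sketch below; the synset names come from WordNet itself, though exact output may vary with the WordNet version installed.

```python
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

armchair = wn.synset('armchair.n.01')
print(armchair.hypernyms())   # [Synset('chair.n.01')] -- an armchair is a kind of chair

bunk_bed = wn.synset('bunk_bed.n.01')
print(bunk_bed.hypernyms())   # [Synset('bed.n.01')] -- a bunk bed is a kind of bed
```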

The researchers created multiple classifier algorithms based on ResNet-50, a convolutional neural network trained on the ImageNet database, and filtered for image classes that caused the classifiers to make “egregious errors.” The researchers then used the iNaturalist and Flickr websites to download images related to these image classes and removed any images that the classifiers were able to identify correctly.
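The filtering step can be sketched with torchvision's off-the-shelf ImageNet-trained ResNet-50. The `is_fooled` helper below is hypothetical, a rough illustration of the idea rather than the researchers' actual pipeline.

```python
import torch
from PIL import Image
from torchvision import models

# An ImageNet-trained ResNet-50, the same base architecture the paper builds on.
weights = models.ResNet50_Weights.IMAGENET1K_V1
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def is_fooled(path, true_class_index):
    """Return True if the classifier misidentifies the image at `path`.

    Mirrors the filtering idea: only images the model gets wrong are kept.
    """
    img = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
    with torch.no_grad():
        predicted = model(img).argmax(dim=1).item()
    return predicted != true_class_index
```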

Any images that caused errors deemed fair, such as mistaking a grizzly bear for a black bear, were also removed from the dataset. The remaining images were then passed through human review to ensure they were labeled correctly. This final test dataset, which the researchers named ImageNet-A and against which the classifiers in the research were tested, is available as a free download on GitHub.
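The download unpacks into one folder per class, so evaluating a model against it can look roughly like the sketch below. One assumption worth flagging: ImageNet-A covers a subset of the 1,000 ImageNet classes, so the folder indices must be remapped to full ImageNet class indices before comparison, a detail elided here.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models

weights = models.ResNet50_Weights.IMAGENET1K_V1
model = models.resnet50(weights=weights).eval()

# Assumes the ImageNet-A download has been unpacked to ./imagenet-a/,
# one subfolder per class, which torchvision's ImageFolder reads directly.
dataset = datasets.ImageFolder('imagenet-a/', transform=weights.transforms())
loader = DataLoader(dataset, batch_size=32)

correct = total = 0
with torch.no_grad():
    for images, targets in loader:
        preds = model(images).argmax(dim=1)
        # Caveat: ImageNet-A folder indices must first be remapped to the
        # model's 1,000-class ImageNet indices for this comparison to hold.
        correct += (preds == targets).sum().item()
        total += targets.numel()
print(f'top-1 accuracy: {correct / total:.1%}')
```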

Accuracy during the tests averaged in the single digits. The researchers determined that over-reliance on color, texture, and background cues was the most common cause of the errors, some of which are downright comical: one image was misidentified as a broom, for instance, and another as a school bus.
The researchers applied two best-in-class neural network training schemes in an attempt to teach their classifiers to avoid the misclassifications observed in the experiment. Robustness gains post-training were described as “miniscule.”

Remedies the researchers suggest for poor performance against natural adversarial images include architecture improvements: higher-quality uncertainty estimation routines that detect and abstain from probable false predictions, increases in the width and number of neural network layers, and self-attention techniques such as Squeeze-and-Excitation. Classifiers with self-attention achieved accuracies slightly over 10%, demonstrating the efficacy of the technique.
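Squeeze-and-Excitation is a small, self-contained module, so a minimal PyTorch version can show what the technique adds. The sketch below follows the original Hu et al. design; it is not code from this paper, and the reduction factor of 16 is the conventional default, not a value taken from the research.

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Channel self-attention: reweight feature channels using global context."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Squeeze: average each channel's feature map down to a single number.
        summary = x.mean(dim=(2, 3))
        # Excitation: learn a 0-to-1 weight per channel from that summary.
        scale = self.fc(summary).unsqueeze(-1).unsqueeze(-1)
        return x * scale

# Example: reweight a batch of 64-channel feature maps.
features = torch.randn(8, 64, 32, 32)
print(SqueezeExcitation(64)(features).shape)  # torch.Size([8, 64, 32, 32])
```

Inserted after convolutional blocks, a module like this lets the network learn to damp down whole channels, one plausible motivation for using it against the color-, texture-, and background-cue errors described above.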

These remedies should increase the effectiveness of an image classification algorithm against all sorts of images, not only against the images involved in the testing.
