Computer vision helps smartphone app provide personalized skin care routine

Sept. 10, 2019
Image processing algorithms analyze images for personalized recommendations.

By Raja Bala

Procter & Gamble (Cincinnati, OH, USA) market research shows that the overwhelming number of products available today has led to the mass beauty aisle becoming crowded and confusing for consumers. Additionally, the research shows that 14% of women do not know what their skin care needs are, while 33% are unable to find what they need in large beauty retail sections.

As a result, PARC (a Xerox Company; Palo Alto, CA, USA) partnered with P&G for the development of a facial analysis platform based on computer vision and deep learning techniques that offers users personalized recommendations for beauty products, such as wrinkle creams.

The Olay Skin Advisor (OSA), a result of this collaboration, offers a web-based skin analyst application and advisor tool that uses artificial intelligence to create a personalized beauty experience once a user uploads a picture of their face (a selfie), according to PARC.

As part of the application, the Smart Selfie agent first checks that the image is of adequate quality to perform meaningful analysis of skin age. Parameters include proper distance to the camera, adequate lighting, frontal pose, sharpness of image, absence of a strong facial expression, and absence of occlusion in front of the face.

The app then uses image processing algorithms to identify any of these adverse conditions. For pose and distance to the camera, calculations are based on the bounding box of the face and locations of facial landmarks such as eyes, nose, and mouth. For lighting issues, the app looks at image brightness and contrast statistics, and for sharpness the app uses edge detection algorithms. Lastly, for occlusion and facial expression the app deploys machine learning techniques including deep learning.
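The lighting and sharpness checks described above can be sketched with simple image statistics. The snippet below is an illustrative approximation, not the Olay Skin Advisor's actual implementation: the threshold values and the 4-neighbor Laplacian sharpness proxy are assumptions standing in for whatever brightness, contrast, and edge-detection measures the app uses.

```python
import numpy as np

def check_image_quality(image_rgb, min_brightness=60.0,
                        min_contrast=30.0, min_sharpness=50.0):
    """Heuristic quality flags; thresholds are illustrative placeholders."""
    # Average the color channels to get a grayscale intensity image
    gray = image_rgb.mean(axis=2).astype(np.float64)
    brightness = gray.mean()   # overall exposure
    contrast = gray.std()      # spread of intensities
    # 4-neighbor Laplacian approximates an edge-detection filter;
    # its variance is a common proxy for image sharpness
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    sharpness = lap.var()
    return {"brightness_ok": brightness >= min_brightness,
            "contrast_ok": contrast >= min_contrast,
            "sharpness_ok": sharpness >= min_sharpness}
```

A uniformly gray image, for example, would pass the brightness check but fail contrast and sharpness, prompting the user to retake the photo.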

If any of these factors are unsatisfactory, the user is prompted to retake the picture with specific advice on improving the picture’s quality. In this way, the system coaches users on how to take an optimal selfie photo for a customized analysis of their own skin care.

OSA uploads the image to the cloud (Amazon Web Services; Seattle, WA, USA) and analyzes it using computer vision and deep learning techniques to generate skin quality scores (Figure 1) across the forehead, cheek, nose, chin, under eye, and crow's-feet at the edges of the eyes. Together these scores assess the consumer's perceived "skin age," which OSA then uses to recommend appropriate skin care products.

The method works by first detecting the face and facial landmarks (eyes, nose, mouth, and face border) using computer vision techniques. The landmark points are then fed to a geometric region-growing algorithm (Figure 2) that marks the regions corresponding to the forehead, cheeks, nose, chin, and under eye. Skin image patches extracted from the different facial regions are used to train a convolutional neural network (CNN) for predicting skin age on each region separately.
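To give a feel for how landmark points can define skin regions, the sketch below derives coarse rectangular regions from a hypothetical set of landmark coordinates. The landmark names and the scale factors are assumptions for illustration; the article's geometric region-growing algorithm produces finer, contour-following masks than these boxes.

```python
import numpy as np

def facial_regions(landmarks):
    """Derive coarse (x0, y0, x1, y1) boxes from landmark points.

    `landmarks` is a hypothetical dict mapping names such as
    "left_eye" to (x, y) pixel coordinates; y increases downward.
    """
    le = np.array(landmarks["left_eye"], dtype=float)
    re = np.array(landmarks["right_eye"], dtype=float)
    nose = np.array(landmarks["nose"], dtype=float)
    chin = np.array(landmarks["chin"], dtype=float)
    eye_y = (le[1] + re[1]) / 2.0
    eye_dist = np.linalg.norm(re - le)  # inter-eye distance as a scale reference
    regions = {
        # Forehead sits above the eye line, scaled by inter-eye distance
        "forehead": (le[0], eye_y - 1.2 * eye_dist, re[0], eye_y - 0.4 * eye_dist),
        "under_eye": (le[0], eye_y + 0.2 * eye_dist, re[0], eye_y + 0.6 * eye_dist),
        "nose": (nose[0] - 0.3 * eye_dist, eye_y,
                 nose[0] + 0.3 * eye_dist, nose[1] + 0.2 * eye_dist),
        "chin": (chin[0] - 0.5 * eye_dist, chin[1] - 0.4 * eye_dist,
                 chin[0] + 0.5 * eye_dist, chin[1]),
    }
    return {k: tuple(float(v) for v in box) for k, box in regions.items()}
```

Patches cropped from such regions would then be the per-region training inputs for the CNN.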

The app uses a pre-trained network called AgeNet derived from a large dataset for predicting chronological age from facial images. This network is then fine-tuned to predict five chronological age categories from the entire face using smaller datasets. During fine-tuning, only a small fraction of the network’s parameters are updated. Next, the network is further fine-tuned based on training patches from each of the four facial regions to produce four regional CNNs. Features learned by each CNN train a support vector machine (SVM) regression model to predict skin age for that facial region.
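The final stage of that pipeline, an SVM regression model trained on CNN features, can be sketched with scikit-learn's `SVR`. The randomly generated 128-dimensional vectors below are stand-ins for features extracted from a fine-tuned regional CNN; the feature dimensionality and kernel choice are assumptions, not details from the article.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Stand-in for CNN features: in the real pipeline, each row would be the
# feature vector a regional CNN extracts from one skin patch
features = rng.normal(size=(200, 128))
# Stand-in skin-age labels for those patches
ages = rng.uniform(20.0, 70.0, size=200)

# Support vector regression maps CNN features to a continuous skin age
svr = SVR(kernel="rbf", C=1.0)
svr.fit(features, ages)
pred = svr.predict(features[:5])
```

One regressor per facial region yields the separate regional skin-age estimates described above.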

A novel data augmentation scheme further refines the regional models: training image patches from each facial region are processed through different levels of digital smoothing, creating a larger augmented set of skin patches simulating a range of skin ages for that region. For example, after smoothing a forehead patch with a standard digital smoothing filter, the forehead CNN processes the smoothed patch.

The resulting feature vector is concatenated with the feature vectors extracted by the CNNs for the remaining un-smoothed skin regions (chin, cheek, under-eye) to form a single feature vector. This combined vector is then passed through a full-face SVM age regressor to estimate the full-face skin age that results from smoothing only the forehead skin.

This age label acts as the ground truth perceived age for the smoothed forehead patch. Repeating this process for multiple forehead patches at multiple levels of smoothing produces an augmented training dataset for the forehead, which is used for building a refined skin age regressor. This process is repeated for the other skin regions, as well.
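The smoothing step of this augmentation loop can be sketched as below. A box filter stands in for the article's unspecified "standard digital smoothing filter," and the kernel sizes are illustrative; each blur level yields one additional training patch simulating smoother, younger-looking skin.

```python
import numpy as np

def box_blur(patch, k):
    """Box filter of size k x k; a stand-in for the digital smoothing filter."""
    pad = k // 2
    padded = np.pad(patch, pad, mode="edge")
    out = np.zeros(patch.shape, dtype=np.float64)
    # Sum all k*k shifted copies, then normalize to get the local average
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + patch.shape[0], dx:dx + patch.shape[1]]
    return out / (k * k)

def augment_patch(patch, levels=(3, 5, 7)):
    """Return the original patch plus progressively smoother versions."""
    return [patch.astype(np.float64)] + [box_blur(patch, k) for k in levels]
```

Each augmented patch would then be labeled with the ground-truth perceived age produced by the full-face regression described above.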

Finally, the skin age analysis is fed to a recommendation engine, which pinpoints specific, best-fit beauty products for that person’s skin tone, complexion, and wrinkles.

Since P&G launched the Olay Skin Advisor in August 2016, the website has attracted more than 5 million visitors globally. The innovation has enabled P&G to become more responsive to consumer needs by integrating the power of smartphone cameras and software with the latest advances in computer vision and artificial intelligence. In effect, it has put the beauty of personal skin care back into one’s own control, rather than relying solely on retailers and beauty aisle experts.

Raja Bala is a Principal Scientist at PARC, a Xerox company (Palo Alto, CA, USA).
