An artificial intelligence system originally designed to greet event attendees has evolved into a COVID-19 screening system that protects Canada’s largest and most valuable collection of operational, historic military vehicles.
Master Cpl. Lana, an AI assistant developed by CloudConstable (Richmond Hill, Ontario, Canada; www.cloudconstable.com), utilizes an Intel (Santa Clara, CA, USA; www.intel.com) RealSense D415 3D depth camera and a FLIR (Wilsonville, OR, USA; www.flir.com) Lepton 2.5 thermal imaging module to greet volunteers at the Ontario Regiment Museum (Oshawa, Ontario, Canada; www.ontrmuseum.ca) and screen them for COVID-19 infection.
The museum originally intended to deploy the AI as a greeter at the museum’s front entrance, or to provide supplemental information at exhibits (Figure 1). The COVID-19 outbreak forced the museum to close temporarily to visitors. Volunteers still had to perform maintenance on the museum’s collection, however, so a second deployment of the technology was installed inside a vestibule located in the vehicle garage.
The system’s hardware attaches to several brackets on a wall mount assembly using pieces of wall track. The Lepton 2.5 module, which connects to the platform via a USB 2.0 cable, sits in a custom-fabricated housing placed above the screen. The RealSense D415 camera, which connects to the platform via a USB 3.0 cable, mounts to the same housing.
The housing attaches to a servomechanism, designed with off-the-shelf parts, that controls a pan/tilt mount. If the subject’s face is not entirely within the camera’s FOV, as determined by the system’s face detection inference models, the software issues servo motion control commands in real time from the hardware platform, via serial-over-USB API calls, to adjust the camera position until the subject’s face is clearly visible. An array with a speaker and microphone sits below the screen.
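The face-centering behavior described above can be sketched as a simple closed-loop correction. The function below is an illustrative guess, not CloudConstable’s actual control code: the step size, deadband, and function names are assumptions, and the resulting angles would be formatted into whatever serial command the servo controller expects.

```python
# Hypothetical sketch of the pan/tilt correction loop: given a face bounding
# box from the detection model, compute small angle adjustments that move the
# camera toward centering the face. All parameter values are assumptions.

def pan_tilt_correction(face_box, frame_w, frame_h, deadband=0.1, step_deg=2.0):
    """Return (pan_deg, tilt_deg) adjustments, or (0.0, 0.0) if centered.

    face_box is (x_min, y_min, x_max, y_max) in pixels. The deadband
    prevents jitter when the face is already near the frame center.
    """
    cx = (face_box[0] + face_box[2]) / 2.0
    cy = (face_box[1] + face_box[3]) / 2.0
    # Normalized offset from frame center, each component in [-1, 1]
    off_x = (cx - frame_w / 2.0) / (frame_w / 2.0)
    off_y = (cy - frame_h / 2.0) / (frame_h / 2.0)
    pan = step_deg * off_x if abs(off_x) > deadband else 0.0
    tilt = step_deg * off_y if abs(off_y) > deadband else 0.0
    return pan, tilt
```

In a real deployment the returned angles would be sent over the serial-over-USB link each frame, letting the mount converge on the face over several iterations.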
The company first experimented with using webcams for the system, but the cameras lacked depth sensing capability, according to Michael Pickering, President and CEO at CloudConstable. The system only interacts with users standing within approximately two yards of the screen. Doing so protects the privacy of anyone passing within the camera’s FOV but not interacting with the system.
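The privacy gating that the depth data enables might look like the following sketch: anyone farther from the kiosk than the interaction range is simply ignored. The function and parameter names are assumptions for illustration; the actual depth values would come from the RealSense depth frame at each detected face’s location.

```python
# Illustrative depth gating: only people within roughly two yards of the
# screen are treated as interacting with the system. Names and the exact
# cutoff are assumptions, not CloudConstable's API.

INTERACTION_RANGE_M = 1.83  # approximately two yards

def active_subjects(detections, max_range_m=INTERACTION_RANGE_M):
    """Keep only detections close enough to be interacting with the kiosk.

    detections is a list of (face_id, depth_m) pairs, where depth_m would be
    sampled from the depth camera at the face's position in the frame.
    """
    return [face_id for face_id, depth_m in detections if depth_m <= max_range_m]
```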
Other camera options evaluated include the Azure Kinect from Microsoft (Redmond, WA, USA; www.microsoft.com), which had the advantage of a built-in microphone array, and several models of depth camera from ASUS (Beitou District, Taipei, Taiwan; www.asus.com). CloudConstable had difficulty finding any of these cameras readily available in Canada, however, and chose the RealSense camera instead.
Affordability drove the selection of the FLIR Lepton 2.5 module, which offers radiometric calibration and acceptable resolution for the application, as did the module’s readily available API and SDK, says Pickering.
The AI’s platform, an Intel NUC 9 Pro Kit, a PC measuring 238 x 216 x 96 mm, mounts behind the screen. The NUC 9 Pro includes an Intel Xeon E-2286M processor, 16 GB of DDR4-2666 memory, and an integrated UHD Graphics P630 GPU. CloudConstable chose the PC for its ability to also run a discrete GPU, in this case an ASUS Dual GeForce RTX 2070 MINI 8 GB GDDR6, dedicated to graphics processing to ensure smooth, realistic animations. This allows inference processes to run strictly on the integrated GPU. The NUC 9 Pro also includes remote management software, allowing the company to provide off-site support.
Ambient light proves sufficient at most deployments of the AVA system, says Pickering. A simple LED light can provide extra illumination if required, such as inside the vestibule where museum volunteers go through their automated COVID-19 screening.
Volunteers stand in front of a high-definition ACER (San Jose, CA, USA; www.acer.com) display, on which Master Cpl. Lana appears (Figure 2). Pickering notes that the system supports multiple display types, however. The AI asks the volunteer a set of COVID-19 screening questions, such as whether they are experiencing symptoms or have been exposed to anyone with the illness, and the volunteer answers each one. The system then measures the volunteer’s skin temperature using the thermal imaging module.
If the volunteer correctly answers the screening questions and passes the temperature scan, they are checked in by the system and proceed into the museum for their shift. According to Jeremy Blowers, executive director of the Ontario Regiment Museum, the procedure takes less than 60 seconds to complete.
If the screening questions are not answered correctly or the temperature scan fails, the system sends an SMS message to managers’ phones informing them that a person in the facility has failed the COVID screening. The user does not learn that the screening failed, for fear of creating alarm. For example, a previous iteration of the system displayed on the monitor a live infrared image. Blowers asked CloudConstable to remove the image in case it showed elevated temperatures and upset the volunteer.
In the case of a fail result, a human employee delivers a second set of screening questions. The employee also gives the volunteer time to cool down, to account for artificially elevated skin temperature after working outside on a hot day, for example. A second temperature scan, performed with a handheld device, then takes place, and management decides whether or not to allow the volunteer access to the building.
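The pass/fail flow described above reduces to a small decision rule. The sketch below is an assumption-laden illustration: the temperature threshold, question handling, and alert text are invented for the example, not the system’s actual values.

```python
# Minimal sketch of the screening decision: any "yes" answer (symptoms,
# exposure, etc.) or an elevated skin temperature fails the screening and
# produces an alert for the managers' SMS notification. The volunteer is
# never shown the result. Threshold and message are illustrative only.

FEVER_THRESHOLD_C = 38.0  # assumed cutoff for this sketch

def screen(answers, skin_temp_c, threshold_c=FEVER_THRESHOLD_C):
    """Return (passed, alert) for one screening session.

    answers maps each screening question to the volunteer's yes/no reply
    (True means yes). alert is None on a pass, or the SMS body on a fail.
    """
    failed_questions = any(answers.values())
    fever = skin_temp_c >= threshold_c
    if failed_questions or fever:
        return False, "COVID screening failed; secondary screening required."
    return True, None
```

On a fail, the alert string would be handed to an SMS gateway for delivery to the managers’ phones.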
If volunteers want the system to recognize them, they must register with the software and allow the system to learn what their face looks like. A video teaches volunteers how to work with the AI in order to allow her to recognize them, for instance by taking off their hats, eyeglasses, and/or masks during the registration process, says Blowers. Lana surprised museum staff by learning within two weeks how to recognize registered volunteers even if they had their masks on, Blowers adds.
Once a volunteer registers, the AI greets them, informs them they are checked in, and thanks them for volunteering at the museum, all by name. Museum management receives compiled reports on check-in, check-out, and total volunteer hours on site.
AVA’s development began in the fall of 2018 using the Intel distribution of the OpenVINO toolkit, open source software designed to optimize deep learning models from pre-built frameworks and ease the deployment of inference engines onto Intel devices. CloudConstable used pre-trained convolutional neural network models for face detection and head pose estimation, supplemented with a rules-based algorithm that interprets the inference results from the head pose model.
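A rules-based layer on top of head pose inference might look like the following sketch. This is a guess at the kind of rule involved, not CloudConstable’s code; the function name and angle thresholds are assumptions. OpenVINO’s Open Model Zoo head pose models output yaw, pitch, and roll angles in degrees, which is the input format assumed here.

```python
# Hypothetical rules-based check layered on head pose inference: decide
# whether a detected person is facing the screen, given yaw/pitch angles
# (degrees) from a head pose estimation model. Thresholds are assumptions.

def facing_screen(yaw_deg, pitch_deg, yaw_limit=20.0, pitch_limit=15.0):
    """True if the head pose suggests the subject is attending the display."""
    return abs(yaw_deg) <= yaw_limit and abs(pitch_deg) <= pitch_limit
```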
Because the AI only asks yes or no questions during the COVID-19 screening, Microsoft Azure’s speech-to-text API suits this and other AVA deployments, says Pickering. Head pose detection algorithms can also determine whether the volunteer nods or shakes their head and interpret the motion as a yes or no answer, respectively.
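Translating head motion into an answer could be sketched as classifying a short window of head pose samples: a nod is a pitch oscillation, a shake a yaw oscillation. The window handling and amplitude threshold below are assumptions for illustration, not the deployed algorithm.

```python
# Hedged sketch of nod/shake classification from head pose samples.
# poses is a list of (yaw_deg, pitch_deg) tuples captured over a short
# window. The 10-degree swing threshold is an illustrative assumption.

def classify_gesture(poses, min_swing_deg=10.0):
    """Return 'yes' for a nod, 'no' for a shake, or None for neither."""
    yaws = [p[0] for p in poses]
    pitches = [p[1] for p in poses]
    yaw_swing = max(yaws) - min(yaws)          # side-to-side motion range
    pitch_swing = max(pitches) - min(pitches)  # up-and-down motion range
    if pitch_swing >= min_swing_deg and pitch_swing > yaw_swing:
        return "yes"  # nodding
    if yaw_swing >= min_swing_deg and yaw_swing > pitch_swing:
        return "no"   # shaking
    return None
```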
All data generated by interacting with the volunteers, including the answers given to the screening questions and thermal scan results, is stored on the Microsoft Azure cloud service.
No false negatives in COVID-19 screening results have occurred to date, a claim supported by the absence of reported cases among staff or their families, according to Blowers. False alarms have occurred, however, including two cases where volunteers were working outside in 42°C weather while wearing black hats, which elevated their skin temperature.
CloudConstable is currently experimenting with the Intel RealSense D455 model for future AVA deployments. The camera has a wider FOV than the RealSense D415 and therefore presents less of a challenge for tall users. Both cameras use the same SDK, so the D455 can swap in for the D415 without any software updates. The larger D455 does require a larger mount than the D415, however.
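A quick back-of-envelope calculation shows why the wider FOV helps with tall users: the vertical extent covered at the kiosk’s interaction distance grows substantially. The FOV figures below are Intel’s published depth FOV specs for these cameras (roughly 65° x 40° H x V for the D415 and 87° x 58° for the D455); the ~1.8 m distance is this article’s two-yard interaction range.

```python
# Vertical coverage of a field of view at a given distance, using simple
# trigonometry: extent = 2 * d * tan(FOV/2).

import math

def coverage_m(fov_deg, distance_m):
    """Linear extent (meters) spanned by a field of view at a distance."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# At ~1.8 m, the D415's ~40-degree vertical FOV covers about 1.31 m, while
# the D455's ~58-degree vertical FOV covers about 2.0 m -- enough extra
# headroom that a tall user's face stays in frame with less camera tilting.
```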