Wearable device provides 3D vision for the visually impaired
FRAMOS (Taufkirchen, Germany; www.framos.com) partnered with the CDTM Institute of the Technical University of Munich (TUM; Munich, Germany; www.cdtm.de) to develop a wearable device that uses real-time 3D vision to support visually impaired people in daily life.
The wearable device, called “CU,” leverages a RealSense 3D depth camera from Intel (Santa Clara, CA, USA; www.intel.com) and algorithms that translate visual information into haptic and audio feedback. The depth camera used in the device is the D415 3D camera, which features an OV2740 CMOS image sensor from OmniVision Technologies (Santa Clara, CA, USA; www.ovt.com) with a 1.4 µm pixel size, a USB 3.0 Type-C interface, and active infrared stereo depth technology.
In addition to the cameras, the wearable device features bone conduction speakers for audio feedback. The device is controlled by a processing hub that includes a processing unit, a GPS sensor, and a GSM module for LTE connectivity. The voice-controlled glasses are connected to a haptic feedback wristband via Bluetooth, where a micro-processing unit translates the incoming data into haptic feedback through a 2D array of vibration motors. Based on the exact location and movement of the vibrating feedback on the arm, wearers are informed about the position and distance of objects in their surroundings, according to FRAMOS. The wearable device is powered by two rechargeable batteries, located on the glasses and in the wristband, that enable a full day of use.
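As an illustration of how a system like this might map depth data onto a vibration grid, the following Python sketch downsamples a depth image to a small motor array, with nearer obstacles producing stronger vibration. The grid size, sensing range, and intensity scaling are illustrative assumptions, not FRAMOS specifications.

```python
# Hypothetical sketch: translate a depth image into intensities for a
# 2D array of vibration motors. Grid size, range, and scaling are
# illustrative assumptions, not details published by FRAMOS.

def depth_to_haptics(depth_m, motor_rows=4, motor_cols=4, max_range_m=3.0):
    """Downsample a depth image (meters) to a motor grid of intensities 0..1.

    Nearer obstacles produce stronger vibration; anything beyond
    max_range_m, or invalid (<= 0), maps to zero intensity.
    """
    rows, cols = len(depth_m), len(depth_m[0])
    grid = [[0.0] * motor_cols for _ in range(motor_rows)]
    for r in range(motor_rows):
        for c in range(motor_cols):
            # Pixel block covered by this motor
            r0, r1 = r * rows // motor_rows, (r + 1) * rows // motor_rows
            c0, c1 = c * cols // motor_cols, (c + 1) * cols // motor_cols
            block = [depth_m[i][j]
                     for i in range(r0, r1) for j in range(c0, c1)
                     if 0 < depth_m[i][j] <= max_range_m]
            if block:
                nearest = min(block)  # react to the closest obstacle
                grid[r][c] = 1.0 - nearest / max_range_m
    return grid
```

In a real device the depth frames would come from the RealSense SDK and the grid would drive motor PWM duty cycles; here the function is kept pure so the mapping itself is easy to inspect.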
Additionally, FRAMOS notes that the CU glasses come with a “Smart Assistant” that provides information on facial recognition, text recognition, and object recognition, which enables “visually impaired people to fully understand their environment and to have advanced guidance for safe navigation.”
Dr. Christopher Scheubel, FRAMOS Business Development, commented: “We are proud having found a way to bring state-of-art technology into an application, which provides a huge impact on the daily life of the visually impaired. This project hits the sense of innovation by really supporting humans and improving their lives. The exceptional beauty of this technology is the ability to provide visual information normally given by the human eye.”
Evaluation kit leverages AI software and CMOS image sensor for driver and passenger monitoring
OmniVision Technologies, Inc. (Santa Clara, CA, USA; www.ovt.com) has partnered with in-cabin driver monitoring software company Jungo Connectivity (Netanya, Israel; www.jungo.com) to create an evaluation kit for driver and passenger monitoring that utilizes artificial intelligence software and a CMOS sensor designed specifically for driver monitoring.
The kit is designed to enable original equipment manufacturers (OEMs) and Tier-1 automotive designers to develop the next generation of driver and occupant monitoring systems for advanced driver assistance systems (ADAS), semi-autonomous vehicles, and fully autonomous vehicles. It combines Jungo’s CoDriver software development kit (SDK) with OmniVision’s OV2311 CMOS sensor, which is built on OmniPixel3-GS global shutter technology. CoDriver is a camera-based driver monitoring software solution based on deep learning, machine learning, and computer vision algorithms. The software provides the car with a complete, real-time picture of the driver’s condition and, together with additional ADAS components through sensor fusion, helps cars better understand the relationships between events both internal and external to the vehicle cabin, according to Jungo.
Designed for mainstream driver monitoring applications, the OV2311 is a monochrome 2 MPixel CMOS sensor with a 3 µm pixel size that can capture video at up to 60 fps in a 1600 x 1300 resolution format, sized to cover the driver’s head box and provide monitoring regardless of driver height or seat position. It is reportedly adept at accurate gaze and eye tracking and comes in a 7.2 x 6.2 mm automotive chip-scale package, which allows it to be discreetly designed into the vehicle cockpit. The sensor supports a 4-lane MIPI interface and a 12-bit double-data-rate digital video port interface.
The kit was designed to obtain “the most complete, real-time picture of the driver’s condition—regardless of the lighting conditions.” It will also be able to determine whether the driver is ready to take control in a semi-autonomous emergency scenario. If not, the vehicle can reportedly take an alternative action, such as pulling off the road and parking. Additionally, in a fully autonomous experience, the sensors and software provide information about the passengers’ characteristics, possessions, and emotional and medical states.
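The takeover decision described above can be reduced, in very simplified form, to a rule like the following sketch. The input signals and threshold are hypothetical placeholders; the actual CoDriver system derives driver state from deep learning models, not hand-written rules.

```python
# Illustrative sketch of a takeover decision in a semi-autonomous
# emergency scenario. Inputs and threshold are hypothetical, not
# part of Jungo's or OmniVision's actual interfaces.

def handover_action(eyes_on_road, alert_level, min_alert=0.6):
    """Hand control to the driver only if they appear attentive enough;
    otherwise fall back to a minimal-risk maneuver such as pulling over."""
    if eyes_on_road and alert_level >= min_alert:
        return "handover_to_driver"
    return "pull_over_and_park"
```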
“Increasingly advanced capabilities are necessary to enable safer and more intelligent automotive systems that will power not only next-generation ADAS, but also the first fully autonomous vehicles,” said Cliff Cheng, senior director of automotive marketing at OmniVision. “For all of these next-generation occupant monitoring and identification applications, the ability to perform optimally in low- or no-light conditions is a must.”
The kit includes version 1.5 of Jungo’s CoDriver SDK, which has been configured and tested to perform optimally with the OV2311 image sensor, in real time and using a live video stream.
Artificial intelligence processors enable deep learning at the edge
CEVA, Inc.’s (Mountain View, CA, USA; www.ceva-dsp.com) NeuPro line of artificial intelligence (AI) processors for deep learning inference at the edge are designed for “smart and connected edge device vendors looking for a streamlined way to quickly take advantage of the significant possibilities that deep neural network technologies offer.”
The new self-contained AI processors are designed to handle deep neural networks on-device and range from 2 Tera Ops Per Second (TOPS) for the entry-level processor to 12.5 TOPS for the most advanced configuration, according to CEVA.
“It’s abundantly clear that AI applications are trending toward processing at the edge, rather than relying on services from the cloud,” said Ilan Yona, vice president and general manager of the Vision Business Unit at CEVA. “The computational power required along with the low power constraints for edge processing, calls for specialized processors rather than using CPUs, GPUs or DSPs. We designed the NeuPro processors to reduce the high barriers-to-entry into the AI space in terms of both architecture and software. Our customers now have an optimized and cost-effective standard AI platform that can be utilized for a multitude of AI-based workloads and applications.”
The NeuPro architecture comprises a combination of hardware- and software-based engines for a complete, scalable, and expandable AI solution. The family consists of four AI processors, offering different levels of parallel processing:
- NP500, the smallest processor, includes 512 multiplier–accumulator (MAC) units and targets IoT, wearables and cameras
- NP1000 includes 1024 MAC units and targets mid-range smartphones, ADAS, industrial applications and AR/VR headsets
- NP2000 includes 2048 MAC units and targets high-end smartphones, surveillance, robots and drones
- NP4000 includes 4096 MAC units for edge processing in enterprise surveillance and autonomous driving
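The quoted TOPS figures follow directly from the MAC counts: each MAC unit performs two operations (a multiply and an accumulate) per clock cycle, so peak throughput is MACs × 2 × clock rate. The clock frequencies in the sketch below are assumptions chosen to reproduce CEVA’s stated figures; CEVA does not specify them here.

```python
# Peak-throughput arithmetic for MAC-array accelerators. Clock
# frequencies used in the examples are assumptions, not CEVA specs.

def peak_tops(mac_units, clock_ghz):
    """Peak tera-operations per second: each MAC contributes two
    operations (multiply + accumulate) per cycle.
    TOPS = MACs * 2 * (cycles/s) / 1e12 = MACs * 2 * clock_ghz / 1000."""
    return mac_units * 2 * clock_ghz / 1000.0
```

At an assumed ~1.95 GHz, the 512-MAC NP500 lands near the stated 2 TOPS; at an assumed ~1.5 GHz, the 4096-MAC NP4000 lands near the stated 12.5 TOPS.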
Each processor consists of the NeuPro engine and NeuPro vector processing unit (VPU). The NeuPro engine includes a hardwired implementation of neural network layers among which are convolutional, fully-connected, pooling, and activation, according to CEVA. The NeuPro VPU is a programmable vector DSP, which handles the CEVA deep neural network (CDNN) software and provides software-based support for new advances in AI workloads.
Furthermore, NeuPro supports both 8-bit and 16-bit neural networks, with the choice between them optimized in real time. The processors’ MAC units reportedly achieve better than 90% utilization when running, while the overall processor design substantially reduces DDR (double data rate) bandwidth, improving power consumption for any AI application, according to CEVA.
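A minimal sketch of what choosing between 8-bit and 16-bit precision can look like: quantize a layer’s weights to 8 bits and fall back to 16 bits if the reconstruction error exceeds a tolerance. This is purely illustrative and not CEVA’s actual decision mechanism.

```python
# Illustrative bit-width selection for neural network weights.
# Not CEVA's actual mechanism; tolerance and scheme are assumptions.

def quantize(weights, bits):
    """Symmetric uniform quantization of a list of floats; returns the
    dequantized values and the scale factor used."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0  # avoid div-by-zero
    q = [round(w / scale) for w in weights]
    return [v * scale for v in q], scale

def choose_bitwidth(weights, tolerance=1e-3):
    """Pick 8-bit if its worst-case reconstruction error is within
    tolerance; otherwise use 16-bit."""
    deq8, _ = quantize(weights, 8)
    err = max(abs(a - b) for a, b in zip(weights, deq8))
    return 8 if err <= tolerance else 16
```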
NeuPro will be available for licensing to select customers in Q2 of 2018 and for general licensing in Q3 of 2018.
Facial recognition technology spots suspect in crowd of 60,000
Perhaps once considered little more than a science fiction concept, facial recognition technology is becoming increasingly effective, but with it, privacy concerns continue to grow.
The latest example of this is in Nanchang, China, where police used facial recognition technology to locate a 31-year-old suspect in a concert crowd of nearly 60,000 people. The man, identified only by the surname Ao, was reportedly wanted for “economic crimes,” according to Kan Kan News (http://bit.ly/VSD-KAN). Details about Ao, reported Kan Kan News, were in a national database, and when he arrived at the stadium, cameras with facial recognition capabilities at the entrances identified him and flagged authorities.
“He was completely shocked when we took him away,” police officer Li Jin told Xinhua news agency. “He couldn’t fathom that police could so quickly capture him in a crowd of 60,000.”
Ao’s capture is the latest example of China’s growing use of facial recognition technology. Law enforcement and security officials in China, according to The Washington Post’s (Washington, D.C.; www.washingtonpost.com) Simon Denyer, aim to use such technology to track suspects and even predict crimes. Ultimately, officials in China want to create a national surveillance system known as “Xue Liang,” or “Sharp Eyes,” to monitor the movements of its citizens.
If you thought the Facebook data fiasco was concerning, the database that law enforcement in China is using goes even further. A vast database of information on every citizen, referred to as a public security “police cloud” infrastructure, was implemented to gather information on criminal and medical records, travel bookings, online purchases, and even social media comments, and to link it all to everyone’s identity card and face, according to The Washington Post (http://bit.ly/VSDFR).
When Denyer visited three technology companies in China, he was shown systems monitoring cars and pedestrians as they passed through an intersection; attached to each entry were text bubbles showing information such as the person’s gender and hometown.
The Washington Post article goes on to note that Human Rights Watch has a page dedicated to mass surveillance and the use of “big data” in China, and how it violates privacy rights and enables officials to “arbitrarily detain people.”
Facial recognition technology, it could be argued, was always destined to present such problems. Ever been creeped out by Facebook suggesting you tag a friend or family member in a photo that is not yet tagged? Plenty of people have. How do you suppose someone in that situation feels about the ability of a system to find a person in a dark concert in a stadium full of tens of thousands of people?
Now, on the flip side of that, not all headlines involving facial recognition technology are going to focus on the negative. An article posted (http://bit.ly/VSD-TELE) on The Telegraph (London, U.K.; www.telegraph.co.uk) on April 17 describes how a mentally ill man—also in China—was missing for over a year, but was found and reunited with his family with the help of China’s “massive and controversial network of facial recognition systems.” Other examples of facial recognition technology applications that most would deem “positive,” or at least beneficial to society, include advanced driver assistance systems (ADAS) that monitor driver fatigue and facial recognition for medical purposes. One recent example, the “CU” wearable device from FRAMOS (see more on pg. 5), uses 3D vision and algorithms that translate visual information into haptic and audio feedback. These glasses feature a “Smart Assistant” that provides information on facial recognition, text recognition, and object recognition for the visually impaired.