March 2016 snapshots: Life sciences imaging, Google Project Tango

March 10, 2016
March's snapshot topics include life sciences imaging, Google Project Tango, and 3D vision for intensive care units

Handheld microscope identifies cancer cells

A handheld, miniature microscope being developed by researchers and engineers could allow surgeons to visualize cells in the operating room or in a physician’s office. The device addresses the need for miniature optical-sectioning microscopes that enable in vivo interrogation of tissue as a real-time, non-invasive alternative to histopathology.

Developed by the University of Washington (Seattle, WA, USA; www.washington.edu) mechanical engineering department, along with Memorial Sloan Kettering Cancer Center (New York, NY, USA; www.mskcc.org), Stanford University (Stanford, CA, USA; www.stanford.edu), and the Barrow Neurological Institute (Phoenix, AZ, USA; www.barrowneuro.org), the device could have a transformative impact on the early detection of cancer and on the guidance of tumor-resection procedures.

“Surgeons do not have a good way of knowing when they have fully removed a tumor,” says Jonathan Liu, UW assistant professor of mechanical engineering. “They use their sense of sight, their sense of touch and pre-operative images of the brain. Visualizing cells during surgery will help accurately differentiate between tumor and normal tissues and improve patient outcomes.”

According to Milind Rajadhyaksha, associate faculty member in the dermatology service at the Memorial Sloan Kettering Cancer Center, “The microscope technologies that have been developed over the last two decades are expensive and still fairly large, about the size of a hair dryer or a small dental x-ray machine. Thus, there is a requirement to create miniaturized microscopes.”

Roughly the size of a pen, the handheld microscope combines several technologies to deliver images at faster speeds than existing devices. It is a miniature line-scanned, dual-axis confocal (DAC) microscope with a 12mm-diameter distal tip. The dual-axis architecture has demonstrated an advantage over the conventional single-axis confocal configuration in reducing background noise from both out-of-focus and scattered light.

Additionally, the use of line scanning enables frame rates of 16 fps, and faster rates are possible. The microscope uses micro-electro-mechanical systems (MEMS) mirrors to direct an optical beam that scans the tissue line by line. With the DAC approach, the device can capture details up to 0.5mm beneath the tissue surface, where some types of cancerous cells originate.
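To make the line-scanned acquisition model concrete, here is a minimal sketch, assuming a hypothetical scan_line placeholder and an assumed frame geometry (illustrative Python, not the team’s actual control code), of how a 2D frame is assembled row by row from successive MEMS-steered line acquisitions:

```python
import numpy as np

LINES_PER_FRAME = 400    # assumed number of MEMS mirror steps per frame
PIXELS_PER_LINE = 1024   # assumed pixels per scanned line

def scan_line(line_index: int) -> np.ndarray:
    """Placeholder for one MEMS-steered line acquisition.

    In the instrument, the MEMS mirror steps the focused line
    across the tissue; here we return synthetic noise.
    """
    return np.random.rand(PIXELS_PER_LINE)

def acquire_frame() -> np.ndarray:
    """Assemble one 2D image, row by row, from successive line scans."""
    frame = np.empty((LINES_PER_FRAME, PIXELS_PER_LINE))
    for i in range(LINES_PER_FRAME):
        frame[i, :] = scan_line(i)
    return frame

# At 16 fps, all line acquisitions for a frame must finish within 1/16 s.
print(acquire_frame().shape)  # (400, 1024)
```

Because each mirror step captures an entire line rather than a single point, far fewer steps are needed per frame than in a point-scanned design, which is what makes the 16 fps rate attainable.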

The design actively aligns the illumination and collection beams in the microscope through a pair of rotatable alignment mirrors. A custom objective lens with a small form factor for in vivo clinical use enables the device to achieve an optical-sectioning thickness of 2.0μm and a lateral resolution of 1.1μm.

For a preliminary assessment of the optical design, and to evaluate the performance of the microscope, an ORCA-Flash4.0 camera from Hamamatsu (Shizuoka, Japan; www.hamamatsu.com) with a 2560 x 2160-pixel, 16-bit sCMOS detector was used to mimic the 1D linear detector that will be incorporated in the final design.

A thin rectangular region of interest within the sCMOS array serves as a digital slit, enabling the acquisition of images from a 2048 x 8-pixel region at the center of the camera. Since the pixel pitch is 6.5μm, the team binned the center three rows of pixels to create a digital slit of thickness 6.5 × 3 = 19.5μm, which approximately matches the diffraction-limited FWHM spot size of ~15μm expected at the detector.
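A minimal sketch of how such a digital slit might be emulated in software, assuming NumPy and a synthetic ROI in place of the real camera read (this is illustrative code, not the team’s acquisition software):

```python
import numpy as np

PIXEL_PITCH_UM = 6.5   # sCMOS pixel spacing stated above
SLIT_ROWS = 3          # center rows binned into the digital slit

def digital_slit_line(roi_frame: np.ndarray) -> np.ndarray:
    """Bin the center rows of a 2048 x 8 ROI into a single scan line.

    roi_frame: 16-bit ROI read from the camera, shape (8, 2048).
    Returns one 2048-pixel line emulating a 6.5 x 3 = 19.5 um slit.
    """
    center = roi_frame.shape[0] // 2
    rows = roi_frame[center - SLIT_ROWS // 2:center + SLIT_ROWS // 2 + 1, :]
    # NumPy accumulates the sum in a wider integer type, so the
    # three 16-bit rows cannot overflow.
    return rows.sum(axis=0)

# Synthetic ROI standing in for a real camera read.
roi = np.random.randint(0, 2**16, size=(8, 2048), dtype=np.uint16)
line = digital_slit_line(roi)
print(line.shape, PIXEL_PITCH_UM * SLIT_ROWS)  # (2048,) 19.5
```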

Researchers expect to begin testing the device as a cancer-screening tool in clinical settings next year, with the hope that it can be introduced into surgeries within the next two to four years.

“In brain tumor surgery, there are often cells left behind that are invisible to the neurosurgeon. This device will let these cells be identified and allow a surgeon to operate accordingly,” says project collaborator Nader Sanai, professor of neurosurgery at the Barrow Neurological Institute. The research was funded by the National Institutes of Health through its National Institute of Dental and Craniofacial Research and National Cancer Institute.

Lenovo to produce first Project Tango device

Google (Mountain View, CA, USA; www.google.com) and Lenovo (Morrisville, NC, USA; www.lenovo.com) will collaborate on the development of the first Project Tango-enabled smartphone. Set to become available this summer, the smartphone will be powered by a Qualcomm (San Diego, CA, USA; www.qualcomm.com) Snapdragon processor and will feature a display that overlays digital information and objects onto real-world images. Lenovo, Google, and Qualcomm Technologies are collaborating to optimize the software and hardware.

Google’s Project Tango uses advanced computer vision, depth sensing, and motion tracking to allow developers to create virtual and augmented environments. The Tango device features OV4682 and OV7251 CMOS image sensors from OmniVision Technologies (Santa Clara, CA, USA; www.ovt.com). The OV4682 records both RGB and IR information, the latter of which is used for depth analysis. The 1/3in, 4MPixel color sensor features a 2μm pixel size and frame rates of 90fps at full resolution and 330fps at 672 x 380 pixels.

The OV7251 global shutter image sensor handles the device’s motion tracking and orientation. The 1/7.5in VGA sensor features a 3μm pixel size and a frame rate of 100fps. Both sensors feature programmable controls for frame rate, mirror and flip, cropping, and windowing.
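The numbers quoted above imply a trade-off between resolution and speed. The sketch below (plain Python arithmetic; the 2688 x 1520 full-resolution format is an assumption inferred from the stated 4MPixel figure) compares pixel throughput in the two OV4682 readout modes:

```python
# Pixel-throughput comparison for the two OV4682 readout modes
# quoted above. The 2688 x 1520 full-resolution format is an
# assumption inferred from the stated 4MPixel figure.

full_px = 2688 * 1520              # pixels per full-resolution frame
full_rate = full_px * 90           # pixels/s at 90 fps

windowed_px = 672 * 380            # pixels per windowed frame
windowed_rate = windowed_px * 330  # pixels/s at 330 fps

print(f"full resolution: {full_rate / 1e6:.0f} Mpixel/s")
print(f"windowed:        {windowed_rate / 1e6:.0f} Mpixel/s")
```

Windowing raises the frame rate by reading far fewer pixels per frame; total pixel throughput actually drops, which is what makes the much higher 330fps mode feasible.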

To process image data, a Myriad 1 computer vision processor from Movidius (San Mateo, CA, USA; www.movidius.com) has been integrated into the smartphone.

In a related announcement, Movidius is also teaming with Google to accelerate the adoption of deep learning within mobile devices. Along with the Movidius vision processor, Google will source Movidius’ software development environment, and in turn will contribute to Movidius’ neural network technology.

3D camera provides vision for ICUs

Monitoring all of the processes and medical devices within an intensive care unit (ICU) can be difficult. Because of this, Fraunhofer (Fraunhofer-Gesellschaft, Munich, Germany; www.fraunhofer.de) researchers have developed a vision-aided smart “proxemic monitor” that optimizes and centralizes the vital processes within an ICU.

“It’s not easy to keep track of everything in an ICU during hectic situations,” says Paul Chojecki, a scientist at the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institute (HHI, Berlin, Germany; www.hhi.fraunhofer.de).

The proxemic monitor shows physicians and nurses the most important information about their patients’ vital signs on a screen that interfaces with all of the medical equipment in the room and with the hospital’s information systems. The system can be controlled by contact-free gestures and voice commands.

A Microsoft (Redmond, WA, USA; www.microsoft.com) Kinect 3D camera monitors the area in front of the screen, while two webcams and a microphone scan the room. Using the video data, the system’s software analyzes where people are, how far they are from the screen, and what movements they are making; depending on this distance, the display and functionality of the monitor change.

“Our monitor distinguishes between near, medium, and far distances. The cameras cover a maximum distance of 4m,” Chojecki explains. From the medium distance, the cursor can be controlled with arm movements, and commands or short reports can be input by voice. With pre-programmed gestures, a video call can be started for discussions with other physicians within or outside of the hospital.
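A minimal sketch of the distance-to-zone mapping described above (hypothetical Python; the zone boundaries are illustrative assumptions, as the article only states that the cameras cover a maximum of 4m):

```python
from enum import Enum

class Zone(Enum):
    NEAR = "near"                  # e.g. detailed view, close interaction
    MEDIUM = "medium"              # gesture cursor control, voice input
    FAR = "far"                    # overview display only
    OUT_OF_RANGE = "out_of_range"  # beyond camera coverage

# Zone boundaries in meters: illustrative assumptions, not the
# thresholds used by the Fraunhofer HHI system.
NEAR_MAX_M = 1.0
MEDIUM_MAX_M = 2.5
SENSOR_MAX_M = 4.0  # stated maximum camera coverage

def classify_distance(distance_m: float) -> Zone:
    """Map a measured user distance to an interaction zone."""
    if distance_m > SENSOR_MAX_M:
        return Zone.OUT_OF_RANGE
    if distance_m <= NEAR_MAX_M:
        return Zone.NEAR
    if distance_m <= MEDIUM_MAX_M:
        return Zone.MEDIUM
    return Zone.FAR

print(classify_distance(1.8))  # Zone.MEDIUM -> gesture and voice control
```

The display and available controls would then be switched on the resulting zone, which is the “proxemic” behavior the researchers describe.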

“Our software records distances and movements of the user in a contactless manner, interprets them and converts them into commands for operating systems or machines.”

The proxemic monitor evaluates the data of the connected medical devices based on the smart alarm design of Fraunhofer’s partner, the Medical Engineering Department of Aachen University Hospital (Aachen, Germany; www.ukaachen.de), thus preventing the false alarms that Chojecki says intensive care physicians consider a problem. Additionally, because physicians and other healthcare providers do not need to touch devices directly, the risk of spreading pathogens decreases.

The device’s user interface is web-based, making it suitable for use on mobile devices. The scientists will demonstrate the proxemic monitor at CeBIT 2016 this March and will conduct a practical test in cooperation with Uniklinik RWTH Aachen (Aachen, Germany; www.ukaachen.de) later this year.
