Vision systems rely on hardware and software to process images. They can also integrate with neural networks, which are modeled on the brain and nervous system, to build and convey images.
Artificial neural networks use computer algorithms, together with associated hardware and software, to simulate the biological neural networks in our brains. Using deep learning, they process image data through layers of computation whose outputs are combined to capture an image.
Optical neural networks (ONNs) expand on this simulation and use optical components to precisely perform as an artificial neural network does. Researchers at Cornell University (Ithaca, NY, USA) developed a multilayer nonlinear optical neural network, incorporating deep learning, to accelerate the processing of image data by extracting only relevant data from a scene and compressing it.
High-performance computing on such data ordinarily consumes a great deal of electrical energy. Optical neural networks may compute the data used to generate images with far less.
Their research points toward faster, more accurate image processing that requires less energy. This could translate into greatly improved image sensors for fields such as manufacturing robotics, human-machine interaction, and cancer research.
Optical Versus Digital Imaging
Digital imaging relies on image sensing, in which an object's characteristics, such as position and contour, are computed from a digitized image. Optical imaging, rather than relying on a digital image, uses optical systems that serve as encoders, extracting relevant data from the subject and compressing it into a small, finite amount adequate for processing into an image. Optical imaging is widely used in scientific, industrial, and technological applications.
Peter McMahon is an assistant professor of applied and engineering physics at Cornell Engineering. McMahon and his research team demonstrated that optical neural networks can compress images at ratios as high as 800-to-1. This equates to compressing a 1,600-pixel input to just two pixels, without sacrificing accuracy. Their research, Image Sensing with Multilayer, Nonlinear Optical Neural Networks (https://bit.ly/3r23I2D), was published in Nature Photonics.
“Our setup uses an optical neural network [ONN], where the light coming into the sensor is first processed through a series of matrix-vector multiplications that compresses data to the minimum size needed—in this case, four pixels,” says Tianyu Wang, a postdoctoral fellow in AI in science at Cornell and a researcher on McMahon's team. “This is similar to how human vision works: We notice and remember the key features of what we see, but not all the unimportant details. By discarding irrelevant or redundant information, an ONN can quickly sort out important information, yielding a compressed representation of the original data, which may have a higher signal-to-noise ratio per camera pixel.”
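The matrix-vector multiplications Wang describes can be illustrated in software. The sketch below is a hypothetical stand-in, not the team's actual encoder: the weight matrix is random rather than trained, and the sizes (a 1,600-element input compressed to four values) simply echo the figures quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a 40x40-pixel scene flattened into a
# 1,600-element intensity vector.
image = rng.random(1600)

# One encoder stage is a matrix-vector multiplication. These weights
# are random placeholders; in an ONN they would be trained values
# realized by optical components.
weights = rng.standard_normal((4, 1600)) / np.sqrt(1600)

# 1,600 inputs compressed to 4 outputs in a single multiplication.
compressed = weights @ image
print(compressed.shape)  # (4,)
```

In the optical setup this multiplication happens in light before any digitization, which is where the energy and bandwidth savings come from.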
The Research and Its Implications
The research team used an ONN image encoder that they built to classify cell images in flow cytometers (cytometers analyze characteristics of cells or particles in a lab setting). The ONN encoder applies linear and nonlinear layers to produce a compressed signal.
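The interplay of linear and nonlinear layers can be sketched as follows. This is a toy illustration under assumed layer sizes (1,600 inputs narrowed to 4 outputs through an intermediate layer); the tanh activation is an illustrative choice, not the optical nonlinearity the researchers used.

```python
import numpy as np

rng = np.random.default_rng(1)

def onn_encoder(x, layer_weights):
    """Toy multilayer encoder: each layer is a matrix-vector
    multiplication (linear step) followed by an elementwise
    nonlinearity, loosely standing in for an optical nonlinear layer."""
    for w in layer_weights:
        x = w @ x        # linear layer: matrix-vector multiplication
        x = np.tanh(x)   # nonlinear activation (illustrative choice)
    return x

# Hypothetical layer sizes: 1,600 -> 64 -> 4, with random
# placeholder weights rather than trained optical ones.
layers = [
    rng.standard_normal((64, 1600)) / 40,
    rng.standard_normal((4, 64)) / 8,
]
code = onn_encoder(rng.random(1600), layers)
print(code.shape)  # (4,)
```

Stacking nonlinear layers is what lets the encoder extract task-relevant features (such as those distinguishing cell types) rather than merely averaging pixels.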
They found that the ONN encoder performed well, indicating that optical neural networks could be useful in cancer research to sort through cells and quickly identify the cancerous ones.
Mandar Sohoni, a doctoral student on the Cornell research team, stated that probably 100 million cells would need to be processed to generate a meaningful sample of cells that would hold up to statistical analysis. “In this situation,” says Sohoni, “the test is very specific, and an optical neural network can be trained to allow the detector to process those cells very quickly, which will generate a larger, better dataset.”
The findings of their research also imply that optical neural networks may be useful in scenarios where low power consumption is needed to sense and process data to produce an image, Sohoni adds. One example would be image sensing on a satellite in space, which must be accomplished with limited power consumption.
The ability of an optical neural network to compress spatial information could be a boon to image processing. “By performing image compression in the optical domain, ONN image sensors can fundamentally bypass the optoelectronic bandwidth limit of high-resolution cameras, allowing for faster, more sensitive and more efficient machine vision systems,” the researchers wrote.