3D camera employs neuromorphic image sensor
Both first and second generation Kinect cameras from Microsoft (Redmond, WA, USA; www.microsoft.com) project light patterns onto an object. Reflected light is then sensed and processed to estimate scene depth at each pixel on the sensor. Rather than adopt this approach, a team led by Oliver Cossairt, assistant professor of electrical engineering and computer science at Northwestern's McCormick School of Engineering (Evanston, IL, USA; www.northwestern.edu) has designed a camera that works in a different manner.
The camera, called the Motion Contrast 3D (MC3D), only scans parts of the scene that have changed. The prototype uses a DVS128 camera module from iniLabs (Zurich, Switzerland; www.inilabs.com), which is based on the DVS128, a 128 x 128-pixel vision sensor developed by Professor Tobi Delbruck and his colleagues at the Institute of Neuroinformatics (Zurich, Switzerland; www.ini.uzh.ch).
The DVS is an asynchronous device that only outputs events when a change in brightness occurs at a pixel. This reduces both the power that must be supplied to the device and the data bandwidth required. The DVS detects brightness changes in real time by incorporating an active logarithmic front end, followed by a switched-capacitor differentiator and a pair of comparators. Cameras such as the DVS128 can be built with little on-board memory and low-cost interfaces (see "Neuromorphic vision sensors emulate the human retina," Vision Systems Design, October 2013; http://bit.ly/1cdtRzJ).
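The pixel behavior described above can be modeled in software. Below is a minimal sketch of how such an event stream might be generated from a sequence of intensity frames; this is an illustration of the principle, not the actual DVS circuit or the iniLabs API, and the 0.15 log-contrast threshold is an assumed value:

```python
import numpy as np

def dvs_events(frames, threshold=0.15):
    """Emit (t, x, y, polarity) events when the log intensity at a pixel
    changes by more than `threshold` relative to its last event."""
    eps = 1e-6                           # avoid log(0)
    ref = np.log(frames[0] + eps)        # per-pixel reference log intensity
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        ys, xs = np.where(diff > threshold)       # ON events (brighter)
        events += [(t, int(x), int(y), +1) for x, y in zip(xs, ys)]
        ys, xs = np.where(diff < -threshold)      # OFF events (darker)
        events += [(t, int(x), int(y), -1) for x, y in zip(xs, ys)]
        fired = np.abs(diff) > threshold
        ref[fired] = log_i[fired]                 # reset reference where fired
    return events
```

Note that a static scene produces no events at all, which is the source of the power and bandwidth savings mentioned above.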
The MC3D consists of a laser line scanner that is swept relative to the DVS sensor. The event timing from the DVS is used to determine scan angle, establishing projector-camera correspondence for each pixel. This allows laser scanning resolution with single-shot speed, even in the presence of strong ambient illumination, inter-reflections, and reflective surfaces. The approach will allow 3D vision systems to be deployed in challenging applications requiring limited power and bandwidth.
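As a rough illustration of how event timing establishes projector-camera correspondence, the sketch below assumes a laser sweeping at a known angular velocity and a simple pinhole camera geometry; the focal length, baseline, and sweep parameters are hypothetical values, not those of the MC3D prototype:

```python
import math

def depth_from_event(t_event, x_pix, f=200.0, baseline=0.1,
                     omega=math.radians(60), theta0=math.radians(-30)):
    """Triangulate depth from an event timestamp and its pixel column.

    theta = theta0 + omega * t         -- laser scan angle at the event time
    Z = baseline / (x/f + tan(theta))  -- intersection of the camera ray
                                          (pixel column x, focal length f)
                                          with the swept laser plane
    """
    theta = theta0 + omega * t_event
    return baseline / (x_pix / f + math.tan(theta))
```

Because each pixel fires exactly when the laser line crosses it, the timestamp alone recovers the scan angle, which is how the design achieves laser-scan resolution at single-shot speed.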
Cossairt says his camera will be suitable for use in such applications as robotics, bioinformatics, augmented reality and factory automation. Already, his group has received a Google (Mountain View, CA, USA; www.google.com) Faculty Research Award to integrate the 3D scanning technology onto an autonomous vehicle platform. The research is supported by the Office of Naval Research and the U.S. Department of Energy (see "MC3D Motion Contrast 3D Scanning" http://bit.ly/1K8ammB for more information).
Delphi self-driving car completes coast to coast journey
On March 31, a self-driving car from Delphi Automotive PLC (Gillingham, UK; www.delphi.com) completed a journey that began near the Golden Gate Bridge in San Francisco and ended in New York. The autonomous vehicle is based on an SQ5 from Audi (Ingolstadt, Germany; www.audi.com) equipped with Delphi's self-driving car technology.
The car's vision system comprises four short-range radars and six LIDAR sensors. Delphi's electronically-scanning radars (ESR) can scan 650 ft ahead of a moving vehicle and up to 442 million objects a day. In addition, the car has one advanced driver assistance system camera from Mobileye (Jerusalem, Israel; www.mobileye.com), one high-resolution color camera, and one IR camera. The car also uses a multi-domain controller, a localization system, V2V/V2X DSRC wireless communication technology, and intelligent software algorithms that enable the vehicle to make complex, human-like decisions. These include functions such as traffic jam assist, automated highway pilot with lane change, automated urban pilot, and automated parking.
Delphi's active safety technologies enable the vehicle to make complex decisions such as stopping and then proceeding at a four-way stop, timing a highway merge, or calculating the safest maneuver around a bicyclist on a city street.
Vision assists in artificial hip measurement system
Surgeons performing hip replacements must accurately determine the length of a patient's leg before and after surgery. If, for example, a leg is not measured precisely before an artificial hip is implanted, the operation can result in the patient's leg being longer or shorter than it was originally. In the past, leg measurements were performed by the surgeon using a tape measure both before and after the operation, a method that can lead to errors of up to 2cm.
To reduce this measurement error, researchers at the Fraunhofer Institute for Machine Tools and Forming Technology IWU (Chemnitz, Germany; www.iwu.fraunhofer.de/en.html) have developed a technique that will enable orthopedic surgeons to measure their patients' leg lengths more precisely. To capture such a measurement, a patient lies in a prone position and a physician attaches a small plastic box containing two LED lights to the patient's shin. The physician then lifts the patient's leg up by the heel, and with that motion, the two lights trace an arc that is recorded by a VLG-22C VisiLine GigE camera from Baumer (Radeberg, Germany; www.baumer.com) that is positioned 1.5m to the side of the patient.
This measurement is taken both before and after the hip replacement procedure, and the light box remains on the leg during the operation. If the leg becomes shorter or longer, the arc traced by the LEDs will change. Images captured by the camera are then processed to compare the two arcs and determine whether the leg is the same length it was before the procedure. If necessary, the doctor can then make adjustments to the artificial hip.
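The article does not detail the Fraunhofer algorithm, but one plausible way to compare the two arcs is to fit a circle to the recorded LED positions and compare the fitted radii, since lengthening or shortening the leg changes the radius of the arc swept about the hip. The sketch below uses a standard least-squares (Kasa) circle fit:

```python
import math
import numpy as np

def fit_arc_radius(points):
    """Kasa least-squares circle fit to Nx2 points; returns the fitted radius.

    Solves 2*cx*x + 2*cy*y + c = x**2 + y**2 in the least-squares sense,
    where (cx, cy) is the circle center and r**2 = c + cx**2 + cy**2.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return math.sqrt(c + cx**2 + cy**2)

def leg_length_change(arc_before, arc_after):
    """Change in fitted arc radius between pre- and post-op recordings."""
    return fit_arc_radius(arc_after) - fit_arc_radius(arc_before)
```

A circle fit is robust here because the physician only sweeps a partial arc; the fit recovers the radius without needing the full circle.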
"The margin of error in our process is less than 1cm," said Dr. Ronny Grunert, a researcher at IWU. "Eventually we would like to reduce this to 5mm." Initial testing of the measurement system prototype has been met with success at the Leipzig University (Leipzig, Germany; www.zv.uni-leipzig.de/en/) hospital, and there are currently plans in place for a clinical trial later in 2015, with hopes that the system could be commercially available in two years.
The researchers also optimized the hip implant itself, replacing prefabricated implants with a modular system so a physician can select the correct artificial hip stem and neck for each patient. Custom screw connections are then used to attach the individual parts to each other, and the combined unit is implanted in the hip. With this method, the physician measures leg length and, if necessary, can separate the implant's components to exchange them for a better-fitting part.
Force-limited robot eyes factory automation
Many manufacturers are now introducing next-generation robots that employ torque sensors and vision systems to allow them to work more closely with human beings. These so-called force-limited robots promise to increase the productivity of automated manufacturing processes (see "Robots increase manufacturing productivity," Vision Systems Design, April 2015; http://bit.ly/1Dc4QO1). One such company, Rethink Robotics (Boston, MA, USA; www.rethinkrobotics.com), has unveiled its latest force-limited robot, dubbed Sawyer, a single-arm, vision-guided robot designed to execute tasks such as machine tending and circuit board testing.
Sawyer is equipped with force sensors embedded at each joint, along with a power and force-limited compliant arm. This allows for compliant motion control and enables Sawyer to "feel" its way into fixtures or machines, even when parts or positions vary. With this feature, Sawyer can work alongside humans without the risk of bodily injury in semi-structured environments.
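Compliant behavior of this kind is commonly achieved with impedance control plus torque saturation. The single-joint sketch below illustrates the general idea only, not Rethink Robotics' actual controller; the gains and torque limit are arbitrary assumed values:

```python
def force_limited_command(q, dq, q_des, stiffness=30.0, damping=4.0,
                          tau_limit=6.0):
    """Spring-damper (impedance) torque toward q_des, clamped to a safe limit.

    q, dq : current joint angle (rad) and velocity (rad/s)
    q_des : target joint angle (rad)

    The clamp is what keeps contact forces low enough that a part, a
    fixture, or a person can safely stop the arm short of its target.
    """
    tau = stiffness * (q_des - q) - damping * dq
    return max(-tau_limit, min(tau_limit, tau))
```

Because the commanded torque saturates, large position errors (e.g. a misaligned fixture blocking the arm) produce only a bounded push, letting the arm "feel" its way in rather than force the motion.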
For its vision system, Sawyer features a camera located above its "head" and a Cognex (Natick, MA, USA; www.cognex.com) smart camera with a built-in light source in its wrist. The robot weighs 42 lbs and features an 8.8lb payload, 7 degrees of freedom and a 1m reach. Like Rethink Robotics' previous robot, Baxter, the Sawyer force-limited robot can dynamically adapt to conditions on the plant floor and integrate into existing work cells. Already, Jabil (North Billerica, MA, USA; www.jabil.com) has partnered with Rethink Robotics as an early adopter and field tester of the $29,000 Sawyer robot.