Before performing neurosurgery, images of the brain are acquired using computed tomography (CT) and magnetic resonance imaging (MRI), allowing the neurosurgeon to plan specific procedures preoperatively. With this anatomical data, it is then possible to tailor a minimally invasive surgical procedure that reduces the size of the craniotomy and lowers patient trauma.
To perform such surgical procedures, ports - tube-shaped shafts - are used to provide an opening into a specific area of the brain. These allow the neurosurgeon to surgically access deep-seated brain structures without disturbing surrounding brain tissue. During such a procedure, the physician must also be able to visualize where instruments such as scissors and graspers are within the brain tissue.
"In the past," says Tim Reedman, Product Line Director of Commercial Systems with MDA (Brampton, ON, Canada; www.mdacorporation.com), "this task was performed by mounting a camera system above the field of view of interest. This camera system was then manually moved during the surgical procedure to visualize the position of the surgical instruments within the port."
Numerous times during surgery, the physician would have to reposition the camera system to look down the port to the surgical site. Not only was this a laborious task, but it also lengthened the time needed to complete the operation.
To relieve the surgeon of this task, MDA collaborated with Synaptive Medical (Toronto, ON, Canada; www.synaptivemedical.com) to develop a vision-guided imaging system that automatically aligns the camera to the port as it moves during surgery. MDA's robotic control and vision tracking algorithms were used to realize Synaptive's concept for the product.
To align the camera with the port, the port is first fitted with an imaging marker, similar to a QR code, that was developed by MDA Corporation for space-based robotics applications. To track the x, y, z, roll, pitch and yaw pose of the marker as it moves, an image of the marker must first be captured and compared to a reference image whose coordinates in 3D space are known. This, in essence, is similar to the problem faced by Professor Vineet Kamat of the LIVE Robotics Group at the University of Michigan (Ann Arbor, MI, USA; live.engin.umich.edu), who recently announced a machine vision system known as SmartDig that determines the x, y, z, roll, pitch and yaw of an excavator in 3D space (see "Vision system helps avoid excavation accidents," Vision Systems Design, March 2015, http://bit.ly/1yt4pun).
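MDA's actual marker-tracking algorithm is proprietary, but the underlying idea of recovering a six-degree-of-freedom pose by comparing observed marker points against a reference whose 3D coordinates are known can be sketched with the classic Kabsch (least-squares rigid alignment) method. The function names, the planar square marker, and the roll-pitch-yaw convention below are illustrative assumptions, not details of MDA's implementation:

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Recover the rotation R and translation t that best map
    marker-frame points onto camera-frame points (Kabsch algorithm).
    model_pts, observed_pts: (N, 3) arrays of corresponding 3D points."""
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = mu_o - R @ mu_m
    return R, t

def rpy_from_rotation(R):
    """Extract roll, pitch, yaw (x-y-z convention) from a rotation matrix."""
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw
```

In practice the "observed" 3D points would themselves come from detecting the marker's corners in the camera image and back-projecting them through the calibrated camera model; the sketch above covers only the alignment step that turns point correspondences into the x, y, z, roll, pitch and yaw values the tracker reports.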
In the system developed by MDA, a GigE camera from Point Grey (Richmond, BC, Canada; www.ptgrey.com) is mounted onto the end-effector of a UR5 six-axis robot from Universal Robots (Odense S, Denmark; www.universal-robots.com). Since the UR5 is force-limited, it does not require safety guards and can safely operate alongside human beings (see "Collaborative robots increase manufacturing productivity," Vision Systems Design, April 2015). To illuminate the field of view, the camera is fitted with an LED ring light from Advanced illumination (Rochester, VT, USA; www.advancedillumination.com).
Images of the target on the port are then transferred to a host computer over the GigE interface, and the marker's pose in x, y, z, roll, pitch and yaw is computed by software developed by MDA. A visual servo algorithm (also developed by MDA) running on the UR5's controller is then used to dynamically position the camera system over the port as it moves during surgery.
MDA would like readers to note that the concepts presented in this article are prototypes of a medical device and the specific illustrations have not been cleared for sale by regulatory authorities.