Vision-based cutting tool compensates for human error

By marrying a vision system with an inexpensive actuation system, students at MIT (Cambridge, MA, USA) have developed a handheld cutting tool that can automatically adjust its position to compensate for any positioning errors caused by a user.

"You load the system up with a digital plan that you would like it to follow, and then you are only responsible for getting it to within a quarter-inch or so of that plan," says Alec Rivers, a PhD student in the Department of Electrical Engineering and Computer Science (EECS). "The system then adjusts the position of the cutting bit within the tool to keep it to the plan."

For the system to follow any given plan, it must first know exactly where it is on the material to be cut. For it to do so, the user first places marker tape containing a black and white pattern onto the material. Then, the system is moved over the material while an on-board camera films the surface and stitches together the video frames it captures into a 2-D map.
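As an illustration only (the paper's actual tracking pipeline is not detailed here), localizing the tool against the stitched 2-D map can be thought of as finding where the current camera frame best matches the map. The sketch below does this with a brute-force sum-of-squared-differences search in NumPy; the function and array names are hypothetical:

```python
import numpy as np

def locate_frame(map_img, frame):
    """Find the (row, col) offset in a stitched 2-D map where a camera
    frame matches best, by exhaustive sum-of-squared-differences search.
    Returns the top-left corner of the best-matching window."""
    H, W = map_img.shape
    h, w = frame.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = np.sum((map_img[r:r+h, c:c+w] - frame) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Toy demo: a random "map" and a patch cut from it.
rng = np.random.default_rng(0)
world = rng.random((40, 40))
patch = world[12:20, 25:33]
print(locate_frame(world, patch))  # (12, 25)
```

A real system would use the high-contrast marker tape and a far faster matching scheme, but the idea of registering each frame against the pre-built map is the same.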

The design of the object to be cut is then loaded onto a computer and registered to the 2-D map of the material, after which the tool is positioned on the material.

To cut the material, the user only needs to move the tool in a rough approximation of the desired cutting path. The on-board camera tracks the tool's location, while a linear actuation system adjusts the position of the cutting bit to correct for any deviation from the planned path.
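The correction step described above can be sketched as a simple loop: find the closest point on the planned path to the tracked tool position, and command the actuator to offset the bit by that amount, provided the user has stayed within the actuator's reach (roughly the quarter-inch tolerance Rivers mentions). This is an assumed, simplified model, not the authors' controller; all names and the `reach` parameter are illustrative:

```python
import math

def correct(tool_xy, plan, reach=0.25):
    """Given the tracked tool position and a plan polyline (a list of
    (x, y) vertices), return the actuator offset that moves the bit to
    the nearest point on the plan, or None if the plan lies beyond the
    actuator's reach."""
    tx, ty = tool_xy
    best_d, best_pt = float("inf"), None
    for (x0, y0), (x1, y1) in zip(plan, plan[1:]):
        # Project the tool position onto this plan segment.
        dx, dy = x1 - x0, y1 - y0
        seg2 = dx * dx + dy * dy
        t = 0.0 if seg2 == 0 else max(0.0, min(1.0, ((tx - x0) * dx + (ty - y0) * dy) / seg2))
        px, py = x0 + t * dx, y0 + t * dy
        d = math.hypot(tx - px, ty - py)
        if d < best_d:
            best_d, best_pt = d, (px, py)
    if best_d > reach:
        return None  # user has drifted beyond what the actuator can absorb
    return (best_pt[0] - tx, best_pt[1] - ty)

# Tool sits 0.1 units below a horizontal plan line:
# the actuator nudges the bit up by 0.1.
print(correct((0.5, -0.1), [(0.0, 0.0), (1.0, 0.0)]))  # (0.0, 0.1)
```

In practice the search would also have to respect cutting direction and material already removed, but the closest-point correction captures why the user only needs a rough approximation of the path.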

The system also helps the user follow the cutting plan by showing it on a screen mounted on a frame on the tool.

The researchers presented their design in a paper at this month's Siggraph conference (Los Angeles, CA, USA).

Vision Systems Design has covered many notable developments from MIT over the past year. Here are a few that you might find of interest.

1. MIT researchers develop new programming language for image processing

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL; Cambridge, MA, USA) aim to make writing image-processing algorithms easier with a new programming language called Halide.

2. Image processing technique magnifies movement

Researchers at MIT (Cambridge, MA, USA) and Quanta Research Cambridge (Cambridge, MA, USA) have developed a novel image processing technique to reveal and display temporal variations in videos that are difficult or impossible to see with the naked eye.

3. Metal flecks make 3-D imaging magic

By combining a novel sensor with a computer-vision system, researchers in the Massachusetts Institute of Technology's (MIT's) Department of Brain and Cognitive Sciences (Cambridge, MA, USA) have created a portable imaging system that can achieve resolutions previously possible only with large and expensive lab equipment.

4. Time of flight provides cheaper technology option

Researchers led by MIT (Cambridge, MA, USA) electrical engineering professor Vivek Goyal have developed a new time-of-flight (TOF) sensor that can acquire 3-D depth maps of scenes with high spatial resolution using just a single photodetector and no scanning components.

5. Algorithm cuts down MRI scan time

The time taken to perform a magnetic-resonance-imaging (MRI) scan could be cut from 45 minutes down to just 15 minutes, thanks to a new algorithm developed at the Massachusetts Institute of Technology (MIT; Cambridge, MA, USA).

-- Dave Wilson, Senior Editor, Vision Systems Design
