Automated vision system speeds behavioral analysis research

July 29, 2011

At the Tufts Center for Regenerative and Developmental Biology (Boston, MA, USA), Dr. Michael Levin and his colleagues are using quantitative automated behavioral analysis techniques to study living animals. Since simple animals such as flatworms share many of the same behavioral pathways and neurotransmitters as human beings, studies of their cognitive ability under the influence of various drugs will lead to a better understanding of memory storage and transmission in tissue.

“Observing test subject behavior manually during research experiments can be time-consuming and expensive since manpower is limited and human observation is inherently subjective,” says Levin. “Also, the small sample sizes used in manual training methods have limited general consensus on such learning abilities and made it difficult for other researchers to replicate results.”

To address these limitations, the Tufts Center partnered with Boston Engineering (Waltham, MA, USA) to develop an automated chamber to study the abilities of living things to learn from their environment. Light stimuli are used to train worms and tadpoles to perform specific tasks, and the animals are then tested for recall in a variety of molecular-genetic and pharmacological experiments.

The chamber consists of 12 cells arranged in a grid, each housing a disposable Petri dish in which a worm is placed (see figure at top). Each cell is fitted with four visible LEDs, one illuminating each quadrant of the dish. In a typical experiment, worms are trained to stay in, or avoid, specific parts of the dish or to move at specific rates. Worms that successfully perform the task are rewarded with lowered light levels, since worms inherently prefer the dark.
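As a rough illustration of this arrangement, the chamber state might be modeled as in the sketch below. The class, field names, and 0-100 brightness scale are illustrative assumptions; the article does not describe the actual control software.

```python
# A minimal sketch of the chamber state: 12 cells, each with four
# independently dimmable visible LEDs (one per Petri-dish quadrant).
# Names and the 0-100 brightness scale are assumptions for illustration.
from dataclasses import dataclass, field
from typing import List

NUM_CELLS = 12      # cells arranged in a grid, one disposable Petri dish each
NUM_QUADRANTS = 4   # one visible LED per quadrant of each dish

@dataclass
class Cell:
    # Brightness of the four visible LEDs, 0 (dark) to 100 (full).
    led_levels: List[int] = field(default_factory=lambda: [100] * NUM_QUADRANTS)
    # Quadrant the worm is being trained to stay in (or avoid).
    target_quadrant: int = 0

chamber = [Cell() for _ in range(NUM_CELLS)]
```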

Each cell is also illuminated by infrared LEDs to track the motion of each worm. This invisible lighting allows an In-Sight Micro 1400 vision system from Cognex (Natick, MA, USA) to image each cell without affecting the worm’s behavior. Electrodes placed in the dish allow researchers to stimulate the animals with weak electrical signals.

In operation, the vision system first records the position of the worm. To teach the animal to move to a lit quadrant, for example, a single quadrant may be illuminated, and the worm's position is then recorded again. Based on the position, speed, and direction of the worm's movement, the animal is either rewarded by dimming the LED if it has moved to the correct quadrant or punished by increasing the brightness if it has not. This cycle of measurement and response continues until the animal has learned the task.
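The decision made in each cycle can be sketched as a small function. The brightness values and names below are illustrative assumptions rather than details of the actual system, and speed and direction are ignored for brevity.

```python
# A minimal sketch of one observation/reward decision, assuming a 0-100
# brightness scale; the real controller and its thresholds are not described
# in the article.
DIM, BRIGHT = 20, 100   # assumed reward / punishment brightness levels

def next_led_level(worm_quadrant: int, target_quadrant: int) -> int:
    """Dim the light if the worm reached the target quadrant, else brighten it."""
    return DIM if worm_quadrant == target_quadrant else BRIGHT

# Example cycle: the worm is being trained toward quadrant 2.
print(next_led_level(worm_quadrant=2, target_quadrant=2))   # 20  -> reward (lower light)
print(next_led_level(worm_quadrant=0, target_quadrant=2))   # 100 -> no reward
```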

“Water touching the sides of the Petri dish creates a meniscus that rises and falls, imparting shadows over the image that are difficult to distinguish from the worms,” says Levin. “To differentiate worms from randomly changing water shadows, images of empty quadrants are captured every 20 s by tracking the worm’s position and capturing quadrants while they are unoccupied. When the system captures an image of the worm in a quadrant, it subtracts the most recent image of the same unoccupied quadrant to remove the shadows and determine the worm’s position.”
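The background-subtraction step can be illustrated in a few lines of NumPy. This is only a sketch of the idea; the real processing runs in the Cognex In-Sight tools, and the image size and function names here are assumptions.

```python
# A minimal sketch of shadow removal by differencing against the most recent
# image of the same quadrant captured while it was unoccupied (refreshed
# roughly every 20 s in the system described here).
import numpy as np

reference = {q: np.zeros((120, 160), dtype=np.uint8) for q in range(4)}

def update_reference(quadrant: int, empty_image: np.ndarray) -> None:
    """Store a fresh image of an unoccupied quadrant for later subtraction."""
    reference[quadrant] = empty_image.copy()

def subtract_background(quadrant: int, image: np.ndarray) -> np.ndarray:
    """Suppress meniscus shadows by subtracting the empty-quadrant image."""
    diff = np.abs(image.astype(np.int16) - reference[quadrant].astype(np.int16))
    return diff.astype(np.uint8)
```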

To identify possible positions of the worm, a histogram tool first groups the lightest-colored pixels. Morphological filtering is then applied to connect white pixels in close proximity, and a blob detection tool finds the three largest groups of light-colored pixels and sorts them by size. In almost every case, the largest object is the worm; however, multiple objects are tracked to address the rare possibility that one or more shadows may be larger than the worm (see figure at bottom).
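An equivalent chain can be sketched with OpenCV as a stand-in for the In-Sight histogram, morphology, and blob tools; the threshold value and kernel size below are illustrative assumptions.

```python
# A minimal sketch of the segmentation chain: threshold the lightest pixels,
# connect nearby white pixels with morphological closing, then keep the three
# largest blobs. Threshold and kernel values are assumptions for illustration.
import cv2
import numpy as np

def largest_blobs(diff_image: np.ndarray, n: int = 3):
    """Return (area, centroid) for the n largest bright regions, largest first."""
    _, binary = cv2.threshold(diff_image, 40, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    count, labels, stats, centroids = cv2.connectedComponentsWithStats(closed)
    blobs = [(int(stats[i, cv2.CC_STAT_AREA]), tuple(centroids[i]))
             for i in range(1, count)]           # label 0 is the background
    blobs.sort(reverse=True)
    # In almost every case blobs[0] is the worm; the runners-up are kept in
    # case a water shadow happens to be larger.
    return blobs[:n]
```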

Because the system is automated, 12 experiments can be performed continuously without human intervention. As a result, larger sample sizes can be achieved, and experiments can be performed over longer periods. “Millions of observation and training cycles can be performed, a level of training beyond what can realistically be accomplished by manual methods. The system also provides complete consistency among experiments, allowing labs to replicate experiments performed elsewhere,” says Levin.

-- By Andrew Wilson, Vision Systems Design
