Bioanalytical research laboratories often face challenges automating routine tasks, such as reading digital displays across a variety of instruments. But a group of researchers has developed a low-cost and flexible solution that integrates into existing workflows easily and is well suited for small labs with limited resources.
High-cost approaches to automation aren’t feasible for most bioanalytical research laboratories, which track and analyze body tissue and fluid samples as part of the R&D process for new medications. Because these labs work on temporary contracts, buying expensive and permanent automation systems isn’t financially prudent, particularly for the smallest bioanalytical research labs.
Instead, the researchers developed an AI-enabled vision-guided robotic system to facilitate reading and recording of digital information displayed on existing pH meters, shakers, and other common instruments. The researchers are from Albstadt-Sigmaringen University (Sigmaringen, Germany), a public institution, and jetzt GmbH, an engineering institute (Konstanz, Germany).
Related: Time for Pharma 4.0
Their goal was to provide a user-friendly, standardized approach that reduces the risk of errors associated with manual data entry, they explain in a 2025 article in Scientific Reports (http://bit.ly/47gBgfK).
Here’s how it works. A robotic arm picks up a sample and places it on the correct lab instrument without colliding with other objects in the environment. The system recognizes and reads the digital display on the instrument and records the results automatically in either CSV or Excel format.
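The article doesn't show the researchers' code, but the final recording step could look something like this minimal Python sketch, which appends each recognized reading to a CSV log (the file layout, column names, and example pH value here are illustrative assumptions, not the published implementation):

```python
import csv
from datetime import datetime, timezone

def record_reading(path, instrument, value, unit):
    """Append one recognized display reading to a CSV log.

    Writes a header row only when the file is new or empty, so the
    same function works for the first reading and every one after.
    """
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if f.tell() == 0:  # empty file: write the header first
            writer.writerow(["timestamp_utc", "instrument", "value", "unit"])
        writer.writerow(
            [datetime.now(timezone.utc).isoformat(), instrument, value, unit]
        )

# Example: log a hypothetical pH meter reading recognized from its display
record_reading("readings.csv", "pH meter", 7.02, "pH")
```

The same rows could just as easily be written to an Excel workbook, the article's other supported output format, with a spreadsheet library.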
The camera-based system addresses two challenges to automating these lab processes. First, it helps with the tedious and skill-dependent process of programming robots to execute precise, collision-free motions. It also overcomes the lack of built-in interfaces in laboratory instruments, which hinders their integration with automated systems.
Designing the Vision-Guided Robotic System
The system comprises two applications. The first creates a 3D digital CAD representation of the lab environment, while the second uses deep learning to recognize digital displays automatically.
They built and trained their system using a standard machine vision setup. It includes a Raspberry Pi Camera Module 2 connected to a Raspberry Pi 4 Model B board, both from Raspberry Pi Trading (South Cambridgeshire, UK). The camera is mounted on a six-axis Horst600 robot arm from fruitcore robotics GmbH (Konstanz, Germany).
Related: Vision System Detects Pharmaceutical Contaminants
They used horstFX software version 2022.07 to create a scanning routine for the robot and Python to develop image processing scripts for LCD digit recognition and 3D reconstruction of the environment. They used AutoIt, a free scripting language from AutoIt Consulting Ltd. (Worcestershire, UK), to create the scanning program and the camera interface.
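The article doesn't reproduce the LCD recognition scripts, but the core idea behind reading a seven-segment LCD is mapping which of the seven segments are lit to a digit, once the display image has been thresholded and each segment region sampled. A hypothetical, minimal Python sketch of that decoding step (segment ordering and function names are this author's assumptions):

```python
# Lit-segment patterns keyed in the order: top, top-left, top-right,
# middle, bottom-left, bottom-right, bottom. 1 = segment lit.
SEGMENT_PATTERNS = {
    (1, 1, 1, 0, 1, 1, 1): "0",
    (0, 0, 1, 0, 0, 1, 0): "1",
    (1, 0, 1, 1, 1, 0, 1): "2",
    (1, 0, 1, 1, 0, 1, 1): "3",
    (0, 1, 1, 1, 0, 1, 0): "4",
    (1, 1, 0, 1, 0, 1, 1): "5",
    (1, 1, 0, 1, 1, 1, 1): "6",
    (1, 0, 1, 0, 0, 1, 0): "7",
    (1, 1, 1, 1, 1, 1, 1): "8",
    (1, 1, 1, 1, 0, 1, 1): "9",
}

def decode_digit(segments):
    """Return the digit for a tuple of seven on/off segment states,
    or '?' when the pattern doesn't match any valid digit."""
    return SEGMENT_PATTERNS.get(tuple(segments), "?")

# After image thresholding, each segment's region is classified as
# lit or unlit; the resulting tuple decodes to a digit:
print(decode_digit((1, 1, 0, 1, 0, 1, 1)))  # prints "5"
```

A deep learning classifier, as the researchers describe, replaces the hand-coded lit/unlit lookup with a model that tolerates glare, viewing angle, and display-font variation.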
Streamlined Setup for Scientists
Through a graphical user interface (GUI), users select whether they want to create a digital model of their lab environment or detect LCD displays. The software then guides them through the setup and execution processes. It is designed to integrate easily with operating systems for robotic arms.
While end users must initially install the Python and AutoIt programs to set up the software, the researchers have automated this process as well.