Around 7% of children in British classrooms suffer from speech sound disorders (SSDs), which make it difficult for them to communicate with their peers. Most techniques for alleviating these disorders rely heavily on auditory skills: the children must listen to the sounds they produce and then modify their speech using auditory cues.
A technique that uses visual feedback, however, may benefit those whose visual skills are stronger than their auditory skills -- as is often the case in people with SSDs. Showing a child the required articulation directly would avoid the need to describe it.
Working on that premise, a team of researchers from Edinburgh University (Edinburgh, Scotland), Queen Margaret University (Edinburgh, Scotland), and Articulate Instruments (Edinburgh, Scotland) is developing an ultrasound-based system called Ultrax that aims to provide visual feedback of what is happening inside a child's mouth as he or she speaks. Children who can see the movements of their own tongues can then use that view as a guide to changing their speech.
According to the researchers, it is already possible to capture tongue movements by placing a standard medical ultrasound probe under the chin. However, the image is grainy, information -- especially about the tongue tip -- is often lost, and the image is difficult to interpret.
To alleviate that problem, the researchers are enhancing the images by exploiting prior knowledge of the possible range of tongue shapes and movements, drawn from a large database of ultrasound and MRI images. From this prior, they can transform the images acquired from a patient into clear 2-D videos of the tongue's movements.
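The article does not specify how the database is used, but a common way to exploit prior knowledge of plausible shapes is a low-dimensional statistical shape model: learn the dominant modes of variation from the database with PCA, then project a noisy measured contour onto that subspace to suppress noise while preserving plausible tongue shapes. The sketch below illustrates the idea with a synthetic contour database (all data and names here are illustrative assumptions, not the Ultrax implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "database": 200 plausible tongue contours, each sampled at
# 32 points. Here they are synthesised as smooth curves; in a real system
# they would come from ultrasound and MRI recordings.
x = np.linspace(0.0, np.pi, 32)
database = np.stack([
    np.sin(x) * rng.uniform(0.5, 1.5) + rng.uniform(-0.2, 0.2)
    for _ in range(200)
])

# Learn a low-dimensional shape prior: mean shape plus dominant PCA modes.
mean_shape = database.mean(axis=0)
_, _, vt = np.linalg.svd(database - mean_shape, full_matrices=False)
basis = vt[:3]  # keep the 3 dominant shape modes

def denoise(contour):
    """Project a noisy contour onto the learned shape subspace."""
    coeffs = basis @ (contour - mean_shape)
    return mean_shape + basis.T @ coeffs

# A noisy "measurement" of a plausible contour.
truth = np.sin(x) * 1.2
noisy = truth + rng.normal(0.0, 0.3, size=32)
cleaned = denoise(noisy)

err_noisy = np.linalg.norm(noisy - truth)
err_clean = np.linalg.norm(cleaned - truth)
print(err_noisy, err_clean)
```

Because the noise is spread across all 32 sample points but the projection keeps only 3 shape modes, most of the noise is discarded while the plausible contour survives; a frame-by-frame version of such a fit is one way a grainy ultrasound sequence could be turned into a clean 2-D tongue video.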
To determine whether children find the enhanced images easier to interpret than the unenhanced ones, the researchers will evaluate the children's ability to imitate the tongue shapes and movements shown in each.
-- Posted by Vision Systems Design