Vision Insider<br />Camera captures voices without a microphone<br />Andy Wilson<br /><br />Yasuhiro Oikawa of Waseda University in Tokyo pointed a high-speed camera at the throat of a volunteer with one task in mind: to capture the volunteer&rsquo;s voice without the use of a microphone.<br /><br />Yes, you read that correctly. Oikawa and his team announced at the International Congress on Acoustics on June 3 that they used cameras to take thousands of images per second and record the motions of a person&rsquo;s neck and voice box as they spoke. A computer program then <a href="" target="_blank">turned the recorded vibrations into sound waves.</a><br /><br />Why did they do this, you ask? Some lip-reading software programs are sophisticated enough to recognize different languages, but the end result doesn&rsquo;t usually involve much more than a transcript, according to a <a href="" target="_blank">ScienceNews article</a>. In addition, microphones often record too much background noise, so Oikawa and his colleagues, looking for a new method of capturing vocal tones, came up with this idea. <br /><br />The article explains that the researchers pointed the camera at the throats of two volunteers and had them say the Japanese word tawara, which means straw bale or bag. The team recorded them at 10,000 fps and, at the same time, recorded the volunteers&rsquo; words with a standard microphone and a vibrometer for comparison. The vibrations interpreted from the camera data were similar to the ones captured by the microphone and vibrometer, Oikawa said in the article.<br /><br />After running the images through a computer program, the team reconstructed the volunteers&rsquo; voices well enough to hear and understand them saying tawara.
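That last step, turning a per-frame vibration trace into audible sound, can be sketched in a few lines. This is a simplified illustration, not Oikawa's actual pipeline; the surface-velocity assumption and the function name are mine:

```python
import math

FPS = 10_000  # camera frame rate: one skin-position sample per frame

def vibration_to_audio(displacement):
    """Turn a per-frame throat-displacement trace into a playable waveform.

    `displacement` is a list with one skin-position estimate per video frame.
    The differences between successive frames approximate surface velocity,
    which is roughly proportional to the radiated sound pressure.
    """
    mean = sum(displacement) / len(displacement)
    x = [d - mean for d in displacement]               # drop the resting offset
    velocity = [(b - a) * FPS for a, b in zip(x, x[1:])]
    peak = max(abs(v) for v in velocity) or 1.0        # normalise to [-1, 1]
    return [v / peak for v in velocity]

# One second of a synthetic 200 Hz "throat vibration"
trace = [0.5 + 0.001 * math.sin(2 * math.pi * 200 * n / FPS) for n in range(FPS)]
audio = vibration_to_audio(trace)
```

A real system would also need to extract the displacement trace from the video frames in the first place, which is where the hard computer-vision work lives.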
Mechanical engineer Weikang Jiang of Shanghai Jiao Tong University in China noted that Oikawa did not play audio of the reconstructed voices, but instead showed comparison photos of the sound waves and vibrations. <br /><br />Like Jiang, I am interested to hear what the audio sounds like.<br />, advertising with facial detection<br />Andy Wilson<br /><br />&ldquo;Cara&rdquo; is new facial-detection software from <a href="" target="_blank">IMRSV</a> that uses a standard webcam to scan faces up to 25 feet away and determine age and gender. It&rsquo;s currently being used on a wall of shoes in the back of a Reebok store on Fifth Avenue in New York, where it is helping the store see which customers are spending time at the shoe wall, quickly walking away, or actually buying something.<br /><br />If this experiment goes well, Reebok could install an advertising display that would intelligently react to different customers. For instance, if I were to walk into a store and pick up a pair of size 10 running shoes, a video might pop up on the screen to tell me about those shoes. <br /><br /><a href="" target="_blank">No, really.</a><br /><br />According to IMRSV, Cara detects faces with 93% accuracy. Its demographic classifications include gender (92% accuracy) and age group (child, young adult, adult, or senior, with 80% accuracy).&nbsp; It works at distances of up to 25 feet and can scan multiple people at the same time. In addition to customized marketing, Cara could be used to watch audiences during live performances and to monitor whether drivers are looking at the road, says IMRSV.<br /><br />While this is all quite fascinating, I can&rsquo;t help but think of a scene in Minority Report, where Tom Cruise is walking down a hallway rather quickly and the digital billboards are bombarding him with personalized ads. (Check it out here.)
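The shoe-wall analytics described above boil down to turning per-person detections into dwell-time buckets. Here is a hypothetical sketch of that aggregation step; the event fields, the five-second threshold and the function names are my own invention, not IMRSV's API:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Sighting:
    person_id: int   # tracker ID, stable while the same face stays in view
    gender: str      # "m" / "f", as the classifier reports it
    age_band: str    # "child", "young_adult", "adult", "senior"
    seconds: float   # duration of this sighting

def dwell_report(sightings, engaged_after=5.0):
    """Split viewers into 'engaged' vs 'walk-by' per demographic band."""
    dwell = defaultdict(float)
    meta = {}
    for s in sightings:
        dwell[s.person_id] += s.seconds
        meta[s.person_id] = (s.gender, s.age_band)
    totals = defaultdict(lambda: {"engaged": 0, "walk_by": 0})
    for pid, secs in dwell.items():
        bucket = "engaged" if secs >= engaged_after else "walk_by"
        totals[meta[pid]][bucket] += 1
    return dict(totals)
```

The actual detection and age/gender classification would come from a vision library upstream; this only shows how the raw sightings become the "who lingered, who walked away" numbers the store cares about.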
While the technology isn&rsquo;t nearly as intrusive&mdash;it&rsquo;s certainly not scanning your retina and immediately placing exactly who you are and where you&rsquo;re from&mdash;it is eerily reminiscent of the futuristic adverts portrayed in the film. <br />, booming with Kinect<br />Andy Wilson<br /><br />At a panel discussion at the <a target="_blank" href="">American Telemedicine Association trade show</a>, Dr. Kouroush Parsapour said that physical therapy in the United States is approaching a crisis, so much so that by 2030 the number of states with sub-standard physical therapy provision will increase from 12 to 48. With this in mind, he created 5plus Therapy, a startup that builds digital health physical therapy tools.<br /><br />At 5plus Therapy, Parsapour uses Microsoft&rsquo;s Kinect to measure a patient&rsquo;s movement, a task he had previously performed with a goniometer. Parsapour is not alone: a number of tele-rehabilitation startups nationwide are using the Kinect.<br /><br /><a target="_blank" href="">Reflexion Health</a> has started clinical trials to validate the technology. Reflexion offers a rehab measurement tool that uses the Kinect to instruct the patient on exercises and to measure whether they are performing those exercises correctly.<br /><br /><a target="_blank" href="">MobiHealthNews</a> has a list of <a target="_blank" href="">nine companies using digital rehabilitation solutions</a>, all of which use or plan to use the Microsoft Kinect.<br /><br />The Microsoft Kinect can be used for rehab, <a target="_blank" href="">weightlifting</a>, <a target="_blank" href="">medical record review</a>, <a target="_blank" href="">death mask creation</a>, <a target="_blank" href="">tracking workers</a> and <a target="_blank" href="">identifying car defects</a>.<br /><br />Last week I wrote about how <a target="_blank" href="">video games may be able to help improve 3D vision</a> in adults with lazy eye. In that blog I mentioned that I was never a fan of video games, but with all of the good they are capable of, should I give them a second chance?<br />, atlas of earth<br />Andy Wilson<br /><br />Using two years&rsquo; worth of images taken by NASA satellites, a mapping site called <a href="">MapBox</a> has created a brilliant cloudless atlas of Earth.
<br /><br />The process begins with NASA&rsquo;s Moderate-resolution Imaging Spectroradiometer (MODIS) instruments, which can image the entire Earth every one to two days. The instruments are attached to the Terra and Aqua satellites, which were launched into orbit in 1999 and 2002, respectively, and collect data at visible wavelengths. Once MapBox received the data it wanted from NASA, it began processing the images.<br /><br />Prior to the launch of the atlas, MapBox cartographer Charlie Lloyd told the <a href="">Daily Mail</a> that the 339,000 satellite images, each of 16 megapixels or more, totaled more than 5,687,476,224,000 pixels. Lloyd and his fellow cartographers at MapBox then began the process of identifying images that had a clear view of the ground. By processing the images, the team was able to remove the clouds.<br /><br />This gent and his team did this for every pixel in the world! This enables folks to see images of Earth that have never been seen before, including things like land-use patterns, deforestation, cities, and so on. The images created by MapBox essentially provide an idea of what astronauts on board the International Space Station see on a clear day.<br /><br />While all of the images are undoubtedly impressive, a select few are truly and utterly remarkable. Take <a href="">this one</a>, for example, which shows a clear image of the UK. In this image, you can see London, the Brecon Beacons in Wales, and the highlands of Scotland. <br /><br />Rather cool stuff, no doubt about it. If you&rsquo;re interested in reading more, <a href="">click here</a>.<br />, 3D vision with video games<br />Andy Wilson<br /><br />I&rsquo;ve never been particularly interested in playing video games. Considering that they can sometimes lead to <a href="">addiction</a> and/or <a href="">violence</a>, I sometimes question whether they are best left alone, at least for young children.
On the flip side, though, are the potential positives that games can bring to the table.<br /><br />As it turns out, playing the right video games may actually help you <a href="">improve brain function</a>, <a href="">lose weight</a>, and…restore 3D vision for people with a lazy eye? If you had to read that last part twice, you aren&rsquo;t alone. A study performed at <a href="">McGill University</a> has found that playing video games with both eyes can dramatically improve vision in adults with lazy eye, a condition that was thought to be all but untreatable in adults, according to a <a href="">CBCNews article</a>. <br /><br />Lazy eye, also known as amblyopia, is an eye disorder characterized by impaired vision in an eye that otherwise appears normal. The condition is estimated to affect 1% to 5% of the global population. Those with the condition have limited depth perception and hence cannot judge distances as well as people with normal vision.<br /><br />With the new treatment developed by a team led by Robert Hess, director of the ophthalmology research department at McGill and the Research Institute of the McGill University Health Centre, vision in the weaker eye of someone with lazy eye improves drastically, and rather quickly. Here&rsquo;s how they did it: <br /><br />They chose Tetris, a game that, in their setup, can only be played effectively using both eyes. By splitting the image between the eyepieces of head-mounted goggles, one eye sees the falling pieces and the other eye sees the pieces already fitted at the bottom of the screen.
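That dichoptic split is simple to picture in code. A toy sketch of the principle, my own illustration rather than the McGill software:

```python
def dichoptic_split(frame, falling_mask):
    """Split one game frame into two per-eye views.

    `frame` is a 2-D grid of pixel values; `falling_mask` is True where the
    currently falling piece is drawn. One eye is shown only the falling
    piece and the other only the settled stack, so the brain has to fuse
    the two views in order to play at all.
    """
    weak_eye = [[p if m else 0 for p, m in zip(prow, mrow)]
                for prow, mrow in zip(frame, falling_mask)]
    strong_eye = [[0 if m else p for p, m in zip(prow, mrow)]
                  for prow, mrow in zip(frame, falling_mask)]
    return weak_eye, strong_eye
```

The useful property is that the two views are complementary: pixel by pixel, they add back up to the original frame, which is exactly why neither eye can play alone.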
After playing Tetris for an hour a day for two weeks (that&rsquo;s a lot of Tetris!), nine adults with lazy eye showed vast improvement in the vision of the weaker eye and in their 3D depth perception.<br /><br />Rather cool stuff, if you ask me, but I can&rsquo;t say for certain whether or not I&rsquo;ll be playing Tetris any time soon.<br />, future is here<br />Andy Wilson<br /><br />Science and engineering aside, what is the first thing you think of when the idea of 3D holographic images comes to mind? <i>Star Wars? The Jetsons? Red Dwarf? </i>For decades, the idea of 3D holography has been referenced in pop culture. So when William Hanna and Joseph Barbera portrayed the Jetsons using holographic televisions and telephones in 2062, just how grounded in reality were those depictions? <br /><br />As it turns out, the Hanna-Barbera duo was onto something.<br /><br />3D holograms are already being <a href="" target="_blank">used to create maps that enable soldiers and commanders to navigate the terrain</a> in which they are operating without 3D glasses or goggles. The same technology could be making its way into people&rsquo;s homes and offices sooner than Hanna and Barbera might have thought. <br /><br />A <a href=";pg=0&amp;so=&amp;rw=1&amp;jid=109433&amp;jlang=EN&amp;pp=SS" target="_blank">job listing</a> from Microsoft suggests that the company is working on telepresence technology that would depict a virtual hologram of the person on the other end of a conversation. In other words, Microsoft is reportedly bringing 3D holograms to Skype, says <a href="" target="_blank">Laptop Mag</a>.<br /><br />We&rsquo;ve seen similar technology developed already, as <a href="" target="_blank">researchers at Queen&rsquo;s University created a human-sized 3D videoconferencing system</a> that allows people in different locations to communicate as if they were face to face.
But with the Skype hologram technology, no pods and no sensors would be involved.<br /><br />Needless to say, this could revolutionize the way that offsite colleagues and business partners interact with one another. On one hand, it would be beneficial for those who are unable to meet in person for one reason or another, even if meeting and chatting face to face is something that cannot truly be replaced. But on the other hand, will the technology begin to erode the need for a common, shared workplace? Conjecture, no doubt, but it is interesting to think about. <br /><br />In considering some of the advances involving 3D technology we&rsquo;ve seen of late, what&rsquo;s next? <a href="" target="_blank">Here's what I'm thinking.</a><br />, the past, create the new<br />Andy Wilson<br /><br />Roughly translated from Japanese to English, the phrase <i>onkochishin</i> means &ldquo;Respect the past, create the new.&rdquo; That advice suits this particular blog topic well, as scientists have produced an audio file from a 128-year-old relic.<br /><br />Alexander Graham Bell, the man credited with inventing the first practical telephone, does not have a voice. This is not to say the man was mute--he was not--but given that he passed away nearly 91 years ago, nobody had actually heard his voice for nearly a century.<br /><br />Until now.<br /><br />The Smithsonian National Museum of American History, working in tandem with the Library of Congress and Lawrence Berkeley National Laboratory, has <a href="" target="_blank">identified a recording of Bell&rsquo;s voice for the first time</a>.
It all began when a transcript that was signed and dated by Bell on April 15, 1885, was matched with a wax-on-binder-board disc that carries his initials and the same date. The Smithsonian sent the disc through a noninvasive optical sound recovery process, on equipment developed by the Berkeley Lab, so that it could be audibly matched to the transcript and an audio file produced. <br /><br />How? Well that, of course, is the interesting part.<br /><br />In this process, 3D optical metrology and surface profiling methods create a 3D digital map of the disc. The map is then processed to remove evidence of wear or damage, and software calculates the motion of a stylus moving through the disc&rsquo;s grooves, reproducing the audio content as a digital file. An in-depth look at how this technology was developed and how it is used can be <a href="" target="_blank">found here</a>.<br /><br />The <a href="" target="_blank">group</a> that produced the recording was also responsible for retrieving <a href="" target="_blank">10 seconds of the French folk song &ldquo;Au Clair de la Lune&rdquo;</a> from an 1860 recording of sound waves made as squiggles on a piece of paper.<br /><br />So while it may not be the high-quality audio that folks are used to today, it is only fitting that the man--who may or <a href="" target="_blank">may not have invented</a> the telephone--now has a voice., eyes have it<br />Andy Wilson<br /><br />Camera-based surveillance systems have definitely played an important role in helping to keep crime down. With the knowledge that their activities will be captured on camera, members of the criminal fraternity have been dissuaded from committing felonious acts in the community.<br /><br />But while such systems are undoubtedly effective, they do cost money to commission and maintain.
And that's cash that many hard-up communities may be loath to part with in these financially challenging times.<br /><br />So could there be a cheaper way to reduce crime without the use of such cameras? Well, apparently, yes, there is. Researchers at <a href="" target="_blank">Newcastle University</a> (Newcastle, UK) have now discovered that bicycle theft, for example, can be significantly reduced simply by placing pictures of staring eyes above bike racks.<br /><br />In a two-year experiment on the university campus, the academics showed that the eye pictures -- which were combined with a short anti-theft message -- reduced thefts from the bike racks by 62 per cent. <br /><br />Newcastle University's Professor Daniel Nettle said that the images of eyes could act to dissuade crime by making people feel that they are being observed -- in a similar way to surveillance cameras -- and as a result behave in a more honest fashion.<br /><br />That, of course, is the good news. The bad news is that there was also a noticeable difference in crime in places without the signs, where bike theft went up by 63 per cent, suggesting that the crime had been displaced to other locations rather than eliminated.<br /><br />Despite that fact, the British Transport Police are now trialing the idea with the train company C2C on a route between Fenchurch Street Station in London and Southend in Essex.<br /><br />While the idea undoubtedly has its merits, I'd like to think that a more comprehensive solution to the bike theft issue might be to install a couple of surveillance cameras behind the pictures of the staring eyes.
<br /><br />Although my belt-and-braces idea might cost a few more shillings to implement, the combination of the eye pictures and the vision-based solution would not only lead to an even greater reduction in bicycle thefts, but also provide the police with detailed images of those still intent on a life of crime.<br />, on a barcode<br />Andy Wilson<br /><br />If you are a private investor engaged in online trading and banking, having a Trojan attack your PC and whisk your personal financial details off into the nether regions of the internet is a rather horrid experience.<br /><br />Fortunately, some rather clever chaps at Cambridge University spin-out <a href="" target="_blank">Cronto</a> (Cambridge, UK) have now developed a system called CrontoSign to address this issue -- a data security solution that makes use of nothing less than a two-dimensional barcode.<br /><br />In use, a bank generates the proprietary two-dimensional barcode -- a matrix of colored dots containing a cryptographically encoded message -- and then sends it to a customer. The code is then decoded by the customer using an app running on a handheld device such as a cell phone, or on dedicated hardware supplied by the company. <br /><br />The barcode provides a secure "envelope" around the data so that it can be displayed to the customer over any unsecured channel. So although a Trojan might see the image being sent by the bank, it cannot change the secure data inside.<br /><br />Two German banks -- comdirect bank and Commerzbank -- have already rolled out the system, which is known in Germany as photoTAN.<br /><br />Customers can now scan a photoTAN image displayed on the banks' websites using the photoTAN mobile app or a dedicated photoTAN hardware device.
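Cronto's actual algorithms are proprietary, but the general idea of a short code that is cryptographically bound to one specific transaction can be illustrated with an HOTP-style construction. The function name, key handling and truncation choice below are my own assumptions, not Cronto's scheme:

```python
import hashlib
import hmac

def confirmation_code(shared_key: bytes, transaction: str) -> str:
    """Derive a six-digit code bound to one specific instruction.

    Illustrative only. The point is that the code is a MAC over the
    transaction text, so a Trojan that alters the payee or the amount
    also invalidates the code the customer types in.
    """
    digest = hmac.new(shared_key, transaction.encode(), hashlib.sha256).digest()
    # Dynamic truncation in the style of RFC 4226 (HOTP)
    offset = digest[-1] & 0x0F
    value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"
```

Because the device holding the shared key is separate from the possibly infected PC, the code only validates the exact instruction the customer saw on that device.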
A customer then sees the message from their bank, which typically asks them to confirm the action they are attempting to perform.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="170" src="" width="320" /></a></div><br />To confirm the transaction, the customer uses a six-digit code, generated by the app or device, and enters it into the browser on their PC. The code acts as the customer's signature for a specific instruction and, once received and validated by the bank's server, completes the transaction.<br /><br />While Cronto is currently focused on the online banking sector, the company also sees commercial possibilities for the system in e-commerce, peer-to-peer online payments, or any other application where there is a need to create a trusted connection between two parties.<br /><br />You can read more about the CrontoSign system <a href="" target="_blank">here</a>. A video demonstrating the system is also available on YouTube in German <a href=";v=B0B28S26zuk" target="_blank">here</a>.<br />, detector<br />Andy Wilson<br /><br />Using one of those new-fangled computer tablets while walking along the street can be a dangerous affair.<br /><br />Just the other day, for example, I saw one self-absorbed individual collide with another pedestrian while strolling down a pedestrian precinct as he used such a tablet to surf the internet.<br /><br />It could have been a whole lot worse. He could have walked into something a lot harder, such as a brick wall or a lamp post, and caused some serious injury to either himself or the infrastructure. <br /><br />One answer to this problem, of course, is not to use such mobile devices while walking, and to concentrate on negotiating the environment instead.
But these days, when we all like to be permanently wired into the web, many individuals are unlikely to heed such practical advice.<br /><br />Recognizing that fact, a team of researchers at the <a href="" target="_blank">University of Manitoba</a> (Winnipeg, Manitoba, Canada) has now developed a rather nifty little vision-based system that could be the answer to mobile users' prayers.<br /><br />Their so-called "CrashAlert" system augments mobile devices such as tablets with a Microsoft Kinect depth camera to provide distance and location visual cues of obstacles on a user's path. The Kinect camera itself is connected via USB to a battery-powered laptop computer carried in a backpack. When the laptop receives images from the Kinect, it processes them and sends them off to the tablet over a Bluetooth connection.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="168" src="" width="320" /></a></div><br />In this way, a user can see their surroundings on the tablet while they walk, dodging, slowing down, or lifting their head to avoid any potential collisions and related injuries.<br /><br />Now, the cynics amongst my blog followers might consider that hauling around a bulky computer and a Kinect system in a backpack completely defeats the purpose of using a lightweight tablet in the first place. And, of course, they're probably right.<br /><br />But if such a system were miniaturized and actually fitted to the tablet itself, then it might actually prove to be of some practical use. And I'm sure that such systems will be in the future.<br /><br />A research paper entitled "<i>Crash Alert: Enhancing Peripheral Alertness for Eyes-Busy Mobile Interaction while Walking</i>," by University of Manitoba researchers Juan David Hincapi&eacute;-Ramos and Pourang Irani, is available on the internet <a href="" target="_blank">here</a>.
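Reducing a depth frame to simple left/centre/right proximity cues, roughly in the spirit of CrashAlert, might look like the sketch below. The band layout, the 1.5 m threshold and the names are my own illustration, not the researchers' code:

```python
def obstacle_cues(depth_m, alert_below=1.5):
    """Summarise a depth frame into left/centre/right proximity cues.

    `depth_m` is a grid (rows of floats) of distances in metres, with 0
    meaning "no reading" -- roughly what a Kinect-style sensor returns.
    """
    w = len(depth_m[0])
    thirds = {"left": (0, w // 3),
              "centre": (w // 3, 2 * w // 3),
              "right": (2 * w // 3, w)}
    cues = {}
    for name, (lo, hi) in thirds.items():
        readings = [row[c] for row in depth_m for c in range(lo, hi) if row[c] > 0]
        nearest = min(readings) if readings else float("inf")
        cues[name] = {"nearest_m": nearest, "alert": nearest < alert_below}
    return cues
```

Three numbers per frame is cheap enough to ship over Bluetooth to the tablet, which is the whole point of summarising the depth image rather than streaming it.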
Just don't read it on a tablet while attempting to negotiate a busy pedestrian precinct.<br /><br /><u><i>Editor's note</i></u>: Interested in reading more about novel uses of the Kinect? Then why not browse through our recent slideshow <a href="" target="_blank">here</a>.<br />, and running<br />Andy Wilson<br /><br />Ask anyone who knows me and they will tell you the same thing: I've never been too fond of outdoor sports. My dislike probably dates back to my time at school in England, where all the lads were required to take part in rather rough games of "rugger" three times a week.<br /><br />Nevertheless, I can certainly see the advantage of giving the heart and muscles a good old workout in the comfort of my own home on one of those new-fangled running machines. <br /><br />But the problem with those running machines is that, up until now, it&rsquo;s been impossible to partake in anything more intellectual -- like reading a treatise on how to program computer vision systems with Python -- whilst pounding away on the treadmill.<br /><br />Thankfully, though, some rather clever chaps at <a href="" target="_blank">Purdue University</a> (West Lafayette, IN, USA) have now come up with a solution to this problem. That's right: they have developed a system called "Readmate" that allows treadmill users to read text on a small monitor mounted in front of the machine while they are exercising.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />To do so, a user must first don a pair of goggles equipped with infrared LEDs. An infrared camera can then track the runner's bobbing head by capturing images of the LEDs.
Then the text on the screen is moved in unison with the head movement.<br /><br />According to Ji Soo Yi, an assistant professor of industrial engineering at Purdue, the text cannot be moved exactly in sync with the head because the eye is already doing what it can to compensate. So the system accounts for that compensation by moving the text slightly out of sync with the head motion.<br /><br />While the new system could prove to be a boon for those who get easily bored by endlessly running on the same spot, it might also be used by heavy equipment operators and aircraft pilots who experience heavy shaking and turbulence while reading information from a display., goods<br />Andy Wilson<br /><br />If heavy goods vehicles and their trailers are too heavily loaded, or their loads incorrectly distributed, they may constitute a traffic hazard and damage road surfaces.<br /><br />But stopping vehicles randomly at weigh stations for no good reason -- especially those weighing hundreds of tons -- can mean that a lot of fuel is wasted unnecessarily in the stopping and starting process.<br /><br />Now, though, SINTEF, the largest independent research organization in Scandinavia, is heading up the development of a new system called "NonStop" that could offer a novel solution to this weighty problem.<br /><br />The system makes use of a special piezoelectric cable countersunk into the road surface.
The cable generates an electrical voltage when subjected to pressure, and in this way the weight of a vehicle passing over it can be determined and recorded by a computer.<br /><br />Complementing the road sensor is an automatic number plate recognition system that will read vehicle number plates, from which a vehicle's permitted load can then be determined.<br /><br />The measured weight and the load that the vehicle is allowed to carry will then be used by inspectors from the Norwegian Public Roads Administration (NPRA) to assess whether it should be stopped or not.<br /><br />SINTEF was commissioned by the NPRA to develop the system. Other partners involved are the Norwegian Hauliers' Association and the Oslo firm Ciber.<br /><br />What I particularly like about the Scandinavian idea is that it not only makes use of state-of-the-art vision systems, but uses them in conjunction with a cable whose piezoelectric properties were discovered way back in 1880 by the French physicists Jacques and Pierre Curie.<br /><br />Indeed, considering how to couple older technologies with the new might also provide developers of vision systems in other fields with some ideas along similar lines.<br />, mirror<br />Andy Wilson<br /><br />Regular readers of this blog will be aware of the coverage that I have devoted in the past to a company called <a href="" target="_blank">Raspberry Pi</a>, a spin-out from that UK hive of cerebral activity, Cambridge University (Cambridge, UK).<br /><br />The engineers at Raspberry Pi have developed, and are now selling, a small, inexpensive Arm-based (Cambridge, UK) computer that plugs into a TV and a keyboard. It's a capable little PC which can be used for many of the things that a desktop PC does, like spreadsheets, word processing and games. It also plays high-definition video.<br /><br />Since its launch, the inexpensive computer has attracted a lot of interest from hobbyists and academics alike, who have deployed it in a variety of innovative ways.
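Going back to NonStop for a moment: the stop/no-stop decision it supports reduces to comparing the weigh-in-motion reading against the permitted load looked up from the plate read. A hypothetical sketch of that decision logic; the names, the lookup table and the 5% tolerance are mine, not SINTEF's:

```python
def flag_for_inspection(measured_kg, plate, permitted_kg_by_plate, tolerance=0.05):
    """Decide whether a vehicle should be waved into the weigh station.

    Combines the piezoelectric-cable weight reading with the permitted
    load looked up from the ANPR plate read. Illustrative only.
    """
    permitted = permitted_kg_by_plate.get(plate)
    if permitted is None:
        return True  # unknown vehicle: stop it for a manual check
    # Allow a small margin for weigh-in-motion measurement error
    return measured_kg > permitted * (1 + tolerance)
```

Vehicles within their permitted load simply drive past, which is where the fuel saving comes from.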
<br /><br />In one of the more recent applications, a French chap by the name of Pierre Raufast has used his Raspberry Pi computer, a webcam and OpenCV software to create a "Magic Mirror" with a disembodied voice that recognizes the person looking into it and responds accordingly. <br /><br />To enable others to build a similar system, the generous Frenchman has posted an easy-to-follow tutorial, including a hardware list, software, instructions, and tips on successfully using OpenCV for face recognition. It can be found on the Think RPI web site <a href="" target="_blank">here</a>.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="258" src="" width="320" /></a></div><br />Having finished assembling the hardware, downloading and compiling the source code, and training your system to recognize individuals, Monsieur Raufast recommends that you take a break and read "L'homme qui plantait des arbres," an allegorical tale by French author Jean Giono, published in 1953.<br /><br />Personally, if I were Monsieur Raufast, I wouldn't be resting on my laurels and reading anything. If I had developed such a system, I'd be investigating whether the good folks at The Walt Disney Company might be interested in parting with some of their money to help me commercialize it. <br /><br />Monsieur Raufast has posted a video of his Magic Mirror in action on YouTube. You can view it <a href=";feature=player_detailpage" target="_blank">here</a>. I wonder what the Brothers Grimm would make of it?<br />, cam<br />Andy Wilson<br /><br />A camera hidden in a teddy bear has caught a care worker stealing money from the home of an old-age pensioner in the UK.<br /><br />According to a report in the UK's Daily Telegraph, 28-year-old care worker Emelie Kleen-Barry was caught red-handed stealing from 81-year-old grandmother Margret Birch by the hidden camera.
She was jailed at Leicester Crown Court for 13 months for stealing &pound;40 in cash from Birch's home.<br /><br />Mrs Birch kept her purse in a wardrobe in her room, but when funds appeared to be diminishing faster than she expected, her family planted the so-called "Teddy Cam" in her room and focused it on the wardrobe to find out what was happening to the money.<br /><br />The family called the trap 'Operation Narnia', and the device soon came into its own when it caught Kleen-Barry taking money from the grandmother's purse.<br /><br />"In my professional opinion, Teddy Cams and other hidden cameras are the best way to give peace of mind to families and friends who are worried about what goes on when they can't be around to care for vulnerable people," says Kristy George, a spokesperson for the Birmingham, UK-based firm <a href="" target="_blank">Private Detective</a>.<br /><br />That company claims to have had great success with Teddy Cams in the past. Not only are they an excellent way to keep an eye on an elderly or infirm relative or friend who lives alone or in a warden-controlled home, they can also be given to a child to find out how the child is being cared for when the parents are not around.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="238" /></a></div><br />Clearly, such wireless monitoring devices can be a great boon for those with elderly parents or small children. But such Teddy Cams are not inexpensive.
One such bear from <a href="" target="_blank">Eyetek Surveillance</a> (Chaddesden, UK) that comes equipped with a color camera and a transmitter that sends video to a receiver currently retails in the UK for &pound;145 (almost $220).<br /><br />Faced with similar surveillance issues, those of us with an engineering bent might like to consider constructing our own such bears instead of buying one. Certainly, it's an idea that has caught the imagination of our European Editor Dave Wilson.<br /><br />He tells me that he could pick up a wireless camera and receiver for around &pound;45 ($68) from UK electronics gizmo supplier <a href="" target="_blank">Maplin</a> that he is almost certain he could retrofit quite easily to the somewhat elderly bear from his childhood.<br /><br />Soon then, the unfortunate beast might be taken from the upstairs cupboard where he has lived for the past fifty years only to have his soft, delicate insides drawn out to make way for the new vision-based implant.<br /><br /><b>References:</b><br /><br /><b>1. <a href="" target="_blank">Thieving care worker caught on camera inside teddy bear</a><br /><br />2. <a href="" target="_blank">Eyetek Surveillance Wireless Teddy Camera</a>&nbsp;</b><br /><br /><b>3. <a href="" target="_blank">Discreet Wireless Colour CCTV Camera</a></b><br /><br /><b>defects</b><br /><br />Many reasons are often cited for deploying machine vision systems. These include improving tedious and repetitive manual inspection tasks that are prone to human error while increasing productivity at the same time.<br /><br />In many manufacturing environments, single-point inspection systems can determine whether a product has been properly assembled. If not, the product is then rejected and may be reworked or scrapped.
Needless to say, such reworking may prove expensive if a product has gone through multiple manufacturing stages before being inspected.<br /><br />To reduce the amount of reworking required, many manufacturers employ what is known as a "zero defects forward" approach. Rather than inspect a product after it has been fully assembled, the product is inspected after each step in the assembly process.<br /><br />In this way, any defects that occur at each stage are recognized and can be more easily corrected before the next phase of assembly. In addition, such "zero defects forward" approaches save the time and money that would otherwise have been wasted by completing the assembly of a defective product that would ultimately need to be disassembled and then re-assembled.<br /><br />To further improve the manufacturing process, manufacturers can also deploy vision-based robotics systems to automate the assembly task itself, relieving operators of the tedious job of doing so.<br /><br />By deploying such systems in conjunction with vision-based inspection systems, manufacturers can further reduce assembly cost, increase productivity and eliminate human error. However, for many Small to Medium-sized Enterprises (SMEs), implementing fully automated robotic assembly systems may only be justified if the return on investment is high enough.<br /><br />As an alternative, those enterprises that have already realized the benefits of implementing vision systems to inspect their products could consider deploying semi-automated assembly systems to evaluate their effectiveness.<br /><br />In doing so, they will avoid the costs of implementing fully automated assembly systems, while at the same time reaping the benefits of semi-automated assembly before they ultimately and inexorably move towards a totally automated manufacturing process.<br /><br /><b>that spit</b><br /><br />When Swiss-born ophthalmologist Dr.
Edmund Landolt proposed a new type of symbol for testing visual acuity in 1888, he probably would never have dreamed that one day it would be used to help explain how fish are able to feed on insects.<br /><br />But that is exactly what Dr. Shelby Temple at <a href="" target="_blank">Bristol University</a> (Bristol, UK) and a team at the University of Queensland and the University of Western Australia have done. They have modified Dr. Landolt's so-called "C test" to discover the resolving power of the eyes of a family of fish known as archerfish.<br /><br />The archerfish themselves are rather unusual creatures. They have a special way of hunting for food that involves spitting jets of water at tiny aerial insects high above the water's surface. Because sound and smell do not cross the air-water interface, these fish must depend on their visual capabilities to find, identify and accurately spit at their prey.<br /><br />To discover how visually acute such fish are, the researchers first trained them to spit at one of two letters -- an 'O' or a 'C' -- by rewarding them with food.&nbsp; Then they showed them small versions of both letters together and recorded which letter they spat at.<br /><br />"This modified Landolt C test works because the only difference between the two letters is the gap in the C, so in order to tell the difference and spit at the right target to get their reward the fish must be able to resolve the gap," says Dr. Temple. <br /><br /><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="180" /></a><br /><br />To test the archerfish's resolving power, the size of the letters was decreased in steps.
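A note on units: acuity from grating or Landolt C tests is usually quoted in cycles per degree, and since one cycle comprises one dark and one light band, the smallest resolvable gap is half a cycle wide. A minimal sketch of the conversion (a generic optics identity, not code from the study; the function name is my own):

```python
def resolvable_gap_degrees(cycles_per_degree):
    # One cycle of a grating spans a dark and a light band, so the
    # smallest resolvable gap is half the period of the finest grating
    # the eye can distinguish.
    return 1.0 / (2.0 * cycles_per_degree)

# Acuity values reported for the archerfish:
for cpd in (3.23, 3.57):
    print(f"{cpd} cycles/degree -> {resolvable_gap_degrees(cpd):.3f} deg of visual arc")
```

Running this recovers the 0.155&deg; and 0.140&deg; figures quoted alongside the cycles-per-degree results.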
The scientists then compared the behavioral results from their experiments to the fishes' predicted acuity based on measurements of the photoreceptor density in their retinas.<br /><br />The results, published in the journal Vision Research, show that archerfish are one of the most visually acute freshwater fish. They are able to resolve between 3.23 and 3.57 cycles per degree (0.155-0.140&deg; of visual arc) with the part of their retina that looks up and forwards, which is not surprising given their interesting foraging strategy. <br /><br />If Dr. Landolt were alive today, he'd undoubtedly be amused to learn how his test had been repurposed, and impressed to see how effective it was at helping biological scientists such as Dr. Temple determine the visual acuity of animals other than human beings.<br /><br />More information on the research is available on the Bristol University web site <a href="" target="_blank">here</a>. The researchers' technical paper "A comparison of behavioural (Landolt C) and anatomical estimates of visual acuity in archerfish (Toxotes chatareus)" is available <a href="" target="_blank">here</a>.<br /><br /><b>fish</b><br /><br />The "discarding" of fish by commercial fishermen is a term commonly used to describe the practice of throwing unwanted fish back into the sea -- usually dead.<br /><br />Under the <a href="" target="_blank">European Common Fisheries Policy</a> (CFP), discards have historically taken place for three reasons. First, fish are discarded if they are small and under the legal minimum size which fishermen are allowed to land for that species. Second, they are thrown back into the sea if a fisherman's annual quota for that species has already been reached, making it illegal to land it.
Lastly, fish are dispensed with if they are of a species which has no commercial market value.<br /><br />Thankfully, discarding fish will soon become a thing of the past after the UK Government secured a historic victory in Brussels to set firm dates to reform the EU Common Fisheries Policy and introduce a ban on the practice. Hence the wasteful practice of discarding edible fish stocks like herring and mackerel will end from January 2014. A ban for white fish stocks will begin in January 2016.<br /><br />The legislation will undoubtedly save the lives of many thousands of fish. But what of the scavenging birds that follow the fishing vessels to help themselves to a free lunch? How will they be affected by the move? <br /><br />That's exactly what Dr. Stephen C. Votier, an Associate Professor in Marine Ecology at the School of Marine Science and Engineering at <a href="" target="_blank">Plymouth University</a> (Plymouth, UK), wanted to find out. To do so, Dr. Votier and his team attached cameras and GPS systems to ten <a href="" target="_blank">Northern Gannets</a> (Morus bassanus) to provide a unique view of how the seabirds interacted with the fishing vessels. <br /><br />Results from his research revealed that all the cameras on the birds captured images of large (&gt;15m) boats, but not smaller vessels. Virtually all the vessels were trawlers, and the gannets were almost always accompanied by other scavenging birds. All the birds exhibited an area-restricted search during foraging, but only 42 per cent of such searches were associated with the fishing vessels, indicating that the birds were still foraging naturally. <br /><br />The research illustrates the fact that those vessels discarding fish provide an important source of food for foraging gannets, but that they would be able to adapt well if discards were to disappear altogether -- if there is sufficient food to meet their nutritional needs.<br /><br />While Dr.
Votier's research has provided a fascinating insight into the foraging habits of seabirds, his technique of equipping birds with cameras and GPS systems might also be used in the future in the battle against illegal, unreported and unregulated fishing activity, which may still result in a significant amount of discards.<br /><br /><u><b>References:</b></u><br /><br /><b>1. <a href="" target="_blank">A Bird&rsquo;s Eye View of Discard Reforms: Bird-Borne Cameras Reveal Seabird/Fishery Interactions</a></b><br /><br /><b>2. <a href="" target="_blank">Historic day as Fisheries Ministers agree a date for discards ban</a></b><br /><br /><b>Related articles from <a href="" target="_blank">Vision Systems Design</a> that you might also find of interest:</b><br /><br /><b>1. <a href="" target="_blank">Cameras and computers count fish</a></b><br /><br />Researchers at the University of Western Australia (Perth, Australia) led by associate professor Euan Harvey have been awarded a three-year, $450,000 Australian Research Council Linkage grant to develop a vision-based computer algorithm to count and measure fish.<br /><br /><b>2. <a href="" target="_blank">Fish processed by vision system</a></b><br /><br />Engineers at the Icelandic firm Valka (Kópavogur, Iceland) have designed a vision-based cutting machine that can cut out pin bones from red fish, as well as trim and portion the fish into fish fillets.<br /><br /><b>3.
<a href="" target="_blank">A fish tale</a></b><br /><br />To maximize profits, fish farmers are using a machine-vision system and smart software to sort good fish eggs from bad.<br /><br /><b>girls make graves</b><br /><br />Forty years ago, I read an interesting science fiction story about a novel device worn on the top of the head that lit up when its wearer was in the presence of an individual that he or she found particularly attractive.<br /><br />The device in question removed all of the ambiguity involved in determining whether individuals thought that strangers were particularly desirable, and the story delved into the social implications of doing so.<br /><br />While it sounded rather far-fetched at the time, this week, science fiction became a bit closer to science reality with the news that <a href="" target="_blank">Fujitsu Laboratories</a> (Kawasaki, Japan) has developed software that is capable of measuring an individual's pulse rate in real time by calculating variations in the brightness of their face.<br /><br />The software, which captures and processes images from a built-in camera in a PC, smart phone or tablet, can measure a pulse rate simply by pointing the device at a person's face for as little as five seconds.<br /><br />The system takes advantage of the fact that one of the characteristics of hemoglobin in blood is that it absorbs green light. So, after capturing a video of the face with the camera, the software uses the peaks in the brightness of the green component of the RGB images in the video frames to compute a pulse rate.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="214" src="" width="320" /></a></div>Naturally enough, the good folks at Fujitsu see the development of such a system as enabling individuals to use their own inexpensive camera-equipped devices to determine how healthy they are.
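Fujitsu has not published the details of its algorithm, but the general green-channel technique described above can be sketched in a few lines: average the green channel over the face in each frame, then pick the dominant frequency in the plausible heart-rate band. The sketch below is a generic illustration under those assumptions; the function name, band limits and synthetic test signal are mine, not Fujitsu's:

```python
import numpy as np

def estimate_pulse_bpm(green_means, fps):
    """Estimate a pulse rate in beats per minute from the mean green-channel
    brightness of successive video frames of a face."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))     # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)     # 42-240 bpm is physiologically plausible
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak                         # Hz -> beats per minute

# Synthetic check: a 72-bpm (1.2 Hz) flicker in facial brightness plus noise,
# sampled at 30 frames/s for 10 seconds.
rng = np.random.default_rng(0)
fps = 30
t = np.arange(10 * fps) / fps
green = 128 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
print(estimate_pulse_bpm(green, fps))  # ~72
```

In a real system the green means would come from a face region tracked across the video, and the signal would typically be detrended and filtered to cope with lighting changes and motion.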
And while that's all well and good, there are clearly other social implications.<br /><br />It's a well-known fact, for example, that a racing pulse is a sign of instant attraction. So it's quite possible that -- should such software be offered on the open market -- it could be used by certain folks to determine how attractive others find them.<br /><br />While that might seem to be a rather frivolous use of the technology, there could be some health implications here too. According to that bastion of editorial excellence, the UK Daily Mail, for example, researchers at the <a href="" target="_blank">University of Valencia</a> (Valencia, Spain) have shown that -- for men -- just five minutes spent alone with a beautiful stranger can cause so much stress it may be bad for the heart.<br /><br />For that reason, men lucky enough to spend time in the company of beautiful ladies might be better off pointing their smart phones at themselves -- rather than at the ladies -- to determine the effects their socializing is having on their health.<br /><br /><b>References: </b><br /><br />1. <a href="" target="_blank">Fujitsu Laboratories develops real-time pulse monitor using facial imaging</a><br /><br />2. <a href="" target="_blank">How a beautiful stranger will send a man&rsquo;s stress hormones soaring</a><br /><br /><b>moment remembered</b><br /><br />These days, most folks in the western world carry a smart-phone with them at all times, enabling them not only to talk and text, but also to capture images of their friends and family.<br /><br />Many such people, of course, only haul out their smart-phones to take a snap when they feel that the picture that they are taking is important enough to justify doing so.
But the folks at <a href="" target="_blank">Memoto</a> (Linköping, Sweden) believe that because of that, many important moments in an individual's life might well be lost.<br /><br />To rectify the situation, the engineers there have developed a miniature 36 x 36 x 9 mm wearable camera that captures images continuously, creating a visual log of the life of the user.<br /><br />Memoto itself was founded by six Swedish serial entrepreneurs just last year. Martin Källström, formerly founder and CEO of blog search engine Twingly, came up with the concept of the Memoto camera. To develop it further, he recruited his friend Oskar Kalmaru, founder of an online video provider, as well as Björn Wes&eacute;n, who was, at the time, a freelance electronic design engineer.<br /><br />The Memoto GPS-enabled weatherproof camera they have created has no buttons. While a user is wearing it, it takes two timed geotagged photos a minute, enabling a user to immediately review when and where he or she was when the image was taken.<br /><br />Now there are some who might regard such a device as only pandering to the needs of individuals who are egotistical enough to believe that every moment of their lives should be recorded for posterity. <br /><br />On the other hand, such cameras might prove extremely useful in certain industrial or security applications. I'm sure that many companies, for example, would be only too delighted to attach such cameras to their employees, so they might track their activities on a daily basis.
And the police could certainly use them to tag known offenders and capture a visual log of their activities to ensure that they weren&rsquo;t involved in any nefarious business.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="185" src="" width="320" /></a></div><br />If you would like to purchase one of the wearable cameras -- which come in three colors: orange, graphite grey and arctic white -- they can now be pre-ordered at the company web site for the price of $279.00.<br /><br />Personally, I wouldn't have much use for one, unless I wanted to capture hours of images of the PC screen in front of which I sit for most of the day writing news and features for Vision Systems Design.<br /><br /><b>matter</b><br /><br />Many companies that design and develop vision systems are founded by engineers with an entrepreneurial spirit who want to carve out their own niche in life rather than working in a nine-to-five job for a larger organization.<br /><br />Such individuals often have the unique ability not only to identify a specific requirement in the marketplace, but also to direct their teams of engineers to create systems to meet those needs.<br /><br />But what happens to such organizations when their founder dies? According to new research from the Universities of Warwick and Bergen, the result is rather astonishing -- the death of a founding entrepreneur wipes out on average 60 percent of a firm&rsquo;s sales and cuts jobs by around 17 percent.<br /><br />The research, by Professor Sascha O. Becker at <a href="" target="_blank">Warwick University</a> (Coventry, UK) and Professor Hans K. Hvide at <a href="" target="_blank">Bergen University</a> (Bergen, Norway), has shed some light on exactly how much a founder-entrepreneur 'matters' in terms of influencing the performance of privately-owned businesses.
<br /><br />To reach their conclusion, the researchers analyzed firms' performance up to four years after the death of the founder-entrepreneur and found a long-lasting and significant negative impact. Although they didn't specifically look at companies in the vision systems design marketplace, from my own experience, there is no doubt that the conclusions they reached are equally applicable. <br /><br />As well as the striking effect on sales, companies whose entrepreneur dies have 20 per cent lower survival rates two years after the death, compared to similar firms where the entrepreneur remains alive.<br /><br />Warwick University's Professor Becker said that while the researchers expected businesses that experienced the death of a founder-entrepreneur to have some kind of a dip in performance immediately after the death owing to the upheaval, they had also anticipated that there would be a bounce-back.<br /><br />However, even four years after the death of a company founder, most firms showed no sign of recovering, and the negative effect on performance appeared to continue even further beyond that.<br /><br />Professor Becker said that the results showed what a vital role such people played in maintaining productivity levels within a firm, but could only surmise why that might be so. <br /><br />While it could be because the founder was a fantastic sales-person who generated a disproportionately high level of sales, it could also be due to their leadership, where the employees were inspired to perform to their best ability.<br /><br />The researchers looked at various different types of firms to see how they were affected by founder-entrepreneur death.
But they found no difference between results for family or non-family firms, urban or rural businesses, and no significant variation across sectors.<br /><br />The level of education of the founder-entrepreneur also played a role in determining how badly the firms were affected -- those with the most highly educated founders experienced a bigger drop in performance after their death. The researchers also looked at whether ownership shares mattered. What they found was that the effect of the death of a 50-per-cent owner was roughly half that of the death of a majority owner.<br /><br />In the vision industry, I'm pleased to say that there are some companies that have continued to succeed after their founders have passed away. In many cases, however, they were founded by individuals who recognized the need to entrust the responsibility of running their companies to younger, equally entrepreneurial individuals long before they died.<br /><br /><b>best of times</b><br /><br />It was the best of times. It was the worst of times. Unfortunately, it was the latter that greeted me upon arriving in Orlando, Florida at the Automated Imaging Association's business conference.<br /><br />Originally, I had not planned to attend the conference, delegating the task to our European Editor, David Wilson, who, I decided, needed a week in the sun after a cruel English winter.<br /><br />However, after Jeff Burnstein, the President of the Association, informed me that I was to be presented with the AIA's annual achievement award, my plans quickly changed. After booking my flight, I decided that, after the conference was over, I would treat my brother to a visit with Mickey and Minnie at Walt Disney World.<br /><br />Unfortunately, this was not to be. On the day before the conference, I received a telephone call from Dave, who informed me that he had pulled a muscle in his chest and could not get out of bed.
And so, armed with 35 tablets of Vicodin, I disembarked from the airplane to reconnoiter the situation.<br /><br />From then on, things went south. The "beautiful" International Palms Resort hotel in Florida was not that beautiful -- in fact there were cigarette burns on the carpets, the place was filthy, the shower in my room was broken and the elevators did not work properly. To add to this nightmare, I discovered my brother in great pain, lying in bed unable to move. Needless to say, I immediately called for a doctor. <br /><br />After about two hours, an elderly gentleman arrived with his assistant, carrying a bag full of more drugs than my local pharmacy. After what seemed like an hour-long exam, my brother was diagnosed with a cracked rib, given an injection of hydrocortisone and a prescription for Vicodin. Needless to say, my brother was somewhat incapacitated for the whole week.<br /><br />To make matters worse, I only had four hours before accepting my award. After driving to the airport to re-register Dave's car, I drove back to the beautiful International Palms Resort hotel and picked up the prescription. Luckily, there was a pharmacy opposite the hotel. Unluckily, the pharmacist who ran the shop informed me that the doctor had forgotten to write the dosage on the script. After a few telephone calls, I was presented with a bill for $95 for forty-four of the wonderful white tablets.<br /><br />After I delivered them, I drove the car to the conference to accept my award, only to find some very worried organizers wondering where I had spent most of the day. Having had only four hours' sleep, I decided that, after checking on my patient, I would try to get an early night.<br /><br />Like the rest of my plans, this was not to be. At about 10:15 pm that night, two of the attendees at the conference (who must remain nameless) decided to surprise me with a bottle of Mo&euml;t and Chandon.
After sitting around the hotel's swimming pool talking for the next four hours, it was finally time for bed. Four hours later, I went back to attend the rest of the conference.<br /><br />Needless to say, our European Editor and I never did get to greet any rodents in the Magic Kingdom.<br /><br /><b>crackdown in Slough</b><br /><br />To exploit those individuals in the UK on low incomes who may not be able to afford to live in conventional lodgings, some unscrupulous homeowners have converted their outbuildings into accommodation which they then rent out at low cost.<br /><br />The problem with such dwellings -- known in the UK as &ldquo;sheds with beds&rdquo; -- is that they may not comply with UK building or fire safety regulations, and hence could represent a hazard to the hapless tenants who are forced to rent them due to their unfortunate circumstances.<br /><br />Now, Slough Borough Council is set to become the first local authority in the UK to do something about the problem, with the help of thermal imaging technology. To assist it in its endeavors, the council has commissioned geographic imaging company <a href="" target="_blank">Bluesky International</a> (Coalville, UK) to fly an airplane fitted with aerial imaging cameras over the whole borough at night, after which it will produce a thermal map of the town. Officers will then use the map to pinpoint warm areas in outbuildings.<br /><br />It is not known exactly how many sheds with beds there are in Slough, but estimates range from 700 to 3,000.
The occupants are believed to be mostly single adults or childless couples with low incomes.<br /><br />According to Ray Haslam, head of environmental services and resilience for Slough Borough Council, aerial photography is one of a range of tactics the council is using to crack down on the problem, and it hopes that evidence of heat in outbuildings will help it build a true picture of how many sheds are being lived in and where they are.<br /><br />&ldquo;We will be able to cross-check and see whether they have valid Energy Performance Certificates (EPCs) which are required by law for places where people live. If they don&rsquo;t, we will be speaking to landlords and offering some advice and guidance, and enforcing the law if we need to. One option is to repeatedly fine a landlord for not having an EPC. The fine is &pound;200 a day, making it very expensive for people to continue using the outbuilding,&rdquo; he said.<br /><br />Slough Borough Council is one of a handful of local authorities that have been granted extra money from the UK Government to help improve conditions in houses in multiple occupation (HMOs) and reduce the number of sheds being used as accommodation without permission.<br /><br />Cracking down on the exploitation of individuals living in unsafe housing clearly has its merits. The problem is, of course, that the technology itself does nothing to address the real issue of the lack of affordable housing. If more of that had been created in the first instance, then the need for the flying thermal cameras would never have arisen.<br /><br /><b>to Florida?</b><br /><br />You've got to feel a bit sorry for our poor beleaguered European Editor.
You see, at around 9 am GMT today, he received a call from a close friend who had discovered some important information relating to his trip next week to the 21st annual AIA Business Conference in Orlando, Florida.<br /><br />She informed him that she had heard on the television that a law introduced in Florida on January 1, 2013 now requires all persons who hold a license issued outside of the US to carry an International Driving Permit (IDP) along with their national driving license.<br /><br />Apparently, the new law says that -- without an IDP -- a driver is therefore driving without a valid license, and if stopped, law enforcement officers have the option of either arresting the driver and taking them to jail or giving the driver a citation with a mandatory court appearance.<br /><br />Not wanting either option to happen to him, our European Editor walked down to his local Post Office to see if they might supply him with the relevant documentation. Sadly, they weren't able to help, directing him to the nearest larger Post Office in Bedford, a town no more than five miles away.<br /><br />Unfortunately, upon driving into this town, parking his car and walking into the establishment in question, he was told that only a few Post Offices were capable of dealing with such requests and the nearest one was, in fact, in Luton -- over forty miles away.<br /><br />Somewhat peeved, our European Editor drove home to telephone the Automobile Association (AA), which confirmed that the only way to obtain the IDP was to present his existing UK photocard license and passport at the Post Office in Luton.<br /><br />After an hour-long drive, he finally reached his destination. But alas, there was more bad news in store for our editorial friend. <br /><br />That's right.
You see, the folks at the Post Office in Luton informed him that -- in addition to his photocard license and passport -- he would also be required to present what in the UK is known as a "Counterpart Driving License (CDL)," a small green piece of paper that appears to all intents and purposes to contain exactly the same information as the photocard license itself. Our Editor, naturally enough, had left his CDL at home.<br /><br />Needless to say, it took most of the day before our European Editor was actually issued with his brand spanking new IDP. He can now rest assured that nothing particularly nasty will happen to him should he be stopped by the police while attending the AIA Conference.<br /><br />But he needn't have worried. Because after checking on the Internet just hours ago, I have discovered that the Florida Department of Highway Safety &amp; Motor Vehicles has now issued a statement saying that the recently enacted IDP requirement has been suspended pending further study.<br /><br />All apparently due to the fact that the requirement may violate the Geneva Convention on Road Traffic (1949), an international treaty to which the United States is a signatory.<br /><br />See you in Florida!<br /><br /><b>seeds in Mongolia</b><br /><br />It's unlikely that you will have ever heard of the Shandong Luhua Group. Yet this rather substantial Chinese enterprise produces no less than 600,000 tons of peanut oil and 100,000 tons of sunflower seed oil each year.<br /><br />Needless to say, with production volumes like that, it's hardly surprising that its products have been exported to numerous countries, providing it with revenues in the millions.<br /><br />The company itself has a number of subsidiaries -- including one in Inner Mongolia that goes by the incredibly lengthy name of the Inner Mongolia Luhua Sunflower Seeds Oil Company Limited.<br /><br />The reason that I mention this particular plant in Inner Mongolia is simply because of its size.
If you take a look at the picture below, you will see what I mean.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="208" src="" width="320" /></a></div><br />Now you might think that such a plant would employ a lot of people to perform tedious manual operations to check the size and the quality of the sunflower seeds before they are processed to make the oil.<br /><br />But that's where you'd be wrong. It's for certain that many of the processes are automated, not least the sorting of the seeds according to their color.<br /><br />The picture below, for example, testifies to that fact. Taken from inside the plant itself, it appears to show a plethora of color-sorting machines from <a href="" target="_blank">Anhui Jiexun Optoelectronic Technology</a> (Hefei, Anhui, China). This company has produced an array of such systems to automate the sorting of all sorts of agricultural products -- including rice, cereals, beans, nuts and tea!<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="208" src="" width="320" /></a></div><br />Now for those of you still reeling from the news that Vision 2013 has been cancelled, let me remind you that <a href="" target="_blank">Vision China 2013</a> will still be held from October 16-18 this year at the China International Exhibition Center (Beijing, China).<br /><br />Perhaps now is the perfect time for those of us who manufacture and market components used in vision systems to look a little further afield for new opportunities.
China might be just the place.<br /><br /><b>&hellip;'s good for the goose</b> &ndash; Andy Wilson<br /><br />First staged in 1901, the Chicago Auto Show is the largest auto show in North America and has been held more times than any other auto exposition on the continent.<br /><br />Since 1901, a lot has changed in the automobile market, and a lot has changed at the show too. In the past, to catch a glimpse of the latest introductions from Ford, DeLorean or Cord, you physically had to attend the show. But things have changed. Thanks to the marvel of digital imaging technology, now that&rsquo;s not even necessary.<br /><br />That's because the organizers of the show have installed an array of webcams from <a href="" target="_blank">TrueLook Professional Webcam Systems</a> (Winston-Salem, NC, USA) to give those unfortunate souls who can&rsquo;t travel to Chicago a live HD view from the floor of the show.<br /><br />Not only are the TrueLook webcams accessible via the Chicago Auto Show&rsquo;s website, users can simultaneously view and control them as well, aiming and zooming them to see the over 1,000 vehicles on display from their favorite auto manufacturers.<br /><br />The webcams also let users save their photos, either to their computer or in an online photo album. Alternatively, they can be shared on Facebook or Twitter.
According to TrueLook, these interactive features, along with the ability to control the motorized cameras, have led to an increase in visitors to the website.<br /><br />While the deployment of such cameras at an auto show might have a lot of benefits, I can&rsquo;t really see the advantages of such a system at one of our own industry trade shows that focus on vision system design.<br /><br />You see, visitors to our trade shows don't just come to stand and stare at a new USB3 enabled CMOS imager or the attractive model that stands next to it, but to interact with company representatives to discover whether any of the products on offer might solve a particular challenge that they are facing as systems integrators.<br /><br />I can't help but think, however, that perhaps it's a little ironic that some of the camera technology on display at such shows has not been more effectively deployed by the show&rsquo;s organizers in the same way that it has at the Chicago Auto Show!<br /><br />The webcams at the Chicago Auto Show can be viewed <a href="" target="_blank">here</a>.<br /><br /><b>&hellip;white stuff</b> &ndash; Andy Wilson<br /><br />When I was a small child, I used to really enjoy the sight of snow in winter. And I fondly remember (as a toddler, of course!) the winter of 1962 when the UK was hit by a massive snow storm that covered the entire country in up to six feet of snow.<br /><br />These days I'm not so fond of the winter and the misery that ensues after a big snowstorm. Inevitably, after such an event, I have to shovel my drive for hours just to take my car out. That's right. Perhaps it's my age, but the very thought of a snowstorm now sends shivers down my spine.<br /><br />There are those, however, who still clearly enjoy the snow. And Tim Garrett, an associate professor of atmospheric sciences at the <a href="" target="_blank">University of Utah</a> (Salt Lake City, UT, USA), is one of them.
<br /><br />In fact, he&rsquo;s so enamored of the snow that he has developed a rather unique instrument for capturing images of snowflakes and measuring their speed as they fall.<br /><br />The so-called Multi-Angle Snowflake Camera (MASC) was developed in the Department of Atmospheric Sciences at the university with support from the US Army, NASA, and the National Science Foundation.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="242" src="" width="320" /></a></div><br />In operation, it takes 9 to 37 micron resolution stereographic photographs of falling snowflakes from three angles, while simultaneously measuring their speed. The cameras are triggered by a vertically stacked bank of sensitive IR motion sensors and the speed is derived from successive triggers. The instrument itself is sensitive to snowflake sizes ranging from 100 micrometers to 30,000 micrometers.<br /><br />If you are interested in buying the camera, you will be delighted to hear that it can now be purchased through <a href="" target="_blank">Fallgatter Technologies</a>, a spin-off company from the university that is, naturally enough, headed up by Dr. Garrett himself.<br /><br />The company&rsquo;s first delivery was made to the US Army for the serious purpose of researching avalanches at Mammoth Mountain, which is situated west of the town of Mammoth Lakes, California.<br /><br />Having developed such an innovative camera to capture images of snowflakes as they fall, perhaps now Dr. Garrett could turn his attention to creating an inexpensive labor saving device that would help clear my drive after the white stuff has fallen.<br /><br /><b>&hellip;head</b> &ndash; Andy Wilson<br /><br />Pez is more than just candy. That's right. It's "interactive candy" that is both enjoyable to eat and fun to play with.
And that's partly because dispensers with new characters on them are introduced regularly.<br /><br />But there are some Pez candy dispensers that you won't be able to buy in any of the supermarkets, mass merchandisers, variety stores, drug stores, convenience stores, toy chains and gift stores that sell them throughout the US and Canada. <br /><br />That's because the heads on these particular dispensers have been custom-built by folks working for the rather oddly named company, the '<a href="" target="_blank">Hot Pop Factory</a>', as holiday gifts for the employees of one of their clients.<br /><br />To do just that, the chaps at the Hot Pop Factory first scanned all 32 of the employees' heads in 3-D using the Microsoft Kinect camera. Astoundingly, they convinced everyone to allow them to digitize their heads for "a mysterious research project", despite a lot of protesting.<br /><br />After the digital 3-D models of the subjects were generated, the scans were patched up with MeshMixer, a free software tool that can be used to make 3-D models. <br /><br />After some more modeling work to add a connection from the heads to the candy dispensers, they were ready to print.&nbsp; And many hours of printing later, the Hot Pop Factory had produced the 32 custom-built heads that were then ready to install on the candy dispensers after the existing ones had been removed.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="257" src="" width="320" /></a></div><br />From the reaction of the individuals who received their holiday Pez dispensers, it would appear that the whole exercise was a huge success.<br /><br />However, I hope that my own publishing company doesn't decide to reward our staff with similar custom-built Pez dispensers this coming holiday season.
Having just returned from hours in the dentist's chair having root canal treatment, I can't say the thought of eating any Pez candy seems like such a sweet idea at the present time.<br /><br />You can watch a video of the Kinect in action <a href=";feature=player_embedded" target="_blank">here</a>.<br /><br /><b>&hellip;the web</b> &ndash; Andy Wilson<br /><br />These days, it's important to have a presence on the interweb, because without it your company will be deemed to be either behind the times or out of business. <br /><br />Recognising this fact, many vision systems integrators and their suppliers have developed their own web sites in which they can tout their wares and demonstrate their expertise to their customer base.<br /><br />Sadly, however, after visiting numerous web sites over the past few weeks in search of new developments in the vision field, I'm sorry to say that too many companies are simply paying lip service to this technology rather than actually taking advantage of the benefits that it could potentially offer.<br /><br />In many cases, it would appear that while such companies may have been initially excited by the potential that the technology offered a few years ago, today they have actually abandoned the idea that the interweb is of any use at all.<br /><br />On one site that I visited recently, for example, I clicked on a specific link to see what new applications a particular vision systems integrator might have been involved with, only to be taken to a page with a rather grotesque image and a caption that read "Your Page Has Been Hacked by Tony".<br /><br />While that was the most extreme example of company negligence that I found, there were plenty of others. On another systems integrator's web site, there were a host of links to case studies. Sadly, however, all of them took me to web pages that simply read "Page Not Found".<br /><br />Now you might think that this sort of thing only applies to small to medium sized enterprises. But you'd be wrong.
When attempting to email the marketing department of a large robotics company, my email client informed me that my message had been returned due to the fact that no such email address could be found.<br /><br />After looking at your own web site, perhaps you might find that it is also lacking in certain functionality. And if you do, there are a couple of things that you can do about it. <br /><br />On one hand, you might consider outsourcing the maintenance of the site to an external developer who will be able to consistently ensure that your site remains free from hackers and entirely functional. Alternatively, you could consider hiring an individual at your company whose sole responsibility it is to maintain your web site.<br /><br />Allocating a specific resource to manage your web site might make a lot of difference to the experience of any new potential customers. But be careful to ensure that you map out the specific goals that you are trying to achieve before embarking on any venture, just as you would when specifying the design of a new vision system.<br /><br /><b>&hellip;foils thermal imager</b> &ndash; Andy Wilson<br /><br />Over the past few years, thermal imaging cameras have been used to locate people by capturing images of the heat emitted by their bodies. <br /><br />That's because, of course, when viewed through a thermal imaging camera, warm objects stand out well against cooler backgrounds, hence humans become easily visible against the environment.
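That contrast -- warm bodies against a cooler background -- is also what makes thermal person detection computationally cheap. As a minimal illustrative sketch (entirely my own, not from the post, and assuming an 8-bit thermal frame in which brighter pixels are hotter), a single global threshold is often enough to separate a warm region from the scene:

```python
import numpy as np

# Synthetic 8-bit "thermal frame": a cool background with one warm region.
frame = np.full((6, 6), 40, dtype=np.uint8)  # cool background pixels
frame[2:4, 2:4] = 200                        # a warm "body"

# Flag every pixel hotter than a global threshold.
threshold = 128
mask = frame > threshold

print(int(mask.sum()))  # number of pixels flagged as warm -> 4
```

A real system would add calibration, noise filtering and blob analysis on top of this, but the underlying hot-versus-cold contrast is exactly what the garments described below are designed to take away.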
<br /><br />Now, due to the miniaturization of electronic and electro-mechanical components, such infra-red cameras can be easily mounted onto inexpensive small unmanned aerial vehicles that can be used by the police forces to assist with public safety missions.<br /><br />Although relatively few such drones are currently flown over US soil, the Federal Aviation Administration (FAA) predicts that 30,000 drones will fill the nation's skies in less than 20 years.<br /><br />However, some Members of Congress and the public fear there are insufficient safeguards in place to ensure that drones are not used to spy on American citizens and unduly infringe upon their fundamental privacy.<br /><br />Proponents have responded by emphasizing their potential benefits, which may include protecting public safety, patrolling borders, and investigating and enforcing environmental and criminal law violations.<br /><br />Clothes designer <a href="" target="_blank">Adam Harvey</a> is one individual who falls into the former camp. It's clear that he thinks that thermal imaging systems mounted on drones are a threat to our civil liberties. And his concern with protecting the privacy of individuals has now led him to create a range of so-called 'Anti-Drone' garments designed with a fabric that apparently protects the wearer against thermal imaging surveillance.<br /><br />They work by using highly metallized fibers to reflect heat, thereby masking the wearer's thermal signature. Of the three 'Anti-Drone' pieces that have been created so far, two are inspired by Muslim dress: the burqa and the scarf. A third piece -- the hoodie -- is intended to thwart overhead thermal surveillance from drones.<br /><br />While I'm as concerned about protecting the privacy of the public as anyone else, I can't help but think that Mr.
Harvey may not have thought his idea out quite as thoroughly as he should.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="242" src="" width="320" /></a></div><br />You see, while the metallized fiber burqa shown above might well reduce the chances that an individual is spotted by a thermal imager mounted in a police drone, it will certainly increase the chances that the individual will be spotted by police on the ground, since he or she will stick out like a sore thumb.<br /><br />Reference: <a href="" target="_blank">Drones in Domestic Surveillance Operations: Fourth Amendment Implications and Legislative Responses, by Richard M. Thompson II.</a><br /><br /><b>&hellip;can help clear landmines</b> &ndash; Andy Wilson<br /><br />The United Nations estimates that there are more than 110m landmines scattered in 70 countries and, using current technologies, it would take over 1100 years and more than $33bn to clear them.<br /><br />To help speed up the process, the UK-based Engineering and Physical Sciences Research Council (EPSRC) has formed a partnership with Find A Better Way (FABW) -- a charity founded by Sir Bobby Charlton -- to fund one or more research projects focused on new ways of detecting landmines.<br /><br />During a recent event held at Lloyds of London, the organizations jointly launched a call for outlines for those wishing to submit full proposals for projects, lasting between one and four years, to address research challenges in areas of landmine detection.<br /><br />FABW is making around &pound;1m available for the research and it is expected that many individuals will have research ideas to contribute.
FABW aims to develop technology to accelerate the detection and safe removal of landmines globally.<br /><br />The outfits are looking for research proposals which will lead to the development -- within a seven-year timescale -- of new technologies able to make the process of detecting and locating landmines for humanitarian clearance faster, cheaper and safer.<br /><br />Clearly, there's scope here for those involved in the world of vision systems to make a significant contribution to these efforts. Outside the UK, many researchers, such as Roger Achkar from the American University of Science and Technology (Beirut, Lebanon), have already done so. <br /><br />Last year, Achkar disclosed that his team had developed a robot that captures images of contaminated areas which are then processed by an artificial neural network to classify both the make and model of landmines. The results of his research work can be found <a href="" target="_blank">here</a>.<br /><br />But clearly, a lot more work needs to be done and the new initiative is a welcome step in the right direction. Those interested in the project can find a list of organizations and individuals who are eligible to apply for funding <a href="" target="_blank">here</a>. The closing date for submission of outlines is March 25, 2013.<br /><br />More information on the project itself can be found on the EPSRC web site <a href="" target="_blank">here</a>.<br /><br /><b>&hellip;app captures drunk drivers</b> &ndash; Andy Wilson<br /><br />Frank Vahid, a computer science professor in the Bourns College of Engineering at the <a href="" target="_blank">University of California, Riverside</a> (UCR; Riverside, CA, USA) has created a new smart-phone app to help eradicate problems caused by drunk drivers.<br /><br />The professor developed the app after realizing that while the police always ask individuals for the license plate of vehicles potentially being driven by drunks, it's not always easy for them to provide such details.
And that's where the new smart phone app comes into its own.<br /><br />After downloading the free Android and iPhone app called DuiCam, all a driver needs to do is mount their smart-phone on the dashboard of their car. Once fired up, the app will then enable the smart-phone to constantly record what is happening in front of the car, while deleting footage after 30 minutes so the memory on the smart-phone isn't overwhelmed.<br /><br />If app users see what looks like a drunk driver, they can then replay the video and zoom in to look at the license plate and other identifying marks on the offending car which they can pass on to the police. The app even makes it possible to email a snapshot or the entire video to help investigators get the driver off the road.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="252" /></a></div><br />Five years ago, the technology for such an app wasn't widely available, but now virtually every cell phone has a good quality camera, and many people already have mounts for their dashboards or windshields, so they can easily use the camera feature on their smart-phone.<br /><br />Professor Vahid and UCR computer science majors Timothy Cherney and Daniel de Haas -- the students who programmed the new DuiCam app -- are now adding more features, such as automatic license plate recognition.<br /><br />I think that there is a lot of potential in this idea. Imagine, for example, that future versions of the app could automatically categorize (with reasonable accuracy, of course!) 
the sort of driving behavior that is deemed to be representative of a drunk driver and then automatically send the license plate number it captures and identifies to the nearest police vehicle!<br /><br />That would truly make those folks who are still foolish enough to drive after having imbibed a few cocktails think twice before getting in their vehicles. <br /><br />If you like the professor's idea and would like to help sponsor further development, you can email him at, and/or give directly at the UCR donation page.<br /><br />Information on the app is available at <a href=""></a>. Readers can download the <a href=";mt=8" target="_blank">iPhone version here</a> and the <a href=";feature=search_result#?t=W251bGwsMSwyLDEsImNvbS5lc2x1Y3IuZHVpY2FtIl0" target="_blank">Android version here</a>.<br /><br /><u><b>Related items from Vision Systems Design.</b></u><br /><br /><b>1. <a href="" target="_blank">Vision system helps curb drunken drivers</a></b><br /><br />Drunk driving may soon become a thing of the past thanks to a face-recognition program developed by a pair of University of Windsor (Windsor, Ontario, Canada) engineering graduate students.<br /><br /><b>2. <a href="" target="_blank">Thermal imaging software detects drunks</a></b><br /><br />Greek researchers have developed software to analyze images from thermal imaging cameras to objectively determine whether a person has consumed an excessive amount of alcohol.<br /><br /><b>&hellip;systems design</b> &ndash; Andy Wilson<br /><br />The pain associated with a kidney stone can be very severe. As anyone who has ever had one will testify, the pain tends to come in waves and can be so great that it can cause individuals to double over in agony.<br /><br />Medical professionals recognize the symptoms caused by such stones, after which they usually perform an x-ray scan to determine their location and size. When the x-rays pass through soft body tissues, they cause the x-ray film to turn black.
But when a calcium stone is present the x-ray cannot pass through it and the image of the stone shows up as white on the x-ray image.<br /><br />If kidney stones cannot be dissolved by drugs, the favored procedure is lithotripsy. Lithotripsy works by focusing shock waves onto the kidney stones in an effort to break them into pieces small enough to pass out of the body. After the procedure, another x-ray is taken to see if the procedure has been successful in clearing the kidney stone.<br /><br />Now, researchers in the UK led by Professor Tim Leighton from Southampton University (Southampton, UK) in collaboration with Guy's and St Thomas' Foundation Trust (GSTT) and Precision Acoustics (Dorchester, UK) have developed an acoustic instrument called the "smart stethoscope" that also aims to assess whether the lithotripsy treatment is working, obviating the need for more x-rays.<br /><br />In operation, a transducer is placed on a patient's skin as they undergo shock wave treatment for kidney stones, and the smart stethoscope system the transducer is attached to analyzes the characteristics of the acoustic signals from the stone after each shock wave has hit it. By doing so, it can determine whether the treatment has been effective or not at breaking it up.<br /><br />According to Dr. Fiammetta Fedele of GSTT, the instrument has diagnosed successful treatments with 94.7 per cent accuracy in clinical trials. The UK National Health Service (NHS) is now trialing the smart stethoscope as part of major plans to reduce inaccurate diagnoses and ineffective treatments, and so far GSTT has used the sensor on over 200 patients.<br /><br />What fascinates me most about this development is that it represents an acoustic alternative to a well-established, tried-and-tested imaging approach that has served the medical field well for years. <br /><br />Of course, characterizing materials and products by analyzing their acoustic properties in industrial settings is nothing new.
However, it's usually the case that such acoustic scanning systems are deployed because of their unique abilities to find hidden defects within assemblies and materials that can occur during manufacturing, characteristics that pure vision-based systems are unable to spot. <br /><br />But the work by the UK researchers at Southampton University shows that there is scope to develop such acoustic systems in industry as a direct replacement for vision-based inspection systems. And I'd be pleased to hear from any of our readers who might have done just that.<br /><br /><b>&hellip;has its benefits</b> &ndash; Andy Wilson<br /><br />Most of the engineers I know who design and build vision-based systems would describe their work as creative and intellectually challenging. And although their jobs might often be frustrating and exasperating, few of them would describe their work as boring.<br /><br />However, while most of us might think of being bored at work as a negative experience, a new study suggests it can have positive results including an increase in creativity -- simply because it gives us time to daydream.<br /><br />That is the finding of a study being presented today by Dr. Sandi Mann and Rebekah Cadman from the University of Central Lancashire (Preston, UK) at the Annual Conference of the British Psychological Society Division of Occupational Psychology.<br /><br />To reach their conclusion, Dr. Mann and Ms. Cadman conducted two studies.
In the first, 40 people were asked to carry out a boring task (copying numbers out of a telephone directory) for 15 minutes, and were then asked to complete another task (coming up with different uses for a pair of polystyrene cups) that gave them a chance to display their creativity.<br /><br />It transpired that the 40 people who had first copied out the telephone numbers were more creative than a control group of 40 who had just been asked to come up with uses for the cups.<br /><br />To see if daydreaming was a factor in this effect, a second boring task was introduced that allowed even more daydreaming than the boring writing task. This second study saw 30 people copying out the numbers as before, but also included a second group of 30 reading rather than writing them.<br /><br />Again the researchers found that the people in the control group were least creative, but the people who read the numbers were more creative than those who had to write them out. This suggests that more passive boring activities -- like reading or perhaps attending meetings -- can lead to more creativity, whereas writing, by reducing the scope for daydreaming, reduces the creativity-enhancing effects of boredom.<br /><br />Dr. Mann believes that boredom at work has always been seen as something to be eliminated, but she thinks that it may now be time to embrace it to enhance our creativity.<br /><br />So the next time that you are stuck for a novel solution to one of your customer's automated inspection tasks, perhaps you should take some time out to attend a meeting, read out numbers from the telephone directory or catch up on filing some old reports. If Dr.
Mann is right, just 15 minutes spent doing so might work wonders!<br /><br /><b>&hellip;holidays!</b> &ndash; Andy Wilson<br /><br />As the holiday season approaches once more, it is time again for the publisher, editors and hard working sales representatives at Vision Systems Design to take a well deserved break from our labors until the New Year comes sneaking around.<br /><br />Personally, I'm looking forward to these holidays and the ritual of waking up eagerly on Christmas morning to see what delightful pieces of high-technology equipment Papa No&euml;l may have deposited beneath the resplendent $40 Christmas tree which stands proudly erect in my living room.<br /><br />Perhaps this year, I might be elated to find that one of my generous friends, relatives or work colleagues has bought me a new camera or piece of video equipment which I can deploy to capture the thrilling magic of the holidays. Before doing so, however, you can be sure that I will look up the particular specification of the imagers used in them to discover exactly how many line pairs per millimeter they can resolve!<br /><br />But even if I don't receive any new high-technology items this year, I can be almost one hundred per cent certain that someone will have bought me a new pair of cotton socks and some underwear. <br /><br />Any fathers amongst our readership will recognize the importance of such presents, since hardworking engineers and editors like me rarely have time to buy such items of loathing (Surely, clothing? -- Ed.) during the course of the year!<br /><br />Whatever presents I receive, one thing is for sure. Come Christmas day, I will be sitting down with friends and family to tuck into a nicely stuffed bird, accompanied by a plethora of roast potatoes, sprouts and cranberry sauce.
Not to mention the canap&eacute;s, aperitifs and petit fours that I plan to wash down with some mulled wine.<br /><br />Having satiated myself, I may well take a little rest on the couch to watch the endless stream of intriguing variety shows that will undoubtedly be broadcast on television this holiday season. <br /><br />It might sound like a quiet time for some, but for me, it's a great respite from the hurly-burly world of publishing that consumes so much of my existence during the rest of the year.<br /><br />Enough said. On behalf of the entire team here at Vision Systems Design, I would like to thank those folks in the industry who have supported our efforts over the past year through advertising, as well as those who have taken the time and trouble to discuss the vision systems that they have developed with our editors for the benefit of our readers. <br /><br />I sincerely hope that next year will bring you and your families, as well as the companies that you work for, the good fortune and prosperity that you deserve. Have a wonderful holiday season.
I hope your bird tastes as good as mine.<br /><br /><b>&hellip;vision</b> &ndash; Andy Wilson<br /><br />Anyone who has ever attended the birthday party of a small child will know that at some point in the proceedings an event will occur that inevitably upsets one or more of the children present.<br /><br />And so it was when Dave Wilson, our beleaguered European Editor, visited the small sleepy hamlet of Steeple Claydon in Buckinghamshire, England last weekend to attend the birthday party of the daughter of a close friend.<br /><br />Towards the end of the party, the birthday girl's helium filled balloon was let loose from its moorings by another child at the party, only to ascend to the 50ft high ceiling of the seventeenth-century village hall where the party was being held.<br /><br />Seeing the look of disappointment on the child's face, our European editor strode off into the village hall kitchen to ask if any of the adults present might know of any innovative means by which the rogue balloon could be brought down from its lofty heights.<br /><br />Sadly, most of them shook their heads in despair. One suggested waiting until the balloon deflated. Another suggested hiring a very long ladder. No one was able to offer any practical solution to the conundrum at all.<br /><br />Upon returning to the main hall, our European Editor was somewhat taken aback to discover that the balloon in question was lying on the floor in the hands of the father of the child, who proceeded to attach a weight to it before passing it back to his daughter.<br /><br />Needless to say, our European Editor was absolutely intrigued to discover how such a feat had been accomplished and approached the father to discuss the means that he had employed to retrieve the balloon from such a height.<br /><br />Well, I must tell you, the solution was rather ingenious.
The child's father had taken a small, yet rather heavy packet of the children's candy and wrapped it with a voluminous amount of double-sided sticky tape. Having done so, he pitched the projectile directly at the balloon, to which it affixed itself. Naturally enough, after it had done so, the laden balloon then descended rapidly to the floor of the village hall!<br /><br />The retrieval of a balloon from the ceiling of a village hall might at first appear to have little to do with the subject of vision systems design. But those with years of experience in the industry will see it differently.<br /><br />You see, the solution to the thorny problem of how to retrieve the balloon was only derived through an indirect and creative thought process, using reasoning that was not immediately obvious. And it's that sort of lateral thinking that also sets the successful companies in the vision systems design business apart from the ones that are not.<br /><br /><b>&hellip;of burgers</b> &ndash; Andy Wilson<br /><br />An average fast-food restaurant spends $135,000 every year employing individuals to produce hamburgers. But all that could soon become a thing of the past if the engineers at <a href="" target="_blank">Momentum Machines</a> (San Francisco, CA, USA) have anything to do with it.<br /><br />For the engineers there are working on developing a robotic system that can do everything those employees can presently do, except better. Indeed, they are claiming that with their new robotic system in place, the labor savings will enable future restaurants to offer "gourmet quality" burgers at fast-food prices.<br /><br />The robotic system will be able to offer custom meat grinds for every single customer. So if you want a patty with one third pork and two thirds bison ground after you place your order, that won't be a problem.
Aside from mixing up and cooking the meat, the system can also slice toppings like tomatoes and pickles immediately before it places the slices onto a burger, providing customers with the freshest burger possible.<br /><br />The result of all of this, at least according to the company, is that the consumer will be presented with a product that is more consistent and more sanitary. And since the system will be able to produce 360 hamburgers per hour, there will be more than enough to go around!<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="204" src="" width="320" /></a></div><br />While it all might sound a bit far-fetched, the team at Momentum Machines has an impressive background. They were trained in mechanical engineering, control systems, and physics at institutions such as Berkeley, Stanford, UCSB and the University of Utah. And their work experience includes firms such as iRobot, NASA, Sandia National Labs, Semiconductor Technology Associates and Tesla.<br /><br />They are being advised in their endeavors by Don Fox, the CEO of Firehouse Subs and 2011 Nation's Restaurant News Operator of the Year, and The Culinary Edge, a highly esteemed restaurant consulting group. Investment capital is being provided by Lemnos Labs.<br /><br />Now the results of such a system will clearly have a great impact on the folks who presently work in fast-food chains. So the noble minded folks at Momentum Machines aim to help out those who may need to "transition" to a new job as a result of their technology by offering discounted technical training to any former line cook of a restaurant that deploys the new system.<br /><br />If you have a degree or two, on the other hand, you might even consider working for the company itself. Currently, it is looking to hire a mechatronics engineer as well as a machine vision specialist to further the development of the system.
So if you know anything about vision systems design and love a good hamburger, you know where to, of a legendnoemail@noemail.orgAndy WilsonDr. Bryce E. Bayer, the former Eastman Kodak scientist who invented the standard color filter pattern that bears his name, has died.<br /><br />Aged 83, Bayer died on November 13 in Bath, Maine. According to a report in the New York Times, the cause of Bayer's death was a long illness related to dementia.<br /><br />In Bayer-based color imagers, pixels on an image sensor are covered with a mosaic of red, green, and blue filters. In the Bayer pattern, 50% of the pixels are green, 25% are red, and 25% are blue. A technique called Bayer demosaicing is then used to interpolate the missing red, green, and blue values at each pixel to obtain a full-color image from sensors that employ the Bayer filter.<br /><br />The American scientist chose to use twice as many green pixels as red or blue to mimic the human eye's greater resolution and sensitivity to green light.<br /><br />Today, there are other techniques that are used to produce color images. One such technique uses a prism to split light into three components that are then imaged by three separate sensors. Another uses a layered design where each point on the sensor array has photosensitive receptors for all three primary colors.<br /><br />Despite these advances, the Bayer filter -- which he patented in 1976 while working for Eastman Kodak -- remains the one most commonly found in consumer cameras, camcorders, and scanners to create color images.<br /><br />The staff of Vision Systems Design magazine would like to express our sincere condolences to Dr. Bayer's family.<br /><br />He will be remembered as one of the greatest pioneers of digital imaging, get smartnoemail@noemail.orgadministratorAny fan of the British TV science fiction series Dr. 
Who will be only too familiar with one of his oldest arch-enemies, a race of creatures called the Autons -- life-sized living showroom dummies that stagger about the streets blowing up anything in their path using weapons concealed within their hands.<br /><br />When they first appeared on the show back in the 1970s, the Autons created quite a stir amongst the Dr. Who fan base. Despite the fact that they did not look particularly realistic, there was something about their dumb, robot-like movements and expressionless faces that struck fear into the audience at the time.<br /><br />Since the 1970s, robotics technology has, of course, come a long way. While it might have seemed inconceivable back then that showroom dummies might one day be equipped with any technology to make them more lifelike, or indeed, more intimidating, today it's almost expected of them!<br /><br />The folks at <a href="" target="_blank">Kee Square</a> (Milan, Italy) and <a href="" target="_blank">Almax</a> (Mariano Comense, Italy), a leading manufacturer of mannequins, would surely be the first to agree.<br /><br />Just recently, the two companies took the wraps off a new mannequin that incorporates an intelligent vision system that can help shop owners across the world "observe" and analyze who is attracted to items in their stores and reveal important details about them such as their age range, gender and race.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="272" src="" width="320" /></a></div><br />The mannequin itself has a camera installed in its head that captures the facial features of people passing through a store -- data which is then analyzed by software to provide statistical and contextual information about them to the store owners. 
The embedded software can also provide data such as the number of people passing in front of a window and at what times of day they did so.<br /><br />Fortunately for store customers, the new mannequins are not quite as sophisticated as the Autons in Dr. Who. Although they are somewhat more attractive, they are not, for example, able to move. Nor do they come equipped with any sort of weaponry such as ray guns with which to inflict harm upon innocent shoppers.<br /><br />But perhaps the best feature about them is that they are made of shock-proof polystyrene and finished with water-based paints. That means that when the time comes for them to retire they can be easily recycled into something more useful.<br /><br />More information on the mannequins can be found <a href="" target="_blank">here</a>. More information on Dr. Who can be found <a href="" target="_blank">here</a>, at Tiffany'snoemail@noemail.orgAndy WilsonOver the past decades, many individuals faced with a lack of work in their own countries have migrated to somewhat more affluent countries to eke out a living.<br /><br />Many of these folks have moved to countries in Europe and to the United States of America -- either legally or illegally -- to find work in agricultural jobs, performing the sorts of tasks that the residents of those countries would either find too demeaning or too low-paid to take up.<br /><br />These agricultural jobs usually involve living and working on farms, picking fruit and vegetables for long hours at minimum wage. 
And although that minimum wage far exceeds what such folks might be able to earn in their own countries, one month's pay is usually barely enough to keep a roof over their head, let alone buy them a nice entrec&ocirc;te tranch&eacute;e at the <a href="" target="_blank">L'Atelier de Jo&euml;l Robuchon</a>.<br /><br />Now, as if things weren't tough enough for these poor migrant workers, they are made to feel even worse by political groups that insist that the menial jobs that they perform pulling potatoes and picking oranges have taken jobs from the natives of those countries, whose lives have become poorer due to the work opportunities that are no longer available to them. <br /><br />The farmers in the US and Europe, of course, see things a bit differently. Without such low-paid workers, their produce would not be competitive with that of farmers from further afield. Indeed, in many cases, even though their pickers and pluckers are paid minimum wage, the farmers still find it hard to compete with other farmers around the world who employ their workers for even less money.<br /><br />But it looks as if, in the not-too-distant future, all of this is about to change, thanks to the deployment of robotic harvesting machinery that is under development in the US and the European Union.<br /><br />Just this week, for example, <a href="" target="_blank">Vision Systems Design</a> reported on two new developments in the field (no pun intended). One of these was a $6m project in which engineers at Purdue University and Vision Robotics have teamed up to develop an automated vision-based grapevine pruner. The second was a fully automatic vision-based robotic system to harvest both white and violet asparagus that is being funded under a European grant.<br /><br />These projects, of course, represent just the tip of the iceberg. Numerous other projects of a similar nature are in development across the world that will revolutionize farming forever. 
Of course, it may take some time before such systems are perfected, but there's no doubt in my mind that, given enough time and effort, they will be.<br /><br />The future impact on the migrant workers, however, is less clear. Will they then return to their native countries where automation is less prevalent to seek work, or travel further afield? Sadly, whether they run to the west to Tulip, Texas or to the east to Somaliland, their future employment is all used up.<br />, supportnoemail@noemail.orgAndy Wilson<div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div>When discussing the design of any new vision system with systems integrators, I'm always intrigued to discover what specific hardware and software they chose to implement their systems.<br /><br />More often than not, the two key reasons any product is chosen are, of course, its technical merits and its price. But there is always a third, and perhaps more important, reason that systems integrators opt for the products that they do. And that's the support that they receive from the distributor, or reseller, that sells them the product.<br /><br />As one might expect, the distributor or reseller that fully comprehends, and can explain, both the advantages and the disadvantages of his products is more likely to win an order than one that is simply shipping products without much of an idea of how they work or how to integrate them into a system.<br /><br />But these days, there is more to winning a sale than that. The distributor or reseller that also has some understanding of how his products will fit into the bigger scheme of things has an even greater advantage over those that simply have a basic knowledge of one or two product lines. Indeed, it is this more holistic approach that will almost guarantee that a product is specified into a new machine. 
<br /><br />In one recent conversation I had with a systems integrator, he stated quite clearly that his choice of camera, system software and lighting products had been heavily influenced by the reseller that he discussed his machine vision needs with.<br /><br />That reseller was obviously not only knowledgeable about many aspects of image processing, but was also wily enough to be able to leverage the expertise he passed along to the systems integrator into quite a lucrative sale.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="235" src="" width="320" /></a></div><br />In these strained economic times, however, many companies are reducing the number of experienced folks that they have on board in favor of younger, less well-paid individuals. Naturally enough, these folks haven't had enough years in the industry to be au fait with anything other than a basic understanding of their own company's product lines.<br /><br />Fortunately, the systems integrator that I spoke to had found and worked with a reseller that clearly understood the merits of hiring and keeping experienced, multifaceted individuals who could assist him with the task of developing his vision, vision at allnoemail@noemail.orgAndy WilsonAnyone with an Xbox hooked up to a Kinect camera will appreciate the fact that gesture recognition has added all sorts of interactive possibilities to gaming that simply weren't possible before.<br /><br />But a vision system isn't the only way of detecting the gestures of individuals to enable them to control computer systems, as one company proved this month when it launched an alternative gesture recognition technology that might challenge the role of vision in certain applications.<br /><br />That company was none other than Microchip Technology (Chandler, AZ, USA), whose so-called GestIC system is 
based on the idea of equipping a device such as a tablet PC with a number of thin electrodes that create an electric field around the device when an electric current is passed through them.<br /><br />Once a user's hand moves into the area around the tablet, the electric field distribution becomes distorted as the field lines intercepted by the hand are shunted to ground through the human body. The distortion of the field is then detected by a number of receiver electrodes integrated onto the top layer of the device.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="232" src="" width="320" /></a></div>To support this concept, Microchip Technology has -- as you might have expected -- produced an integrated circuit named the MGC3130 that not only acts as a signal generator but also contains signal conditioning and analog-to-digital converters that convert the analog signals from the receivers into a digital format. <br /><br />Once they are in that format, a 32-bit signal processor analyzes the signals using an on-chip software suite that can track the x/y/z position of the hand as well as determine the gestures of a user. 
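To get a feel for how a hand position might be recovered from electrode signals, consider the following toy sketch. Everything in it is hypothetical -- Microchip does not publish the MGC3130's algorithms, and the electrode layout and signal values are invented -- but a weighted centroid over the receiver-electrode deflections conveys the basic idea: electrodes whose field is distorted most strongly are closest to the hand.

```python
# Hypothetical sketch only: estimating a 2-D hand position from
# receiver-electrode signal deflections via a weighted centroid.
# (The real MGC3130 also tracks z; its actual method is proprietary.)

# Invented (x, y) positions of five receiver electrodes on the device, in cm
ELECTRODES = [(0.0, 0.0), (10.0, 0.0), (0.0, 6.0), (10.0, 6.0), (5.0, 3.0)]

def estimate_position(deflections):
    """Centroid of electrode positions, weighted by each electrode's
    field deflection (a larger deflection means the hand is closer)."""
    total = sum(deflections)
    if total == 0:
        return None  # no hand in the field
    x = sum(w * ex for w, (ex, ey) in zip(deflections, ELECTRODES)) / total
    y = sum(w * ey for w, (ex, ey) in zip(deflections, ELECTRODES)) / total
    return (x, y)
```

A hand hovering centrally distorts all electrodes roughly equally, so the estimate falls in the middle of the device; a hand over one corner pulls the estimate toward that corner's electrode.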
These are then relayed to an applications processor in the system that performs commands such as opening applications, pointing, clicking, zooming and scrolling.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="196" src="" width="320" /></a></div><br />While the folks at Microchip Technology believe that the GestIC system will enable the "next breakthrough in human-machine interface design" and are touting the fact that it offers the lowest power consumption of any 3-D sensing technology, the system is still limited to a detection range of up to 15 cm.<br /><br />So while it does offer an interesting alternative to a camera-based system, I don't think that the folks at Microsoft will be too worried that it will ever compete with their Kinect camera.<br /><br />Samples of Microchip's MGC3130 -- which comes in a 5x5 mm 28-pin QFN package -- are available today. Volume production is expected in April 2013 at $2.26 each in high volumes. An evaluation kit is available for $169. More information is available <a href="" target="_blank">here</a>, Italian goalnoemail@noemail.orgAndy WilsonThose of you who traveled to last week's VISION 2012 show in Stuttgart might have noticed that I wasn't the only editor from Vision Systems Design to attend the event.<br /><br />That's right. On my trip to Germany I was accompanied by none other than our European editor Dave, who was also there to discover what was new, original and inventive in the vision systems business.<br /><br />During his time at the show, I asked Dave to stop to chat with Signor Donato Montanari, the General Manager of the Vision Business Unit of Datalogic (Bologna, Italy), a company which -- as you may recall -- took over Minneapolis, MN-based PPT Vision last year. 
<br /><br />I wanted Dave to find out how a large multinational company like Datalogic was faring in these precarious economic times, as well as to discover what new technical developments, if any, had taken place since the takeover.<br /><br />On the European front, Dave was hardly surprised to hear that Datalogic's vision business had remained pretty much flat this year, since most of Southern Europe is still in the economic doldrums. But Signor Montanari painted a much more optimistic picture of his company's fortunes in the US and Asia. Thanks to the fact that the entire US Datalogic sales force had been brought to bear to sell the new vision product line, business was up ten percent this year in the US and a whopping forty percent in Asia.<br /><br />But what of the technology that Datalogic inherited, I hear you ask? Well, apparently, there have been some changes there too. While the old PPT had subcontracted out the manufacture of its cameras, the Datalogic management has now brought the manufacturing process in-house.<br /><br />But that's not all. On the hardware front, the PPT cameras that were based on an embedded PC architecture have now been redesigned and rebuilt based on digital signal processors, resulting in a subsequent cost reduction. And, in a six-month effort, the existing PPT drag-and-drop vision programming software environment has been ported over to them.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="144" src="" width="320" /></a></div><br />Now, as many of you may know, PPT had a rather interesting business model with respect to its cameras and software. If you bought cameras from the company, the software development environment was provided for free. For the time being, it appears as if Datalogic will be keeping to that model. 
<br /><br />But next year, Signor Montanari said, Datalogic has plans to make the integration of third-party software into its software development environment a whole lot easier for engineers than it has been in the past. And he also said that the company was beefing up its technical support centers across the globe to offer the capability of customizing the PPT software for specific customer applications.<br /><br />Whether the company becomes a dominant player in the machine vision business remains to be seen. But from listening to Signor Montanari speak, Dave seems convinced that it's a goal that the Italians will be trying their best to, 2012: A Space Odysseynoemail@noemail.orgAndy WilsonAccording to the most recent figures released by its organizers, the Stuttgart VISION 2012 show was still the place to be seen for those involved in the machine vision industry. Testifying to that fact, more than 7,000 visitors attended the 25th anniversary of the show last week, roughly the same number that showed up last year.<br /><br />Unlike previous years, this year all the exhibitors found themselves under one roof in the L-Bank Forum of the Stuttgart exhibition center. And there were plenty of them for the attendees to visit too -- the 25th anniversary of the show saw no fewer than 372 exhibitors parading their wares -- an increase on the 351 exhibitors that attended the show last year.<br /><br />And what a sight it was too. Unlike previous years, many smaller- to medium-sized companies had opted for much larger booths at this year's show. In a clear attempt to impress the attendees and outdo their competition, they found themselves cheek by jowl with more well-established outfits, dwarfing them with booths that appeared to be almost as high as the Bradbury Building.<br /><br />There was an increase in the number of those exhibitors that came from outside Germany this year too. 
While last year saw just 46 per cent of those exhibiting come from further afield, this year the figure was up to 49 per cent. Representing 32 countries in all, the exhibitors brought with them cameras, vision sensors, frame grabbers, software tools, illumination systems, lenses, and accessories, as well as complete machine vision systems.<br /><br />Of the attendees to the show, the organizers say that 85 per cent were involved in purchasing and procurement decisions in their company. As you might expect, most of them were primarily interested in machine vision components and applications. But an increasing number of visitors expressed an interest in turnkey machine vision systems as well.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="278" src="" width="320" /></a></div><br />Aside from checking out the new products on display, the VISION show was also a place where one could gain some insight into how vibrant the vision system industry is. At the VISION Press lunch held on Tuesday November 6, for example, Dr. Olaf Munkelt, the Managing Director of image processing software vendor MVTec Software and Chairman of the Executive Board of the VDMA Machine Vision Group, presented an overview of the state of the German machine vision market. <br /><br />The figures he showed highlighted the fact that the total turnover for machine vision systems in Germany was expected to remain pretty much flat this year, with growth of just two per cent in 2013. But there was better news from outside Germany, where orders for machine vision systems were predicted to rise at a somewhat higher rate. <br /><br />But not every company is experiencing low growth rates. 
One executive that I ran into on my way back to England from VISION 2012 claimed that his company had experienced a remarkable 20 per cent growth in orders this year, a trend he clearly expected to continue next year as he was actively looking to hire a number of engineers to meet the demand for his products.<br /><br />Next year, VISION 2013 will be staged two months earlier, from September 24 to 26, 2013, so none of us will have quite as long to wait to get our next dose of machine vision technology. <br /><br />But will that be long enough for those involved in our industry to really develop any game-changing technology? One company owner I spoke to didn't think so. He said that his outfit would be doing no more than demonstrating the same products that he displayed this year. By then, he said, at least his engineering team might have had time to iron out all the bugs in them!, of the futurenoemail@noemail.orgAndy WilsonTwenty-five years ago, a machine vision system that performed a simple inspection task may have cost $100,000. Today, a similar system based around a smart camera can perform the same task for $3,000.<br /><br />The decrease in the price of the sensors, processors and lighting components used to manufacture such systems has been driven by the widespread deployment of those components in high-volume consumer products. And that trend is likely to continue into the future.<br /><br />As the cost of the hardware of such systems has decreased, the capabilities of integrated software development environments have grown. 
As such, rather than hand-code their systems from scratch, designers can now choose from a variety of software packages with large libraries of image processing functions which they can use to program their systems.<br /><br />The combination of inexpensive hardware and easy-to-use programming tools has enabled OEM integrators to develop systems in a much shorter period of time than ever before, offering them the possibility of developing several systems each year for customers in a variety of industries.<br /><br />The decreased price of hardware and the ease of use of many software packages have also allowed many sophisticated end users to take on the role once performed by the systems integrator, developing their own systems in-house rather than turning to outside expertise.<br /><br />Over the next ten years, engineers can expect to see more of the same. As the system hardware decreases in price, however, they can also expect to see companies develop more highly specialized processors, sensors and lighting systems in an attempt to differentiate their product lines from those of their competition. 
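To see why library functions beat hand-coding, consider two primitives found in virtually every machine-vision library: binary thresholding and connected-component (blob) counting. The toy versions below are written in plain Python rather than any particular vendor's package, purely to illustrate the kind of building block a designer would normally just call from a library; with such functions to hand, a basic presence/absence inspection takes only a few lines.

```python
# Toy versions of two library primitives: thresholding and blob counting.
# A real system would call optimized library routines instead.

def threshold(image, level):
    """Binarize a grayscale image (list of rows of pixel values)."""
    return [[1 if px >= level else 0 for px in row] for row in image]

def count_blobs(binary):
    """Count 4-connected foreground regions using a flood fill."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                blobs += 1  # found a new, unvisited region
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and binary[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return blobs

# Example: a tiny 8-bit "image" with two bright parts on a dark background
image = [
    [10,  10,  200, 200, 10],
    [10,  10,  200, 200, 10],
    [10,  10,  10,  10,  10],
    [180, 180, 10,  10,  10],
]
```

Counting the blobs in the thresholded example image confirms that two parts are present -- the essence of a presence/absence check.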
<br /><br />On the software front, developers will continue to refine their software packages, adding greater capabilities while driving down the cost of deployment by offering subsets of their full development environments in the form of software apps to their customers.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />As 3-D hardware and software become more prevalent, designers will also be challenged to understand how capturing and processing images in 3-D might enable them to develop more complex systems to tackle their vision applications.<br /><br />In the December issue of <a href="">Vision Systems Design</a>, I'll be bringing out my crystal ball to see if I can predict some more emerging trends in the field of machine vision. Be sure to pick up your copy when it lands on your, Christmas timenoemail@noemail.orgAndy WilsonRegular readers of this blog might remember that a week or so ago I reported on a New Zealand engineer by the name of <a href="">Mark Hampton</a> who is attempting to fund the development of a right-angled lens for the Apple iPhone camera and Apple iPad through a site called <a href="" target="_blank">Kickstarter</a>.<br /><br />As I mentioned before, the New York City-based Kickstarter web site is a funding platform which enables creative individuals to post ideas for potential new products on the site.<br /><br />If readers of the web site like a particular product enough to buy it, they can pre-order it by pledging money to the company that has designed it. If the company then succeeds in reaching its funding goal to manufacture the product, all backers' credit cards are charged and the products are produced and delivered. 
If the project falls short of reaching its goals, no one is charged.<br /><br />Well, I'm now pleased to report that $17,030 has already been pledged for Hampton's project, and with only a $27,500 target to hit, it now looks as if his dream of making his product a reality will soon come true.<br /><br />While tracking the fortunes of Hampton, I&rsquo;ve also been checking out the other products that the Kickstarter web site has successfully funded. And I&rsquo;m pleased to say that I have found one in particular that would make the perfect present for a whole bunch of my friends this Christmas. <br /><br />The product itself is an iPhone platform called Galileo that can be controlled remotely from an Apple iPad or other iOS device. Capable of 360-degree pan-and-tilt at speeds of up to 200 degrees per second in any orientation, Galileo should prove useful not only to amateur photographers but also to folks with babies and toddlers who'd like to keep an eye on their activities!<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="243" src="" width="320" /></a></div>Rather amazingly, to put the little beast into production, entrepreneurs Josh Guyot and JoeBen Bevirt were originally seeking $100,000 in pledges on the Kickstarter site, but it appears that the project garnered so much interest that over 5,000 people backed the idea, with the result that the team raked in a whopping $702,427.<br /><br />Unfortunately, much to my dismay, it's still impossible to actually purchase one of the little robotic beasts from <a href="" target="_blank">Motrr</a> (Santa Cruz, CA, USA) -- the company that the duo set up to sell the units. 
At the present time, the best that one can do is to sign up to be notified via email when the Galileo becomes available for sale.<br /><br />Hopefully, that will be in time for, seekingnoemail@noemail.orgAndy Wilson<div class="separator" style="clear: both; text-align: center;"></div>Have you ever nodded off during a lecture or a seminar? I know I have. In my case, it's usually when I'm presented with long-winded discussions about the financial state of the economy, rather than when I'm treated to an engaging treatise on how a particular individual has developed an innovative vision inspection system.<br /><br />As a student, I had the same problem. If the lecturer wasn't particularly engaging, I found that my mind tended to wander to some other entirely more fanciful place, where I imagined that I might be occupied by some altogether more interesting activities. <br /><br />Recognizing that other students have the same lapses in attention, a professor of physics education at <a href="" target="_blank">Kennesaw State University</a> (Kennesaw, GA, USA) has been trying to uncover why by equipping students with eye-tracking technology during classroom lectures.<br /><br />His first-of-its-kind study aims to provide new insights into effective teaching techniques that can keep students engaged and motivated to learn during lectures.<br /><br />By using glasses equipped with eye-tracking technology from <a href="" target="_blank">Tobii Technology</a> (Danderyd, Sweden), Professor David Rosengrant was able to measure what students observe during a lecture, how much of their time was dedicated to the material presented in the class, and discover which factors distracted them the most. 
<br /><br />Professor Rosengrant's pilot study was held over a four-month period with eight college students in 70-minute pre-elementary education lectures at Kennesaw State University.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="213" src="" width="320" /></a></div><br />The study discredited the widely accepted belief that classroom attention peaks during the first 15 minutes of class and then generally tapers off. Instead, Rosengrant discovered that classroom attention is actually impacted by various factors throughout the duration of the lecture.<br /><br />Those factors include the verbal presentation of new material that is not contained within an instructor's PowerPoint presentation, the use of humor by the instructor, and the proximity of the instructor to the student, all of which contribute to greater attention from the student.<br /><br />Professor Rosengrant's study also concluded that "digital distractions" such as mobile phones and the Web -- particularly Facebook -- are the greatest inhibitors to retaining students' attention in the classroom. <br /><br />When I read that, I started to get a bit hot under the collar. While I appreciate that not all academics are born teachers, the least the students who attend their classes can do is to respect them enough not to fiddle with their digital paraphernalia during their lectures. Even I would show a poor presenter that amount of consideration, and that's saying something.<br /><br /><b>Related items on eye-tracking technology from Vision Systems Design that you might also find of interest:</b><br /><br /><b>1. 
<a href="">Eye tracking helps spot movement disorder</a></b><br /><br />Tobii Technology (Danderyd, Sweden) has selected a behavioral research team that used eye-tracking technology to enhance its understanding of Progressive Supranuclear Palsy (PSP) as the winner of its annual Tobii EyeTrack Award.<br /><br /><b>2. <a href="">Researchers can track what catches a designer's eye</a>&nbsp;</b><br /><br />An eye-tracking system developed by researchers at The Open University and the University of Leeds (Leeds, UK) aims to remove the constraints on creativity imposed by computer-aided design (CAD) tools.<br /><br /><b>3. <a href="">Eye test for Alzheimer's disease</a>&nbsp;</b><br /><br />UK researchers have demonstrated that people with Alzheimer's disease have difficulty with one particular type of eye-tracking test.<br /><br /><b>4. <a href="">Eye tracker spots liars with greater accuracy</a></b><br /><br />Computer scientists at the University at Buffalo (UB; Buffalo, NY, USA) are exploring whether machines can read visual cues that give away deceit.<br /><br />, analysis makes shopping simplenoemail@noemail.orgAndy WilsonA 25-year-old with a master's degree in computer science from <a href="" target="_blank">Bristol University</a> (Bristol, UK) has picked up a $100,000 cash prize after winning the Cisco British Innovation Gateway Awards for developing a novel image-matching application that can take the drudgery out of shopping for clothes.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div>Jenny Griffiths -- one of only two women in her class of thirty at the university -- developed the idea after getting frustrated with attempting to locate and purchase clothes that she liked at a reasonable price. 
So she went out and used her newfound knowledge to make the whole process a whole lot easier.<br /><br />The result of her hard work is what's now known as "Snap Fashion" -- a visual search engine that lets consumers search for clothing using images instead of words.<br /><br />In a nutshell, it comprises a free smartphone app that a user first fires up to take an image of a product that she might like to buy but perhaps can't afford. The image is then delivered to the Snap Fashion server, where algorithms developed by Griffiths automatically analyze it and return images of similar -- and hopefully less expensive -- products from a variety of retailers' websites within five seconds. The results can then be filtered based on any aspect of the product -- such as the color and cut of a dress. Finally, an item can be purchased directly from the retailer, while Snap Fashion earns commission on every sale.<br /><br />The net is cast wide courtesy of Snap Fashion's database, which currently counts more than 100 major retailers. It's a catalogue that boasts high street giants including Gap, Jigsaw, Jaeger, Uniqlo, Warehouse, L K Bennett, French Connection, Reiss, Monsoon, and Kurt Geiger, in addition to a host of department stores such as Harrods, Selfridges, Liberty, House of Fraser, and US fashion emporium Bloomingdales.<br /><br />Snap Fashion is not just a shopping tool: it also offers other tricks, such as a personal shopping service that uses body shape recognition technology to offer tips and advice on what styles best suit the user's personal body shape.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="153" src="" width="320" /></a></div><br />The Cisco British Innovation Gateway Awards were launched this year with the aim of recognizing and supporting up-and-coming innovators, entrepreneurs and businesses. 
And naturally enough, I'm delighted that one of the first winners of the award has developed a product related to image analysis!<br /><br />But if you are a budding inventor in the UK and feeling a bit miffed that you hadn't heard of the awards in time to enter them this year, don't worry. The good news is that the contest -- which aims to attract high-potential technology startups that are seeking investment and support -- is running over a five period.<br /><br />More information on the Cisco British Innovation Gateway Awards can be found <a href="" target="_blank">here</a>. Snap Fashion's home page can be found <a href="" target="_blank">here</a>, starting manufacturingnoemail@noemail.orgAndy WilsonIn today's tough economic climate, it's difficult for small teams of engineers to obtain funding from the banks to finance the development of their new products no matter how original or innovative they might be.<br /><br />Now, however, thanks to a website called Kickstarter, there's an alternative way that engineers with bright ideas can reduce the financial burden of getting their new products into the hands of early adopters.<br /><br />The folks behind the New York City-based Kickstarter web site describe it as an "all-or nothing" funding platform which enables creative individuals to post ideas for potential new products on the site. <br /><br />If readers of the web site like a particular product enough to buy it, they can pre-order it by pledging money to the company that has designed it. If the company then succeeds in reaching its funding goal to manufacture the product, all backers' credit cards are charged and the products are produced and delivered. If the project falls short of reaching its goals, no one is charged. 
<br /><br />One of the individuals excited about the Kickstarter site is Auckland, New Zealand-based engineer Mark Hampton who is hoping to raise enough funding on the site to make his dreams of producing a right-angled lens for the Apple iPhone camera and Apple iPad come true.<br /><br />Hampton started the development of his so-called HiLO lens in 2011. Since then he teamed up with an optical engineer, a mechanical designer, and an app developer to demonstrate the effectiveness of a prototype of the device which he now hopes to take into full production through his Kickstarter campaign.<br /><br />According to Hampton, the HiLO product is built from three custom designed lenses and a prism. A free app that will come with the product corrects for the mirroring of the image caused by the prism and improves image quality. <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="267" src="" width="320" /></a></div><br />It's a pretty simple concept, but one that might well fill a niche in the market for individuals who want to take high angle and low angle photos on their iPhones.<br /><br />Backers on Kickstarter can pre-purchase one of Hampton's HiLO lenses now. If the backers pledge a total of $27,500, then the pledges will be collected and an initial production run of the lens system will be made in China. So far, Hampton and his team have raised $8,450. 
Hopefully, more support will be forthcoming!<br /><br />Mark Hampton's project page on the Kickstarter web site can be found <a href="" target="_blank">here</a>, times in Asia?noemail@noemail.orgAndy WilsonA UK researcher made a projection this month that Asian countries -- excluding Japan -- will be the largest market for machine vision systems in 2016.<br /><br />According to John Morse, the author of the latest machine vision report from <a href="" target="_blank">IMS Research </a>(Wellingborough, UK), Japan has always been the largest market for machine vision in the Asia Pacific region. But despite this, Japan's economic growth is currently slow largely due to decreasing demand for its exports. <br /><br />Morse says that this is not expected to improve much over the next five years, because Japan's leading position is being eroded as other countries within the region embrace automation in their production facilities.<br /><br />Nevertheless, the report claims that the Asian region itself -- with the exception of Japan that is -- is collectively forecast to generate revenues from sales of machine vision systems that will exceed those generated in the Americas after 2012. This rapid growth is expected to continue -- revenues from the Asian region will even surpass revenues generated in Europe, the Middle East and Africa (EMEA) after 2015.<br /><br />The report projects that the strongest growth for machine vision systems will be in China, South Korea and Taiwan, reflecting the general economic growth forecast in these countries. <br /><br />The latest outlook from the International Monetary Fund (IMF) would appear to give a lot of credibility to the IMS report. The IMF is projecting, for example, that in Asia, growth in Real Gross Domestic Product (GDP) will average 6.7 percent in 2012, and is forecast to accelerate to 7.25 percent in the second half of 2012. 
<br /><br />In its latest World Economic Outlook, unveiled in Tokyo ahead of the IMF-World Bank 2012 Annual Meetings, the IMF said that the advanced economies, however, were unlikely to fare as well. <br /><br />In the US, growth will average 2.2 percent this year. Real GDP is projected to expand by about 1.5 percent during the second half of 2012, rising to 2.75 percent later in 2013. <br /><br />In the Euro area, it's not even that rosy. There, real GDP is projected to decline by 0.4 percent in 2012 overall during the second half of 2012 with public spending cutbacks and the still-weak financial system weighing on prospects.<br /><br />But despite the bright prospects that both IMS and the IMF have painted for the folks in Asia, I can't help but feel that -- with decreasing exports to the US and Europe -- they might just see their growth stunted,'s upnoemail@noemail.orgAndy WilsonWhen my nephew left college with a master's degree in computer science and electronic engineering, he was headhunted by more than a few firms, some of which were in the field of engineering and some of which were in the field of financial services.<br /><br />Being a talented young man, he was faced with a choice -- should he take one of the jobs he was offered by one of the engineering companies, or should he accept a more lucrative position at a financial organization in The Big City.<br /><br />After deliberating the issue for several days, he decided to let his heart rule his wallet and took a job working for a software development company, rather than swan off to make his fortune working in a profession that was somewhat unrelated to his education.<br /><br />It's an issue many graduates are faced with. 
After spending tens of thousands of dollars on their education, they are inevitably drawn to the idea of making as much money as possible to pay off their loans, even if it means leaving the field of engineering to do so.<br /><br />What brought this issue home to me again this week was a recent article in "The Dartmouth", the daily student newspaper of Dartmouth College, which just happens to be America's oldest college newspaper. <br /><br />The article -- which was written by Hannah Wang -- detailed the development of an Android application that uses data captured by the camera in a smartphone to analyze a person's driving habits. To do so, the application analyzes drivers' physical motions, such as head turning and blinking rates.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />The application came about as the result of a project by a chap called Thomas Bao who graduated this year from the college. Apparently, Bao began the project knowing little about machine learning, computer vision or Java. But that didn't stop him from studying the subject rigorously enough to develop the application.<br /><br />Since then, Bing You, a visiting researcher from Taiwan's Academia Sinica, has integrated Bao's driver-side app into a larger project called CarSafe, which uses the dual cameras on a smart phone to detect both driver-side and road-side information to alert drivers about potentially dangerous situations, such as unsafe following distances.<br /><br />Having now left the college, however, the talented Bao has waved goodbye to the engineering profession. According to the article, he is now working at Evolution Capital Management, a hedge fund based in Hawaii.<br /><br />Now the article, of course, doesn't specifically say why Bao chose to do so. 
It may have been for financial reasons, or it could have been because he is fond of big wave surfing. But whatever the reason, I think it's a shame that Bao and many other talented folk like him don&rsquo;t remain in the engineering business like my nephew chose to do.<br /><br /><i><b>Reference:</b></i><br /><br />1. <a href="" target="_blank">Professor creates phone app for safer driving habits</a>, N' Reshoringnoemail@noemail.orgAndy WilsonA <a href="" target="_blank">Michigan State University</a> (East Lansing, MI, USA) academic has authored a new study that claims that many US firms are moving or considering moving their manufacturing operations back to domestic soil from overseas.<br /><br />According to Tobias Schoenherr, an assistant professor of supply chain management, rising labor costs in emerging countries, high oil prices and increasing transportation costs and global risks such as political instability are fueling the trend.<br /><br />"Going overseas is not the panacea that it was thought of just a decade or so ago. Companies have realized the challenges and thus are moving back to the US," says Schoenherr.<br /><br />Schoenherr's study found that 40 per cent of manufacturing firms believe there is an increased movement of "reshoring" -- or moving manufacturing plants back to the US from countries such as China and India. 
While the results differed by industry, the trend was led by aerospace and defense, industrial parts and equipment, electronics, and medical and surgical supplies.<br /><br />The study, which was sponsored by the Council of Supply Chain Management Professionals and based on a survey of 319 firms, also found that nearly 38 per cent of companies indicated that their direct competitors have already reshored.<br /><br />In addition to rising costs and global risks, Schoenherr said companies are concerned with the erosion of intellectual property overseas and product quality problems, which can be difficult to fix when dealing with multiple time zones and language and cultural barriers.<br /><br />Rob Glassburn, the Vice President of Operations at <a href="" target="_blank">3D Engineering Solutions</a> (Cincinnati, OH, USA), would be the first to agree with Schoenherr. In a recent blog, Glassburn described how his company had recently been called in to reverse engineer parts for a popular airsoft gun maker that once manufactured its products abroad. And that, according to Glassburn, was a direct consequence of such intellectual property theft.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="213" src="" width="320" /></a></div><br />Specifically, Glassburn wrote, problems arose when the particular gun maker learned that its offshore manufacturer was using its proprietary tooling to "back door sell" guns made with its own equipment. To curb the practice, it closed down operations and moved manufacturing back to the US, which helped secure 300 domestic jobs.<br /><br />Sadly, however, no CAD models or prints of the gun parts were accessible when the gun maker returned production back to the US, and that's why 3D Engineering Solutions was called in. 
By employing the company's 3D laser scanning technology to digitize the assembly of air gun parts, the company was then able to create tooling to manufacture parts in the US once more.<br /><br />One can only hope that if the trend to reshore continues, it will also mean more business for those machine builders in the vision industry who develop systems to automate the process of inspecting those products as well!, soil simplifies root imagingnoemail@noemail.orgAndy Wilson<a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"></a> <br />As anyone who knows me will testify, I'm about as fond of gardening as I am of golf. My ideal garden would either be covered over with concrete or short pile synthetic turf, thereby eliminating the need to maintain a lawn or take care of any plants and shrubs.<br /><br />Despite that fact, I'm always intrigued to read how researchers and scientists across the world are using innovative image processing systems to analyze the behavior of plants.<br /><br />There's no doubt that by studying the growth of the roots of plants, and determining what factors influence it, scientists might develop hardier variety of crops that might be more resistant to disease and climate change.<br /><br />In February this year, one team of researchers at the University of Nottingham (Nottingham, UK) was awarded a 3.5m Euro grant to do just that. They plan to image wheat roots in a move that will enable them to select new agricultural varieties that are more efficient at water and nutrient uptake.<br /><br />To do so, the researchers there plan to use X-ray Micro Computed Tomography to capture images of the shape and branching patterns of roots in soil. Those images will then be fed into the researchers "RooTrak" software which overcomes the problem of distinguishing between roots and other elements in the soil. 
<br /><br />Now, however, discerning the roots of the plants from the soil surrounding them could become a lot easier, thanks to a team from the James Hutton Institute (Aberdeen, Scotland) and the University of Abertay (Dundee, Scotland) who have developed a see-through soil based on a synthetic composite known as Nafion.<br /><br />They claim that the product is very similar to real soil in terms of physical and biological variables, in terms of its water retention, its ability to hold nutrients and its capability for sustaining plant growth.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="300" /></a></div><br />Lionel Dupuy, a theoretical biologist in the ecological sciences group at the James Hutton Institute, said that the transparent soil could be used by researchers to study the spread and transmission of soil borne pathogens, screen the root systems of a range of genotypes, as well as understand how plants or microbes access nutrients that are heterogeneously distributed in the soil.<br /><br />While the formulation of the new soil may well have taken the scientists two years to perfect, to me, the real lesson to be learned from its development is the degree of lateral thinking that the researchers employed to solve the problem of how best to capture images of roots in soil.<br /><br />Rather than just throw complex hardware and software at the problem, they took a completely different approach by creating a new media that may ultimately enable researchers to image the roots of plants using vision systems that are a lot simpler than those that are in use today.<br /><br /><i><b>References:</b></i><br /><b><br />1. 
<a href="" target="_blank">Software gets to the root of the problem</a></b><br /><br />A team of researchers at the University of Nottingham (Nottingham, UK) has developed image analysis software that can automatically distinguish plant roots from other materials found in soil.<br /><br /><b>2. <a href="" target="_blank">Robotic image-processing system analyzes plant growth</a></b><br /><br />Researchers at the University of Wisconsin&ndash;Madison (Madison, WI, USA) have developed an image-processing system that captures time-lapse images of how plants grow.<br /><br /><b>3. <a href="" target="_blank">Cameras get to the root of global warming</a></b><br /><br />Researchers at the Oak Ridge National Laboratory (Oak Ridge, TN) are to use a system of minirhizotrons to examine the effects on elevated temperatures and levels of carbon dioxide on the roots of plants in,, weissbier, and vision systemsnoemail@noemail.orgAndy WilsonLast week, I dispatched our industrious European Editor Dave Wilson off to the rather lovely Bavarian city of Munich to gain some insight into the work that is being undertaken by companies in the region.<br /><br />During his brief sojourn in Germany, Dave met up with a number of outfits involved in the business of developing vision systems. One of these was Opto -- a small to medium-sized private enterprise with around 35 employees based in the town of Grafelfing on the outskirts of Munich.<br /><br />Now at the outset, it might seem that a company of such a size might not have a whole lot to discuss. 
But first appearances can be deceptive, as Dave discovered when Markus Riedi, the President of Opto, gave him a brief presentation on what the company had been up to over the years.<br /><br />During that presentation, Dave realized that, while the company might best be known for the optical components that it markets, in fact, around 55 percent of its business comes from developing rather complex custom-built products, where it combines its expertise in optics, mechanics, software and electronics to deliver complete modules that its customers can integrate into their own machines.<br /><br />Herr Riedi showed Dave several examples of the sorts of engineering projects that the company had undertaken. One was an integrated imaging module developed for the inspection of semiconductor dies. Another was an optical subsystem used to inspect pixels on an LCD screen. Then, there was an opto-mechanical module for integration into a laser eye surgery system. And, last but not least, was an imaging system the company had developed to image cells in an embryo incubation machine.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="257" src="" width="320" /></a></div><br />After the presentation, Herr Riedi told Dave that his company was very selective about the companies that it works with to develop products, and only targets areas where the company can provide a lot of value added expertise.<br /><br />And the strategy appears to be paying off. From a 0.5m Euro business in 2006, Herr Riedi has grown the company to the 7m Euro business that it is today. 
By 2020, he told Dave that he hopes to push that figure up to the 20m Euro mark.<br /><br />One way he plans to do that is to actively promote the products that his customers are manufacturing. The idea is a simple one -- the more products they sell, the more subsystems that Opto sells. To do so, Reidi has already started to populate his company's web site with examples of the end-user products that his complex optical subsystems have been designed into.<br /><br />Impressed with the caliber of companies like Opto, Dave is now looking forward to the day when he might take another trip to Bavaria to meet up with yet more folks involved in the imaging business. But although he tells me that his motives are purely altruistic, I have a suspicion that the quality of the local Bavarian weisswurst and weissbier might also have something to do with, Systems in Actionnoemail@noemail.orgAndy WilsonAs regular readers of this blog might recall, a few weeks ago I decided to hold a competition in which I challenged systems integrators to email me images of their very own vision systems in action.<br /><br />To encourage readers to enter the aptly named "Vision Systems in Action 2012" competition, I promised that the winning images that we received would be published in an upcoming blog, providing the winners with lots of publicity and, potentially, a few sales leads as well. <br /><br />Because the competition didn't come with any prizes, however, the response was less than spectacular. Nevertheless, the Vision Systems Design judging panel were impressed by the high standard and diversity of the photographs we did receive. 
And now, after several hours deliberating over the entries, I'm pleased to say that our judges have chosen a winner as well as a runner up.<br /><br />The winner of the "Vision Systems in Action 2012" competition is none other than Earl Yardley, the Director of <a href="" target="_blank">Industrial Vision Systems</a> (Kingston Bagpuize, UK) who submitted a rather stunning image of a vision system his company has developed to inspect a medical device.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="248" /></a></div><br />The judges unanimously decided that Yardley's photograph should take first prize, not only for its quality, but the fact that it followed the overall brief set by the judging panel. They were particularly impressed by the photographer's use of lighting as well as the effective use of the color blue which dominated the image. <br /><br />The runner-up in the "Vision Systems in Action 2012" competition was Vincent Marcoux, the sales and marketing co-ordinator of <a href="" target="_blank">Telops</a> (Quebec, Canada). He submitted a rather stunning picture of the Chateau Frontenac which was designated a National Historic Site of Canada in 1980. Marcoux captured the image of the chateau using the company's very own HD-IR 1280 x 1024 infrared camera.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="256" src="" width="320" /></a></div><br />The judges were extremely impressed by the exquisiteness of the image, as well as the sense of foreboding that it conveyed. 
Our panel was particularly taken by the effectiveness of the infrared imaging technique as well as the striking use of the color orange which dominated the image.<br /><br />As the Editor-in-Chief of <a href="">Vision Systems Design</a>, I would like to thank everyone for their interest in the "Vision Systems in Action" competition and for taking the time and effort to participate. Perhaps next year, we shall do it, the winners are?noemail@noemail.orgAndy WilsonAs regular readers to this blog might recall, a few weeks ago I decided to hold a competition in which I challenged systems integrators to email me images of their very own vision systems in action.<br /><br />To encourage readers to enter the aptly named "Vision Systems in Action 2012" competition, I promised that the winning images that we received would be published in an upcoming blog, providing the winners with lots of publicity and, potentially, a few sales leads as well. <br /><br />Because the competition didn't come with any prizes, however, the response was less than spectacular. Nevertheless, the Vision Systems Design judging panel were impressed by the high standard and diversity of the photographs we did receive. 
And now, after several hours deliberating over the entries, I'm pleased to say that our judges have chosen a winner as well as a runner up.<br /><br />The winner of the "Vision Systems in Action 2012" competition is none other than Earl Yardley, the Director of <a href="" target="_blank">Industrial Vision Systems</a> (Kingston Bagpuize, UK) who submitted a rather stunning image of a vision system his company has developed to inspect a medical device.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="248" /></a></div><br />The judges unanimously decided that Yardley's photograph should take first prize, not only for its quality, but the fact that it followed the overall brief set by the judging panel. They were particularly impressed by the photographer's use of lighting as well as the effective use of the color blue which dominated the image. <br /><br />The runner-up in the "Vision Systems in Action 2012" competition was Vincent Marcoux, the sales and marketing co-ordinator of <a href="" target="_blank">Telops</a> (Quebec, Canada). He submitted a rather stunning picture of the Chateau Frontenac which was designated a National Historic Site of Canada in 1980. Marcoux captured the image of the chateau using the company's very own HD-IR 1280 x 1024 infrared camera.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="256" src="" width="320" /></a></div><br />The judges were extremely impressed by the exquisiteness of the image, as well as the sense of foreboding that it conveyed. 
Our panel was particularly taken by the effectiveness of the infrared imaging technique as well as the striking use of the color orange which dominated the image.<br /><br />As the Editor-in-Chief of <a href="">Vision Systems Design</a>, I would like to thank everyone for their interest in the "Vision Systems in Action" competition and for taking the time and effort to participate. Perhaps next year, we shall do it again.<div class="blogger-post-footer"><img width='1' height='1' src='' alt='' /></div>, your own supercomputernoemail@noemail.orgAndy WilsonMany image processing tasks are computationally intensive. As such, system integrators are always on the lookout for any means that will help them to accelerate their application software.<br /><br />One way to do this is to determine whether an application could be optimized -- either by hand or by using optimization tools such as Vector Fabrics' (Eindhoven, The Netherlands) <a href="" target="_blank">Pareon</a> -- to enable it to take advantage of the many processing cores that are in the latest microprocessors from AMD and Intel. <br /><br />If an application can be considered to be easily separated into a number of parallel tasks -- such as those known in the industry as "embarrassingly parallel problems" -- then the only limitation the systems integrator has is how to source enough inexpensive processors to perform the task.<br /><br />Fortunately, since the advent of the GPU, cores are plentiful. As such, many engineers are harnessing the power of games engines such as GE Force&rsquo;s GTX 470 -- which sports no less than 448 CUDA cores and 1GByte of memory -- to <a href="" target="_blank">vastly accelerate their image processing applications</a>.<br /><br />Now in a few cases where engineers really need to harness even more hardware power, they have only one alternative -- build it themselves. 
That, indeed, is exactly what engineers at the Air Force Research Laboratory (Rome, NY, USA) have done.<br /><br />Their massive supercomputer -- which was developed for the Air Force for image processing tasks -- is ranked as one of the fortieth fastest computers in the world. Yet, believe it or not, it has been constructed by wiring together no less than <a href="" target="_blank">1,700 off-the-shelf PlayStation 3 gaming consoles</a>!<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="217" src="" width="320" /></a></div><br />Now if you are anything like me, you are probably wondering how you might be able to design such a beast yourself, while doing so without shelling out an inordinate sum of money to buy so many Sony games consoles.<br /><br />If you do, you might like to check out the web page of Professor Simon Cox from the University of Southampton (Southampton, UK), who, together with a team of computer scientists at the university (and his six year old son James) has built a supercomputer out of <a href="" target="_blank">Raspberry Pi's</a>, a rats nest of cables and an awful lot of Lego.<br /><br />"As soon as we were able to source sufficient Raspberry Pi computers we wanted to see if it was possible to link them together into a supercomputer. We installed and built all of the necessary software on the Pi starting from a standard Debian Wheezy system image," says Professor Cox.<br /><br />The machine, named "Iridis-Pi" after the university's Iridis supercomputer, runs off a single 13A mains socket and uses a Message Passing Interface to enable the processing nodes to communicate over Ethernet. 
The system has a total of 64 processors and 1Tb of memory (16GByte SD cards for each Raspberry Pi).<br /><br />Now I'm not about to claim that this supercomputer is going to rank up there with the PlayStation-based system built for the US Air Force, but it certainly would be a fun project to build and experiment on. And at a price of under $4000, who wouldn't want to give it a go?<br /><br />Fortunately, for those interested in doing so, the learned Professor has published a step-by-step guide so you can <a href="" target="_blank">build your own supercomputer Raspberry Pi supercomputer </a>without too much effort.<br /><br />The Southampton team wants to see the low-cost supercomputer used to enable students to tackle complex engineering and scientific challenges. Maybe the system isn't really the most cost effective way to do that, but it certainly is inspirational.<br /><br /><i>Editor's note:&nbsp;</i> PA Consulting Group and the Raspberry Pi Foundation have teamed up to challenge schoolchildren, students and computer programmers to develop a useful application using a Raspberry Pi that will make the world a better place. I'm sure they would welcome ideas from the imaging community! Details on the competition can be found <a href=";I-_-Raspi-_-Homepage" target="_blank">here</a>.<br /><br /><br />, your iPhone into an IR cameranoemail@noemail.orgAndy WilsonIf you live an old drafty house like I do, you're probably not looking forward to another long cold winter -- not in the least because you will inevitably find yourself shelling out exorbitant sums of money just to keep the place nice and toasty.<br /><br />Fortunately, since the advent of thermal imaging cameras, it's now pretty easy to identify patterns of heat loss from your property and to then take some remedial action to fix any problems.<br /><br />Due to the cost of the cameras, however, it's unlikely that you will want to go out and buy one yourself. 
It's more likely that you will call on the services of a professional home inspector or energy auditor who will bring their own thermal imaging kit around to your properties to perform the task. <br /><br />Even a professional survey, however, isn't likely to come cheap, although probably a darned sight less expensive than buying your own camera.<br /><br />Faced with these two alternatives, engineer Andy Rawson decided to turn his iPhone into a thermal camera by developing custom-built hardware and software solution that would interface to it. <br /><br />More specifically, Rawson designed a PCB board that sports a <a href="" target="_blank">Melexis</a> (Ieper, Belgium) MLX90620 FIRray device which can measure thermal radiation between -20&deg;C to 300&deg;C thanks to its 16 x 4 element far infrared (FIR) thermopile sensor array. The software then transmits the thermal images collected by the infrared sensor on Rawson&rsquo;s board to the iPhone through its dock connector after which they are overlaid onto the phone's display together with numerical temperature values.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="" width="320" /></a></div><br />Having developed the hardware and the software, Rawson says that he would now like to make and sell the systems so others can save money and energy. He figures he should be able to manufacture and sell them for around $150.<br /><br />Nevertheless, this is also going to be an open source hardware project, so if you want to make your own systems, that's fine by him too. A man of his words, Rawson posted the iPhone code and the board layout on the internet this week. Interested readers can find it <a href="" target="_blank">here</a>.<br /><br />While he might be a talented engineer, Rawson admits that he is terrible at dreaming up names for his projects! 
So he's encouraging people to submit names for the new design to his web site. The winner will receive one of the thermal imaging systems for free.<br /><br />A video of the thermal imaging system in action can be seen on YouTube <a href=";v=pIb1scnD67o" target="_blank">here</a>.<br /><br /><b>&hellip;comes home</b><br /><i>Andy Wilson</i><br /><br />Three Swedish researchers from the Centre for Autonomous Systems (CAS) at the <a href="" target="_blank">Kungliga Tekniska Hogskolan</a> (KTH) in Stockholm, Sweden are asking people to get involved in a crowdsourcing project to build a library of 3-D models of objects captured using their Microsoft Kinect cameras.<br /><br />The idea behind the so-called Kinect@Home project -- which was started by Alper Aydemir, Rasmus Goransson and Professor Patric Jensfelt -- is to acquire a vast number of such models from the general public that robotics and computer vision researchers can then use to improve their algorithms.<br /><br />The researchers chose the Microsoft Kinect camera for some pretty obvious reasons. Not only can it be used to capture both RGB images and depth values of objects, but since its launch it has entered the homes of some 20 million people, making it a perfect piece of hardware for a crowdsourcing task.<br /><br />Before any captured image frames of an object from the Kinect can be uploaded to the Kinect@Home server, users first need to connect their Kinect camera to their PC and install a plug-in. Once they have done so, the website starts showing the live Kinect images in a browser to confirm that the software is working correctly.<br /><br />Next, the plug-in can be used to start uploading captured frames of an object to the researchers' Kinect@Home server. After uploading is complete, optional metadata can be associated with the model of the object.
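Part of what makes the Kinect so useful for building 3-D models is how directly its depth frames map to 3-D points: each depth pixel can be back-projected with the standard pinhole camera model. A minimal sketch, assuming rough, commonly quoted ballpark intrinsics for the original 640 x 480 Kinect depth camera (a calibrated device would supply its own):

```python
# Illustrative sketch, not the Kinect@Home code. The intrinsics below are
# rough ballpark values for the original Kinect's 640 x 480 depth camera.

FX = FY = 585.0          # assumed focal lengths, in pixels
CX, CY = 320.0, 240.0    # assumed principal point (image center)

def depth_to_point(u, v, depth_m):
    """Back-project pixel (u, v) with depth in meters to a camera-frame
    (x, y, z) point using the pinhole model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# A pixel at the image center lies on the optical axis...
print(depth_to_point(320, 240, 2.0))   # (0.0, 0.0, 2.0)
# ...while off-center pixels fan out in proportion to their depth.
print(depth_to_point(905, 240, 1.0))   # (1.0, 0.0, 1.0)
```

Run over a whole frame, this yields the point cloud that model-building software then registers and fuses across the many frames a user uploads.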
As well as uploading their own models to the site, users can also download models created by others and import them into their own 3-D modeling software packages.<br /><br />To display the models over the web, the resolution of the models has been lowered for the present, but the researchers say that as they acquire faster servers and more bandwidth, this will change dramatically.<br /><br />At present, the Kinect@Home browser plug-in only runs on a PC running Microsoft Windows Vista, Windows 7 or Windows 8, but the Swedish software engineers would be pleased to talk to any other software developers who might be interested in porting the browser plug-in to the Linux or Mac operating systems, as well as providing support for Microsoft&rsquo;s Software Developer Kit.<br /><br />If you do give the software a try and your models look a bit messed up when they appear in the browser, it's probably your fault. To get the best results from the system, the software developers advise users to move their Kinect cameras slowly and not to point them towards blank walls or featureless, empty spaces.<br /><br />Personally, I'm tempted to go out and buy a Kinect just to see what Kinect@Home is like. But if you already have one, you can try out the software <a href="" target="_blank">here</a>.<br /><br /><b>&hellip;with vision piece together damaged coral</b><br /><i>Andy Wilson</i><br /><br />The deep waters west of Scotland are characterized by large reef-forming corals that provide homes to thousands of animals. But Scottish corals are threatened by the adverse impacts of bottom fishing, which damages and kills large areas of reef.<br /><br />At present, the only solution to the problem is to employ scuba divers to reassemble the coral fragments on the reef framework.
However, the method has had only limited success because the divers cannot spend long periods underwater or reach depths of over 200 meters where some of the deep-sea coral grows.<br /><br />Now, however, researchers at <a href="" target="_blank">Heriot-Watt University</a> (Edinburgh, Scotland) are embarking on a project that will see the teams of scuba divers replaced by a swarm of intelligent robots.<br /><br />The so-called "Coralbots" project is a collaborative effort led by Dr. Lea-Anne Henry from the School of Life Sciences in partnership with Professor David Corne from the School of Mathematical and Computer Science and Dr. Neil Robertson and Professor David Lane from the School of Engineering and Physical Sciences.<br /><br />Their idea is to use the small autonomous robots to seek out coral fragments and re-cement them to the reef. To help them do just that, the computers on board the robots will distinguish the fragments from other objects in the sea through the use of object recognition software that is under development.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="" width="320" /></a></div><br />If the researchers can realize their goals, swarms of such robots could be deployed immediately after a hurricane or in a deep area known to be impacted by trawling, and rebuild a reef in days or weeks.<br /><br />While it might seem pretty ambitious, the folks at Heriot-Watt have plenty of experience in underwater robotics and signal and image processing.
At the university's Ocean Systems Lab, they have previously developed obstacle avoidance and automatic video analysis algorithms, as well as autonomous docking and pipeline inspection systems.<br /><br />The team of researchers working on the new project is supported by Heriot-Watt Crucible Funding, which is specifically designed to kick-start ambitious interdisciplinary projects.<br /><br /><b>Reference: <a href="" target="_blank">Underwater robots to 'repair' Scotland's coral reefs.</a> BBC technology news.</b><br /><br /><b>&hellip;time</b><br /><i>Andy Wilson</i><br /><br />Since its launch in 1990, the Hubble telescope has beamed hundreds of thousands of images back to Earth, shedding light on many of the great mysteries of astronomy.<br /><br />But of all the images that have been produced by the instruments on board the telescope, only a small proportion are visually attractive, and an even smaller number are ever actually seen by anyone outside the small groups of scientists who publish them.<br /><br />To rectify that matter, the folks at the European Space Agency (ESA) decided to hold a contest that would challenge members of the general public to take never-before-publicized images from Hubble's archives and make them more visually captivating through the use of image processing techniques.<br /><br />This month, after sifting through more than 1000 submissions, the ESA has decided on the winner of its so-called Hubble's Hidden Treasures competition -- a chap by the name of Josh Lake from the USA, who submitted a stunning image of NGC 1763, part of the N11 star-forming region in the Large Magellanic Cloud.<br /><br />Lake produced a two-color image of NGC 1763 which contrasted the light from glowing hydrogen and nitrogen.
The image is not in natural colors because hydrogen and nitrogen produce almost indistinguishable shades of red light, but Lake processed the images to separate out the blue and red, dramatically highlighting the structure of the region.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="" width="320" /></a></div><br />Through the publicity gained from the exercise, the organizers of the competition have undoubtedly attracted numerous people to the <a href="" target="_blank">Hubble web site</a> to see the many other spectacular images produced by the other folks who entered the contest.<br /><br />Here at Vision Systems Design, I&rsquo;d like to emulate the success of the Hubble's Hidden Treasures competition by inviting systems integrators to email me any astonishing images that they may have taken of their very own vision systems in action.<br /><br />My "Vision Systems in Action" competition may not come with any prizes, but I can promise that the best images we receive will be published in an upcoming blog, providing the winners with lots of publicity and, potentially, a few sales leads as well.<br /><br />If you do decide to enter, of course, please take the time to accompany any image you submit with a brief description of the vision system and what it is inspecting. Otherwise, you will be immediately disqualified!<br /><br />The "Vision Systems in Action" competition will close on September 15, 2012.
You can email your entries to me at <a href=""></a>.<br /><br /><b>&hellip;from the enemy</b><br /><i>Andy Wilson</i><br /><br />Camouflage is widely used by folks in the military to conceal personnel and vehicles, enabling them to blend in with their background environment or making them resemble anything other than what they really are.<br /><br />In modern warfare, however, a growing number of sensors can 'see' in parts of the spectrum where people cannot. Therefore, <a href="">designing camouflage</a> for a wide variety of terrains, and enabling it to be effective across the visual, ultraviolet, infrared and radar bands of the electromagnetic spectrum, is crucial.<br /><br />One way to do this is to examine how the natural camouflage of animals enables them to hide from predators by blending in with their environment, and then to mimic those very same techniques using man-made materials.<br /><br />Thinking along such lines, a team of researchers from <a href="" target="_blank">Harvard University</a> (Cambridge, MA, USA) announced this month that they have developed a rather interesting system that allows robots inspired by creatures like starfish and squid to camouflage themselves against a background.<br /><br />To create the camouflage, the researchers create fine micro-channels in sheets of silicone using 3-D printers, which they then use to dress the robots. Once the robots are covered with the sheets, the researchers can pump colored liquids into the channels, causing the robots to mimic the colors and patterns of their environment.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />The system's camouflage capabilities aren't limited to visible colors, however. By pumping heated or cooled liquids into the channels, the robots can also be thermally camouflaged.
What's more, by pumping fluorescent liquids through the micro-channels, the silicone sheets wrapped around the robots can also be made to glow in the dark.<br /><br />According to Stephen Morin, a postdoctoral fellow in the Department of Chemistry and Chemical Biology at Harvard University, there is an enormous amount of spectral control that can be exerted with the system. In the future, he envisages designing color layers with multiple channels that can be activated independently.<br /><br />Dr. Morin believes that the camouflage system that the Harvard researchers have developed will provide a test bed that will help researchers answer some fundamental questions about how living organisms most efficiently disguise themselves.<br /><br />For my money, however, it might be more lucrative to see if the camouflage could be deployed to help the military hide its personnel in the field more effectively.<br /><br /><b>Reference: </b><a href="" target="_blank">Camouflage and Display for Soft Machines</a>, Science magazine, 17 August 2012: Vol. 337 no. 6096 pp.
828-832.<br /><br /><b>&hellip;tender trap</b><br /><i>Andy Wilson</i><br /><br />One of the great advantages of being the head editorial honcho here at <a href="">Vision Systems Design</a> magazine is that I'm able to spend a great deal of my time visiting systems builders who develop image processing systems that are deployed to inspect products in industrial environments.<br /><br />During the course of my conversations with the engineers at these companies, I'm always intrigued to discover -- and later convey to the readers of our magazine -- how they integrate a variety of hardware components and develop software using commercially available image processing software packages to achieve their goals.<br /><br />Although it's always intellectually stimulating to hear how engineers have built such systems, what has always interested me more are the reasons why engineers choose the hardware or software that they do.<br /><br />Primarily, of course, such decisions are driven by cost. If one piece of hardware, for example, is less expensive than another and will perform adequately in any given application, then it&rsquo;s more likely than not to be chosen for the job.<br /><br />The choice of software, on the other hand, isn't always down to just the price of the software package itself. If a small company has invested time and money training its engineers to create programs using one particular software development environment, it's highly likely that the same software will be chosen time after time for the development of any new systems.
The cost involved in retraining engineers to learn enough about a new package might simply be too exorbitant, even though it might offer some technical advantages.<br /><br />To ensure that they do not get trapped with outmoded software, however, engineering managers at systems builders need to meet with a number of image processing software vendors each year -- including the one that they currently use -- and ask them to provide an overview of the strategic direction that they plan to take in forthcoming years.<br /><br />If it becomes clear during such a meeting that there is a distinct lack of such direction on the software vendor's part, then those engineering managers should consider training at least one of their engineers to use a new package that might more effectively meet the demands of their own customers in the future.<br /><br />Certainly, having attended more than a few trade shows this year, it's become fairly obvious to me which software vendors are investing their own money in the future and which are simply paying lip service to the task. And if you don't know who I'm talking about, maybe you should get out more.<br /><br /><b>&hellip;big head</b><br /><i>Andy Wilson</i><br /><br />It's been known for quite some time that the overall size of an individual's brain can be used to judge how intelligent he or she is. More specifically, it's been discovered that the size of the brain itself accounts for about 6.7 percent of individual variation in intelligence.<br /><br />More recent research has pinpointed the brain's lateral prefrontal cortex, a region just behind the temple, as a critical hub for high-level mental processing, with activity levels there predicting another 5 percent of variation in individual intelligence.<br /><br />Now, new research from Washington University in St.
Louis suggests that another 10 percent of individual differences in intelligence can be explained by the strength of the neural pathways connecting the left lateral prefrontal cortex to the rest of the brain.<br /><br />Washington University's Dr. Michael W. Cole -- a postdoctoral research fellow in cognitive neuroscience -- conducted the research, which provides compelling evidence that those neural connections make a unique contribution to the cognitive processing underlying human intelligence.<br /><br />The discovery was made after the Washington University researchers analyzed functional magnetic resonance brain images captured as study participants rested passively and also as they were engaged in a series of mentally challenging tasks, such as indicating whether a currently displayed image was the same as one displayed three images ago.<br /><br />One possible explanation of the findings is that the lateral prefrontal region is a "flexible hub" that uses its extensive brain-wide connectivity to monitor and influence other brain regions. While other regions of the brain make their own special contributions to cognitive processing, it is the lateral prefrontal cortex that helps coordinate these processes and maintain focus on the tasks at hand, in much the same way that the conductor of a symphony monitors and tweaks the real-time performance of an orchestra.<br /><br />Now this discovery, of course, could have some important implications. Imagine, for example, a future where employers insisted that all their prospective employees underwent such a scan as part of the interviewing process so that they could ensure that they always hired folks with lots of gray matter.<br /><br />That thought might worry you, but not me. You see, my old man was always telling me that I had a big head.
Then again, maybe he never meant his remarks to be taken as a compliment.<br /><br /><b>Interested in reading more about the uses of magnetic resonance imaging in medical applications? Here's a compendium of five top news stories on the subject that Vision Systems Design has published over the past year.</b><br /><br /><b>1. <a href="">MRI maps the development of the brain</a></b><br /><br />Working in collaboration with colleagues in South Korea, scientists at Nottingham University (Nottingham, UK) aim to create a detailed picture of how the Asian brain develops, taking into account the differences and variations which occur from person to person.<br /><br /><b>2. <a href="">Ultraviolet camera images the brain</a></b><br /><br />Researchers at Cedars-Sinai Medical Center (Los Angeles, CA, USA) and the Maxine Dunitz Neurosurgical Institute are investigating whether an ultraviolet camera on loan from NASA's Jet Propulsion Laboratory could help surgeons perform brain surgery more effectively.<br /><br /><b>3. <a href="">Imaging technique detects brain cancer</a></b><br /><br />University of Oxford (Oxford, UK) researchers have developed a contrast agent that recognizes and sticks to a molecule called VCAM-1, which is present in large amounts on blood vessels associated with cancer that has spread to the brain from other parts of the body.<br /><br /><b>4. <a href="">Imaging the brain predicts the pain</a></b><br /><br />Researchers from the Stanford University School of Medicine (Stanford, CA, USA) have developed a computer-based system that can interpret functional magnetic resonance (fMRI) images of the brain to predict thermal pain.<br /><br /><b>5.
<a href="">Camera takes a closer look at the workings of the brain</a></b><br /><br />Optical imaging of blood flow or oxygenation changes is useful for monitoring cortical activity in healthy subjects and individuals with epilepsy or those who have suffered a&hellip;<br /><br /><b>&hellip;good month for the motors</b><br /><i>Andy Wilson</i><br /><br />One of the biggest markets for vision systems is the automotive industry, where systems are not only deployed to perform numerous quality inspection tasks on the production line but, increasingly, within the automobiles themselves to provide additional levels of safety for both drivers and pedestrians.<br /><br />So I was pleased to hear that this month has proved a particularly splendid one for the industry. According to a monthly sales forecast developed by J.D. Power and LMC Automotive, July&rsquo;s new-vehicle retail sales are expected to post the second-strongest year-over-year growth rate of the past 12 months.<br /><br />"Retail sales got off to a fast start in July, and while they've slowed down a bit as the month has progressed, through the first 16 selling days, they're still up 15.1 percent, compared to July 2011," said John Humphrey, senior vice president of global automotive operations at J.D. Power and Associates.<br /><br />All major segments are expected to show year-over-year sales gains in July, with the exception of the midsize crossover utility vehicle segment. That aside, the sub-compact conventional, midsize conventional and compact conventional segments are projected to show year-over-year increases of 28 per cent or more.<br /><br />The report goes on to say that through the first half of this year, North American light-vehicle production volume increased 22 per cent compared with the same period in 2011.
More than 1.4 million additional vehicles have been built in the first six months of this year relative to the first half of 2011, with inventory replenishment and stronger demand in the first quarter being the main factors for the higher production volume.<br /><br />Honda and Toyota's production in the first half of this year is up 75 per cent and 65 per cent, respectively, as both manufacturers continue to recover from the impact of the Japanese tsunami. What is more, US manufacturing growth is outperforming the rest of North America, with a 26 per cent year-to-date increase. Production in Mexico has increased 13 per cent, and Canadian manufacturing is up 19 per cent.<br /><br />While this is clearly good news for the auto makers themselves, it also bodes well for many of us involved in the vision systems business who supply vision systems to them.<br /><br />You can find more details of the report <a href="" target="_blank">here</a>.<br /><br /><b>&hellip;this way</b><br /><i>Andy Wilson</i><br /><br />As the editor of Vision Systems Design, I get to fly around a lot during the course of my work. But during my travels, there is little time to observe the behavior of the individuals in the various countries that I visit.<br /><br />However, that's definitely something in which Dr. Rajshree Mootanah, the Director of the Medical Engineering Research Group at <a href="" target="_blank">Anglia Ruskin University</a> (Chelmsford, Essex) in the UK, is interested.<br /><br />You see, the learned doctor is currently involved in a project to <a href="">measure the gait of individuals</a> with the aim of using the data she acquires as a measure by which the joint function of those who have just undergone hip or knee surgery can be assessed.<br /><br />Now, a lot of work in this area has already been carried out by researchers at the <a href="" target="_blank">Hospital for Special Surgery in New York</a>, one of the leading hospitals for orthopedics in the US.
But the trouble is that the database of normal gaits from that hospital was captured, naturally enough, from New Yorkers.<br /><br />Dr. Mootanah believes that the people in the county of Essex in the UK are likely to have a different gait to New Yorkers, and that her research project to establish a local database will allow more accurate testing and analysis of UK patients.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />"The only database we have is of the New York population and we believe there may be slight but still significant differences to the way our local population walks due to the different racial make-up of the two groups," Dr. Mootanah says.<br /><br />For that reason, her team is now on the lookout for volunteers, aged 18 or over, who are able to walk without impediment. The volunteers will have the force of their steps measured by special pressure plates embedded in the floor while their gait is recorded by a 3-D motion capture system.<br /><br />The results from Mootanah's research will certainly be interesting and may be more useful than she realizes.
According to the New York Times, five studies presented at the Alzheimer&rsquo;s Association International Conference in Vancouver this month provided striking evidence that when a person's walk gets slower or becomes more variable or less controlled, his or her cognitive function is also suffering.<br /><br />So not only could the database created by Mootanah be useful as a means to evaluate patients who have undergone surgery, it might also provide a valuable tool to other researchers, who might use it to evaluate the cognitive functions of individuals.<br /><br /><b>Reference: <a href="" target="_blank">Footprints to Cognitive Decline and Alzheimer's Are Seen in Gait</a>, The New York Times, July 16, 2012.</b><br /><br /><b>&hellip;vision beats the clock</b><br /><i>Andy Wilson</i><br /><br />Whether you are short or tall, skinny or overweight, at some time in your life you have most likely used a pedestrian crosswalk to help you cross a busy street.<br /><br />Using a crosswalk is undoubtedly a much safer bet than simply choosing your own spot to cross the road and risking being hit by a moving vehicle.
Nevertheless, using such crosswalks can be rather an intimidating affair.<br /><br />That's especially true of those newfangled crossings commonly seen in Florida that incorporate a countdown timer to help pedestrians know just how long they have to cross the road.<br /><br />While they undoubtedly put the spring back in the step of many pedestrians, who can then visualize just how long they have before a horde of Fords starts hurtling towards them, they don't do much for the blood pressure of disabled or elderly people who may not be able to increase their velocity to beat the countdown.<br /><br />Now, thanks to the help of a 3-D vision system, engineers at Migma (Walpole, MA) have come up with an interesting solution to the problem that has already been tested at certain crosswalks with great effectiveness.<br /><br />The system itself makes use of a stereo vision-based infrared camera that can detect pedestrians during the day and at night. The output from the camera is hooked up to a computer that runs pedestrian detection algorithms which extract the 3-D features of the human figures on the crossing and distinguish them from any other objects that are present.<br /><br />Detecting the pedestrians with such a computer-based vision system enables the timing of the lights on the crosswalk to be controlled by their presence, a much more sensible approach than giving them an ultimatum that they may not be able to meet.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="224" src="" width="320" /></a></div><br />But is this really such a terrific idea? After all, how would the system be able to accommodate the behavior of those rowdy ne'er-do-wells intent on slowing traffic to a standstill by continuously walking back and forth across the crossing, perhaps in protest against government cutbacks?<br /><br />I think I have an idea.
Perhaps what the system needs is an intelligent neural network-based back end that can "learn" to identify the behavior of such folks and then alert the authorities to take the appropriate action if it spots them acting in an untoward fashion.<br /><br />Mind you, by the time you add up the cost of doing that, such a system might be just too gosh-darned expensive to make it worth installing. Back to the drawing board!<br /><br /><b>&hellip;on Linux</b><br /><i>Andy Wilson</i><br /><br />Years ago, an old friend of mine remarked that if his car was controlled by the Windows operating system, it would take him ages to get anywhere.<br /><br />He reached this conclusion after spending many mornings downloading updates to his Windows-based system and then rebooting it before he could start work. From this, he surmised that if such Windows software were used to control the electronic systems in his car, he would spend an equal amount of time in the driver's seat waiting for updates before he could even put his key in the ignition.<br /><br />My friend, of course, was a complete and utter technical Luddite -- one of those chaps who would rather have lived in the age of steam, where at least he would have some vague notion about how large amounts of very hot water could be used to propel vehicles along a track.<br /><br />Fortunately, however, the good folks at the Massachusetts Institute of Technology (MIT; Cambridge, MA, USA) are not like my friend at all. They are always coming up with new and interesting ways to enhance the performance and safety of vehicles using computer systems and software.<br /><br />Just this week, for example, two of the folks there -- Sterling Anderson and Karl Iagnemma -- announced that they had developed a semi-autonomous safety system that can help drivers avoid colliding with objects in their path.<br /><br />The system uses an onboard camera and laser rangefinder to identify hazards in a vehicle's environment.
Then, system software analyzes the data and identifies safe zones in which the vehicle can travel. The system allows a driver to control the vehicle, but takes control of the wheel when the driver is about to exit a safe zone, thereby avoiding objects in the vehicle's path.<br /><br />Anderson, who has been testing the system in Michigan since last September, has observed an interesting phenomenon after several folks tried out the system on a course.<br /><br />Notably, those who trusted the system tended to perform better than those who didn't. For instance, when asked to hold the wheel straight, even in the face of a possible collision with an object, drivers who trusted the system to take control and avoid the object drove more quickly and confidently through the course than those who were wary of the system.<br /><br />So far, the team has run more than 1,200 trials of the system, with few collisions. Most of these occurred when glitches in the vehicle's camera caused it to fail to identify an obstacle.<br /><br />As for my Luddite friend, I don't know what he'd make of it all. I know one thing, though. He'd be tickled pink that the researchers chose to perform experimental testing of the software they developed using a PC running the Linux operating system!<br /><br /><i>Editor's note: Interested readers can find a technical article entitled "Constraint-Based Planning and Control for Safe, Semi-Autonomous Operation of Vehicles" by MIT's Sterling Anderson, Sisir Karumanchi and Karl Iagnemma <a href="" target="_blank">here</a>.</i><br /><br /><b>&hellip;and roll vision</b><br /><i>Andy Wilson</i><br /><br />Our erstwhile European Editor is always sniffing around the Interweb to see if he can discover any snippets of information that might be useful to the readers of Vision Systems Design. I'm rather glad he is, because that's, in part, what I pay him rather exorbitant sums of money (by my standards, not his, of course!)
to do.<br /><br />On many occasions during his surfing activities, he stumbles across news articles written by folks who have attempted to popularize the work of academics working in the field. Unfortunately, in doing so, many of the writers of such pieces generalize the work of the researchers to such an extent that the point of the work becomes almost incomprehensible.<br /><br />Fortunately, it's often the case that the folks who write such pieces take the time to provide hypertext links that direct the reader to a specific technical paper that the engineers have published in a learned journal.<br /><br />Sadly, though, this hasn't proved much use to our European Editor, who has discovered that -- upon reaching the sites of such learned journals -- he is required to pay a certain sum of money to read any further about the system or software that has been designed and developed.<br /><br />Somewhat frustrated by this turn of events, the conniving old European Editor has figured out a way around the problem. That's right. Once he realizes that a piece of work has been developed that you, our reader, might be interested in, he performs a quick search for the author of the piece on the Interweb.<br /><br />Once he has located the author's home page, of course, he inevitably discovers that the researcher -- proud to have had his work accepted for publication by the learned journal -- has posted a copy of it as a PDF on his own personal web site! That&rsquo;s where, of course, the wily old hack discovers what is really under the hood of the technology and how relevant the work might be.<br /><br />But the whole affair worries me just a little.
Should such information really be available for free?<br /><br />In one of my last blogs, I revealed how a chap who goes by the name of Pablo Caicedo has (apparently illegally) uploaded a 442-page volume entitled "<a href="">Image Processing and Mathematical Morphology: Fundamentals and Application</a>" by Professor Frank Shih from the New Jersey Institute of Technology onto a web site. Needless to say, the learned professor was less than impressed when we pointed this out to him.<br /><br />But are the academic researchers in the vision industry not guilty of the same form of infringement, too, when they choose to publish PDFs of articles owned by the academic publishers on their own websites?<br /><br />It looks to me as if the paid-for publishing industry is going down the chute, following in the footsteps of the recording and movie businesses. Perhaps that's why so many academics are now going on tour and on television like their rock and roll counterparts.<br /><div><br /></div><b>&hellip;in cars help fight planned accidents</b> -- Andy Wilson<br /><br />It's not very often that I take the trouble to read any press releases that are issued by insurance companies. My days, as you can imagine, are filled with attempting to interpret in an intelligent fashion much of the information that is (or to be more accurate, isn't) issued by the good folks in the vision systems design industry.<br /><br />But this week, I did spot one story that really caught my attention. A story issued by none other than InsuranceQuotesUSA, part of the Insurance Quotes USA network, a self-proclaimed all-in-one stop for tips, information, and quotes on car insurance.<br /><br />You see, this outfit is now recommending that all drivers purchase and install "dashboard cameras" to fight against what is known in the trade as "planned" automobile accidents. 
Having been recently involved in a minor incident with my own trusty Hyundai, I was intrigued to find out more about what constitutes such a planned accident and how the cameras could help.<br /><br />What I discovered was that these planned accidents can occur in a couple of ways, leaving the victims who have suffered from them with jacked-up car insurance premiums.<br /><br />The first planned accident scenario -- called Intentional Backing Up -- involves two vehicles stopped on an uphill road. The criminal in the front vehicle puts his vehicle into neutral and lets it roll back onto the victim's car behind him. The criminal driver then claims that the driver of the vehicle behind failed to brake and stop properly.<br /><br />The second accident scenario -- called Intentional Falling -- involves a pedestrian and a vehicle. When the victim's vehicle is stopped in front of a crosswalk, the pedestrian falls over intentionally and then acts as if the victim&rsquo;s vehicle has failed to brake and stop properly.<br /><br />Victims are often given the "opportunity" to resolve the accident without going through the insurance agency and claims process by offering cash compensation up front. <br /><br />By having a dashboard camera in the car, however, such accidents can be recorded and victims can produce evidence of the incident to the police. With that in mind, InsuranceQuotesUSA now recommends that drivers equip their vehicles with a 30 fps camera with a 5Mpixel color sensor and 16GB of memory that can capture 1-2 hours of time- and date-stamped data.<br /><br />Clearly, this is great news for camera manufacturers across the globe. 
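<br /><br />Out of curiosity, I ran a quick back-of-envelope calculation on that recommended spec. The figures below are rough illustrative assumptions on my part (8-bit RGB frames, the two-hour end of the recording range), but they show just how much compression such a camera would need to squeeze 5Mpixel, 30 fps footage onto a 16GB card:

```python
# Back-of-envelope check of the recommended dashboard-camera spec:
# 5 Mpixel colour sensor, 30 fps, 16 GB of storage, 1-2 hours of footage.
# All figures are rough assumptions for illustration only.

RAW_BYTES_PER_PIXEL = 3          # assume 8-bit RGB after demosaicing
PIXELS = 5_000_000               # 5 Mpixel sensor
FPS = 30
STORAGE_BYTES = 16e9             # 16 GB card
RECORD_SECONDS = 2 * 3600        # aim for the 2-hour end of the range

raw_rate = PIXELS * RAW_BYTES_PER_PIXEL * FPS    # bytes/s, uncompressed
budget_rate = STORAGE_BYTES / RECORD_SECONDS     # bytes/s that fit on the card
compression_ratio = raw_rate / budget_rate

print(f"raw video rate:     {raw_rate / 1e6:.0f} MB/s")
print(f"storage budget:     {budget_rate / 1e6:.2f} MB/s")
print(f"needed compression: {compression_ratio:.0f}:1")
```

In other words, something in the region of 200:1 compression -- comfortably within reach of modern video codecs, so the recommendation looks plausible.<br /><br />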
I'm sure they are more than grateful to the company for pointing out why it is so important for every driver to hit the local store and make such a purchase as quickly as possible. <br /><br />Personally, however, I'm hitting up the <a href="" target="_blank">Gunstar web site</a> to see if I can get my hands on a second-hand rocket-propelled grenade launcher that I can affix to the hood of my car. Rather than a camera that they can't see, I believe that this will act as a much stronger deterrent to those in the criminal fraternity intent on carrying out such dastardly deeds.<br /><br /><b>&hellip;crowd-sourcing project fails to take off</b> -- Andy Wilson<br /><br />It must have seemed like a rather good idea to the folks at DARPA to hold a competition to discover who might be capable of designing, building and manufacturing an advanced small unmanned air vehicle (UAV) that would be capable of performing a simulated military perch-and-stare reconnaissance mission. <br /><br />Indeed, as the so-called <a href="" target="_blank">UAVForge project</a> took shape, they must have been delighted and encouraged to see more than 140 teams and 3,500 individuals from 153 countries crawl out of the woodwork to attempt to develop systems that would meet the rigorous demands laid down by the agency. But that was hardly surprising, since a whopping $100,000 prize was up for grabs for the team that could demonstrate that their system met the agency's goals.<br /><br />With so many teams competing and so much cash at stake, it seemed inevitable that one of the teams would win the competition. Sadly, however, none of the nine finalist teams managed to do so. 
When they demonstrated their air vehicles at an event at Fort Stewart, Georgia, not one of the teams proved that they had what it took to fly home with the big bucks.<br /><br />The fly-off scenario, conducted on a training site at Fort Stewart, was a simulated military perch-and-stare reconnaissance mission that required the teams' UAVs to perform a vertical take-off, navigate to an area beyond the line of sight of the take-off location, land on a structure, capture video and then return to the starting point. <br /><br />While some teams were able to reach the observation area, none were able to land on the structure and complete the mission. Since no team completed the fly-off event, the $100,000 prize was not awarded, and a design will not be manufactured for further testing in a military exercise as originally envisaged by the folks at DARPA. <br /><br />If the failure of the so-called UAVForge project has proved one thing, it is that developing such a small unmanned air vehicle is clearly beyond the abilities of so-called citizen scientists. <br /><br />Might I dare to suggest, then, that if such persistent, beyond-line-of-sight, perch-and-stare surveillance systems are still of importance to DARPA, they may be better off calling upon one of the usual large military contractors to help them out.<br /><br />That, however, is certain to cost a lot more than $100,000 in prize money, I'll wager.<br /><br /><b>Interested in reading more about UAVs? Then why not check out these recent news stories from Vision Systems Design?</b><br /><br /><b>1. <a href="">UAV captures 3-D images of buildings</a></b><br />Engineers at the University of Granada (Granada, Spain) are using UAVs to help them produce 3-D models of historical buildings. <br /><br /><b>2. 
<a href="">UAVs help utilities bring back the power</a></b><br />Researchers at New Mexico State University (NMSU; Las Cruces, NM, USA) and the Electric Power Research Institute (EPRI; Palo Alto, CA, USA) recently completed tests that concluded that unmanned aircraft can be safely and effectively used to assess power grid damage following a storm or natural disaster.<br /><b><br />3. <a href="">Small UAV uses hyperspectral imager</a></b><br />Headwall Photonics' (Fitchburg, MA, USA) Micro-Hyperspec imaging sensor is being successfully deployed onboard small commercial unmanned aerial vehicles (UAVs) to help agriculturalists monitor vegetation over wide areas.<br /><b><br />4. <a href="">Robot vision helps guide UAV for crop spraying</a></b><br />Australian researchers are developing a flying robot as small as a dinner plate and a fleet of eco-friendly robotic farmhands that could help cut down the amount of herbicide sprayed on crops.<br /><br /><b>5. <a href="">Cameras take flight on UAVs to help soldiers spot suspicious activity</a></b><br />An unmanned aerial vehicle (UAV) developed by a team of engineers from Middlesex University (London, UK) could help soldiers to spot hidden dangers during military operations.<br /><br /><b>&hellip;identify tech trends</b> -- Andy Wilson<br /><br />One of the biggest tasks of any senior manager is to identify new trends in technology and take advantage of that knowledge to develop new products ahead of the competition.<br /><br />Traditionally, most folks have performed this task in a variety of ways -- by attending technical conferences and seminars, visiting trade shows and yes, even reading technical trade magazines such as Vision Systems Design.<br /><br />But the trouble with all these approaches is that they require a great deal of time-consuming and exhaustive human effort. 
What makes things worse is that since no-one can claim to be a fount of all knowledge, oftentimes the technologies that may appear new to one individual might actually be years old.<br /><br />Now, however, it looks as though it might be possible to employ computer systems to help alleviate the drudgery of such research. At least, that's what the folks at <a href="" target="_blank">Rensselaer Polytechnic Institute</a> (Troy, NY, USA) believe they might be able to do.<br /><br />That's right. The scientists there have begun work on a new Intelligence Advanced Research Projects Activity (IARPA) project to develop computer systems that can help quickly identify emerging ideas and capabilities in technology. <br /><br />The research is part of the IARPA Foresight and Understanding from Scientific Exposition (FUSE) program under a team led by BAE Systems that includes Brandeis University, New York University, 1790 Analytics, and Rensselaer.<br /><br />The computer and web scientists at Rensselaer -- led by Professor Deborah McGuinness -- will work with the FUSE team to develop computer programs that will analyze millions of pages of text looking for the emergence of new technological and scientific trends in multiple languages. <br /><br />"No one can keep up with the massive amount of data currently out there even in one language, let alone in many different languages," McGuinness said in a recent statement. <br /><br />"(The project) will allow us to look at a far greater number of documents in less time to understand the significant trends that are out there. 
Once identified, these trends can then be better studied by human analysts."<br /><br />While this work is admirable, I'd like to suggest that the good folks at IARPA might also like to sponsor another group of software developers to create a computer program that could analyze the technical specifications, price and performance of products and correlate those characteristics with how successful they have been in the market.<br /><br />I think that this would provide a terrifically important tool to many individuals -- especially those in the vision industry -- who might then be able to make more informed decisions about what products to launch into the market.<br /><br /><b>&hellip;on about surveillance</b> -- Andy Wilson<br /><br />This month, <a href="" target="_blank">Apple</a> (Cupertino, CA, USA) and <a href="" target="_blank">Google</a> (Mountain View, CA, USA) unveiled competing software applications that display <a href="">3-D maps</a> with an unprecedented level of detail. To create the maps, the companies are using planes equipped with high-resolution imaging equipment.<br /><br />Not everyone is happy about the situation. US Senator Charles E. Schumer is one of them. When the two industry giants made their announcements, he raised the concern that a race to develop the most comprehensive and precise mapping technology could erode privacy and create security risks.<br /><br />"By taking detailed pictures of individuals in intimate locations such as around a pool, or in their backyard, or even through their windows, these programs have the potential to put private images on public display. 
We need to hit the pause button here and figure out what is happening and how we can best protect people's privacy, without unduly impeding technological advancement," he said.<br /><br />On his web site, the US Senator argued that such detailed photographs could provide terrorists with detailed views of sensitive utilities. On current online maps, many <a href="">power lines</a>, power substations, and reservoir access points are not very visible due to the reduced resolution currently used. <br /><br />However, if highly detailed images become available, criminals could create more complete schematic maps of the power and water grids in the United States. With the vast amount of infrastructure across the country, it would be impossible to secure every location.<br /><br />To protect individuals' personal privacy and sensitive infrastructure sites, Schumer called on Apple and Google to fully disclose what privacy protection plans and safeguards they intend to put in place for the highly detailed and precise images they will be able to capture.<br /><br />Additionally, Schumer asked the companies to notify communities when they plan to conduct mapping, to commit to blurring out photographs of individuals who are captured in the images, to give property owners the right to opt out of having the company map their homes, and to put protocols in place with law enforcement agencies to ensure that sensitive infrastructure details are blurred from published maps.<br /><br />That seems like an eminently sensible suggestion to me. After all, no-one I know would be very happy to discover that the two computer behemoths had inadvertently captured high-resolution images of them in any sort of compromising situation whatsoever.<br /><br /><b>&hellip;in a coal mine</b> -- Andy Wilson<br /><br />As late as 1987, canaries were used in British coal mines to act as an early warning system. 
If the canaries dropped down dead, the miners knew that they had most likely been killed by toxic gases -- a clear indication that it was probably time to hurry out of the mine before they suffered the same fate.<br /><br />Thankfully, the use -- or abuse -- of these particularly lovely songbirds was phased out in British mines in 1987, two years after the end of the British miners' strike, which resulted in the UK Government of the time closing most of the state-owned mines anyway.<br /><br />Now, however, the concept is being revived -- albeit in a somewhat different form -- by researchers at the UK-based <a href="" target="_blank">National Physical Laboratory</a> (NPL; Teddington, UK). Yes, that's right. The research team at NPL is conducting a study into the field of prognostics, the art of monitoring the health of electronic assemblies and estimating their remaining useful life. <br /><br />Knowing when an electronic assembly is going to fail can give a company a competitive edge, as it allows for longer periods of time between scheduled maintenance and an associated reduction in costs. By replacing components before they fail, equipment downtime can also be minimized.<br /><br />There are several different approaches to prognostics. But the researchers in the new project aim to examine the interconnections in electronic assemblies, measuring their electrical impedance, noise and linearity to identify suitable indicators for predicting the remaining useful life of the components.<br /><br />And there's a vision aspect to the whole affair, you'll be pleased to hear. The NPL team is also going to be trialing the use of <a href="">lock-in thermography</a> (LIT). 
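<br /><br />For readers unfamiliar with the technique, lock-in thermography periodically heats the device under test and correlates each pixel's temperature trace against in-phase and quadrature references at the excitation frequency, yielding amplitude and phase maps. Here is a minimal single-pixel sketch of that demodulation step -- synthetic data and assumed frame rates only, not NPL's actual setup:

```python
# Minimal sketch of lock-in thermography (LIT) demodulation, assuming a
# sinusoidal excitation at f_lock and a stack of thermal frames sampled
# at fs. Synthetic data stands in for real camera frames.
import numpy as np

fs = 100.0          # frame rate (Hz), assumed
f_lock = 1.0        # lock-in (excitation) frequency (Hz), assumed
n = 400             # number of frames (an integer number of periods)
t = np.arange(n) / fs

# Fake single-pixel temperature trace: 0.5 K response amplitude,
# 30-degree phase lag, plus measurement noise.
amp_true, phase_true = 0.5, np.deg2rad(30)
signal = amp_true * np.cos(2 * np.pi * f_lock * t - phase_true)
signal += 0.05 * np.random.default_rng(0).standard_normal(n)

# Lock-in step: correlate with in-phase and quadrature references.
ref_i = np.cos(2 * np.pi * f_lock * t)
ref_q = np.sin(2 * np.pi * f_lock * t)
I = 2 * np.mean(signal * ref_i)   # in-phase component  ~ A*cos(phase)
Q = 2 * np.mean(signal * ref_q)   # quadrature component ~ A*sin(phase)

amplitude = np.hypot(I, Q)        # recovered response amplitude
phase = np.arctan2(Q, I)          # recovered phase lag
print(f"amplitude = {amplitude:.2f} K, phase = {np.degrees(phase):.0f} deg")
```

A real system would apply the same two multiplications and averages to every pixel in the frame stack, turning a noisy thermal video into one amplitude image and one phase image.<br /><br />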
Using the LIT system, they will generate thermal maps which they will then attempt to correlate with the age of the components of the electronic system.<br /><br />The researchers will also look at so-called 'canary components', which are designed to fail earlier than any other electronic component to warn of the impending failure of a device -- much in the way that those poor old birds were once used in coal mines as an indicator of the toxicity of the air.<br /><br />Industrial partners are encouraged to get involved in the project and to help decide on what components and conditions to include in the research. The folks at NPL advise any interested parties to get in touch with them by the 30th of June.<br /><br /><b>&hellip;mathematical morphology worth nothing?</b> -- Andy Wilson<br /><br />When our European Editor asked me if I could recommend a good book he could read to bone up on all things related to computer vision, one specific tome came to mind -- a particularly weighty volume entitled "Image Processing and Mathematical Morphology: Fundamentals and Applications."<br /><br />Written by Professor Frank Shih, the Director of the Computer Vision Laboratory at New Jersey Institute of Technology, this book lifts the lid off the subjects of image processing and mathematical morphology -- two fields which have become increasingly important to the world of automated vision-based detection, inspection and object recognition. <br /><br />I recommended Professor Frank Shih's book because it provides a comprehensive overview of morphological image processing and analysis. It presents the necessary fundamentals, advanced techniques, and practical applications for researchers, scientists, engineers -- and yes, even humble Editors -- who work in image processing, machine vision, and pattern recognition disciplines.<br /><br />Our European Editor was quick to take me up on the idea. 
After a brief search on the Internet, he discovered that there were several copies of the book available on the Amazon UK web site at around $130. Not a bad price for a book that is so comprehensive, I thought.<br /><br />But these are tough economic times for us all. So rather than purchase the book outright, our parsimonious, penny-pinching European editor decided to see if he could find another site where he might be able to obtain the book whilst parting with substantially less of his hard-earned currency.<br /><br />After spending another hour or so of his time on the computer, the diligent Editor did indeed discover a copy of the publication that was considerably cheaper. More precisely, it cost absolutely nothing at all. That's right. He discovered that he could obtain the book without even parting with one silver sixpence from his old moth-infested piggy bank. <br /><br />All because a chap who goes by the name of Pablo Caicedo has uploaded the 442-page work onto a web site. Indeed, just <a href="" target="_blank">by clicking here</a>, anyone can read and potentially download Professor Frank Shih's entire book for free.<br /><br />Somewhat taken aback by this turn of events, our honest European Editor wondered if reading, or downloading, the book in question was strictly legal, or whether the chap that had uploaded it had done so without the permission of the author.<br /><br />Without wanting to become involved in any business that might compromise his moral integrity, our European Editor dropped Professor Shih an email to inform him that his book was available for free on the site, and asked him whether or not it was within his rights to download it.<br /><br />So far, Professor Shih hasn't replied to his email.<br /><br /><b>&hellip;invasion at Oregon beach</b> -- Andy Wilson<br /><br />Last week, visitors to an Oregon beach one mile north of Newport reported that they had seen a loose dock floating offshore. 
Not content with staying in the ocean, the object then washed ashore, whereupon it was immediately scrutinized by the State authorities.<br /><br />At that time, the origin of the object on Agate Beach was unknown, and there was no obvious evidence that it might have crossed the ocean. How could it have? After all, at seven feet tall, nineteen feet wide and sixty-six feet long, the dock is very large and heavy.<br /><br />Then, two days later came the rather amazing news that a metal placard bearing Japanese writing was found attached to the derelict dock. The placard was forwarded to the Japanese consulate in Portland, Oregon, which confirmed that the dock washed ashore was debris from the March 2011 tsunami in Japan.<br /><br />Shortly after the dock made landfall, staff from the Oregon Parks and Recreation Department checked it for traces of radioactivity. Fortunately, there wasn't any. What scientists at the Hatfield Marine Science Center in Newport did find, however, was evidence of marine life.<br /><br />Now while some of that marine life was native to US coastal waters, some of it was specific to the waters of Japan. Among the exotic species were different kinds of mussels, barnacles and marine algae. One invasive marine alga in particular -- Undaria pinnatifida, commonly called wakame -- was present on the structure.<br /><br />As a precaution, the Oregon Department of Fish and Wildlife co-ordinated a group of volunteers to remove the organisms from the dock while also removing the salt water-dependent organisms from the beach.<br /><br />Now, as any ecologist worth his salt will tell you, invasive species such as algae and mussels can inflict a lot of damage on a local ecosystem. 
Fortunately, however, there are some image processing systems out there in the market that can be suitably deployed to spot the little devils.<br /><br />One such system is the FlowCAM from marine instrumentation manufacturer <a href="" target="_blank">Fluid Imaging Technologies</a> (Yarmouth, ME, USA), which has already been put to use to analyze many types of microscopic organisms and particles in oceans, lakes, reservoirs and streams.<br /><br />Better yet, it can also be equipped with a cross-polarized illumination option, which can be used to detect larval-stage invasive mussel species such as Zebra and Quagga mussels. It can do so because the skeletons of the organisms are calcareous and exhibit birefringence under cross-polarized light.<br /><br />The company says that using the FlowCAM with cross-polarization eliminates the human error that may be introduced using manual microscopy methods. And because the technique detects the larval stage of the species, it is able to detect the invasive species significantly earlier than other techniques.<br /><br />Thankfully then, it would appear that should any more Japanese invasions of US beaches take place, at least we can arm ourselves with the technology to determine just how ecologically unfriendly they might be.<br /><br /><b>&hellip;new vision for touch screen displays</b> -- Andy Wilson<br /><br />Most new mobile devices such as cell phones and tablets make use of touch screen technology. And while that might give them an elegant look, it's not a great deal of help to the visually impaired, who may have great difficulty using one.<br /><br />Now, however, engineers at an outfit called <a href="" target="_blank">Tactus Technology</a> (Fremont, CA, USA) have developed a rather nifty technical solution to the problem. 
<br /><br />The company's patented deformable tactile surface, called the "Tactile Layer", enables on-screen buttons to rise up from the surface of a touch screen when an application calls for them to do so. Users can feel, press down on and interact with the physical buttons just as they would use keys on a keyboard.<br /><br />A quick look at the patented technology (for a list of patents, please see the link below) reveals that the engineers at Tactus Technology created the system using a network of fluidic channels that are coupled to cavities underneath specific areas of the display. When called upon to do so by an application, fluid is pumped into these cavities, causing regions on the surface of the deformable display to be raised. When no longer required, the fluid is released from the cavity, leaving no trace of the deformity. <br /><br />The company says that because the Tactile Layer panel is a completely flat, transparent, dynamic surface, it adds no extra thickness to the standard touch screen display, since it replaces a layer of the already existing display stack.<br /><br />Tactus Technology has already demonstrated the capability of the technology on a prototype Google Android tablet, as the result of a partnership between the company and <a href="" target="_blank">Touch Revolution</a> (Redwood City, CA, USA), a unit of TPK Holding -- the largest-volume glass projected capacitive multi-touch screen manufacturer in the world.<br /><br />While the company is obviously keen to market the technology to manufacturers of high volume devices, I can't help but think that there might also be some rather interesting applications for this technology in the field of machine vision too.<br /><br />The most obvious one, of course, would be in the touch panel screens that are 
commonly used as Human Machine Interfaces (HMIs) in vision systems that allow users to set up and/or modify the parameters of their vision inspection systems.<br /><br />Like consumers, many operators of such machines are also faced with poor typing speed, errors and insufficient feedback. And it's here that the Tactus technology could be mighty useful too.<br /><br />A list of patents that detail the technology behind the device is available <a href="" target="_blank">here</a>.<br /><br /><b>&hellip;sound rival for Kinect</b> -- Andy Wilson<br /><br />Since its introduction, systems developers have created a myriad of rather innovative applications using <a href="">Microsoft's Kinect</a> -- the motion sensing input device the company developed for the Xbox 360 video game console and Windows PCs.<br /><br />Naturally enough, the folks at Microsoft have also been busy conjuring up some <a href="">interesting applications for the Kinect system</a> too. Last month, for example, the company revealed that researchers from its Redmond, Washington research labs had teamed up with others from the University of California, Los Angeles (Los Angeles, CA, USA) to develop a system that can determine the identity of an individual from a group of individuals interacting with a multi-user interactive touch display.<br /><br />The so-called ShakeID system makes the assumption that each user is holding a smart-phone or other portable device whose movement is sensed by an in-built 3-axis accelerometer. The ShakeID system can then identify which user is touching the display by matching the motion sensed by the device to body motion captured by the <a href="">Kinect camera</a>. 
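<br /><br />To make that matching idea concrete, here is an illustrative sketch -- my own toy version, not Microsoft's published algorithm -- that attributes a phone to whichever tracked body's hand motion correlates best with the phone's accelerometer trace:

```python
# Toy sketch of the core ShakeID idea (not Microsoft's actual algorithm):
# attribute a phone to the tracked body whose hand motion best matches
# the phone's accelerometer trace.
import numpy as np

def normalized_corr(a, b):
    """Zero-mean, unit-norm correlation between two equal-length traces."""
    a = (a - a.mean()) / (np.linalg.norm(a - a.mean()) + 1e-9)
    b = (b - b.mean()) / (np.linalg.norm(b - b.mean()) + 1e-9)
    return float(np.dot(a, b))

def identify_user(phone_accel, hand_motions):
    """Return the index of the tracked body whose hand motion best matches."""
    scores = [normalized_corr(phone_accel, m) for m in hand_motions]
    return int(np.argmax(scores))

# Toy data: user 1's hand follows the phone's motion, user 0 is idle.
rng = np.random.default_rng(1)
t = np.linspace(0, 2, 120)
phone = np.sin(7 * t) + 0.1 * rng.standard_normal(t.size)
hands = [
    0.1 * rng.standard_normal(t.size),                    # user 0: unrelated
    np.sin(7 * t) + 0.1 * rng.standard_normal(t.size),    # user 1: matches
]

print("phone belongs to user", identify_user(phone, hands))
```

A real deployment would, of course, have to align the two sensor streams in time and cope with gravity and orientation, but the correlate-and-pick-the-best step is the heart of the matter.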
<br /><br />As far as the researchers are aware, this is the first attempt to fuse data from the <a href="">Kinect system</a> with inertial sensing from mobile devices to identify users of multi-touch interactive displays.<br /><br />But although gesture is becoming an increasingly popular means of interacting with computers, it's still relatively costly to deploy gesture recognition sensors in existing mobile platforms such as cell phones.<br /><br />And that's why another group of Microsoft researchers has teamed up with researchers from the University of Washington (Seattle, WA, USA) to develop what they are calling SoundWave, a system that makes use of the speaker and microphone already embedded in most commodity devices to sense gestures around the device.<br /><br />To do this, the SoundWave system generates an inaudible tone, which gets frequency-shifted when it reflects off moving objects like the hand. The shift is then measured with the microphone in the device to infer various gestures. <br /><br />Although the Microsoft team developed and tested the algorithms for the system on various laptops and desktop PCs, they believe that the approach could be extended to smart phones and tablets using the same frequency-shift technique.<br /><br />More information on the ShakeID system can be found <a href="" target="_blank">here</a>.<br /><br />More information on the SoundWave system can be found <a href="" target="_blank">here</a>.<br /><br /><b>&hellip;image could save a stream</b> -- Andy Wilson<br /><br />Vision systems have traditionally been employed in industrial applications to <a href="">inspect products</a> to ensure that they meet the requirements set down by manufacturers. 
<br /><br />But with the advent of the <a href="">camera-based mobile phone</a>, new applications are coming on stream that allow individual members of the public to take part in so-called "citizen-based" projects in which they can capture images to help inspect the state of the environment. <br /><br />In one such project, called Creek Watch, folks across the world can monitor watersheds and report their conditions using an <a href="">iPhone application</a> developed by IBM Research. Every update provides data that local water authorities can then use to track pollution, manage water resources and plan environmental programs.<br /><br />The free Creek Watch app is claimed to be easy to use. All individuals have to do is stop by any waterway and, with the phone's GPS enabled, take a photo and submit three crucial pieces of data: the water level, the flow rate and the trash found.<br /><br />"That&rsquo;s all it takes to play your part in helping conserve and protect your local water resources," said Christine Robson, an IBM computer scientist who helped develop Creek Watch. "No expertise or training is required. This is an exercise in crowd sourcing, where every individual is encouraged to become a citizen scientist and get engaged with their environment."<br /><br />A new update to the app makes it easy for users to share their photos and findings on Facebook and Twitter, if they want to. The IBM researchers expect such postings to encourage more people to use the app, allowing more data to be collected.<br /><br />IBM Research aggregates the Creek Watch reports and makes them available online, where water control boards and other interested parties can filter the data and view it as an interactive map or download a spreadsheet. 
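<br /><br />For the curious, the kind of record each submission carries -- and the kind of filtering a water board might do with the aggregated data -- can be sketched in a few lines. The field names and categories here are purely illustrative guesses on my part, not IBM's actual schema:

```python
# Hypothetical sketch of a Creek Watch-style submission (photo reference,
# GPS fix, plus the three observations) and a simple filter a water board
# might apply. All field names and category values are illustrative only.
from dataclasses import dataclass

@dataclass
class CreekReport:
    lat: float
    lon: float
    photo_url: str
    water_level: str   # e.g. "dry", "some", "full"
    flow_rate: str     # e.g. "still", "slow", "fast"
    trash: str         # e.g. "none", "some", "a lot"

reports = [
    CreekReport(37.33, -121.89, "img1.jpg", "some", "slow", "a lot"),
    CreekReport(37.35, -121.90, "img2.jpg", "full", "fast", "none"),
]

# Flag sites that combine low water with heavy trash for follow-up.
flagged = [r for r in reports
           if r.water_level in ("dry", "some") and r.trash == "a lot"]
print(f"{len(flagged)} site(s) flagged for follow-up")
```

From a pile of such records, producing the interactive map is then just a matter of plotting each (lat, lon) pair.<br /><br />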
The California State Water Control Board is the first entity to partner with IBM and use Creek Watch to monitor the thousands of miles of creeks and streams across its jurisdiction.<br /><br />With the app in use in 25 countries so far, IBM researchers hope that Creek Watch adoption will continue to grow across the globe. "The iPhone's GPS system automatically ties each Creek Watch submission to a precise location, allowing water experts anywhere in the world to find local data to use for critical water management decisions," said Jeff Pierce, who leads the mobile computing research team at IBM's Almaden facility and helped develop Creek Watch.<br /><br />I can't help but think that this Creek Watch app is a rather good idea. Let's hope it will encourage other folks in the computer business to develop similar apps that will empower the general public to use their cameras for the benefit of mankind.<br /><br /><b>&hellip;infra-red comeback for the beetles</b> -- Andy Wilson<br /><br />I've always been fascinated by the field of biomimetics. It's always amazed me how researchers involved in that field can rip apart biological systems found in animals and plants and then <a href="">create new man-made technology</a> that effectively performs the same function.<br /><br />So you can imagine how interested I was when I read this week that a team of German researchers from the University of Bonn (Bonn, Germany) have concluded that the sensors of black fire beetles might even be more sensitive than uncooled <a href="">infrared sensors designed by man</a>!<br /><br />Apparently, the critters in question use their sensors to detect forest fires, even from great distances, since their wood-eating larvae can only develop in freshly burned trees. 
Naturally enough, since they have had years of experience in doing so, you might have expected that they would perform the function rather well.<br /><br />Now, with the help of other researchers at the Forschungszentrum caesar (Bonn, Germany) and the Technische Universität Dresden (Dresden, Germany), the researchers at Bonn have figured out how the beetle's infrared sensor actually works, and they have started to work on building their very own biomimetic copy.<br /><br />The researchers say that they have discovered that each beetle is kitted out with tiny cuticula spheres, smaller than the diameter of a fine hair, that are filled with water, which absorbs infra-red radiation very well. When these heat up, the water expands suddenly, and the resulting change in pressure is immediately detected by sensory cells.<br /><br />Once they had figured that out, the researchers had to determine just how sensitive the sensors actually were. Naturally, that led them to think that if they could only put mini transmitters on the beetles, they would then be able to determine how far they flew to a burnt area and from that calculate the minimum radiated heat from the fire that the beetles were attracted to. But at a length of about 1 cm, the poor beetles were too small to carry a transmitter for long distances.<br /><br />So the researchers relied on data from an event that happened in August 1925 when a large oil depot in Coalinga, California went up in flames. Reports from that era mentioned that the huge blaze attracted masses of charcoal beetles. 
Since the fire was in the forestless Central Valley of California, the researchers deduced that the beetles must have flown in from large forests on the western foothills of the Sierra Nevada about 130 kilometers away.<br /><br />The results of their calculations from that data indicate that the infrared sensors of the beetles may be able to detect infra-red radiation better than any man-made uncooled infrared sensor currently available on the market.<br /><br />Rather cool stuff, I thought. But better yet, if the researchers really can build a biomimetic replica of the beetles' sensors, the infra-red detector based on the beetles' biology might change the way we detect forest fires, or <a href="">detect leaks in petrochemical plants</a>, forever.<br />, chamber exposed on CCTV cameranoemail@noemail.orgAndy WilsonWhen my brother's sewer pipe blocked up last year, he called out the helpful chaps from Dyno-Rod who took a closer look with their <a href="">CCTV equipment</a>.<br /><br />From the CCTV footage, they were able to determine that the cause of the problem was nothing more than a bunch of roots that had grown into the pipe work from a tree that had been planted close to the house. After that, it was simply a case of hauling the tree out of the backyard, getting the roots out of the pipe and relining it.<br /><br />Now one might think that performing such work, analyzing the footage from CCTV cameras, might be a little repetitive, not to mention dull and boring. But for some involved in the industry, it can actually be quite exciting!<br /><br />That&rsquo;s right. 
Take the case of another UK-based outfit called <a href="" target="_blank">Lanes for Drains</a> (Leeds, UK), for example, who earlier this year used their own sewer surveillance technology to reveal the 200-year-old hidden secret of one of the UK's largest man-made reservoirs.<br /><br />It all started when the company in question was called in to work on a &pound;5.5m project to repair a dam at the 108-hectare Chasewater reservoir which is situated near Lichfield in Staffordshire. The dam was built at the same time as the reservoir was created way back in the halcyon days of 1796, making it one of the oldest reservoir dams in the UK.<br /><br />More specifically, the folks at Lanes for Drains were asked to carry out a <a href="">CCTV survey</a> on a 100-meter-long brick-lined drawdown culvert designed to control the release of water from the reservoir. But when they did, they found that the culvert, which was 1 meter high and 0.9 meter wide, was 70 per cent blocked with silt and bricks.<br /><br />So taking things in hand, they used a Kaiser-Whale recycling jet vacuum tanker to clear the debris while continuously monitoring progress with HD-quality video footage from a Rovver (Remote Operated Video Vehicle Enhanced Receiver) crawler camera manufactured by <a href="" target="_blank">Ipek</a> (Hirschegg, Austria).<br /><br />During the process, the team discovered a large chamber not identified on the plans. The hidden chamber, measuring 1.5m by 2m, was discovered 25 meters into the culvert -- 3.5 meters under the floor of the reservoir!<br /><br />Lanes for Drains' lead engineer Dave Faris said that it was quite special to be able to work on a structure that had not been seen for over 200 years. 
And now, we can all take a look at the hidden chamber too, since the company has released an image of it captured from the camera aboard the Rovver.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />I know my brother will be interested to hear the news. But if his sewers ever block up again, I'm certain that the Dyno-Rod team he calls out will fail to find anything quite as interesting as the Lanes for Drains chaps, to get more businessnoemail@noemail.orgAndy WilsonThe president of the small to medium-sized machine builder had always made himself a reasonable living from developing custom vision systems to inspect one particular type of widget.<br /><br />And although the market for such widget inspection systems wasn't all that large, his system, or variants of it, had been purchased and widely used by most of the widget makers in the industry.<br /><br />Recognizing that the Internet might provide his company with more exposure, the president decided to hire a developer to create a web site that would explain to any new potential customers the capabilities of his company.<br /><br />And that's exactly what he did. Prior to developing the web site, however, a member of the web site development team went to visit the outfit to find out more about the system. Once there, he was treated to an hour-long dissertation by the company's marketing manager, who explained to him exactly why there was a need to inspect such widgets, followed by only a very brief description of the system that they had developed.<br /><br />After the meeting, the web developer went back to his office and created a stunning web site for the company. Not only did the web site provide a background of the company and its university origins, it also detailed the inspection problems faced by the widget manufacturers. 
Unfortunately, however, there was only the briefest description of the system itself, the functions it performed and its performance &ndash; all listed in bullet points alongside a rather sorry-looking photograph.<br /><br />When the web site was launched, the president was confident that it would offer the world an insight into the capabilities of his company and generate a number of new leads, not just from companies involved in widget manufacturing, but other outfits that might be faced with similar inspection problems.<br /><br />Sadly, of course, that didn't happen. Since the company had already won orders from most of the widget makers, the website attracted no new customers whatsoever. The president and the marketing manager were disappointed &ndash; not least because they had spent a considerable amount of money developing it. And they were both at a loss to understand why the reaction had been so poor.<br /><br />Some months later, a journalist from a magazine that covered the field of vision systems design came to call upon the company. Unlike the previous interview, however, the journalist grilled the president to discover exactly how the system had been designed. He went away confident that his description of the hardware chosen for the system and the software that had been written for it by the engineering team would prove a hit with his readers.<br /><br />And it was. When the article was published, it became immediately apparent to many of the engineering readers how the company could tweak the widget-inspection system to help them inspect their own products too.<br /><br />The president is now a happy camper. Having received several enquiries from some potential new customers, he is now looking forward to expanding his business into new markets. 
More importantly, however, he has at last recognized the importance of publicizing the capabilities of his technical team rather than just promoting a single product line.<br />, system helps students find a place in the librarynoemail@noemail.orgAndy WilsonA team of researchers from the <a href="" target="_blank">Stevens Institute of Technology</a> (Hoboken, NJ, USA) has developed a vision-based system that will enable students to find a seat in the university&rsquo;s library during the busiest periods of the semester.<br /><br />&ldquo;We noticed that we and many of our fellow students spent precious study time looking for seating in the S.C. Williams library,&rdquo; says team leader and computer engineer Richard Sanchez. <br /><br />But rather than grumble and head back to their dormitories, Sanchez&rsquo;s team decided to develop a system to solve the problem once and for all. Called Seatfinder, it's an innovative way of detecting which seats in the library are available, using image processing to identify the presence of an individual.<br /><br />With the assistance of Bruce McNair, Distinguished Service Professor of Electrical &amp; Computer Engineering, and the Stevens IT department, the team deployed an IP camera with network connectivity to capture a live feed from the library. <br /><br />After capturing images of the seating, the IP camera transmits a live video feed over a wired network to a remote computer using an open source application called iSpy. Next, a motion detection algorithm is used to trigger a snapshot of the live camera feed any time an individual leaves or enters a table in the library. 
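That motion-detection trigger can be approximated with plain frame differencing: compare successive grayscale frames and fire when enough pixels change. The sketch below is my own illustration rather than the Stevens team's actual code (which relies on iSpy), and both threshold values are assumed tuning parameters:

```python
def frame_diff_fraction(prev, curr, threshold=30):
    """Fraction of pixels whose grayscale value changed by more than `threshold`."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > threshold)
    return changed / len(curr)

def should_snapshot(prev, curr, motion_fraction=0.02):
    """Trigger a snapshot when enough of the frame changed between two frames."""
    return frame_diff_fraction(prev, curr) > motion_fraction

# Two fake 10x10 grayscale frames: a few pixels brighten as someone sits down.
prev_frame = [10] * 100
curr_frame = [10] * 97 + [200] * 3
```

Only frames that trip the trigger need to be passed on to the (more expensive) occupancy analysis, which keeps the remote computer's workload down.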
<br /><br />The team's code then processes the captured image to determine which areas are free or occupied, after which it updates a website with an image that represents occupied and empty chairs around the table. By checking the web site, other students can then find a space to sit and study a lot more quickly. <br /><br />Having successfully proven their system, the Seatfinder team now hopes to add more cameras and monitor more tables to increase the sophistication of the system. Eventually they envision deploying it at other venues where space can become scarce, such as restaurants, movie theatres or parking lots.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="" width="320" /></a></div><br />But I can see even more opportunities for the system, one of which is on public transportation. If you have ever traveled on any form of rail transportation in any large metropolitan area during the rush hour period, for example, you will know what a problem it is to find anywhere to sit. <br /><br />While the Seatfinder system can't obviously produce more seating on a full train, if some of our rail companies could deploy such a system in each of their carriages, it would reduce the time that hapless commuters spend walking up and down the carriages in the desperate hope of finding a spare seat. Those commuters lucky enough to be able to access a website from the train, that, day at the racesnoemail@noemail.orgAndy Wilson<div class="separator" style="clear: both; text-align: center;"></div>Many years ago, whilst working on an electronics journal in the United Kingdom, I was invited to a press conference to witness the unveiling of what was claimed to be the next big thing in electronic systems. 
<br /><br />To ensure that as many folks turned up as possible, the organizers of the event decided to hold the conference at one of England's famous <a href="">race tracks</a>, where they invited the press to remain after the company presentations to enjoy the rest of the day betting on the horses.<br /><br />As it transpired, the press conference itself turned out to be a crashing bore -- the system had already been launched months earlier in the US, and most of the press knew about it already. Sadly, the view of the racetrack wasn't much better. You see, although the organizers had erected a tent as close to the racecourse as possible, the view from it was somewhat restricted. The result was that the horses could only be seen for mere seconds as they raced by. <br /><br />Now, I'm pleased to say, a solution to this problem is at hand, thanks to a couple of savvy students from the <a href="" target="_blank">University of Arizona</a> (Tucson, Arizona, USA) who have come up with an answer based, of course, on the use of a vision system. That&rsquo;s right. David Matt and Kenleigh Hobby&rsquo;s new 'jockey cam' is a smart camera-based helmet that can stream real-time video from a jockey's head, putting the viewers right in the saddle rather than stuck by the side of the track.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="211" src="" width="320" /></a></div><br />The two entrepreneurs have even launched a new company called EquiSight to market the system, and have captured the attention of ESPN, the Ireland Tourism Board, racetracks around the world and venture capital investors.<br /><br />Since they started the company, it's been a whirlwind ride for the pair. In December 2011 they presented their system to more than 600 racing and gaming executives at the 38th Annual Symposium on Racing and Gaming in Tucson. 
In February 2012, they filmed 30 jockey-cam videos at prestigious race tracks and training centers on the East Coast. And in March 2012, <a href="" target="_blank">Wasabi Ventures</a> (San Mateo, CA, USA) selected EquiSight to receive venture capital support.<br /><br />EquiSight, which now holds three provisional patents on its technology, also recently inked an agreement with an engineering design firm to explore the potential application of helmet-cam technology for the military and law enforcement.<br /><br />As a member of the press, of course, I'm now looking forward to the day when I'm invited to take a look at the system at a press conference here in the good old US of A. Needless to say, if it's held at a race track, it's bound to be more enjoyable than the last press conference I attended at such an event all those years ago, in a virtual sandboxnoemail@noemail.orgAndy WilsonWhen I was a little lad, there was nothing I enjoyed more than playing in a sandbox in the back yard of my parents' house during the summer. That old sandbox became a place where I could construct my own virtual worlds filled with castles, mountains, rivers and oceans. It was, as I recall, jolly good fun.<br /><br />But in this age when computers dominate most every aspect of our lives, it should come as no surprise that even the humble sandbox has now been transformed into a digital experience. <br /><br />That&rsquo;s right. 
As part of an NSF-funded project on freshwater lake and watershed science, the good folks at <a href="" target="_blank">UC Davis</a> (Davis, CA, USA) have created a sandbox that allows users to create <a href="" target="">topographic models</a> by shaping real sand which is augmented in real time by an elevation color map, topographic contour lines, and simulated water!<br /><br />While it sounds like great fun for kids of all ages, the sandbox hardware built by project specialist Peter Gold of the UC Davis Department of Geology has actually been developed to teach geographic, geologic, and hydrologic concepts such as how to read a topography map, the meaning of contour lines, watersheds, catchment areas, and levees.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="150" src="" width="320" /></a></div><br />To do just that, the system makes use of the <a href="">Microsoft Kinect camera</a> which continuously collects images from the sandbox as the user interacts with the sand. The 30 FPS images from the camera are fed into a statistical evaluation filter which filters out moving objects such as hands or tools, reducing the noise inherent in the Kinect's depth data stream, and filling in missing data.<br /><br />After some software processing, the resulting topographic surface is then rendered by the projector suspended above the sandbox, with the effect that the projected topography exactly matches the topography of the real sand. The software uses a combination of several <a href="">OpenGL</a> shaders to color the surface by elevation using customizable color maps and to add the real-time topographic contour lines.<br /><br />At the same time, a water flow simulation is run in the background using another set of GLSL shaders. 
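One way to picture the statistical evaluation filter described above is as a per-pixel vote over the last few depth frames: dropouts are ignored, unstable pixels (say, a hand passing over the sand) are rejected, and a stable median is passed on to the renderer. The sketch below is my own simplification, not the UC Davis code, and the 20 mm stability gate is an assumed tuning value; the Kinect does, however, genuinely report zero for pixels it cannot measure:

```python
from statistics import median

MISSING = 0  # the Kinect reports 0 for pixels with no valid depth reading

def filter_pixel(history, max_spread_mm=20):
    """Return a stable depth (in mm) for one pixel from its recent frames,
    or None if the pixel has no valid data yet or is still unstable.

    `max_spread_mm` is an assumed tuning value, not taken from the project."""
    valid = [d for d in history if d != MISSING]
    if not valid:
        return None  # a hole in the data: keep the previously known surface
    if max(valid) - min(valid) > max_spread_mm:
        return None  # something moved here (e.g. a hand): reject the sample
    return median(valid)
```

Run over every pixel each frame, a filter like this yields the smooth, hole-free surface that the projector's topography must track.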
The simulation is run such that the water flows exactly at real speed assuming a 1:100 scale factor, unless turbulence in the flow forces too many integration steps for the driving graphics card to handle.<br /><br />The researchers say that the software they developed to create the virtual sandbox is based on the Vrui VR development toolkit and the Kinect 3D video processing framework. For those wishing to develop their own sandbox, they say that the software will soon be available for download under the GNU General Public License.<br /><br />That's surely good news for all those of us who would love to take a trip back to our childhood by sampling the new virtual delights of the sandbox in our own living rooms. <br /><br />More information on the augmented reality sandbox can be found <a href="" target="_blank">here</a>, rotting vision of the futurenoemail@noemail.orgAndy WilsonThere's no doubt that autonomous robots fitted out with vision systems can perform some pretty useful tasks. For the military, such robots can help prevent injuries on the battlefield by providing soldiers with a remote insight into the <a href="">nefarious misdoings</a> of the enemy. On civvy-street, they can be used for the equally important purposes of <a href="">improving the environment</a> or keeping the <a href="">borders of countries safe</a>.<br /><br />But these conventional robots are predominantly made of rigid, resilient materials, many of which are non-biodegradable and have a negative impact on the natural ecology.<br /><br />That means that any robot deployed in the environment must be continually tracked and, once it has reached the end of its usable life, must be recovered, dismantled, and made safe. But there is also the risk that the robot will be irrecoverable with consequent damage to the eco-system. <br /><br />Now one might think that there's not a lot that can be done about this. 
After all, the computers, power sources and imagers used in such <a href="">robotic devices</a> are all man-made, and many of them are composed of some pretty toxic substances. And there doesn&rsquo;t appear to be any alternative to using them.<br /><br />But apparently, the academic folks at the <a href="" target="_blank">University of Bristol</a> (Bristol, UK) think differently. They believe that it might be possible to build robots that decompose once they have reached the end of their mission. While it all might sound a bit far-fetched, the idea has won Dr. Jonathan Rossiter, Senior Lecturer in the University of Bristol&rsquo;s Department of Engineering Mathematics, a two-year grant of over &pound;200,000 from the <a href="" target="_blank">Leverhulme Trust</a> (London, UK) to work on developing robots that rot.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />That's right. Dr. Rossiter, together with Dr. Ioannis Ieropoulos, Senior Research Fellow at the Department of Engineering, Design and Mathematics at the <a href="" target="_blank">University of the West of England</a> (UWE, Bristol, UK), aims to show that autonomous soft robotic artificial organisms can exhibit an important characteristic of biological organisms -- graceful decomposition after death. <br /><br />Since there will no longer be the need to track and then recover such robots, the deployment of large numbers of biodegradable robots in the environment will become inherently safe. Hundreds or thousands of robots could therefore potentially be deployed, safe in the knowledge that there will be no environmental impact.<br /><br />Now I'm sure that there are some among you who believe that Dr. Rossiter's plans are all a bit pie in the sky. But I'm not one of them. 
This year, for example, we have already seen the development of <a href="">micro lens arrays </a>produced by mineral precipitation by researchers at the <a href="" target="_blank">Max Planck Institute of Colloids and Interfaces </a>(Potsdam, Germany).<br /><br />Nevertheless, I'm going to be very interested to see if and how those UK researchers can build an entire robot that is totally biodegradable. I guess we'll just have to wait a couple more years to find out.<br /><br />For more information on the work of the researchers at the Bristol Robotics Laboratory -- including a robot powered on a diet of flies -- click <a href="" target="_blank">here</a>, elements fogged up no morenoemail@noemail.orgAndy WilsonDeveloping a vision-based system for the transportation sector is a far cry from building a system that performs inspection tasks in a factory setting. In <a href="">transportation systems</a>, many environmental issues such as extremes in temperature and humidity must be taken into account. What's more, engineers must also contend with the dirt and dust found in the natural environment that could affect the performance of their systems.<br /><br />Now, thanks to researchers at the <a href="" target="_blank">Massachusetts Institute of Technology</a> (MIT, Cambridge, MA, USA), those environmental issues commonly addressed by systems developers in the world of transportation may finally become a thing of the past.<br /><br />That's right. You see, what the MIT researchers have done is to develop a new type of glass with a nano-textured array of conical features on its surface that not only resists fogging and glare but is self-cleaning too.<br /><br />The surface pattern on the glass itself consists of an array of nanoscale cones that are five times as tall as their base width of 200 nanometers. It is created using coating and etching techniques adapted from the <a href="">semiconductor industry</a>. 
Fabrication begins by coating a glass surface with several thin layers, including a photoresist layer, which is then illuminated with a grid pattern and etched away; successive etchings produce the conical shapes.<br /><br />Ultimately, the researchers hope that the surface pattern can be made using an inexpensive manufacturing process that could be applied to a plethora of optical devices, the screens of <a href="">smart phones</a> and televisions, solar panels, car windshields and even windows in buildings.<br /><br />According to mechanical engineering graduate student Kyoo-Chul Park, <a href="">a photovoltaic panel</a> can lose as much as 40 percent of its efficiency within six months as dust and dirt accumulate on its surface. But a solar panel protected by the new self-cleaning glass would be more efficient because more light would be transmitted through its surface. What's more, while conventional glass might reflect more than 50 percent of the light, the anti-reflection surface would reduce this reflection to a negligible level.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />The researchers say they drew their inspiration from nature, where textured surfaces ranging from lotus leaves to desert-beetle carapaces and moth eyes have developed in ways that often fulfill multiple purposes at once. Although the arrays of pointed nano-cones on the surface appear fragile, the researchers say they should be resistant to a wide range of forces, ranging from impact by raindrops in a strong downpour or wind-driven pollen and grit to direct poking with a finger. 
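The 50 percent reflection figure presumably refers to grazing incidence; head-on, plain glass reflects only about 4 percent per surface. A quick calculation with the standard Fresnel equations (my own sketch, not taken from the MIT work) shows how steeply reflection climbs with angle for an uncoated air/glass interface:

```python
import math

def fresnel_R(theta_deg, n1=1.0, n2=1.5):
    """Unpolarized Fresnel reflectance of a single air/glass interface."""
    ti = math.radians(theta_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)   # Snell's law: refraction angle
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
          (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
    rp = ((n1 * math.cos(tt) - n2 * math.cos(ti)) /
          (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2
    return (rs + rp) / 2                      # average of the two polarizations

normal = fresnel_R(0)    # roughly 0.04: only ~4% of the light is lost head-on
grazing = fresnel_R(85)  # above 0.5: most of the light bounces off the glass
```

A graded-index texture like the nano-cone array effectively removes the abrupt refractive-index step that this calculation depends on, which is why its reflection can fall to a negligible level.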
Further testing will be needed to demonstrate how well the nano-textured surfaces hold up over time in practical applications.<br /><br />In a nod to the vision industry, the researchers say that -- aside from solar panels -- the new glass could be used in optical devices such as microscopes and <a href="">cameras</a> that are used in humid environments, where both the anti-reflective and anti-fogging capabilities could be useful. In touch-screen devices, the glass would not only eliminate reflections, but would also resist contamination by sweat.<br /><br />The news will surely pique the interest of those engineers currently using more elaborate ways to contend with such issues in the <a href="">transportation sector</a>. For them, the commercialization of products based on such technology will not come soon, through the undergroundnoemail@noemail.orgAndy WilsonGetting to grips with the workings of European underground <a href="">transportation networks</a> can be notoriously difficult for those from foreign parts, partly due to the fact that they employ systems that many visitors may be unfamiliar with.<br /><br />And so it was when our industrious <a href="">European Editor</a> paid a recent visit to the beautiful city of Lisbon in Portugal to attend the <a href="" target="_blank">European Machine Vision Association&rsquo;s</a> (EMVA) annual get-together at the resplendent Hotel Tivoli on behalf of <a href="" target="_blank">Vision Systems Design</a>.<br /><br />After putting in a hard day's work listening to a variety of speakers describe the status of the image processing market in Europe, he decided to take a little trip on the local underground system to take in the sights of the city.<br /><br />Sadly, however, his attempts to buy a ticket for the Lisbon underground proved somewhat unsuccessful, until, that is, a rather attractive young Portuguese woman came to his aid. 
After she had helped our hapless European Editor purchase a pass for the train, the two became engaged in a brief conversation, during which it transpired that the lady was planning a new life in Australia due to the lack of opportunity in her own country.<br /><br />The decline of the PIGS (Portugal, Ireland, Greece and Spain) has been widely covered by the media. But gaining a first-hand insight into the plight of a single individual really brought the problem home to our European Editor.<br /><br />Unsurprisingly, the data presented at the conference by Gabriele Jansen, the CEO of Vision Ventures and a member of the EMVA Executive Board, confirmed that the growth in gross domestic product in that neck of the European woods is predicted to be decidedly negative for the coming year. <br /><br />Indeed, while most European countries look set to enjoy a modest growth of between 0% and +2.5%, Spain and Italy will be facing a growth of between 0% and -2.5%, while poor old Portugal will fare worst of all with a growth of less than -2.5% this year.<br /><br />On the machine vision front, however, things aren't as gloomy. According to the preliminary data from the EMVA, European machine vision companies saw an increase in sales of 16% in 2011 compared with the year before.<br /><br />As for 2012, it looks like there is more good news on the horizon. The <a href="" target="_blank">VDMA</a> (Verband Deutscher Maschinen- und Anlagenbau) -- one of the key industrial associations in Europe -- expects total turnover of machine vision products to be up 5% in 2012 over 2011, a year which itself saw a 20% rise in turnover from both the domestic and export market.<br /><br />Germany, naturally enough, looks set to remain the number one market for machine vision systems in Europe, a fact that will undoubtedly mean that many bright individuals from the less successful regions such as Portugal will be attracted there to find their fortunes. 
<br /><br />Those considering moving from the economically bereft European states to a destination somewhat further afield -- such as the lady who helped our European Editor on the Lisbon underground -- might be interested to note that the Australian economy is still on the uptick, wonderful life rememberednoemail@noemail.orgadministratorThis January marked the passing of <a href="">Norman Wilson Edmund</a>, one of the true visionaries in the imaging business and the founder of <a href="">Edmund Optics</a> (Barrington, NJ, USA).<br /><br />Norman Wilson Edmund was known as the creator and entrepreneurial spirit behind Edmund Scientific, which later became Edmund Optics, and is credited with inspiring many generations of youngsters to become interested in science and engineering.<br /><br />Now, to honor the contributions that he made to advance the science of optics, the company he founded has launched a new award. The Norman Edmund Inspiration Award will consist of an additional $5000 in product donations that will be presented to one of the three 2012 first-place prize recipients of the company's worldwide <a href="">Higher Education Grant program</a>, which is currently running in the Americas, Asia, and Europe.<br /><br />The Norman Edmund Inspiration Award will be given to the college or university optics program in science, technology, engineering or mathematics that best embodies the legacy of Norman Edmund.<br /><br />Announcing the award, Robert Edmund, CEO of Edmund Optics, recalled that his father had a lifelong commitment to <a href="">motivating and inspiring young people</a> to become involved in science, and the award would carry on his desire to excite another generation about innovation and discovery.<br /><br />The beneficiary of the product award will be chosen from the three previously selected 2012 first prize recipients, from the Americas, Europe, and Asia respectively. 
The award recipient will be determined by a committee of the Edmund Optics board of directors led by Joan Husted, daughter of Norman Edmund and an Edmund Optics board member.<br /><br />The prize recipients of the 2012 Higher Education Grant Program and the 2012 European Research and Innovation Award will be announced on Sept. 14, 2012. Product donations totaling $80,000 will be distributed to the award recipients in the three geographic locations. The winner of the Norman Edmund Inspiration Award will be announced on Oct. 10, 2012.<br /><br />Surely, there could be no better way to remember a man with such a love of science. And if you'd like to enter the awards, there's still time. Applications are being accepted until June 30, 2012. More information can be found on the Edmund Optics web site <a href="">here</a>, item in the bagging areanoemail@noemail.orgAndy WilsonAnyone who has been grocery shopping recently can't have failed to notice the numerous self-checkout lanes that many grocery stores are now installing in their premises as an alternative to traditional cashier-staffed checkouts.<br /><br />The reason for this is quite simple. 
By enabling consumers to scan the barcodes on their own items and manually identify items such as fruits and vegetables, which are then weighed, stores can staff a six-station checkout with just a single person, cutting down on costs considerably.<br /><br />But by doing so, many stores have left themselves open to unscrupulous individuals who may either attempt to hoodwink such systems into believing that they are purchasing a lovely bunch of coconuts instead of a pack of somewhat more expensive sirloin steaks, or simply bag the items at the checkout without bothering to scan them at all.<br /><br />Apparently, things have got so bad on the pilfering front that New England-based <a href="" target="_blank">Big Y</a> (Springfield, MA, USA) has abandoned any more self-checkout ideas it had planned, citing both customer service and shoplifting concerns behind its decision.<br /><br />Fortunately, however, one company now believes it can offer a solution to the knotty problem -- a solution that is, of course, based around an intelligent vision analysis system. <br /><br />That company, <a href="" target="_blank">StopLift Checkout Vision Systems</a> (Cambridge, MA, USA), has developed a computer vision system that can interpret the behavior of the customer by analyzing and understanding body motions at the checkout. <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="160" src="" width="320" /></a></div><br />By analyzing the digitized video, the so-called &ldquo;ScanItAll system&rdquo; scrutinizes how each item is handled to determine whether or not it was properly scanned. The patented system is capable of understanding fraudulent behavior, including when a bar code is covered up by hand.<br /><br />Of course, there are alternative approaches to cutting down fraud. 
One such approach, which has been tested out by grocery retail giant <a href="" target="_blank">Kroger</a> (Cincinnati, OH, USA), involves the deployment of a tunnel called the Advantage Checkout. This system incorporates a battery of imaging scanners that read bar codes and letters and numbers on goods to identify them as they travel through the tunnel. <br /><br />Eliminating the need for customers to scan their own goods could prove equally effective, if not more so, at cutting down on theft as analyzing customers' movements from video footage. Combining the two approaches could mean that thieves are faced with a much tougher time should they try to purloin the sirloin in the future.<br /><br /><u><b>References: </b></u><br /><br /><a href=";printsec=frontcover&amp;dq=7631808&amp;hl=en&amp;sa=X&amp;ei=12GFT4KeBuiC2gXi3bHsCA&amp;ved=0CDQQ6AEwAA" target="_blank">Method and apparatus for detecting suspicious activity using video analysis</a> <br /><a href="" target="_blank">Self-Checkout, Too Easy to Steal?</a> <br /><a href="" target="_blank">Self-checkout lanes boost convenience, theft risk</a><br /><a href=";ctype=content" target="_blank">New Kroger Bar Code Scan Tunnel Could Revolutionize Retail Checkout</a><br /><br /><b>&hellip;vision of ripe fruit</b><br />By Andy Wilson<br /><br />Over the past few years, researchers have developed numerous robotic systems that can detect when fruits such as strawberries or tomatoes are ripe.<br /><br />For the most part, such systems work by locating a fruit on the plant and then analyzing its color with a vision system. Once the system has determined that a fruit is ripe, a robotic gripper is used to pick it off the plant.<br /><br />Now, of course, vision is just one of the ways that human beings determine whether fruit is ready to eat. 
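The color-analysis step described above can be sketched in a few lines. To be clear, this is not any particular vendor's method: the hue band and saturation cut-off below are invented thresholds for a red-when-ripe fruit, and a real system would calibrate them per crop under controlled lighting.

```python
import colorsys

# Invented, illustrative thresholds for a red-when-ripe fruit
# such as a strawberry or tomato.
RIPE_HUE_MAX = 0.08   # up to ~30 degrees: deep red through red-orange
RIPE_HUE_MIN = 0.95   # wrap-around: magenta-red
MIN_SATURATION = 0.3  # ignore washed-out (grey/white) patches

def mean_rgb(pixels):
    """Average an iterable of (r, g, b) tuples with 0-255 channels."""
    n = 0
    totals = [0.0, 0.0, 0.0]
    for r, g, b in pixels:
        totals[0] += r
        totals[1] += g
        totals[2] += b
        n += 1
    return tuple(t / n for t in totals)

def looks_ripe(pixels):
    """Classify a patch of fruit pixels as ripe if its mean hue is red."""
    r, g, b = mean_rgb(pixels)
    h, s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return s > MIN_SATURATION and (h <= RIPE_HUE_MAX or h >= RIPE_HUE_MIN)
```

In practice the pixel patch would come from a segmented fruit region in the camera image; the classification itself reduces to a hue comparison, which is why color-based ripeness checks are the easiest of these senses to automate.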
However, while vision is an important sense, we also rely upon a number of other senses to perform the same task -- notably, smell, touch and hearing.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="224" src="" width="320" /></a></div><br />So it is hardly surprising, then, that there are a number of folks who are also working to develop systems that can provide an automated alternative to those senses.<br /><br />Many researchers, for example, are working to develop, perfect and test &ldquo;electronic noses&rdquo; that can determine how mature a particular fruit is. Most of these electronic noses use sensor arrays that react to volatile compounds: the adsorption of volatile compounds on the sensor surface causes a physical change in the sensor, which can then be detected.<br /><br />Aside from sniffing fruit, human beings often give their produce a good squeeze to see if it is ripe enough to eat. That is especially true for fruit like avocados and mangos, which we squeeze to determine their hardness or softness.<br /><br />Now, squeezing fruit is a pretty straightforward task for a robot, especially one that might be equipped with capacitive-based pressure sensors on its grippers. Such sensors could be calibrated so that the system they are interfaced to could ascertain the ripeness of a fruit. What is more, they could potentially be used in conjunction with both the aforementioned vision and electronic nose on a future agricultural robotic harvester to great effect.<br /><br />Lastly, of course, let us not forget that some fruits have a characteristic sound when they are ripe, and so it is not uncommon to see some individuals tapping fruits like melons to determine whether they are ready to eat. 
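The tap test is, in principle, just as automatable: record the tap, estimate the dominant resonant frequency, and compare it against a calibrated band. The sketch below is purely illustrative -- the 150-250 Hz "ripe" band is an invented example value, and a real system would use a proper FFT library rather than this naive DFT.

```python
import cmath
import math

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the largest-magnitude DFT bin.

    A naive O(n^2) DFT is fine for a short tap recording in a sketch,
    but a production system would use an FFT library instead.
    """
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip the DC term and mirrored bins
        acc = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                  for t in range(n))
        mag = abs(acc)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * sample_rate / n

def melon_sounds_ripe(samples, sample_rate, lo=150.0, hi=250.0):
    """Invented rule of thumb: a ripe melon 'thumps' in a low band."""
    return lo <= dominant_frequency(samples, sample_rate) <= hi
```

Feeding it a short recording of the tap (a list of amplitude samples plus the sample rate) yields a yes/no answer; the real engineering work lies in calibrating the frequency band for each variety of melon.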
So it is obvious that by analyzing the sound made by a reverberating melon hit by an actuator, software could tell you whether the melon is ripe or not.<br /><br />Clearly, while robotic harvesting systems of the future might well deploy a vision system, they might also host a plethora of other sensory devices. For that reason, system integrators in the vision business could do worse than to take a few refresher courses on olfaction, tactile sensing and audio engineering before embarking on any new design!<br /><br /><u><b>References:</b></u><br /><br />1. <a href="" target="_blank">Japanese robots to harvest ripe fruits with super vision and efficiency</a><br />2. <a href="" target="_blank">Send in the robots to pick the ripest fruit</a><br />3. <a href="" target="_blank">Clever robots for crops</a><br />4. <a href="" target="_blank">Intelligent harvesting robot</a><br />5. <a href="" target="_blank">There's an app for that: test how ripe a melon is with iWatermelon for iPhone</a><br /><br /><u><b>More from <i>Vision Systems Design</i>:</b></u><br /><br />1.&nbsp; <a href="">Vision system helps sort carrots</a><br />2.&nbsp; <a href="">Machine-vision-based inspector sorts oranges and mandarins</a><br />3.&nbsp; <a href="">Hyperspectral imaging sorts blueberries</a><br />4.&nbsp; <a href="">Weeding system uses x-rays to detect tomato stems</a><br />5.&nbsp; <a href="">X-ray imaging checks cherry pits</a><br />6.&nbsp; <a href="">Vision system sorts strawberry plants</a><br />7.&nbsp; <a href="">Neya Systems awarded contract for produce classification</a><br />8.&nbsp; <a href="">Student app helps farmers to measure quality of rice</a><br />9.&nbsp; <a href="">Robotic image-processing system analyzes plant growth</a><br />10. <a href="">Vision system sorts out the plants</a><br /><br /><b>&hellip;distinct lack of vision</b><br />By Andy Wilson<br /><br />Like many folks today, I'm often looking for ways to save a buck or a dime. 
And one perfect way to save a few pennies, of course, is to use that highly popular piece of computer software called Skype.<br /><br />Now for those of you that may have been living on Mars for the past few years, Skype is a program that you can download onto your PC to enable you to engage in either voice or video conversations with other Skype users over the Interweb, saving you both vast sums of money on telephone calls - especially international ones.<br /><br />For my part, I have been using Skype to discuss all things related to machine vision systems design with our European Editor, checking in with him on a regular basis to discover all the <a href="">vision-related news </a>that's coming out of hotbeds of innovation in such far flung places as Stuttgart, Germany and Cambridge, England.<br /><br />But the other night, the course of the conversation turned away from vision systems and onto the subject of house renovation. You see, our <a href="">European Editor </a>has recently had his house extensively refurbished and I was keen to see what the results looked like as we chatted on his <a href="">webcam</a>.<br /><br />Sadly, however, the webcam was not connected to his laptop. Rather, it had been plugged into a heavy PC tower which could not be carried around the house. But I was so keen to see the results of the work that I instructed the wretched Editor to download Skype onto his notebook and to plug the webcam into that, so that he could move around while I gazed in awe at his freshly painted rooms.<br /><br />Unwilling to disobey orders, the Editor did indeed attempt to download the Skype software onto his notebook. In fact, he attempted to do so several times before finally giving up. 
As he tried the download time and time again with no apparent luck, his language became extremely colorful - so much so that if his comments were to be repeated here they would surely offend the sensibilities of many of our readers.<br /><br />The poor European Editor, it appeared, was having a great deal of trouble actually registering to download the program, all because it required him to input a &ldquo;Captcha&rdquo; code that would enable the program to ensure that the registration screen was actually being filled out by a real human being.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img src="" border="0" /></a></div><br />This particular Captcha test required our dejected Editor to retype a series of rather warped and crowded letters that were displayed on the screen into a field below them. But the trouble was that the images were so exceptionally mangled that our Editor had little chance of recognizing any of them. Even when the system presented new Captchas after his failures, he failed to identify any of the letters.<br /><br />Clearly, there's an opportunity here for anyone involved in the vision systems business to develop some vision recognition software that would enable folks like our Editor to simply point his webcam at the screen and automatically and accurately read out those codes.<br /><br />Some academics at <a href="" target="_blank">Stanford University</a> (Stanford, CA, USA) have already taken a crack at the problem, employing machine vision algorithms to successfully crack 66 percent of Visa's Captchas, 70 percent of Blizzard's, and 25 percent of Wikipedia's. 
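For the curious, recognition pipelines of that kind typically begin by cleaning up the Captcha image before any character recognition is attempted. The sketch below shows only that first, simplest stage -- binarizing a grayscale image and counting connected ink blobs as candidate characters. The fixed threshold is an invented example value; the Stanford work itself used far more sophisticated techniques.

```python
def binarize(image, threshold=128):
    """Map a grayscale image (rows of 0-255 ints) to 1 = ink, 0 = paper."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

def count_blobs(binary):
    """Count 4-connected ink blobs -- a crude proxy for character count."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]  # flood fill the whole blob
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and binary[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return blobs
```

The difficulty of a Captcha lies precisely in defeating this kind of segmentation: warped, touching letters make the blobs merge, which is why the cracking rates quoted above vary so widely between sites.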
Apparently, a 1 percent successful cracking rate is regarded as grounds for the Captcha's immediate discontinuation.<br /><br /><b>Reference:</b> &ldquo;<a href="" target="_blank">Stanford Boffins on the Brink of Breaking Captcha Codes</a>&rdquo;<br /><br /><b>&hellip;distinct lack of communication</b><br />By Andy Wilson<br /><br />Because of the highly application-specific nature of many <a href="">vision systems</a>, small to medium-sized enterprises must often bring together teams of engineers with highly-specialized knowledge to create new bespoke designs for their customers.<br /><br />Not only must these individuals have extensive experience in selecting the appropriate vision hardware for the job, but they must also be able to choose - and use - the appropriate tools to program the system.<br /><br />Most importantly, however, it is often the mechanical or optical engineer working at such companies who can make or break the design of a new vision system. Working with their hardware and software counterparts, these individuals can make vitally important suggestions as to how test and inspection fixtures should be rigged to optimize the visual inspection processes.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />Indeed, as smarter <a href="">off-the-shelf hardware</a> and <a href="">sophisticated software</a> relieve engineers from the encumbrance of developing their own bespoke image processing products, it is the optical or mechanical engineer that can often make the ultimate contribution to the success of a project.<br /><br />Sadly, however, the teams of mechanically-minded individuals employed by such companies often work in isolation, unaware of what sorts of mechanical or optical marvels may have been whipped up by their rivals - or even customers - to address similar issues to the ones that they are working on.<br /><br />That's hardly 
surprising, since many of these companies' customers force them to sign lengthy non-disclosure agreements (NDAs) before even embarking on the development work. Some customers even purchase the rights to the design after the machine has been built to prevent their rivals from developing similar equipment at their own facilities.<br /><br />But while this unbridled protectionism is understandable from the customer's perspective, preventing such information from being disseminated in the literature or on the Interweb actually does a complete disservice to the engineers working at the companies that are building the equipment.<br /><br />That's because, rather than being able to gain any sort of education from reading about how other engineers may have solved similar problems to their own - especially those in the all-important fields of mechanics and optics - they are effectively trapped in a secluded world where they can only call upon their own experience and ingenuity, which may or may not be enough for the job.<br /><br />Is it time then for systems integrators to politely ask their customers to forgo the signing of such NDAs so that such mechanical and optical information can be made more widespread for the benefit of us all? Perhaps it is. But I'd be a fool to think that it will ever happen.<br /><br /><b>&hellip;wonderful world of vision</b><br />By Andy Wilson<br /><br />Earlier in the month, whilst attending an entire <a href="">day's worth of seminars on 3-D imaging</a> presented by the folks at <a href="" target="_blank">Stemmer Imaging</a> (Tongham, UK), our European correspondent overheard a couple of CEOs discussing the lack of talented young engineers who were entering the wonderful world of vision systems design.<br /><br />As you might expect, the CEOs naturally felt that the low salaries offered by many of the small to medium-sized enterprises that build <a href="">vision systems</a> for large OEM customers had a lot to do with it. 
<br /><br />They believed that first-class engineering talent - in the UK at least - was still being attracted away from the engineering profession to enter less satisfying, yet ultimately more rewarding fields, such as investment banking. <br /><br />While there may be some truth to that statement, I can't help but think that there's more to it than that. I think that the real problem is that manufacturers just haven't done much at all to make products that are affordable enough to allow children to experiment with hardware and software at an early age.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="" width="320" /></a></div><br />Well, I'm pleased to see that there are some folks out there who agree with me. More specifically, the folks at a <a href="" target="_blank">Cambridge University</a> (Cambridge, UK) based spinout called <a href="" target="_blank">Raspberry Pi</a>. The clever folks there have developed and are now selling a small $25 <a href="" target="_blank">Arm</a> (Cambridge, UK) based computer that plugs into a TV and a keyboard and can be used by children (and interested adults!) as a means to learn programming.<br /><br />The idea behind the tiny and cheap computer for kids came in 2006, when Eben Upton and his colleagues at the <a href="" target="_blank">University of Cambridge&rsquo;s Computer Laboratory</a> (Cambridge, UK), including Rob Mullins, Jack Lang and Alan Mycroft, became concerned about the year-on-year decline in the numbers and skill levels of the A-Level students applying to read Computer Science.<br /><br />By 2008, processors designed for mobile devices were becoming more affordable, and powerful enough to deliver multimedia content, a feature the team felt would make the board desirable to kids who wouldn't initially be interested in a purely programming-oriented device. 
<br /><br />Eben (now a chip architect at Broadcom), Rob, Jack and Alan then teamed up with Pete Lomas at Norcott Technologies, and David Braben, co-author of the seminal BBC Micro game Elite, to form the Raspberry Pi Foundation, and the credit-card-sized device was born.<br /><br />This is a terrific idea and one which some of our esteemed colleagues who market both hardware and software for the vision systems business might like to consider emulating. What better way to encourage those young programmers to get hooked on vision systems design at an early age?<br /><br /><b>&hellip;all the bases</b><br />By Andy Wilson<br /><br />A few months back, I had the great pleasure of taking a little trip in my trusty motor vehicle to visit a company that has been solving problems in <a href="">test and automation</a> for more than 10 years. Indeed, during that time, the company has developed a plethora of systems thanks to its specialized technical knowledge of the vision systems business.<br /><br />After interviewing the co-founder of the company for a couple of hours, I returned to PennWell Towers to pen an article based on the design of a rather specialized, yet fascinating system that the engineers at the company had developed.<br /><br />After the article had been finished, it was published in the pages of <a href="">Vision Systems Design</a> magazine. And the response from the readers was excellent -- all thanks, in part at least, to the fact that the savvy Editor-In-Chief has a rather clever knack of identifying stories that will whet the appetite of you, our readers.&nbsp; <br /><br />Fast forward a few months, and I'm now being asked by the powers that be at PennWell Corporation to suggest a suitable candidate who might be able to deliver a webcast for Vision Systems Design on a subject involving the development of a system similar to the one that I had written about earlier. 
<br /><br />Absolutely convinced that the individual I had interviewed for the magazine article would make a perfect candidate for such a <a href="">web broadcast</a>, I put his name forward. And, after several more weeks had passed, his presentation was broadcast on the Interweb.<br /><br />The response to the presentation was as spectacular as the printed article had been. Indeed, after the presentation, the individual in question had several people email him to inquire whether he might be able to develop similar inspection systems for them. Needless to say, he was quite pleased by the whole affair, to say the least -- and so was I.<br /><br />For systems integrators who have identified particularly specialized markets and/or industries which they would like to target, there's a lesson to be learned here. Instead of just describing your capabilities on your web sites in writing, why not consider developing a <a href="">web-based seminar</a> on the subject too?<br /><br />Clearly, the financial reward can be well worth the effort. And if you think that we here at Vision Systems Design can help you in such a venture, why not <a href="" target="_blank">drop me an email</a>?<br /><br /><b>&hellip;clear as daylight</b><br />By Andy Wilson<br /><br />This month, a leading engineering integrator of robotic and machine vision systems and a fastening technology company announced they had joined forces to introduce robotic capabilities uniquely engineered for a fastener installation press. <br /><br />According to a statement issued on behalf of the two companies, the partnership 'provides an unprecedented marketplace opportunity for completely hands-off installation of self-clinching fasteners using the newly developed robotic system.'<br /><br />The statement goes on to say that 'the robotic system developed for the fastening technology company is equipped to pick up, move, and position a work piece for alignment with holes where fasteners will be installed automatically by the press. 
After fastener installation, the robot removes the finished work piece and advances to the next job.'<br /><br />The system -- apparently -- is offered in two standard packages to accommodate various work piece sizes. The turnkey system integrates a conveyor with locator, output conveyor, gripper, robot, and a basic sequence program to interface with the press. Among available robot options, a robot slide can be supplied to expand work cells for multiple presses. <br /><br />The robotic system sounds like a step in the right direction for both companies. However, there's only one slight hitch. When <a href="">Vision Systems Design</a> contacted the leading engineering integrator of robotic and machine vision systems to discover whether or not the system had incorporated any interesting vision systems, we were informed that the system had not been built yet. <br /><br />Clearly then, this is yet another of those common instances where overeager companies and their public relations firms have forgotten how to use the future tense, leaving their customers somewhat disappointed that they cannot obtain a system that they may believe has already been built.<br /><br /><b>&hellip;capture wildlife up close</b><br />By Andy Wilson<br /><br />Keen to capture images of animals in their natural habitat in as up-close and personal a manner as possible, a UK-based photographic team has taken it upon themselves to develop remote-controlled armored cameras.<br /><br />The new cameras aren't the first of William and Matthew Burrard-Lucas' efforts to use imaging devices in such a fashion. In fact, it was three years ago that they first came up with the idea of embarking on a project to get unique close-up, ground-level photographs of African wildlife. <br /><br />To do just that, William Burrard-Lucas built the first of the so-called <a href="" target="_blank">BeetleCams</a> -- a remote-controlled buggy with a DSLR camera mounted on top. 
The pair then travelled to Tanzania and used the buggy to capture photographs of <a href="">elephants</a> and buffalo. Sadly, however, the BeetleCam was almost destroyed in their only encounter with a lion.<br /><br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="157" src="" width="320" /></a></div><br />Not entirely satisfied with their efforts, they went about developing lion-proof versions of the BeetleCam -- one with more advanced capabilities and one with an armored shell. They then returned to Africa in 2011 to photograph the lions of the Masai Mara. Once again, their BeetleCam received a battering, but it survived, and they came back with a portfolio of lion photographs that exceeded all their expectations.<br /><br />And the development of the BeetleCam continues unabated, apparently. William Burrard-Lucas has now created a third generation of BeetleCam which has evolved to take into account the pair's experiences from previous trips. <br /><br />But the really good news is that these BeetleCams are now available to purchase by anyone who might also be inspired to take <a href="">pictures of wildlife</a> in a similar fashion. The bad news is that the starting price for a basic BeetleCam is GBP1,250, which is pretty close to $2000, give or take a few British shillings. Having said that, each of the BeetleCams is custom built to meet the requirements of the user.<br /><br />The photography duo have stressed that their BeetleCams must be used responsibly. They urge those who might deploy one to respect the <a href="">animals they are photographing</a> and back off immediately if they think that the BeetleCam might be causing distress to the animals.&nbsp; <br /><br />I'd buy one myself. 
But there just aren't enough wild animals in New Hampshire to make it all that worthwhile, unless one counts some of the folks that frequent some of the local watering holes on a Friday night, that is. And I couldn't promise that taking pictures of such people wouldn't cause a considerable amount of distress.<br /><br /><b>&hellip;image processing relieves male shoppers from drudgery</b><br />By Andy Wilson<br /><br />Most of the ladies that I've met in my life enjoy shopping for clothes. Most men, on the other hand, would seem to absolutely despise it. In fact, there's only one thing that men dislike more than shopping for clothes for themselves, and that's accompanying their significant other to purchase a new pair of jeans or a pretty frock.<br /><br />According to a straw poll taken among some of my closest male friends, the reason for this is the endless amount of time such shopping sprees take. Naturally, this is because, while two people might be the same height and wear the same size, the way their clothing fits their bodies can vary dramatically. As a result, up to 40 percent of clothing purchased both online and in person is returned because it doesn't fit properly.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="192" /></a></div><br /><br />Now, however, thanks to the power of Microsoft's (Redmond, WA) <a href="">Kinect</a>, those hours of waiting for your partner as she spends time in the changing room of the local department store might soon become a thing of the past.<br /><br />That's right! 
Much to the relief of menfolk worldwide, a London, UK-based outfit called <a href="" target="_blank">Bodymetrics</a> has taken eight of the <a href="">Kinect for Windows</a> sensors and integrated them into a 3-D <a href="">body-mapping system</a> called the Bodymetrics Pod, a system that was recently introduced to American shoppers during "Women's Denim Days" at Bloomingdale's in Century City, Los Angeles, CA.<br /><br />During Bloomingdale's Denim Days, held from the 15th to the 18th of March, customers are able to get their body mapped and also become a Bodymetrics member. The free service enables customers to access an online account and order jeans based on their body shape. But the best thing of all must surely be the fact that the entire body-mapping process takes less than 5 seconds!<br /><br />As a taste of things to come, customers will also be able to virtually see how jeans look on their body -- whether they are too tight, too loose, or just a perfect fit.<br /><br />Helping women shop for jeans in department stores is just the start of what Bodymetrics envisions for its body-mapping technologies. The company is working on a solution that can be used at home too. By using that, individuals will be able to scan their bodies and then go online to select, virtually try on, and purchase clothing that matches their body shape. So soon, there may be no need to leave the house either!<br /><br />Those interested in 3-D body mapping might be interested to know that there is an entire conference dedicated to this fascinating subject. 
"<a href="" target="_blank">3D Body Scanning Technologies"</a> takes place every year in October in Lugano,, drop-off made easier with visionnoemail@noemail.orgAndy WilsonThere's nothing our <a href="">European Editor</a> likes more each year than taking a couple of weeks or more away from the hustle and bustle of work to spend his time loafing around in France's sunshine playground and enjoying fine wine and delicious food.<br /><br />The trouble is, he does not much care for the journey to get there, nor the journey back. The UK airports are crowded affairs, you see, and boarding a plane often entails waiting in very long lines before one can obtain a boarding pass and check in one's bags.<br /><br />However, thanks to a rather innovative system developed by Netherlands-based <a href="" target="_blank">BagDrop</a> (Rotterdam, the Netherlands), our European Editor's personal issues at the airport might finally be a thing of the past.<br /><br />The new BagDrop systems looks set to make it easier for passengers to check in for a flight, obtain a boarding pass, as well as <a href="">drop off their luggage</a>, all without the need for any involvement from airport personnel.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br /><br />Using the system, a passenger is first identified through the use of a scanner that can read passports and barcodes from flight confirmation documents e-mailed by airlines, after which flight details are verified. The passenger is then issued with a boarding pass and a barcode label that must be affixed to his or her bag.<br /><br />In the next step, the bag is deposited into the BagDrop unit, which then checks the barcode to determine whether it was the same one that was printed for the passenger. 
In addition to calculating the volume of the bag using a <a href="">3-D imaging system</a>, the unit also measures a number of other parameters, including the bag's weight, dimensions, and shape, to determine whether or not it can be conveyed. For liability reasons, it also captures an image of the exterior of the bag.<br /><br />Once the bag has been accepted by the system, it then prints a claim tag that passengers can use to verify the identity of the bag upon arrival at their final destination.<br /><br />The BagDrop system is already up and running at Schiphol Airport, where it has processed thousands of passengers. Unfortunately, our European Editor hasn't seen any of the systems at Heathrow Airport yet, so he's still going to have to wait in a long line to take his next holiday flight to France. That's if I allow him to take any holiday this year.<br /><br /><b>&hellip;your beer with a QR code</b><br />By Andy Wilson<br /><br />Originally invented by Toyota subsidiary Denso Wave back in 1994 to track vehicles during manufacturing, the <a href="">QR code</a> has now become popular outside the automobile industry thanks to the prevalence of intelligent handheld devices such as smart phones equipped with inexpensive image sensors.<br /><br />One of the latest companies to use the code in a rather innovative way is the US beer giant Anheuser-Busch. It has recently started to print QR codes on Budweiser packaging to allow consumers to trace the origins of the beer they hold in their hands back to one of the company's 12 US breweries.<br /><br />Yes, that's right. 
By using a <a href="">smart phone</a> equipped with a free "Track Your Bud" app, beer drinkers across the US can now scan the QR code on their Budweisers, enter the "Born On" date found on bottles and cans, and be taken on a guided tour of the creation of their individual beer by the Budweiser brewmaster responsible for it.<br /><br /><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="252" src="" width="320" /></a><br />The "Track Your Bud" app provides beer drinkers with an insight into the source and selection of ingredients for the beer, Budweiser's seven-step brewing process, when their beer began beechwood aging and which brewmaster tasted it multiple times throughout its brew cycle to ensure it met Budweiser's standards.<br /><br />"Track Your Bud also gives consumers insight into where Budweiser's raw materials come from -- including barley farms in Idaho, Montana, North Dakota, Minnesota, and Wisconsin; and hops farms in Idaho, Washington, Oregon, and Germany," says Jane Killebrew-Galeski, director of brewing, quality, and innovation for Anheuser-Busch.<br /><br />While the inventors of the QR code may never have dreamt that the technology that they developed all those years ago would ever be put to such innovative uses, this is clearly the wave of the future.<br /><br />Indeed, I can't wait until the day when all companies follow the responsible lead set by the US brewing giant. 
Enabling consumers to <a href="">trace the origins of the products</a> they purchase directly back to the specific individuals at the company that made them will allow them to see exactly who to contact should anything go wrong.<br /><br />For more information about the new Budweiser "Track Your Bud" program, check out <a href="" target="_blank"></a> or download the app.<br /><br />Bottoms up!<br /><br />…problem with vision<br /><br />It's a common saying amongst those folks in the Vision Systems Design world that the first five years in the business are the hardest.<br /><br />Now the reason for this is quite simple. Building an effective vision system for a particular customer demands that an engineer at a <a href="">system integrator</a> have a wealth of experience at his or her disposal.<br /><br />That's because vision systems aren't trivial to build. System integrators must first gain a very detailed understanding of the specific inspection task that needs to be performed by the system before they even go near a keyboard to use their knowledge in <a href="">specifying the hardware</a> and <a href="">software</a> that may be suitable for the job.<br /><br />Now while the system integrator is usually well aware of the specific set of questions that he needs to ask his potential customer before embarking on a project, oftentimes the customer in question will have no idea why he is asking them.<br /><br />That's because the hardware and the software that might be needed for such a system are highly dependent on a number of factors that the customer may not realize are vitally important considerations.<br /><br />So what tends to happen is that -- in some instances at least -- the system integrator gets involved in an exercise that is almost as painful as extracting teeth from the potential customer before he can get to work developing a system.<br /><br />Usually, of course, after the specifics of the inspection and the conditions under which a
component must operate have been determined, it's plain sailing -- but not always.<br /><br />Having finally understood the capabilities of the vision system, the customer may then decide that he or she could potentially extend these capabilities to capture even more data from such a system, which could be used elsewhere in his or her production process.<br /><br />The hapless system integrator may then be called upon to modify a system that is almost completed to perform a bunch of additional tasks for which it wasn't initially intended.<br /><br />Needless to say, this all costs money. And while most system integrators are happy to make an additional buck for such work, they do so while cursing the customer for not thinking things through at the outset.<br /><br />Now it would be wonderful if it were possible to create a generic template of questions that system integrators could ask their customers to fill out prior to any initial meeting, a move that could avoid any potential strife.<br /><br />Unfortunately, due to the highly application-specific nature of the vision systems game, I don't really believe that's a viable option -- unless, that is, we can create an <a href="">artificial intelligence (AI) system</a> to extract the experience from those with years of industry background. Surely that would be a worthy goal for any of our imaging and vision associations to have a crack at.<br /><br />…European look at 3-D imaging<br /><br />Interest in <a href="">3-D imaging</a> has never been greater.
Everyone, it seems, is interested in knowing what applications this technology might help solve.<br /><br />So when I heard that the good folks at <a href="">Stemmer Imaging</a> (Tongham, UK) were putting on an entire day's worth of seminars on the subject at Mercedes-Benz World in Weybridge (located in Surrey, England), I instructed our European correspondent to scream down there in his old Honda Civic to check things out.<br /><br />Now you might think that folks who develop systems to inspect paper towels, examine the surfaces of women's skin, or scrutinize the tread on skis wouldn't have much in common. But <a href="">our assiduous European editor</a> discovered differently at the event.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="113" src="" width="320" /></a></div><br />That's right. At one particular seminar presented by Christian Benderoth, the sales manager of <a href="" target="_blank">GFMesstechnik</a> (Berlin, Germany), he discovered that manufacturers of all these products are using a rather nifty little handheld <a href="">3-D image scanner</a> to inspect the surfaces of their products.<br /><br />The product in question -- which has been developed by GFMesstechnik -- is called the MikroCADlite. It is, in essence, a small scanner that uses a digital light projector (<a href="">DLP</a>) from Texas Instruments (Dallas, TX, USA) to project a <a href="">structured light</a> pattern onto a surface. The images are captured by an imager in the scanner, then analyzed to reveal the details of surfaces.<br /><br />According to Benderoth, folks that make kitchen rolls embossed with aesthetically pleasing structures have used the scanner to produce color-coded height images of the embossed paper, in a move that allows them to check the consistency of the pattern. 
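The color-coded height images Benderoth described come down to mapping each height sample to a color. Here is a minimal sketch of that idea in Python -- the blue-to-red ramp and the sample values are purely illustrative, not GFMesstechnik's actual processing:

```python
def height_to_rgb(h, h_min, h_max):
    """Map a height value to an RGB triple on a simple blue-to-red ramp."""
    span = h_max - h_min
    t = (h - h_min) / span if span else 0.0   # normalize; guard flat surfaces
    # Low areas render blue, high areas red, mid-heights green-ish.
    return (int(255 * t), int(255 * (1 - abs(2 * t - 1))), int(255 * (1 - t)))

def color_code(height_map):
    """Convert a 2-D grid of height samples into a 2-D grid of RGB triples."""
    flat = [h for row in height_map for h in row]
    h_min, h_max = min(flat), max(flat)
    return [[height_to_rgb(h, h_min, h_max) for h in row] for row in height_map]

# A 2x3 patch of embossed-paper heights (micrometres, made-up values):
patch = [[10.0, 12.5, 15.0],
         [10.0, 20.0, 15.0]]
coded = color_code(patch)
# The lowest sample maps to pure blue, the highest to pure red:
assert coded[0][0] == (0, 0, 255)
assert coded[1][1] == (255, 0, 0)
```

A consistency check on the embossing pattern then reduces to comparing such color-coded maps (or the underlying height grids) patch by patch against a golden reference.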
Their counterparts in the cosmetics business have created maps of the surface of skin around women's eyes to ascertain the before and after effects of cosmetics such as anti-aging cream. Not to be left out, engineers developing skis are using the system to capture 3-D images from which they can produce plots of the variation in tread across the bottom of a ski.<br /><br />After the presentation, our European Editor decided to see what literature he could pick up on the MikroCADlite system to take home and read during what little leisure time he has left after he's finished working on the <i>Vision Systems Design</i> magazine and web site.<br /><br />While flipping through the pages of the document back home, he discovered that the MikroCADlite system had also been put to use examining the decor of car interior parts. More specifically, at the back of the document there was a comparison between the decor of a 2005 BMW and a Honda Civic not entirely dissimilar to his own (see image).<br /><br />Now, of course, the sorry old fool is trying desperately to interpret the images to discover whether his UK-made Honda car has a superior interior finish to its German counterpart. It's the last time I send him…<br /><br />Vision Systems Design: The Swimsuit Issue, Part 2<br /><br />Hard on the heels of the fascinating revelations in last week's blog (<a href="">Vision Systems Design: The Swimsuit Issue</a>) comes the news that the annual <i>Sports Illustrated</i> swimsuit issue has once again employed some rather nifty imaging technology to bring more life to the swimsuit models featured on its pages.<br /><br />For the second year in a row, the publisher has digitally watermarked photos of the scantily clad ladies in its issue using <a href="" target="_blank">Digimarc's</a> (Beaverton OR, USA) digital watermarking technology.
For those of you who may not know, this digital watermarking technique embeds an imperceptible pattern into the image that can be detected by a smart phone but not by the human eye.<br /><br />Unlike a QR code, these watermarks have the advantage that they do not cover a part of an image or ruin the design of the pages of the swimsuit issue -- something that I'm sure would upset many of the magazine's readers if they did.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="95" src="" width="320" /></a></div>To launch the behind-the-scenes videos of the nineteen different swimsuit models, users simply need a mobile swimsuit viewer app -- available for iPhone, iPad, and Android -- which enables their devices to detect the digitally watermarked photos and launch the videos associated with them.<br /><br />The swimsuit viewer mobile app and video delivery process were built and are managed by <a href="" target="_blank">Nellymoser</a> (Arlington, MA, USA) -- a mobile marketing and technology services company.
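Digimarc's watermarking algorithm is proprietary, but the general idea of hiding a machine-readable pattern in pixel data without visibly altering the image can be illustrated with a toy least-significant-bit scheme -- a deliberately simplified stand-in, not the real technique:

```python
def embed(pixels, bits):
    """Hide a bit string in the least-significant bits of 8-bit pixel values.

    Changing only the LSB alters each pixel by at most 1 out of 255, which
    is imperceptible to the eye but trivially machine-readable.
    """
    assert len(bits) <= len(pixels)
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)   # clear LSB, then set payload bit
    return out

def extract(pixels, n):
    """Read back the first n hidden bits."""
    return ''.join(str(p & 1) for p in pixels[:n])

cover = [200, 201, 119, 54, 87, 33, 140, 9]   # illustrative pixel values
marked = embed(cover, '1011')
# No pixel moves by more than one gray level, yet the payload survives:
assert all(abs(a - b) <= 1 for a, b in zip(cover, marked))
assert extract(marked, 4) == '1011'
```

A production watermark spreads its payload across frequency-domain coefficients so it survives printing and re-photographing with a phone camera; this LSB toy would not, but it shows why the embedded pattern is invisible.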
The app and video are powered by Nellymoser's "mobile engagement platform," which enables digital content to be presented across desktops, tablets, and phones.<br /><br />While the development of such a watermark image encoder and reader app might seem somewhat trivial to those of us in design and manufacturing, it is in fact high-volume applications such as these that may ultimately drive down the cost of the technology, allowing it to be used by all sorts of folks in industrial marketing too.<br /><br />Having said that, enabling a user to launch an interactive video of a machine vision system for automating the process of <a href="">de-boning lamb carcasses</a> or <a href="">removing the internal organs of animals such as cows or pigs</a> from a promotional flyer distributed at a trade show might be taking things a little too far.<br /><br />Vision Systems Design: The Swimsuit Issue<br /><br />Many popular magazines such as <i>Sports Illustrated</i> rely on publishing yearly issues dedicated to celebrating the predominantly female human form in swimsuits, a cunning wheeze that boosts circulation and sales.<br /><br />For years I have been wondering how a magazine such as <i>Vision Systems Design</i> might possibly be able to justify covering such a subject to the same effect. And recently I discovered the answer to my prayers.<br /><br />The solution, naturally enough, lies in writing numerous stories about <a href="">particle image velocimetry</a>, a technique that uses a <a href="">laser</a> to illuminate millions of reflective particles in water.
When images of the same are then captured by <a href="">high-speed cameras</a>, they allow researchers to observe how the particles move around objects found in the water -- including, of course, folks wearing swimsuits.<br /><br />Through the use of such equipment, researchers hope to be able to develop more high-tech swimsuits that would give athletes a competitive advantage by reducing the drag of the water around their bodies as they swim.<br /><br />Now there's been quite a lot of research work performed in this area, predominantly at <a href="" target="_blank">Leeds University</a> (Leeds, UK), where a Speedo-sponsored team led by Professor Jeff Peakall has been engaged conducting tests to examine how efficiently different fabrics move through the water.<br /><br />Most recently, the university team was commissioned by the swimwear company to assist in the development of its new FASTSKIN3 Racing System swimsuit and spent 18 months testing levels of "fabric drag."<br /><br />In a statement to the press this month, Peakall said, "We're really excited because I think we've found out that some of the materials are appreciably faster than anything we've seen before, and I'm absolutely confident that this is going to be of great benefit to competitive swimmers."<br /><br />Not everyone is so optimistic. Take George Lauder, the Henry Bryant Bigelow Professor of Ichthyology at Harvard University (Cambridge, MA, USA), for example. 
He argues that the notion that simply donning a different swimsuit -- like a Speedo FASTSKIN II suit, with a surface purportedly designed to mimic shark skin -- will give a swimmer an edge on the competition is almost completely misplaced.<br /><br />Experiments conducted in Lauder's lab, and described in <i><a href="" target="_blank">The Journal of Experimental Biology</a></i>, reveal that, while sharks' sandpaper-like skin does allow the animals to swim faster and more efficiently, the surface of the high-tech swimsuits has no effect when it comes to reducing drag as swimmers move through the water.<br /><br />Indeed, Lauder claims to have conclusively shown that the surface properties themselves, which the manufacturer has in the past claimed to be biomimetic, don't do anything for propulsion.<br /><br />That's not to say that the suits as a whole do nothing to improve performance. Lauder also reasons that there are all sorts of effects at work that aren't due to the surface effects of the swimsuit.<br /><br />"Swimmers who wear these suits are squeezed into them extremely tightly, so they are very streamlined. They're so tight that they could actually change the circulation (of the swimmer), and increase the venous return to the body, and they are tailored to make it easier to maintain proper posture even when tired. I'm convinced they work, but it's not because of the surface," he says.<br /><br />All that remains to be seen now is whether my swimsuit column has done anything to improve the circulation of <i>Vision Systems Design</i> and boost its companion web site page views.<br /><br />References:<br />1. <a href="" target="_blank">Flumes and lasers test elite sportswear</a><br />2. 
<a href="" target="_blank">Skin deep</a><br /><br />In search of a scoop<br /><br />Ever in search of an exclusive scoop to whet the appetites of <i>Vision Systems Design</i> readers, I often spend the evenings trawling through the numerous announcements made by researchers on scientific web sites such as <a href="" target="_blank">Eurekalert</a> and <a href="" target="_blank">Alphagalileo</a>.<br /><br />This week, through my web surfing activity, I came across the news that a group of researchers in the US and the UK have developed a means to put <a href="">MRI scanners</a> to work capturing images of the surfaces of the conductors found in lithium-ion batteries.<br /><br />As many of you know, <a href="">magnetic resonance imaging</a> (MRI) is a medical imaging technique widely used in radiology that can create detailed images of structures within the body.<br /><br />However, the strong magnetic fields used in such systems make them unsuitable for patients with metal implants because the conducting surfaces block the radio-frequency fields used in the systems.<br /><br />Now, researchers at Cambridge University (Cambridge, UK), Stony Brook University (Stony Brook, NY, USA), and New York University (NYU; New York, NY, USA) have turned that limitation into a virtue, using an MRI scanner to directly visualize the build-up of lithium metal deposits on the electrodes of lithium-ion batteries.<br /><br />Their work visualizing small changes on the surface of the batteries' electrodes might in principle allow many different battery designs and materials to be tested under normal operating conditions.<br /><br />Indeed, Professor Alexej Jerschow from the Department of Chemistry at NYU said that using such noninvasive MRI systems could provide insights into the <a href="">microscopic processes inside batteries</a>, which hold the key to eventually making them lighter, safer, and more versatile.<br /><br />I'm sure that one day they might
be able to. Especially considering the number of other research groups across the world that are also using MRI systems to study electrochemically induced structural changes in batteries. You only have to perform a Google search of the scientific literature to find out just how many there are.<br /><br />So while the news might be interesting, it wasn't exactly the scoop that I was looking for when I began my search for an exclusive "new technology." Then again, perhaps the story simply illustrates the point that very few researchers work in isolation, and that innovative ideas often occur as a result of methodical, cooperative scientific effort and rarely through one single individual's "Eureka" moment.<br /><br />Those who are interested in learning more about the technique can find a complete technical description at <a href="" target="_blank"></a>.<br /><br />You press the button, we do the rest<br /><br />This week, <a href="">Eastman Kodak</a> announced that it was to phase out its digital camera, pocket video camera, and digital picture frame businesses in the first half of this year.<br /><br />Founded by George Eastman in 1889, the company made its name selling inexpensive film cameras and making large margins from the film, chemicals, and paper that were required to capture and develop the images that they took.<br /><br />Now, by getting out of the digital camera business that replaced its old film cameras, the company expects to achieve operating savings of more than $100 million a year.<br /><br />It's a sad state of affairs but hardly unexpected.
The move comes hot on the heels of <a href="">last month's announcement</a> that the company had filed voluntary petitions for Chapter 11 business reorganization in the US Bankruptcy Court for the Southern District of New York.<br /><br />But perhaps it doesn't quite mean the end of the Kodak brand, because the company is seeking to expand its current brand licensing program by looking for interested parties to license the products instead.<br /><br />It's better news, thank goodness, at the company's <a href="">former Image Sensor Solutions</a> (ISS) division. Now called Truesense Imaging -- after being acquired from Eastman Kodak by Platinum Equity in a transaction that closed on Nov. 7, 2011 -- it would appear that the only thing to have changed at the company is its name.<br /><br />Truesense Imaging, still with its headquarters in Rochester, NY, has kept Kodak's research and development, marketing, and business operations intact, including its highly specialized <a href="">image-sensor manufacturing</a> operation.<br /><br />The name change, which was also only announced this week, is so recent that when our European correspondent met up with Truesense Imaging's Michael DeLuca at the <a href="">AIA Business Conference</a> just a few weeks ago in Orlando, Florida, he was still carrying a Kodak business card, which was -- naturally enough -- printed on Kodak paper.<br /><br />Perhaps he should hang onto it. It might be worth some money as an antique in the future.<br /><br />…systems come to the aid of homeowners<br /><br />When my brother's <a href="">sewer pipe blocked up</a> last year, he called out the helpful chaps from Dyno-Rod.
But when it became clear that the cause of the blockage wasn't immediately obvious, they took a closer look with their CCTV equipment.<br /><br />Now, usually blocked drains can be caused by a number of factors, but most of the time it is simply a buildup of whatever's gone into the drain.<br /><br />But this time around, it wasn't. No, the CCTV footage showed quite clearly that the cause of the problem was roots that had grown into the pipework from a tree that had been planted close to the house.<br /><br />Having determined the cause of the problem, it was simply a case of hauling the tree out of the backyard, getting the roots out of the pipe and relining it.<br /><br />More recently, my brother's been in contact with me again. This time around, he is concerned that the gutters around the top of the house have been blocked with leaves and debris, causing water to overflow into the yard rather than drain away through the downpipes on the side of the house.<br /><br />Fortunately, after a quick look on the web, I discovered yet another system -- and one that also partially makes use of vision -- that might be able to help him here too.<br /><br />The new system, called the VertaLok Rotary Gutter Cleaning System, provides an effective solution to my brother's problem -- all without him having to put a foot on a ladder.<br /><br />The product itself consists of a number of extendable pole sections with an internal rotary drive that can be connected to a cordless drill, and a water channel that can be hooked up to a garden hose.<br /><br />On the end of the pole section -- the end that is destined for the gutter -- a user can attach a number of tools. 
These include a rotary paddle brush that works with the cordless drill for removal of leaves and loose debris, a gutter scoop with water jet nozzle for wet compost and tough debris, and a gutter brush with water jet nozzle for cleaning and rinsing.<br /><br />But the best bit of the whole system must surely be the mounting bracket that will allow my brother to attach a <a href="">digital camera</a> to the system, enabling him to perform a video inspection from ground level before or after he has finished cleaning his gutters.<br /><br />Now this development might not sound like the most <a href="">novel use of a vision system</a> that we have ever covered here on the Vision Systems Design web site. And indeed, if you think so, then I'd be tempted to agree with you.<br /><br />But it does go to show just how pervasive image capture and analysis is becoming in all our daily lives.<br /><br />…tutorials help demystify the selection process<br /><br /><a href="">Determining the best industrial camera</a> for your new system before committing to a final design is important, because choosing the appropriate camera from the outset can eliminate costly redesigns or upgrades in the future.<br /><br />For many of us that have been involved in the vision systems design business for a while, the selection of cameras best suited to perform a particular role in a system may come as second nature.<br /><br />However, for those folks who have entered the industry a little more recently, the plethora of camera manufacturers and the different types of products that they make might make such a selection process a rather daunting one.<br /><br />Now, to help such folks out, engineers at <a href="" target="_blank">Adimec</a> (Eindhoven, the Netherlands) have written a <a href="" target="_blank">three-part blog series</a> in which they suggest a step-by-step approach that will hopefully help in the camera selection process to enable you to find the best candidate
for your particular job.<br /><br />While the selection of the appropriate camera is undoubtedly vital, the choice of lighting is no less important and, indeed, often plays a critical role in creating a robust vision inspection system.<br /><br />Thankfully, there's also a rather good tutorial on the web that might help you in that area too. Written by Daryl Martin, from <a href="" target="_blank">Advanced illumination</a> (Rochester, VT, USA), and available on the <a href="" target="_blank">National Instruments site</a>, it demonstrates many machine-vision lighting concepts and theories.<br /><br />If these two web sites provide you with some assistance in the task of choosing a camera or a <a href="" target="_blank">lighting system</a> -- or if you know of any other resources that you think might be useful to others -- <a href="" target="_blank">why not drop me a line</a> to let me know?<br /><br />…the temperature of elephants<br /><br />Many of our readers will be familiar with the principle of operation of <a href="">thermal imaging (infrared) cameras</a> and how they can be used in a variety of applications, ranging from determining the <a href="">thermal loss of buildings</a> and <a href="">detecting specific gases</a> to <a href="">monitoring production processes</a>.<br /><br />But like me, most people might be surprised to hear that a group of researchers from the <a href="" target="_blank">University of Guelph</a> (Ontario, Canada) are now using such cameras to study the thermoregulation of animals such as elephants.<br /><br />That's right. Esther Finegan, a member of the Department of Animal and Poultry Science (APS), and her students have filmed elephants in Busch Gardens zoological park in Florida with a thermal imaging camera to see how and when they store and radiate heat.
She and her students are now pioneering similar thermoregulation studies at the Toronto Zoo.<br /><br />While the use of thermal imaging will undoubtedly prove to be an invaluable tool that will enable zookeepers and landscape architects to better design the animals' surroundings to keep them happy and healthy, this isn't the only means by which researchers have measured the temperature of such beasts.<br /><br /><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="180" src="" width="320" /></a><br />Last year, for example, scientists at the Research Institute of Wildlife Ecology (FIWI) at the <a href="" target="_blank">University of Veterinary Medicine</a> (Vienna, Austria) showed that Asian elephants respond to high daytime temperatures by significantly lowering their body temperature during the cooler night hours. By doing so they create a thermal reserve that allows them to store heat and so prevent heat stress as temperatures rise during the day.<br /><br />To reach that conclusion, they fed small telemeters to a group of captive elephants in Thailand and a group at the Munich Zoo Hellabrunn to monitor temperatures in the animals' gastrointestinal tract. The telemetry system, which permits the continuous recording of temperature, had previously been developed at the Research Institute of Wildlife Ecology.<br /><br />Statistical analysis of the data confirmed that while the overall mean body temperature was similar in both the Thai and the German elephants, fluctuations in body temperature were on average twice as large in the Thai animals as in the German ones. 
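The comparison the FIWI team ran -- similar mean body temperatures but very different daily swings -- reduces to simple summary statistics over each animal's temperature log. A sketch of that calculation with invented readings, not the published data:

```python
def summarize(temps):
    """Return (mean, fluctuation) for one day's body-temperature readings,
    where fluctuation is the peak-to-trough range."""
    mean = sum(temps) / len(temps)
    return mean, max(temps) - min(temps)

# Illustrative gastrointestinal readings in deg C (made-up values):
thai = [35.2, 35.6, 36.4, 37.1, 36.8, 35.9]
german = [36.0, 36.2, 36.5, 36.7, 36.5, 36.3]

thai_mean, thai_range = summarize(thai)
german_mean, german_range = summarize(german)
# Similar means, but the Thai animal's daily swing is over twice as large:
assert abs(thai_mean - german_mean) < 0.5
assert thai_range > 2 * german_range
```

With the telemetry system logging continuously, exactly this kind of per-day reduction is what turns a raw temperature trace into the mean-versus-fluctuation comparison reported by the scientists.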
The Thai animals had both a higher daily peak temperature and a lower minimum temperature, which the scientists related to the higher mean ambient temperatures in Thailand.<br /><br />In fact, the body temperature of the Thai elephants dropped at night to well below the normal average, meaning that Thai elephants start the day with a much larger thermal reserve than their German counterparts.<br /><br />It just goes to show that, just as there's more than one way to skin a cat, there is also more than one way to take the temperature of an elephant. But if I were an elephant, I'd probably prefer the noninvasive image-processing approach rather than ingesting a telemetry system.<br /><br />…teams aided by image capture and transmission...and dogs<br /><br />Getting a person into an inhospitable location such as a <a href="">disaster zone</a>, or an area of conflict, isn't always easy. Dogs, however, don't have the same sorts of issues and can travel places where an individual might have difficulty.<br /><br />So <a href="">why not equip them with cameras</a> and microphones, so their handlers can see exactly what they're up to and whether they may have spotted anyone in distress?<br /><br /><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="214" src="" width="320" /></a><br />Indeed, that high-tech vision of the rescue dog of the future was exactly what engineers at <a href="" target="_blank">Wood &amp; Douglas</a> (Tadley, Hampshire, UK) had in mind when they developed the Portable All-terrain Wireless System (PAWS) -- a system that is, naturally enough, designed to be worn by search and rescue dogs.<br /><br />Despite looking like a Hollywood sci-fi creation, with its head-mounted video camera and microphones, PAWS lets a rescue dog search without any discomfort, beaming video images back to its handler.<br /><br />With a camera that supports low light and infrared night-vision options,
the dog-mounted video system can be used for search and rescue, supporting military operations, or even <a href="">explosives and drug detection</a>.<br /><br />Alan Wood, managing director of Wood &amp; Douglas, said that while it may look unusual or raise a smile at first sight, the capability to see a dog's point of view makes a hazardous job safer for both handler and dog and helps save lives. The dogs are not put off by the technology they carry and can give their handler a view of areas that they are unable to get to themselves.<br /><br />The company says that PAWS can be adapted to be worn by different dogs, delivering a video feed in real time to either a desktop or to a wearable receiver with a hands-free or head-mounted monitor.<br /><br />Despite the fact that the system has been designed to be worn by a dog, I can't help but think that the folks at Wood &amp; Douglas might well have created a product here that is far from being a dog.<br /><br />…man with no tan<br /><br />Not wanting to selfishly indulge in all of the great travel and business opportunities that come with the job of being the Editor of <i>Vision Systems Design</i> magazine, I have decided to dispatch our <a href="">Senior Editor Dave Wilson</a> off to one of the most prestigious of all of the events in the <a href="">Vision Systems Design calendar</a>.<br /><br />That's right.
This week, he will be attending the 20th Annual Automated Imaging Association (<a href="">AIA</a>) Business Conference, which is being held at the Orlando World Center Marriott from January 18-20.<br /><br />While it all might sound like a bit of a holiday for our Senior Editor -- who is more accustomed to living under the slate gray Victorian skies of rainy England -- I can assure you that it's not.<br /><br />Indeed, to ensure that he does not spend his days lounging by one swimming pool or another, or even visiting resorts populated with mice featuring large ears, I have specifically asked him to cover a number of key industry events while he is in Florida.<br /><br />On Wednesday, for example, he'll be meeting up with the good folks at <a href="">PPT</a> for an entire day's session to learn a lot more about how their company's third-party integrators have developed numerous vision-based solutions for the industrial marketplace.<br /><br />After that, it's a work-packed two days at the AIA Business Conference itself, where Dave will be sitting through several important presentations including one entitled "Outlook for the Global Economy" by Alan Beaulieu, from the Institute for Trend Research.<br /><br />But that's not all he'll have to do. Oh no. There will be other sessions that he'll have to attend too, including one from George Chamberlain of Pleora Technologies on <a href="">AIA Standards</a> as well as an update on the <a href="">Machine Vision Market</a> in North America by Paul Kellett from ATC.<br /><br />Naturally enough, while he's there, Dave will be only too pleased to discuss any new, exciting applications that you or your company might have developed over the past year or so -- great fodder we might be able to develop as stories for upcoming issues of <i>Vision Systems Design</i> magazine.<br /><br />While it sounds like an arduous schedule, it should be easy going for an old pro like Dave. 
But just to make sure that it isn't too easy for him, I've even shipped down an old notebook PC of mine to the hotel where he's staying to ensure that -- should he actually have any free time -- he can spend it working, rather than goofing off.<br /><br />If Dave looks tired after the event, I'll know he has done his job properly. If he has a tan, however, <a href="">send me an e-mail</a> to let me know.<br /><br />What's next for Kinect?<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="111" src="" width="200" /></a></div>This past Monday, Microsoft's Craig Eisler formally announced that new <a href="">Kinect for Windows</a> hardware and accompanying software would be available from February 1 this year in 12 countries including the US at a suggested retail price of $249.<br /><br />Microsoft has chosen a hardware-only business model for Kinect for Windows, which means that the company will not be charging for the software development kit (SDK) or the runtime system. These will be available free to developers and end users, respectively. Independent developers will not pay license fees for the Kinect for Windows software or the ongoing software updates, and new Kinect for Windows hardware will be supported by Microsoft.<br /><br />Of particular interest to developers will be new firmware that enables the depth camera to see objects as close as 50 cm in front of the device without losing accuracy or precision, with graceful degradation down to 40 cm.<br /><br />The $249 price tag includes a one-year warranty and access to ongoing software updates for both speech and human tracking.
Later this year, the company will offer special academic pricing (planned at $149) for qualified educational users.<br /><br />Addressing the reason why the pricing of the Kinect for Windows system was higher than the Kinect for Xbox system, Eisler said that Microsoft's ability to sell Kinect for Xbox 360 at its current price point is in large part subsidized by consumers buying a number of Kinect games, subscribing to Xbox Live, and making other transactions associated with the Xbox 360.<br /><br />In addition, he said that the Kinect for Xbox 360 was built for and tested with the Xbox 360 console only, which is why it was not licensed for general commercial use, supported, or under warranty when used on any other platform.<br /><br />The news will undoubtedly be greeted with some interest by system developers who may now consider using the Kinect system in a variety of <a href="">manufacturing and retail applications</a>.<br /><br />Those who might still be somewhat skeptical should note that -- during a keynote speech at the <a href="" target="_blank">CES show</a> (Las Vegas, Nevada) -- Microsoft's Steve Ballmer announced that Siemens, Citi, Boeing, American Express, Unilever, United Health Group, Mattel, and Toyota are just some of the companies that Microsoft is already working with to develop Kinect-based systems.<br /><br />Clearly, Microsoft has great hopes that the platform will be successful <a href="">outside the gaming arena</a>. And if you have an interesting idea of how you could use the Kinect system, then you might like to consider taking part in a Microsoft initiative called the Kinect Accelerator incubation project, which is run by <a href="" target="_blank">Microsoft BizSpark</a>.<br /><br />The project will give ten successful companies an investment of $20,000 each to develop a system around the Kinect on either Windows or Xbox 360.
At the end of the program, each company will have an opportunity to present at an Investor Demo Day to angel investors, venture capitalists, Microsoft executives, and media and industry professionals.<br /><br />Applications are being accepted through Jan. 25, 2012, so there's still time to make a pitch.<br /><br />Simulator aids design for the visually impaired<br /><br />One of the problems with getting older is that one's ability to see fine details deteriorates, as does the ability to see in the dark. And that's not good if you drive a motor vehicle, because it means that -- unless you have <a href="">bifocal or varifocal glasses</a> -- you could struggle to read the instrument cluster while driving.<br /><br />Recognizing that fact, researchers at <a href="" target="_blank">Cambridge University's</a> (Cambridge, UK) Engineering Design Centre have developed a Vision Impairment Simulator that enables designers and engineers to gain a better understanding of the effects of a wide variety of visual impairments.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="192" src="" width="320" /></a></div>The tool -- developed by Cambridge design research associate Dr. Sam Waller -- allows a user to simulate visual impairments on any image.
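While no details of Dr. Waller's actual implementation are given here, the general idea of such an impairment filter -- degrade the image overall, then let the user drag a "blind spot" around -- can be sketched in a few lines of Python. All function and parameter names below are invented for illustration:

```python
import numpy as np

def simulate_impairment(img, blur_radius=2, spot_center=None, spot_radius=10):
    """Crudely simulate visual impairment on a grayscale image (2-D array).

    blur_radius -- box-blur half-width, standing in for reduced acuity
    spot_center -- (row, col) of a movable dark "blind spot", loosely
                   mimicking the macular-degeneration mode described above
    """
    out = img.astype(float)
    # Box blur: average each pixel with its neighbours (reduced acuity).
    if blur_radius > 0:
        k = 2 * blur_radius + 1
        padded = np.pad(out, blur_radius, mode="edge")
        acc = np.zeros_like(out)
        for dr in range(k):
            for dc in range(k):
                acc += padded[dr:dr + out.shape[0], dc:dc + out.shape[1]]
        out = acc / (k * k)
    # Movable blind spot: darken a disc wherever the user drags it.
    if spot_center is not None:
        rr, cc = np.ogrid[:out.shape[0], :out.shape[1]]
        mask = (rr - spot_center[0]) ** 2 + (cc - spot_center[1]) ** 2 <= spot_radius ** 2
        out[mask] = 0
    return out.astype(img.dtype)

img = np.full((64, 64), 200, dtype=np.uint8)
impaired = simulate_impairment(img, blur_radius=1, spot_center=(32, 32), spot_radius=5)
```

A real simulator would of course model each clinical condition far more faithfully; the point is simply that every impairment becomes a repeatable image transform that a designer can apply to a photograph of, say, an instrument cluster.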
After an image is loaded into the simulator, an operator can select a visual impairment and look at the image as someone with that impairment would see it.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="193" src="" width="320" /></a></div><br />Even in the case of <a href="">age-related macular degeneration</a>, where the loss of central vision moves around with the eye, the software simulates the effect by allowing a user to move the "blind spot" around to see its effect on different parts of the image.<br /><br />The software has already proved a hit with engineers at auto giant Ford (Brentwood, UK), who are using it to study and optimize the design of their instrument displays to ensure they can be safely and comfortably read by as many drivers as possible.<br /><br />Interestingly enough, the software isn't limited to applications in the automotive field. It has also been used to improve the design of mobile phones and for teaching what&rsquo;s known as inclusive design at several universities.<br /><br />Analyzing population statistics and creating design tools that enable designers to engineer products that offer a better experience across a wider range of users is clearly an important issue, and one that will become more important as a greater number of people live to older age.<br /><br />So now perhaps it's time for more companies -- even those in the vision industry -- to make a New Year's resolution to determine how their systems might be made more accessible to a wider group of individuals as well.<br /><br />Us and them<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="206" src="" width="320" /></a></div>Last month, an <a href="" target="_blank">RQ-170 Sentinel UAV</a> nicknamed the
"Beast of Kandahar" fell into the hands of the Iranians after the United States Department of Defense lost control of it while it was flying through Iranian airspace.<br /><br />Needless to say, the high-tech piece of Lockheed Martin gear was immediately put on display by the Iranians, who claimed to have brought the <a href="">unmanned reconnaissance vehicle</a> down to earth by sophisticated electronic counter-warfare measures.<br /><br />Whether they did, or whether the landing was simply due to a malfunction of a system onboard the aircraft itself, the whole affair proved very embarrassing for the US Government, which formally requested that the aircraft be returned to its rightful owners.<br /><br />The Iranians, however, didn't see things quite the same way. Instead they issued a formal complaint to the United Nations Security Council stating that the incident was tantamount to an act of hostility against their country in contravention of international law.<br /><br />The whole affair raises an important issue about the deployment of such unmanned aircraft -- notably, that there do not appear to be any hard and fast rules that govern when such <a href="">UAVs</a> can be flown over a country given the fact that the government of that country has not granted permission for such operations to take place.<br /><br />To rectify this dilemma, perhaps it's now time that an international body drew up a set of guidelines for what is -- and is not -- deemed to be the acceptable use of such systems and for what purposes.<br /><br />Such an idealistic notion, however, is unlikely to find much favor at the present time, especially with countries that feel that they have the right to fly such aircraft over whatever country's airspace they like in the interest of their own national security.<br /><br />But such guidelines won't seem so idealistic in the future, I'm sure, when countries such as Iran reverse-engineer the downed unmanned aerial technology and then feel that 
they have equal rights to perform reciprocal measures on the countries that have been snooping on them for years. That's if they have the know-how to do so.<br /><br />Image processing is all in the mind?<br /><br />If you are anything like me, you probably gave out one or two <a href="">video games</a> as presents to some of your younger relatives over the holiday season. If you did, however, you ought to be aware of the danger involved, and the potential repercussions of your actions.<br /><br />Apparently, according to research carried out by academics in the UK and Sweden, some video game players are becoming so immersed in their <a href="">virtual gaming environments</a> that -- when they stop playing -- they transfer some of their virtual experiences to the real world.<br /><br />That's right. Researchers led by Angelica Ortiz de Gortari and Professor Mark Griffiths from Nottingham Trent University's International Gaming Research Unit, and Professor Karin Aronsson from Stockholm University, have revealed that some gamers experience what they call "Game Transfer Phenomena" (GTP), which results in them doing things in the real world as if they were still in the game!<br /><br />Extreme examples of GTP have included gamers thinking in the same way as when they were gaming, such as reaching for a search button when looking for someone in a crowd and seeing energy boxes appear above people's heads.<br /><br />Aside from the game players, though, I wonder if this research might also have some implications for software developers working in the vision systems business, many of whom also work long hours staring at computer screens, often taking their work home with them.<br /><br />How many of these individuals, I wonder, also imagine that they are performing image-processing tasks when going about their daily routine?
Have you, for example, ever believed that you were performing a <a href="">hyperspectral analysis</a> when considering whether or not to purchase apples in the supermarket, optical character recognition to check the sell-by date on the fruit, or even a <a href="">histogram equalization</a> on the face of the attractive young lady at the checkout line?<br /><br />While Professor Mark Griffiths, director of the International Gaming Research Unit at Nottingham Trent University, said he has found that intensive gaming may lead to negative psychological, emotional, or behavioral consequences, the same might hold true for those of us who spend too much time at work developing image-processing software.<br /><br />Thank goodness, then, that we will soon be able to look forward to a few more days' respite from our toils to celebrate the New Year.<br /><br />Happy New Year!<br /><br />Christmas time is here again<br /><br />It's that time of year again. That time when many of us will be erecting a fir in the corner, decking the halls with boughs of holly, and sitting back to enjoy a glass of mulled wine as we roast chestnuts over an open fire.<br /><br />That's right. It's Christmas, the festive season in which we put our work to one side for a while to enjoy a few well-deserved days off to spend with our friends and family.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="172" src="" width="320" /></a></div><br /><br />But before the festivities can begin, there are numerous chores that must be performed. And one of these, of course, is to send Christmas greetings to all our friends and colleagues.<br /><br />Traditionally, such messages of comfort and joy have been sent via the postal service.
After purchasing a box of Christmas cards, many of us spend hours writing individual messages inside them, after which the cards are duly inserted into envelopes, addressed, and taken down to the Post Office where they are mailed.<br /><br />In these days of automation, however, some of us no longer leave the comfort of our armchairs to perform the task, preferring to use e-mail greeting cards instead. While such e-mail messages may never have quite the same personal appeal as a real piece of card with a Christmas scene printed upon it, they certainly are a cost-effective alternative to sending out the real thing.<br /><br />With such electronic wizardry automating our traditional time-intensive Christmas labors, it's interesting to consider by what means we might be delivering our Christmas greetings to our friends and colleagues in the future.<br /><br />Well, I think the folks at <a href="">Edmund Optics</a> might have found the answer. To distribute a Christmas message to their audience in the vision systems design industry, the innovative Edmund Optics team has produced a rather amusing video that they have uploaded onto YouTube where it can be viewed by all and sundry.<br /><br />But this isn't just any video greeting. Oh no. The entertaining video features a number of Edmund Optics' employees playing a familiar Christmas tune on the company's own range of <a href="">telecentric lenses</a>. That's right. Watch carefully and you will see the so-called "<a href=";list=UUA3_rChA1L_NGk6gNqi25Q&amp;index=1&amp;feature=plcp" target="_blank">Telecentric Bell Choir</a>" [click for YouTube video] ringing the lenses to play that Christmas favorite "Carol of the Bells."<br /><br />From my perspective, this form of sending holiday greetings to friends and family is clearly the wave of the future.
What's more -- for Edmund Optics at least -- it might be a way to generate a whole new market for its acoustically-enabled telecentric product line.<br /><br />Happy holidays!<br /><br />Recycling light bulbs with vision<br /><br />The United Nations is urging countries across the globe to phase out old-style incandescent light bulbs and switch to <a href="">low-energy compact fluorescent light</a> (CFL) bulbs to save billions of dollars in energy costs as well as help combat climate change.<br /><br />One issue with such bulbs, however, is that they contain minute traces of mercury, and hence they should be recycled to prevent the release of mercury into the environment rather than simply tossed into a dumpster.<br /><br />This, of course, has created an enormous opportunity to automate the collection of old CFL bulbs -- an opportunity that one machine maker in the UK has clearly identified.<br /><br />That's right. Partly thanks to the stealthy deployment of a machine vision system, London, UK-based <a href="" target="_blank">Revend Recycling</a> has now developed a machine to collect light bulbs in exchange for discount vouchers or other consumer rewards.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br /><br />When using the so-called "Light Bulb Recycling <a href="">Reverse Vending</a>" machine, an individual is guided through the recycling process by a touch-screen menu.
After the unwanted bulbs are placed into the machine, they are identified by the vision system, after which the machine softly drops the bulbs into a storage container.<br /><br />The machine then automatically dispenses a reward incentive voucher, which can be chosen from a large selection of different rewards on the touch-screen.<br /><br />To enable recovery and recycling statistics to be collated, the recycling data captured from every light bulb received are transmitted to a secure central database. An <a href="">embedded computer system</a> in the machine also monitors the light bulb storage container and automatically sends a text or e-mail when it nears full capacity, so that it can be emptied.<br /><br />So far, the vision-based recycling machine has proved to be a bit of a hit. The Scandinavian modern-style furniture and accessories store Ikea, for example, recently inked an agreement with Revend Recycling, and soon the store's customers in the UK, Germany, and Denmark will have the option to recycle used light bulbs with the machines.
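The fill-level notification described above boils down to a threshold check plus a messaging hook. A hypothetical sketch -- the capacity figure, threshold, and `notify` callback are all invented, not Revend's design:

```python
def check_capacity(bulb_count, capacity=500, alert_fraction=0.9, notify=print):
    """Fire a notification once the storage container nears full capacity."""
    if bulb_count >= alert_fraction * capacity:
        # In the real machine this hook would send a text or e-mail
        # so that an operator can come and empty the container.
        notify(f"Container at {bulb_count}/{capacity} bulbs -- please empty")
        return True
    return False

check_capacity(120)   # well below the threshold: no alert
check_capacity(460)   # above 90% of 500: alert sent
```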
As an added feature, the recycling machines can also be purchased with an add-on system for collecting domestic batteries.<br /><br />New standard aims to accelerate image processing<br /><br />It goes without saying that computer vision has become an essential ingredient of many modern systems, where it has been used for numerous purposes including <a href="">gesture tracking</a>, <a href="">smart video surveillance</a>, automatic driver assistance, visual inspection, and robotics.<br /><br />Many modern consumer computer-based devices -- from smart phones to desktop computers -- are capable of running vision applications, but to do so, they often require hardware-accelerated vision algorithms to enable them to work in real time.<br /><br />Consequently, many hardware vendors have developed accelerated computer vision libraries for their products: CoreImage by Apple, IPP by Intel, NPP by Nvidia, IMGLIB and VLIB by TI, and the recently announced <a href="">FastCV</a> by Qualcomm.<br /><br />As each of these companies develops its own API, however, the market fragments, creating a need for an open standard that will simplify the development of efficient cross-platform computer vision applications.<br /><br />Now, <a href="" target="_blank">Khronos'</a> (Beaverton, OR, USA) vision working group aims to do just that, by developing an open, royalty-free cross-platform API standard that will be able to accelerate high-level libraries, such as the popular OpenCV open-source vision library, or be used by applications directly.<br /><br />The folks at the Khronos Group say that any interested company is welcome to join the group to make contributions, influence the direction of the specification, and gain early access to draft specifications before a first public release within 12 months.<br /><br />The vision working group will commence work during January 2012.
More details on joining Khronos can be found at <a href="" target="_blank"></a> or by e-mailing <a href=""></a>.<br /><br />Machine vision is just a game<br /><br />In the late 1970s, game maker Atari launched what was to become one of the most popular video games of the era -- Asteroids.<br /><br />Those of our readers old enough to remember might recall how thrilling it was back then to navigate a spaceship through an asteroid field which was periodically traversed by flying saucers, shooting and destroying both while being careful not to collide with either.<br /><br />Today, of course, with the advent of new home consoles such as the <a href="">Sony Playstation</a>, <a href="">Microsoft Xbox</a>, and <a href="">Nintendo Wii</a> -- and the availability of a plethora of more graphically pleasing and complex games -- one might be forgiven for thinking that games like Asteroids are ancient history.<br /><br />Well, apparently not. Because thanks in part to some rather innovative Swedish image-processing technology, it looks as if old games might be about to make a comeback.<br /><br />That&rsquo;s right. This month, eye tracking and control systems developer <a href="" target="_blank">Tobii Technology</a> (Danderyd, Sweden) took the wraps off &ldquo;EyeAsteroids,&rdquo; a game it claimed was the world&rsquo;s first arcade game totally run by eye control.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="165" /></a></div><br /><br /><br />In the company&rsquo;s EyeAsteroids game, players have the chance to save the world (yet again) from an impending asteroid collision.
As a slew of asteroids move closer to Earth, the gamer looks at them in order to fire a laser that destroys the rocks and saves the world from destruction.<br /><br />Henrik Eskilsson, the chief executive officer of Tobii Technology, believes the addition of eye control to computer games is the most significant development in the gaming industry since the introduction of motion control systems such as the Nintendo Wii. And if he&rsquo;s right, that&rsquo;s a big market opportunity for all sorts of folks that are associated with the vision systems business.<br /><br />But perhaps more importantly, the game might make interested parties take a look at another of the company&rsquo;s product offerings: an image recognition system that can find and track the eyes of drivers in order to inform an automotive safety system of the driver&rsquo;s state, regardless of changes in environmental conditions.<br /><br />Because while saving Earth from asteroids using the eyes might be good fun and games, saving lives on the highway through tracking the eyes of motorists is a much more distinguished achievement, and one that -- in the long run -- might also prove to be a more lucrative business opportunity.<br /><br />Nevertheless, those folks more captivated by the former application of the technology will be only too pleased to know that the EyeAsteroids game is available for purchase by companies and individuals. 
Tobii Technology plans a limited production run of 50 units that will be available for $15,000 each.<br /><br />It's a bug's life (revisited)<br /><br />Remotely operated <a href="">unmanned aerial vehicles</a> (UAVs) equipped with wireless video and still cameras can be used in a variety of applications, from assisting the military and law enforcement agencies in surveillance duties, to inspecting large remote structures such as pipelines and electric transmission lines.<br /><br />Typically, however, such vehicles are quite sizable, bulky beasts due to the fact that they must carry a power source as well as large cameras. And that can limit the sorts of applications that they can effectively handle.<br /><br />Now, however, it would appear that researchers at the University of Michigan College of Engineering have come up with an interesting idea that might one day see vehicles as small as insects carrying out such duties in confined spaces.<br /><br />And that&rsquo;s because the vehicles that they are proposing to "build" are, in fact, real insects that could be fitted out with the necessary technology to turn them into <a href="">mobile reconnaissance devices</a>.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="256" src="" width="320" /></a></div><br /><br />The work is at an early stage of development at the moment of course. To date, professor Khalil Najafi, the chair of electrical and computer engineering, and doctoral student Erkan Aktakka are figuring out ways to "harvest" energy from either the body heat or <a href="">movement of the insects</a> as a means to power the cameras, microphones, and other sensors and communications equipment that they might carry.<br /><br />As interesting as it all sounds, there are obviously bigger engineering challenges ahead than just conquering the energy harvesting issue.
One obvious problem is how the researchers will eventually control the insects once they have been fitted out with their energy harvesting devices and appropriate vision systems.<br /><br />Then again, they may not need to. If a plague of such insects were dropped in Biblical proportions upon a rogue state for clandestine monitoring purposes by our armed forces, the chances that one of them would reveal some useful information would be pretty high.<br /><br />The religious and political consequences of letting loose high-tech pestilent biotechnology on such countries, however, might be so profound that the little fliers never get off the ground.<br /><br /><i>Editor's note:</i> The research work at the university was funded by the Hybrid Insect Micro Electromechanical Systems program of the Defense Advanced Research Projects Agency.<br /><br />I can see clearly now<br /><br />While <a href="">I've always been short-sighted</a>, until not long ago it was always pretty easy for me to read books or magazines while wearing the same set of glasses that helped me see at great distances.<br /><br />But over the past couple of years, it became apparent that I not only needed glasses to correct for myopia but also to assist with looking at things closer to hand.<br /><br />To solve my dual myopic-hyperopic headache, I turned to my local optician who suggested that a pair of bifocals or varifocal lenses might do the trick. And it did. Thanks to her recommendation, I now sport a rather expensive pair of glasses with varifocals that enable me to focus my eyes onto both objects far and near.<br /><br />As great as these varifocals are, however, I accept that they aren't for everyone.
In fact, some people dislike them as they find it difficult to get used to which areas of the lens they have to look through!<br /><br />One optics company -- Roanoke, Virginia-based PixelOptics -- has come up with a unique solution to the problem: an electronic set of glasses called emPower that has a layer of <a href="">liquid crystals</a> in each lens that can instantly create a near-focus zone, either when the user touches a small panel on the side of the frames or in response to up and down movements of the head.<br /><br />Under development for 12 years, the new system, which is protected by nearly 300 patents and patent applications pending around the world, looks to be yet another interesting option for those folks with optical issues like mine.<br /><br />I'd like to think that there might be some use for this technology in the industrial marketplace, too, but I haven't quite figured out where that might be yet.<br /><br />I can't, for example, envisage any system integrator actually manually swiping cameras fitted with such lenses to change their focal length while they might be inspecting parts at high speed in an industrial setting. Nor could I imagine that many engineers would build a system to move such a camera up and down to do the same -- an <a href="">autofocus system</a> would surely be a lot more effective!<br /><br />Nevertheless, I'm keeping an open mind about the whole affair, because the imaging business is replete with individuals that can take ideas from one marketplace and put them to use in others.<br /><br />In your eye<br /><br />While industrial vision systems might seem pretty sophisticated beasts, none has really come close to matching the astonishing characteristics of the <a href="">human eye</a>.<br /><br />Despite that fact, even the human eye is often less than perfect, as those suffering from short or long-sightedness will testify.
Those folks inevitably end up seeking to correct such problems either by wearing <a href="">spectacles or contact lenses</a>.<br /><br />Not content with developing a contact lens to correct vision anomalies, however, a team of engineers at the University of Washington and Aalto University, Finland, have now developed a prototype contact lens that takes the concept of streaming real-time information into the eye a step closer to reality.<br /><br />The contact lens itself has a built-in antenna to harvest power sent out by an external source, as well as an IC to store the energy and transfer it to a single blue LED that shines light into the eye.<br /><br />One major problem the researchers had to overcome was the fact that the human eye -- with its minimum focal distance of several centimeters -- cannot resolve objects on a contact lens. Any information projected onto the lens would probably appear blurry. To resolve the issue, they incorporated a set of <a href="">Fresnel lenses</a> into the device to focus light from the LED onto the retina.<br /><br />After demonstrating the operation and safety of the contact lens on a rabbit, they found that significant improvements are now necessary before fully functional, remotely powered displays actually become a reality. While the device, for example, could be wirelessly powered in free space from approximately 1 m away, this was reduced to about 2 cm when placed on the rabbit's eye.<br /><br />Another issue facing the researchers is to create a future version of the contact lens with a display that can produce more than one color at a higher resolution.
While the existing prototype lens has only a single controllable pixel, they believe that in the future such devices might incorporate hundreds of pixels that would allow users to read e-mails or text messages.<br /><br />I don&rsquo;t know about you, but I can&rsquo;t wait for the time in which I might be able to have such e-mails, text messages, or, heaven forbid, Twitter feeds, projected directly into my eye, especially when I am on vacation. No, I wouldn&rsquo;t even put my pet rabbit through such an ordeal.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="176" src="" width="320" /></a></div><br /><br />Shown: Conceptual model of a multipixel contact lens display. 1: LED. 2: Power harvesting circuitry. 3: Antenna. 4: Interconnects. 5: Transparent polymer. 6: Virtual image.<br /><br />Imaging technologies compete<br /><br />Over the past few years, advances in imaging technology have led to the development of some astonishing products in the medical field. Perhaps none has proved more useful at diagnosing brain activity than functional magnetic resonance imaging or <a href="">fMRI</a>.<br /><br />But the fMRI technology does have its drawbacks. While it has a good spatial resolution of a few millimeters, it suffers from a poor temporal resolution of a few seconds.<br /><br />In contrast, electroencephalography (EEG) -- a complementary technique that records the electrical signals from the coordinated activity of large numbers of nerve cells through electrodes attached to the scalp -- has the opposite problem.<br /><br />While it has the advantage of being able to detect rapid changes in neural activity with millisecond temporal resolution, it suffers from a poor ability to pinpoint the location of <a href="">brain activity</a>.
In other words, it has poor spatial resolution.<br /><br />Hence the usefulness of EEG is limited, not just because its spatial resolution is comparatively poor, but also because the many signals from the brain are mixed together, which can make the technique insensitive. It does, however, have the advantage of being portable and comparatively cheap, and therefore is appropriate for a clinical setting, unlike an <a href="">MRI scanner</a> that is large and comparatively expensive.<br /><br />Fortunately, in research labs at Cardiff University Brain Research Imaging Centre (CUBRIC), it is now possible to perform EEG and fMRI simultaneously, and this fact may lead to the birth of a new diagnostic system thanks to the marriage of the two technologies.<br /><br />That's right. At Cardiff University, a team led by professor Richard Wise proposes to improve the spatial resolution of EEG by using EEG and fMRI measurements acquired simultaneously on healthy volunteers to discover correlations between the EEG and fMRI data from which they will produce a statistical model.<br /><br />Subtle features of the EEG signal, which are not normally easily identified but which are associated with the spatial location of the source of neural activity, will be highlighted by their association with the fMRI data, which is good at pinpointing locations in space.<br /><br />Having established the relationship between the EEG and fMRI data in mathematical terms, EEG data alone will then be used to simulate fMRI scans.
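Purely as a caricature of that statistical modeling step -- the Cardiff team's actual model is not described here, and everything below is invented for illustration -- one can imagine fitting a linear map from simultaneously recorded EEG features to a voxel's fMRI response, then applying that map to EEG data alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 200 simultaneous samples of 8 EEG-derived features,
# plus the fMRI response of a single voxel (sizes are arbitrary).
eeg = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
fmri_voxel = eeg @ true_w + 0.01 * rng.normal(size=200)

# Fit the EEG -> fMRI mapping on the simultaneously acquired data...
w, *_ = np.linalg.lstsq(eeg, fmri_voxel, rcond=None)

# ...then use newly recorded EEG alone to "simulate" the fMRI response.
new_eeg = rng.normal(size=(50, 8))
simulated_fmri = new_eeg @ w
```

A genuine model would be far richer than ordinary least squares, but the workflow is the same: learn the relationship on paired recordings, then predict the spatially precise signal from the cheap, portable one.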
These simulated fMRI scans might then one day be used by clinicians as a new means to diagnose brain activity -- minimizing the requirement for an fMRI scan to be carried out on a patient.<br /><br />It's an interesting idea, for sure, and one that holds the possibility of seeing an advanced medical imaging technology partly doing itself out of a job!<br /><br />3-D dental scanner wins VISION 2011 prize<br /><br />Anyone who has been to the dentist can testify to the fact that undergoing root canal and crown therapy or being measured up for dentures isn't the most pleasant of experiences.<br /><br />So I was particularly pleased to see that a dental scanner that promises to take the misery out of such a process has won this year's EUR 5000 top prize at the <a href="">VISION 2011 trade fair</a> in Stuttgart.<br /><br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br /><br />Today, creating a model of the mouth is a fairly primitive procedure. Teeth are cast using an impression compound that is placed in the mouth of a patient and left to set. A resulting plaster model of the teeth is then prepared from the impression, after which the model is digitized using a stationary scanner. In a final step, dentures can be produced from the model with the aid of a CAD/CAM system.<br /><br />But all of that is set to change thanks to the new system developed by the prize-winning engineers at the <a href="">Austrian Institute of Technology</a> (AIT), which will obviate the need for dentists to make dental impressions of the mouth, making the entire process less unpleasant and time-consuming.<br /><br />The AIT system itself is based on a small <a href="">3-D scanner</a> that is placed inside the mouth. The scanner illuminates the mouth with light, after which two cameras capture images in real time.
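For background, depth recovery in any two-camera arrangement of this sort ultimately rests on stereo triangulation. Here is the textbook relation in miniature -- the focal length and baseline below are invented numbers for illustration, not AIT's:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Pinhole-stereo relation: depth Z = f * B / d for a matched feature.

    focal_px     -- camera focal length, in pixels
    baseline_mm  -- separation between the two cameras
    disparity_px -- horizontal shift of the feature between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# A feature that shifts 40 px between views, seen with an 800 px focal
# length and a 10 mm baseline, lies 200 mm from the cameras.
depth_mm = depth_from_disparity(800, 10.0, 40.0)
```

The engineering difficulty in an intraoral scanner is not this formula but finding reliable feature matches on wet, shiny teeth at high speed; hence the need for careful illumination and real-time processing.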
A data file -- which previously had to be produced through the numerous stages described earlier -- is then created and transmitted to a PC over a USB port, where the 3-D model can be visualized.<br /><br />According to the AIT researchers, a complete jaw arch can be measured in 3 to 5 minutes, and the accuracy of the completed model is to within 20 microns.<br /><br />The stereo method for measuring the location of the teeth and the design of the scanner have been patented jointly by Klagenfurt am Wörthersee-based startup a.tron3d and AIT.
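AIT hasn't disclosed the details of its patented stereo method, but the principle behind any two-camera depth measurement is triangulation: a point's depth follows from its disparity between the two images as z = f·B/d. A minimal sketch, with illustrative numbers that are not the scanner's real optics:

```python
def stereo_depth(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Depth (mm) of a point seen by two parallel cameras.

    focal_px     -- focal length expressed in pixels
    baseline_mm  -- distance between the two camera centres
    disparity_px -- horizontal shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("point must be in front of both cameras")
    return focal_px * baseline_mm / disparity_px

# Illustrative numbers only: a 1000-pixel focal length, a 15-mm baseline,
# and a 300-pixel disparity place the tooth surface 50 mm from the cameras.
print(stereo_depth(1000.0, 15.0, 300.0))  # 50.0
```

Note the lever built into the formula: at a fixed working distance, a longer baseline or finer pixel pitch yields larger disparities and hence finer depth resolution, which is how micron-level accuracy becomes plausible at such close range.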
But those outside the dental industry can license the stereo software on an individual basis -- as PC software, as a program library for Windows and Linux, or as firmware for embedded devices such as smart cameras.<br /><br />For its part, a.tron3d -- which holds the exclusive rights for the dental industry -- plans to release the scanner, called the Bluescan-I, by March 2012.<br /><br />Sadly, that won't be of much use to me, since I have already had much dental treatment using the older, more primitive measurement method. But the good news is that it will certainly help new patients, who will no longer have to experience almost choking when their mouths are full of that rather horrid-tasting impression compound, cards and 3-D imagingnoemail@noemail.orgAndy WilsonTraveling to Europe can be an exhilarating experience. The chance to make contact with the Old World and its customs can be both delightful and enchanting. But it can also be frustrating, especially for visitors from the United States.<br /><br />My visit to <a href="">VISION 2011</a> in Stuttgart was no exception. Stopping off to catch up with my brother in the UK after the show, I discovered that many of the (petrol) gas stations in the country were unable to accept my credit cards at the pump because the cards had not been equipped with a so-called "chip and PIN."<br /><br />That's right.
In the UK, at least, it's common for credit and debit cards to come equipped with an embedded microprocessor that is interrogated by any number of automated terminals to provide goods and services once the user has entered a Personal Identification Number (PIN) that is uniquely associated with the card.<br /><br />As frustrated as I was by the inability of the gas pumps to accept my chip-less card, my brother Dave saw the beasts as just a small step toward a completely automated future -- one in which vision systems could play an important role.<br /><br />You see, having spent the past three days trawling around the VISION 2011 show, he had come across many companies that were developing <a href="">3-D vision systems</a>. And while some of these were to be used in rather specific bin-picking applications or in capturing <a href="">images of traffic</a> on German highways, others could be used to capture images of the human body.<br /><br />Capturing such images, Dave said, could create an enormous market far bigger than the field of machine vision -- especially if such 3-D images of the body could then be made small enough to be stored in the memory of a credit-card-sized device.<br /><br />Imagine, he implied, if a complete image of an individual's body were to be encapsulated in such a way. Gone would be the need to wander around a store to search for an item of clothing that fits. Upon entering the store, a computer system would simply interrogate a user's card to identify an individual by his size and highlight where appropriate clothes could be found.<br /><br />Medical professionals could benefit too.
Upon entering a doctor's surgery, the current image of an individual's body could be immediately compared to a past image contained on the individual's credit card, providing doctors with an instant indication of any dramatic changes in body size that might indicate medical problems.<br /><br />Dave believes that there's enormous potential for such technology. But as much as he believes that such devices might make our lives so much easier in the future, I only wish I had one of those existing European chip and PIN cards today so that I might have been able to top up the tank at the gas, booming business?noemail@noemail.orgAndy WilsonIf you watch the daily news on television, you might be forgiven for thinking that Europe is in a complete financial and economic mess. But you wouldn't think so if you attended last week's <a href="">VISION Show 2011</a>. For there, a record number of companies and attendees filled the halls of the Messe Stuttgart, proving that despite the problems that might face the folks in the Eurozone, the vision industry still appears to be booming.<br /><br />That's right. There's no doubt about it. This year's VISION 2011 show in Stuttgart, Germany was an unparalleled success. More than 350 exhibitors attended the show, an increase of 8.4% on the number that were there last year. And from the number of interested parties that were walking the aisles of the show, I'd say that interest in the industry is as high as it has ever been.<br /><br />But what is really going on in the market? Is it booming or stagnant?
To find out, I attended the annual networking reception held in the halls of the show, where Gregory Hollows from the <a href="">Automated Imaging Association</a> (AIA, United States) was joined by Sung-ho Huh from the Korean Machine Vision Association (KMVA) and Isabel Yang from the China Machine Vision Union (CMVU) to present the state of the market in their various countries.<br /><br />Emerging from the crisis of 2009, which saw the vision systems market down 20%, Gregory Hollows -- the vice-chair of the AIA board of directors -- said that the vision systems market in the US had rallied, experiencing 4% growth this year. Not bad news on the US front, then.<br /><br />Korea's Sung-ho Huh had pretty much the same to say. According to him, the Korean market was in pretty good shape, too, having experienced growth of 5% this year. Of course, as one might expect, the picture from China was even rosier, with Isabel Yang telling us that the vision system market in China had experienced a growth rate of around 10%.<br /><br />After the reception, of course, came the analysis. A few folks that I spoke to were somewhat worried about prospects for the market next year. While they had a reasonable 2011, they weren't expecting things to stay as positive in 2012. Others were interested to know how they might enter the more lucrative <a href="">Asian marketplace</a>, which they saw as a prime opportunity worth exploiting. And there were a few, I must admit, that didn't believe that the Chinese economy was quite as rosy as it was painted, citing a number of enormous factory openings there that had been put on hold due to weak demand in the West.<br /><br />Interpreting market figures from any of the above organizations, no matter how carefully they are researched, might never provide a true indication of how vibrant the vision systems marketplace is. Perhaps the only true market indicator could be found by counting the number of companies and attendees at VISION 2011 itself.
If that's anything to go by, I'd say that the vision business is still in pretty good shape!, identitynoemail@noemail.orgAndy WilsonNext week marks the start of one of the biggest events in Vision Systems Design's calendar -- the VISION 2011 show in Stuttgart, Germany.<br /><br />Accompanying me to the show this year will be Susan Smith, our publisher; Judy Leger, our national sales manager; and the latest addition to our editorial team, <a href="">Dave Wilson</a>.<br /><br />As many of you may know, Dave joined the magazine just last month to increase our presence in the important European market. But what some of you may not know is that Dave is also my identical twin brother, a fact I thought I'd make perfectly clear before the show begins, in order to diminish the confusion that will inevitably arise from cases of mistaken identity on the show floor.<br /><br />You see, although numerous <a href="">vision system algorithms</a> have been developed over the years to differentiate between products of a similar nature, I'm sorry to say that most <a href="">human beings' visual systems</a> -- even those in the machine vision industry -- seem to be incapable of differentiating between the two of us, despite the fact that I clearly inherited all the brains and good looks.<br /><br />For that reason, I have programmed my brother's central processing unit to respond to the greeting "Hi, Andy" whenever he hears it, after which he will instigate a verbal subroutine, which will explain that he is simply a poor imitation of the real thing.<br /><br />However, if you do run into the man instead of me, you will find that he is equally willing to learn what new technologies are being discussed at the show.<br /><br />He would be especially delighted to discuss any applications of machine vision related to the use of smart cameras and hyperspectral imaging.
Please be sure to bend the man's ear if you see him!, $50,000 courtesy of the US Government!noemail@noemail.orgAndy WilsonToday's troops often confiscate remnants of destroyed documents from war zones, but reconstructing entire documents from them is a daunting task.<br /><br />To discover if they can unearth a more effective means to do just that, the folks at <a href="">DARPA</a> have come up with a challenge that they hope will encourage individuals to develop a more automated solution.<br /><br />That's right. The defense organization is hoping that by offering a whopping $50,000 in prize money, entrants to its so-called "Shredder Challenge" will generate some ideas that it might be able to make use of.<br /><br />The Challenge itself consists of solving five individual puzzles embedded in the content of documents that have all been shredded by different means. To participate in the challenge, participants must download the images of the shredded documents from the challenge web site, reconstruct the documents, solve the puzzles, and submit the correct answers before Dec. 4, 2011.<br /><br />Points will be awarded to those who provide the correct answers to the mandatory questions associated with each puzzle. $1,000 will be awarded for each point scored up to $50,000 for a perfect score. 
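Reconstructing the shreds is, at heart, a matching problem. One naive scoring strategy -- my own illustration, not anything DARPA suggests -- is to rate candidate neighbours by how well the pixel columns along their torn edges line up:

```python
import numpy as np

def edge_cost(strip_a: np.ndarray, strip_b: np.ndarray) -> float:
    """Sum of squared differences between strip_a's rightmost pixel
    column and strip_b's leftmost column; low cost = likely neighbours."""
    return float(np.sum((strip_a[:, -1] - strip_b[:, 0]) ** 2))

def best_right_neighbour(strip: np.ndarray, candidates: list) -> int:
    """Index of the candidate whose left edge best continues strip."""
    return int(np.argmin([edge_cost(strip, c) for c in candidates]))

# Toy "document": a smooth greyscale gradient cut into three vertical
# strips, standing in for scanned shreds.
doc = np.tile(np.linspace(0.0, 1.0, 12), (8, 1))
strips = [doc[:, 0:4], doc[:, 4:8], doc[:, 8:12]]

# Given the leftmost strip, the middle strip should be chosen as its
# right-hand neighbour even when the candidates are shuffled:
print(best_right_neighbour(strips[0], [strips[2], strips[1]]))  # 1
```

A real entry would need much more than this greedy pairwise matching -- cross-cut shreds, rotated pieces, and global consistency across thousands of fragments -- but edge-compatibility scores of this sort are a plausible starting point.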
DARPA will then award one cash prize of up to $50,000 to the participant who scored the highest total number of points by the deadline.<br /><br />Registration is open to all eligible parties at <a href=""></a>, which provides detailed rules and images of the shredded documents for the five problems.<br /><br />Clearly, this is an application that would benefit from the expert knowledge of those in the <a href="">image processing</a> field who might be able to develop -- or deploy -- a set of <a href="">vision-based algorithms</a> to reconstruct the documents and hence solve the puzzles.<br /><br />Interestingly enough, several individuals contributing to the discussion forums on the Shredder Challenge web site are taking exactly that approach, runs at 40,000 frames per secondnoemail@noemail.orgAndy WilsonA camera invented at the Langley Field Laboratory has captured images at an astonishing 40,000 frames/s, providing researchers with a great deal of insight concerning the phenomenon of knock in spark-ignition engines over a six-year period.<br /><br />The high-speed motion picture camera operates on a principle that its inventors call optical compensation. The photosensitive film used in the camera is kept continuously in motion and the photographic images are moved with the film such that each image remains stationary relative to the film during the time of its exposure.<br /><br />That's right. This isn't a digital camera at all, but a film camera. But perhaps even more remarkable is that it was invented in February 1936!
The first working version of the camera was constructed in the Norfolk Navy Yard during 1938, and the camera operated successfully the first time on December 16, 1938, at Langley Field.<br /><br />Now, <a href="">thanks to an article</a> written by Cearcy Miller, interested readers can not only discover exactly how the camera was designed but also view some high-speed motion pictures of familiar objects that illustrate the quality of the photographs taken by the camera at the time.<br /><br />If you thought that <a href="">high-speed imaging</a> was a relatively new idea, why not check out how the engineers solved the problem all those years ago!, dissertation to product releasenoemail@noemail.orgAndy WilsonIn 2006, Ren Ng's PhD research on <a href="">lightfield photography</a> won Stanford University's prize for best thesis in computer science as well as the internationally recognized ACM Dissertation award.<br /><br />Since leaving Stanford, Dr. Ng has been busy starting up his own company called <a href="">Lytro</a> (Mountain View, CA, USA) to commercialize a camera based on the principles of lightfield technology while making it practical enough for everyday use [see the Vision Insider blog entry "<a href="">Lightfield camera headed for the consumer market</a>"].<br /><br />This week saw the result of his efforts, as Lytro took the wraps off a set of three cameras that can all capture the color, intensity, and direction of all the light in a scene, enabling users to focus the images they take after the fact.<br /><br />The cameras themselves aren't all that different -- except for the paint job on the outside. The first two, in Electric Blue and Graphite, cost $399 and are capable of storing 350 pictures.
A Red Hot version -- at the somewhat higher price of $499 -- is capable of storing 750.<br /><br />With no unnecessary modes or dials, the cameras feature just two buttons (power and shutter) and have a glass touch-screen that allows pictures to be viewed and refocused directly before they are downloaded to a computer.<br /><br />To illustrate the capabilities of the new cameras, a number of Lytro employees and select testers have taken some snaps and uploaded the results to the company's so-called <a href="">Living Pictures Gallery</a>, where they can be viewed and refocused on the web.<br /><br />As savvy a marketing idea as that is, I can't say the same about the company's choice of computer platform for its free desktop application that imports pictures from camera to computer. Rather than produce software for the enormously popular Windows PC, the company chose to support Mac OS X in its initial release.<br /><br />Despite this minor upset, the company does have more exciting projects in the works. Next year, for example, it plans to launch a set of software tools that will allow the lightfield pictures to be viewed on any 3-D display and to enable viewers to shift the perspective of the pictures, to Coventrynoemail@noemail.orgAndy WilsonThis week, I dispatched our <a href="">European editor Dave Wilson</a> off to the Photonex trade show in Coventry in the UK to discover what novel machine-vision systems might be under development in Europe.<br /><br />Starting off early to beat the traffic jams on the motorway, he arrived at the Ricoh show grounds at the ungodly hour of eight in the morning.
But that gave him a good two hours to plan the day ahead before the doors of the show opened -- which is exactly what he did.<br /><br />Whilst he was browsing through the technical seminar program organized by the UK Industrial Vision Association (<a href="">UKIVA</a>) over a breakfast of Mexican food, one presentation in particular caught his eye.<br /><br />Entitled &ldquo;3D imaging in action,&rdquo; it promised to reveal how a Sony smart camera and a GigE camera could be used together to create a 3-D image-processing system that could analyze the characteristics of parts on a rotating table.<br /><br />The demonstration by Paul Wilson, managing director of Scorpion Vision (Lymington, UK; <a href=""></a>), would illustrate the very techniques that had been used by a system integrator who had developed a robotic vision system that could first identify -- and then manipulate -- car wheels of different sizes and heights.<br /><br />And indeed it did. During the short presentation, Wilson explained how the <a href="">Scorpion Vision software</a> developed by Tordivel (Oslo, Norway; <a href=""></a>) had been used to create the application, which first captured three-dimensional images of the parts and then made measurements on them. The entire application ran under an embedded version of Windows XP on the Sony smart camera.<br /><br />Interestingly enough, software such as Tordivel&rsquo;s allows applications such as this to be developed by a user with few, if any, programming skills. Instead, they are created through a graphical user interface from which a user chooses a number of different tools to perform whatever image-analysis tasks are required.<br /><br />The ease with which such software allows system integrators to build systems runs in stark contrast to other more traditional forms of programming, or even more contemporary ones that make use of graphical development environments.
Both of these require a greater level of software expertise and training than such non-programmed graphical user interfaces.<br /><br />Even so, the more sophisticated and easy to use the software is, the more expensive it is likely to be, a fact that was not lost on Scorpion Vision&rsquo;s managing director as he spoke to our man Dave at the show.<br /><br />Nevertheless, he also argued that higher initial software costs can often be quickly offset by the greater number of systems that can be developed by a user in any given period of time -- an equally important consideration to be taken into account when considering which package to use to develop your own 3-D vision system, Jobs, the OEM integrator, and menoemail@noemail.orgAndy WilsonBack in 1990, I decided to start my own publishing company. Transatlantic Publishing, as the outfit was called, was formed specifically to print a new magazine called <i>The OEM Integrator</i>, a journal that targeted folks building systems from <a href="">off-the-shelf hardware and software</a>.<br /><br />I hadn't given much thought to that publication for years, until last week that is, when my brother telephoned me to say that he had unearthed a copy of the premier issue of the publication, complete with the entire media pack that was produced to support it.<br /><br />Intrigued to discover what gems might have been written way back then, I asked him to email me a PDF of one or two of the stories that had appeared in that first issue.<br /><br />As you can imagine, I had to chuckle when I opened the email attachment. For there, in all its glory, was a roundup of new hardware and software products that had been announced for none other than the Apple NuBus, a 32-bit parallel computer bus incorporated into computer products for a very brief period of time by Steve Jobs' Apple Computer.
[<b>UPDATE: </b><a href="">Click here to read the article from the 90s!</a>]<br /><br />Despite my enthusiasm for the new bus board standard, NuBus didn't last too long, and when Apple switched to the <a href="">PCI bus</a> in the mid-1990s, NuBus quickly vanished.<br /><br />But the bus that my brother chose to write an even lengthier piece on had even less success in the marketplace. His article touted the benefits of the Futurebus -- a bus that many then believed would be the successor to the <a href="">VME bus</a>. Sadly, however, the effort to standardize this new bus took so long that everyone involved lost interest, and Futurebus was hardly used at all.<br /><br />Both these articles point out one important fact that all industry commentators would do well to take heed of. If you are going to make predictions as to what new technology is going to set the world on fire, you've got to be very, very careful indeed!, repurposing can be innovative, toonoemail@noemail.orgAndy WilsonMore years ago than I care to remember, the president of a small engineering company asked me if I would join several other members of his engineering team on a panel to help judge a competition that he was running in conjunction with the local high school.<br /><br />The idea behind the competition was pretty simple. Ten groups of students had each been supplied with a pile of <a href="">off-the-shelf computer peripherals</a> that the engineering company no longer had any use for, and tasked with coming up with novel uses for them.<br /><br />As the teams presented their ideas to the panel, it became obvious that they were all lateral thinkers.
Many of them had ripped out the innards of the keyboards, mice, and loudspeakers they had been provided with and repurposed them in unusual and innovative ways to solve specific engineering problems.<br /><br />Recently, a number of engineering teams across the US have taken a similar approach to solving their own problems, too, but this time with the help of more sophisticated off-the-shelf consumer technology -- more specifically, inexpensive smart phones.<br /><br />Engineers at the <a href="">California Institute of Technology</a>, for example, have taken one of the beasts and used it to build a "smart" petri dish to image cell cultures. Those at the <a href="">University of California-Davis</a> have transformed an iPhone into a system that can perform microscopy. And engineers at <a href="">Worcester Polytechnic Institute</a> have developed an app that uses the video camera of a phone to measure heart rate, respiration rate, and blood oxygen saturation.<br /><br />Taking existing system-level components and using them in novel ways may never win those engineers the same accolades that the designers of the original components often receive. But the work of such lateral thinkers is no less original. Their work just goes to show that great product ideas do not necessarily have to be entirely game-changing.
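WPI hasn't published its app's internals, but the photoplethysmography trick such apps generally rely on is well known: a fingertip held over the camera brightens and darkens slightly with each pulse, so counting oscillations of the mean red-channel intensity gives the heart rate. A toy sketch on synthetic frame data (the signal shape and sample rate are illustrative assumptions, not the app's actual processing):

```python
import numpy as np

def heart_rate_bpm(red_means: np.ndarray, fps: float) -> float:
    """Estimate heart rate from the per-frame mean red-channel level.

    Counts upward zero crossings of the detrended signal (one per
    pulse) and scales by the length of the recording.
    """
    signal = red_means - np.mean(red_means)
    beats = np.sum((signal[:-1] < 0) & (signal[1:] >= 0))
    seconds = len(red_means) / fps
    return 60.0 * beats / seconds

# Synthetic 10-second clip at 30 frames/s with a 72-bpm (1.2-Hz) pulse
# riding on a constant brightness -- a stand-in for real camera frames.
fps = 30.0
t = np.arange(fps * 10.0) / fps
red = 128.0 + 5.0 * np.sin(2.0 * np.pi * 1.2 * t + 0.3)
print(heart_rate_bpm(red, fps))  # 72.0
```

Real fingertip video is far noisier than a clean sine wave, so a practical app would band-pass filter the signal (or work in the frequency domain) before counting beats.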
Sometimes, repurposing existing technology can be equally as innovative, simplifies system specificationnoemail@noemail.orgAndy WilsonNational Instruments' <a href="">NI Week</a> in Austin, TX was a great chance to learn how designers of vision-based systems used the company's <a href="">LabVIEW</a> graphical programming software to ease the burden of software development.<br /><br />But as useful as such software is, I couldn't help but think that it doesn't come close to addressing the bigger issues faced by system developers at a much higher, more abstract level.<br /><br />You see, defining the exact nature of any inspection problem is the most taxing issue that system integrators face. And only when that has been done can they set to work <a href="">choosing the lighting</a>, the <a href="">cameras</a>, and the computer, and writing the software that is up to the task.<br /><br />It's obvious, then, that software like LabVIEW only helps tackle one small part of this problem. But imagine if it could also <a href="">select the hardware</a>, based simply on a higher-level description of an inspection task. And then optimally partition the software application across such hardware.<br /><br />From chatting to the NI folks in Texas, I got the feeling that I'm not alone in thinking that this is the way forward. I think they do, too. But it'll probably be a while before we see a LabVIEW-style product emerge into the market with that kind of functionality built in.<br /><br />In the meantime, be sure to check out our October issue (coming online soon!) to see how one of NI's existing partners -- Coleman Technologies -- has used the LabVIEW software development environment to create software for a system that can rapidly inspect dinnerware for flaws.<br /><br />Needless to say, the National Instruments software didn't choose the hardware for the system.
But perhaps we will be writing an article about how it could do so in the next few, the military vision beansnoemail@noemail.orgAndy WilsonWhile there are many fascinating application challenges that have been resolved by machine-vision systems, there are many that go unreported.<br /><br />That's because the original equipment manufacturers (<a href="">OEMs</a>) that create such vision-based machines are required to sign <a href="">non-disclosure agreements</a> (NDAs) with their customers to restrict what information can be revealed.<br /><br />Oftentimes, it&rsquo;s not just the specifications of the machine that are required to be kept under wraps. These NDAs also restrict the disclosure of the challenge that needed to be addressed before the development of the system even commenced.<br /><br />Now, you might think that the development of <a href="">vision systems for the military marketplace</a> would be an even more secretive affair. After all, building a vision system to protect those in battle would initially appear to be much more imperative than keeping quiet about a machine that inspects food or fuel cells.<br /><br />While the specifics of military designs are almost impossible to obtain legally, the same is not true of depictions of the systems that the military would like to see developed in the future.<br /><br />Often such descriptions are found in extensive detail on numerous military procurement sites, even down to the sorts of software algorithms and hardware implementations that are required to be deployed.<br /><br />Could it be that in doing so, though, the military minds are handing over potentially constructive information to research teams in rogue states?
If they are, then surely they are making a mockery of the very International Traffic in Arms Regulations (<a href="">ITAR</a>), which control the export and import of defense-related materials and services, is all in the mindnoemail@noemail.orgAndy WilsonIn the 1983 science-fiction movie classic <span style="font-style: italic;">Brainstorm</span>, a team of scientists invents a helmet that allows sensations and emotions to be recorded from a person's brain and converted to tape so that others can experience them.<br /><br />While this seemed quite unbelievable thirty years ago, it now appears that scientists at the University of California-Berkeley are bringing these futuristic ideas a little closer to reality!<br /><br />As farfetched as it might sound, the university team in Professor Jack Gallant's laboratory has developed a system that uses functional magnetic resonance imaging (<a href="">fMRI</a>) and computational algorithms to "decode" and then "reconstruct" visual experiences such as watching movies.<br /><br />UC-Berkeley's Dr. Shinji Nishimoto and two other research team members served as guinea pigs to test out the system, which required them to remain still inside an MRI scanner for hours at a time.<br /><br />While they were in the scanner, they watched two separate sets of movie trailers as the fMRI system measured the blood flow in their occipitotemporal visual cortexes. The images of the blood flow taken by the scanner were then divided into sections, after which they were fed into a computer program that learned which visual patterns in the movie corresponded with particular <a href="">brain activity</a>.<br /><br />Brain activity evoked by a second set of clips was then used to test a movie <a href="">reconstruction algorithm</a> developed by the researchers. This was done by feeding random YouTube videos into the computer program.
The 100 clips that the program judged closest to those the subject had probably seen, based on the brain activity, were then merged to produce a continuous reconstruction of the original clips.<br /><br />The researchers' ideas might one day lead to the development of a system that could produce moving images that represent dreams and memories, too. If they do achieve that goal, however, I can only hope that the images are just as blurry as the ones that they have produced already. Anything sharper might be a little embarrassing!, more effective than infrared imagingnoemail@noemail.orgAndy WilsonAll technology can be used for both good and evil purposes. Take <a href="">infrared cameras</a>, for example. While they can be used to provide a good indication of where your house might need a little more insulation, they can also be used by crooks to capture the details of the PIN you use each time you slip your card into an ATM to withdraw cash.<br /><br />That, at least, is the opinion of a band of researchers from the <a href="">University of California at San Diego</a> (San Diego, CA, USA) who have apparently now demonstrated that the secret codes typed in by banking customers on ATMs can be recorded by a digital infrared camera due to the residual heat left behind on their keypads.<br /><br />According to an article on MIT&rsquo;s <a href="">Technology Review</a> web site, the California academics showed that a digital infrared camera can read the digits of a customer's PIN on the keypad more than 80% of the time if used immediately; if the camera is used a minute later, it can still detect the correct digits about half the time.<br /><br />Keaton Mowery, a doctoral student in computer science at UCSD, conducted the research with fellow student Sarah Meiklejohn and Professor Stefan Savage.<br /><br />But even Mowery had to admit that the likelihood of anyone attacking an ATM in such a manner was low, partly due to the $18,000 cost of buying such
a camera or its $2000 per month rental fee. He even acknowledged that mugging would prove a far more reliable means of extracting money from the ATM user, although the technique isn't quite as elegant as using an imaging system to do, bay uses high-tech imagingnoemail@noemail.orgAndy WilsonWhile the exploitation of vision systems has made inspection tasks more automated, those systems have also reduced or eliminated the need for unskilled workers.<br /><br />But such workers won't be the only ones to suffer from the onslaught of vision technology -- pretty soon even skilled folks in professions such as medicine might start to see their roles diminished by automation systems, too.<br /><br />As a precursor of things to come, take a look at a new system developed by researchers at the <a href="">University of Leicester</a> as a means of helping doctors to <a href="">noninvasively diagnose disease</a>.<br /><br />Surrounding a conventional hospital bed, thermal, <a href="">multispectral</a>, <a href="">hyperspectral</a>, and <a href="">ultrasound</a> imagers gather information from patients.
Complementing the imaging lineup is a real-time mass <a href="">spectrometer</a> that can analyze gases present in a patient's breath to detect signs of disease.<br /><br />Professor Mark Sims, the University of Leicester researcher who led the development of the system, said that its aim was to replace a doctor's eyes with imaging systems, and his nose with breath analysis systems.<br /><br />Even though nearly all the technologies employed in the system have been used in one way or another, Sims said that they have never all been used in an integrated manner.<br /><br />Clearly, though, if this instrumentation were coupled to advanced software that could correlate all the information captured from a patient with a database of known disease traits, one would have a pretty powerful tool through which to diagnose disease.<br /><br />The doctors, of course, would then have to find something else to occupy their time. But just think of the cost savings that could be, to follow...noemail@noemail.orgAndy WilsonStarting this month, the Vision Insider blog will offer you, the reader, an opinionated twice-weekly update on industry trends, market reports, new products and technologies. While many of these will be staff-written, we will be using this forum to allow you, our readers, to opine on subjects such as machine vision and image processing <a href="">standards</a>, <a href="">software</a>, <a href="">hardware</a>, and <a href="">tradeshows</a> you find useful.<br /><br />If you have any strong opinions that you feel we could publish (without, of course, being taken to court), I would be only too pleased to hear from you.
And, of course, if you disagree with what I have said, you are only too free to leave a comment using our easy-to-use feedback, new journeynoemail@noemail.orgConard Holton<div class="MsoNormal" style="margin: 0in 0in 0pt;">Conard Holton has accepted a new position as Associate Publisher and Editor in Chief of a sister publication, <a href="">Laser Focus World</a>. Andy Wilson, founding editor of Vision Systems Design, will be taking over the role of editor in chief. Andy has been the technical mainstay of Vision Systems Design since its beginning fifteen years ago, writing many of the articles that have established it as the premier resource for machine vision and image processing. </div>, camera headed for the consumer marketnoemail@noemail.orgConard HoltonA recent article in the <a href=";ref=technology">New York Times</a> reveals that Lytro, a Mountain View, CA startup, plans to release a lightfield camera into the point-and-shoot consumer market later this year, allowing professional and amateur photographers to "take shots first and focus later."<br /><br />With $50 million in venture funding, the company is led by Ren Ng, a Stanford University Ph.D. who wrote his thesis on the subject of lightfield cameras. His work and that of others are described in the <a href="">March 2008 Vision Systems Design article <span style="font-style: italic;">Sharply Focused</span></a>.<br /><br />Several research organizations and machine vision camera manufacturers have developed lightfield cameras, including Stanford, MIT, Mitsubishi Electric Research Labs, <a href="">Raytrix</a>, and Point Grey Research. Like traditional cameras, lightfield cameras gather light using a single lens.
However, by placing an array of lens elements at the image plane, these cameras can capture the structure of light impinging on different sub-regions of the lens aperture.<br /><br />With data captured from these multiple sub-regions, software-based image-processing techniques can select image data from any sub-region within the aperture. In this way, images can be recreated that represent views of the object from different positions, recreating a stereo effect. Better still, if the scene depth is calculated from the raw image, each pixel can be refocused individually to give an image that is in focus everywhere.<br /><br />An interactive photo from the <a href="">Lytro website</a> gives an idea of the potential uses of lightfield cameras (hint: click on an area of the image to focus). And there will be a Facebook app.<br /><br /><iframe allowfullscreen="" frameborder="0" height="400" scrolling="no" src=";utm_medium=EmbedLink" width="400"></iframe>, Braggins rememberednoemail@noemail.orgConard HoltonDon Braggins, a long-standing and highly respected figure in the machine vision industry, has passed away at age 70. Founder of Machine Vision Systems Consultancy in Royston, England, in 1983, Don specialized in image processing and analysis and was a frequent contributor to and participant in organizations such as the European Machine Vision Association and the UK Industrial Vision Association (UKIVA). A founding member of the UKIVA in 1992, he became its director in 1995 and helped guide its development for many years.
He remained a consultant to the association until diagnosed with an inoperable brain tumor in 2010.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 121px; height: 81px;" src="" alt="" id="BLOGGER_PHOTO_ID_5614067148479627986" border="0" /></a>Traveling frequently with his wife Anne, Don was welcomed by companies, universities, and trade organizations around the world for his experience, insights, and good humor. Before establishing his own company, he was product marketing manager for image analysis products at Cambridge Instruments. A graduate of Clare College, Cambridge University, he was a Chartered Engineer and a Fellow of SPIE.<br /><br />Machine Vision Systems Consultancy was known for its independence as a source of information about machine vision products and services. Clients ranged from multinationals to startup companies, venture capitalists, and OEMs.<br /><br />As an editor of technical journals and a frequent contributor to trade press magazines, Don regularly researched the European market for industrial vision systems for individual clients and associations. Between 2000 and 2002 he served as a non-executive board member of Fastcom Technology, a Swiss spinout from EPFL Lausanne. He was also a board member of Falcon Vision in Hungary, providing international marketing advice and technology sourcing, and introduced Falcon to the French company Edixia, which subsequently bought a controlling stake.<br /><br />&ldquo;Don knew the machine vision industry like the back of his hand,&rdquo; remembers Andy Wilson, Editor of Vision Systems Design. &ldquo;You could always rely on him to direct you towards the latest developments and innovations shown at a trade show. He was not only knowledgeable but would freely share his valuable opinions and thoughts with anyone who cared to ask.
I will miss him.&rdquo;<br /><br />In addition to his wife Anne, Don is survived by two children and five grandchildren.<br /><br />The staff of Vision Systems Design extend our sincerest condolences to the Braggins family.<br /><br />--Conard Holton, <a href="">Vision Systems Design</a>, meeting--machine vision business good, missed the volcanonoemail@noemail.orgConard HoltonFrom May 12-14, more than 140 attendees at the 2011 EMVA business conference in Amsterdam celebrated the soaring market for machine vision products and recounted <a href="">tales of traveling home from the 2010 meeting in Istanbul</a> through the Eyjafjallaj&ouml;kull volcano "Cloud". They didn't realize they would just miss yet another cloud from an Icelandic volcano, Gr&iacute;msv&ouml;tn, which erupted just seven days after the conference ended and disrupted air travel across parts of northern Europe.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="cursor:pointer; cursor:hand;width: 300px; height: 225px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5611438171273250850" /></a><br /><br />Market health of European machine vision companies and companies doing business in Europe was reported to be excellent, with many focused on application areas that exhibit particular strength, especially automotive manufacturing, transportation imaging, and surveillance. From 2009 to 2010, overall growth by European companies was almost 35% (the drop from 2008 to 2009 had been -21% overall). <br /><br />Germany expects to see overall growth of at least 11% in 2011, putting it back on a trend line consistent with pre-2009 sales, and about 20% growth was expected globally, according to the EMVA. Europe should see about 22% and the Americas about 19%. Asia, after a 62% year-over-year growth in 2010, should see a growth of 18% in 2011.<br /><br />Describing the market trends, Gabriele Jansen, president of Jansen C.E.O.
and Member of the EMVA Executive Board, attributed the strong rebound in machine vision sales to several factors:<br />- Increase in industrial production<br />- Broad-based improvement in sentiment among industry managers due to significant increases in overall orders and in production trends<br />- Decline in the inventory of finished goods to historically low levels<br />- Remaining effects of stimulus programs for specific industries (eg, automotive)<br />- Strong increase in demand for machine vision in Asia<br /><br />The conference mood was excited but a bit nervous, causing many to ask: How can this growth be sustained? <br /><br />To inspire attendees to ponder answers to that question, the conference included several speakers who focused on the future, with talks about <a href="">globalization and sustainability</a>, finding new markets using the <a href="">Blue Ocean strategy</a>, and how to think about and <a href="">manage for the future</a>.<br /><br />Ramesh Raskar, an MIT professor, described his <a href="">lab's work</a> in lightfield imaging and computational light transport, including recent work on a <a href="">camera that can look around a corner</a>. Speakers also addressed the rapid advances being made in <a href="">service robots</a> and <a href="">machine vision for agriculture</a>. <br /><br />The networking, as always at EMVA events, was excellent. Talking with so many colleagues in the machine vision "industry" also reminded me that machine vision is not an industry per se.
At its core, it is the integration of technologies and products that provide services or applications that benefit true industries such as automotive or consumer goods manufacturing, security, transportation, and agriculture.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 154px; height: 320px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5611439753850820594" /></a>Finally, a parting view of the Cloud that was missed: This Envisat image from ESA, acquired on 24 May 2011, shows a large cloud of ash northeast of Scotland that has been carried by winds from Iceland&rsquo;s Gr&iacute;msv&ouml;tn volcano about 1000 km away. The Gr&iacute;msv&ouml;tn volcano, located in southeast Iceland about 200 km east of Reykjavik, began erupting on 21 May for the first time since, iPhone App for 3-D imagingnoemail@noemail.orgConard HoltonIt's not exactly machine vision yet, but a researcher at <a href="">Georgia Tech</a>, <a href="">Grant Schindler</a>, has created what appears to be the first 3-D scanner app for an iPhone 4. <br /><br />Using both the screen and the front-facing camera, the app--called <a href="">Trimensional</a>--detects patterns of light reflected off a face to build a true 3-D model.<br /><br />Schindler says Trimensional can now share movies and animated GIFs of your 3-D scans and users can unlock the 3-D Model Export feature to create physical copies of any scanned object on a 3-D printer, or import textured 3-D scans into popular 3-D graphics software.
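Shape-from-reflected-light of the kind Trimensional appears to use is, in essence, photometric stereo. As a rough sketch only (not Schindler's actual code, and assuming an idealized Lambertian surface and known light directions), per-pixel surface normals and albedo can be recovered from a few images lit from different known directions:

```python
import numpy as np

def photometric_stereo(images, lights):
    """Recover per-pixel surface normals and albedo from k images of the
    same static scene under k known light directions (k >= 3).

    images: (k, h, w) array of grayscale intensities
    lights: (k, 3) array of unit light-direction vectors
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                       # stack pixels: (k, h*w)
    # Lambertian model: I = lights @ (albedo * normal); solve per pixel
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)              # |g| = albedo
    normals = G / np.maximum(albedo, 1e-12)         # g/|g| = unit normal
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

For a phone screen as the light source, each "light direction" would correspond to illuminating a different region of the display; the three-or-more-images requirement is why such apps ask you to hold still for a moment.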
<br /><br />I wonder if and when such iPhone apps can be used in machine vision applications.<br /><br /><iframe width="560" height="349" src="" frameborder="0" allowfullscreen></iframe>, vision vs Angry Birdsnoemail@noemail.orgConard HoltonReaders of Vision Systems Design may know that <a href="">OptoFidelity</a> (Tampere, Finland) makes systems that test user experience with products such as PDAs and mobile phones. In fact, our <a href="">October 2009 cover story</a> featured one of the company&rsquo;s automated test systems, the WatchDog, which performed just such a test using a JAI Camera Link camera and National Instruments frame grabber. The WatchDog system&rsquo;s interface allows a user to correlate an exact measured response of both the refresh rate of the screen and any user interaction with the device.<br /><br />Now, OptoFidelity has expanded its world and made a commercial-quality video called <a href="">&ldquo;Man vs Robot&rdquo;</a> about a vision-guided robotic system it has built that can beat humans playing <a href="">Angry Birds</a> &mdash; which, in the unlikely case you haven&rsquo;t heard, is a computer game from Rovio Mobile (Espoo, Finland) being played by millions of people on various mobile displays.<br /><br />Some of the fun, machine vision, and robot technology behind that video appears in this &ldquo;Making of Man vs Robot&rdquo; video, also from OptoFidelity.<br /><br /><iframe width="560" height="349" src="" frameborder="0" allowfullscreen></iframe>, guides Justin the Robot playing ballnoemail@noemail.orgConard HoltonThis video explains it all. The DLR in Germany has developed Justin over the years as a very adaptable research robot, able to perform duties from acting as a butler to potentially working on a satellite.<br /><br />The vision and motion capabilities shown by Justin in the video are remarkable.
We will be covering more of such capabilities in the coming months based on the recently completed research report: <a href="">Vision for Service Robots</a>, on sale on our website.<br /><br /><iframe width="425" height="349" src="" frameborder="0" allowfullscreen></iframe>, vision industry consolidation - what's next?noemail@noemail.orgConard HoltonNow that the recession is over and profits are rising fast, it seems that many companies are considering how to expand their markets and solidify both geographical and technological positions. The <a href="">acquisition of LMI Technologies by Augusta Technologie</a>, parent of Allied Vision Technologies, is just the most recent example, of course.<br /><br />Augusta also recently acquired P+S Technik (digital film expertise) and VDS Vosskühler (infrared specialist), and in 2009 acquired Prosilica (GigE cameras).<br /><br />Teledyne has been reconfiguring the machine vision world with its recent acquisitions of <a href="">DALSA </a>(cameras, boards, software) and <a href="">Nova Sensors</a> (infrared), and the <a href="">partial acquisition of Optech</a> (airborne and space imaging).<br /><br />And, in this year alone, <a href="">Pro-Lite acquired light measurement supplier SphereOptics</a>. Camera systems supplier <a href="">NET New Electronic Technology acquired iv-tec</a>, which develops algorithms and real-time image-processing software. And <a href="">Adept acquired food-packaging equipment supplier InMoTx</a> after having acquired service robot maker Mobility in 2010. <br /><br />These are only the most recent and obvious acquisitions. Numerous OEMs and peripheral software and hardware makers have also merged or been acquired. It's a trend long predicted in the machine vision world. What hardware and software products will be in demand by those seeking to expand?
What's next in the drive to create full-product-line vendors to serve vision system integrators and end-users?, controlled equipment in action at Fukushimanoemail@noemail.orgConard HoltonThis video from IDG News Service highlights some of the roles played by robotic equipment in the analysis and recovery from the disasters at the Fukushima nuclear power plants in Japan.<br /><br /><object style="height: 390px; width: 640px"><param name="movie" value=""><param name="allowFullScreen" value="true"><param name="allowScriptAccess" value="always"><embed src="" type="application/x-shockwave-flash" allowfullscreen="true" allowScriptAccess="always" width="640" height="390"></object><br /><br />The plant operator Tokyo Electric Power Company (Tepco) deployed three camera-equipped, remote-controlled excavators donated by Shimizu and Kajima to clear radioactive debris around the unit 3 reactor. <a href="">Robots sent to Japan</a> by Qinetiq North America are still being evaluated before deployment to the site.<br /><br />In addition, Tepco launched a Honeywell T-Hawk micro air vehicle to survey the plant from above, according to a <a href="">report on CNET</a>, fun with Kinect and machine visionnoemail@noemail.orgConard HoltonIn the course of researching our <a href="">Vision for Service Robots market report</a>, it became obvious that low-end vision systems would be a great boon to robot developers of all sorts. And indeed, researchers are taking advantage of low-cost consumer sensors to design increasingly capable and inexpensive robots. <br /><br />The Microsoft Kinect, designed for the Xbox 360 game system, has set new records for consumer sales and is generating considerable excitement among robot hobbyists and researchers. The Kinect sells for about $150, and its embedded infrared projector and camera, which recover depth via structured light, can be used as a vision system for some service robots.
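To give a feel for how a depth sensor like the Kinect serves as a robot vision system, here is a minimal sketch of back-projecting a depth image into a 3-D point cloud with a standard pinhole camera model. The intrinsics (fx, fy, cx, cy) used below are hypothetical placeholders; a real sensor needs calibrated values:

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in meters) to camera-frame 3-D points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Returns an (h, w, 3) array of X, Y, Z coordinates."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```

The resulting point cloud is what robot software typically consumes for obstacle avoidance, plane fitting, or object segmentation; the depth image itself is just an intermediate.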
For some interesting applications of Kinect in service robots, IEEE Spectrum has a good blog: <a href="">Top 10 Robotic Kinect Hacks</a>.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="cursor:pointer; cursor:hand;width: 320px; height: 179px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5594782739352824770" /></a><br /><br />But hobbyists and service robot makers aren&rsquo;t the only ones taking advantage of Kinect. MVTec Software has just <a href="">tested Kinect in 3-D applications</a> for industrial tasks such as bin picking, packaging, and palletizing, as well as for research and development.<br /><br />And Eye Vision Technology has used the Kinect sensor with its <a href="">EyeScan 3D system for robotic applications</a> such as depalletization and sorting components on the assembly, search and assess Japanese disaster sitesnoemail@noemail.orgConard HoltonIncreasingly, vision-guided service robots are being deployed for rescue and assessment tasks following the earthquake and tsunami in Japan. A <a href="">recent blog on IEEE Spectrum</a> covers the deployment of KOHGA3 by a team from Kyoto University. <br /><br />The team used the remote-controlled ground robot to enter a gymnasium in Hachinohe, Aomori Prefecture, in the northeastern portion of Japan's Honshu island, and assess the damage. They tried to inspect other damaged buildings in the region with limited success.<br /><br />The robotics team is led by Professor Fumitoshi Matsuno. KOHGA3 has four sets of tracks that allow it to traverse rubble, climb steps, and go over inclines up to 45 degrees. The robot carries three CCD cameras, a thermal imaging camera, a laser scanner, an LED light, an attitude sensor, and a gas sensor.
Its 4-degree-of-freedom robotic arm is nearly 1 meter long and is equipped with a CCD camera, a carbon-dioxide sensor, a thermal sensor, and an LED light.<br /><br /><iframe title="YouTube video player" width="480" height="390" src="" frameborder="0" allowfullscreen></iframe><br /><br />In addition, there are several <a href="">early reports on robot forays or plans</a>, and numerous teams from various robot organizations are making themselves available to help. For example, you can follow the efforts in Japan of Dr. Robin Murphy, who directs the Center for Robot-Assisted Search and Rescue (CRASAR) at Texas A&M University, on <a href="">her blog</a>, 2011 shows synergy of showsnoemail@noemail.orgConard HoltonI don&rsquo;t know whether <a href="">Automate 2011</a> (held at McCormick Place in Chicago, March 21-24) was a success for every exhibitor and attendee, but it had all the necessary elements. Unofficial numbers for the show were 170 exhibitors and over 7500 attendees.<br /><br />The floor traffic, which waxed and waned during the four days of the show, seemed to consist of many system integrators, tool manufacturers, and warehouse system providers&mdash;ideal traffic for the show and attributable to the very large, co-located ProMat show on materials handling. Indeed, walking around ProMat was a bit like walking through a display of machine vision in action.<br /><br />Having recently completed <a href="">a market report on the use of vision in service robots</a> in, among other applications, warehousing, I was delighted to see the robots at Kiva Systems <a href="">gliding eerily around the floor</a>, using embedded smart camera technology to read Data Matrix codes on the floor and to position themselves below shelves that they then picked up and moved to a human who would pick out desired parts.
<br /><br />In additional good news for the machine vision industry, the International Federation of Robotics (IFR) presented the preliminary results of its annual statistics for industrial robots, including vision-guided robots. In 2010, with more than 115,000 industrial robots shipped, the number of units sold worldwide almost doubled from 2009, the lowest year since the early 1990s.<br /><br />Here are some slides from the IFR presentation, which show trends from 2001 to 2011, and the strength of markets in Asia, especially in Korea. An <a href="">article on the subject</a> explores the trends in different regions and industries (click on images to enlarge).<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="cursor:pointer; cursor:hand;width: 320px; height: 200px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5589958042905156050" /></a><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="cursor:pointer; cursor:hand;width: 320px; height: 200px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5589958166861919250" /></a>, earthquake impact machine vision component supply?noemail@noemail.orgConard HoltonNews about the supply chain of vision and electronic components coming from Japan has so far been tentative and sporadic. The ongoing effects of the earthquake, tsunami, and nuclear power plant failures have dominated the news but indications of specific global economic consequences are emerging. <br /><br />A <a href="">New York Times article today</a> investigates some of the impacts, noting that a Texas Instruments plant north of Tokyo that makes A/D chips and accounts for 10% of TI's global output was damaged and won't resume full production until September. Toshiba has closed some of its production lines, potentially affecting the availability of NAND flash chips.<br /><br />The port of Sendai-Shiogama is heavily damaged.
It is the 13th largest Japanese port in container shipments and of particular importance to Sony, Canon, and Pioneer. FedEx shut service to much of eastern Japan, including Tokyo, following the earthquake but now reports resumption with some service delays.<br /><br />Please let me know if you have any related information -, with video cameranoemail@noemail.orgConard HoltonA new surveillance device may be arriving at your bird feeder soon. Yesterday, AeroVironment (Monrovia, CA), announced that it had gotten its <a href="">Nano Hummingbird</a> to hover precisely and fly forward, fast. Weighing two-thirds of an ounce, including batteries and video cameras, the prototype was built as part of the DARPA Nano Air Vehicle program.<br /><br />The final concept demonstrator is capable of climbing and descending vertically, flying sideways left and right, flying forward and backward, as well as rotating clockwise and counter-clockwise, under remote control. During the demonstration the Nano Hummingbird flew in and out of a building through a normal-size doorway.<br /><br /><iframe title="YouTube video player" width="640" height="390" src="" frameborder="0" allowfullscreen></iframe><br /><br />The hand-made prototype aircraft has a wingspan of 16 cm (6.5 inches) and can be fitted with a removable body fairing, which is shaped to have the appearance of a hummingbird. The company, which makes a variety of <a href="">unmanned aerial vehicles</a> used by the military, says the Nano is larger and heavier than an average hummingbird, but is smaller and lighter than the largest hummingbird currently found in nature.<br /><br />Vision Systems Design is publishing a market report on vision for such UAVs and other service robots. For more information, <a href="">click here</a>, Year - New Web sitenoemail@noemail.orgConard HoltonThe redesign of the Vision Systems Design Web site is the first in a series of new developments for 2011.
Our address remains: <a href=""></a>, but you will find a new look and numerous enhancements to help engineers and system integrators make use of machine vision and image processing products and technologies.<br /><br />One of the key improvements is the visibility of our topic centers on any page of the site in the navigation bar. These topic centers focus on <a href="">Factory Automation</a>, <a href="">Non-Industrial Vision</a>, <a href="">Cameras</a>, <a href="">Boards & Software</a>, <a href="">Lighting & Optics</a>, and <a href="">Robotics</a>.<br /><br />You'll also find <a href="">more videos </a>and a new video player, and the addition of Editorial Digests on relevant subjects such as <a href="">3-D imaging </a>and <a href="">solar cell manufacturing</a>.<br /><br />In addition, you will easily find the <a href="">current issue </a>and archives of our magazine, more links to our <a href="">Buyers Guide </a>and <a href="">Industrial Camera Directory</a>, faster page load times, and a link to <a href="">OptoIQ</a>, which is the portal site for several PennWell publications related to lasers, photonics, bio-optics, and research.<br /><br />Please let me know what you think of the new site and what other changes we might make.<br /><br />Best wishes for the New Year.<br /><br />Conard Holton<br />Editor in Chief<br />Vision Systems Design<br />, 2010 : Parting viewsnoemail@noemail.orgConard HoltonAfter three demanding days on the show floor and travel home from Stuttgart (in many cases delayed by a fierce storm over northern Europe), it&rsquo;s clear that VISION 2010 was a major success, a relief to the organizers and exhibitors, and a good omen for 2011.<br /><br /><a href=""><img style="cursor:pointer; cursor:hand;width: 320px; height: 214px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5540660665332258962" /></a><br />(Figure: The Mid-Size Robocup Team: Tech United Eindhoven put on a show)<br /><br />In no particular order, here are some observations:<br 
/><br />- This tradeshow is alive and well. It drew a record 6800 visitors, 1600 exhibitors, and 323 exhibiting companies. Forty-four percent of the exhibitors came from outside Germany, primarily from the US and South Korea. Thirty-five percent of the visitors came from outside Germany.<br /><br />- Most companies reported strong to torrid growth in the past months. Basler, for example, grew very fast but expects sales will soften in the first and second quarters of 2011, before strengthening again in the second half of next year.<br /><br />- A price war is being waged at the lower end of the camera market. Several of the competing vendors are German and continue making their cameras in Germany despite higher labor costs. Their plan is to win on the basis of innovation and quality. Part of their strategy is to use consumer interfaces, move from CCD to CMOS sensors, add capabilities in software, not hardware, and continue developing smaller camera footprints.<br /><br />- Several of the larger camera vendors told me that over 60 percent of their business is non-industrial--that was surprising but shouldn&rsquo;t have been. It&rsquo;s been predicted for several years and now it&rsquo;s true. Growth is very fast in transportation/traffic management, security, and point of sale/entertainment applications. Each sector (especially security) has its own barriers to entry, with large, established system vendors and low-cost competition from &ldquo;legacy&rdquo; CCTV cameras.<br /><br />- According to system integrator Luster LightVision in Beijing, the Chinese machine vision market is roughly $76 million industrial and $50 million non-industrial, with transportation and logistics the fastest growing segments.
2010 will be the year that machine vision really takes off, although it is hampered by cost-sensitive markets, little interdisciplinary system-design knowledge or machine vision expertise, and a weak industry organization.<br /><br />- GigE is seeing steady global growth, with the GigE Vision standard being used in some non-machine vision applications, including embedded military and medical applications.<br /><br />- Tradeshows are inspiring because they give engineers a chance to get out of the office, lab, or factory and talk to suppliers, developers, or speakers at technical sessions. Google may be a great tool for search or for comparing components, but human interaction is a better source of innovation, and perhaps the fastest way to learn that someone else has already solved your problem. Engineering management should encourage their staff to get out, ask questions, and see what the world is doing. </p>, 2010 : Day One, VISION Award, market newsnoemail@noemail.orgConard HoltonStuttgart, November 9&mdash;The anticipation seems to have been rewarded for the now-official 323 exhibitors (a record number), as most exhibitors were delighted with the quantity and quality of leads. More coverage of the show is available at <a href="">this link</a>.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 198px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5537709539668682850" /></a><br /><br />At a press conference the winner of the VISION Award was announced: SICK, for its ColorRanger E. Of the 23 entries, it was probably inevitable that some sort of 3-D product would win since 3-D is one of the fastest growing technology segments in machine vision.<br /><br />The <a href="">ColorRanger E</a> combines high-speed 3-D and color linescan capabilities at more than 11 kHz.
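3-D linescan cameras of this kind typically recover height by sheet-of-light laser triangulation. As a simplified sketch only (idealized geometry, made-up parameter values, not SICK's implementation): with the camera looking straight down and a laser sheet projected at a known angle from vertical, a surface raised by height z shifts the imaged laser line laterally by z * tan(angle), so height follows directly from the measured line shift:

```python
import numpy as np

def heights_from_laser_shift(peak_rows, ref_row, mm_per_pixel, laser_angle_deg):
    """Sheet-of-light triangulation, simplified: z = shift / tan(angle).

    peak_rows:      detected laser-line row per image column (pixels)
    ref_row:        row where the line falls on the flat reference surface
    mm_per_pixel:   image scale on the reference plane (hypothetical value)
    laser_angle_deg: laser-sheet angle from vertical (hypothetical value)
    Returns per-column surface heights in millimeters.
    """
    shift_mm = (np.asarray(peak_rows, float) - ref_row) * mm_per_pixel
    return shift_mm / np.tan(np.radians(laser_angle_deg))
```

One such height profile is produced per laser-line exposure; sweeping the part under the line at 11 kHz builds up a full 3-D surface map, column by column.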
Applications include wood inspection, quality control in baking or food processing, and solar wafer shape and color quality assurance.<br /><br />The list of contestants contains many other innovative products&mdash;you will find the list at the end of this blog.<br /><br />During a press conference, Dr. Olaf Munkelt, chairman of the VDMA Machine Vision Group, talked about the unprecedented collapse of the global market and its impact on machine vision sales&mdash;&ldquo;basic trust is still lacking&rdquo;, he said. Nonetheless the industry has come bounding back, with German sales expected to rise 18% in 2010 over 2009, giving the German industry a value of 1.1 billion euros. Inspection, Munkelt said, remains the most important driver of sales, especially in the automotive market, and 3-D metrology is rapidly increasing.<br /><br />In a video interview with me that we&rsquo;ll post soon, Munkelt said that the global sales of machine vision break out roughly into one third each for Europe, Asia, and North America. He noted that the US share is dropping in relative terms and that part of the reason for this may be that investment in R&D is declining. This observation is not new and it&rsquo;s one that concerns many of us in the US.<br /><br />Basler also held a press conference in which its leaders, Dietmar Ley and Arndt Bake, described the company as moving to be a &ldquo;pure play&rdquo; camera company, leaving its systems business side. Bake said that Basler sees significant new growth opportunities in traffic (eg, tolls, parking, law enforcement), surveillance, and point-of-sale applications (eg, recycling, wheel alignment).<br /><br />Finally, for a taste of the innovative technologies on display at the show, here&rsquo;s the list of the 22 entries that did not win the VISION Award. 
We&rsquo;ll be covering some of these products online and in future issues of the magazine:<br /><br />&bull; ABAQuS, Using the M200CT to check the quality of barcodes and 2-D codes as a prerequisite to automated data input with barcode scanners<br />&bull; Allied Vision Technologies, Redefining the limits of GigE Vision bandwidth using Link Aggregation<br />&bull; Aqsense, Global dimensional inspection and geometric features measurement<br />&bull; BAP Image Systems, High-speed scanning based on CIS sensors<br />&bull; Basler Vision Technologies, New CMOS camera generation<br />&bull; Chronos Vision, High speed, real-time video eye tracking<br />&bull; Dalsa, BOA smart camera<br />&bull; Effilux, LED lighting for 3-D profilometry<br />&bull; FLIR Commercial Systems, New FLIR A615<br />&bull; Frankfurt Laser Company, HEML high-power temperature stabilised laser diode module<br />&bull; Fraunhofer IDMT, Fraunhofer eye tracker&ndash;a calibration free solution with scalable and configurable Hough IP Core<br />&bull; Imaging Diagnostics, Auto-focus camera using standard DSLR lenses<br />&bull; Inviso, Brain-inspired machine vision<br />&bull; Keyetech, Keyetech texture-based recogniser<br />&bull; New Imaging Technologies, Native WDR: a radical, innovative breakthrough in CMOS sensors based on the Magic technology<br />&bull; OPT Machine Vision Tech, Introduction of AOI light application in machine vision<br />&bull; Photometrics, EMCCD camera &ndash; automatic, real-time imaging data standardization<br />&bull; PMD Technologies, First robust and feasible gesture control using time-of-flight technology<br />&bull; Raytrix, Single lens 3-D-camera with extended depth of field for industrial inspection<br />&bull; Schott Lighting and Imaging, Telecentric zoom lens ML-Z07545HR<br />&bull; Smartvision, BlobMax, A new method for a safe inspection of planar and curved surfaces with diffuse or specular reflection<br />&bull; Softhard Technology, Currera-R Atom-based 
industrial smart camera<br />&bull; Sony Image Sensing Solutions Europe, The smallest C-mount digital camera<br />&bull; Spotrack, Method for the positioning of multiple pan/tilt devices in single object tracking applications (multiple camera tracking)<br />&bull; Technos Japan, The first visual inspection system that does not miss defects<br />&bull; Vision for Vision, Interactive workshop for fast prototyping<br />&bull; Xenics, The Lynx: a novel high-performance line scan camera, 2010 - As the show beginsnoemail@noemail.orgConard HoltonNovember 8, Stuttgart--Preparations are hectically concluding for tomorrow's start of the 23rd annual VISION show. This is the third year that the show has been held in the Neue Messe Stuttgart--the new, very large and sprawling complex next to the airport. The old Killesberg location was cozier for sure, but could never have accommodated the 306 exhibitors or 6000+ attendees expected for this show.<br /><br />Based on the enthusiasm and energy of people I've spoken to so far, the show should be better than ever. The recovery of the industry is evident, with many companies reporting banner quarters or years. One of the biggest complaints is the shortage of components such as sensors, so vendors are struggling to build cameras fast enough to meet demand. Not such a bad problem after a very difficult recession.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 240px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5537218613990611890" /></a><br /><br />We will be covering the show in several ways. We'll be posting videos about the show, the markets, and multiple technical sessions on topics such as GigE Vision, megapixel lenses, CMOS sensors, data transmission standards, and CoaXPress. We'll also be recording videos from the Global Vision Standards demonstration booth, including demonstrations of Camera Link HS and GigE Vision. 
GigE Vision will be especially interesting to follow because it is beginning to be used outside of the machine vision world, including in embedded military and medical applications. <br /><br />Also, we will be posting a series of sponsored videos from machine vision vendors who describe their products and applications in some depth.<br /><br />And of course, in the coming issues of Vision Systems Design magazine and online, we'll be publishing in-depth technical articles by editor Andy Wilson about the most interesting technical developments at the show, including the winner of the VISION Award, 2010: a study in variations of machine visionnoemail@noemail.orgConard HoltonAs machine vision technologies and products become more established across multiple industries, tradeshows such as the forthcoming <a href="">VISION 2010 </a>to be held November 9-11 at the New Stuttgart Trade Fair Centre in Stuttgart, Germany, will reflect these trends. <br /><br />Indeed, during VISION 2010, a panel discussion entitled: &ldquo;<a href="">Green Vision &ndash; Driving Factor for a Green Future</a>&rdquo; will focus on how machine vision can be used in systems to protect the environment, conserve resources, increase energy efficiency, and develop more environmentally friendly products.<br /><br />In addition to <a href="">highlighting innovations in industrial camera and system design</a>, the show will also include a demonstration of autonomous robot footballers, an application park highlighting the role machine vision plays in testing and production processes, an area demonstrating international machine vision standards, joint booths for startup companies, and a series of seminars for those new to machine vision.<br /><br />According to the organizers, Messe Stuttgart, attendance is already on track to exceed last year's, both in terms of exhibitors (now over 300) and attendees. 
Those who wish to see the many sides of machine vision would do well not to miss the show, Hyperspectral imaging heads commercialnoemail@noemail.orgConard HoltonMultispectral imaging enables several discrete images in the visible and IR bands of the spectrum to be captured and processed. To capture continuous spectral bands from the ultraviolet to the far infrared, hyperspectral imaging is a powerful if often expensive imaging tool.<br /><br />Hyperspectral remote-sensing applications have flourished for several decades. Now, low-cost imaging spectrometers are being introduced that allow innovative approaches to applications such as medical diagnostics, metallurgy, sorting materials, food processing, and microscopy.<br /><br />We recently published an article by Rand Swanson at Resonon describing a <a href="">compact hyperspectral imaging system that can be flown in a Cessna aircraft to monitor the spread of leafy spurge</a>, an invasive weed that reduces grazing forage for livestock. <br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 297px; height: 320px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5509825226045521250" /></a><br /><br />We&rsquo;ve also reported on the use of hyperspectral imaging to <a href="">detect the food pathogen Campylobacter</a> and to <a href="">sort walnuts</a>.<br /><br />A hyperspectral imaging <a href="">microscopy system also allows detailed examination of LED structures in the visible and near-IR</a>.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 170px; height: 320px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5509825452902968130" /></a><br /><br />You can find more examples by searching our website. I expect to see numerous such articles in the future. 
For example, we&rsquo;ll be describing a hyperspectral blueberry sorting system from EVK in Austria in our September issue, Pharma needs machine visionnoemail@noemail.orgConard HoltonDiscussions on the <a href="">Vision System Design Group on LinkedIn </a>have recently reflected the growing interest in using machine vision to inspect pharmaceutical products. We have published a series of technical articles that might be of interest.<br /><br />One article about a <a href="">pharmaceutical packing system that uses IR and visible sensors </a>describes how American SensoRx developed a system that inspects tablets, capsules, caplets, and gels at very high speeds before they are packed up and shipped to distributors.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 211px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5508677164922563106" /></a><br />Another describes how the German company <a href="">Boehringer uses FireWire cameras to inspect capsules</a> used in inhaled medications for respiratory disease.<br /><br />And yet another describes how <a href="">Pfizer added an x-ray system</a> to its visible light inspection system to check tablets in blister packs.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 285px; height: 190px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5508677494177943554" /></a>, Robots, with vision, at your servicenoemail@noemail.orgConard HoltonA recent <a href="">article </a>and <a href="">video </a>in the <em>New York Times</em> describes <a href="">Bandit</a>, a robot built by researchers at the University of Southern California, which interacts with autistic children. Three-foot-tall Bandit can maintain &ldquo;eye&rdquo; contact with an autistic child and, sometimes, use playful or sympathetic actions to overcome withdrawn behavior. 
<br /><a href=""><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 190px; height: 141px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5497088138998811314" /></a> <br />Another robot, named <a href="">RUBI&mdash;Robot Using Bayesian Inference</a>&mdash;at the University of California, San Diego, images children&rsquo;s faces, recognizes basic emotions from facial muscle movement, and responds with verbal and physical gestures of encouragement. <br /> <br />These service robots are part of a rapidly growing wave of robotic human helpers. In the classroom they may supplement the work of human teachers, during surgery they may perform delicate procedures, and on the battlefield they may help disarm a roadside bomb, as described in our <a href="">June 2010 cover story</a>. <br /> <br />The technological differences between these service robots--with their vision and image processing functions--and robots used in industrial applications can be small. For example, <a href="">a recent article on our website </a>describes the work of researchers at the Technical University of Munich who are imaging non-verbal communications such as gestures and facial expressions as a method of interacting with robots. To date, they have demonstrated that their work can help those that require assisted living and workers in automated production plants, where background noise may make speech recognition difficult. <br /> <br />Recently, European researchers have built <a href="">a robot for 'on-demand' rubbish collection</a> &ndash; just make a call and it will soon arrive at your door. It's ideal for collecting waste in the narrow streets of many historical towns. 
<br /> <br /><object width="640" height="385"><param name="movie" value=""></param><param name="allowFullScreen" value="true"></param><param name="allowScriptAccess" value="always"></param><embed src="" type="application/x-shockwave-flash" allowfullscreen="true" allowScriptAccess="always" width="640" height="385"></embed></object> <br /> <br /> <br />About the size of a person, it can navigate the narrowest of alleys, stop outside your door and take your rubbish away. And the best bit is this: You don't have to remember when to put your bin out, but simply make a telephone call. Soon the robot is waiting outside your door, ready to receive your rubbish. <br />, VSD online--Hyperspectral imaging, miniature autofocus lenses, 3-D visionnoemail@noemail.orgConard Holton<a href=""><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 100px; height: 130px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5495669853378569394" /></a><br />To capture continuous spectral bands from the UV to the far IR, hyperspectral imaging has become a powerful imaging tool. In our July issue, Rand Swanson at Resonon describes a <a href="">compact hyperspectral imaging system </a>that has been flown in a Cessna aircraft to monitor the spread of leafy spurge, an invasive weed that reduces grazing forage for livestock.<br /><br />In our Product Focus article, editor Andy Wilson describes recent developments in <a href="">miniaturized autofocus lenses</a>. Whether based on electro-optical, electromechanical, thermo-optical, or acousto-mechanical techniques, these tunable optics will find cutting-edge applications in smart machine-vision systems, endoscopy systems, and mobile phones.<br /><br />Our <a href="">cover story </a>shows how an optical tester based on an off-the-shelf camera system can be used to calibrate centering errors of lenses to ensure the imaging quality of an optical assembly or subassembly. 
<br /><br />3-D vision remains one of the most alluring areas for innovation in machine vision development. While dual-camera and time-of-flight sensors are becoming increasingly important, other options such as the one described in an <a href="">article about ISee3D </a>now allow stereo images to be captured from a single camera/lens combination.<br /><br />We also have articles on: <a href="">inspection of wood surfaces for defects </a>by researchers at AIDO in Spain; an <a href="">algorithm that uses partial derivatives </a>to improve edge detection; and an <a href="">FFT processor that performs phase correlation</a>, Blogging about machine vision--interested?noemail@noemail.orgConard HoltonSome cynics I know mock the idea of blogging, but I think it&rsquo;s a good way to explore a subject such as machine vision. And a blogger might even be paid the highest compliment--having your blog blogged about.<br /><br />A case in point: editor Andy Wilson&rsquo;s <a href="">My View video blog </a>on the <a href="">Vision Systems Design website </a>was recently blogged about by Laura Hoffman, who runs the Microscan blog <a href="">SolutionConnection</a>, along with colleagues such as John Agapakis.<br /><br />Numerous other companies in the machine vision industry have blogs or are trying to figure out what they could write that wouldn&rsquo;t rattle internal corporate feathers but would still be interesting. Like Microscan&rsquo;s, Thor Vollset&rsquo;s <a href="">ScorpionVision blog </a>aims to keep readers up to date on the company and how customers can use its products.<br /><br />Then there are system integrators who have occasional blogs on their own sites or on a magazine site. These include David Dechow at Aptura Machine Vision Solutions with his <a href="">Regarding Machine Vision blog</a>, and <a href=" ">Ned Lecky </a>at Lecky Integration and <a href="">John Nagle</a> at Nagle Research. 
Also, there are journalists who blog about related topics, such as Frank Tobe at <a href="">Everything Robotic </a>and Gabriele Jansen at <a href=" ">Inspect-online blog</a>.<br /><br />And of course there is the popular and anonymous B Grey at <a href="">machinevision4users blog</a>, who presumably hides his or her identity out of concern about industry (or employer?) reaction. The postings vary from technical observations and comparisons to witty digs. But anonymity is both a shield and a crutch. Speaking as a journalist who must live with the consequences of what I write, I think B Grey should stand forth and be counted.<br /><br />Whether anonymous or very public, all bloggers can attest to the fact that it&rsquo;s not easy to post frequently and have something interesting or new--or at least amusing--to say. Yet it can be quite rewarding, personally and professionally.<br /><br />Blogs can become good networking and marketing tools that engage people. And you can re-post blogs to other social media sites such as LinkedIn, where you will find many relevant groups such as the <a href="">Vision Systems Design Group</a>, the <a href="">Machine Vision Group</a>, the <a href="">Image Processing Group</a>, and the <a href="">3D Machine Vision Group</a>. Most of these groups have hundreds or even thousands of members. <br /><br />If you have a comment on what I&rsquo;ve written, please post it on this blog.<br /><br />If you&rsquo;re reading this and interested in contributing a regular or at least somewhat regular blog to <em>Vision Systems Design</em>, please let me know:, June issue: vision-guided robots, software, and Three Mile Islandnoemail@noemail.orgConard Holton<a href=""><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 100px; height: 131px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5485698804706472178" /></a><br /><br />Our June issue is now available on our website. 
The articles in it point to some of the many ways in which machine vision is evolving. <br /><br />I glimpsed this potential in 1982, when I watched the feed from the first remote video camera lowered into a reactor vessel at Three Mile Island, after the nuclear accident had destroyed the reactor core in 1979. It took several years to develop the imaging equipment for this first foray and, in the years that followed, many cameras and robots would gather information about damage and perform cleanup operations in highly radioactive areas of the plant. Here's a picture of Rover, developed with Carnegie Mellon University.<br /><a href=""><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 216px; height: 320px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5485698567641118610" /></a><br /><br />Although robots were not then sophisticated enough to perform major operations&mdash;and stereo vision was practically a dream&mdash;the future of vision-guided robots was obvious. Some colleagues and I wrote a history of the cleanup, including the robotic and imaging technologies that were used. You can download a PDF of the history published by the Electric Power Research Institute by <a href="">clicking HERE</a>.<br /><br /><br />Remotely operated vehicles are now playing an increasing role in other crises. Our cover story in the June issue, for example, shows how <a href="">3-D displays can help remote operators in the military</a> safely handle and dispose of explosive devices using robots. <br /><br />Another article explains how <a href="">single-sensor image fusion technology </a>could enable simpler and more effective imaging of potential threats in security and defense operations.<br /><br />Machine vision is not always on the front line of environmental and political challenges, however. 
Researchers from the University of Ilmenau in Germany are using image processing techniques to <a href="">evaluate the quality of wheat after it is harvested</a>. <br /><br />And, as contributing editor Winn Hardin explains, manufacturers are using other <a href="">machine vision techniques to ensure that the steel tubes produced for oil and gas production </a>are of the highest quality.<br /><br />This broadening range of biomedical, robotics, military, and aerospace applications is leading software vendors to expand the functionality of their products beyond simple measurement functions, as editor Andy Wilson writes in his <a href="">Product Focus article on machine vision software</a>. <br /><br />Indeed, new opportunities for machine vision and image processing systems are occurring every year. To take advantage of these developments, however, suppliers of machine vision systems will have to look outside the box of conventional industrial manufacturing and into niche applications that span the gamut from agriculture to space,, vision, and ALS - forget eye-tracking for shoppersnoemail@noemail.orgConard HoltonWe've seen <a href="">eye-tracking systems </a>that help determine the preferences of shoppers or website browsers. Here's one that could really benefit people who suffer from physical limitations: <a href="">The EyeWriter project</a>.<br /><br />It's an ongoing collaborative research effort to empower people who are suffering from amyotrophic lateral sclerosis (ALS; aka Lou Gehrig's disease) with creative technologies. It allows graffiti writers and artists with paralysis resulting from ALS to draw using only their eyes.<br /><br />The collaborative consists of members of Free Art and Technology (FAT), OpenFrameworks, the Graffiti Research Lab, and The Ebeling Group communities. They have teamed up with LA graffiti writer, publisher, and activist, Tony Quan, aka TEMPTONE. 
He was diagnosed with ALS in 2003, a disease which has left him almost completely physically paralyzed… except for his eyes.<br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 153px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5481544142232519922" /></a><br />The long-term goal is to create a professional/social network of software developers, hardware hackers, urban projection artists and ALS patients from around the world who are using local materials and open source research to creatively connect and make eye art.<br /><br /><object width="400" height="225"><param name="allowfullscreen" value="true"><param name="allowscriptaccess" value="always"><param name="movie" value=";;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=ffffff&amp;fullscreen=1"><embed src=";;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=ffffff&amp;fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="225"></embed></object><p><a href="">The Eyewriter</a> from <a href="">Evan Roth</a> on <a href="">Vimeo</a>.</p><p><br /><a href="">Click HERE </a>to see the Specification Sheet for the Eyewriter, including the <a href="">low-cost vision components </a>that are needed.</p>, Vision Show revealed--on videonoemail@noemail.orgConard HoltonThe most interesting thing for me about The Vision Show in Boston (May 25-27) was the simple fact that about 80 exhibitors put on a very upbeat and comprehensive showing of machine vision components available to integrators and end-users.<br /><br />In one place you could see and touch the <a href="">cameras</a>, <a href="">lighting</a>, <a href="">boards</a>, <a href="">cabling</a>, etc that you might want to design into your next system. 
In the technical sessions and tutorials, you could also be instructed in many of the fundamentals of the technology and understand how products perform.<br /><br />Here&rsquo;s a video with Jeff Burnstein from the <a href="">AIA </a>talking about the show and what&rsquo;s coming next.<br /><br /><center> <embed src="" bgcolor="#99cc33" flashVars="playerId=90579791001&viewerSecureGatewayURL=" base="" name="flashObj" width="310" height="250" seamlesstabbing="false" type="application/x-shockwave-flash" swLiveConnect="true" pluginspage=""></embed></center><br /><br />Of course the fact that most vendors were reporting good to great sales numbers really helped. The overall mood of the roughly 1900 attendees and exhibitors was so strikingly different from the mood during the depths of the recession that it was impossible not to get caught up in the good feelings.<br /><br />Follow this link to our <a href="">Video Showcase</a> to see some of the videos that were made during the show, military envisions how imaging catches insurgentsnoemail@noemail.orgConard HoltonUsing very high-resolution digital cameras, multispectral imaging, and laser ranging, the UK&rsquo;s <a href="">Defence Science and Technology Laboratory </a>(DSTL) says that new imaging technology will be used within 5 years to recognize insurgents or terrorists. 
<br /><br />DSTL, which develops and tests the latest technologies for the Ministry of Defence, had members of its staff act out insurgent-like behavior, while developers and engineers took on the role of "good guys", pursuing and monitoring them.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 206px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5480104858033578994" /></a><br /><br />The military twist was that these high-tech surveillance techniques are being combined with software that can pick out unusual patterns in behavior--such as two vehicles meeting in a concealed area. The surveillance, DSTL says, will eventually help to "win the battle" against insurgency. For more information, read the excellent <a href="">BBC News article</a>, battles Gulf oil disasternoemail@noemail.orgConard HoltonSatellite imaging and particle image velocimetry are two of the imaging techniques being deployed against the oil spill in the Gulf of Mexico. A <a href="">May 22 article in the New York Times</a> describes several of the techniques that researchers are using to try to get an accurate measurement of the oil spill.<br /><br />One approach is described in more detail by one of the Times authors, Steve Wereley at Purdue University, in a PowerPoint presentation entitled &ldquo;<a href="">Oil Flow Rate Analysis &ndash; Deepwater Horizons Accident&rdquo;</a>. He predicts that the Deepwater Horizon Gulf of Mexico oil spill is more than 50 times worse than initial BP predictions. <br /><br />Using an imaging technique called <a href=" ">particle image velocimetry</a> (PIV), Wereley analyzed video obtained from BP to compute the magnitude of oil flowing from the site. According to his presentation, Wereley estimates that between 56,000 and 84,000 barrels a day are currently pouring into the Gulf of Mexico. 
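The arithmetic behind an estimate in this range is simple: an average plume velocity times the pipe's cross-sectional area gives a volume flow rate, which then converts to barrels per day. Here is a minimal sketch of that calculation; the velocity and diameter values are illustrative assumptions, not figures taken from Wereley's presentation.

```python
import math

# Rough velocity-times-area flow-rate estimate in the spirit of a PIV analysis.
# Both input values below are illustrative assumptions.
plume_velocity_m_s = 0.9   # average plume exit velocity, as PIV might measure (assumed)
pipe_diameter_m = 0.5      # inner diameter of the leaking pipe (assumed)

# Volume flow rate = average velocity x cross-sectional area
area_m2 = math.pi * (pipe_diameter_m / 2) ** 2
flow_m3_s = plume_velocity_m_s * area_m2

# Convert cubic meters per second to barrels per day
# (1 oil barrel is approximately 0.158987 m^3; 86,400 seconds per day)
M3_PER_BARREL = 0.158987
barrels_per_day = flow_m3_s * 86_400 / M3_PER_BARREL

print(f"{barrels_per_day:,.0f} barrels/day")
```

With PIV-derived velocities and the actual riser dimensions, the same three-step calculation produces estimates of the order quoted above.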
Doug Suttle, chief operating officer for BP, initially said he thought the estimate of 1,000 barrels a day was accurate, although <a href="">BP is now admitting it has underestimated the amount of oil leaking</a>.<br /><br />To obtain his figures, Wereley computed the average plume velocity of the oil using PIV techniques, multiplied this figure by the cross-sectional area to find the volume flow rate, and then converted this figure to barrels per day.<br /><br />PIV is an optical method of fluid visualization used to obtain instantaneous velocity measurements and related properties in fluids. By tracking the motion of features in the fluid between successive images, the velocity of the flow being studied can be calculated.<br /><br />A live video of the oil leak, provided by BP over Ustream, is available on <a href=""> </a>- search: live oil spill cam.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 300px; height: 225px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5474903362734320562" /></a><br /><br />All this imaging doesn't even take into account the dozen or so remote underwater vehicles that are now in operation near the sea bed, around the leak, streaming video back to a control center in Houston. <br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 177px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5474903037584502258" /></a><br /><br />The oil spill is a disaster that maybe imaging and machine vision can help understand and, Machine vision lives in Irannoemail@noemail.orgConard HoltonWhile <a href="">cruising the Bosphorus</a> I met a vision system integrator from Tehran. Kasra Ravanbakhsh is the co-founder and managing director of <a href="">Kasra Hooshmand Engineering</a> (KDI). 
I was of course taken with him since he attributed his attendance at the EMVA Business Conference in Istanbul to seeing an advertisement for it in Vision Systems Design.<br /><br />It turns out that along with my own blog about the conference and travel home under The Volcanic Cloud, Kasra has posted a <a href="">blog with many pictures about the EMVA conference</a>.<br /><br />KDI was formed in 2003 as a private joint-stock company in Tehran. The company's previous name was Kasra Digital Instruments and it still uses that abbreviation and logo.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 96px; height: 88px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5470852167868538850" /></a><br />Kasra says his company excels in developing machine vision systems, PC-based automation and monitoring, industrial automation, data acquisition, LabVIEW programming, microcontroller-based systems, and instrumentation. It is also very involved in cleanroom design and installation.<br />&nbsp;<br />He also claims that KDI is the only professional developer of machine vision and real-time image processing-based inspection and control systems in Iran. KDI operates in industries such as pharmaceutical, glassware, packaging, military, aerospace, paper, food and beverage, and steel and aluminum production.<br /><br />Kasra made many good contacts during the conference and perhaps opened the eyes of his new friends to some of the technical and intellectual life that stands just beyond their usual reach&mdash;not to mention some potential sales opportunities.<br /><br />Europeans often note that their North American colleagues come from a &ldquo;young&rdquo; culture on the far side of the Atlantic. 
<a href="">A bit of Persian history </a>as described on the KDI website helps to put real antiquity into perspective!<br /><br />Conard Holton<br /><br /><a href="Vision Systems Design"></a>, interactive atlas about global automationnoemail@noemail.orgConard HoltonIf you manufacture automation equipment, including machine vision systems and robots, and you&rsquo;re wondering where in the world to look for commercial growth opportunities, then you should review the Automation Atlas. <br /><br />The Atlas shows the relative degree of automation in a country by showing the estimated number of robots per employees in processing industries. For more information and to use the Atlas, <a href="">click here</a>.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 300px; height: 214px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5470455838759448242" /></a><br /><br />The Atlas was commissioned by the <a href="">AUTOMATICA trade fair </a>(held at Messe Munich, 7-11 June) and created by the statistical department of IFR - International Federation of Robotics, which is sponsoring the co-located ROBOTIK conference. The very interesting <a href="">conference program </a>is now available on the IFR website.<br /><br />The IFR says only one-third of companies use automation technologies such as industrial robots or process-integrated quality control. For example, according to the Automation Atlas, countries in Eastern Europe employ relatively little automation technology--fewer than 50 industrial robots per 10,000 employees in the processing industry. The robot figure is only between 100 and 200 in Slovenia.<br /><br />And globally there are clearly opportunities for growth in the pharmaceutical, cosmetics, and medical equipment industries, where the number of industrial robots in use is estimated to be fewer than 50 per 10,000 employees. 
In contrast, there are an estimated 400 to 700 robots for the same number of employees in the automobile industry.<br /><br />Conard Holton,<br /><a href="">Vision Systems Design</a>, the Bosphorus Express--Istanbul to Munichnoemail@noemail.orgConard HoltonFor the 150 people attending the <a href="">European Machine Vision Association Business Conference</a> in Istanbul last week, the meeting began as a fascinating visit to a beautiful city with a rich history, but not one usually on the machine-vision meeting circuit. <br /><br />As the presentations, <a href="">market reports</a>, networking, and boat cruise passed, the specter of &ldquo;The Cloud&rdquo; from the Icelandic volcano began to dominate everyone&rsquo;s thinking. European airspace was shutting down as we took a scenic cruise up the Bosphorus past historic Dolmabahçe Palace:<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 240px;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5463052021545246114" /></a><br /><br />By the last day of the conference--Saturday, April 17--it was clear that all plans to fly home were in jeopardy. I was able to fly out on Sunday because the flight was direct to New York&rsquo;s JFK and we could skirt the southern edge of Europe.<br /><br />However, my colleague and <a href="">Vision Systems Design</a> sales rep, <strong>Johann Bylek</strong>, had a different adventure on his way home to Munich.<br /><br />Here is his report:<br /><br /><strong>An unexpected adventure trip from Istanbul to Munich</strong><br /><br />After the EMVA conference in Istanbul, most of the European attendees were not able to fly home because of the Icelandic volcanic ash cloud. Most European airports were closed and all flights cancelled. As a result, most people were stuck in Istanbul.<br /><br />It was nearly impossible to connect with any airline since all telephone lines were overloaded. 
Rental cars and trains were sold out across Europe, and thousands of passengers were hanging around the airports.<br /><br />A group of attendees--with special thanks to the &ldquo;chief coordinator&rdquo; Dr. Horst G. Heinol-Heikkinen (CEO, <a href="">Asentics</a>)--began discussing other possibilities to get home.<br /><br />After many false leads, it was possible to find and hire a Bulgarian bus to drive to Istanbul and pick up a group of 34 people. These passengers would then be driven over 2000 km (about 1250 miles) to Munich. And the cost for the bus and two drivers? About €290 ($385) per passenger.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 237px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5463063691395108578" /></a><br /><br />Starting at 7:00 pm Sunday evening in Istanbul, we reached the Bulgarian border at 10:00 pm. We had to use a side road to avoid the highway customs station, where about 200 buses were waiting for immigration.<br /><br />Pushing on to the Serbian border, we had to wait in line for three hours because six other buses were ahead of us--everybody got a Serbian stamp in their passport. One of our group, Manfred Schaffrath from Profactor, was picked out for a detailed luggage inspection, perhaps a search for cigarettes or drugs. 
We wondered if perhaps he was suspected because he is Austrian!<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 239px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5463063893316245826" /></a><br /><br />After driving through the night and half of the next day, the bus passed Belgrade at 3:00 pm on Monday, then continued through Hungary and Austria to Munich.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 220px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5463064055522166306" /></a><br /><br />After a 37-hour bus ride through five different countries with different languages and currencies, the group arrived in Munich on Tuesday morning at 8:00 am. Everybody was tired but happy to be back in Germany.<br /><br />Images courtesy of Manfred Schaffrath, <a href="">Profactor</a>, at trade showsnoemail@noemail.orgConard HoltonPredicting the market outlook for machine vision products can seem akin to interpreting the patterns of tea leaves or Tarot cards, or even practicing <a href="">myomancy </a>&ndash; studying the movements of mice to foretell the future. However, those attending and exhibiting at this spring&rsquo;s spate of machine vision and image processing trade shows may practice a modern version of myomancy to get a sense of market momentum.<br /><br />This week, for example, the <a href="">SPIE Defense, Security, and Sensing </a>show held in Orlando, FL, will give attendees an impression of the state of the markets for imaging components and applications, especially those used in <a href="">infrared applications</a>. 
A strong technical conference accompanies the show.<br /><br />A month later, in Boston, <a href="">The Vision Show</a>, May 25-27, will give exhibitors and attendees alike an idea of the health of the machine vision industry in North America, particularly the health of component makers.<br /><br /><a href="">Automatica</a>, held in Munich, Germany, June 8-11, will reveal similar prospects for components and systems in Europe, especially as the show includes a strong robotics exhibition and the co-located technical symposium, <a href="">ISR/Robotik</a>. The show also takes place at the same time and exhibition center as <a href="">Intersolar 2010</a>, which will attract a vast audience involved in solar energy products and services--a <a href="">fast-growing area for machine-vision </a>components and systems.<br /><br />Myomancy, anyone?, fool for milknoemail@noemail.orgConard HoltonWhen I first read about automated cow-milking machines that use machine vision, I thought it was amusing. Last year, <a href="">LMI Technologies </a>was working with GEA Farm Technologies to <a href="">adapt its 3-D time-of-flight imager to the task </a>of producing happier cows and higher yields.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 314px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5455172389527147282" /></a><br /><br />Now I find that robot maker <a href="">Fanuc Robotics </a>has taken the concept of automated milking to an advanced stage with a herd-milking system that can be seen in this video. 
Machine vision just keeps getting more interesting.<br /><br /><object width="660" height="405"><param name="movie" value=""></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="660" height="405"></embed></object>, a bit better all the timenoemail@noemail.orgConard HoltonIt seems that the market for machine vision products and systems is improving. <a href="">IMS Research </a>in the UK says the recovery in the world machine vision market is gathering pace, as shown by its latest quarterly report consolidating revenue data from major suppliers.<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 271px; height: 308px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5454805491743716322" /></a><br /><br />John Morse, the managing analyst for the tracking report, says, &ldquo;The market appears to be recovering faster than previously forecast.&rdquo; The North America market grew the most between the third and fourth quarters of 2009, but there was also good growth in both Europe/Middle East/Africa and Asia Pacific. &ldquo;Our data show that the low point was Q1 2009 and revenues have grown each subsequent quarter. If this trend continues, the market will be back to its 2008 levels by 2011."<br /><br />This corresponds to discussions I've had with system integrators and vendors. Blogger B Grey at <a href="">Machine Vision 4 Users </a>even notes that delivery times on lenses for machine vision (now getting longer) could be an indicator of recovery. 
<br /><br />What form of tea-leaf-reading works for you?<br /><br />Here are some more revealing charts that John Morse sent to me:<br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 186px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5454887961846145762" /></a><br /><br /><a href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 195px;" src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5454888070725799490" /></a>, in Imagingnoemail@noemail.orgConard HoltonIf you'd like to know what's going on with the VC crowd and new ideas in imaging, you could attend the <a href=""><strong>MIT Imaging Ventures </strong></a>class on March 30, 2010.<br /><br />If you missed it, here is the panel of entrepreneurs and technologists--a list well worth checking out:<br />&bull; Kenny Kubala, <a href="">FiveFocal </a>~ Advanced imaging and optics;<br />&bull; Rob Rowe, <a href="">Lumidigm </a>~ Biometric fingerprint systems;<br />&bull; Mark Holzbach, <a href="">Zebra Imaging </a>~ Holographic products;<br />&bull; Kari Pulli, <a href="">Nokia Research Imaging </a>~ Imaging on mobiles<br /><br />Follow <em><a href="">Vision Systems Design </a></em>on:<br /><a href="">Twitter</a><br /><a href="">Facebook</a><br /><a href="">LinkedIn</a>, could be a big deal for machine visionnoemail@noemail.orgConard HoltonSimpler cameras with embedded intelligence sound like a good idea. In fact, many vendors of <a href="">smart cameras for machine vision</a> are already heading in this direction, adding FPGAs, DSPs, and CPUs to their products so that their customers can build ever-more sophisticated systems without some of the software development needed for custom applications.<br /><br />But wait! It seems that a group of 10-year-old kids is working on the same idea. 
Actually, it&rsquo;s not quite the same idea, since the kids are performing this task using a simple camera kit called BigShot (<a href=""></a>). The creator of BigShot is Shree Nayar, chairman of Columbia University&rsquo;s computer-science department and director of the Computer Vision Laboratory.<br /><br />BigShot is a build-it-yourself camera. It comes in a kit with fewer than 20 parts that snap and screw together easily. When it&rsquo;s finished, users can peer through the transparent back and, with the help of labels preprinted on the plastic, show curious friends how the camera works. The labels point out the microprocessor, the memory chip, and other features that let this homemade device digitally capture, store, and reproduce images.<br /><br /><embed src="" bgcolor="#99cc33" flashVars="playerId=64650069001&viewerSecureGatewayURL=" base="" name="flashObj" width="310" height="250" seamlesstabbing="false" type="application/x-shockwave-flash" swLiveConnect="true" pluginspage=""></embed><br /><br />BigShot takes normal, panoramic, and even three-dimensional pictures. But the real point of the camera isn&rsquo;t the photos. It&rsquo;s to use the camera as an excuse to expose the kids to as many science and engineering concepts as possible.<br /><br />Nayar worked with a group of contractors to flesh out his initial design and build the first set of working prototypes. He also worked with a group of undergraduate and graduate students at Columbia to develop the online educational materials, design the BigShot website, and conduct the field tests.<br /><br />So far there have been test sites in New York City; Bengaluru, India; and Vung Tau, Vietnam, where the camera has served as a means for children of very different social and economic backgrounds to communicate and express themselves.<br /><br />What can vendors and integrators of machine-vision products learn from such an undertaking? 
One lesson, perhaps, is that it is critical to educate young people in science and engineering and encourage some to follow these career paths.<br /><br />Another is that simplicity and transparency help make technology a more useful tool--whether in education, manufacturing, security, biomedical research, or human relations.<br /><br />In ways we may not yet recognize, the future success of such machine vision and image processing applications is already being secured by the interest, enthusiasm, and energy of 10-year-olds fiddling with do-it-yourself cameras, evolutionnoemail@noemail.orgConard Holton<span style="font-family:arial;">This blog and our redesigned <em>Vision Systems Design</em> web site (<a href=""></a>) are part of our response to rapid changes in publishing--both print and digital. With the New Year we have begun collaborating with three related technology publications--<em>Laser Focus World</em>, <em>Industrial Laser Solutions</em>, and <em>BioOptics World</em>--to provide a richer user experience, more content, and a broader reach.</span><br /><span style="font-family:arial;"><br /></span><span style="font-family:arial;">We'll continue providing our VSD audience with relevant machine-vision news, system design articles, tutorials, videos, webcasts, and white papers within an umbrella site called OptoIQ: Gateway to the application of light.<br /><br />As you'll see, the VSD web site has Topic Centers that focus on applications such as factory automation, biomedical research, food packaging, solar cell manufacturing, traffic monitoring, and vision-guided robotics. We also have product Topic Centers that focus on cameras, machine-vision software, frame grabbers, lighting, and lenses.</span><br /><span style="font-family:arial;"></span><br /><span style="font-family:arial;">High-speed and infrared imaging each command their own Topic Centers. 
A dedicated Product Center delivers all the most recent products along with our Buyers Guide and Industrial Camera Directory.<br /><br />Information delivery media may continue to evolve, but we'll make sure that the essential quality of unique editorial content from <em>Vision Systems Design</em> remains unchanged.<br /><br /><br /></span><span style="font-family:arial;"></span>

