Anki’s miniature Vector offers a new type of home robot
Consumer robotics and artificial intelligence (AI) company Anki (San Francisco, CA, USA; https://www.anki.com/) has announced the development of Vector, a new miniature robot designed to provide a different type of experience than other, better-known home robots.
Standing just three inches high, Vector is a home robot “with personality” that is fully autonomous, cloud-connected, and “always on,” according to Anki, the company that developed the Cozmo mobile robot (the #1 best-selling toy on Amazon in 2017 in the U.S., U.K., and France), as well as the Overdrive intelligent racing system.
The robot is designed to fit naturally into a person’s daily life with minimal maintenance. It does so through the technology Anki has equipped it with—including sensors and artificial intelligence—that enables it to see its surroundings, recognize people and objects, hear what is happening, find and connect to its charger, and avoid obstacles while navigating.
Vector’s “brain” is built on a Qualcomm (San Diego, CA, USA; www.qualcomm.com) Snapdragon processor (1.2 GHz), which enables on-device AI capabilities such as machine learning algorithms that help it detect and avoid objects. It features a four-microphone array, a single-point Time-of-Flight near-infrared (NIR) laser, an HD camera with 120° field of view, 802.11n Wi-Fi, Bluetooth, and four cliff sensors, which are infrared emitters installed under the corners of the robot that prevent it from falling off edges. When low on energy, Vector can locate and roll back to its charger to recharge its battery.
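The behaviors described above—stopping at edges and returning to the charger when power runs low—can be sketched as a simple priority-based control loop. This is an illustrative sketch only; the sensor representation, threshold, and action names are hypothetical and not Anki's actual firmware:

```python
LOW_BATTERY = 0.15  # hypothetical threshold: 15% charge remaining


def next_action(cliff_sensors, battery_level):
    """Decide the robot's next action from sensor readings.

    cliff_sensors: four booleans, True if the IR emitter under that
    corner no longer sees the floor (i.e., an edge is detected).
    battery_level: remaining charge in [0.0, 1.0].
    """
    if any(cliff_sensors):
        return "stop_and_back_away"   # safety first: avoid driving off an edge
    if battery_level < LOW_BATTERY:
        return "navigate_to_charger"  # roll back to the dock to recharge
    return "explore"                  # default autonomous behavior


print(next_action([False, False, False, False], 0.80))  # → explore
print(next_action([True, False, False, False], 0.80))   # → stop_and_back_away
print(next_action([False, False, False, False], 0.10))  # → navigate_to_charger
```

Edge avoidance takes priority over recharging here, since a cliff reading requires an immediate response regardless of battery state.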
“For more than five years, Anki has brought together a team of experts across various fields to create the world’s first affordable, character-rich robot capable of surprising and delighting humans,” said Boris Sofman, CEO and co-founder at Anki.
Vector is also equipped with a high-resolution color display used to show Vector’s nearly 1,000 animations, which are designed to give the robot personality as it reacts to its environment.
Additionally, Vector has a capacitive touch sensor built on its back that enables it to respond to human touch, as well as the ability to communicate through a unique sound palette.
Useful or fun features touted by Anki include the robot’s ability to dance when it hears music, answer questions via a custom text-to-speech feature, take pictures, and even play games.
Vector will retail for $249.99 with one base charger and one interactive accessory cube. Anki notes that while Vector is an always-on, autonomous robot, it does require a smart device running the companion app, available on iOS and Android, for initial setup.
Researchers deploy hyperspectral camera on underwater robot for Great Barrier Reef monitoring
Researchers from the Australian Institute of Marine Science (AIMS; https://www.aims.gov.au) have deployed an underwater robot carrying a hyperspectral camera in trials aimed at enabling greater monitoring of the Great Barrier Reef.
The AIMS technology development and engineering team spent two weeks at sea testing a remotely operated vehicle (ROV) called the Blue ROV2, which has semi-autonomous navigation capabilities. AIMS Technology Transformation leader Melanie Olsen (pictured) and her team put a hyperspectral camera onto the ROV, which features a dive capability of 100 m.
Deployed onto the ROV was a Nano-Hyperspec hyperspectral camera from Headwall Photonics (Bolton, MA, USA; www.headwallphotonics.com), which operates in the 400 to 1000 nm spectral range. The camera features a 640 x 480 CMOS image sensor with a 7.4 µm pixel size that achieves frame rates up to 350 fps. The Nano-Hyperspec also offers onboard data processing, 640 spatial bands and 270 spectral bands, a GigE Vision interface, a 17 mm lens, and a storage capacity of 480 GB.
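The specifications above imply a substantial data rate, which is why onboard storage matters for an untethered survey platform. A rough back-of-the-envelope calculation, assuming each sample is stored as a 16-bit value (an assumption; the article does not state the bit depth):

```python
SPATIAL = 640          # spatial pixels per scan line
SPECTRAL = 270         # spectral bands per pixel
BYTES_PER_SAMPLE = 2   # assumption: 16-bit storage per sample
FPS = 350              # maximum line rate from the spec above
STORAGE_GB = 480       # onboard storage capacity

# One hyperspectral scan line: every spatial pixel carries a full spectrum
bytes_per_line = SPATIAL * SPECTRAL * BYTES_PER_SAMPLE
mb_per_second = bytes_per_line * FPS / 1e6
hours_to_fill = STORAGE_GB * 1e9 / (bytes_per_line * FPS) / 3600

print(f"{bytes_per_line / 1e6:.2f} MB per scan line")  # → 0.35 MB
print(f"{mb_per_second:.0f} MB/s at full frame rate")  # → 121 MB/s
print(f"{hours_to_fill:.1f} h to fill storage")        # → 1.1 h
```

In practice surveys run well below the maximum line rate, so recording time stretches accordingly; the point is simply that a 270-band data cube is orders of magnitude larger than conventional RGB imagery of the same scene.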
With the ability to capture more than 270 bands of color information, the hyperspectral camera enables the ROV to survey the reef in more detail, including mapping the ocean floor, measuring water depth, and identifying bleached corals, according to AIMS, which is working in partnership with the Queensland University of Technology to leverage Australian expertise in shallow-water marine robotics.
“This is the first time a hyperspectral camera has been trialed underwater on our ROVs,” Olsen said.
In addition to the Blue ROV2, the team also deployed the hyperspectral camera on a drone. This was the first time that the team deployed ROVs and drones simultaneously during night-time missions.
“We did some revolutionary stuff during this trial; we also flew the 900 g hyperspectral camera under our large aerial drone off our research vessel (RV) Cape Ferguson, over a coral transect on John Brewer Reef, which is one of our long-term monitoring sites,” said Olsen.
Technologies such as robots and hyperspectral cameras, according to Olsen, help the team stay competitive and improve their research endeavors.
“We want to remain globally competitive and so we are boosting our technological capabilities. Robotics helps us to monitor larger and new sections of the reef in areas that would otherwise be dangerous to divers.”
She added, “These robots will soon be helping to free up our marine science researchers to do the important work of looking at how to help support these reefs.”
Additionally, the robots enable the monitoring of aspects of coral reefs the team has not been able to study previously, while also keeping human divers out of harm’s way from crocodiles, marine stingers, and sharks.
During the two-week trial, these technologies allowed the team to perform missions at night and to go deeper than before, according to Olsen.
Related: The October issue featured an article on the use of underwater robots for the monitoring and protection of the Great Barrier Reef (http://bit.ly/VSD-SFA). In that application, local researchers used an underwater vision-guided robot called the RangerBot to identify and destroy crown-of-thorns starfish, which destroy coral in the Great Barrier Reef.
Machine learning software and vision-guided robot team up to find Waldo in under five seconds
Creative agency Redpepper (Nashville, TN, USA; redpepper.land) has developed “There’s Waldo,” which combines a robotic arm, an embedded camera, and machine learning software to find “Waldo” in under five seconds.
The robot arm is a uArm Swift Pro from UFACTORY (Shenzhen, China; www.ufactory.cc) controlled by a Raspberry Pi single-board computer (https://www.raspberrypi.org) using the PYARM Python library. Once initialized, the robot is instructed to extend its arm and capture an image of the canvas below using a Vision Camera Kit from UFACTORY, which is based on an OpenMV (Atlanta, GA, USA; https://openmv.io) Cam M7 open source embedded camera.
This camera features an STM32F765VI ARM Cortex M7 processor from STMicroelectronics (Geneva, Switzerland; https://www.st.com) running at 216 MHz and is based on the 640 x 480 OV7725 CMOS image sensor from OmniVision Technologies (Santa Clara, CA, USA; www.ovt.com), which acquires images at a speed of 60 fps.
Once the camera captures an image, OpenCV is used to find possible Waldo faces in the photo. The candidate faces are then sent for analysis to Google’s AutoML machine learning model service, which compares each one against the trained Waldo model.
Available since January, Google’s Cloud AutoML is a suite of machine learning products that enables developers with limited machine learning expertise to train high-quality models specific to their business needs, by leveraging Google’s transfer learning and Neural Architecture Search technology, according to the company.
Additionally, Cloud AutoML provides a graphical user interface for training, evaluating, improving, and deploying models based on a user’s own data.
If a match with a confidence score of 95% (0.95) or higher is found, the robot arm is instructed to extend to the coordinates of the matching face and point at it using an attached silicone hand.
If there are multiple Waldos in one photo, according to Redpepper, it will point to each one.
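The filtering step described above—keeping only candidates at or above the 0.95 confidence threshold and pointing to every one—can be sketched as follows. The candidate data structure here is hypothetical; Redpepper has not published its code:

```python
CONFIDENCE_THRESHOLD = 0.95  # matches at or above this score trigger pointing


def waldos_to_point_at(candidates):
    """Filter model results down to the faces the arm should point at.

    candidates: list of (x, y, score) tuples, where (x, y) are canvas
    coordinates of a detected face and score is the model's confidence
    that the face is Waldo. This structure is illustrative only.
    """
    return [(x, y) for x, y, score in candidates
            if score >= CONFIDENCE_THRESHOLD]


# Example: two confident Waldo matches among four candidate faces
faces = [(120, 80, 0.98), (300, 45, 0.40), (410, 210, 0.96), (95, 330, 0.71)]
print(waldos_to_point_at(faces))  # → [(120, 80), (410, 210)]
```

Because the function returns every qualifying coordinate rather than just the best match, the arm naturally handles images containing more than one Waldo.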
“While only a prototype, the fastest There’s Waldo has pointed out a match has been 4.45 seconds, which is better than most 5-year-olds,” said the company in its YouTube video.
View a video of the robot in action: http://bit.ly/VSD-WAL.
View more information on Google’s Cloud AutoML here: http://bit.ly/VSD-AML.
Machine vision system inspects and verifies correct assembly of butterfly valves
When a Tier 1 supplier of automotive exhaust system components turned toward automation for quality inspection, it reached out to Neff Group Distributors (Indianapolis, IN, USA; https://neffautomation.com) to develop a machine vision system that verifies not only the integrity of a specific part, but also its proper assembly prior to shipment.
Previously, the supplier did not have a process in place to detect defects on butterfly valves, and as a result, an end customer reported that the parts were flawed upon delivery. At one end of the butterfly valve is the point where the closure plate, or vane, pivots. That pivot point is critical to the function of the valve. A misshapen, broken, or otherwise damaged pivot can hinder the butterfly valve’s operation.
Pictured: a valve with the right side of the pivot pin broken.
To automate the quality control process, Neff Group Distributors designed a compact machine vision system for part inspection that could be installed in an environment with a limited amount of space.
For the vision aspect of the system, the company chose a Cognex (Natick, MA, USA; www.cognex.com) In-Sight 7200 smart camera, which features an 800 x 600 CMOS image sensor that acquires images at a speed of 102 fps and is equipped with the company’s PatMax pattern detection algorithm.
To illuminate the parts in question, the company chose two LM45 lights from Smart Vision Lights (Muskegon, MI, USA; www.smartvisionlights.com), which offer four LEDs and feature MultiDrive and OverDrive technologies. A MultiDrive controller allows the light to operate continuously or in OverDrive high-pulse strobe mode.
At the inspection station, the butterfly valve is manually placed into a fixture, and prior to the final assembly/weld application, the smart camera and lights are triggered to inspect its internal integrity.
The camera and lights sit just above the side of the valve body, looking directly at the pivot point. The closure plate, according to Neff Group Distributors, is placed in the completely open position so as not to obscure the pivot point. The two LED lights are placed on either side of the closure plate because the plate bisects the image.
The machine vision system runs two Cognex PatMax algorithms, one for each “half-moon” side of the part. Neff Group Distributors taught the pattern on a full, intact part and set the match percentage high enough to fail any defective parts. Parts that pass inspection are welded and sent on to the next phase. Failed parts, however, are manually removed and placed into a “failed” containment system to be reworked.
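The pass/fail decision described above can be sketched as a simple two-score check. The threshold value and score format here are illustrative, not PatMax's actual output; in the real system each score would come from one of the two pattern tools:

```python
MATCH_THRESHOLD = 0.90  # hypothetical acceptance score, set high enough
                        # that a damaged pivot fails the pattern match


def inspect_valve(left_score, right_score):
    """Combine the two half-moon pattern scores into a pass/fail result.

    left_score / right_score: pattern-match scores in [0.0, 1.0] for the
    left and right sides of the pivot. Both sides must match the trained
    pattern for the part to proceed to welding.
    """
    if left_score >= MATCH_THRESHOLD and right_score >= MATCH_THRESHOLD:
        return "pass"  # part continues to the final assembly/weld step
    return "fail"      # part is pulled and routed to the rework container


print(inspect_valve(0.97, 0.95))  # intact part → pass
print(inspect_valve(0.96, 0.42))  # damaged right side of the pivot → fail
```

Requiring both sides to pass independently is what lets a single inspection station catch a defect on either half of the pivot, even though the closure plate splits the scene into two regions.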
As a result of Neff’s machine vision system, defects in the valves are now automatically identified and correct assembly is verified before parts are welded and shipped to customers.