Is this the home robot people have been waiting for?
Making its debut at the Consumer Electronics Show (CES) 2017 in Las Vegas was Kuri, a "friendly" vision-guided robot designed for the home.
While many robots were introduced at CES, Kuri is an interesting one. First, this robot's design avoids the uncanny valley, as it is much closer to R2-D2 than one of those scary, albeit wildly impressive, DARPA humanoid robots. Mayfield Robotics (Redwood City, CA, USA; www.mayfieldrobotics.com), a Bosch (Stuttgart, Germany; www.bosch.com) startup company, designed the robot to be "friendly" and, in its words, "built to connect with you and help bring technology to life." The company states that the robot can understand context and surroundings, recognize specific people, and respond to questions with facial expressions, head movements, and unique "lovable" sounds.
In other words, the company designed the robot to have a personality, one that provides a "spark of life to any home."
Chris Matthews, Mayfield Robotics' vice president of marketing, told Digital Trends (Portland, OR, USA; www.digitaltrends.com) that the company set out to build a robot that feels less like a piece of technology and more like a "companion."
On top of that, the robot is moderately priced at $699, which is less than the iRobot (Bedford, MA, USA; www.irobot.com) Roomba 980 robot vacuum.
"For generations, people have dreamed of having their own personal robot in the home, and we've been focused on making that dream more of a reality," said Sarah Osentoski, COO and co-founder of Mayfield Robotics. "We're proud to introduce Kuri to the world and can't wait to see how he touches the lives of everyone, ranging from parents and children to early technology adopters."
The 20 in. tall, 12 in. wide robot features asynchronous motors, a capacitive touch sensor, microphones, and speakers. It also has an HD camera and sensors that enable it to map its surroundings and detect edges and objects for orientation and navigation. Additionally, Kuri has a built-in LED to indicate its current mood. When it is time to charge, the robot navigates back to its charging dock on its own.
"While insanely cute on the outside, Kuri contains serious technologies on the inside that represent the latest developments in smartphones, gaming, and robotics," said Kaijen Hsiao, CTO and co-founder of Mayfield Robotics. "We hope Kuri introduces people - especially kids - to the power of technology and can inspire a new world of possibilities for their future.
Is Kuri the home robot that we've all been waiting for? It very well could be. Perhaps it is the logical next progression in the development of the "smart home," as products like Amazon Echo and Google Home gain popularity. Time will tell, though, as Mayfield Robotics plans to ship the first robots in time for the 2017 holiday season.
World's largest map of visible universe released
Images captured by the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS1; Haleakala, HI, USA; neo.jpl.nasa.gov/programs/panstarrs.html) have been compiled to form the world's largest map of the visible universe.
The Pan-STARRS project at the University of Hawaii Institute for Astronomy (www.ifa.hawaii.edu) is releasing the images to the public via the Space Telescope Science Institute (STScI; Baltimore, MD, USA; www.stsci.edu). Pan-STARRS consists of astronomical cameras, telescopes, and a computing facility that continually surveys the sky for moving objects and provides accurate astrometry and photometry of already-detected objects.
Pan-STARRS1 (PS1), one of the two telescopes located on site in Haleakala, Hawaii, captured half a million exposures, each about 45 seconds in length, over a period of four years. The shape of the image comes from making a map of the celestial sphere, like a map of Earth, but leaving out the southern quarter. If printed at full resolution, the image would be 1.5 miles long, and one would have to get close and squint to see details, according to the University of Hawaii Institute for Astronomy.
PS1 has a 3° field of view and is equipped with a 1.4-gigapixel mosaic focal plane CCD camera. The focal plane holds 60 separately mounted, close-packed CCDs arranged in an 8 x 8 array. Each CCD device, called an Orthogonal Transfer Array, has 4800 x 4800 pixels, divided into 64 cells of 600 x 600 pixels each. This gigapixel camera (or "GPC") saw first light on August 22, 2007, imaging the Andromeda Galaxy. Each image captured by the camera requires about 2 GBytes of storage, and exposure times are 30 to 60 seconds, with an additional minute or so for computer processing.
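The camera geometry above is easy to sanity-check. Here is a quick back-of-the-envelope script; the 2-bytes-per-pixel figure is our assumption, while all other numbers come from the specifications quoted above.

# Back-of-the-envelope check of the Pan-STARRS1 gigapixel camera numbers.
NUM_CCDS = 60              # Orthogonal Transfer Arrays on the focal plane
PIXELS_PER_SIDE = 4800     # each OTA is 4800 x 4800 pixels
CELLS = 64                 # 64 cells of 600 x 600 pixels each

pixels_per_ccd = PIXELS_PER_SIDE ** 2              # 23.04 Mpixels
assert CELLS * 600 * 600 == pixels_per_ccd         # cell layout is consistent
total_pixels = NUM_CCDS * pixels_per_ccd           # ~1.38 gigapixels, i.e. "1.4 Gpixel"

# Assuming ~2 bytes per pixel (16-bit raw), a full frame comes to roughly
# 2.8 GB, in the neighborhood of the quoted ~2 GBytes per image.
bytes_per_frame = total_pixels * 2

print(f"Total pixels: {total_pixels / 1e9:.2f} Gpixel")
print(f"Approx. raw frame size: {bytes_per_frame / 1e9:.1f} GB")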
"The Pan-STARRS1 Surveys allow anyone to access millions of images and use the database and catalogs containing precision measurements of billions of stars and galaxies," said Dr. Ken Chambers, Director of the Pan-STARRS Observatories. "Pan-STARRS has made discoveries from Near Earth Objects and Kuiper Belt Objects in the Solar System to lonely planets between the stars; it has mapped the dust in three dimensions in our galaxy and found new streams of stars; and it has found new kinds of exploding stars and distant quasars in the early universe."
He added, "With this release we anticipate that scientists-as well as students and even casual users-around the world will make many new discoveries about the universe from the wealth of data collected by Pan-STARRS."
The four years of imaging yielded 3 billion separate sources, including stars, galaxies, and various other objects. In total, the collection contains 2.2 petabytes of data, which is equivalent to one billion selfies, or one hundred times the total content of Wikipedia, according to the university.
This research was done as part of the PS1 Science Consortium, a collaboration of 10 research institutions in four countries with support from NASA and the National Science Foundation (NSF). Consortium observations for the sky survey, mapping everything visible from Hawaii, were completed in April 2014, and the data is now being released publicly. The rollout is being done in two stages: this first release, "Static Sky," gives the average value for the position, brightness, and colors of each object across all observations. The second rollout, scheduled for 2017, will provide a catalog that gives information and images for each individual epoch.
High-speed cameras capture footage of largest autonomous drone swarm to date
A recent 60 Minutes special detailed the struggles a CBS News (New York, NY, USA; www.cbsnews.com) team encountered when filming a swarm of 100 Perdix drones during U.S. Department of Defense testing. To capture footage of the small, fast-moving objects, the video team turned to high-speed cameras after days of trial and error.
Developed by MIT's Lincoln Laboratory (Lexington, MA, USA; www.ll.mit.edu) for the Department of Defense, Perdix drones (pictured) are small drones designed to operate as a team, or "swarm." The autonomous drones are designed for military operations and represent what the Pentagon's Dr. Will Roper called (http://bit.ly/VSD-DOD) a "glimpse into the future of combat." To share this story with the world, however, the 60 Minutes team first had to figure out how to get the drones, which travel upwards of 40 to 50 miles per hour, on film.
When cameraman Ron Dean went to the Fort Devens Army base in Massachusetts for an exploratory shoot to see if he could film a few Perdix drones in flight, he realized that it might not be possible. The idea then struck him to see if a golf cameraman, whose job is to track a small white ball flying across the sky, could assist with the challenge. This is where Rudy Niedermeyer, an experienced golf cameraman, stepped in.
At first, the filming proved difficult, as Niedermeyer could not seem to capture footage of the drones, which are self-directed, meaning their flight trajectories are unpredictable.
"No way," Niedermeyer said on 60 Minutes. "I'm like, I can't believe I can't do this."
Much progress was made, however, when a Sony Broadcast and Business Solutions (New York, NY, USA; pro.sony.com) HDC-4300 4K camera was brought in. The camera is attached by cable to a server, which enables footage to be slowed down. The HDC-4300 features a 3-chip 2/3 in. type CMOS image sensor and reaches speeds of up to 480 fps. This camera is also used at the Super Bowl, the NBA Finals, the MLB World Series, and the Masters golf tournament. With it, producer Mary Walsh saw steady progress on the shots, but an even faster camera was brought in to capture the high-speed footage.
Joining Niedermeyer was Justin Hall, a cameraman who specializes in live replay for major televised sports events, who used a Phantom (Wayne, NJ, USA; www.phantomhighspeed.com) Flex camera to capture the Perdix drones. The thermoelectrically cooled Phantom Flex features a 4 MPixel CMOS image sensor with global electronic shutter, 10 μm pixels, and 12-bit depth, and can reach speeds of up to 1,455 fps at full resolution. Additionally, the now-discontinued camera features a dynamic range of 54.6 dB, 16 GBytes or 32 GBytes of high-speed internal RAM, and two shooting modes: standard mode for the highest frame rates, and HQ mode for image quality.
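Those frame rates put the filming challenge in perspective. A rough calculation, using the drone speeds and camera frame rates quoted above, shows how far a drone travels between successive frames; this sketch is illustrative only.

# How far does a drone moving ~50 mph travel between frames at each
# camera's top frame rate? (Speeds and fps from the article.)
MPH_TO_MPS = 0.44704
drone_speed = 50 * MPH_TO_MPS          # ~22.4 m/s, upper end of the quoted range

for name, fps in [("Sony HDC-4300", 480), ("Phantom Flex", 1455)]:
    meters_per_frame = drone_speed / fps
    print(f"{name}: {meters_per_frame * 100:.1f} cm of travel per frame at {fps} fps")

# Sony HDC-4300: ~4.7 cm of travel per frame at 480 fps
# Phantom Flex:  ~1.5 cm of travel per frame at 1455 fps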
The resulting footage, which can be seen in the 60 Minutes special, shows the largest autonomous drone swarm to date, launched from three F-18 jet fighters, performing a complicated swarm maneuver for the first time.
Perdix drones are an experimental project of the Special Capabilities Office of the United States Department of Defense, which aims to develop autonomous micro-drones for surveillance. Each drone has two sets of wings, a plastic body containing a lithium battery, a small camera, and a 2.6 in. propeller at the rear. The drone bodies are 3D printed, and the onboard software can be updated as needed.
View the 60 Minutes video: http://bit.ly/VSD-60M.
Computer vision and deep learning technology at the heart of Amazon Go
Amazon (Seattle, WA, USA; www.amazon.com) has unveiled its first convenience store called "Amazon Go," which uses computer vision and deep learning algorithms to enable shoppers to get what they want without having to go through checkout lines.
Amazon Go is currently in private beta testing in Seattle but will reportedly open to the public this year. The shopping experience, according to Amazon, is made possible by the same types of technologies used in self-driving cars: computer vision, sensor fusion, and deep learning.
With "Just Walk Out" technology, users can enter the store with the Amazon Go app, shop for products, and walk out of the store without lines or checkout. The technology automatically detects when products are taken or returned to shelves and keeps track of them in a virtual cart. When the shopping is finished, users leave the store and their Amazon account is charged shortly thereafter.
While Amazon has not provided technical details, the company's patent filings (http://bit.ly/VSD-AMZN) show that the cameras used in Amazon Go may include RGB cameras, depth-sensing cameras, and infrared sensors. The filings also contain details suggesting that the process may not be quite as simple as it sounds.
The filings note that upon detecting someone entering and/or passing through a transition area, various techniques may be used for identification. These include a camera that captures an image that is processed using facial recognition, and "in some implementations, one or more input devices may collect data that is used to identify when the user enters the materials handling facility," the term the filings use for the store.
A user's position may also be monitored as they move about the store, so that as users enter or pass through a transition area, their identity and position are maintained.
The filings also show that, in order for the system to "automatically detect" when items are taken from a shelf, cameras "capture a series of images of a user's hand before the hand crosses a plane into the inventory location and also capture a series of images of the user's hand after it exits the inventory location."
Based on a comparison of the images, it can be determined whether the user picked up an item or placed one back. For example, if software identifies that the user was holding an item before the hand crossed into the inventory location but not when the hand was removed, the item was placed on the shelf.
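The comparison reduces to a simple state check once the image series has been classified. Here is a hedged sketch of that logic; the function and enum names are ours, not from the filings.

# Sketch of the pick-vs-place decision described in the patent filings:
# compare whether the hand holds an item before entering and after
# leaving the inventory location.
from enum import Enum

class HandEvent(Enum):
    PICK = "pick"          # empty hand in, item out
    PLACE = "place"        # item in, empty hand out
    NO_CHANGE = "none"

def classify_hand_event(holding_before: bool, holding_after: bool) -> HandEvent:
    if not holding_before and holding_after:
        return HandEvent.PICK
    if holding_before and not holding_after:
        return HandEvent.PLACE
    return HandEvent.NO_CHANGE

# In practice, holding_before / holding_after would come from a classifier
# run on the image series captured on each side of the shelf plane.
print(classify_hand_event(False, True))   # HandEvent.PICK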
A shopper's past purchase history may also be used to help identify an item when it is picked up, according to the filings:
"For example, if the inventory management system cannot determine if the picked item is a bottle of ketchup or a bottle of mustard, the inventory management system may consider past purchase history and/or what items the user has already picked from other inventory locations. For example, if the user historically has only picked/purchased ketchup, that information may be used to confirm that the user has likely picked ketchup from the inventory location."
The filings also say that in some cases, data from other input devices may be used to help determine the identity of items picked from or placed in inventory locations.
"For example, it is determined that an item is placed into an inventory location, in addition to image analysis, a weight of the item may be determined based on data received from a scale, pressure sensor, load cell, etc., located at the inventory location. The image analysis may be able to reduce the list of potentially matching items down to a small list. The weight of the placed item may be compared to a stored weight for each of the potentially matching items to identify the item that was actually placed in the inventory location.
By combining multiple inputs, a higher confidence score can be generated increasing the probability that the identified item matches the item actually picked from the inventory location and/or placed at the inventory location. In another example, one or more radio frequency identifier ("RFID") readers may collect or detect an RFID tag identifier associated with a RFID tag included in the item."
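The weight check described in that passage amounts to matching a measured weight change against a catalog, after image analysis has narrowed the candidates. A minimal sketch follows; the item weights and the tolerance are illustrative assumptions.

# Sketch of the weight-based confirmation described in the filings.
def identify_by_weight(candidates: list, measured_grams: float,
                       catalog_weights: dict, tolerance: float = 10.0):
    """Return the candidate whose stored weight best matches the scale reading."""
    best, best_err = None, float("inf")
    for item in candidates:
        err = abs(catalog_weights[item] - measured_grams)
        if err < best_err and err <= tolerance:
            best, best_err = item, err
    return best

catalog = {"ketchup 20oz": 680.0, "mustard 14oz": 470.0}
# Image analysis reduced the list to two items; the shelf scale read 683 g.
print(identify_by_weight(["ketchup 20oz", "mustard 14oz"], 683.0, catalog))
# -> 'ketchup 20oz'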
While the patent applications were filed in September 2014, what the company is showing thus far does appear similar to what is described in the filings, noted GeekWire (Seattle, WA, USA; www.geekwire.com).
As of now, Amazon Go is only open to Amazon employees, but customers can sign up via the Amazon site to be notified when the store opens to the public.