Machine Learning Enables Robot to Grab Transparent and Shiny Objects
Robot grippers that struggle to pick up transparent objects might become a thing of the past.
Here is something you probably haven't thought about before: how do robots actually see transparent and reflective objects? Trick question — they don't see them properly, which is why they can't grasp kitchen staples such as a shiny knife.
However, roboticists at Carnegie Mellon University have had success with a technique they’ve developed for teaching robots to pick up such objects.
Their new technique doesn't demand fancy sensors, exhaustive training, or human guidance. It relies on one thing only: a color camera.
The CMU scientists built a system that infers shape from color images alone: they trained it to imitate a depth-camera system, so that it can estimate an object's shape well enough to grasp it. To do this, they paired depth-camera images of opaque objects with color images of those same objects.
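The paired-image idea can be sketched in miniature: a "teacher" scores grasps from depth data, and a "student" operating on color data is fit to reproduce the teacher's scores on the same objects. Everything below is illustrative — the linear models, feature dimensions, and least-squares fit are stand-ins, not the researchers' actual grasp networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": a fixed scorer operating on depth features. Its weights are
# arbitrary here; in reality this would be a trained depth-based grasp system.
w_teacher = rng.normal(size=16)            # hypothetical depth feature dim
def teacher_score(depth_feats):
    return depth_feats @ w_teacher

# Paired data: depth and color features of the same opaque objects.
depth_feats = rng.normal(size=(200, 16))
color_feats = rng.normal(size=(200, 32))   # hypothetical color feature dim
# Assume the color view encodes the same geometry the depth view does,
# embedded among other appearance features.
color_feats[:, :16] = depth_feats

# The teacher's grasp scores become the training targets.
targets = teacher_score(depth_feats)

# "Student": a scorer on color features, fit to imitate the teacher —
# a toy stand-in for the transfer-learning step.
w_student, *_ = np.linalg.lstsq(color_feats, targets, rcond=None)

preds = color_feats @ w_student
mse = float(np.mean((preds - targets) ** 2))
```

Once trained, the student needs only color input, so it can score grasps on transparent or shiny objects where a depth camera returns unreliable data.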
David Held, an assistant professor at CMU’s Robotics Institute, said, “We do sometimes miss, but for the most part it did a pretty good job, much better than any previous system for grasping transparent or reflective objects.”
While the system wasn't foolproof, the multimodal transfer learning used to train it was so effective that it grasped transparent and reflective objects almost as well as the depth-camera system grasps opaque ones.