The system, dubbed Dense Object Nets (DON), looks at objects as collections of points that serve as visual roadmaps of sorts.
This approach lets robots understand and manipulate objects, and allows them to pick out a particular object from a clutter of similar ones – a valuable skill for the kinds of machines that companies such as Amazon and Walmart use in their warehouses, the researchers said.
“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said Lucas Manuelli, a doctoral student at CSAIL.
“For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side,” Manuelli added.
The DON system essentially creates a collection of coordinates on a given object, which serve as a kind of visual roadmap, to give the robot a better understanding of what it needs to grasp, and where. The system is “self-supervised” and doesn’t require any human annotations.
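The core idea behind such point-based representations is that each pixel on an object is mapped to a descriptor vector, and the same physical point maps to a similar descriptor even when the object is moved or rotated. A minimal sketch of how a robot might then relocate a chosen point (say, a mug's handle) in a new image is a nearest-neighbor search in descriptor space. This is an illustrative toy example, not the authors' implementation; the array shapes and the `match_descriptor` helper are assumptions for demonstration.

```python
import numpy as np

def match_descriptor(descriptor_image, reference_descriptor):
    """Find the pixel whose descriptor is closest (in Euclidean
    distance) to a reference descriptor, e.g. one recorded earlier
    at a mug's handle. descriptor_image has shape (H, W, D)."""
    h, w, d = descriptor_image.shape
    flat = descriptor_image.reshape(-1, d)
    dists = np.linalg.norm(flat - reference_descriptor, axis=1)
    best = int(np.argmin(dists))
    return np.unravel_index(best, (h, w))  # (row, col) of best match

# Toy example: a 4x4 "descriptor image" with 3-D descriptors.
# We plant the reference descriptor at pixel (2, 1) and recover it.
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 4, 3))
ref = img[2, 1].copy()
print(match_descriptor(img, ref))  # -> (2, 1)
```

In the real system the descriptor image would come from a trained neural network, so the matched pixel corresponds to the same physical point on the object across viewpoints.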
In one set of tests performed on a soft caterpillar toy, a Kuka robotic arm powered by DON could grasp the toy’s right ear from a range of different configurations.
This showed that, among other things, the system can distinguish left from right on symmetrical objects.
“In factories robots often need complex part feeders to work reliably,” Manuelli said. “But a system like this that can understand objects’ orientations could just take a picture and be able to grasp and adjust the object accordingly.”
The team will present their paper on the system at the upcoming Conference on Robot Learning in Zurich, Switzerland.