  • TossingBot – Unifying Physics and Deep Learning To Learn Tossing

    Researchers from Google, Princeton, Columbia University, and MIT have unified physics and deep learning to develop a tossing robot called TossingBot.

    Grasping has been one of the best-known challenges in robotics over the past couple of decades. Significant progress has been made in this area, and it seemed that robots had learned to grasp objects, pick them up, and place them in a specified location.

    However, these robots rely on fairly complex systems built from multiple modules and manually designed features, and they use inverse kinematics to perform the task. Moreover, they do not perform well across all kinds of objects and scenarios.


    “Throwing is a particularly difficult task as it depends on many factors: from how the object is picked up (i.e., “pre-throw conditions”), to the object’s physical properties like mass, friction, aerodynamics, etc. For example, if you grasp a screwdriver by the handle near the center of mass and throw it, it would land much closer than if you had grasped it from the metal tip, which would swing forward and land much farther away,” says Andy Zeng, a Student Researcher in Robotics at Google.


    In a new study, the researchers integrated simple laws of physics with deep learning toward this objective. They designed an end-to-end system that learns to grasp objects and throw them into target boxes placed outside the robot’s reach.
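The physics half of this combination can be illustrated with ideal projectile motion: under simplifying assumptions (no drag, a fixed release angle), the release speed needed to reach a bin at a given distance follows directly from the ballistic range formula. The sketch below is illustrative only; the function name and defaults are not from the paper.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def release_speed(distance, release_angle_deg=45.0):
    """Release speed for an ideal projectile to land `distance` metres away.

    Uses the ballistic range formula d = v^2 * sin(2*theta) / g, solved
    for v. Drag, spin, and pre-throw conditions are ignored -- these are
    exactly the effects the learned model has to compensate for.
    """
    theta = math.radians(release_angle_deg)
    return math.sqrt(distance * G / math.sin(2 * theta))
```

With this analytical estimate as a baseline, the deep network only has to learn a correction on top of it, rather than the full throwing behavior from scratch.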

    The system is based on a single deep neural network whose architecture lets it learn both grasping and throwing. The network maps visual observations (RGB-D images) directly to control parameters for motion primitives.
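As a rough sketch of that mapping (all shapes, weights, and variable names here are illustrative stand-ins; the actual model is a fully convolutional network), the data flow from observation to control parameters might look like:

```python
import random

random.seed(0)

H, W, C = 8, 8, 4  # hypothetical RGB-D heightmap: height, width, channels
rgbd = [random.random() for _ in range(H * W * C)]   # flattened observation

def linear(weights, x):
    """Tiny stand-in for a learned network: one linear map that
    produces a value per pixel of the input heightmap."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

w_grasp = [[random.gauss(0, 0.01) for _ in range(H * W * C)] for _ in range(H * W)]
w_throw = [[random.gauss(0, 0.01) for _ in range(H * W * C)] for _ in range(H * W)]

grasp_scores = linear(w_grasp, rgbd)     # one grasping score per pixel
throw_residual = linear(w_throw, rgbd)   # per-pixel throwing-velocity adjustment

best = max(range(H * W), key=grasp_scores.__getitem__)  # best grasp pixel
u, v = divmod(best, W)                                  # its (row, col) location
dv = throw_residual[best]    # adjustment applied to the physics-based estimate
```

The key point is that one forward pass yields both a grasp location and the throw parameters conditioned on that grasp, which is what lets the two skills be trained jointly.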


    The model is trained in a self-supervised manner, so the robot improves itself over time. According to the researchers, TossingBot learns quickly and generalizes well to new, unseen scenarios.
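Self-supervision here means the outcome of each throw is itself the training signal: the robot throws, observes where the object actually lands, and regresses its prediction toward that outcome, with no human labels. A toy version of that loop, using an idealized 45-degree ballistic simulator and a single learned scale parameter (both my own simplifications, not the paper's setup), might look like:

```python
import math

G = 9.81  # gravity (m/s^2); all throws use a 45-degree release angle

def landing_distance(speed):
    """Ideal ballistic range at 45 degrees: d = v^2 / g."""
    return speed ** 2 / G

def train(target=1.5, w=0.5, lr=0.05, trials=200):
    """Self-supervised loop: the landing spot observed after each throw
    serves as the label. `w` scales the physics-based speed estimate."""
    for _ in range(trials):
        speed = w * math.sqrt(target * G)          # predicted release speed
        error = landing_distance(speed) - target   # overshoot (+) / undershoot (-)
        # gradient step on the squared landing error with respect to w
        w -= lr * 2 * error * (2 * w * target)
    return w
```

After a couple of hundred simulated trials the scale parameter converges to 1, i.e. the throws land on target, mirroring how TossingBot improves purely from its own trial outcomes.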

    A pre-print of the paper was published on arXiv, and more details can be found in the official blog post.