Master of Puppets: Multi-modal Robot Activity Segmentation from Teleoperated Demonstrations

Figure: Overview of the proposed robot activity recognition system.

Abstract

Programming robots for complex tasks in unstructured settings (e.g., light manufacturing, extreme environments) cannot be accomplished solely by analytical methods. Learning from teleoperated human demonstrations is a promising approach to reduce the programming burden and to obtain more effective controllers. However, the recorded demonstrations need to be decomposed into atomic actions to facilitate the representation of the desired behaviour, which can be very challenging in real-world settings. In this study, we propose a method that uses features extracted from robot motion and tactile data to automatically segment atomic actions from a teleoperation sequence. We created a publicly available dataset with demonstrations of robotic pick-and-place of three different objects in single-object and cluttered scenes. We use a custom-built teleoperation system that maps the user's hand and fingertip poses onto a three-fingered dexterous robot hand equipped with tactile sensors. Our findings suggest that the proposed feature set generalises across different episodes with the same object and between items of similar size. Furthermore, they suggest that tactile sensing contributes to higher performance in recognising activities within demonstrations.
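The abstract describes the segmentation idea only at a high level. The sketch below is not the paper's implementation; the feature choices (motion energy and mean tactile activation), window size, and threshold are assumptions made purely for illustration of how windowed motion and tactile features could be combined to propose activity boundaries in a demonstration.

```python
import numpy as np

def window_features(joint_vel, tactile, win=50):
    """Summarise motion and tactile signals over non-overlapping windows.

    joint_vel: (T, J) joint velocities; tactile: (T, S) tactile readings.
    Returns an (N, 2) array of per-window features: mean motion energy
    and mean tactile activation (illustrative choices only).
    """
    T = min(len(joint_vel), len(tactile))
    feats = []
    for start in range(0, T - win + 1, win):
        jv = joint_vel[start:start + win]
        tc = tactile[start:start + win]
        motion_energy = np.mean(np.linalg.norm(jv, axis=1))
        tactile_level = np.mean(np.abs(tc))
        feats.append([motion_energy, tactile_level])
    return np.asarray(feats)

def segment_boundaries(feats, thresh=0.5):
    """Mark a boundary wherever consecutive window features change by
    more than `thresh` (a simple change-point heuristic)."""
    diffs = np.linalg.norm(np.diff(feats, axis=0), axis=1)
    return np.flatnonzero(diffs > thresh) + 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic demo: a free-motion phase followed by a contact-rich grasp phase.
    joint_vel = np.vstack([rng.normal(0.5, 0.1, (500, 7)),
                           rng.normal(0.05, 0.02, (500, 7))])
    tactile = np.vstack([rng.normal(0.0, 0.02, (500, 12)),
                         rng.normal(0.8, 0.1, (500, 12))])
    feats = window_features(joint_vel, tactile)
    print("boundaries at windows:", segment_boundaries(feats))
```

On synthetic data like this, the tactile channel makes the transition into the grasp phase far more pronounced than motion alone, which mirrors the abstract's observation that tactile sensing helps activity recognition.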

Publication
IEEE International Conference on Development and Learning (ICDL) 2022