In February, I was invited to give a talk at the University of Leeds for the Pints of Robotics series. The talk, titled Learning Human Actions: From Perception to Robot Learning, summarises the results of the research done during my PhD at LCAS UoL and the ongoing work at ARQ QMUL.
I’m sharing the recording of the talk and the link to the event description:
While understanding and replicating human activities is an easy task for other humans, it is still a complicated skill to program into a robot. Understanding human actions has become an increasingly popular research topic because of its wide range of applications, such as automated grocery stores, surveillance, sports coaching software and ambient assisted living. In robotics, being able to understand human actions not only allows us to classify an action automatically but also to learn how to replicate that action to achieve a specific goal. Teaching robots how to achieve certain goals by demonstrating the movements can reduce the programming effort and allows us to teach more complex tasks.

Human demonstration of simple robotic tasks has already found its way into industry (e.g. robotic painting, simple pick-and-place of rigid objects), but it cannot yet be applied to the dexterous handling of generic objects (e.g. soft and delicate objects), which would enable wider applicability (e.g. food handling). This talk presents an approach for recognising sequences of human social activities using a combination of probabilistic temporal ensembles, together with a system for the teleoperation of a robot hand using a low-cost setup composed of a haptic glove and a depth camera.