Invited talk – Learning human actions from perception to robot learning

In February, I was invited to give a talk at the University of Leeds for the Pints of Robotics series. The talk, titled “Learning Human Actions: From Perception to Robot Learning”, summarises the results of the research done during my PhD at LCAS UoL and the ongoing work at ARQ QMUL.
I’m sharing with you the recording of the talk and the link to the event description:

Abstract

While understanding and replicating human activities is an easy task for other humans, it is still a complicated skill to program into a robot. Understanding human actions has become an increasingly popular research topic because of its wide range of applications, such as automated grocery stores, surveillance, sports coaching software and ambient assisted living. In robotics, being able to understand human actions allows us not only to classify an action automatically, but also to learn how to replicate the same action to achieve a specific goal. Teaching robots how to achieve certain goals by demonstrating the movements can reduce the programming effort and allows us to teach more complex tasks. Human demonstration of simple robotic tasks has already found its way into industry (e.g. robotic painting, simple pick and place of rigid objects), but it still cannot be applied to the dexterous handling of generic objects (e.g. soft and delicate objects), which would open up a wider range of applications (e.g. food handling). In this talk, I present an approach for recognising sequences of human social activities using a combination of probabilistic temporal ensembles, together with a system for teleoperating a robot hand with a low-cost setup composed of a haptic glove and a depth camera.
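To give a rough idea of what a probabilistic temporal ensemble can look like, here is a minimal sketch that merges per-frame class probabilities from several base classifiers over a temporal window. The weighting scheme, the log-space fusion and the toy data are my own illustrative assumptions, not the exact formulation presented in the talk.

import numpy as np

def temporal_ensemble(frame_probs, weights=None):
    """Merge per-frame class probabilities from several base classifiers.

    frame_probs: array of shape (n_classifiers, n_frames, n_classes),
                 each frame summing to 1 over the class axis.
    weights:     optional per-classifier weights (default: uniform).
    Returns the ensemble posterior over classes for the whole window.
    """
    n_classifiers, n_frames, n_classes = frame_probs.shape
    if weights is None:
        weights = np.ones(n_classifiers) / n_classifiers

    # Weighted average of the classifiers' posteriors at every frame.
    merged = np.einsum('c,cfk->fk', weights, frame_probs)

    # Accumulate evidence over the temporal window in log-space
    # (a naive-Bayes-style temporal fusion) and renormalise.
    log_post = np.sum(np.log(merged + 1e-12), axis=0)
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

# Toy example: 2 classifiers, 5 frames, 3 activity classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=(2, 5))
print(temporal_ensemble(probs))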

PhD Graduation

On the 11th of December, I had my PhD graduation ceremony.

Here you can find the presentation I used for my PhD thesis defense, which took place last year.
The title of my thesis is “Social Activity Recognition for Service Robots using a Depth Camera”.

[googleapps domain="docs" dir="presentation/d/12Gdb4-3jOCF6OBClQExiP6NJBTZ_8FeWx2RkMW4ZgzM/embed" query="start=false&loop=true&delayms=3000" width="1920" height="450" /]

Presentation: Human Activity Recognition and Monitoring

At the symposium on Human Activity Recognition and Monitoring organised by the British Machine Vision Association (BMVA), I had the chance to present the research work performed during my PhD studentship.

In particular, I presented my work titled “Social Activity Recognition from Continuous RGB-D Sequences”.
I carried out this research alongside my colleagues Serhan Cosar and Nicola Bellotto, in collaboration with Diego R. Faria (Aston University).

Front slide of the “Social Activity Recognition from Continuous RGB-D Sequences” presentation

The presentation describes our approach to recognising social activities for indoor robots.
It focuses on tracked skeleton data from untrimmed activity video clips.
The approach first detects the temporal intervals of human social interactions and then recognises the type of social activity performed by a pair of people.
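As a minimal sketch of this two-stage idea (not the exact method in the paper), the snippet below first segments an untrimmed sequence into candidate interaction intervals using a simple proximity cue between two tracked skeletons, and then classifies each interval with a placeholder classifier. The 1.5 m threshold, the minimum interval length and the feature choice are hypothetical values of my own.

import numpy as np

def detect_interaction_intervals(torso_distances, max_dist=1.5, min_len=10):
    # torso_distances: one distance (in metres) per frame between the two
    # tracked people; frames closer than max_dist for at least min_len
    # consecutive frames form a candidate social-interaction interval.
    close = np.asarray(torso_distances) < max_dist
    intervals, start = [], None
    for i, flag in enumerate(close):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                intervals.append((start, i))
            start = None
    if start is not None and len(close) - start >= min_len:
        intervals.append((start, len(close)))
    return intervals

def classify_interval(pair_features, classifier):
    # pair_features: per-frame features of the skeleton pair within one
    # interval (e.g. inter-person joint distances); here they are simply
    # averaged and passed to any scikit-learn-style classifier.
    return classifier.predict(pair_features.mean(axis=0, keepdims=True))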

This research is a continuation of my work recently presented at IROS 2016 and ROMAN 2017.
It also includes the results of the experiments described in the paper submitted to the International Journal of Social Robotics.

Event Description:

The event was very interesting, with leading researchers presenting their work on human activity recognition in different contexts and aimed at a wide spectrum of applications.

Logo of the BMVA Event: Human Activity Recognition and Monitoring

The event was chaired by Ardhendu Behera (Edge Hill University), Nicola Bellotto (University of Lincoln) & Charith Abhayaratne (University of Sheffield).
The keynote speakers included Prof David Hogg (University of Leeds), Dr Alessandro Vinciarelli (University of Glasgow), Prof Ian Craddock (University of Bristol) and Prof Yiannis Demiris (Imperial College London).

The full program of the event with all the presenters can be found here.

Here you can find the slides used for my presentation:

[googleapps domain="docs" dir="presentation/d/e/2PACX-1vTNppiHTOblS_ozYmhbmDWRFQ7HiZhviFezS3gMsCaA6UmcMeZnCj5TLFcDSM_bs21EftTX3CNiOYWX/embed" query="start=false&loop=true&delayms=3000" width="1920" height="450" /]

ROMAN 2017 Publication

The article “Automatic detection of human interactions from RGB-D data for social activity classification” has been accepted for oral presentation at the 26th IEEE International Symposium on Robot and Human Interactive Communication (ROMAN 2017).
The article was written with Diego Faria, Serhan Cosar and Nicola Bellotto.

The article will be available here.

A video of the dataset used in this study can be found below:

IROS 2016 Publication

The paper “Social activity recognition based on probabilistic merging of skeleton features with proximity priors from RGB-D data” has been accepted for oral presentation at the International Conference on Intelligent Robots and Systems (IROS 2016).
The article was written with Diego Faria, Urbano Nunes and Nicola Bellotto.

The article will be available here.

A video of the dataset used in this study can be found below:

ECAI 2016 Publication accepted

The long paper “Learning Temporal Context for Activity Recognition” has been accepted for oral presentation at the European Conference on Artificial Intelligence (ECAI 2016).
The article was written with Tomáš Krajník, Tom Duckett and Nicola Bellotto.

The article will be available here.
Code and a description of the experimental tools can be found here.