The Human Activity Understanding (HAU) group builds models of human interaction with the environment using Computer Vision, Sensor Fusion, and AI techniques. The insights gained from these models can be used to build technology that improves human well-being, comfort, and convenience.
Current focus: Understanding Human-Object Interactions using Computer Vision and Machine Learning
Under this topic, we build models of human-object interactions using camera data (RGB and Depth) recorded in indoor environments. Depending on availability, wearable sensors (e.g. Inertial Measurement Units) or sensors installed in the environment (RFID tags, motion detectors, etc.) may be fused with the RGB-D data.
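A first step in fusing such heterogeneous streams is temporal alignment, since a camera (e.g. 30 Hz) and an IMU (e.g. 100 Hz) sample at different rates. The sketch below is a minimal illustration of nearest-timestamp matching, assuming each stream is a sorted list of timestamps in seconds; it is not the group's actual pipeline, just one common way to pair each RGB-D frame with its closest IMU sample.

```python
from bisect import bisect_left

def nearest_sample(timestamps, t):
    """Index of the sample in a sorted timestamp list closest to time t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Compare the neighbors on either side of t and keep the closer one.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def fuse_streams(rgbd_ts, imu_ts):
    """For each RGB-D frame timestamp, pick the nearest IMU sample index."""
    return [nearest_sample(imu_ts, t) for t in rgbd_ts]

# Hypothetical example: a 30 Hz camera paired with a 100 Hz IMU.
rgbd = [i / 30.0 for i in range(5)]   # 5 camera frames
imu = [i / 100.0 for i in range(20)]  # 20 IMU samples
pairs = fuse_streams(rgbd, imu)       # [0, 3, 7, 10, 13]
```

In practice one would also interpolate IMU readings between samples and account for clock offsets between devices, but nearest-timestamp pairing is often sufficient as a baseline.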
Below are some impressions from our ongoing work on the 3D human activity simulator (Zakour, Marsil; Mellouli, Alaeddine; Chaudhari, Rahul: HOIsim: Synthesizing Realistic 3D Human-Object Interaction Data for Human Activity Recognition. 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), 2021).