Text-to-Motion Retrieval: Towards Joint Understanding of Human Motion Data and Natural Language
| Authors | |
|---|---|
| Year of publication | 2023 |
| Type | Article in Proceedings |
| Conference | 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023) |
| MU Faculty or unit | |
| Citation | |
| Web | https://doi.org/10.1145/3539618.3592069 |
| DOI | http://dx.doi.org/10.1145/3539618.3592069 |
| Keywords | human motion data; skeleton sequences; CLIP; BERT; deep language models; ViViT; motion retrieval; cross-modal retrieval |
| Description | Due to recent advances in pose-estimation methods, human motion can be extracted from ordinary video in the form of 3D skeleton sequences. Despite promising application opportunities, effective and efficient content-based access to large volumes of such spatio-temporal skeleton data remains a challenging problem. In this paper, we propose a novel content-based text-to-motion retrieval task, which aims at retrieving relevant motions based on a specified natural-language textual description. To define baselines for this uncharted task, we employ the BERT and CLIP language representations to encode the text modality and successful spatio-temporal models to encode the motion modality. We additionally introduce our transformer-based approach, called Motion Transformer (MoT), which employs divided space-time attention to effectively aggregate the different skeleton joints in space and time (an illustrative sketch of this mechanism follows below the table). Inspired by recent progress in text-to-image/video matching, we experiment with two widely adopted metric-learning loss functions. Finally, we set up a common evaluation protocol by defining qualitative metrics for assessing the quality of the retrieved motions, targeting the two recently introduced KIT Motion-Language and HumanML3D datasets. The code for reproducing our results is available at: https://github.com/mesnico/text-to-motion-retrieval |
| Related projects | |
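
The description names two technical ingredients without spelling them out: divided space-time attention over skeleton joints, and metric-learning losses for cross-modal matching. Below is a minimal, illustrative PyTorch sketch of both ideas, assuming per-joint embeddings of shape `(batch, frames, joints, dim)` and a symmetric InfoNCE-style loss as one common metric-learning choice. All module names, shapes, and hyperparameters here are assumptions made for this sketch; it does not reproduce the authors' implementation, which is available in the linked repository.

```python
# Illustrative sketch only -- not the authors' MoT implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DividedSpaceTimeBlock(nn.Module):
    """One transformer block with divided space-time attention:
    each joint first attends across frames (time), then joints
    attend to each other within every frame (space)."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.space_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_s = nn.LayerNorm(dim)
        self.norm_m = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, joints, dim) per-joint skeleton embeddings
        b, t, j, d = x.shape

        # Temporal attention: each joint attends over its own trajectory.
        xt = x.permute(0, 2, 1, 3).reshape(b * j, t, d)
        h = self.norm_t(xt)
        xt = xt + self.time_attn(h, h, h, need_weights=False)[0]
        x = xt.reshape(b, j, t, d).permute(0, 2, 1, 3)

        # Spatial attention: joints within one frame attend to each other.
        xs = x.reshape(b * t, j, d)
        h = self.norm_s(xs)
        xs = xs + self.space_attn(h, h, h, need_weights=False)[0]
        x = xs.reshape(b, t, j, d)

        # Position-wise feed-forward with residual connection.
        return x + self.mlp(self.norm_m(x))


def symmetric_info_nce(motion_emb: torch.Tensor,
                       text_emb: torch.Tensor,
                       temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired motion/text embeddings,
    a widely used metric-learning loss for cross-modal retrieval."""
    motion_emb = F.normalize(motion_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = motion_emb @ text_emb.t() / temperature  # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

In a full retrieval pipeline of this kind, the per-joint outputs of the attention blocks would be pooled into a single motion embedding and compared against BERT- or CLIP-encoded text embeddings using the loss above, with paired motion-text batches drawn from datasets such as the KIT Motion-Language and HumanML3D collections named in the description.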