Tri-Modal Motion Retrieval by Learning a Joint Embedding Space

1 ShanghaiTech University   2 Institute of Advanced Technology, Chinese Academy of Sciences

Abstract

Information retrieval is an ever-evolving and crucial research domain. The substantial demand for high-quality human motion data, especially for online acquisition, has led to a surge in human motion research.

Prior works have mainly concentrated on dual-modality learning, such as text-and-motion tasks, while tri-modal learning has rarely been explored. Intuitively, an additional modality can broaden a model's application scenarios, and, more importantly, a well-chosen third modality can act as an intermediary that strengthens the alignment between the other two disparate modalities. In this work, we introduce LAVIMO (LAnguage-VIdeo-MOtion alignment), a novel framework for tri-modal learning that integrates human-centric videos as an additional modality, thereby effectively bridging the gap between text and motion. Moreover, our approach leverages a specially designed attention mechanism to foster enhanced alignment and synergistic effects among the text, video, and motion modalities.

Empirically, our results on the HumanML3D and KIT-ML datasets show that LAVIMO achieves state-of-the-art performance on various motion-related cross-modal retrieval tasks, including text-to-motion, motion-to-text, video-to-motion, and motion-to-video.
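To make the joint-embedding idea concrete, here is a minimal sketch of tri-modal contrastive alignment. The encoder dimensions, projection heads, temperature, and loss weighting are illustrative assumptions, not the paper's exact configuration, and the cross-modal attention module described above is omitted for brevity.

```python
# Hypothetical sketch of aligning text, video, and motion in a joint
# embedding space with pairwise contrastive (InfoNCE) losses.
# All dimensions and module names are placeholders, not LAVIMO's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TriModalEmbedder(nn.Module):
    def __init__(self, text_dim=768, video_dim=1024, motion_dim=263, joint_dim=256):
        super().__init__()
        # Per-modality projection heads into the shared space
        # (stand-ins for the actual text, video, and motion encoders).
        self.text_proj = nn.Linear(text_dim, joint_dim)
        self.video_proj = nn.Linear(video_dim, joint_dim)
        self.motion_proj = nn.Linear(motion_dim, joint_dim)

    def forward(self, text_feat, video_feat, motion_feat):
        # L2-normalize so similarity reduces to a dot product.
        t = F.normalize(self.text_proj(text_feat), dim=-1)
        v = F.normalize(self.video_proj(video_feat), dim=-1)
        m = F.normalize(self.motion_proj(motion_feat), dim=-1)
        return t, v, m


def info_nce(a, b, temperature=0.07):
    # Symmetric InfoNCE over a batch of paired embeddings.
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def tri_modal_loss(t, v, m):
    # Align all three pairs; video acts as a bridge between text and motion.
    return info_nce(t, m) + info_nce(t, v) + info_nce(v, m)
```

Once trained, any modality can be embedded into the same space, so each of the four retrieval tasks reduces to a nearest-neighbor search over pre-computed embeddings.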

Text Retrieval Demo

a person stands in a defensive stance with right arm and leg forward, then uses the right forearm for a block across the body.

a man is lying on his stomach, moving his legs and arms around as if swimming.
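For illustration, text-to-motion retrieval with a joint embedding space amounts to ranking a gallery of motion embeddings by cosine similarity to the embedded query. The sketch below assumes a model shaped like the one above; `query_text_feat` and `motion_gallery_emb` are hypothetical pre-computed features, not artifacts released with the paper.

```python
# Illustrative text-to-motion retrieval in the joint embedding space.
import torch
import torch.nn.functional as F


@torch.no_grad()
def retrieve_motions(query_text_feat, motion_gallery_emb, model, top_k=5):
    # Project the text query into the joint space and normalize it.
    q = F.normalize(model.text_proj(query_text_feat), dim=-1)
    # Cosine similarity against pre-computed, normalized motion embeddings.
    scores = motion_gallery_emb @ q.squeeze(0)
    # Return the indices of the top-k most similar motion clips.
    return torch.topk(scores, k=top_k).indices
```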

Synthetic Video Retrieval Demo

Real Video Retrieval Demo

BibTeX


      @misc{yin2024trimodal,
        title={Tri-Modal Motion Retrieval by Learning a Joint Embedding Space}, 
        author={Kangning Yin and Shihao Zou and Yuxuan Ge and Zheng Tian},
        year={2024},
        eprint={2403.00691},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
      }