Action Motifs: Self-Supervised Hierarchical Representation of Human Body Movements

2026-04-30
Computer Vision and Pattern Recognition

AI summary

The authors developed a new way to understand human body movements by breaking them down into small joint actions called Action Atoms, which then combine over time into bigger patterns called Action Motifs. They created a model named A4Mer that learns these patterns automatically from 3D pose data without needing labels. To help with this, they also made a new dataset using cameras placed on people's feet to capture movements despite parts of the body being hidden. Their method helps improve tasks like recognizing actions and predicting future movements by capturing meaningful chunks of how people move.

human pose estimation, hierarchical representation, 3D pose sequence, latent Transformer, self-supervised learning, Action Atoms, Action Motifs, masked token prediction, SMPL annotations, motion prediction
Authors
Genki Kinoshita, Shu Nakamura, Ryo Kawahara, Shohei Nobuhara, Yasutomo Kawanishi, Ko Nishino
Abstract
Effective human behavior modeling requires a representation of human body movement that capitalizes on its compositionality. We propose a hierarchical representation consisting of Action Atoms, which capture atomic joint movements, and Action Motifs, which are formed by their temporal composition and encode similar body movements found across different overall human actions. We derive A4Mer, a nested latent Transformer, to learn this hierarchical representation from human pose data in a fully self-supervised manner. A4Mer splits a 3D pose sequence into variable-length segments and represents each segment as a single latent token (an Action Atom). Through bottom-up representation learning, temporal patterns composed of these Action Atoms naturally emerge (Action Motifs), capturing meaningful temporal spans of reusable, semantic segments of body movements. A4Mer achieves this with a unified pretext task of masked token prediction in the respective latent spaces. We also introduce the Action Motif Dataset (AMD), a large-scale dataset of multi-view human behavior videos with full SMPL annotations. We introduce a novel use of cameras, mounting them on the feet to obtain frame-wise annotations despite frequent and heavy body occlusions. Experimental results demonstrate the effectiveness of A4Mer for extracting meaningful Action Motifs, which significantly benefit human behavior modeling tasks including action recognition, motion prediction, and motion interpolation.
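To make the data flow concrete, the sketch below illustrates the two ideas the abstract describes: splitting a 3D pose sequence into variable-length segments that each become a single token (Action Atoms), and corrupting a subset of those tokens as in a masked-token-prediction pretext task. This is a minimal illustration only; A4Mer's actual encoder is a nested latent Transformer, and all function names, the mean-pooling stand-in encoder, and the segment boundaries here are hypothetical.

```python
import numpy as np

def pose_sequence_to_atoms(poses, boundaries):
    """Split a (T, J, 3) 3D pose sequence at the given frame boundaries
    and pool each variable-length segment into one token.
    Mean pooling is a crude stand-in for A4Mer's latent Transformer encoder."""
    atoms = []
    for start, end in zip([0] + boundaries, boundaries + [len(poses)]):
        segment = poses[start:end]                  # (segment_len, J, 3)
        atoms.append(segment.mean(axis=0).ravel())  # one token per segment
    return np.stack(atoms)                          # (num_atoms, J * 3)

def mask_atoms(atoms, mask_ratio, rng):
    """Replace a random subset of Action Atom tokens with zeros (a stand-in
    for a learned mask token); the model would be trained to predict them."""
    num_atoms = len(atoms)
    num_masked = max(1, int(round(mask_ratio * num_atoms)))
    masked_idx = rng.choice(num_atoms, size=num_masked, replace=False)
    corrupted = atoms.copy()
    corrupted[masked_idx] = 0.0
    return corrupted, masked_idx

rng = np.random.default_rng(0)
poses = rng.normal(size=(120, 24, 3))  # 120 frames, 24 joints (SMPL-like)
atoms = pose_sequence_to_atoms(poses, boundaries=[30, 55, 90])
corrupted, masked_idx = mask_atoms(atoms, mask_ratio=0.5, rng=rng)
print(atoms.shape, len(masked_idx))    # → (4, 72) 2
```

In the real model, the prediction targets and the mask live in the latent token space rather than on raw poses, which is what lets the same pretext task drive both the Atom and Motif levels of the hierarchy.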