LEXIS: LatEnt ProXimal Interaction Signatures for 3D HOI from an Image
2026-04-22 • Computer Vision and Pattern Recognition · Machine Learning
AI summary
The authors created a new method to better understand how people interact with objects in 3D using just a single color image. Instead of only checking if the body and object touch or not, they capture how close every part of the body and object are to each other. They introduced a special way to describe typical interaction patterns using a learned code called LEXIS and use it in a system called LEXIS-Flow to make more accurate and physically realistic 3D reconstructions. Their method works better than previous ones in tests and helps create more believable 3D scenes showing how humans and objects interact.
Keywords: 3D Human-Object Interaction · RGB image · InterFields · LEXIS · VQ-VAE · Diffusion framework · Mesh reconstruction · Contact modeling · Proximity modeling · Physical plausibility
Authors
Dimitrije Antić, Alvaro Budria, George Paschalidis, Sai Kumar Dwivedi, Dimitrios Tzionas
Abstract
Reconstructing 3D Human-Object Interaction from an RGB image is essential for perceptive systems. Yet, this remains challenging as it requires capturing the subtle physical coupling between the body and objects. While current methods rely on sparse, binary contact cues, these fail to model the continuous proximity and dense spatial relationships that characterize natural interactions. We address this limitation via InterFields, a representation that encodes dense, continuous proximity across the entire body and object surfaces. However, inferring these fields from single images is inherently ill-posed. To tackle this, our intuition is that interaction patterns are characteristically structured by the action and object geometry. We capture this structure in LEXIS, a novel discrete manifold of interaction signatures learned via a VQ-VAE. We then develop LEXIS-Flow, a diffusion framework that leverages LEXIS signatures to estimate human and object meshes alongside their InterFields. Notably, these InterFields help in a guided refinement that ensures physically-plausible, proximity-aware reconstructions without requiring post-hoc optimization. Evaluation on Open3DHOI and BEHAVE shows that LEXIS-Flow significantly outperforms existing SotA baselines in reconstruction, contact, and proximity quality. Our approach not only improves generalization but also yields reconstructions perceived as more realistic, moving us closer to holistic 3D scene understanding. Code & models will be public at https://anticdimi.github.io/lexis.
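The abstract contrasts sparse, binary contact cues with dense, continuous proximity across both surfaces. As a rough illustration of that distinction only, the sketch below computes a per-vertex proximity score between a body mesh and an object mesh; the function name, the exponential falloff, and the `tau` scale are assumptions for illustration, not the authors' actual InterFields formulation.

```python
import numpy as np

def proximity_field(body_verts, obj_verts, tau=0.3):
    """Toy dense-proximity score per vertex (illustrative, not the paper's method).

    body_verts: (B, 3) array of body surface points.
    obj_verts:  (O, 3) array of object surface points.
    Returns per-body and per-object scores in [0, 1], where 1 means touching
    and the score decays continuously with distance (scale set by tau).
    """
    # Pairwise Euclidean distances between every body and object vertex: (B, O).
    d = np.linalg.norm(body_verts[:, None, :] - obj_verts[None, :, :], axis=-1)
    # Nearest-neighbour distance per surface, mapped to a continuous score.
    body_field = np.exp(-d.min(axis=1) / tau)   # (B,)
    obj_field = np.exp(-d.min(axis=0) / tau)    # (O,)
    return body_field, obj_field
```

Unlike a binary contact label (distance below a threshold or not), every vertex here carries a graded value, so near-misses and hover relationships remain distinguishable, which is the property the paper's InterFields are designed to capture.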