DriveTok: 3D Driving Scene Tokenization for Unified Multi-View Reconstruction and Understanding
2026-03-19 • Computer Vision and Pattern Recognition · Machine Learning
AI summary
The authors created DriveTok, a new way to turn multi-view driving images into compact scene tokens that capture rich visual information at once. Unlike older methods that work only with single camera views, DriveTok uses 3D information to understand and reconstruct scenes from multiple cameras around a car. It combines different types of data, such as color, depth, and object categories, to help autonomous systems recognize their surroundings more efficiently. Tests on a popular driving dataset showed that DriveTok works well for tasks like image reconstruction, semantic segmentation, depth prediction, and 3D occupancy prediction.
vision-language-action models · world models · tokenization · multi-view reconstruction · 3D deformable cross-attention · semantic segmentation · depth prediction · 3D occupancy prediction · transformers · nuScenes dataset
Authors
Dong Zhuo, Wenzhao Zheng, Sicheng Zuo, Siming Yan, Lu Hou, Jie Zhou, Jiwen Lu
Abstract
With the growing adoption of vision-language-action models and world models in autonomous driving systems, scalable image tokenization becomes crucial as the interface for the visual modality. However, most existing tokenizers are designed for monocular 2D scenes, leading to inefficiency and inter-view inconsistency when applied to high-resolution multi-view driving scenes. To address this, we propose DriveTok, an efficient 3D driving scene tokenizer for unified multi-view reconstruction and understanding. DriveTok first obtains semantically rich visual features from vision foundation models and then transforms them into scene tokens with 3D deformable cross-attention. For decoding, we employ a multi-view transformer to reconstruct multi-view features from the scene tokens and use multiple heads to obtain RGB, depth, and semantic reconstructions. We also add a 3D head directly on the scene tokens for 3D semantic occupancy prediction to improve spatial awareness. With these multiple training objectives, DriveTok learns unified scene tokens that integrate semantic, geometric, and textural information for efficient multi-view tokenization. Extensive experiments on the widely used nuScenes dataset demonstrate that the scene tokens from DriveTok perform well on image reconstruction, semantic segmentation, depth prediction, and 3D occupancy prediction tasks.
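The core idea of the encoder, per the abstract, is that each scene token attends to only a few sampled locations near a reference point in the view feature maps, rather than densely over all pixels. The paper's implementation is not given here, so the following is a minimal NumPy sketch of that deformable-sampling pattern for a single 2D view; the function name, random stand-ins for the learned offset/weight layers, and all shapes are hypothetical.

```python
import numpy as np

def deformable_cross_attention(queries, ref_points, feat_map, n_points=4, rng=None):
    """Toy deformable-style cross-attention (hypothetical sketch, not DriveTok's code).

    Each scene-token query gathers features from a few sampled cells around its
    reference point in an (H, W, C) view feature map, instead of all H*W cells.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    H, W, C = feat_map.shape
    N = queries.shape[0]
    out = np.zeros((N, C))
    for i, (q, p) in enumerate(zip(queries, ref_points)):
        # Sampling offsets stand in for a learned linear layer in real models.
        offsets = rng.integers(-2, 3, size=(n_points, 2))
        locs = np.clip(p + offsets, [0, 0], [H - 1, W - 1])   # stay inside the map
        samples = feat_map[locs[:, 0], locs[:, 1]]            # (n_points, C)
        # Attention weights over the sampled points, derived from the query.
        logits = samples @ q                                  # (n_points,)
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()
        out[i] = weights @ samples                            # weighted feature mix
    return out
```

In the multi-view 3D setting described in the abstract, the reference points would live in 3D and be projected into each camera's feature map before sampling; this sketch only shows the sparse-sampling step that makes the attention cheap relative to dense cross-attention.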