123D: Unifying Multi-Modal Autonomous Driving Data at Scale
2026-05-08 • Robotics
Robotics • Computer Vision and Pattern Recognition
AI summary
The authors created 123D, a tool that brings together many self-driving car datasets that normally use incompatible formats and are hard to combine. Their system organizes different types of data, such as camera images and sensor readings, with flexible timing so they can be used together smoothly. They combined data from eight real-world datasets and one synthetic dataset, totaling thousands of hours of driving, and provide tools to analyze and visualize the data. The authors also tested how well annotations and sensor positions match across datasets and showed practical uses, such as improving 3D object detection and planning. Their work makes it easier for researchers to use many datasets together.
Autonomous Driving • Multimodal Data • Dataset Synchronization • 3D Object Detection • Pose and Calibration • Reinforcement Learning • Annotations • Synthetic Data • Data Visualization
Authors
Daniel Dauner, Valentin Charraut, Bastian Berle, Tianyu Li, Long Nguyen, Jiabao Wang, Changhui Jing, Maximilian Igl, Holger Caesar, Boris Ivanovic, Yiyi Liao, Andreas Geiger, Kashyap Chitta
Abstract
The pursuit of autonomous driving has produced one of the richest sensor data collections in all of robotics. However, its scale and diversity remain largely untapped. Each dataset adopts different 2D and 3D modalities, such as cameras, lidar, ego states, annotations, traffic lights, and HD maps, with different rates and synchronization schemes. The datasets come in fragmented formats with complex dependencies that cannot natively coexist in the same development environment. Further, major inconsistencies in annotation conventions prevent training on, or measuring generalization across, multiple datasets. We present 123D, an open-source framework that unifies such multi-modal driving data through a single API. To handle synchronization, we store each modality as an independent timestamped event stream with no prescribed rate, enabling synchronous or asynchronous access across arbitrary datasets. Using 123D, we consolidate eight real-world driving datasets spanning 3,300 hours and 90,000 kilometers, together with a synthetic dataset with configurable collection scripts, and provide tools for data analysis and visualization. We conduct a systematic study comparing annotation statistics and assessing each dataset's pose and calibration accuracy. Further, we showcase two applications that 123D enables: cross-dataset 3D object detection transfer and reinforcement learning for planning. We close with recommendations for future directions. Code and documentation are available at https://github.com/kesai-labs/py123d.
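The timestamped event-stream idea at the core of the abstract can be sketched in plain Python. This is an illustrative model only, not the actual py123d API: the `Event`, `EventStream`, and `nearest` names are hypothetical. It shows how independently timestamped streams with no prescribed rate support both asynchronous iteration and synchronous nearest-timestamp sampling on a shared clock.

```python
from bisect import bisect_left
from dataclasses import dataclass
from typing import Any

@dataclass
class Event:
    timestamp_us: int  # microsecond timestamp
    data: Any          # payload, e.g. an image path or a lidar sweep

class EventStream:
    """One modality (camera, lidar, ego state, ...) stored as an
    independent timestamped sequence with no prescribed rate.
    Hypothetical sketch; not the py123d implementation."""

    def __init__(self, events: list[Event]):
        # Keep events sorted by time so lookups are O(log n).
        self.events = sorted(events, key=lambda e: e.timestamp_us)
        self._times = [e.timestamp_us for e in self.events]

    def __iter__(self):
        # Asynchronous access: iterate events at the stream's native rate.
        return iter(self.events)

    def nearest(self, t_us: int) -> Event:
        # Synchronous access: fetch the event closest to a query time,
        # so streams with different rates can share a common clock.
        i = bisect_left(self._times, t_us)
        candidates = [c for c in (i - 1, i) if 0 <= c < len(self.events)]
        return min((self.events[c] for c in candidates),
                   key=lambda e: abs(e.timestamp_us - t_us))

# Sample a 10 Hz camera and a 20 Hz ego-state stream on a shared 10 Hz clock.
camera = EventStream([Event(t, f"img_{t}.jpg")
                      for t in range(0, 1_000_000, 100_000)])
ego = EventStream([Event(t, {"speed_mps": 10.0})
                   for t in range(0, 1_000_000, 50_000)])
for t in range(0, 1_000_000, 100_000):
    frame = (camera.nearest(t), ego.nearest(t))
```

Because each stream carries its own timestamps, a consumer can either replay every event as recorded (asynchronous) or query all streams at one clock tick (synchronous), without forcing the datasets onto a single prescribed rate.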